Title: The effects of local stellar radiation and dust depletion on non-equilibrium interstellar chemistry
Abstract: Interstellar chemistry is important for galaxy formation, as it determines the rate at which gas can cool, and enables us to make predictions for observable spectroscopic lines from ions and molecules. We explore two central aspects of modelling the chemistry of the interstellar medium (ISM): (1) the effects of local stellar radiation, which ionises and heats the gas, and (2) the depletion of metals onto dust grains, which reduces the abundance of metals in the gas phase. We run high-resolution (400 M$_\odot$ per baryonic particle) simulations of isolated disc galaxies, from dwarfs to Milky Way-mass, using the FIRE galaxy formation models together with the CHIMES non-equilibrium chemistry and cooling module. In our fiducial model, we couple the chemistry to the stellar fluxes calculated from star particles using an approximate radiative transfer scheme, and we implement an empirical density-dependent prescription for metal depletion. For comparison, we also run simulations with a spatially uniform radiation field, and without metal depletion. Our fiducial model broadly reproduces observed trends in HI and H2 mass with stellar mass, and in line luminosity versus star formation rate for [CII] 158$\mu$m, [OI] 63$\mu$m, [OIII] 88$\mu$m, [NII] 122$\mu$m and H$\alpha$ 6563\AA. Our simulations with a uniform radiation field predict fainter luminosities, by up to an order of magnitude for [OIII] 88$\mu$m and H$\alpha$ 6563\AA, while ignoring metal depletion increases the luminosity of carbon and oxygen lines by a factor $\approx$2. However, the overall evolution of the galaxy is not strongly affected by local stellar fluxes or metal depletion, except in dwarf galaxies where the inclusion of local fluxes leads to weaker outflows and hence higher gas fractions.
https://export.arxiv.org/pdf/2208.02288
\date{Accepted 2022 August 16. Received 2022 August 15; in original form 2022 August 03} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \pubyear{2022} \label{firstpage} \begin{keywords} astrochemistry -- ISM: atoms -- ISM: molecules -- galaxies: evolution -- galaxies: ISM \end{keywords} \section{Introduction}\label{intro_sect} The chemistry of ions and molecules in interstellar gas plays a vital role in galaxy formation. The rate at which gas can cool depends on the relative abundances of chemical species, as different species radiate away the thermal energy at different rates due to transitions between their excited states. Radiative cooling enables gas to condense onto dark matter halos and trigger the formation of stars and galaxies \citep[e.g.][]{rees77, white78, white91}, while heating due to photoionisation from the UV background can suppress galaxy formation at low masses after reionisation \citep[e.g.][]{efstathiou92, fauchergiguere11, benitezllambay17, benitezllambay20}. Furthermore, there is a huge wealth of spectroscopic observations that identify ions and molecules through their emission and absorption lines. Such observations probe a wide range of phases of interstellar gas, including cold, dense molecular clouds \citep[e.g.][]{leroy09, saintonge17, rosolowsky21}, gas ionised by star-forming regions and/or a central Active Galactic Nucleus (AGN; \citealt{baldwin81, kauffmann03, kewley06}), and diffuse, highly ionised plasmas in the Circum-Galactic Medium (CGM; \citealt{tripp08, tumlinson11, turner14, burchett19}). By studying the chemistry, we can connect these observations to the conditions of the gas that they trace, which is crucial for understanding the physical mechanisms that drive the formation and evolution of galaxies. Large-scale cosmological simulations of the Universe often treat gas cooling using pre-computed tables of the cooling rate that depend on, for example, temperature, density, metallicity and redshift. 
When tabulating the cooling rate in this way, it is common to assume that the chemical reactions have reached equilibrium, either for a collisionally ionised plasma (Collisional Ionisation Equilibrium, CIE; \citealt{cox69, sutherland93}), or under the influence of photoionisation from a background UV radiation field (Photo-Ionisation Equilibrium, PIE; \mbox{\citealt{wiersma09}}; \citealt{gnedin12, ploeckinger20}). This approach of using pre-computed cooling tables has been applied in many state-of-the-art cosmological simulations \citep[e.g.][]{dubois14, schaye15, tremmel17, pillepich18, lee21}. To connect these hydrodynamic simulations to observations, we can create synthetic spectra of emission and absorption lines from the simulation outputs in post-processing if we again assume that the chemical abundances are in equilibrium. These abundances can be computed either using the temperatures and densities of each gas particle or cell directly \citep[e.g.][]{hummels17, katz19, nelson21, oppenheimer20, wijers20}, or using subgrid models that capture unresolved features important for the observational tracers \citep[e.g.][]{rahmati13a, vallini13, narayanan14, hirschmann17, olsen21, tan21}. Subgrid approaches can also be used to predict observable line emission in semi-analytic models of galaxy formation \citep[e.g.][]{lagos12, popping19, baugh22}. The above methods for modelling gas cooling and synthetic observations in simulations of galaxy formation all rely on the assumption that the chemistry has had sufficient time to reach equilibrium. While this is reasonable in many cases, it is not applicable in scenarios where the gas is evolving rapidly, for example when the cooling time is short \citep{sutherland93, gnat07, oppenheimer13a}, in the presence of turbulence \citep{gray17}, or if the UV radiation field is fluctuating \citep{oppenheimer13b, segers17, oppenheimer18}. 
To capture such non-equilibrium effects in hydrodynamic simulations, we need to follow the time-dependent evolution of ions and molecules using a chemical reaction network, which integrates the rate equations, together with the resulting cooling and heating rates that determine the temperature evolution, for each gas particle or cell. Several astrochemistry codes have been developed for this purpose in recent years. The \textsc{krome} package implements chemical networks that include hydrogen, deuterium, helium, and low-ionisation metal species at temperatures $< \! 10^{4} \, \rm{K}$ \citep{grassi14, bovino16}, and has been widely applied to hydrodynamic simulations on galactic scales (e.g. \mbox{\citealt{lupi18, lupi20}}; \citealt{sillero21}). The \textsc{grackle} library follows the non-equilibrium chemistry of hydrogen, deuterium and helium species \citep{smith17}, and has been applied to cosmological simulations such as the \textsc{agora} project \citep{kim14} and \textsc{simba} \citep{dave19}. Several studies have also explored molecular networks that include the formation and destruction of CO \citep[e.g.][]{nelson97, glover10, glover12, richings14a, richings14b}, which have been applied to simulations of the turbulent Interstellar Medium (ISM) and molecular clouds \citep[e.g.][]{walch15, seifried17, smith20, hu21} and whole galaxies \citep[e.g.][]{hu16, richings16}. The main disadvantage of non-equilibrium chemical models is the high computational cost, which limits the complexity of the reaction network that can be included and/or the size and resolution of the simulations to which they can be applied. Nevertheless, ongoing advances in this field are producing faster chemistry codes, for example through algorithms that reduce the complexity of the chemical network \citep[e.g.][]{tupper02, grassi12, grassi21}, or using neural networks to emulate the full time-dependent calculation \citep{holdship21}. 
When coupling a hydrodynamic simulation of galaxy formation to a chemical reaction network, one important aspect to consider is the UV radiation field, which ionises the gas and dissociates molecules, as well as providing a crucial source of heating. In the Inter-Galactic Medium (IGM), the radiation field is typically dominated by an extragalactic background, consisting of contributions from quasars and star-forming galaxies throughout the Universe \citep[e.g.][]{haardt12, fauchergiguere20}. However, in the ISM regime local sources of radiation such as young stars become important \citep[e.g.][]{mathis83, schaye06, rahmati13b}. We also need to consider how dense gas becomes shielded from radiation \citep[e.g.][]{federman79, vandishoeck86, visser09, wolfire10, fumagalli11, wolcottgreen11, rahmati13a}. Modelling the spatial variations of the radiation field within the ISM of a galaxy requires a treatment of the 3D radiative transfer of ionising and dissociating radiation. Radiative transfer codes also incorporate non-equilibrium chemical networks to capture the interaction between the radiation and gas chemistry \citep[e.g.][]{pawlik11, rosdahl13, kannan19, chan21a, katz22}, but solving the full radiative transfer equations in this way adds additional computational expense, on top of the cost of the chemical network itself. Approximate methods have therefore been developed to account for the radiation from local sources and/or gas self-shielding \citep[e.g.][]{clark12, richings14b, safranekshrader17, hopkins18a, ploeckinger20}. Dust grains are another important aspect of the thermo-chemistry, as they shield the gas from UV radiation and catalyse the formation of molecules such as H$_{2}$ on grain surfaces, as well as providing heating and cooling channels such as photoelectric heating. 
Dust also depletes metals from the gas phase \citep[e.g.][]{jenkins09, decia16}, as metals that are locked up in dust grains cannot participate in gas-phase chemical reactions and thermal processes. The simplest approach is to assume the dust abundance scales linearly with the overall metallicity; however, observations suggest that dust-to-metal ratios may not be constant \citep{remyruyer14, delooze20}. Alternatively, models have recently been developed to follow the formation and destruction of dust grains in hydrodynamic simulations \citep[e.g.][]{bekki15, mckinnon18, choban22}. In this work we couple the \textsc{chimes} non-equilibrium chemistry module \citep{richings14a, richings14b} to hydrodynamic simulations of isolated galaxies using the \textsc{fire-2} subgrid galaxy formation models \citep{hopkins18a}, to study the effects of local UV sources and dust depletion on the interstellar chemistry and their impact on the overall galaxy evolution and observable tracers of the ISM. The \textsc{chimes} reaction network covers a wide range of gas phases from cold ($\sim$$10 \, \rm{K}$), dense molecular clouds to hot ($\sim$$10^{9} \, \rm{K}$), highly ionised plasmas, and captures non-equilibrium effects in most of the chemical species (including metal ions) commonly detected in spectroscopic observations. This will enable us to confront our simulations with a great variety of observational data sets. The \textsc{fire-2} galaxy formation models have been developed to implement unresolved physical processes that typically are not explicitly captured in hydrodynamic simulations, such as the formation of stars and the subsequent feedback of energy and momentum via supernovae, stellar winds, photoionisation of the surrounding gas, and stellar radiation pressure. 
By following individual feedback channels in this way, the \textsc{fire-2} models produce a realistic multiphase ISM down to scales of Giant Molecular Clouds and star-forming regions, which will be crucial for this work. Applied to cosmological simulations, the \textsc{fire} models have been shown to reproduce many properties of observed galaxy populations at both low and high redshift, including the mass-metallicity \citep{ma16}, stellar mass-halo mass \citep{hopkins18a}, size-kinematics \citep{elbadry18}, and Kennicutt-Schmidt \citep{orr18} relations. The remainder of this paper is organised as follows. In Section~\ref{methods_sect} we describe our methods, including a summary of the \textsc{fire}-2 subgrid models (Section~\ref{physics_sect}) and \textsc{chimes} (Section~\ref{chemistry_sect}). We introduce the isolated galaxy simulations in Section~\ref{sim_sect}, including a description of the initial conditions (Section~\ref{IC_sect}) and results for the morphology and evolution of the simulated galaxies (Section~\ref{morph_sect}), the stellar fluxes predicted by our models (Section~\ref{sim_flux_sect}), and the dust properties (Section~\ref{dust_properties_sect}). In Section~\ref{HI_H2_sect} we explore the transition from atomic to molecular gas in our simulations and compare to observations, while in Section~\ref{emission_line_sect} we study emission line tracers of the total star formation rate. We summarise our conclusions in Section~\ref{conclusions_sect}. In Appendix~\ref{esc_fraction_sect} we summarise how we calibrated the escape fraction parameters from individual H\textsc{ii} regions, and we present a study of the numerical convergence of our results in Appendix~\ref{resolution_sect}. \section{Methods}\label{methods_sect} The simulations in this paper were run with the gravity and hydrodynamics code \textsc{gizmo} \citep{hopkins15}, using the Lagrangian Meshless Finite Mass (MFM) method to solve the hydrodynamics equations. 
We include subgrid models for the physical processes relevant to galaxy formation that are not explicitly resolved. These are mostly based on the \textsc{fire}-2 simulation models from the \textsc{fire} project\footnote{See the project website at \url{http://fire.northwestern.edu}} \citep{hopkins18a}, as detailed in Section~\ref{physics_sect} below, except for radiative cooling, which we describe in Section~\ref{chemistry_sect}. \subsection{\textsc{fire}-2 subgrid physics models}\label{physics_sect} Gas particles can be turned into stars if they are above a density threshold of $n_{\rm{H}} = 10^{3} \, \rm{cm}^{-3}$ and are locally self-gravitating and Jeans-unstable. If these criteria are met for a given particle, it is turned into a star particle stochastically at a rate given by the particle mass over the free-fall time. The details of the star formation algorithm are described in appendix~C of \citet{hopkins18a}. Note that, unlike the default \textsc{fire}-2 model, we do not scale the star formation rate by the fraction of gas that is molecular/shielded. In \citet{hopkins18a}, this fraction is calculated according to the analytic approximation from \citet{krumholz11}. Our simulations follow the time-dependent molecular chemistry, but if we used this molecular fraction in the star formation model it may introduce additional time-dependent effects in the star formation rate. For example, when a gas cloud cools there will be a lag before it becomes fully molecular as it takes time for molecules to form. However, molecules may not necessarily be required before star formation can proceed \citep{glover12}, so we would not expect a corresponding lag in the star formation rate. Nevertheless, \citet{hopkins18a} found that the molecular/self-shielded criterion has little effect in the \textsc{fire}-2 model, as gas that meets the other criteria will typically be fully molecular anyway, so we omit this criterion altogether in this work. 
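As an illustrative sketch of the star formation criteria described above (this is not the actual \textsc{fire}-2 implementation; the function names, the assumed mean molecular weight, and the Poisson-style sampling $p = 1 - \exp(-\Delta t / t_{\rm{ff}})$ of the rate are our own choices):

```python
import numpy as np

def free_fall_time_yr(n_H_cm3, mu=1.4):
    """Free-fall time t_ff = sqrt(3*pi / (32*G*rho)), in yr, for gas of
    hydrogen number density n_H (cm^-3), assuming mean molecular weight mu."""
    G = 6.674e-8             # cm^3 g^-1 s^-2
    m_H = 1.6726e-24         # g
    rho = mu * m_H * n_H_cm3
    t_ff_s = np.sqrt(3.0 * np.pi / (32.0 * G * rho))
    return t_ff_s / 3.156e7  # seconds -> years

def forms_star(n_H_cm3, dt_yr, self_gravitating, jeans_unstable, rng):
    """Stochastically decide whether a gas particle converts to a star
    particle this time-step, at a rate of one particle mass per free-fall
    time, subject to the density, self-gravity and Jeans criteria."""
    if n_H_cm3 < 1e3 or not (self_gravitating and jeans_unstable):
        return False
    p = 1.0 - np.exp(-dt_yr / free_fall_time_yr(n_H_cm3))
    return rng.random() < p
```

At the density threshold of $n_{\rm{H}} = 10^{3} \, \rm{cm}^{-3}$ this gives a free-fall time of order 1~Myr, so eligible particles near the threshold convert on roughly that timescale.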
Star particles inject energy, momentum and mass via stellar feedback as follows. The rate of type Ia supernovae (SNe) is calculated according to \citet{mannucci06} for both prompt and delayed populations, while the rates of type II SNe and stellar mass loss from OB and AGB winds are obtained from simple fits to the stellar evolution models from \textsc{starburst99} \citep{leitherer99} with a \citet{kroupa01} initial mass function (IMF; see appendix~A of \citealt{hopkins18a}). The SNe and stellar winds are implemented using a mechanical feedback scheme described in appendix~D of \citet{hopkins18a} (see also \mbox{\citealt{hopkins18b}}). Radiation pressure is coupled to the gas using the \textsc{lebron} approximate radiative transport algorithm, in which extinction of the stellar radiation is assumed to occur locally around the emitting star particle and absorbing gas particle, and is then transported between the two under the optically thin approximation. This algorithm is described in appendix~E of \citet{hopkins18a}; see also \citet{hopkins19}. Note that, in this work, we treat photoheating and H\textsc{ii} regions differently from the fiducial \textsc{fire}-2 model as they are coupled to the \textsc{chimes} chemistry solver; see Section~\ref{flux_sect} for details. The \textsc{fire-2} models also track the enrichment and evolution of the 11 elements that are used in the \textsc{chimes} chemistry network (see Section~\ref{chemistry_sect}). Star particles inject metals via SNe and stellar mass loss, with type Ia SNe yields from \citet{iwamoto99}, type II SNe yields from \citet{nomoto06}, and OB/AGB stellar wind yields from \citet{vandenhoek97}, \citet{marigo01} and \citet{izzard04}. These yields are summarised in appendix~A of \citet{hopkins18a}. The turbulent diffusion of metal elements between gas particles is modelled as described in appendix~F3 of \citet{hopkins18a}; see also \citet{hopkins17}. 
\subsection{Non-equilibrium chemistry and cooling}\label{chemistry_sect} We follow the non-equilibrium evolution of 157 ions and molecules important for gas cooling using the \textsc{chimes} chemistry and cooling module\footnote{\url{https://richings.bitbucket.io/chimes/home.html}} \citep{richings14a, richings14b}. This includes all ionisation states of H, He, C, N, O, Ne, Mg, Si, S, Ca and Fe; the negative ions H$^{-}$, C$^{-}$ and O$^{-}$; and the molecules H$_{2}$, H$_{2}^{+}$, H$_{3}^{+}$, OH, OH$^{+}$, H$_{2}$O, H$_{2}$O$^{+}$, H$_{3}$O$^{+}$, O$_{2}$, O$_{2}^{+}$, C$_{2}$, CH, CH$_{2}$, CH$_{3}^{+}$, CH$^{+}$, CH$_{2}^{+}$, CO, CO$^{+}$, HCO$^{+}$, HOC$^{+}$. Appendix~B of \citet{richings14a} contains a complete list of the chemical reactions in \textsc{chimes}, which includes collisional ionisation, recombination (in the gas phase and on the surface of dust grains), charge transfer, photoionisation (including Auger ionisation) and photodissociation, cosmic ray ionisation and dissociation, molecular hydrogen formation on dust grains, and gas-phase molecular creation and destruction channels. The photochemistry reactions require the total UV intensity, which we calculate using the redshift zero extragalactic UV background \citep{fauchergiguere20} plus the local radiation from star particles in the galaxy, along with a local treatment for self-shielding. The methods used to compute the UV fluxes are described in Section~\ref{flux_sect}. We assume a constant primary ionisation rate of H\textsc{i} due to cosmic rays of $\zeta_{\rm{HI}} = 1.8 \times 10^{-16} \, \rm{s}^{-1}$, which corresponds to the value in the Milky Way as inferred from observations of H$_{3}^{+}$ \citep{indriolo12}. The ionisation and dissociation rates of other species due to cosmic rays are then scaled relative to this value as described in section~2.3 of \citet{richings14a}. 
Since cosmic rays are produced in sites of active star formation via acceleration in supernova remnants, we might expect the cosmic ray rate to scale with the local star formation rate \citep[e.g.][]{ploeckinger20}. However, depending on the cosmic ray transport mechanism, the cosmic rays may rapidly escape from dense regions and produce a smoother distribution than predicted by such a scaling. As the aim of this work is to explore the effects of local stellar radiation, and given the uncertainties in how the cosmic ray ionisation rates will depend on local properties, we therefore decided to use a constant, uniform rate, rather than coupling it to the stellar fluxes. With this approach, we can be confident that the effects of the local fluxes will be driven by the UV radiation and not by variations in the cosmic ray rate. In a future work, we will explore coupling the cosmic ray ionisation rates in \textsc{chimes} to methods that follow the production and transport of cosmic rays in galaxy formation simulations \citep[e.g.][]{chan19, chan21b, hopkins21}. The reactions on the surface of dust grains utilise a density- and temperature-dependent dust abundance using an empirical model based on observed metal depletion factors. This model is also used to deplete the elemental abundances of metals from the gas phase, as any metals that are located in dust grains will be unavailable for the gas phase chemical reactions. The dust model is described in Section~\ref{depl_sect}. The resulting ion and molecule abundances are used to calculate the radiative cooling and heating rates. The thermal processes included in \textsc{chimes} are summarised in table~1 of \citet{richings14a}, although the rate of photoelectric heating from dust grains has been updated to use equations 19 and 20 from \citet{wolfire03}. 
We thus obtain a set of coupled ordinary differential equations (ODEs) for the rate equations and the thermal energy equation, which we integrate for each particle over each hydrodynamic time-step. To accelerate the integration of these ODEs, we first calculate the solution from the explicit forward Euler method. If the relative change in the thermal energy and chemical abundances is less than 0.05 (excluding species with an abundance below $10^{-10}$, which are negligible), then we take the explicit solution. Otherwise, we integrate the ODEs using the implicit backward difference formula method and Newton iteration, as implemented in the \textsc{cvode} library from the \textsc{sundials}\footnote{\url{https://computing.llnl.gov/projects/sundials}} suite of differential and algebraic equation solvers, with a relative tolerance of $10^{-4}$ and an absolute tolerance of $10^{-10}$. We find that using the explicit solution in this way does not affect our results, but allows us to avoid the more expensive implicit solver for particles that are either close to equilibrium or are evolving very slowly. We also include the turbulent diffusion of ions and molecules between gas particles, using the same subgrid model described in appendix~F3 of \citet{hopkins18a} for the diffusion of elemental abundances but applied to each species in the \textsc{chimes} network. \subsubsection{Local stellar fluxes}\label{flux_sect} We follow the propagation of radiation from star particles in the simulation using an approximate radiative transport method based on the \textsc{lebron} algorithm used to model stellar radiation pressure in \textsc{fire} \citep{hopkins18a, hopkins19}. This approximation assumes that the radiation is only absorbed locally around the star particle and the receiving gas particle. 
The subsequent transport of radiation from the star to the gas particle is then treated in the optically thin limit, which allows us to utilise the gravity solver to propagate the radiation between particles. In this work, we implement radiation pressure using the standard \textsc{lebron} method as in \textsc{fire}. However, when coupling the radiation to \textsc{chimes}, we modify the method as follows. Firstly, the standard \textsc{lebron} method for radiation pressure tracks three stellar fluxes from all star particles, in the infrared, optical and ultraviolet bands. However, for the photochemistry we track the radiation in eight separate stellar age bins, which allows us to accurately capture the age-dependence of the UV spectra. The age bins are spaced logarithmically by 0.2 dex up to 10~Myr and by 1.0 dex above 10~Myr. Stars with an age $<$1~Myr are placed into a single bin, as are stars with an age $>$100~Myr. We further divide the radiation from each age bin into the non-ionising Far-UV band (FUV; 6$-$13.6~eV) and the ionising Extreme-UV band (EUV; $>$13.6~eV). This gives us 16 stellar fluxes in total for the photochemistry (the optical and infrared bands are not required for the photochemical reactions). We calculate the average cross sections of the photochemical reactions for each stellar age bin using spectra from \textsc{starburst99} models \citep{leitherer99} with a \citet{kroupa01} IMF, the Geneva stellar evolution tracks with a rotation velocity of 0.4 times the break-up velocity, and a metallicity $Z = 0.014$. These are the same models that were used for the rates of type II SNe and mass loss from OB and AGB winds in \textsc{fire}. By including only models at a fixed metallicity we do not account for the metallicity dependence of the stellar spectra, as this would require us to track even more fluxes. 
However, this will only affect the dwarf galaxies in our sample, as the galaxies at higher masses are close to solar\footnote{Throughout this paper we use the solar elemental abundances from Table~1 of \citet{wiersma09}, where the total solar metallicity is $\rm{Z}_{\odot} = 0.0129$, unless stated otherwise. However, the \textsc{starburst99} models used the \citet{asplund09} solar abundances.} metallicity (see Section~\ref{IC_sect}). Fig.~\ref{SB99_fig} shows the \textsc{starburst99} spectra for each age bin. The luminosities of each star particle in the FUV and EUV bands, $L_{i}(t)$, are calculated as a function of stellar age, $t$, and current mass, $M_{\ast}(t)$ (accounting for mass loss). We use an analytic fitting function, which we fit to the outputs from the \textsc{starburst99} models with a spacing of 0.01~Myr. This fitting function is given in equation~\ref{fuv_euv_eqn} below, and the fit parameters are shown in Table~\ref{SB99_fit_pars}. \begin{equation}\label{fuv_euv_eqn} \frac{L_{i}(t)}{\rm{photon} \, \rm{s}^{-1}} = \begin{cases} \left( \frac{M_{\ast}(t)}{\rm{M}_{\odot}} \right) \! \exp \! \left[p_{1} \! + \! p_{2} \left( \frac{t}{\rm{Myr}} \right)^{p_{3}} \right] & \! t \! < \! 3.7 \, \rm{Myr} \\ p_{4} \left( \frac{M_{\ast}(t)}{\rm{M}_{\odot}} \right) \left( \frac{p_{5}}{t} \right)^{p_{6}} & \\ \, \, \, \, \, \, \, \, \times \left[ 1 + \left( \frac{t}{p_{5}}\right)^{p_{7}} \right]^{p_{8}} & \rm{otherwise}. 
\\ \end{cases} \end{equation} \begin{table} \begin{minipage}{84mm} \centering \caption{Best fit parameters for the fitting functions to the FUV and EUV luminosities (see equation~\ref{fuv_euv_eqn}).} \label{SB99_fit_pars} \begin{tabular}{ccc} \hline Parameter & FUV & EUV \\ \hline $p_{1}$ & 108.1 & 107.2 \\ $p_{2}$ & 0.17 & 0.11 \\ $p_{3}$ & 0.92 & 0.97 \\ $p_{4}$ & 6.4 $\times$ 10$^{37}$ & 3.3 $\times$ 10$^{21}$ \\ $p_{5}$ & 1.77 $\times$ 10$^{6}$ Myr & 6.89 $\times$ 10$^{5}$ Myr \\ $p_{6}$ & 1.67 & 4.79 \\ $p_{7}$ & 28.2 & 1.12 \\ $p_{8}$ & 1.65 & $-$1.7 $\times$ 10$^{4}$ \\ \hline \end{tabular} \end{minipage} \end{table} The stellar luminosities are then attenuated by local absorption around the star particle. In the fiducial \textsc{lebron} method used in \textsc{fire}, the infrared, optical and UV fluxes used for the radiation pressure are shielded due to dust based on a local estimate of the gas column density around the star particle. However, as the ionising radiation is also absorbed by H\textsc{i}, the \textsc{fire} model treats the effects of ionising radiation separately based on a Str\"{o}mgren argument, where neighbouring gas particles are flagged as H\textsc{ii} regions and ionised, starting with the nearest neighbour, until the available ionising photon budget from the star particle is used up (see Appendix~E of \citealt{hopkins18a} for details). H\textsc{ii} region particles are then prevented from cooling below 10$^{4} \, \rm{K}$. This approach assumes that all ionising radiation is confined within H\textsc{ii} regions. However for this work we are also interested in the diffuse ionising radiation that escapes these regions, and the effects it has on the interstellar chemistry. For the stellar fluxes used in the photochemistry calculations, we therefore modify the attenuation around star particles, in both the FUV and EUV bands, by introducing two free parameters, $f_{\rm{FUV}}^{\rm{esc}}$ and $f_{\rm{EUV}}^{\rm{esc}}$. 
These parameters represent the escape fractions of radiation in the FUV and EUV bands, respectively, from H\textsc{ii} regions. The stellar luminosities are then reduced by these fractions, before being propagated in the optically thin limit through the gravity tree to the receiving gas particles as described in Appendix~E of \citet{hopkins18a}. We also identify neighbouring gas particles in the H\textsc{ii} region of each star particle, using the same Str\"{o}mgren method as in the fiducial \textsc{lebron} model. However, rather than impose a temperature floor of 10$^{4} \, \rm{K}$, we instead disable shielding in H\textsc{ii} region particles, so that they receive the full flux from the local star particle. This allows us to follow the non-equilibrium chemistry and temperature evolution of the H\textsc{ii} regions. To determine the values of the escape fraction parameters, we note that, in our simulations, the median flux incident on each gas particle scales linearly with the star formation rate surface density averaged over the galaxy disc, albeit with a scatter of $\pm$0.5~dex (see Section~\ref{sim_flux_sect}), while the normalisation of this scaling depends on the assumed escape fraction. We therefore calibrated these parameters so that the scaling relation between the median stellar fluxes and disc-averaged star formation rate surface density reproduces the observed Milky Way FUV and EUV fluxes from \citet{black87} at the star formation rate surface density of the Milky Way. We thus find escape fractions of $f_{\rm{FUV}}^{\rm{esc}} = 0.1$ and $f_{\rm{EUV}}^{\rm{esc}} = 0.05$. The details of this calibration can be found in Appendix~\ref{esc_fraction_sect}. 
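Equation~\ref{fuv_euv_eqn}, together with the parameters in Table~\ref{SB99_fit_pars}, can be evaluated directly; the following sketch (the dictionary layout and function name are our own) implements the piecewise fit:

```python
import numpy as np

# Best-fit parameters (p1..p8) per band, from the table in the text.
FIT_PARS = {
    "FUV": (108.1, 0.17, 0.92, 6.4e37, 1.77e6, 1.67, 28.2, 1.65),
    "EUV": (107.2, 0.11, 0.97, 3.3e21, 6.89e5, 4.79, 1.12, -1.7e4),
}

def stellar_luminosity(t_myr, mstar_msun, band):
    """Band luminosity L_i(t) in photon/s for a star particle of current
    mass mstar_msun (M_sun) and age t_myr (Myr), using the piecewise
    fitting function of the text."""
    p1, p2, p3, p4, p5, p6, p7, p8 = FIT_PARS[band]
    if t_myr < 3.7:
        return mstar_msun * np.exp(p1 + p2 * t_myr**p3)
    return (p4 * mstar_msun * (p5 / t_myr)**p6
            * (1.0 + (t_myr / p5)**p7)**p8)
```

Note that the fit is multiplicative in the current stellar mass, so the luminosity of a star particle scales linearly with $M_{\ast}(t)$.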
\citet{diemer18} used a similar approach of calibrating the escape fraction based on the relation between UV flux and star formation rate surface density normalised to values in the Milky Way, which they used to model the atomic to molecular transition in the \textsc{illustris-tng} cosmological simulations in post-processing. Based on this calibration, they found an escape fraction at a wavelength of $1000 \, \rm{\AA}$ (in the Lyman-Werner band) of 0.1, which agrees with our value in the FUV band. There are many examples of numerical studies in the literature that have modelled the escape fraction of ionising and non-ionising radiation from H\textsc{ii} regions \citep[e.g.][]{dale12, howard17, rahner17, kim19}. They find that the escape fractions vary widely, from zero to nearly unity, depending on the age of the H\textsc{ii} region and local conditions such as density and the initial mass of the stellar birth cloud. While our model using constant escape fractions reproduces the observed strength of the diffuse FUV and EUV radiation field, based on constraints from the Milky Way, we do not capture variations in the escape fractions. In the future, this could be improved by developing a subgrid model for the escape fractions as a function of resolved properties of the H\textsc{ii} regions in the simulations. Finally, for gas particles that do not lie within an H\textsc{ii} region the incident radiation is further attenuated due to shielding by the local gas cloud. We calculate the shielding length, $L_{\rm{sh}}$, based on a Sobolev-like approximation using the density gradient as follows: \begin{equation}\label{sobolev_eqn} L_{\rm{sh}} = \frac{1}{2} \left( \frac{\rho}{|\nabla \rho|} + h_{\rm{inter}} \right), \end{equation} where $\rho$ is the gas density and $h_{\rm{inter}}$ is the mean inter-particle spacing. The first term accounts for the size of the resolved gas cloud around the particle, while the second term accounts for the size of the particle itself. 
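Equation~\ref{sobolev_eqn} amounts to a one-line estimate per particle; a minimal sketch (the function name and choice of cgs units are our own):

```python
def shielding_length(rho, grad_rho_mag, h_inter):
    """Sobolev-like shielding length L_sh = (rho/|grad rho| + h_inter)/2.

    rho          : gas density (g cm^-3)
    grad_rho_mag : magnitude of the density gradient (g cm^-4)
    h_inter      : mean inter-particle spacing (cm)

    The first term estimates the size of the resolved gas cloud around
    the particle; the second accounts for the particle itself.
    """
    return 0.5 * (rho / grad_rho_mag + h_inter)
```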
The local column density of a given species, $i$, is given by $N_{i} = n_{i} L_{\rm{sh}}$, where $n_{i}$ is the density of species $i$. We then suppress the photochemical rates as a function of the local column densities of H\textsc{i}, H$_{2}$, He\textsc{i}, He\textsc{ii}, CO and dust, using the methods described in \citet{richings14b}. Equation~\ref{sobolev_eqn} treats the shielding using a single, average shielding length (and hence column density) for the local gas cloud. However, as the photochemical rates are typically dominated by the low column density sight lines through the cloud, we caution that this approximation will tend to overestimate the shielding, which could lead to higher molecular abundances. To study the impact of using a local treatment for stellar fluxes, we also repeat our simulations with a uniform Interstellar Radiation Field (ISRF). In these runs, we scale the normalisation of the radiation field by the star formation rate over the preceding 10~Myr averaged over the disc of the galaxy, $\Sigma_{\rm{SFR,} \, \rm{disc}}$. Throughout this paper we define the galaxy disc as a cylinder with a radius of $6 R_{\rm{exp}}$, where $R_{\rm{exp}}$ is the initial exponential scale radius of the stellar disc component, and extending to $\pm$1.2$R_{\rm{exp}}$ (i.e. 20 per cent of the radius of the cylinder) above and below the mid-plane. 
The flux incident on each gas particle in the FUV and EUV bands is then scaled from the Milky Way values as follows: \begin{align} \mathcal{F}_{\rm{FUV,} \, \rm{uniform}} &= \mathcal{F}_{\rm{FUV,} \, \rm{MW}} \frac{\Sigma_{\rm{SFR,} \, \rm{disc}}}{\Sigma_{\rm{SFR,} \, \rm{MW}}}, \label{uni_fuv_flux} \\ \mathcal{F}_{\rm{EUV,} \, \rm{uniform}} &= \mathcal{F}_{\rm{EUV,} \, \rm{MW}} \frac{\Sigma_{\rm{SFR,} \, \rm{disc}}}{\Sigma_{\rm{SFR,} \, \rm{MW}}}, \label{uni_euv_flux} \end{align} where $\mathcal{F}_{\rm{FUV,} \, \rm{MW}} = 1.7 \times 10^{8} \, \rm{photon} \, \rm{cm}^{-2} \, \rm{s}^{-1}$ and $\mathcal{F}_{\rm{EUV,} \, \rm{MW}} = 1.1 \times 10^{7} \, \rm{photon} \, \rm{cm}^{-2} \, \rm{s}^{-1}$ are the FUV and EUV fluxes in the local solar neighbourhood of the Milky Way, respectively \citep{black87}, and $\Sigma_{\rm{SFR,} \, \rm{MW}} = 4 \times 10^{-3} \, \rm{M}_{\odot} \, \rm{yr}^{-1} \, \rm{kpc}^{-2}$ is the star formation rate surface density in the Milky Way \citep[e.g.][]{robertson08}. This time-dependent radiation field is applied uniformly to all gas particles. Local shielding around the receiving gas particles is implemented using the Sobolev-like shielding length in equation~\ref{sobolev_eqn} as before. The shape of the UV spectrum is obtained by averaging the \textsc{starburst99} spectra shown in Fig.~\ref{SB99_fig}, assuming a constant star formation rate. We also disable the H\textsc{ii} region prescription in these runs, as this is a local effect of the stellar fluxes. \subsubsection{Depletion of metals onto dust grains}\label{depl_sect} The non-equilibrium chemistry solver requires as input the gas-phase abundances of each element in the reaction network. The simulations track the total elemental abundances, but for some elements a fraction of the total abundance will be in dust grains and therefore will not contribute to the gas-phase chemistry. 
We therefore need to determine the fraction of each element in dust grains, and reduce the gas-phase abundances accordingly. \citet{jenkins09} determined the fraction of metals that are depleted onto dust grains in the solar neighbourhood on an element-by-element basis by measuring the column densities of 17 metals and neutral hydrogen along 243 sight lines in the Milky Way (although not all sight lines include measurements for all elements). By assuming that the total metal abundances in the solar neighbourhood are at their solar values (for which they used the solar abundances from \citealt{lodders03}), they inferred that any discrepancies between the measured and solar abundances were due to depletion onto dust grains. They parameterised the overall strength of dust depletion along a given sight line according to a parameter $F_{\ast}$, which was normalised such that, in this Milky Way sample of sight lines, $F_{\ast}$ varied between values of 0 (the least depleted sight line, not including sight lines with neutral hydrogen column densities $<$10$^{19.5} \, \rm{cm}^{-2}$ which they exclude due to potential contamination from ionised hydrogen) and 1 (the $\zeta$ Oph sight line). The fraction of each individual element that remains in the gas phase can then be expressed as a linear function of $F_{\ast}$ as follows, from equation~10 of \citet{jenkins09}: \begin{equation}\label{depl_eqn} \log_{10} [ M_{X}^{\rm{gas}} / M_{X}^{\rm{tot}} ] = B_{X} + A_{X} (F_{\ast} - z_{X}), \end{equation} where $M_{X}^{\rm{gas}}$ and $M_{X}^{\rm{tot}}$ are the gas-phase and total masses of element $X$, respectively. The best-fit linear coefficients $A_{X}$, $B_{X}$ and $z_{X}$ for each element are given in Table~4 of \citet{jenkins09}. 
\citet{jenkins09} also showed that $F_{\ast}$ is closely correlated with the average neutral hydrogen density along the line of sight between the observer and the background source, $\langle n_{\rm{H}} \rangle$ (see the left-hand panel of Fig.~16 in \citealt{jenkins09}). Their best-fit relation is: \begin{equation}\label{Fast_eqn} F_{\ast} = 0.772 + 0.461 \log_{10} \langle n_{\rm{H}} \rangle. \end{equation} \citet{decia16} expanded on the results of \citet{jenkins09} by adding a sample of 70 damped Lyman-$\alpha$ absorbers (DLAs) observed in quasar spectra, in addition to the Milky Way sight lines. This allowed them to extend the linear fits of the depletion factors to $F_{\ast} < 0$ (i.e. systems with weaker overall dust depletion than seen in the solar neighbourhood). For our simulations, we implement an empirical model for the depletion of metals onto dust grains based on these observations as follows. We use equation~\ref{Fast_eqn} to calculate the overall strength of dust depletion, $F_{\ast}$, for each gas particle as a function of density. We assume that the particle's total hydrogen density, $n_{\rm{H, \, tot}}$, is approximately equal to $\langle n_{\rm{H}} \rangle$ in equation~\ref{Fast_eqn}, which is the average neutral hydrogen density along the line of sight to the background source in the observations. However, this will tend to overestimate the strength of depletion: the true density at which the depletion occurs will typically be higher than the sight-line average, so the fit, which is calibrated on these lower average densities, returns too high an $F_{\ast}$ when applied to the particle's local density. At high densities ($n_{\rm{H, \, tot}} \! > \! 3.12 \, \rm{cm}^{-3}$), we limit $F_{\ast}$ to be no greater than unity, corresponding to the strongest overall dust depletion strength observed in the Milky Way sight lines. It is possible that $F_{\ast}$ may exceed unity in dense environments; however, it is uncertain how to extrapolate the observed relations to this regime. 
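Equation~\ref{Fast_eqn}, together with the cap at $F_{\ast} = 1$, can be written as a short Python sketch (our own illustration, with the density in $\rm{cm}^{-3}$):

```python
import math

def f_star(n_H_tot):
    """Overall dust depletion strength F* as a function of the particle's
    total hydrogen density (cm^-3), used as a proxy for the sight-line
    average <n_H> in the Jenkins (2009) fit.  Capped at unity, the value
    the fit reaches at n_H ~ 3.12 cm^-3."""
    return min(0.772 + 0.461 * math.log10(n_H_tot), 1.0)
```

At $\langle n_{\rm{H}} \rangle = 1 \, \rm{cm}^{-3}$ this gives $F_{\ast} = 0.772$, while above $3.12 \, \rm{cm}^{-3}$ the cap at unity applies.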
We also impose a temperature cut such that, above $10^{6} \, \rm{K}$, all metals are in the gas phase, as we expect dust grains will be rapidly destroyed via sputtering above this temperature \citep[e.g.][]{tsai95}. \begin{table} \begin{minipage}{84mm} \centering \caption{Linear fit coefficients used in equation~\ref{depl_eqn} for the depletion of metals onto dust grains.} \label{depl_coeff} \begin{tabular}{ccccc} \hline Element & $A_{X}$ & $B_{X}$ & $z_{X}$ & Ref.\footnote{References: J09 \citep{jenkins09}; DC16 \citep{decia16}.} \\ \hline C & $-0.101$ & $-0.193$ & $0.803$ & J09 \\ N & $0.0$ & $-0.109$ & $0.55$ & J09 \\ O & $-0.101$ & $-0.02$ & $-1.50$ & DC16 \\ Mg & $-0.412$ & $-0.03$ & $-1.50$ & DC16 \\ Si & $-0.426$ & $-0.03$ & $-1.50$ & DC16 \\ S & $-0.189$ & $-0.04$ & $-1.50$ & DC16 \\ Fe & $-0.851$ & $-0.01$ & $-1.50$ & DC16 \\ \hline \end{tabular} \vspace{-0.2 in} \end{minipage} \end{table} We then obtain the depletion factors of individual elements using equation~\ref{depl_eqn}, with linear fit coefficients derived from the fits of \citet{decia16} where available. They fit the depletion factors as a function of [Zn/Fe], which is related to $F_{\ast}$ by $F_{\ast} = 1.48 [\rm{Zn} / \rm{Fe}] - 1.50$, so we convert the fit coefficients reported in Table~3 of \citet{decia16} to the coefficients $A_{X}$, $B_{X}$ and $z_{X}$ used in equation~\ref{depl_eqn}. For elements not included in \citet{decia16}, we instead use the linear fits from \citet{jenkins09}. We summarise the fit coefficients used in this work in Table~\ref{depl_coeff}. The fits for some of these elements are uncertain due to limited observational data. For example, the \citet{jenkins09} sample contains only a handful of carbon depletion measurements based on weak-line transitions of C\textsc{ii}. 
However, \citet{sofia11} find that the gas-phase column densities of carbon measured from strong-line transitions of C\textsc{ii} are a factor $\approx$2 lower than those measured from weak-line transitions. This would result in stronger depletion of carbon than expected from these fits by a factor $\approx$2. Some elements in the \textsc{chimes} network do not appear in Table~\ref{depl_coeff} as they are not depleted onto dust grains. We use the resulting depletion factors to reduce the gas-phase abundance of each element. We also sum the mass of each element in dust grains, using all 17 elements in \citet{decia16} and/or \citet{jenkins09}, to determine the total dust abundance. We use this to scale the rate of reactions that occur on the surface of dust grains (e.g. the formation of H$_{2}$, and grain surface recombination reactions), and thermal processes involving dust grains, such as photoelectric heating. However, we only scale the rates by the total dust abundance and we do not consider varying the grain size distributions that were originally assumed in the calculation of the rates for these processes (which used either the \citealt{mathis77} or the \citealt{weingartner01} distributions; see \citealt{richings14a} and references therein for details of how these rates were calculated). The top panel of Fig.~\ref{depl_fig} shows the mass fraction of each element in the gas phase as a function of hydrogen density. We see that at the highest densities carbon, nitrogen and oxygen are reduced by up to approximately a factor of two, while iron exhibits the strongest depletion as it is reduced by more than two orders of magnitude. In the bottom panel of Fig.~\ref{depl_fig} we show the total dust to metals mass ratio ($DTM$), normalised to the dust to metals ratio along the sight lines with the strongest dust depletion in the Milky Way ($DTM_{\rm{MW}} = 0.485$), corresponding to $F_{\ast} = 1$. 
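As an illustration of the element-by-element depletion described above, equation~\ref{depl_eqn} with the coefficients of Table~\ref{depl_coeff} can be evaluated as follows (a minimal Python sketch, not the actual chemistry-module code):

```python
# (A_X, B_X, z_X) linear fit coefficients, as listed in the table
# (Jenkins 2009; De Cia et al. 2016, converted to the F* parameterisation).
DEPL_COEFF = {
    "C":  (-0.101, -0.193,  0.803),
    "N":  ( 0.0,   -0.109,  0.55),
    "O":  (-0.101, -0.02,  -1.50),
    "Mg": (-0.412, -0.03,  -1.50),
    "Si": (-0.426, -0.03,  -1.50),
    "S":  (-0.189, -0.04,  -1.50),
    "Fe": (-0.851, -0.01,  -1.50),
}

def gas_phase_fraction(element, f_star):
    """Fraction of an element left in the gas phase, M_gas / M_tot.
    From equation (10) of Jenkins (2009):
    log10(M_gas / M_tot) = B_X + A_X (F* - z_X)."""
    a, b, z = DEPL_COEFF[element]
    return 10.0 ** (b + a * (f_star - z))
```

At $F_{\ast} = 1$ (the strongest depletion observed in the Milky Way sight lines) this leaves roughly half of the oxygen but less than one per cent of the iron in the gas phase, consistent with the trends in Fig.~\ref{depl_fig}.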
This empirical model for the depletion of metals onto dust grains is based on observations in the Milky Way and DLAs. However, it does not explicitly follow the formation and destruction mechanisms that govern the abundance of dust grains. Other studies have developed numerical models that capture these processes \citep[e.g.][]{asano13, bekki15, hirashita15, mckinnon18, choban22}, which allows for a more complex evolution of the dust grain population. In this work we only consider the empirical model, as it is the simplest implementation that reproduces observed depletion factors. However, in the future it would be interesting to compare how the different approaches to modelling dust grains impact the non-equilibrium interstellar chemistry. To study the effects of dust depletion on the non-equilibrium chemistry, we also repeat each simulation with a constant dust to metals ratio equal to the maximum Milky Way value ($DTM_{\rm{MW}}$), but without reducing the gas-phase element abundances by the corresponding depletion factor, so that the chemistry solver uses the total elemental abundances. This approach is inconsistent, as all metals are in the gas phase but dust grains are also present, so some metals are counted twice. However, such an approach has been used in previous studies \citep[e.g.][]{richings16}, so this will allow us to quantify the uncertainties that are introduced if the depletion of metals onto dust grains is not correctly accounted for. \section{Simulations}\label{sim_sect} \subsection{Initial conditions}\label{IC_sect} We simulate a series of isolated disc galaxies, with initial conditions created using the \textsc{MakeDisk} code \citep{springel05} as follows. The model galaxies consist of a rotating disc of gas and stars along with a central stellar bulge, embedded within a live dark matter halo. The halo and stellar bulge are spherical, with a \citet{hernquist90} radial density profile. 
The stellar and gaseous components follow an exponential radial profile. The vertical structure of the stellar disc follows that of an isothermal sheet, with a constant scale height that we set to 0.1 times the radial exponential scale length. For the gas disc, the vertical profile is computed to be in hydrostatic equilibrium for the given gravitational potential, at a temperature of $10^{4} \, \rm{K}$. The parameters of the galaxy models are chosen according to redshift zero scaling relations, to represent typical disc galaxies in the nearby Universe. We consider galaxies with halo masses ranging from dwarfs, with $M_{200, \, \rm{crit}} = 10^{10} \, \rm{M}_{\odot}$, to Milky Way-mass galaxies with $M_{200, \, \rm{crit}} = 10^{12} \, \rm{M}_{\odot}$. The concentration parameter of the dark matter halo is calculated as a function of $M_{200, \, \rm{crit}}$ using the redshift zero mass-concentration relation from \citet{duffy08}, using their full halo sample. The total stellar mass is calculated using the abundance matching model of \citet{moster13}, which we modify according to \citet{sawala15} to account for the inefficiency of galaxy formation at low halo masses. To divide the stellar mass between the bulge and disc components, we need to determine the ratio of the bulge stellar mass to total stellar mass ($B/T$). In Fig.~\ref{bulge_fig}, the red data points show the median and tenth to ninetieth percentiles of the $B/T$ ratio in bins of total stellar mass from a sample of galaxies in the Sloan Digital Sky Survey (SDSS; \citealt{benson07}). The black data points show $B/T$ for individual galaxies in the \textit{Spitzer} Survey of Stellar Structure in Galaxies (S$^{4}$G; \citealt{salo15}), while the open black circles show the median $B/T$ ratio in bins of stellar mass for the S$^{4}$G sample. 
We fit a power-law function to the median $B/T$ ratios versus total stellar mass, using the S$^{4}$G and SDSS samples at stellar masses below and above $10^{9} \, \rm{M}_{\odot}$, respectively. We enforce $B/T = 0.0$ at $M_{\ast, \rm{tot}} \! < \! 3.0 \times 10^{7} \, \rm{M}_{\odot}$, and $B/T = 1.0$ at $M_{\ast, \rm{tot}} \! > \! 8.0 \times 10^{10} \, \rm{M}_{\odot}$. The resulting best-fit power-law relation (black curve in Fig.~\ref{bulge_fig}) is: \begin{equation}\label{bulge_eqn} B/T = \begin{cases} \! 0.0 & \frac{M_{\ast, \rm{tot}}}{\rm{M}_{\odot}} \! < \! 3 \! \times \! 10^{7} \\ \! 0.424 \! \left( \frac{M_{\ast, \rm{tot}}}{10^{10} \, \rm{M}_{\odot}} \right)^{0.3887} \! & 3 \! \times \! 10^{7} \! \leq \! \frac{\mathit{M}_{\ast, \rm{tot}}}{\rm{M}_{\odot}} \! \leq \! 8 \! \times \! 10^{10} \\ \! 1.0 & \rm{otherwise}. \\ \end{cases} \end{equation} \citet{lange16} study the relation between the stellar half-light radius and stellar mass of the bulge and disc components in galaxies from the Galaxy And Mass Assembly (GAMA) survey. We use their best-fit power-law relations for their final redshift zero disc and spheroid samples (see Table~1 of \citealt{lange16}) to calculate the stellar half-light radii of the disc and bulge, respectively, in our galaxy models. We assume that these are equal to the half-mass radii, $R_{1/2}$. From the half-mass radius, we calculate the exponential scale length of the disc as $R_{\rm{exp}} = R_{1/2} / 1.68$, and the scale radius of the \citet{hernquist90} profile as $a = R_{1/2} / (1 + \sqrt{2})$. Both the stellar and gaseous discs use the same scale length. 
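The structural relations above (equation~\ref{bulge_eqn} and the half-mass-radius conversions) translate directly into code; the following is a sketch with hypothetical helper names, masses in $\rm{M}_{\odot}$, and radii in the same units in and out:

```python
def bulge_to_total(m_star_tot):
    """Piecewise power-law fit for the bulge-to-total stellar mass ratio B/T."""
    if m_star_tot < 3e7:
        return 0.0
    if m_star_tot > 8e10:
        return 1.0
    return 0.424 * (m_star_tot / 1e10) ** 0.3887

def disc_scale_length(r_half):
    """Exponential scale length from the half-mass radius, R_exp = R_1/2 / 1.68."""
    return r_half / 1.68

def hernquist_scale_radius(r_half):
    """Hernquist (1990) scale radius, a = R_1/2 / (1 + sqrt(2))."""
    return r_half / (1.0 + 2.0 ** 0.5)
```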
\begin{table*} \begin{minipage}{168mm} \centering \caption{Model galaxy parameters: total halo mass ($M_{200, \, \rm{crit}}$), dark matter halo concentration parameter ($c_{200}$), total stellar mass ($M_{\ast, \, \rm{tot}}$), bulge to total ratio ($B/T$), \citet{hernquist90} scale radius of the bulge ($R_{\rm{bulge}}$), exponential scale length of the stellar and gaseous discs ($R_{\rm{disc}}$), total galaxy gas fraction ($f_{\rm{gas}}$), initial metallicity ($Z_{\rm{init}}$), and initial mass of gas and star particles ($m_{\rm{b}}$).} \label{galaxy_pars} \begin{tabular}{lccccccccc} \hline Name & $M_{200, \, \rm{crit}}$ & $c_{200}$ & $M_{\ast, \, \rm{tot}}$ & $B/T$ & $R_{\rm{bulge}}$ & $R_{\rm{disc}}$ & $f_{\rm{gas}}$ & $Z_{\rm{init}}$ & $m_{\rm{b}}$ \\ & (M$_{\odot}$) & & (M$_{\odot}$) & & (kpc) & (kpc) & & (Z$_{\odot}$) & (M$_{\odot}$) \\ \hline \multicolumn{10}{c}{\textbf{Fiducial, Uniform ISRF, and No Depletion}} \\ \hline m1e10 & $10^{10}$ & 9.9 & $6.6 \times 10^{6}$ & 0.0 & N/A & 0.41 & 0.90 & 0.06 & 400 \\ m3e10 & $3 \times 10^{10}$ & 8.9 & $8.9 \times 10^{7}$ & 0.07 & 0.12 & 0.82 & 0.77 & 0.3 & 400 \\ m1e11 & $10^{11}$ & 7.9 & $1.4 \times 10^{9}$ & 0.20 & 0.33 & 1.68 & 0.49 & 0.8 & 400 \\ m3e11 & $3 \times 10^{11}$ & 7.1 & $1.1 \times 10^{10}$ & 0.43 & 0.70 & 2.66 & 0.30 & 1.1 & 400 \\ m3e11\_lowGas & $3 \times 10^{11}$ & 7.1 & $1.1 \times 10^{10}$ & 0.43 & 0.70 & 2.66 & 0.10 & 1.1 & 400 \\ m3e11\_hiGas & $3 \times 10^{11}$ & 7.1 & $1.1 \times 10^{10}$ & 0.43 & 0.70 & 2.66 & 0.50 & 1.1 & 400 \\ m1e12 & $10^{12}$ & 6.4 & $3.1 \times 10^{10}$ & 0.66 & 1.03 & 3.10 & 0.19 & 1.2 & 400 \\ \hline \multicolumn{10}{c}{\textbf{Resolution tests, Fiducial only}} \\ \hline m1e10\_lowRes08 & $10^{10}$ & 9.9 & $6.6 \times 10^{6}$ & 0.0 & N/A & 0.41 & 0.90 & 0.06 & 3200 \\ m3e10\_lowRes08 & $3 \times 10^{10}$ & 8.9 & $8.9 \times 10^{7}$ & 0.07 & 0.12 & 0.82 & 0.77 & 0.3 & 3200 \\ m1e11\_lowRes08 & $10^{11}$ & 7.9 & $1.4 \times 10^{9}$ & 0.20 & 0.33 & 1.68 & 0.49 & 0.8 & 
3200 \\ m3e11\_lowRes08 & $3 \times 10^{11}$ & 7.1 & $1.1 \times 10^{10}$ & 0.43 & 0.70 & 2.66 & 0.30 & 1.1 & 3200 \\ m1e12\_lowRes08 & $10^{12}$ & 6.4 & $3.1 \times 10^{10}$ & 0.66 & 1.03 & 3.10 & 0.19 & 1.2 & 3200 \\ m3e10\_hiRes08 & $3 \times 10^{10}$ & 8.9 & $8.9 \times 10^{7}$ & 0.07 & 0.12 & 0.82 & 0.77 & 0.3 & 50 \\ \hline \vspace{-0.3 in} \end{tabular} \end{minipage} \end{table*} To calculate the gas fractions for our model galaxies, we use observed galaxy H\textsc{i} and H$_{2}$ masses from The H\textsc{i} Nearby Galaxy Survey (THINGS; \citealt{leroy08}). The black data points in Fig.~\ref{fgas_fig} show the gas fraction, $f_{\rm{gas}} = M_{\rm{gas}} / (M_{\rm{gas}} + M_{\ast, \, \rm{tot}})$, plotted against total stellar mass in the THINGS galaxies. The gas mass is the sum of the atomic and molecular masses, $M_{\rm{gas}} = M_{\rm{HI}} + M_{\rm{H}2}$, and includes a factor 1.36 correction for helium. Using the \citet{leroy08} data, we fit the following function to the gas fraction versus stellar mass (black curve in Fig.~\ref{fgas_fig}): \begin{align}\label{fgas_eqn} f_{\rm{gas}} &= \frac{M_{\rm{gas}}}{M_{\rm{gas}} + M_{\ast, \, \rm{tot}}} \\ &= \! \begin{cases} \! 0.9 & \! \! \! \! \frac{M_{\ast, \, \rm{tot}}}{\rm{M}_{\odot}} \! < 2 \! \times \! 10^{7} \\ \! 0.524 \! - \! 0.222 \! \log_{10} \! \left( \frac{M_{\ast, \, \rm{tot}}}{10^{9} \, \rm{M_{\odot}}} \right) & \! \! \! \! 2 \! \times \! 10^{7} \! < \! \frac{M_{\ast, \, \rm{tot}}}{\rm{M}_{\odot}} \! < \! 2 \! \times \! 10^{11} \\ \! 0.0 & \! \! \! \! \rm{otherwise}. \end{cases} \end{align} We cap the gas fraction in the fitting function to be no greater than 0.9, at stellar masses $M_{\ast, \, \rm{tot}}$$<$$2 \times 10^{7} \, \rm{M_{\odot}}$, and by definition it can be no less than zero. We use this fitting function to determine the gas fraction for our galaxy models, given the total stellar mass calculated above. 
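The gas fraction fit (equation~\ref{fgas_eqn}) can likewise be sketched as (a minimal illustration under the definitions above):

```python
import math

def gas_fraction(m_star_tot):
    """Fit to the THINGS gas fractions versus total stellar mass (Msun),
    capped at 0.9 below 2e7 Msun and at zero above 2e11 Msun."""
    if m_star_tot < 2e7:
        return 0.9
    if m_star_tot > 2e11:
        return 0.0
    return 0.524 - 0.222 * math.log10(m_star_tot / 1e9)
```

For the m1e12 model ($M_{\ast, \, \rm{tot}} = 3.1 \times 10^{10} \, \rm{M}_{\odot}$) this gives $f_{\rm{gas}} \approx 0.19$, matching Table~\ref{galaxy_pars}.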
For the model with a halo mass $M_{200, \, \rm{crit}} = 3 \times 10^{11} \, \rm{M}_{\odot}$ (m3e11), we also run two additional models with a gas fraction reduced or increased by 20 percentage points relative to the best-fit scaling relation (m3e11\_lowGas and m3e11\_hiGas, with $f_{\rm{gas}} = 0.10$ and $0.50$, respectively). This will allow us to explore the effects of different gas fractions at fixed halo mass. For comparison, the red curve in Fig.~\ref{fgas_fig} shows the best-fit scaling relation for a sample of low-mass isolated galaxies in SDSS from \citet{bradford15}, taken from the first two rows of their Table~3 (see also their Fig.~5). The gas masses in this SDSS sample were determined from H\textsc{i} observations, including a correction for helium, but do not include the molecular component. We also plot the gas fractions in the xCOLD GASS sample (\citealt{saintonge17}; blue curve), using the H\textsc{i} and H$_{2}$ masses in bins of stellar mass from their Fig.~13. Finally, we set the initial metallicity of the gas and stars in our model galaxies according to the mass-metallicity relation of SDSS galaxies from \citet{andrews13}. The relative abundances between different metal elements are assumed to be solar, with the initial helium abundance scaled between primordial and solar according to the total metallicity. Our simulations include the injection of metals from winds and supernovae (see Section~\ref{physics_sect}), so the metallicity in our model galaxies will increase over time. The idealised nature of these model galaxies means that we do not include cosmological accretion of primordial or low-metallicity gas onto the galaxy. We therefore might not expect the evolution to maintain the observed mass-metallicity scaling relation. However, we find that, as we only evolve each galaxy for 800~Myr in total (see below), the change in metallicity is relatively small, and the galaxies do not evolve far from this relation by the end of the simulation. 
The parameters of our seven galaxy models are summarised in Table~\ref{galaxy_pars}. As discussed in Sections~\ref{flux_sect} and \ref{depl_sect}, we run each galaxy model three times: first with the fiducial model, including the prescriptions for local stellar fluxes and dust depletion; second with a uniform ISRF, in which a uniform radiation field is applied to all gas particles; and third with no depletion, in which we use a constant dust to metals ratio and we do not reduce the gas phase metal abundances to account for depletion onto dust grains. In these runs, we use a mass resolution of $400 \, \rm{M}_{\odot}$ per particle for the gas and stars. The mass of dark matter particles is $1910 \, \rm{M}_{\odot}$, which corresponds to $(\Omega_{\rm{m}} - \Omega_{\rm{b}}) / \Omega_{\rm{b}}$ times the baryonic particle mass, where $\Omega_{\rm{m}}$ and $\Omega_{\rm{b}}$ are the cosmological density parameters for the total matter and baryonic content of the Universe, respectively. We use a constant gravitational softening length of 2.8~pc and 1.6~pc for dark matter and star particles, respectively. The gas particles use an adaptive gravitational softening length equal to the mean inter-particle spacing, down to a minimum gas softening of 0.08~pc. At the star formation density threshold of $n_{\rm{H}} = 10^{3} \, \rm{cm}^{-3}$ the gas softening length is 2.2~pc. To test the importance of numerical resolution on our results, we also repeat some of the galaxy models with 8 times lower mass resolution, and we repeat the m3e10 dwarf galaxy with 8 times higher mass resolution. In each case, the gravitational softening lengths are scaled with $m_{\rm{b}}^{1/3}$, where $m_{\rm{b}}$ is the baryonic particle mass. We only run the resolution tests with the fiducial model. At the beginning of each simulation, the gas disc rapidly cools from its initial temperature of $10^{4} \, \rm{K}$ and starts to form stars. 
However, there is a delay before the onset of stellar feedback, which is needed to regulate this process. This delay results in a strong initial burst of star formation, which disrupts the gas disc and in some cases can destroy it altogether. To alleviate this disruption and allow the disc to settle into a self-regulated steady state, we therefore modify the subgrid feedback models during the first 300~Myr of the simulation as follows. For the initial 150~Myr, we reduce the time-scales for supernova feedback by a factor of 100, and renormalise the rates by the same factor so that the total number of supernovae per unit mass of stars formed remains unchanged. This enables the stellar feedback to regulate the initial burst of star formation more rapidly. Then from 150 to 300~Myr we smoothly reduce the factor by which the supernova time-scales are suppressed, until they reach their fiducial value after 300~Myr. We use the resulting snapshot at 300~Myr as our initial conditions for the main runs, which are then run for a further 500~Myr. For all of the results presented in this paper, we denote the time $t = 0$ as starting from the snapshot at the end of the 300~Myr settling-in period. We do not include any of the snapshots prior to this point in our analysis. \subsection{Galaxy morphology and evolution}\label{morph_sect} The Milky Way-mass simulation (m1e12) using the fiducial model is shown at 500~Myr in Fig.~\ref{m1e12_morph_fig}. The left-hand panels show mock Hubble images of the stellar light, including dust attenuation calculated using dust abundances from our fiducial dust depletion model. These images use the Hubble filters F336W, F555W and F814W. We have superimposed images of the continuum-subtracted H$\alpha$ emission line in red. These mock observations were created in post-processing using the publicly available radiative transfer code \textsc{radmc-3d}\footnote{\url{http://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d/}} \citep{dullemond12}. 
We describe how we post-process the simulation outputs with \textsc{radmc-3d} in more detail in Section~\ref{line_emission_method_sect}. The H$\alpha$ emission generally coincides with regions containing young, blue stars. These H$\alpha$-emitting regions are somewhat more extended at large galactic radii, due to the lower gas densities compared to the galactic centre, which results in larger Str\"{o}mgren radii. We also see prominent dust lanes, particularly along the flocculent spiral arm structures. The right-hand panels of Fig.~\ref{m1e12_morph_fig} show the distribution of gas. The brightness of each pixel indicates the gas density, while the gas temperature is shown by the colour scale. The cold ($\sim$100~K), dense gas, shown in dark blue, forms a thin ($\approx$600~pc) disc, and is arranged in flocculent spiral structures that coincide with the dust lanes seen in the mock Hubble images. The warm ($10^{4} \, \rm{K}$), diffuse phase of the ISM forms a somewhat thicker disc, with a vertical extent of $\approx$2$-$3~kpc, while the outflows are heated to temperatures $\gtrsim$10$^{6} \, \rm{K}$. Fig.~\ref{m3e10_morph_fig} shows mock Hubble images (left-hand panels) and the gas distribution (right-hand panels) in the m3e10 dwarf galaxy after 500~Myr, with the fiducial model. Compared to m1e12, the disc is less well defined in the dwarf galaxy. In particular, the cold gas phase is not confined to a thin disc, but instead exhibits strong, turbulent motions that lead to a broader vertical structure. This is due to the weaker gravitational potential of the dwarf galaxy, which makes it more difficult to retain gas that is driven out by stellar feedback. The overall morphology of gas and stars in the simulations using the fiducial model (Figs.~\ref{m1e12_morph_fig} and \ref{m3e10_morph_fig}) is similar to that in the corresponding runs using the uniform ISRF and no depletion models (not shown). 
The only significant difference is in the H$\alpha$ emission, which is much weaker and does not trace star-forming regions in the uniform ISRF model. We study the effects of the model variations on H$\alpha$ emission, and other emission line tracers of the star formation rate, in more detail in Section~\ref{emission_line_sect}. To compare the evolution of the simulated galaxies with the three model variations, Fig.~\ref{SFH_fig} shows the total star formation rate (top panel) and disc gas fraction (bottom panel) versus time. We calculate the star formation rate from the total mass of stars in the galaxy disc that formed in the preceding 10~Myr, using the initial mass of each star particle (before stellar mass loss). See the final paragraph of Section~\ref{flux_sect} for the definition of the galaxy disc. The fiducial, uniform ISRF and no depletion models are shown by the solid, dashed and dotted curves respectively, while the line colours indicate the different galaxies. In the highest-mass galaxies, with halo masses $\geq$10$^{11} \, \rm{M}_{\odot}$, the star formation rate and disc gas fraction are very similar in the three models. However, the dwarf galaxies exhibit deviations in disc gas fraction between different models. In particular, the dwarf galaxies run with the uniform ISRF model have lower gas fractions than the fiducial model by up to $\approx$30 per cent. The uniform ISRF model does not include H\textsc{ii} regions, so this may suggest that removing this early stellar feedback channel (which acts preferentially in star-forming regions) leads to stronger outflows in dwarf galaxies, thereby reducing the disc gas fraction. \citet{hopkins20} also found that the removal of radiative stellar feedback processes leads to more violent star formation histories, which would be consistent with the trends that we see in our dwarf galaxies. However it is difficult to draw strong conclusions as we only have two dwarf galaxies in our sample. 
Nevertheless, this reduction in disc gas fraction in the dwarf galaxies does not lead to a reduction in the star formation rate. The total mass of stars formed over 500~Myr is actually higher with the uniform ISRF than with the fiducial model, by 27 and 20 per cent in m1e10 and m3e10, respectively. We therefore conclude that, apart from the lower gas fractions and higher total mass of stars formed in dwarf galaxies with the uniform ISRF, the three model variations do not have a strong impact on the overall evolution of the galaxy. We compare the evolution of star formation rate and disc gas fraction in simulations at different numerical resolutions in Appendix~\ref{resolution_sect}. While the high-mass galaxies ($M_{200}$$\geq$$10^{11} \, \rm{M}_{\odot}$) show good numerical convergence, the evolution in dwarf galaxies can vary significantly between runs at different resolutions. As we discuss further in Appendix~\ref{resolution_sect}, this may reflect stochastic run-to-run variations arising from the bursty nature of star formation in the dwarf galaxy regime \citep[e.g.][]{fauchergiguere18}. \subsection{Stellar fluxes}\label{sim_flux_sect} The images in Fig.~\ref{m1e12_flux_fig} show mass-weighted projections of the stellar fluxes received by gas particles in the final snapshot of m1e12, after 500~Myr, calculated using our fiducial model. The left and right panels show the total fluxes in the FUV and EUV bands, respectively, summed over all stellar age bins. These fluxes include attenuation by the escape fraction from the H\textsc{ii} region around the emitting star particle, but do not include self-shielding by the receiving gas particle. This represents the flux that would be incident at the edge of a gas cloud, before the local self-shielding of the cloud itself has been applied. We see that the stellar fluxes vary by more than three orders of magnitude over the galaxy disc. 
The strongest fluxes are found in small regions ($\sim$10$-$100~pc across) that predominantly lie in the spiral arms. Unsurprisingly, these regions coincide with the locations of young stars (see the top-left panel of Fig.~\ref{m1e12_morph_fig}). The simulations with a uniform ISRF do not capture these strong spatial variations. For comparison, we can calculate the fluxes that would have been used in the uniform ISRF model, using equations~\ref{uni_fuv_flux} and \ref{uni_euv_flux}. The star formation rate surface density over the whole disc, viewed face-on, in m1e12 after 500~Myr is $\Sigma_{\rm{SFR,} \, \rm{disc}} = 1.2 \times 10^{-3} \, \rm{M}_{\odot} \, \rm{yr}^{-1} \, \rm{kpc}^{-2}$. Hence the FUV and EUV fluxes in the uniform ISRF case would be $\log_{10} [\mathcal{F} \, (\rm{photons} \, \rm{cm}^{-2} \, \rm{s}^{-1})] = 7.7$ and $6.5$, respectively. The uniform ISRF model assumes that the stellar fluxes scale linearly with $\Sigma_{\rm{SFR,} \, \rm{disc}}$. Similar approaches have been used by other theoretical models, often using the local (rather than disc-averaged) star formation rate surface density \citep[e.g.][]{robertson08, lagos15}. It is therefore interesting to explore the extent to which the stellar fluxes in our fiducial model scale with the global and local star formation rate surface densities. Fig.~\ref{flux_fullDisc_fig} shows the FUV (left-hand panels) and EUV (right-hand panels) fluxes incident on gas particles (before local gas self-shielding) that lie within the galaxy disc, combining snapshots at 10~Myr intervals. The solid curves show the median fluxes in bins of $\Sigma_{\rm{SFR,} \, \rm{disc}}$, while the shaded regions indicate the tenth to ninetieth percentile range. Different galaxy models are represented by different colours, from the dwarf galaxy m1e10 (dark purple) to the Milky Way-mass galaxy m1e12 (light orange), as shown in the legend. 
The dotted black line indicates a linear scaling between the fluxes and $\Sigma_{\rm{SFR,} \, \rm{disc}}$ normalised to the Milky Way values, as given by equations~\ref{uni_fuv_flux} and \ref{uni_euv_flux}. We see that the median fluxes in all galaxy models broadly follow the linear relation, spanning two orders of magnitude in $\Sigma_{\rm{SFR,} \, \rm{disc}}$, although the dwarf galaxies are up to 0.5~dex below this relation. The tenth to ninetieth percentile spread in each galaxy extends to $\pm$0.5~dex about the median relation at fixed $\Sigma_{\rm{SFR,} \, \rm{disc}}$. In Appendix~\ref{esc_fraction_sect} we show that the normalisation of the linear scaling between the median stellar fluxes and $\Sigma_{\rm{SFR,} \, \rm{disc}}$ is determined by the escape fraction from H\textsc{ii} regions in each band. We use this to calibrate the escape fraction parameters in the fiducial model, such that the median fluxes in the simulations follow the same normalisation as the linear relation normalised to the Milky Way values (black dotted lines in Fig.~\ref{flux_fullDisc_fig}). For comparison, the FUV and EUV fluxes in the uniform ISRF model follow the linear Milky Way scaling relations, with no scatter. The strong linear scaling between stellar flux and $\Sigma_{\rm{SFR,} \, \rm{disc}}$ seen in Fig.~\ref{flux_fullDisc_fig} suggests that the fluxes are driven by star formation over the whole disc of the galaxy. However, we might expect that the strongest contribution to the stellar fluxes comes from star formation in the local region. We therefore explored how the fluxes depend on the local star formation rate surface density, $\Sigma_{\rm{SFR,} \, 1 \, \rm{kpc}}$, calculated in two-dimensional cells 1~kpc across viewing the disc of the galaxy face-on. The median and tenth to ninetieth percentile fluxes in bins of $\Sigma_{\rm{SFR,} \, 1 \, \rm{kpc}}$ are shown in Fig.~\ref{flux_1kpc_fig}. 
We see that the fluxes no longer follow a linear scaling with star formation surface density when averaged on 1~kpc scales. The slope of this relation flattens towards lower $\Sigma_{\rm{SFR,} \, 1 \, \rm{kpc}}$. This suggests that, in regions with relatively little star formation, additional contributions to the stellar fluxes from other regions of the galaxy with higher star formation rates dominate, which increases the total flux beyond what we would expect from local star formation alone. Thus the fluxes in our simulations are driven by star formation over the whole disc of the galaxy, and not just local star formation. This conclusion may, however, be a consequence of the assumptions in the approximate \textsc{lebron} radiative transfer method, which assumes that radiation is only attenuated locally around the emitting star particle and the receiving gas particle. It therefore neglects absorption through the plane of the galaxy disc between widely separated regions, which might otherwise shield a gas cloud from young stars on the opposite side of the disc. We may therefore underestimate the spatial variations in the local stellar fluxes over the galaxy disc. To tackle this question more accurately would require a full 3D radiative transfer method coupled to the non-equilibrium chemistry network. \subsection{Dust properties}\label{dust_properties_sect} The empirical model for the depletion of metals onto dust grains described in Section~\ref{depl_sect} primarily aims to capture how the removal of metals from the gas phase affects the cooling and observable emission lines. However, it may also affect the total abundance of dust, as the dust to metals ratio in the fiducial model depends on gas density (see the bottom panel of Fig.~\ref{depl_fig}). In contrast, the simulations run with the no depletion model assume a constant dust to metals ratio.
In this section, we compare the dust properties of our simulated galaxies with the fiducial and no depletion models to observations. Fig.~\ref{dust_mass_fig} compares the ratio of dust mass to stellar mass versus stellar mass in simulations with the fiducial model (grey symbols) and no depletion model (blue open symbols), calculated within the disc of the galaxy. For each simulation we show five snapshots at intervals of 100~Myr. We also plot observations from three galaxy surveys in the nearby Universe. The dark purple symbols show galaxies from the JINGLE survey \citep{saintonge18}, with dust masses measured by \citet{delooze20}. The stellar masses reported by \citet{saintonge18} assume a \citet{chabrier03} IMF. We convert these to a \citet{kroupa01} IMF, as used in the simulations, by multiplying by a factor of 1.06 (see equation~2 of \citealt{speagle14}). Galaxies from the KINGFISH survey \citep{kennicutt11} are shown by the light purple symbols, with stellar masses from \citet{hunt19} (also converted from a \citealt{chabrier03} to a \citealt{kroupa01} IMF as above), and dust masses from \citet{delooze20}. Finally, the light orange data points show a sample of 20 star-forming dwarf galaxies from \citet{grossi16}, which were selected from the \textit{Herschel} Virgo Cluster Survey (HeViCS; \citealt{davies12}). The dust to stellar mass ratios in the simulations overlap with observations at the same stellar mass, suggesting that we reproduce a realistic total abundance of dust. We also find little difference in the dust mass predicted by the fiducial and no depletion models. From the lower panel of Fig.~\ref{depl_fig}, we see that the dust to metals ratio ($DTM$) in the fiducial model only deviates significantly from the Milky Way value ($DTM_{\rm{MW}}$) at low densities. 
For example, $DTM / DTM_{\rm{MW}}$$>$0.5 at densities $\log_{10} [ n_{\rm{H}} \, (\rm{cm}^{-3})]$$>$$-3.3$ in our simulations (although as we note in Section~\ref{depl_sect}, our depletion model equates the particle density in the simulations to the average line of sight density in the observations, so we will underestimate the true local density for a given level of depletion). However, most of the gas mass in the galaxy disc is at much higher densities than this (see Section~\ref{Trho_sect}). Hence the empirical dust depletion model has little impact on the total dust abundance, although we will see in Section~\ref{emission_line_sect} that it does have a significant effect on observable emission lines arising from metals in the gas phase. Given that the dust depletion model does not have a strong effect on the dust to metals ratio, the total dust mass is determined by the total gas mass and metallicity. However, the scaling relations between stellar mass, gas fraction and metallicity were set in the initial conditions according to the redshift zero relations (see Section~\ref{IC_sect}). It may therefore seem unsurprising that we can reproduce the observed scaling between dust and stellar masses in the simulations. In Fig.~\ref{dust_fraction_fig} we plot the ratio of dust to H\textsc{i} mass ($M_{\rm{dust}} / M_{\rm{HI}}$) versus the average gas-phase oxygen abundance ($12 + \log_{10} [\rm{O/H}]$) over the galaxy disc in simulations with the fiducial (grey symbols) and no depletion (blue open symbols) models. We compare these to observations from JINGLE (dark purple symbols, with metallicities from \citealt{saintonge18} and H\textsc{i} masses from \citealt{durbala20}), KINGFISH (light purple symbols, with metallicities from \citealt{devis19} and H\textsc{i} masses from \citealt{remyruyer14}), and HeViCS (light orange symbols, with metallicities and H\textsc{i} masses from \citealt{grossi16}). 
The ratio $M_{\rm{dust}} / M_{\rm{HI}}$ can be used as a proxy for the dust to gas ratio \citep[e.g.][]{delooze20}, although it does not include the molecular gas phase. Nevertheless, it allows us to include galaxies with no molecular observations, and it avoids uncertainties in the conversion factors between observational tracers such as CO emission and total molecular mass \citep[e.g.][]{chiang21}, although there are still uncertainties in $M_{\rm{dust}}$, with estimates differing by up to a factor of 3 depending on the assumed dust emission model \citep[e.g.][]{chastenet21}. The simulations with the fiducial model follow a tight linear relation between $M_{\rm{dust}} / M_{\rm{HI}}$ and $12 + \log_{10} [\rm{O/H}]$, which is expected given that the dust to metals ratio in this model is almost constant, as discussed above. The no depletion model also follows a tight linear relation, but offset to higher metallicity by a factor $\approx$2. This offset arises because the $12 + \log_{10} [\rm{O/H}]$ abundance only includes oxygen in the gas phase, and so is reduced in the fiducial model due to depletion of oxygen onto dust grains. In the no depletion model this is not accounted for, and so oxygen atoms in dust grains are also counted in the gas phase. At high metallicities, with $12 + \log_{10} [\rm{O/H}]$$>$8.5, the dust to gas ratios predicted by the simulations overlap with the observed ratios. However, the simulations do not reproduce the observed scatter in $M_{\rm{dust}} / M_{\rm{HI}}$ at fixed metallicity. This may be due to the idealised nature of our initial conditions, which we set according to redshift zero scaling relations between galaxy properties. Apart from the gas fraction, we do not consider the scatter in these scaling relations for the initial conditions, which may explain the lack of scatter in the dust to gas ratio.
Observational studies of the scaling between dust to gas ratio and metallicity have also noted a strong dependence on the galaxy's evolutionary stage \citep[e.g.][]{remyruyer14, delooze20}. Galaxies at an early stage of their evolution, which have not had time to process much of their gas reservoir into stars, have a lower fraction of their metals in dust grains. As our simulations do not capture a range of evolutionary histories for each set of galaxy parameters, this may also explain the lack of scatter in our results. \citet{delooze20} find a best-fit relation between $M_{\rm{dust}} / M_{\rm{HI}}$ and $12 + \log_{10} [\rm{O/H}]$ with a logarithmic slope of 2.26$\pm$0.07, which is steeper than the linear relation that we find in our simulations with a slope of 1. \citet{remyruyer14} find that a linear relation provides a good fit to their observational data at high metallicities, $12 + \log_{10} [\rm{O/H}]$$\gtrsim$8, but they require a super-linear relation, with a slope of 2.02$\pm$0.28, at lower metallicities. \citet{devis19} also find a super-linear relation, with a slope of 2.15$\pm$0.11, while \citet{casasola20} observe a linear scaling. However, apart from \citet{delooze20}, these observational results include the molecular component in the dust to gas ratio. The shallower relation exhibited by our simulations compared to the JINGLE, KINGFISH and HeViCS samples results in dust to gas ratios that are up to an order of magnitude higher than observed at low metallicities (i.e. in the dwarf galaxies). This may suggest that the empirical dust model that we employed in our fiducial simulations, which relies on the correlation between depletion strength ($F_{\ast}$) and gas density ($n_{\rm{H}}$), may not extrapolate well to the dwarf galaxy regime. For example, in our model the dust to metals ratio saturates at the Milky Way value even in the dwarf galaxies. 
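To illustrate how strongly the fitted slopes diverge at low metallicity, the following sketch evaluates power-law dust to gas scalings of the form $\log_{10}(M_{\rm{dust}}/M_{\rm{HI}}) = m \, (12 + \log_{10}[\rm{O/H}] - 8.69) + c$, pivoting at an assumed solar oxygen abundance of 8.69. The helper function and common normalisation are illustrative, not fits from this work:

```python
SOLAR_OH = 8.69  # assumed solar 12 + log10(O/H) pivot point

def log_dust_to_hi(log_oh, slope, norm_at_solar):
    """Power-law dust-to-HI ratio (in log10), pivoting at solar O/H."""
    return slope * (log_oh - SOLAR_OH) + norm_at_solar

# One dex below solar, a slope-1 (linear) relation sits
# 2.26 - 1.0 = 1.26 dex (a factor ~18) above a slope-2.26
# relation with the same normalisation at solar metallicity.
offset = (log_dust_to_hi(SOLAR_OH - 1.0, 1.0, 0.0)
          - log_dust_to_hi(SOLAR_OH - 1.0, 2.26, 0.0))
```

This factor of $\sim$18 at 1~dex below solar is consistent with the dust to gas ratios in our dwarf galaxies exceeding the observed relations by up to an order of magnitude.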
However, observations find lower dust to metals ratios in low-metallicity galaxies, which may be due to either less efficient growth of grains in the ISM or more efficient grain destruction in this regime \citep[e.g.][]{galliano21, priestley22}. This could explain the discrepancy in the $M_{\rm{dust}} / M_{\rm{HI}}$ ratios that we find in our dwarf galaxy simulations compared to the observational data. Resolving this discrepancy may require live dust evolution models \citep[e.g.][]{choban22}. However, \citet{mckinnon17} also find tensions between the predictions of their live dust evolution models for this relation and the observations (see their figure~8). In their fiducial model the slope of this relation is too flat, while their `M16' model reproduces the observed slope but with a normalisation that is too high. It might appear surprising that our simulated dwarf galaxies can reproduce the dust to stellar mass ratios seen in observations (Fig.~\ref{dust_mass_fig}) when the dust to gas ratio may be overestimated by an order of magnitude (Fig.~\ref{dust_fraction_fig}). If we instead plot $M_{\rm{dust}} / M_{\rm{HI}}$ versus stellar mass (not shown), we find that the simulated dwarf galaxies lie near the upper bound of the observed range in dust to gas ratio at fixed stellar mass, although they again do not reproduce the scatter in this relation. The discrepancy between the dwarf galaxy simulations and observations is therefore less pronounced at fixed stellar mass than at fixed metallicity. \section{The transition from atomic to molecular gas}\label{HI_H2_sect} Molecular gas is a vital component of the ISM that is typically found to correlate with the star formation rate \citep[e.g.][]{bigiel11, leroy13, tacconi20}, although this correlation is not necessarily a causal relationship and may arise because molecules and star formation both require the gas to be shielded \citep{glover12}.
Nevertheless, the transition from atomic to molecular gas is important for understanding the multiphase structure of the ISM and how the different ISM phases fuel star formation, and has been the topic of many studies both from an observational \citep[e.g.][]{gillmon06, shull21} and a theoretical \citep[e.g.][]{krumholz08, sternberg14} perspective. The interstellar chemistry that drives this transition is sensitive to the local UV radiation field, which destroys molecules via photodissociation. Dust grains also play an important role as they can shield molecules from dissociating radiation, and can promote the formation of H$_{2}$ on grain surfaces. In this section, we explore how the variations in our models for local stellar radiation and dust depletion affect the properties of the atomic and molecular ISM phases, and compare our simulation predictions to observations. \subsection{Thermodynamic properties of the interstellar gas}\label{Trho_sect} We start by looking at the temperature and density distribution of interstellar gas in our simulations. Fig.~\ref{Trho_fig} shows the temperature versus density for all gas in the disc of the galaxy in the simulation of m1e12 with the fiducial model. We see that the gas forms different phases in temperature-density space. At densities $n_{\rm{H}} \! \sim \! 0.01 \! - \! 1 \, \rm{cm}^{-3}$ most of the gas mass lies in a warm phase with temperatures close to $T \! \sim \! 10^{4} \, \rm{K}$. Thermal instabilities enable this warm phase to cool starting at $n_{\rm{H}} \! \sim \! 0.1 \, \rm{cm}^{-3}$, and by $n_{\rm{H}} \! \gtrsim \! 1 \, \rm{cm}^{-3}$ most of the gas mass is in the cold phase ($T \! < \! 10^{3} \, \rm{K}$). As we go to higher densities the ISM continues to cool, reaching temperatures $\lesssim \! 100 \, \rm{K}$ by $n_{\rm{H}} \! \gtrsim \! 10 \, \rm{cm}^{-3}$. 
Since lines of constant pressure follow a power law with a slope of $-1$ in this plot (as $P \propto n_{\rm{H}} T$ at fixed composition, lines of constant pressure satisfy $\log_{10} T = \rm{const} - \log_{10} n_{\rm{H}}$), we see that the different ISM phases are approximately in pressure equilibrium, albeit with a large scatter of more than 1 dex. This picture is qualitatively similar to other theoretical models for the thermodynamic structure of the multiphase ISM \citep[e.g.][]{wolfire03}. The horizontal branch at temperatures just below $10^{4} \, \rm{K}$ and densities $\gtrsim \! 1 \, \rm{cm}^{-3}$ is due to H\textsc{ii} regions, where gas has been identified within the Str\"{o}mgren radius of a star particle and is then photoionised and photoheated by the stellar radiation (see Section~\ref{flux_sect}). The hot phase ($T \! > \! 10^{4} \, \rm{K}$) is created by stellar feedback in these simulations, as we do not include feedback from AGN, nor do we include a hot gaseous halo. At low temperatures ($\lesssim$30~K) the gas forms distinct tracks in temperature-density space. We find that gas particles currently in this region have recently undergone a period of rapid cooling within the preceding few Myr, typically from temperatures of a few hundred Kelvin or more. In \citet{richings14a} we showed that non-equilibrium effects can enhance the cooling rate below $10^{4} \, \rm{K}$, which allows gas to initially cool below the thermal equilibrium temperature expected at the given density, before heating back up towards thermal (and chemical) equilibrium. The distinct tracks seen at low temperatures in Fig.~\ref{Trho_fig} are therefore likely due to particles piling up at the minimum temperature in this non-equilibrium evolution. Indeed, we will see below that the chemical abundances in this region are out of equilibrium. We can now look at the distribution of atomic and molecular hydrogen in the temperature-density phase space.
The top row of Fig.~\ref{Trho_HI_H2_fig} shows the fraction of the total hydrogen mass in each species ($M_{i} / M_{\rm{H, \, tot}}$) as a function of temperature and density for m1e12 using the fiducial model, including all gas within the galaxy disc. We see that H\textsc{i} (left-hand panel) dominates at densities $n_{\rm{H}} \! \sim \! 0.01 \! - \! 10 \, \rm{cm}^{-3}$, covering a broad range of temperatures $T \! \sim 10 \! - \! 10^{4} \, \rm{K}$. Molecular hydrogen (right-hand panel) dominates at high densities, $n_{\rm{H}} \! \gtrsim \! 10 \, \rm{cm}^{-3}$, and is mostly found at temperatures $T \! \lesssim \! 300 \, \rm{K}$. However, non-negligible H$_{2}$ fractions can also be found at lower densities and higher temperatures than this. The species fractions in the top row of Fig.~\ref{Trho_HI_H2_fig} use the non-equilibrium chemical abundances from the simulations. However, as many simulations of galaxy formation assume that the species are in chemical equilibrium, it is interesting to explore the impact of non-equilibrium effects on our chemical predictions. We therefore ran the \textsc{chimes} chemistry solver on each gas particle from the simulation snapshots in post-processing to calculate the equilibrium chemical abundances. The bottom row of Fig.~\ref{Trho_HI_H2_fig} shows the ratio of non-equilibrium to equilibrium species masses ($M_{i} / M_{i}^{\rm{eqm}}$) in m1e12 as a function of temperature and density. Molecular hydrogen (right-hand panel) shows particularly strong non-equilibrium effects. There is an enhancement of more than three orders of magnitude in the non-equilibrium H$_{2}$ fraction at $n_{\rm{H}} \! \sim \! 0.1 \, \rm{cm}^{-3}$ and $T \! \sim \! 10^{3} \, \rm{K}$, which is due to colder, denser molecular gas that was recently heated for example by stellar feedback but has not yet had sufficient time for the molecules to be fully destroyed. While H$_{2}$ does not dominate the total hydrogen budget in this region of the $T \! - \! 
n_{\rm{H}}$ space, the non-equilibrium H$_{2}$ fraction still reaches $\sim \! 1 \! - \! 10$ per cent here. At this temperature the ro-vibrational transitions of the H$_{2}$ molecule can be collisionally excited, so this non-equilibrium enhancement may have important consequences for observational predictions of the infrared H$_{2}$ emission lines. We will explore these predictions further in a future work. At higher densities, there are two distinct regions where the non-equilibrium H$_{2}$ fraction is suppressed by up to an order of magnitude, at $n_{\rm{H}} \! \sim \! 1 \! - \! 10 \, \rm{cm}^{-3}$, $T \! \lesssim \! 100 \, \rm{K}$ and $n_{\rm{H}} \! \sim 100 \, \rm{cm}^{-3}$, $T \! \sim \! 100 \! - \! 10^{3} \, \rm{K}$. These regions also coincide with enhancements in the non-equilibrium H\textsc{i} abundances (left-hand panel). These effects may be due to gas that was previously at higher temperatures and has recently cooled but has not yet had sufficient time for molecules to fully form. Gas in H\textsc{ii} regions, with $n_{\rm{H}} \! \gtrsim \! 1 \, \rm{cm}^{-3}$ and temperatures just below $10^{4} \, \rm{K}$, exhibits strongly suppressed H\textsc{i} and H$_{2}$ abundances in non-equilibrium. This is perhaps unsurprising given that H\textsc{ii} regions evolve on relatively short time-scales of a few~Myr \citep[e.g.][]{kim19}. However, even in equilibrium the mean H\textsc{i} and H$_{2}$ fractions in H\textsc{ii} regions are low ($\approx \! 0.05$ and $\approx \! 2 \! \times \! 10^{-8}$, respectively), since the hydrogen is predominantly ionised. Other studies using galaxy simulations with time-dependent models for the H$_{2}$ chemistry have also found strong non-equilibrium effects in the H$_{2}$ abundances \citep[e.g.][]{dobbs08, pelupessy09, richings16}, although \citet{gnedin11} find that an equilibrium treatment is sufficient to capture the atomic to molecular transition. 
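As a rough check of the spatial scales involved in the H\textsc{ii} region branch, the classical Str\"{o}mgren radius, $R_{\rm{S}} = [3 Q / (4 \pi n_{\rm{H}}^{2} \alpha_{\rm{B}})]^{1/3}$, can be evaluated for illustrative parameter values. The ionising photon rate and density below are textbook-style assumptions, not values taken from our simulations:

```python
import math

ALPHA_B = 2.6e-13    # case-B recombination coefficient at ~1e4 K [cm^3 s^-1]
PC_IN_CM = 3.086e18  # parsec in cm

def stromgren_radius_pc(q_ion, n_h):
    """Stromgren radius in pc for an ionising photon rate q_ion [s^-1]
    embedded in uniform gas of hydrogen density n_h [cm^-3]."""
    r_cm = (3.0 * q_ion / (4.0 * math.pi * n_h**2 * ALPHA_B)) ** (1.0 / 3.0)
    return r_cm / PC_IN_CM

# e.g. a single O star with Q ~ 1e49 s^-1 in gas at n_H = 10 cm^-3
# gives a radius of order 10 pc, comparable to the 10-100 pc
# bright regions seen in the flux maps of Fig. 1.
```

Denser gas shrinks the ionised region rapidly ($R_{\rm{S}} \propto n_{\rm{H}}^{-2/3}$), which is why the H\textsc{ii} branch is confined to short-lived, compact structures around young star particles.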
Figs.~\ref{Trho_fig} and \ref{Trho_HI_H2_fig} focussed on the m1e12 simulation using our fiducial model. To quantitatively compare the effects of local stellar radiation and dust depletion, we show in Fig.~\ref{nH_hist_fig} the one-dimensional density distributions in low-temperature ($T \! \leq \! 10^{3} \, \rm{K}$; blue), intermediate-temperature ($10^{3} \! < \! T \! \leq \! 10^{4} \, \rm{K}$; orange) and high-temperature ($T \! > \! 10^{4} \, \rm{K}$; red) gas. These temperature ranges are also illustrated in Fig.~\ref{Trho_fig}, and were chosen as they highlight the different phases and transitionary stages that are relevant to the ISM. We compare the fiducial (solid), uniform ISRF (dashed) and no depletion (dotted) models, using the final snapshot after 500~Myr from m1e12 in each case. The top, middle and bottom panels show the total gas, H\textsc{i} and H$_{2}$ distributions, respectively. The overall gas distributions are fairly similar in the three models, with a few exceptions. In the uniform ISRF model, there is less intermediate-temperature gas at densities $n_{\rm{H}} \! \gtrsim \! 10 \, \rm{cm}^{-3}$ in the top panel of Fig.~\ref{nH_hist_fig}. This is due to H\textsc{ii} regions created by the photoionisation of gas within the Str\"{o}mgren radius around star particles, as we do not include the subgrid H\textsc{ii} region model in the uniform ISRF runs (see Section~\ref{flux_sect}). This difference is not seen in H\textsc{i} and H$_{2}$ (middle and bottom panels), as hydrogen is predominantly ionised in these regions. In the bottom panel of Fig.~\ref{nH_hist_fig}, the distribution of intermediate-temperature H$_{2}$ is enhanced in the uniform ISRF model compared to the other two models. Given that in this temperature range the total available gas mass at these densities ($n_{\rm{H}} \! \lesssim \! 
10 \, \rm{cm}^{-3}$) is similar between all three models, this suggests that the local treatment of stellar fluxes is more efficient at dissociating molecules in this regime. This has little impact on the total H$_{2}$ mass, which is dominated by low-temperature gas, but will be important for observational predictions of infrared H$_{2}$ emission, which arises from molecular gas at these intermediate temperatures. Comparing the dotted and solid curves in the top and middle panels, we see that the no depletion model enhances the low-temperature H\textsc{i} component at $n_{\rm{H}} \! \lesssim \! 1 \, \rm{cm}^{-3}$. This is due to increased metal cooling when we do not reduce metals in the gas phase to account for depletion onto dust grains, which makes it easier for gas from the high- and intermediate-temperature phases to cool at these intermediate densities. \subsection{Transition column density}\label{transition_sect} The formation of the molecular phase requires that the gas becomes shielded from dissociating UV radiation. As the shielding is sensitive to the column density of the gas cloud, with higher column densities able to absorb a greater proportion of the incident radiation, it is useful to look at the transition from atomic to molecular gas as a function of column density. For each galaxy simulation, we create maps of the H\textsc{i} and H$_{2}$ column densities with pixels 4~pc across, viewing the galaxy disc face-on. We then bin the pixels according to the total neutral hydrogen column density, $N_{\rm{HI}} + 2 N_{\rm{H}2}$, combining five snapshots at 100~Myr intervals for each simulation, and calculate the median and tenth to ninetieth percentile H$_{2}$ fraction, $2 N_{\rm{H}2} / (N_{\rm{HI}} + 2 N_{\rm{H}2})$, in each bin.
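The binning procedure just described can be sketched as follows; this is a minimal NumPy version in which the array names and helper function are illustrative (the actual maps combine five snapshots per simulation):

```python
import numpy as np

def median_h2_fraction(N_HI, N_H2, edges):
    """Median molecular fraction 2 N_H2 / (N_HI + 2 N_H2) in bins of
    the total neutral hydrogen column density N_HI + 2 N_H2.
    N_HI, N_H2: flattened arrays of per-pixel column densities [cm^-2].
    edges: bin edges in total neutral column density [cm^-2]."""
    n_tot = N_HI + 2.0 * N_H2
    f_h2 = 2.0 * N_H2 / n_tot
    medians = np.full(len(edges) - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sel = (n_tot >= lo) & (n_tot < hi)
        if sel.any():
            medians[i] = np.median(f_h2[sel])
    return medians
```

The tenth and ninetieth percentiles shown as shaded regions in Fig.~\ref{H2_fraction_fig} follow the same pattern, replacing \texttt{np.median} with \texttt{np.percentile} at 10 and 90.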
Fig.~\ref{H2_fraction_fig} shows the median (solid curves) and tenth to ninetieth percentile range (shaded regions) of the H$_{2}$ fraction versus neutral hydrogen column density in simulations using the fiducial (top panel), uniform ISRF (middle panel) and no depletion (bottom panel) models. Different galaxies are indicated by different colours, as shown in the legend. The data points in Fig.~\ref{H2_fraction_fig} show observed H\textsc{i} and H$_{2}$ column densities along lines of sight in the Small and Large Magellanic Clouds (SMC and LMC; purple and blue respectively) from \citet{tumlinson02}, and in the Milky Way at galactic latitudes $|b| \! > \! 20^{\circ}$ (MW Halo; green) from \citet{gillmon06} and at $|b| \! < \! 10^{\circ}$ (MW Disk; yellow) from \citet{shull21}. These observational studies all measure H$_{2}$ column densities from far-UV absorption lines in the Lyman-Werner bands using the Far Ultraviolet Spectroscopic Explorer (FUSE) telescope. The H\textsc{i} column densities used in \citet{shull21} were obtained by fitting Ly$\alpha$ absorption, while \citet{tumlinson02} and \citet{gillmon06} derive H\textsc{i} column densities from 21~cm emission. In the galaxy simulations with the highest masses (m3e11 to m1e12), the molecular phase dominates at neutral column densities $\gtrsim \! 10^{22} \, \rm{cm}^{-2}$. The H$_{2}$ fraction declines steeply at lower column densities, reaching median fractions below $10^{-6}$ at column densities $\lesssim \! 10^{20.5} \, \rm{cm}^{-2}$. The median and tenth to ninetieth percentile range of H$_{2}$ fractions in these high-mass simulations broadly overlap the two observational samples in the Milky Way (green and yellow points), particularly at the higher column densities of the MW Disk sample. In the three simulations with a halo mass of $3 \! \times \! 10^{11} \, \rm{M}_{\odot}$, the transition from H\textsc{i} to H$_{2}$ moves towards higher column densities as the disc gas fraction increases.
This may be due to the increasing star formation rate with increasing gas fraction, which increases the strength of the interstellar radiation field in the galaxy disc and hence tends to increase the column density of the atomic to molecular transition \citep[e.g.][]{schaye04, sternberg14}. Compared to the high-mass galaxies, the simulations m3e10 and m1e11 exhibit larger H$_{2}$ fractions at column densities $\lesssim \! 10^{20.5} \, \rm{cm}^{-2}$, resulting in a more gradual transition from atomic to molecular gas, whilst in the lowest mass galaxy in our sample, m1e10, the H$_{2}$ fractions are lower than the intermediate-mass galaxies at all column densities. We therefore do not see a monotonic trend in the H\textsc{i} to H$_{2}$ transition with halo mass. However, we caution that, while the H$_{2}$ fractions in m3e11 and m1e12 exhibit good numerical convergence, those in the lower-mass galaxies show significant differences at low column densities ($<$10$^{21} \, \rm{cm}^{-2}$) between runs with different resolutions (see Appendix~\ref{resolution_sect}). In m1e11 the H$_{2}$ fractions at low column densities increase from low to standard resolution, while in m3e10 they decrease from standard to high resolution. The low-column density trends that we see for the dwarf galaxies in Fig.~\ref{H2_fraction_fig} are therefore not robust. The structural properties of the LMC are closest to our m1e11 simulated galaxy, while the SMC is nearest to m1e10. If we compare these simulations to the observational data from \citet{tumlinson02} in Fig.~\ref{H2_fraction_fig}, the simulations lie close to the highest H$_{2}$ fractions measured in the LMC and SMC. However, there are many sight lines in these two observational samples with much lower H$_{2}$ fractions than are found in the simulations, by up to 4 orders of magnitude at the same column density. 
This discrepancy suggests that our simulations do not correctly capture the atomic to molecular transition in the dwarf galaxy regime. However, in the simulations we calculate the column densities by projecting the gas onto a grid viewing the galaxy disc face-on, rather than modelling mock observations of H$_{2}$ absorption spectra along lines of sight. We therefore do not determine column densities in the same way as the observations, and we do not capture the same selection effects that may be present in the observational samples. The latter approach of using mock absorption spectra would allow for a more direct comparison between the simulations and observations, but such an analysis is beyond the scope of this work. As noted above, the low-column density H$_{2}$ fractions in dwarf galaxies are not well converged when we vary the numerical resolution. However, running m3e10 at 8 times higher mass resolution did not improve the agreement with observational data at high column densities ($\gtrsim$10$^{21} \, \rm{cm}^{-2}$; see Appendix~\ref{resolution_sect}). The discrepancies between our dwarf galaxies and observations of the LMC and SMC are therefore unlikely to be caused by limited numerical resolution alone. In Section~\ref{flux_sect}, we cautioned that our shielding model uses a single, average column density for the local gas cloud, which will tend to overestimate the strength of the shielding as the photochemical rates are typically dominated by the lines of sight at low column densities. This would lead to higher molecular abundances, which could contribute to the discrepancy in the atomic to molecular transitions that we see between our model predictions for dwarf galaxies and the observations of the LMC and SMC. In the top and middle panels of Fig.~\ref{H2_fraction_fig} we see that the trends of H$_{2}$ fraction with column density are similar in the fiducial and uniform ISRF models.
This suggests that the large local variations in stellar flux seen in Fig.~\ref{m1e12_flux_fig} with the fiducial model do not have a strong impact on the atomic to molecular transition. This result is consistent with the theoretical studies of \citet{schaye04} and \citet{krumholz09}, who concluded that the transition from H\textsc{i} to H$_{2}$ is driven primarily by column density and secondarily by metallicity, with a weaker dependence on the incident radiation field. Comparing the top and bottom panels of Fig.~\ref{H2_fraction_fig}, we find that the no depletion model exhibits higher H$_{2}$ fractions at low column densities ($\lesssim \! 10^{21} \, \rm{cm}^{-2}$) in high-mass galaxies compared to the fiducial model, resulting in a shallower transition between the atomic and molecular phases. This is due to the increased dust abundance at low densities in the no depletion model. However, at higher column densities the H$_{2}$ fraction is similar in the two models, as it is dominated by high-density gas for which the dust to metals ratio is identical in both cases. \subsection{Global H\textsc{i} and H$_{2}$ properties}\label{global_HI_H2_sect} In the previous section we studied the H$_{2}$ fraction along individual lines of sight through each galaxy. We now consider how the total masses of atomic and molecular hydrogen in each galaxy ($M_{\rm{HI}}$ and $M_{\rm{H2}}$, respectively) vary with stellar mass. While the total gas mass in the galaxy simulations was determined from the stellar mass in the initial conditions according to observed redshift zero scaling relations (see Section~\ref{IC_sect}), the partitioning of the gas between atomic and molecular phases remains a prediction of the chemical modelling. Fig.~\ref{global_HI_H2_fig} shows the ratios $M_{\rm{HI}} / M_{\ast}$ (left-hand column) and $M_{\rm{H2}} / M_{\ast}$ (right-hand column) plotted against $M_{\ast}$ in our simulated galaxies (grey symbols).
We include five snapshots from each galaxy at intervals of 100~Myr. The top, middle and bottom rows show the fiducial, uniform ISRF and no depletion models, respectively. The scaling between H\textsc{i}, H$_{2}$ and stellar components of galaxies has also been the subject of many observational studies. The solid curves in Fig.~\ref{global_HI_H2_fig} show observed scaling relations from three low-redshift galaxy surveys. For the xGASS survey, we plot the median H\textsc{i} ratios in bins of stellar mass reported in Table~1 of \citet{catinella18}. For the MAGMA survey, we show median H\textsc{i} and H$_{2}$ ratios in stellar mass bins from figure~6 of \citet{hunt20}, together with the $\pm 1 \sigma$ deviations in each bin as indicated by the shaded region. For the xCOLD GASS survey, we show the median H$_{2}$ ratios for the whole sample given in Table~6 of \citet{saintonge17}, where the error bars denote the uncertainty in the median ratio for each bin. The H$_{2}$ masses reported in \citet{saintonge17} include the contribution from helium; however, in our simulations we only show the H$_{2}$ mass. We have therefore divided the H$_{2}$ masses from \citet{saintonge17} by a factor of 1.36 to remove the helium correction. Finally, the coloured data points in Fig.~\ref{global_HI_H2_fig} show individual galaxies from the JINGLE \citep{saintonge18}, KINGFISH \citep{kennicutt11, hunt19}, and HeViCS \citep{grossi16} surveys. As noted in Section~\ref{dust_properties_sect}, we multiply the stellar masses from the JINGLE and KINGFISH surveys by 1.06 to convert from the \citet{chabrier03} IMF assumed in the observations to the \citet{kroupa01} IMF used in the simulations \citep{speagle14}. We also apply this conversion to the stellar masses in the xGASS, xCOLD GASS and MAGMA samples, which likewise assumed a \citet{chabrier03} IMF. 
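As a concrete illustration of these corrections, the following minimal sketch applies the helium and IMF factors quoted above; the factors 1.36 and 1.36-based division and the factor 1.06 are those stated in the text, while the function names and example masses are purely illustrative.

```python
# Unit conversions applied to the observational samples (illustrative
# sketch). The numerical factors are those quoted in the text; the
# function names and example masses are not from the paper.

HELIUM_FACTOR = 1.36   # M(H2 + He) / M(H2) assumed in the xCOLD GASS masses
IMF_FACTOR = 1.06      # multiplicative Chabrier -> Kroupa stellar-mass factor

def remove_helium_correction(m_h2_with_he):
    """Convert an H2 mass that includes helium into a pure-H2 mass."""
    return m_h2_with_he / HELIUM_FACTOR

def chabrier_to_kroupa(m_star_chabrier):
    """Convert a Chabrier-IMF stellar mass to the Kroupa IMF used here."""
    return IMF_FACTOR * m_star_chabrier

# e.g. an observed H2+He mass of 1.36e9 Msun corresponds to ~1e9 Msun of H2,
# and a Chabrier stellar mass of 1e10 Msun becomes ~1.06e10 Msun.
m_h2 = remove_helium_correction(1.36e9)
m_star = chabrier_to_kroupa(1.0e10)
```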
The simulated H\textsc{i} fractions in the left-hand column of Fig.~\ref{global_HI_H2_fig} are in good agreement with the observational data, lying well within the scatter of the observed relations. The simulations reproduce the trend of decreasing $M_{\rm{HI}} / M_{\ast}$ with increasing stellar mass, with the H\textsc{i} fraction decreasing by approximately 1~dex from m1e10 to m1e12. In the right-hand column of Fig.~\ref{global_HI_H2_fig}, the H$_{2}$ fractions of the high-mass galaxy simulations ($M_{200, \, \rm{crit}} \! \geq \! 3 \times 10^{11} \, \rm{M}_{\odot}$, or $M_{\ast} \! \gtrsim \! 10^{10} \, \rm{M}_{\odot}$) are close to the observed median relation from \citet{saintonge17}, except for m3e11\_lowGas which exhibits lower H$_{2}$ masses due to the lower total gas fraction. At lower masses, the simulations increasingly appear to underpredict the H$_{2}$ mass expected for their stellar mass, with the dwarf galaxies generally lying below the $\pm 1 \sigma$ scatter of the MAGMA sample. However, as the observed molecular masses were derived from CO luminosities, the observational samples are more sensitive to high-H$_{2}$ fraction galaxies. For example, the MAGMA sample only includes galaxies that have been detected in CO, while 9 out of the 20 dwarfs in the HeViCS sample are upper limits. We do not account for these selection effects in our simulations, so it is unclear whether these apparent discrepancies are a true failing of the model or arise simply because the handful of dwarf galaxies in our simulated sample would not be included in the observed surveys. There are also uncertainties in converting CO luminosity to H$_{2}$ mass \citep[e.g.][]{bolatto13,chiang21}, particularly at low metallicities. As the \textsc{chimes} chemistry network includes CO, we will explore the relations between CO luminosity and H$_{2}$ mass further in a future work. 
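To illustrate the CO-to-H$_{2}$ conversion uncertainty mentioned above, the sketch below applies the Galactic conversion factor $\alpha_{\rm{CO}} \approx 4.3 \, \rm{M}_{\odot} \, (\rm{K \, km \, s^{-1} \, pc^{2}})^{-1}$ of \citet{bolatto13}, which includes helium (hence the division by 1.36 to obtain a pure H$_{2}$ mass); the power-law metallicity scaling is a purely illustrative assumption standing in for the debated low-metallicity behaviour, not a calibrated relation.

```python
# Schematic CO(1-0)-luminosity to H2-mass conversion, illustrating why
# molecular masses of low-metallicity dwarfs are uncertain.
# ALPHA_CO_MW = 4.3 Msun/(K km/s pc^2) is the Galactic value of
# Bolatto et al. (2013) and includes helium; the metallicity slope
# below is an assumption for illustration only.

ALPHA_CO_MW = 4.3      # Msun per (K km/s pc^2), includes helium
HELIUM_FACTOR = 1.36

def m_h2_from_co(l_co, z_over_zsun=1.0, slope=-1.5):
    """Pure-H2 mass (Msun) from a CO(1-0) luminosity in K km/s pc^2.

    alpha_CO ~ (Z/Zsun)**slope is a schematic stand-in for the
    metallicity dependence, which is observationally uncertain.
    """
    alpha = ALPHA_CO_MW * z_over_zsun ** slope
    return alpha * l_co / HELIUM_FACTOR
```

At fixed CO luminosity, a sub-solar metallicity yields a larger inferred H$_{2}$ mass, which is one reason CO-selected samples are biased against H$_{2}$-poor dwarfs.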
The apparent discrepancy between simulations and observations in the dwarf galaxies in the right-hand column of Fig.~\ref{global_HI_H2_fig} may also seem to contradict the results of Fig.~\ref{H2_fraction_fig}, in which we saw that the dwarf galaxy simulations cannot reproduce the very low H$_{2}$ fractions seen in many sight lines through the LMC and SMC. However, the LMC and SMC observations from \citet{tumlinson02} measure H$_{2}$ in absorption rather than from CO emission, and so the observational data sets in Figs.~\ref{H2_fraction_fig} and \ref{global_HI_H2_fig} are subject to different selection effects and may be probing different regimes. Comparing the three rows in Fig.~\ref{global_HI_H2_fig}, we find that the total atomic and molecular components are similar in the fiducial, uniform ISRF and no depletion models. This agrees with our results from Section~\ref{transition_sect}, in which we saw that the transition from atomic to molecular hydrogen is not strongly affected by the treatment of local versus uniform stellar fluxes, while the inclusion of metal depletion from the gas phase onto dust grains only affects H$_{2}$ fractions at low column densities ($\lesssim$$10^{21} \, \rm{cm}^{-2}$) which do not dominate the total molecular mass. \section{Emission line tracers of the star formation rate}\label{emission_line_sect} There are a wide range of observational diagnostics that are commonly used to determine the star formation rate \citep{kennicutt98, kennicutt12, davies16}. As young, massive stars emit predominantly at UV wavelengths, observations of continuum UV emission can measure recent star formation activity, although such observations are sensitive to dust. These can be supplemented with observations of the infrared continuum to account for UV radiation that has been absorbed by dust grains and re-emitted at longer wavelengths. 
The star formation rate can also be inferred from emission lines produced by species that are photoionised by massive stars. The H$\alpha$ line is perhaps the most famous example \citep[e.g.][]{kennicutt94}, although as it lies at optical wavelengths it is also affected by dust attenuation. Far infrared (FIR) lines from metal ions have also been shown to correlate with the star formation rate \citep{delooze14}, and are less sensitive to dust effects. As these observational tracers probe star formation on different time-scales, they can also depend on the recent star formation history in the galaxy \citep[e.g.][]{sparre17, floresvelazquez21}. In this section we study emission line predictions from our simulations and how they correlate with the total star formation rate, which we compare to observed galaxy surveys. The detailed interstellar chemistry modelled in these simulations will be important for calculating these emission lines, as it determines the relative abundances of the ions involved. As the star formation rate is inferred from the emission lines based on how the young stars photoionise the surrounding gas, we would expect that the treatment of the stellar radiation will also play a vital role. Other studies have explored predictions for line emission from cosmological simulations based on subgrid models \citep[e.g.][]{hirschmann17, olsen21}. These approaches have the advantage that they do not need to explicitly resolve the regions that produce the emission, as they are treated in a subgrid fashion, which is particularly important for large-scale simulations of the Universe. However, they rely on assumptions for the structure of the unresolved components, and they do not capture effects of non-equilibrium chemistry. Our simulation predictions in this work do account for the non-equilibrium chemistry, but rely on explicitly resolving the emitting regions, and so they offer a complementary approach to these subgrid models. 
\subsection{Modelling line emission in post-processing}\label{line_emission_method_sect} We calculate the emission lines from our simulations by post-processing the simulation outputs with the publicly available radiative transfer code \textsc{radmc-3d} \citep{dullemond12}, which follows the emission, propagation and absorption of spectral lines together with stellar emission and the absorption, scattering and thermal emission from dust grains. As \textsc{radmc-3d} operates on a grid, we first construct an Adaptive Mesh Refinement (AMR) grid from the particle distribution in the simulation. Each cell is refined until it contains no more than 8 gas and/or star particles. The non-equilibrium ion and molecule abundances of each gas particle, together with the ion-weighted temperatures and velocities, are then projected onto the AMR grid, using the same smoothing kernel as the MFM hydro solver. The star particles are also smoothed and projected onto the grid, split between the eight stellar age bins with spectra shown in Fig.~\ref{SB99_fig}. We include graphite and silicate grains in our \textsc{radmc-3d} calculations. We take the abundance of graphite and silicate grains at solar metallicity from the `ISM' grain abundances in v13.01 of the \textsc{cloudy} photoionisation code \citep{mathis77, ferland13}, which are typical of the ISM in the Milky Way. For each gas particle we scale these grain abundances by the total metallicity relative to solar $(Z / \rm{Z}_{\odot})$. We then scale these by the density-dependent dust to metals ratio predicted by our empirical dust depletion model ($DTM / DTM_{\rm{MW}}$, i.e. the bottom panel of Fig.~\ref{depl_fig}), except for simulations run with the no depletion model for which we assume $DTM / DTM_{\rm{MW}} = 1$. We thus obtain a dust to gas ratio of $2.4 \times 10^{-3} (DTM / DTM_{\rm{MW}}) (Z / \rm{Z}_{\odot})$ and $4.0 \times 10^{-3} (DTM / DTM_{\rm{MW}}) (Z / \rm{Z}_{\odot})$ for graphite and silicate grains, respectively. 
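The grain-abundance scaling described above amounts to the following minimal sketch, where the coefficients $2.4 \times 10^{-3}$ and $4.0 \times 10^{-3}$ are those quoted in the text and the argument \texttt{dtm\_over\_mw} stands in for the density-dependent $DTM / DTM_{\rm{MW}}$ curve of the depletion model; the function and variable names are illustrative.

```python
# Dust-to-gas ratios per gas particle, scaled from the solar-metallicity
# graphite and silicate values by metallicity and by the dust-to-metals
# ratio of the depletion model (coefficients from the text; names are
# illustrative).

DGR_GRAPHITE_SOLAR = 2.4e-3   # graphite dust-to-gas ratio at Z = Zsun
DGR_SILICATE_SOLAR = 4.0e-3   # silicate dust-to-gas ratio at Z = Zsun

def dust_to_gas(z_over_zsun, dtm_over_mw=1.0):
    """Return (graphite, silicate) dust-to-gas mass ratios.

    For the 'no depletion' model, dtm_over_mw is fixed at 1.
    """
    graphite = DGR_GRAPHITE_SOLAR * dtm_over_mw * z_over_zsun
    silicate = DGR_SILICATE_SOLAR * dtm_over_mw * z_over_zsun
    return graphite, silicate

# e.g. a particle at 0.5 Zsun with DTM/DTM_MW = 0.6:
graphite_dgr, silicate_dgr = dust_to_gas(0.5, dtm_over_mw=0.6)
```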
The dust temperature in each cell of the AMR grid is calculated by \textsc{radmc-3d} using the stellar radiation. The level populations of ions and molecules in each cell are calculated in \textsc{radmc-3d} from the gas properties and non-equilibrium species abundances. We use the Local Velocity Gradient (LVG) method to calculate the level populations, as this approximates the effects of non-Local Thermodynamic Equilibrium (non-LTE). We use atomic data and collisional excitation rates from the \textsc{lamda}\footnote{\url{https://home.strw.leidenuniv.nl/~moldata/}} \citep{schoier05} and \textsc{chianti}\footnote{\url{https://www.chiantidatabase.org}} \citep{dere97, landi13} databases. The line emissivity in each cell can then be calculated from the level populations. As H$\alpha$ emission is typically dominated by recombination (which is not accounted for in the calculation of level populations in \textsc{radmc-3d}), we instead calculate the H$\alpha$ emissivities due to recombination of H\textsc{ii} and collisional excitation of H\textsc{i} for each cell in the \textsc{radmc-3d} AMR grid using rates from \citet{raga15}. For each emission line we produce a 3D data cube in position-position-velocity space, with velocities spanning $\pm 200 \, \rm{km} \, \rm{s}^{-1}$ about the line centre at a spectral resolution of $2 \, \rm{km} \, \rm{s}^{-1}$, and a spatial resolution of $20 \, \rm{pc}$. We repeat the \textsc{radmc-3d} calculation with emission lines disabled to determine the continuum spectrum from thermal dust emission and starlight, which we subtract from the total emission to obtain the line emission. \subsection{Synthetic emission line predictions}\label{line_prediction_sect} Fig.~\ref{emission_map_fig} shows velocity-integrated maps of the continuum-subtracted line emission from the FIR lines [C\textsc{ii}]$_{158 \rm{\mu m}}$ and [O\textsc{iii}]$_{88 \rm{\mu m}}$ in the left- and right-hand columns, respectively. 
The three rows from top to bottom show the m1e12 simulation using the fiducial, uniform ISRF and no depletion models. In the fiducial model, [C\textsc{ii}]$_{158 \rm{\mu m}}$ emission is strongest along the spiral arms, but there remains a significant diffuse component in between. In contrast, [O\textsc{iii}]$_{88 \rm{\mu m}}$ is more strongly concentrated in small, bright regions along the arms, with very little diffuse emission. Comparing these to the image of stellar light and H$\alpha$ emission in Fig.~\ref{m1e12_morph_fig}, we find that [O\textsc{iii}]$_{88 \rm{\mu m}}$ predominantly arises from H\textsc{ii} regions around young stars in our simulations. This is unsurprising, as high-energy photons produced by young stars are required to photoionise oxygen to O\textsc{iii}. The [C\textsc{ii}]$_{158 \rm{\mu m}}$ emission in the uniform ISRF model is somewhat weaker and misses the brightest intensities seen along the spiral arms in the fiducial model, although there is still a significant diffuse component. The difference is more dramatic in [O\textsc{iii}]$_{88 \rm{\mu m}}$, which is much weaker in the uniform ISRF model, due to the lack of H\textsc{ii} regions in this case. The [C\textsc{ii}]$_{158 \rm{\mu m}}$ and [O\textsc{iii}]$_{88 \rm{\mu m}}$ morphology in the no depletion model, which does include H\textsc{ii} regions, is very similar to the fiducial model. As we will see below, the total luminosity of these lines is stronger with the no depletion model, as the gas-phase abundances of carbon and oxygen are higher when we do not account for the depletion onto dust grains. To compare our simulation predictions to observations, we calculate the total line luminosity, $L_{\rm{line}}$, integrated over the disc of the galaxy. 
We consider four fine-structure FIR metal lines that are important for metal cooling and have been found observationally to correlate with star formation rate: [C\textsc{ii}]$_{158 \rm{\mu m}}$, [O\textsc{i}]$_{63 \rm{\mu m}}$, [O\textsc{iii}]$_{88 \rm{\mu m}}$ and [N\textsc{ii}]$_{122 \rm{\mu m}}$, together with the optical line H$\alpha_{6563 \text{\AA}}$. These are plotted in Fig.~\ref{SFR_tracers_fig} versus the total star formation rate. The grey symbols show the simulation predictions for the fiducial (top row), uniform ISRF (middle row), and no depletion (bottom row) models. We include five snapshots from each simulation, at intervals of 100~Myr. In the simulations, the star formation rate is averaged over the preceding 10~Myr. The coloured symbols in Fig.~\ref{SFR_tracers_fig} show observed line luminosities, with star formation rates derived from continuum measurements, as detailed below. We include FIR emission line measurements from the \textit{Herschel} Dwarf Galaxy Survey \citep{cormier15} and the galaxies in the \citet{brauher08} sample of ISO observations that are identified as starbursts. We take the star formation rates for these samples from \citet{delooze14}, which were derived from FUV and 24~$\rm{\mu m}$ emission using the calibrations of \citet{kennicutt09} and \citet{hao11}. Measurements of H$\alpha_{6563 \text{\AA}}$ emission in these galaxies are taken from \citet{gildepaz03}, \citet{moustakas06}, \citet{kennicutt08} and \citet{ostlin09}, where available. We also show FIR observations of star-forming galaxies and Luminous Infrared Galaxies (LIRGs) from the SHINING survey \citep{herreracamus18}. We calculate the star formation rates in this sample from the $63 \, \rm{\mu m}$ continuum flux densities, based on the calibration between the $70 \, \rm{\mu m}$ luminosity and star formation rate from \citet{calzetti10}. 
At optical wavelengths, we also include H$\alpha_{6563 \text{\AA}}$ observations from \citet{calzetti07} for galaxies selected from the SINGS survey, with star formation rates derived from $24 \, \rm{\mu m}$ continuum emission using the calibration from \citet{rieke09}. Finally, we show the calibration between H$\alpha_{6563 \text{\AA}}$ luminosity and star formation rate from \citet{kennicutt12} in the right-hand column (solid lines). For the observational data we use continuum-derived star formation rates as they are independent from the emission line measurements. However, simple estimators such as these might be subject to uncertainties, for example \citet{utomo14} find systematic differences between the star formation rates from UV plus IR estimators compared to those derived from modelling the composite spectral energy distributions with stellar population synthesis models. Any such uncertainties will affect our comparisons to the simulation data, for which we use the true star formation rate. In the top row of Fig.~\ref{SFR_tracers_fig} we see that the fiducial model broadly reproduces the observed correlations between luminosity and star formation rate for these emission lines. There are some deviations between the simulation predictions and the observations though. Most notably, the simulations of the most massive galaxies overpredict the [O\textsc{iii}]$_{88 \rm{\mu m}}$ luminosity by up to a factor $\approx$2 compared to observations at the same star formation rate. Furthermore, the dwarf galaxy m1e10 exhibits greater scatter in [O\textsc{iii}]$_{88 \rm{\mu m}}$ and [O\textsc{i}]$_{63 \rm{\mu m}}$, with some snapshots lying up to an order of magnitude below the observations, while other snapshots from m1e10 are close to the observed correlation. Nevertheless, the simulation predictions overall are in reasonably good agreement with the observational data. 
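For reference, the \citet{kennicutt12} calibrations take the linear form $\log_{10}(\rm{SFR} / \rm{M}_{\odot} \, \rm{yr}^{-1}) = \log_{10}(L_{x} / \rm{erg \, s^{-1}}) - \log_{10} C_{x}$, with $\log_{10} C_{x} = 41.27$ for H$\alpha$. The sketch below applies this calibration; the function name is illustrative.

```python
import math

# Kennicutt & Evans (2012) linear SFR calibration for Halpha:
# log10(SFR / [Msun/yr]) = log10(L / [erg/s]) - 41.27.
# (Calibration constant from that work; function name is illustrative.)

LOG_C_HALPHA = 41.27

def sfr_from_halpha(l_halpha_cgs):
    """Star formation rate in Msun/yr from an Halpha luminosity in erg/s."""
    return 10.0 ** (math.log10(l_halpha_cgs) - LOG_C_HALPHA)

# A galaxy with L(Halpha) = 10^41.27 erg/s corresponds to ~1 Msun/yr.
sfr = sfr_from_halpha(10.0 ** 41.27)
```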
In the uniform ISRF model, the [O\textsc{iii}]$_{88 \rm{\mu m}}$ and H$\alpha_{6563 \text{\AA}}$ luminosities are up to an order of magnitude lower than in the fiducial model. As noted above, this model does not include the subgrid prescription for H\textsc{ii} regions, which dominate the total emission of these lines in the fiducial model. The decrease in [C\textsc{ii}]$_{158 \rm{\mu m}}$ and [O\textsc{i}]$_{63 \rm{\mu m}}$ luminosities in the uniform ISRF model is more modest. As we saw in Fig.~\ref{emission_map_fig}, the [C\textsc{ii}]$_{158 \rm{\mu m}}$ emission includes a significant diffuse component outside H\textsc{ii} regions, which is still present in the uniform ISRF model. The no depletion model exhibits stronger FIR metal line luminosities, by up to a factor $\approx$2, compared to the fiducial model. This is due to the increased elemental abundances of carbon, oxygen and nitrogen in the gas phase at fixed total metallicity when we do not account for the depletion of these metals onto dust grains. This leads to increased tension between the simulation predictions and observational data, particularly for the [O\textsc{iii}]$_{88 \rm{\mu m}}$ luminosity in the most massive galaxies. However, the H$\alpha_{6563 \text{\AA}}$ luminosity remains unaffected by the model for metal depletion. Other elements such as iron are depleted more strongly than carbon or oxygen, with the gas phase abundance of iron reduced by two orders of magnitude at high densities (see Fig.~\ref{depl_fig}). We therefore expect dust depletion could have an even greater effect on emission lines from species such as Fe\textsc{ii} and Fe\textsc{iii} \citep[e.g.][]{osterbrock92, rodriguez02, delgadoinglada09}. We will explore emission from heavily depleted species such as this in a future work. In Appendix~\ref{resolution_sect} we compare the emission line luminosities in the fiducial model from simulations run at different resolutions. 
While many of these luminosity predictions show good numerical convergence, there are some galaxies for which particular emission lines differ significantly between resolution levels. For example, in m3e10 the luminosities of [C\textsc{ii}]$_{158 \rm{\mu m}}$, [O\textsc{i}]$_{63 \rm{\mu m}}$ and [N\textsc{ii}]$_{122 \rm{\mu m}}$ increase by up to an order of magnitude from standard to high resolution, while the [C\textsc{ii}]$_{158 \rm{\mu m}}$ luminosity of m1e11 decreases by an order of magnitude from low to standard resolution. However, we do not see any trends of particular emission lines always increasing or decreasing systematically with resolution. It is unclear to what extent these differences may be due to stochastic variations between runs.
\definecolor{red1_1}{rgb}{0.962, 0.933, 0.933}
\definecolor{red1_2}{rgb}{0.979, 0.962, 0.962}
\definecolor{red1_3}{rgb}{0.669, 0.408, 0.408}
\definecolor{red1_5}{rgb}{0.944, 0.901, 0.901}
\definecolor{red2_2}{rgb}{0.897, 0.816, 0.816}
\definecolor{red2_3}{rgb}{0.934, 0.882, 0.882}
\definecolor{red2_5}{rgb}{0.941, 0.894, 0.894}
\definecolor{red3_1}{rgb}{0.986, 0.976, 0.976}
\definecolor{red3_2}{rgb}{0.902, 0.825, 0.825}
\definecolor{red3_3}{rgb}{0.904, 0.828, 0.828}
\definecolor{red3_4}{rgb}{0.818, 0.674, 0.674}
\definecolor{red3_5}{rgb}{0.857, 0.744, 0.744}
\definecolor{red4_1}{rgb}{0.994, 0.990, 0.990}
\definecolor{red4_2}{rgb}{0.875, 0.778, 0.778}
\definecolor{red4_3}{rgb}{0.589, 0.265, 0.265}
\definecolor{red4_5}{rgb}{0.598, 0.283, 0.283}
\definecolor{red5_1}{rgb}{0.997, 0.994, 0.994}
\definecolor{red5_3}{rgb}{0.312, 0.030, 0.030}
\definecolor{red5_5}{rgb}{0.878, 0.782, 0.782}
\definecolor{red6_1}{rgb}{0.993, 0.988, 0.988}
\definecolor{red6_2}{rgb}{0.903, 0.826, 0.826}
\definecolor{red6_3}{rgb}{0.640, 0.358, 0.358}
\definecolor{red6_5}{rgb}{0.917, 0.852, 0.852}
\definecolor{red7_2}{rgb}{0.921, 0.859, 0.859}
\definecolor{red7_3}{rgb}{0.422, 0.030, 0.030}
\definecolor{red7_5}{rgb}{0.901, 0.823, 0.823}
\definecolor{blue2_1}{rgb}{0.936, 0.954, 0.964} \definecolor{blue2_4}{rgb}{0.700, 0.784, 0.832} \definecolor{blue4_4}{rgb}{0.180, 0.409, 0.541} \definecolor{blue5_2}{rgb}{0.747, 0.818, 0.859} \definecolor{blue5_4}{rgb}{0.061, 0.324, 0.474} \definecolor{blue6_4}{rgb}{0.428, 0.588, 0.679} \definecolor{blue7_1}{rgb}{0.977, 0.984, 0.987} \definecolor{blue7_4}{rgb}{0.030, 0.201, 0.379} \begin{table} \begin{minipage}{84mm} \centering \caption{Ratios of emission line luminosities calculated with non-equilibrium and equilibrium abundances, $L_{\rm{noneq}} / L_{\rm{eqm}}$, at 500~Myr using the fiducial model. Values highlighted in red and blue correspond to an enhancement and reduction, respectively, of the luminosity when non-equilibrium abundances are used.} \label{noneq_vs_eqm_L} \begin{tabular}{cccccc} \hline & \multicolumn{5}{c}{$L_{\rm{noneq}} / L_{\rm{eqm}}$} \\ \cline{2-6} Galaxy & [C\textsc{ii}] & [O\textsc{i}] & [O\textsc{iii}] & [N\textsc{ii}] & H$\alpha$ \\ & $158 \rm{\mu m}$ & $63 \rm{\mu m}$ & $88 \rm{\mu m}$ & $122 \rm{\mu m}$ & $6563 \rm{\AA}$ \\ \hline m1e10 & \cellcolor{red1_1} 1.08 & \cellcolor{red1_2} 1.04 & \cellcolor{red1_3} \textcolor{white}{1.66} & 1.00 & \cellcolor{red1_5} 1.11 \\ m3e10 & \cellcolor{blue2_1} 0.93 & \cellcolor{red2_2} 1.21 & \cellcolor{red2_3} 1.13 & \cellcolor{blue2_4} 0.75 & \cellcolor{red2_5} 1.12 \\ m1e11 & \cellcolor{red3_1} 1.03 & \cellcolor{red3_2} 1.20 & \cellcolor{red3_3} 1.19 & \cellcolor{red3_4} 1.36 & \cellcolor{red3_5} 1.29 \\ m3e11 & \cellcolor{red4_1} 1.01 & \cellcolor{red4_2} 1.25 & \cellcolor{red4_3} \textcolor{white}{1.82} & \cellcolor{blue4_4} \textcolor{white}{0.52} & \cellcolor{red4_5} \textcolor{white}{1.80} \\ m3e11\_lowGas & \cellcolor{red5_1} 1.01 & \cellcolor{blue5_2} 0.78 & \cellcolor{red5_3} \textcolor{white}{2.38} & \cellcolor{blue5_4} \textcolor{white}{0.49} & \cellcolor{red5_5} 1.24 \\ m3e11\_hiGas & \cellcolor{red6_1} 1.01 & \cellcolor{red6_2} 1.19 & \cellcolor{red6_3} \textcolor{white}{1.72} & 
\cellcolor{blue6_4} \textcolor{white}{0.61} & \cellcolor{red6_5} 1.17 \\ m1e12 & \cellcolor{blue7_1} 0.97 & \cellcolor{red7_2} 1.16 & \cellcolor{red7_3} \textcolor{white}{2.16} & \cellcolor{blue7_4} \textcolor{white}{0.45} & \cellcolor{red7_5} 1.20 \\ \hline \end{tabular} \end{minipage} \end{table} The emission line luminosities shown in Fig.~\ref{SFR_tracers_fig} were computed using the non-equilibrium ion abundances from the simulations. To study the impact of non-equilibrium chemistry on the predicted luminosities in the fiducial model, we also repeated the \textsc{radmc-3d} calculations from the final snapshot at 500~Myr using ion abundances in chemical equilibrium. The equilibrium abundances were determined by integrating the \textsc{chimes} reaction network to equilibrium at constant density and temperature for each gas particle in the snapshot. Table~\ref{noneq_vs_eqm_L} summarises the ratios of luminosities calculated with non-equilibrium and equilibrium abundances, $L_{\rm{noneq}} / L_{\rm{eqm}}$, for each of the five emission lines that we considered in Fig.~\ref{SFR_tracers_fig}. Values highlighted in red indicate where non-equilibrium effects enhance the luminosity, while blue values show where the luminosity is suppressed in non-equilibrium. The ratios in Table~\ref{noneq_vs_eqm_L} were calculated from the final snapshot of each simulation; however, they may also vary in time, particularly in the dwarf galaxies which exhibit strong variations in the star formation rate (see Fig.~\ref{SFH_fig}). The luminosity of [O\textsc{iii}]$_{88 \rm{\mu m}}$ shows the greatest enhancement in non-equilibrium, by up to a factor of 2.38 in m3e11\_lowGas. The photoionisation of O\textsc{ii} to O\textsc{iii} requires the presence of high energy photons ($>$35~eV) produced by young, massive stars. This emission line is therefore particularly sensitive to short time-scale variations in the local star formation rate, which could drive these non-equilibrium effects. 
In contrast, [N\textsc{ii}]$_{122 \rm{\mu m}}$ exhibits a lower luminosity in most of our simulations when we use non-equilibrium abundances, with $L_{\rm{noneq}} / L_{\rm{eqm}}$ reaching values as low as 0.45 in m1e12. The [C\textsc{ii}]$_{158 \rm{\mu m}}$ luminosity is least affected by the non-equilibrium chemistry, differing by less than 10 per cent compared to equilibrium in all of our simulations with the fiducial model. \section{Conclusions}\label{conclusions_sect} We have presented a suite of simulations of isolated disc galaxies ranging from dwarfs to Milky Way-mass, with a mass resolution of $400 \, \rm{M}_{\odot}$ per particle, and structural properties initially set according to observed scaling relations at redshift zero. These simulations combine the \textsc{fire-2} subgrid galaxy formation models with the \textsc{chimes} non-equilibrium chemistry and cooling module. In our fiducial model, we coupled the chemical reaction network to the local stellar fluxes computed from star particles using the approximate \textsc{lebron} radiative transfer method. This method assumes that the absorption of stellar radiation occurs locally around the star particle producing the radiation and around the receiving gas particle. We also implemented an empirical density-dependent model for the depletion of metals from the gas phase onto dust grains, based on observed depletion factors. We then repeated each simulation with two model variations. First, we replaced the local stellar fluxes with a spatially uniform interstellar radiation field normalised according to the star formation rate surface density of the galaxy disc, which we averaged over the preceding 10~Myr (the uniform ISRF model). Second, we applied a constant dust to metals ratio and disabled the depletion of metals from the gas phase due to dust grains (the no depletion model). 
By comparing these model variations to the fiducial runs, we explored the impact of local stellar fluxes and metal depletion on the non-equilibrium chemistry, and resulting observational diagnostics, of the ISM. We particularly focus on observations of the H\textsc{i} to H$_{2}$ transition and emission line tracers of the star formation rate. Our main results are as follows: \begin{enumerate}[leftmargin=\parindent] \item Dwarf galaxies run with the uniform ISRF model exhibit stronger outflows, resulting in disc gas fractions up to 30 per cent lower than in the fiducial model, while the total mass of stars formed over 500~Myr is up to 27 per cent higher with the uniform ISRF model. This may be due to the lack of H\textsc{ii} region feedback in the uniform ISRF runs. In contrast, the model variations have little effect on the total star formation rates and disc gas fractions in galaxies with halo masses $M_{200, \, \rm{crit}} \! \geq \! 10^{11} \, \rm{M}_{\odot}$ (see Fig.~\ref{SFH_fig}). \item Non-equilibrium effects can lead to strong enhancement and suppression of H\textsc{i} and H$_{2}$ abundances in certain regions of density-temperature space. At densities $n_{\rm{H}} \! \sim \! 0.1 \, \rm{cm}^{-3}$ and temperatures $T \! \sim \! 10^{3} \, \rm{K}$ the H$_{2}$ abundance is enhanced by more than 3 orders of magnitude compared to chemical equilibrium, due to recent heating of cold, dense gas that has had insufficient time to fully destroy the molecules. This may have important consequences for predictions of infrared emission lines produced by rovibrational transitions of H$_{2}$ at these temperatures. In contrast, the non-equilibrium H$_{2}$ fraction is suppressed by up to an order of magnitude at $n_{\rm{H}} \! \sim \! 1 \! - \! 10 \, \rm{cm}^{-3}$, $T \! \lesssim \! 100 \, \rm{K}$ and $n_{\rm{H}} \! \sim 100 \, \rm{cm}^{-3}$, $T \! \sim \! 100 \! - \! 10^{3} \, \rm{K}$, with a corresponding enhancement in H\textsc{i}. 
This may be due to recently cooling gas that has had insufficient time to fully form molecules (Fig.~\ref{Trho_HI_H2_fig}). \item Compared to the fiducial model, the m1e12 simulation run with a uniform ISRF produces less intermediate-temperature ($10^{3} \! < \! T \! \leq \! 10^{4} \, \rm{K}$) gas at $n_{\rm{H}} \! \gtrsim \! 10 \, \rm{cm}^{-3}$, due to the lack of H\textsc{ii} regions, and more intermediate-temperature H$_{2}$ gas at $n_{\rm{H}} \! \lesssim \! 10 \, \rm{cm}^{-3}$. Meanwhile, the no depletion model enhances the low-temperature ($T \! \leq \! 10^{3} \, \rm{K}$) H\textsc{i} component at $n_{\rm{H}} \! \lesssim \! 10 \, \rm{cm}^{-3}$, due to an increase in metal cooling (Fig.~\ref{nH_hist_fig}). \item Our simulation predictions for the H$_{2}$ fraction versus total neutral hydrogen column density in high-mass galaxies ($M_{200, \, \rm{crit}} \! \gtrsim \! 3 \times 10^{11} \, \rm{M}_{\odot}$) broadly overlap with absorption line observations in the Milky Way \citep{gillmon06, shull21}. However, our dwarf galaxy simulations can only reproduce the highest H$_{2}$ fractions observed by \citet{tumlinson02} in the LMC and SMC (Fig.~\ref{H2_fraction_fig}). \item The ratio of total H\textsc{i} to total stellar mass as a function of stellar mass in our simulations is in good agreement with observations. However, while the simulated H$_{2}$ to stellar mass fractions agree with observations at high stellar masses ($M_{\ast} \! \gtrsim \! 10^{10} \, \rm{M}_{\odot}$), our dwarf galaxy simulations underpredict the observations by $\sim \! 1 - 2$ orders of magnitude (Fig.~\ref{global_HI_H2_fig}). This may be due to selection effects in the observational samples. \item In our fiducial model, [C\textsc{ii}]$_{158 \rm{\mu m}}$ emission in m1e12 is brightest along the spiral arms, but with a significant diffuse component from inter-arm regions. 
The [O\textsc{iii}]$_{88 \rm{\mu m}}$ line is more strongly concentrated in compact regions along the spiral arms, with very little diffuse emission, as it is produced mostly within the Str\"{o}mgren radii of young stars. The morphology of the [O\textsc{iii}]$_{88 \rm{\mu m}}$ emission differs dramatically in the uniform ISRF model, which lacks these H\textsc{ii} regions, but the [C\textsc{ii}]$_{158 \rm{\mu m}}$ emission still retains the diffuse component in this case (Fig.~\ref{emission_map_fig}). \item The fiducial model broadly reproduces observed correlations between line luminosity and star formation rate for the emission lines [C\textsc{ii}]$_{158 \rm{\mu m}}$, [O\textsc{i}]$_{63 \rm{\mu m}}$, [O\textsc{iii}]$_{88 \rm{\mu m}}$, [N\textsc{ii}]$_{122 \rm{\mu m}}$ and H$\alpha_{6563 \text{\AA}}$ (Fig.~\ref{SFR_tracers_fig}). The most significant deviation between our simulation predictions and observations is for the [O\textsc{iii}]$_{88 \rm{\mu m}}$ line, which is overpredicted by up to a factor $\approx$2 in the simulations of the most massive galaxies in our sample. The line luminosities are lower in the uniform ISRF model, by up to an order of magnitude for [O\textsc{iii}]$_{88 \rm{\mu m}}$ and H$\alpha_{6563 \text{\AA}}$, due to the lack of H\textsc{ii} regions. The no depletion model predicts up to a factor $\approx$2 higher luminosities for the FIR metal lines due to the increase in gas-phase metal abundances, but H$\alpha_{6563 \text{\AA}}$ is unaffected. Non-equilibrium effects enhance the luminosity of [O\textsc{iii}]$_{88 \rm{\mu m}}$ in our fiducial model by up to a factor of 2.38, while the [N\textsc{ii}]$_{122 \rm{\mu m}}$ luminosity is typically suppressed, with $L_{\rm{noneq}} / L_{\rm{eqm}}$ as low as 0.45. In contrast, [C\textsc{ii}]$_{158 \rm{\mu m}}$ differs by less than 10 per cent when comparing luminosities calculated with non-equilibrium and equilibrium abundances. 
\end{enumerate} We have thus shown that the treatment of local stellar fluxes and depletion of metals onto dust grains affects the synthetic emission line predictions from our simulations, particularly for lines commonly used as star formation rate tracers. In the case of stellar radiation, this is primarily because we need to capture the irradiation of H\textsc{ii} regions within the Str\"{o}mgren radius surrounding young stars, as these regions contribute to, and in many cases dominate, the emission from these lines. Correctly accounting for metal depletion is important as it reduces the gas-phase abundance of metal species available to produce line emission. However, the local stellar fluxes and metal depletion generally have little impact on the overall galaxy evolution, for example in terms of total star formation rate, except in the case of dwarf galaxies. In \citet{richings16} we compared simulations of isolated galaxies run using spatially uniform radiation fields with constant normalisations of different strengths. We found that weaker radiation fields led to higher star formation rates and stronger galactic outflows, as they enabled more gas to cool to the cold, star-forming ISM phase. We therefore conclude that the evolution of the galaxy depends on the average strength of the interstellar radiation field throughout the galactic disc, but is not sensitive to local variations of stellar fluxes within the disc, although this conclusion may depend on the subgrid treatment of star formation and stellar feedback employed in the simulation. It may also depend on the treatment of radiative transfer, as the assumption of local extinction in the \textsc{lebron} method may lead to an overestimate of the importance of distant sources as the intervening extinction is not fully accounted for. 
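As an illustrative aside, the Str\"{o}mgren-radius scale invoked above can be estimated with a short calculation of the standard formula $R_{\rm{S}} = (3 Q_{\rm{H}} / 4 \pi n_{\rm{H}}^{2} \alpha_{\rm{B}})^{1/3}$. This is a sketch, not code from our pipeline; the ionising photon rate $Q_{\rm{H}} = 10^{49} \, \rm{s}^{-1}$ (roughly a single O star) and the case-B recombination coefficient $\alpha_{\rm{B}} \approx 2.6 \times 10^{-13} \, \rm{cm}^{3} \, \rm{s}^{-1}$ at $10^{4}$~K are assumed values:

```python
# Illustrative Stromgren-radius estimate (assumed parameters, see text):
# R_S = (3 Q_H / (4 pi n_H^2 alpha_B))^(1/3)
import math

PC = 3.086e18  # cm per parsec

def stromgren_radius_pc(Q_H, n_H, alpha_B=2.6e-13):
    """Stromgren radius in pc for a pure-hydrogen nebula.

    Q_H: ionising photon rate [s^-1]; n_H: hydrogen density [cm^-3];
    alpha_B: case-B recombination coefficient [cm^3 s^-1] at ~1e4 K.
    """
    r_cm = (3.0 * Q_H / (4.0 * math.pi * n_H**2 * alpha_B)) ** (1.0 / 3.0)
    return r_cm / PC

print(stromgren_radius_pc(1e49, 10.0))  # ~15 pc in diffuse gas
print(stromgren_radius_pc(1e49, 1e3))   # sub-pc at the SF threshold density
```

The strong $n_{\rm{H}}^{-2/3}$ dependence illustrates why H\textsc{ii} regions in dense star-forming gas are compact and therefore require the local-flux treatment to be captured.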
While we have only focussed on a handful of comparisons between the simulations and observations in this paper, the detailed chemical modelling in this simulation suite will enable us to make predictions for many additional observational diagnostics, such as FIR line deficits, nebular emission line ratios from individual star-forming regions, and emission line tracers of molecular gas. We will explore these aspects further in future works. \section*{Acknowledgements} We thank Caleb Choban and Jonathan Stern for useful comments and suggestions. AJR was supported by a COFUND/Durham Junior Research Fellowship under EU grant 609412; and by the Science and Technology Facilities Council [ST/T000244/1]. CAFG was supported by NSF through grants AST-1715216, AST-2108230, and CAREER award AST-1652522; by NASA through grants 17-ATP17-0067 and 21-ATP21-0036; by STScI through grants HST-AR-16124.001-A and HST-GO-16730.016-A; by CXO through grant TM2-23005X; and by the Research Corporation for Science Advancement through a Cottrell Scholar Award. ABG was supported by an NSF-GRFP under grant DGE-1842165. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. \section*{Data availability} The data underlying this article will be shared on reasonable request to the corresponding author. A public version of the \textsc{gizmo} code can be found at \url{http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html}, and a public version of the \textsc{chimes} code can be found at \url{https://richings.bitbucket.io/chimes/home.html}. 
{} \appendix \section{Calibration of escape fraction parameters}\label{esc_fraction_sect} As discussed in Section~\ref{flux_sect}, our model for local stellar fluxes contains two free parameters, the escape fractions of FUV and EUV radiation from H\textsc{ii} regions. These parameters determine how much radiation from star particles can propagate beyond the Str\"{o}mgren radius surrounding the star and contribute to the diffuse interstellar radiation field. To understand how the resulting stellar fluxes depend on these parameters, we repeated the m3e11\_lowRes08 galaxy simulations with FUV and EUV escape fractions varied independently between 0.005 and 0.5. Fig.~\ref{fesc_cal_fig} shows the median (solid curves) and tenth to ninetieth percentile (shaded regions) fluxes in the FUV (left-hand panel) and EUV (right-hand panel) bands, plotted against the star formation rate surface density averaged over the whole disc in the preceding 10~Myr ($\Sigma_{\rm{SFR, \, disc}}$). As we saw in Fig.~\ref{flux_fullDisc_fig}, the fluxes scale linearly with $\Sigma_{\rm{SFR, \, disc}}$. However, the normalisation increases with increasing escape fraction, as more radiation propagates into the diffuse component. The dotted lines in Fig.~\ref{fesc_cal_fig} indicate a linear scaling normalised to the flux of the interstellar radiation field of the Milky Way in the local solar neighbourhood \citep{black87} at $\Sigma_{\rm{SFR,} \, \rm{MW}} = 4 \times 10^{-3} \, \rm{M}_{\odot} \, \rm{yr}^{-1} \, \rm{kpc}^{-2}$ \citep{robertson08}. For our fiducial model, we therefore chose escape fractions of 0.1 and 0.05 in the FUV and EUV bands, respectively, as these best reproduce the normalisation expected from observations of the radiation field in the Milky Way. \section{Variations in numerical resolution}\label{resolution_sect} To test the numerical convergence of our results we repeated some of our simulations with different resolutions. 
Our main runs use baryonic and dark matter particle masses of $m_{\rm{b}} = 400 \, \rm{M}_{\odot}$ and $m_{\rm{DM}} = 1910 \, \rm{M}_{\odot}$, respectively. The gravitational softening of gas particles is adaptive and set to the mean inter-particle spacing at the particle's density, with a minimum of 0.08~pc. This results in a softening length of 2.2~pc at the star formation density threshold $n_{\rm{H}} = 10^{3} \, \rm{cm}^{-3}$. The star and dark matter particles use constant gravitational softenings of 1.6~pc and 2.8~pc, respectively. We then repeated the five galaxy models with varying halo mass using 8 times lower mass resolution and gravitational softenings increased by a factor of 2. We also repeated the m3e10 dwarf galaxy with 8 times higher mass resolution and gravitational softenings reduced by a factor of 2. See Table~\ref{galaxy_pars} for a summary of the simulation parameters. The simulations at lower and higher resolution were only run using the fiducial model. Fig.~\ref{SFH_resTest_fig} shows the evolution of star formation rate (top panel) and disc gas mass fraction (bottom panel), calculated as in Fig.~\ref{SFH_fig}, comparing simulations run at low (dashed lines), standard (solid lines), and high (dotted lines) resolution. In the dwarf galaxies (m1e10 and m3e10) we see large differences in the gas fraction between the low and standard resolution runs. This might be due to the bursty nature of star formation in this regime \citep[e.g.][]{fauchergiguere18}, as dwarf galaxies can easily lose a significant proportion of their gas content via outflows driven by stellar feedback after periods of intense star formation, which must then be re-accreted before further star formation can continue. The differences we see here may therefore simply reflect stochastic variations between runs. 
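The adaptive softening quoted above can be cross-checked with a short calculation. The sketch below assumes only that the softening equals the mean inter-particle spacing, $h = (m_{\rm{b}} / \rho)^{1/3}$, and adopts a hydrogen mass fraction $X_{\rm{H}} = 0.7$ (an assumed value) to convert $n_{\rm{H}}$ into a total mass density:

```python
# Sketch: recover the ~2.2 pc adaptive softening at the star formation
# threshold n_H = 1e3 cm^-3 for m_b = 400 Msun.
# Assumption (not stated in the text): hydrogen mass fraction X_H = 0.7.
M_SUN = 1.989e33   # g
M_H = 1.6726e-24   # g, hydrogen atom mass
PC = 3.086e18      # cm

def softening_pc(m_b_msun, n_H, X_H=0.7):
    """Mean inter-particle spacing (pc) for gas at hydrogen density n_H [cm^-3]."""
    rho = n_H * M_H / X_H                            # total mass density, g cm^-3
    h = (m_b_msun * M_SUN / rho) ** (1.0 / 3.0)      # cm
    return h / PC

print(softening_pc(400.0, 1e3))  # ~2.2 pc, matching the value quoted in the text
```

The same expression shows that the softening grows as $n_{\rm{H}}^{-1/3}$ towards lower densities, until it is capped from below by the 0.08~pc minimum.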
The m1e10 dwarf galaxy in particular has relatively few baryonic particles at low resolution (initially 1.9$\times$10$^{4}$ gas and 2.1$\times$10$^{3}$ star particles), and so the timing of individual feedback events can be somewhat random due to poor sampling. These differences in the gas evolution of dwarf galaxies between low and standard resolution may also be caused by the difficulty in setting up the initial conditions in a stable disc configuration. As described in Section~\ref{IC_sect}, we first run each galaxy model for an initial 300~Myr settling in period, during which the supernova feedback time-scales have been reduced. This enables the gas to settle into a stable disc, without an initial burst of star formation destroying the gas disc altogether. The simulation snapshot after 300~Myr is then used as the initial conditions for the main run with the full \textsc{chimes} chemistry model, and so the evolution shown in Fig.~\ref{SFH_resTest_fig} starts from this point and does not include the initial settling in period. However, we see that the gas fraction at time $t = 0 \, \rm{Myr}$ in m1e10 at low resolution (10 per cent) is much less than at standard resolution (60 per cent). This indicates that the initial gas disc in m1e10 at low resolution was disrupted during the initial settling in phase, and then re-accretes onto the galaxy over the following 200~Myr. At high resolution, the evolution of star formation rate and disc gas fraction in m3e10 is much closer to those at standard resolution, although the high resolution run retains higher gas fractions by up to $\approx$10 per cent. However, for the high resolution run we used the snapshot from the standard resolution simulation after the 300~Myr settling in period and increased the resolution by splitting each particle into eight, to avoid the computational expense of re-running the initial 300~Myr period again at higher resolution. 
This is why the high resolution run starts from the same gas fraction at $t = 0 \, \rm{Myr}$, which also reduces uncertainties in how we set up the initial conditions. The galaxy models at higher masses ($M_{200} \geq 10^{11} \, \rm{M}_{\odot}$) show much closer agreement in the star formation rates and disc gas fraction between low and standard resolution. Fig.~\ref{H2_fraction_resTest_fig} compares the H$_{2}$ fraction versus neutral hydrogen column density, calculated as in Fig.~\ref{H2_fraction_fig}, from simulations at low (top panel), standard (middle panel), and high (bottom panel) resolution. The solid lines and shaded regions show the median and tenth to ninetieth percentile range of the H$_{2}$ fraction, respectively, from the simulations, while the data points show the observational data as described in Section~\ref{transition_sect}. The H$_{2}$ fractions in the high-mass galaxies (m3e11 and m1e12) are in good agreement between the low and standard resolution runs. However, in the m1e11 galaxy the H$_{2}$ fraction decreases more strongly towards low column densities ($< 10^{21} \, \rm{cm}^{-2}$) at low resolution than at standard resolution. The dwarf galaxies m1e10 and m3e10 exhibit similar behaviour between the low and standard resolution runs. However, at high resolution the H$_{2}$ fraction in m3e10 is lower at low column densities ($< 10^{21} \, \rm{cm}^{-2}$) than at standard resolution. These trends suggest that at lower galaxy masses (and with lower metallicities) we require higher resolution to correctly capture the H$_{2}$ fractions at low column densities. Fig.~\ref{global_HI_H2_resTest_fig} shows the H\textsc{i} to stellar mass and H$_{2}$ to stellar mass ratios in the left- and right-hand columns, respectively, as calculated in Fig.~\ref{global_HI_H2_fig}. 
The blue, grey and pink symbols show simulations run at low, standard and high resolution, respectively, while the solid curves, shaded regions and remaining data points indicate the observational data as described in Section~\ref{global_HI_H2_sect}. The H\textsc{i} and H$_{2}$ masses in the dwarf galaxies, m1e10 and m3e10, show large spreads of up to an order of magnitude between different resolutions, and also between different snapshots of the same resolution run. This broadly reflects the differences in the total disc gas fractions of the dwarf galaxies, as discussed above. In the high-mass galaxies m3e11 and m1e12, the H\textsc{i} masses are similar between the low and standard resolution. The H$_{2}$ masses of these galaxies are slightly higher at standard resolution than at low resolution, suggesting that the total molecular content may not be fully converged; however, these differences are smaller than those seen in the dwarf galaxies. In Fig.~\ref{SFR_tracers_resTest_fig} we compare the emission line luminosity versus star formation rate relation, as in Fig.~\ref{SFR_tracers_fig}, for simulations run at low (blue symbols), standard (grey symbols) and high (pink symbols) resolution. The observational data, shown by the remaining symbols, are described in Section~\ref{line_prediction_sect}. In many cases the emission line predictions show good agreement between the different resolutions. However, there are several examples where this is not the case. In m3e10, the high resolution run exhibits luminosities of [C\textsc{ii}]$_{158 \rm{\mu m}}$, [O\textsc{i}]$_{63 \rm{\mu m}}$ and [N\textsc{ii}]$_{122 \rm{\mu m}}$ that are up to an order of magnitude higher than those at low and standard resolution. 
While the star formation rate in the high resolution run is also somewhat higher, this does not fully account for the increased luminosities of these three lines as the high resolution predictions lie above the observational data, in contrast to the low and standard resolution runs which are broadly in agreement with observations. The H$\alpha_{6563 \text{\AA}}$ luminosity is also higher in the high resolution run of m3e10; however, this appears to be primarily due to the increased star formation rate, as the high resolution predictions still follow the relation expected from observations in this case. In the high-mass galaxies, m3e11 and m1e12, the largest discrepancies can be seen in the [C\textsc{ii}]$_{158 \rm{\mu m}}$ luminosity, which is up to a factor of $\approx$4 higher at standard resolution than at low resolution, despite the star formation rate being unchanged. In contrast, the [C\textsc{ii}]$_{158 \rm{\mu m}}$ luminosity in m1e11 decreases by an order of magnitude from low to standard resolution. The luminosities of the other emission lines show better agreement between the low and standard resolution runs in the high-mass galaxies. Thus while some galaxies do exhibit significant differences in some of the emission lines between resolution levels, there is no systematic trend of particular emission lines increasing or decreasing in luminosity with numerical resolution for all galaxies. It is therefore unclear to what extent these differences might be driven, at least partially, by stochastic variations between runs. Extending our sample of galaxy models would help reduce these uncertainties. \label{lastpage}
Title: Detection of Cosmic Fullerenes in the Almahata Sitta Meteorite: Are They an Interstellar Heritage?
Abstract: Buckminsterfullerene, C60 , is the largest molecule observed to date in interstellar and circumstellar environments. The mechanism of formation of this molecule is actively debated. Despite targeted searches in primitive carbonaceous chondrites, no unambiguous detection of C60 in a meteorite has been reported to date. Here we report the first firm detection of fullerenes, from C30 to at least C100 , in the Almahata Sitta (AhS) polymict ureilite meteorite. This detection was achieved using highly sensitive laser desorption laser ionization mass spectrometry. Fullerenes have been unambiguously detected in seven clasts of AhS ureilites. Molecular family analysis shows that fullerenes are from a different reservoir compared to the polycyclic aromatic hydrocarbons detected in the same samples. The fullerene family correlates best with carbon clusters, some of which may have been formed by the destruction of solid carbon phases by the impacting laser. We show that the detected fullerenes are not formed in this way. We suggest that fullerenes are an intrinsic component of a specific carbon phase that has yet to be identified. The nondetection of fullerenes in the Murchison and Allende bulk samples, while using the same experimental conditions, suggests that this phase is absent or less abundant in these primitive chondrites. The former case would support the formation of fullerenes by shock-wave processing of carbonaceous phases in the ureilite parent body. However, there are no experimental data to support this scenario. This leaves open the possibility that fullerenes are an interstellar heritage and a messenger of interstellar processes.
https://export.arxiv.org/pdf/2208.10122
\newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \definecolor{darkgreen}{rgb}{0,0.50,0} \shorttitle{Fullerenes in AhS} \shortauthors{Sabbah et al.} \graphicspath{{./}{figures/}} \begin{document} \title{Detection of cosmic fullerenes in the Almahata Sitta meteorite: are they an interstellar heritage?} \correspondingauthor{Christine Joblin, Hassan Sabbah} \email{christine.joblin@irap.omp.eu, hassan.sabbah@irap.omp.eu} \author[0000-0001-5722-4388]{Hassan Sabbah} \affiliation{IRAP, Université Toulouse III - Paul Sabatier, CNRS, CNES \\ 31028 Toulouse Cedex 4, France} \author{Mickaël Carlos} \affiliation{IRAP, Université Toulouse III - Paul Sabatier, CNRS, CNES \\ 31028 Toulouse Cedex 4, France} \author[0000-0003-4735-225X]{Peter Jenniskens} \affiliation{SETI Institute \\ Mountain View, California 94043, USA} \author{Muawia H. Shaddad} \affiliation{University of Khartoum \\ Khartoum 11115, Sudan} \author[0000-0001-7408-3089]{Jean Duprat} \affiliation{IMPMC, CNRS-MNHN-Sorbonne Université \\ 57 rue Cuvier, 75005 Paris, France} \author[0000-0002-9820-3329]{Cyrena A. Goodrich} \affiliation{Lunar and Planetary Institute \\ USRA, Houston, TX 77058, USA} \author[0000-0003-1561-6118]{Christine Joblin} \affiliation{IRAP, Université Toulouse III - Paul Sabatier, CNRS, CNES \\ 31028 Toulouse Cedex 4, France} \keywords{astrochemistry – meteorites, meteors, meteoroids – methods: laser mass spectrometry – methods: laboratory astrophysics} \section{Introduction} \label{sec:intro} Kroto and co-workers discovered the C$_{60}$ buckminsterfullerene in the laboratory during the synthesis of carbon clusters (C clusters) of interest for carbon chemistry in evolved stars \citep{Kroto1985}. 
The spectral signatures of C$_{60}$ and its cation C$_{60}^{+}$ have subsequently been identified in the diffuse interstellar medium \citep{Campbell2015, Berne2017} as well as in a variety of circumstellar and interstellar environments, which include planetary nebulae \citep{Cami2010, Garcia-Hernandez2012,Otsuka2014} and photodissociation regions \citep{Sellgren2010, Castellanos2014}. Different scenarios have been proposed to account for C$_{60}$ formation in these environments, including processing of grains by shocks or high-energy ions \citep{Scott1997, Otsuka2014, Bernal2019}, and UV photo-processing of large polycyclic aromatic hydrocarbons (PAHs) \citep{Berne2012, Zhen2014, Berne2015, Berne2016}. The most energetic conditions are expected to favor the most stable species, buckminsterfullerene C$_{60}$. Recent progress has been made in the laboratory to mimic the gas phase chemistry in the environment of evolved stars using the dedicated Stardust machine \citep{Martinez2020Prevalence}. However, even with these improved methods, no fullerenes could be formed from reactivity of a carbon vapor (C/C$_2$) with H$_2$ and C$_2$H$_2$ \citep{Martinez2020Prevalence, Santoro2020chemistry}. This is likely due to the relatively low gas temperature during aggregation in the Stardust machine, which is estimated to be $<$1,000 K. Indeed, it has been shown in the laboratory that the formation of fullerenes requires temperatures above 3500~K in a gas-phase condensation experiment \citep{Jaeger2009}. The “shrinking hot giant” pathway has been demonstrated by quantum chemical molecular dynamics simulations of the dynamics of carbon vapor \citep{Irle2006}. In this scenario, the formation of fullerenes results from the assembly of hot polyyne chains followed by shrinking of the vibrationally excited cages towards the sizes of C$_{60}$ and C$_{70}$. In astrophysical environments, gas-phase formation of C clusters, fullerenes, and graphite grains has been modeled by Clayton et al. 
in the C + O cores of core-collapse supernovae \citep{Clayton2001, Clayton2018}, although the authors limit their model to temperatures below 2000~K. \cite{Cherchneff2000Dust} have proposed the formation of the same species in the pre-supernova stage of Wolf Rayet (WR) stars. These stars have different phases including a carbon-rich phase (WC stage) that can produce very large amounts of carbon dust \citep{Crowther2003Dust}. The extreme conditions encountered in these environments, combined with a medium very rich in carbon, could thus be favorable to the formation of C clusters and fullerenes \citep{Cherchneff2000Dust}. However, until now no link has been established between this carbon chemistry modeled in massive stars and the observed C$_{60}$. Because of their stability under UV irradiation, C$_{60}$ in particular, and fullerenes in general, are good candidates to survive the journey from stars and the interstellar medium to our Solar System (SS) and would be naturally incorporated into SS solid bodies. Becker et al. searched for fullerenes in two primitive carbonaceous chondrites, Allende and Murchison \citep{Becker1994, Becker1997, Becker1999Higher, Becker2000}. The authors combined chemical extraction to increase the concentration of fullerenes with one-step laser desorption ionization (LDI) mass spectrometry. They reported first the detection of C$_{60}$ and C$_{70}$ \citep{Becker1994} and then that of larger fullerenes. However, the level of detection was found to be highly variable between samples of the same meteorite \citep{Becker1997}. Moreover, their detection could not be confirmed by other groups \citep{Buseck2002}. \cite{Hammond2008Identifying} demonstrated that the detected fullerenes were not intrinsic to the samples but generated by the one-step LDI process used to analyze the samples. A similar conclusion was reached in a recent study of insoluble organic matter in the Paris meteorite by \cite{Danger2020}. 
\cite{Buseck2002} concluded that the detection of fullerenes in meteorites is a difficult task that must combine careful molecular extraction, analysis close to the limits of sensitivity, and exquisite care to avoid contamination at very low concentration levels. In the experiments mentioned above, a single laser was used to perform both desorption and ionization (one-step LDI). In the two-step laser desorption laser ionization mass spectrometry (L2MS) technique, desorption and ionization are separated in both time and space using two different lasers. Compared to one-step LDI, this technique provides better control of the laser desorption fluence and increases sensitivity by one to two orders of magnitude. The desorption laser can be used at a lower fluence, thereby limiting chemical interactions in the desorbed plume, thus circumventing the ambiguities of LDI analysis raised by previous measurements by Becker et al. and also described in \cite{Danger2020}. The new experimental setup AROMA (Aromatic Research of Organics with Molecular Analyzer) combines L2MS with ion trapping. We have demonstrated the ability of AROMA to detect PAHs and fullerenes (C$_{60}$) with high sensitivity (down to 100 femtograms per laser desorption shot) and with almost no fragmentation \citep{Sabbah2017Identification}. The PAH distribution we obtained from the Murchison meteorite is consistent with previous studies \citep{Callahan2008} but with improved mass resolution and sensitivity, particularly for species with m/z greater than 200. Combining the double bond equivalent (DBE) method and collision-induced dissociation experiments, we identified the dominant peak from the Murchison analysis at m/z=202.07 as pyrene. Series of methylated species were also identified. In this earlier study, no C clusters or fullerenes were detected. L2MS analysis was previously applied to nine samples of the Almahata Sitta (AhS) polymict ureilite meteorite \citep{Sabbah2010Polycyclic}. 
The authors confirmed the presence of PAHs in AhS and their dispersion among a variety of clast types, and concluded that this dispersion results from impacts on the fragmented ureilite parent body (UPB). Interestingly, some of the recorded m/z peaks indicated the possible presence of C clusters or aromatics (not the usual standard PAHs), but further assignment was not possible due to the lack of sufficient mass resolution. These observations motivated us to undertake a detailed systematic analysis of 13 AhS samples using the new AROMA L2MS setup to explore chemical diversity across different samples of different lithologies. The AhS meteorite was recovered after the impact on Earth of the asteroid 2008 TC$_3$. This exceptional event was observed in space and followed until the final impact over the Nubian desert in October 2008 \citep{Jenniskens2009impact}. The collected samples were found to be a polymict ureilite. This is the first witnessed polymict ureilite fall, with samples collected shortly after the fall. The samples were recovered from a well-isolated environment with minimal terrestrial contamination. They consist primarily of ureilites of various types, but also include a variety of chondrites: enstatite chondrites, ordinary chondrites, and carbonaceous chondrites (CC). The ureilites originated from the UPB, a carbon-rich planetesimal that formed in the inner SS, based on its place in the SS nucleosynthetic isotope dichotomy \citep{Warren2011}, at $<$1 Ma after the earliest solids, i.e., calcium–aluminium-rich inclusions (CAI), formed \citep{Wilson2008Thermal}. The UPB experienced rapid heating and partial differentiation, but was disrupted by a major impact at $\sim$5--5.4 Ma after the CAI, before complete cooling \citep{Downes2008Evidence, Goodrich2015Origin, Goodrich2004Ureilitic,Herrin2010Thermal}. 
Subsets of fragments produced by the disruption reassembled by self-gravity \citep{Michel2015Selective, Michel2015Collisional}, forming daughter bodies from which the known ureilites derive \citep{Goodrich2015Origin}. These bodies subsequently developed complex regoliths including foreign remnants of impacting chondritic meteorites, some of which are preserved as the chondritic stones from AhS. Polymict ureilites, including AhS, are samples of these regoliths. In this article, we report the detection of fullerenes in clasts of the polymict ureilite AhS, which is the first strong evidence for the presence of such carbonaceous species in a meteorite. The experimental setup and methodology are described in Section~\ref{sec:experimental}. Section~\ref{sec:Results} demonstrates the detection of fullerenes in 7 of the 13 samples of the AhS meteorite. Using the same experimental conditions, we were unable to detect fullerenes in the Murchison and Allende CC bulk samples. In Section~\ref{sec:discussion}, we propose two scenarios to explain the presence of fullerenes in the AhS meteorite: an origin within the Solar System (i.e. from shocks to the surface of the UPB) or an interstellar heritage. We conclude in Section~\ref{sec:conclusion}. \section{Experimental techniques} \label{sec:experimental} \subsection{Sample preparation} \label{subsec:sample} For this study, 13 clasts from the AhS meteorite were used to track chemical diversity among them. Seven of them (AhS \#04, \#22, \#24, \#27, \#28, \#38, and \#48) are ureilites with a high concentration of aggregates of carbonaceous material (up to 500 microns in diameter) composed of graphite and minor diamonds \citep{Jenniskens2009impact,Zolensky2010}. The remaining six samples are enstatite and ordinary chondrites, which are classified as follows: AhS \#41 (EL6), \#58 (H4-5), \#1001 (Metal+Sulfide), \#1002 (EL4-5), \#1054 (LL3-4), and \#2012 (EH4-5). In addition, a bulk sample from the interior of the Allende CC was used. 
For each analysis, fresh fragments of a few mg were crushed using a mortar and pestle. The powder was then attached to a 10\,mm disc of stainless steel with conductive copper tape. The disc holding the powder was then mounted on the sample holder to be inserted inside the instrument via an automated vacuum interlock system. After introduction, the sample was positioned and moved along two axes using a motorized XY-manipulator with a minimum step of 100\,$\mu$m. It was demonstrated that the copper tape did not produce a background signal that could interfere with the analysis. \subsection{The AROMA setup} \label{subsec:AROMA} AROMA, the Astrochemistry Research of Organics with Molecular Analyzer, is a unique experimental setup developed to study, with micro-scale resolution, the carbonaceous molecular content of cosmic dust analogues and meteoritic samples \citep{Sabbah2017Identification}. Mass spectrometry data and chemical analysis tools for all studied samples are available to the public in the AROMA database at \url{http://aroma.irap.omp.eu}. AROMA consists of a microprobe laser desorption ionization source and a segmented linear quadrupole ion trap (LQIT) connected to an orthogonal time of flight (oTOF) mass spectrometer, as shown in Figure~\ref{fig:aroma}. Ions are produced at a low background pressure (10$^{-6}$\,mbar) by performing a one- or two-step LDI. The laser-generated ions are collimated by a set of lenses at high DC voltage and are thermalized in a radio-frequency (RF) octapole, which maximizes ion transmission and reduces fragmentation that occurs in a typical LDI source. The ions are stored in the LQIT and processed if desired. A set of RF-DC optics is used to transfer the ions to high vacuum and finally they are monitored on an oTOF mass analyzer equipped with a two-stage reflectron and a fast microchannel plate (MCP) detector. The total mass spectrum of an experiment is the superposition of multiple scans recorded over a given m/z range. 
The amplitude and RF frequency applied to the LQIT electrodes are optimized to maximize the ion signal in this mass range. Each scan is the result of 50 laser shots. The interaction of the desorption laser with the sample is controlled by a mechanical shutter. The laser hits the sample once every two seconds. The sample is then moved so that the next laser shot hits a fresh spot. In order to perform LDI, AROMA uses a pulsed (5\,ns) infrared (IR) laser (Nd:YAG at 1064\,nm) focused on the sample with a spot size of 300\,$\mu$m to cause rapid and localized heating, promoting thermal desorption rather than decomposition. The typical IR laser desorption fluence used in this work is \emph{F}$_{des} = 300$\,mJ/cm$^{2}$ (60\,MW/cm$^{2}$ in irradiance). This low fluence is one to two orders of magnitude lower than the explosive vaporization threshold that leads to plasma formation during ablation processes \citep{Hoffman2014effect}. The interaction of the laser with the sample is able to efficiently produce ions from the metallic phases, which are present in minerals, salts and the matrix of natural samples \citep{Jayasekharan2013Elemental,Koumenis1995Quantitation,Shi2016Recent,Tulej2011miniature}. In this step, C clusters can also be generated from the decomposition of a pure carbonaceous phase \citep{Sedo2006}. In the L2MS analysis, a pulsed (5\,ns) ultraviolet (UV) laser (fourth harmonic of an Nd:YAG at 266\,nm) perpendicularly intercepts the expanding plume. This leads to the selective ionization of species that can undergo (1+1) resonance-enhanced multiphoton ionization (REMPI), which is the case of fullerenes and aromatic species. The laser ionization fluence used in this work is \emph{F}$_{ion} =20$\,mJ/cm$^{2}$ (4\,MW/cm$^{2}$ in irradiance). 
\subsection{Element and carbonaceous molecular family analysis} \label{subsec:DBE} For each detected m/z peak with a signal-to-noise ratio (S/N) greater than 10, a chemical formula is assigned using mMass software, an open source mass spectrometry tool \citep{Strohalm2010}. The chemical formula assignment is performed with a typical accuracy of 0.01 between measured and calculated m/z values. Peaks corresponding to various metallic elements (most intense peaks: Fe, K, Na, Ca, Al, Cr, \ldots) seen as atomic ions or small clusters (e.g., Fe$_2$) have been identified thanks to their mass defect and the high mass resolution provided by AROMA. This family is referred to hereafter as “metals” and reflects the different silicate phases found in AhS. As is well known in LDI mass spectrometry, signal from elements, in particular Na and K, is ubiquitous in natural and standard laboratory samples. For the molecular family analysis, we calculate in addition the double bond equivalent (DBE) \citep{Marshall2008Petroleomics} for each molecular formula, which is defined as: \begin{equation} DBE = C\# - H\#/2 + N\#/2 + 1, \end{equation} where C\#, H\#, and N\# are the numbers of C, H, and N atoms in the molecule (in our analysis we only include C and H because the mass resolution of the AROMA setup is not sufficient to disentangle the mass of N from that of CH$_2$). The DBE is representative of the unsaturation level of the molecules and thus corresponds to a direct measure of their aromaticity. It is equal to the number of rings plus double bonds involving carbon atoms (as each ring or double bond results in the loss of two hydrogen atoms). In the recorded mass spectra, only a few species containing oxygen, not exceeding C$_7$, could be firmly identified. The DBE calculation allows us to sort the detected pure carbon and hydrocarbon ions into different molecular families \citep{Martinez2020Prevalence, Sabbah2020Molecular}. 
Pure carbon species are sorted into C clusters (C\# $<$ 30) and fullerenes (C\# $\geq$ 30). The transition between C clusters and fullerenes is somewhat arbitrary. \citet{vonHelden1993} have shown that fullerenes appear in the C\# range of 30 to 40, but this does not mean that all species in this range are actually fullerenes. Hydrocarbons are classified into three categories: HC clusters, PAHs, and aliphatic species. This classification is done using empirical factors and DBE limits established for hydrocarbon mixtures in complex natural organic matter \citep{Hsu2011Compositional, Koch2006From, Lobodin2012Compositional}. Hydrocarbons with 0.5 $\leq$ DBE/C\# $\leq$ 0.9 are considered to be PAHs. HC clusters and aliphatic species are located at DBE/C\# $>0.9$ and $<0.5$, respectively. \section{Results} \label{sec:Results} \subsection{Detection of fullerenes in the AhS meteorite} \label{subsec:Fullerenes} Figure~\ref{fig:fullerenes_AhS} gathers the data that were recorded in the range m/z=[500,1300] for the seven ureilites. We could not detect any ion signal in this range for the other clasts. Figure~\ref{fig:fullerenes_AhS} shows that species from C$_{42}$ to about C$_{100}$ (C$_{80}$ for \#22 and \#24) are observed with a mass spacing of 24 m/z (C$_{2}$), which is characteristic of fullerene series. Figure~\ref{fig:figHPOG} shows the mass spectra corresponding to the molecular content associated with a highly ordered pyrolytic graphite (HPOG) sample, as revealed by one-step LDI. Different values of the desorption laser fluence were used. For \emph{F}$_{des} < 300$\,mJ/cm$^{2}$ almost no signal is observed (except two peaks at the noise level corresponding to C$_{10}$ and C$_{12}$). At \emph{F}$_{des} = 300$\,mJ/cm$^{2}$ a distribution of C clusters is observed, extending to C$_{28}$. Increasing the fluence to \emph{F}$_{des} =1$\,J/cm$^{2}$ leads to higher peak intensities and a distribution extending to C$_{35}$. 
These observations indicate that ablation of carbonaceous material and chemistry within the plume can produce C clusters but not fullerenes. We expect similar results for other carbonaceous materials \citep[e.g. diamond;][]{Sedo2006}. We also studied the evolution as a function of laser conditions of the size distribution of fullerenes in the AhS~\#04 sample (see Fig.~\ref{fig:fullerenes_cal}). In this sample, we applied the two-step LDI scheme with wavelengths and fluences typical of those used to analyze PAHs and fullerenes in complex hydrocarbon mixtures without significant fragmentation \citep{Faccinetto2011High-sensitivity, Homann1998Fullerenes, Lykke1993Molecular}. The fullerene ion signals were summed considering three size ranges covering medium ($42\leq C\# \leq58$), large ($60\leq C\# \leq70$), and very large ($C\#>70$) sizes. Figure~\ref{fig:fullerenes_cal} shows that the fullerene size distributions remain comparable within a factor of typically 2, independent of the laser conditions. These calibration measurements support the conclusion that fullerenes are intrinsic to the samples. By examining the relative fullerene ion signal among the 7 AhS ureilite samples (Fig.~\ref{fig:fullerenes_cal}), we conclude that the fullerene size distribution is comparable in all samples, with the exception of an obvious lack of very large fullerenes in AhS~\#22 and \#24, in which the detected species only go up to C\#=80. We cannot conclude whether this difference reflects an intrinsic difference in fullerene sources; it may rather indicate the difficulty of extracting very large fullerenes from bulk samples and the possible role of sample heterogeneity. \subsection{Classification in molecular families} \label{subsec:families} Figure~\ref{fig:lowMS} shows that hundreds of peaks are detected in the low m/z mass range [10-500] of the thirteen AhS samples. These peaks include a large diversity of carbonaceous species. 
We sorted these molecules into families by applying the DBE method (see Section~\ref{subsec:DBE}). Examples of DBE/C\# plots are provided in Figure~\ref{fig:DBE}. The families observed include aliphatics, HC clusters, PAHs, C clusters, and fullerenes as described in \cite{Sabbah2020Molecular}. The aliphatic family is not considered because we could detect only a few species falling into this category, five species in AhS~\#04 and three in AhS~\#1001, all at C\#$\leq 7$. The sums of peak intensities for the different families are presented in Figure~\ref{fig:family_int} and compared to the sum of peak intensities of the metals. Note that Figure~\ref{fig:family_int} shows that the intensity of metals in non-ureilites is higher compared to ureilites. Conversely, the ion signal of carbonaceous species is much weaker in non-ureilites, with the exception of PAHs, which show similar sums of peak intensities in both clast types, and of HC clusters in AhS~\#1001. The observation of PAHs in the porous chondritic components of the asteroid was previously explained as being due to the mobilization of organic matter during impacts \citep{Sabbah2010Polycyclic}. The PAHs detected are relatively small, with a maximum of C\#=24, and should be much easier to evaporate than the larger fullerenes. Similar to the fullerene analysis (Fig.~\ref{fig:fullerenes_cal}), we summed the C cluster peak intensities for the two size categories, small ($8\leq C\# \leq 17$) and medium ($18\leq C\# \leq 29$). As shown in Fig.~\ref{fig:cc_AhS}, the ion signal of these species is large and generally exceeds the ion signal of fullerenes by a factor of 10. The contribution to the ion intensity of the family with $30\leq C\# \leq 40$ (which contains both large C clusters and small fullerenes) is small. Adding this contribution to the C clusters family or the fullerenes family, as was done in Fig.~\ref{fig:family_int}, should therefore not affect the results. 
The observed variations in the sum of peak intensities with cluster sizes (Fig.~\ref{fig:cc_AhS}) may indicate different origins for small and medium-sized clusters. We have seen that, under our experimental conditions, C clusters can be formed by the interaction of the IR laser with a solid carbonaceous material (e.g. HPOG) whereas fullerenes cannot (cf. Section~\ref{subsec:Fullerenes}). This does not exclude a contribution from C clusters trapped in molecular form in the samples, as suggested in our earlier work on laboratory analogs of stardust \citep{Martinez2020Prevalence}. The different possible origins of the carbon cluster ion signal could explain why this signal is not closely correlated with the fullerene ion signal. If fullerenes are associated with only one specific carbonaceous phase among several, the fullerene ion signal would depend on the concentration of that phase and its accessibility by the desorption laser to achieve extraction of its molecular content. Therefore, the apparent absence of fullerenes in non-ureilites could simply be a sensitivity problem. \subsection{The lack of evidence for the presence of fullerenes in the Murchison and Allende meteorites}\label{subsec:fullerenes_allende} Previously reported detections of fullerenes in the Allende \citep{Becker1994} and Murchison meteorites \citep{Becker2000} have been questioned both because of the analytical techniques used \citep{Hammond2008Identifying} and because of the non-detection of these species by other groups \citep{Buseck2002}. We did not detect fullerenes or C clusters in our previous analysis of a bulk sample of Murchison with the AROMA setup \citep{Sabbah2017Identification}. To complete this result, we analyzed a powder from the interior of the Allende meteorite. The molecular composition of the carbonaceous species (see mass spectrum in Fig.~\ref{fig:Allende}) is dominated by PAHs and a few m/z peaks attributed to HC clusters (4 peaks) and aliphatics (3 peaks). 
There is no fullerene signature in the mass spectrum. The PAH distribution peaks around m/z=202.07, which corresponds to pyrene and its isomers. It is found to be similar to that previously detected in several samples from the interior of Allende \citep{zenobi1989}. Figure~\ref{fig:dbe_c_meteorites} shows the DBE/C\# values in the C\# range of 5 to 35 for the Murchison, Allende, AhS~\#1002, and AhS~\#04 samples. The molecular families of these samples are clearly different, suggesting possible differences in the initial chemical reservoirs or in the processes undergone by their parent bodies. Each DBE/C\# pattern is indeed unique. PAHs dominate in three of the four selected samples, but they show some differences. Murchison contains a substantial fraction of methylated PAHs, while Allende contains more pyrene and isomers. In AhS~\#04, the ion signal is comparable for pyrene and indene, the smallest PAH (see Fig.~\ref{fig:dbe_c_meteorites}). Using the DBE/C\# values, we sorted the carbonaceous molecules into families (for m/z greater than 100) and the results obtained for the four samples of Fig.~\ref{fig:dbe_c_meteorites} are presented in Fig.~\ref{fig:family_meteorites}. The total PAH ion signal in Murchison is the highest among the 4 samples, due to the greater variety of PAHs detected (Fig.~\ref{fig:dbe_c_meteorites}). The fullerene ion signal in AhS~\#04 is lower than the PAH ion signal in Allende and AhS~\#1002 by a factor of 2-3, and than that in Murchison by a factor of 5-6. It is accompanied by a high signal of C clusters, at the same level as that of PAHs in Allende and AhS~\#1002. \section{Discussion} \label{sec:discussion} \subsection{About the concentrations of PAHs, C clusters and fullerenes}\label{subsec:quantification} Quantifying species abundance as a function of ion signal remains a challenge in LDI techniques \citep{Elsila2004}. 
The ion signal depends on several parameters, including the desorption and ionization efficiencies of the associated species, the nature of the analyte, the properties of its bonding with the surface, and the nature of the substrate itself. Signal dispersion can therefore be attributed to a combined effect of sample heterogeneity and the fact that mainly species from the sample surface are desorbed. This is different from destructive techniques such as laser-ablation ICP-MS, in which all material from the laser crater is analyzed \citep{Xiao2021}. Relating the ion signal to abundances (on the surface) therefore requires calibration tests using several internal standards \citep{Elsila2004} and samples of known concentrations \citep{Koumenis1995Quantitation}. The inferred mass concentration for PAHs in the Murchison meteorite ranges from 15 to 28 ppm, with an average of 22 ppm \citep{Sephton:2002bc}. Because the Murchison and AhS samples were analyzed with AROMA under the same experimental conditions and with similar sample masses, the sums of peak intensities for PAHs from the two experiments can be directly compared (see Figure~\ref{fig:family_meteorites}). Only two samples show a value slightly higher than the Murchison value; six others are lower by a factor of about two, and two by a factor of up to 20. Considering the number of factors that can affect the absolute intensities \citep{Elsila2004}, we can conclude that the PAH mass concentrations in AhS are lower than, but comparable to, the Murchison value. In a previous study on soot, we demonstrated the ability to identify fullerene species in nascent soot particles produced in a slightly sooting ethylene/air premixed flame \citep{Sabbah2020Molecular}. In addition, we were able to follow the evolution of carbonaceous molecular families, including fullerenes, by analyzing soot collected at different heights above the burner. 
In this study, we have shown that AROMA can quantitatively trace the thermal processing of large PAHs (C\#$\geq$50) leading to the formation of fullerenes in this hydrocarbon flame and that the detected molecules mainly originate from the surface of soot particles. This suggests that the efficiency of AROMA in detecting both molecular species (i.e. large PAHs and fullerenes) is similar \citep[at least when associated with soot particles;][]{Sabbah2020Molecular}. Assuming this is the case for the AhS samples, the mass concentrations of fullerenes can be deduced by considering that the summed ion signal for fullerenes and PAHs is proportional to their respective concentrations. Therefore, the mass concentration of fullerenes is most probably comparable to that of PAHs in AhS, i.e. a few ppm (after correcting for the ratio between the average mass of PAHs (m/z$\sim$212) and that of fullerenes (m/z$\sim$740)). However, this value should be taken with caution because PAHs and fullerenes are distributed in different phases within the AhS meteorite, PAHs being quite widespread while fullerenes are supposed to be associated with a specific solid carbon phase. \subsection{Scenarios of fullerene formation in AhS}\label{subsec:origin} In the following, we discuss the pros and cons of possible scenarios that could explain the presence of fullerenes in the AhS meteorite. One possible scenario for the presence of fullerenes in AhS, which is suggested by the unique history of the ureilite parent asteroid \citep{Goodrich2015Origin}, is the introduction by a primitive (CC-like) impactor that was responsible for the catastrophic disruption of the UPB. The timing of the UPB disruption led to the suggestion \citep{Yin2018} that the impactor was an outer SS body, associated with a large-scale migration of outer SS bodies into the inner SS driven by the growth and/or migration of the giant planets during the gaseous disk phase \citep{Walsh2012Populating}. 
Some impactor material may have been added to the ureilitic daughter bodies formed at this time and then redistributed to other clast types by subsequent impact gardening of the regolith. However, we have only observed fullerenes in ureilites and have not been able to detect fullerenes in two well-known CCs, Murchison and Allende. An alternative scenario is the possibility that the fullerenes were produced in the ureilites themselves sometime during their history. The UPB was a carbon-rich asteroid that experienced igneous processing involving temperatures up to $\sim$1250-1300$^{\circ}$C \citep{Collinet2020}. During this processing, graphite was formed from the primitive carbon-rich materials that accreted onto the body. However, there is no known mechanism by which the fullerenes could have formed during igneous processing. Subsequently, the UPB experienced a number of shock events, in particular the large impact that resulted in a major disruption of the parent body \citep{Goodrich2015Origin}; such events offer a potential mechanism for the formation of fullerenes. It has been proposed that the diamonds present in most ureilites formed by transformation of graphite during this shock \citep[e.g.][and references therein]{Lipschutz1964, Nabiei2018}. A recent laboratory study \citep{Popov2020} reports a zone of instability of diamond, for a range of temperatures, at pressures of 55 to 115~GPa. This instability leads to the transformation of diamond into fullerene-like onions, which consist of multiple shells whose number decreases with temperature to a minimum value of 2-3 at a temperature of 2400~K (2127$^{\circ}$C) \citep{Popov2020}. This suggests the possibility that fullerenes in ureilites formed almost immediately after the formation of diamonds in an impact environment. However, it is unlikely that the extreme conditions necessary for the creation of these fullerene-like onions were achieved in ureilites. 
\cite{Nestola2020} have shown that diamonds in ureilites could have formed by impact shock events at much lower pressure and temperature. Furthermore, onion species formed by this mechanism differ from molecular fullerenes as shown by Raman spectroscopy \citep{Popov2020}. The idea that fullerenes could have formed by hypervelocity impacts in space has been investigated by the analysis of impacts collected on the Long Duration Exposure Facility (LDEF). Fullerenes were detected in a single LDEF impact crater, which motivated experimental simulations \citep{Radicati1994}. Although these experiments were limited to impacts at a velocity of 6\,km\,s$^{-1}$, the authors concluded that the fullerenes observed in the LDEF crater are unlikely to have formed during a hypervelocity impact. Since this crater also contained chondritic elements, these authors suggested that the fullerenes were instead intrinsic to the impacting micrometeoroid. Another scenario would be that these fullerenes formed much earlier and were inherited from the interstellar medium at the time of SS formation. The abundance of carbon in the form of C$_{60}$ in the diffuse interstellar medium is estimated to be on the order of a few 10$^{-4}$ to a few 10$^{-3}$ \citep{Berne2017}. This value is substantially lower than the abundance value of 10$^{-1}$ for interstellar PAHs, which are the carriers of the aromatic infrared bands. Interstellar PAHs are expected to be large, typically containing more than 50 C due to processing by UV photons \citep[e.g.][]{Montillaud2013}. Such large PAHs are not detected in our study. We therefore expect the concentration of interstellar PAHs and/or fullerenes to be very low in AhS (as in Murchison and Allende). Another possibility is that these species are included in a phase from which they cannot be extracted by the AROMA desorption laser. 
The small PAHs (less than 30 carbon atoms in size) detected in AhS (as well as in Murchison and Allende) are more likely products of active cold chemistry in the dense molecular clouds from which the SS formed. This suggestion of \cite{Woods2007} was supported by the recent detection of small PAHs in the cold prestellar core TMC-1, in particular indene C$_9$H$_8$ \citep{Cernicharo2021,Burkhardt2021}, as well as by the availability of chemical pathways to form these small aromatic species through neutral-neutral reactions at low temperature \citep{Cernicharo2021, Cernicharo2021b}. Finally, a local interstellar heritage has been proposed by several authors to account for anomalies in short-lived radionuclides (SLRs) such as $^{26}$Al \citep[e.g.][]{Podosek2005Overview, Huss2009}. More precisely, the idea arose that the SS prenatal cloud was seeded by small dust grains produced at the end of the life of nearby very massive stars \citep{Arnould2006production, Dauphas2010Neutron-rich,Gaidos200926Al, Gounelle2012, Tatischeff2010}. It is interesting to note that production of fullerenes at the end of the life of very massive stars, at the presupernova stage of WR stars or at the supernova stage, has been suggested by modelers \citep{Cherchneff2000Dust,Clayton2001, Clayton2018}, as discussed in Sect.~\ref{sec:intro}. The model by \citeauthor{Clayton2018} indicates that C-rich regions are strongly enriched in $^{12}$C, although processes leading to mixing with $^{13}$C may occur. This would help to understand the fact that, although some of the presolar grains of SN origin are strongly enriched in $^{12}$C, a significant fraction of them have an isotopic ratio close to the solar value \citep{Lodders2005Presolar,Croat2003}. We derived a value of 0.68$\pm$0.04 for the $^{13}$C$^{12}$C$_{59}$/$^{12}$C$_{60}$ ratio in the 7 AhS samples, to be compared with the value of 0.62$\pm$0.04 derived from molecular analysis of terrestrial samples. The two ratios are therefore similar. 
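For a purely statistical (binomial) distribution of $^{13}$C over the 60 equivalent carbon sites of C$_{60}$, the expected $^{13}$C$^{12}$C$_{59}$/$^{12}$C$_{60}$ peak ratio is $60\,p/(1-p)$, where $p$ is the $^{13}$C fraction. A short check, assuming a terrestrial $^{12}$C/$^{13}$C $\approx 89$ (i.e. $p \approx 1/90$), gives $\approx 0.67$, consistent with the measured values of 0.62-0.68:

```python
from math import comb

def singly_substituted_ratio(n_carbons, p13):
    """Ratio of the singly 13C-substituted isotopologue peak to the all-12C
    peak, for a binomial distribution of 13C over n equivalent carbon sites.
    Equals P(k=1)/P(k=0) = n * p / (1 - p)."""
    p1 = comb(n_carbons, 1) * p13 * (1 - p13) ** (n_carbons - 1)
    p0 = (1 - p13) ** n_carbons
    return p1 / p0

p_terrestrial = 1 / 90.0   # assumed 13C fraction for 12C/13C ~ 89
print(round(singly_substituted_ratio(60, p_terrestrial), 2))  # 0.67
```

A measurably lower ratio would thus signal $^{12}$C enrichment, which is why the 0.68 vs 0.62 comparison above is close to, but does not establish, a non-terrestrial carbon reservoir.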
A systematic study would be needed to refine these values and extend them to the entire fullerene population. However, we emphasize that the inferred $^{12}$C/$^{13}$C ratio may be underestimated due to a possible contribution from fullerane species \citep{Becker1997}. At this stage, a scenario in which the fullerenes present in AhS originate from massive stars can therefore not be demonstrated on the basis of these isotopic ratios or other arguments. But neither can it be contradicted. This scenario has obvious advantages from an astrochemical perspective. It could rationalize why C$_{60}$ is observed in some astronomical environments and not in others. Its detection is mainly in evolved stars, but in a limited number of them \citep[a few \% of planetary nebulae;][]{Otsuka2014} including H-rich but not H-poor circumstellar environments \citep{GarciaHernandez2011}. C$_{60}$ has also been detected in the environment of a number of massive young stellar objects \citep{Roberts2012Detection, Sellgren2010}. This diversity is difficult to rationalize and may indicate star-forming regions associated with the shell of a WR bubble \citep{Dwarkadas2017Triggered}. \section{Conclusion}\label{sec:conclusion} This work was motivated by the previous detection of PAHs and molecules containing only carbon in the AhS meteorite. Using the L2MS technique on bulk samples, we showed that all AhS samples of ureilite type exhibit a large size distribution of fullerenes. We performed different calibration experiments to demonstrate that these fullerenes are indeed intrinsic to the samples and are probably associated with a carbon phase that also produces C clusters under exposure to the desorption laser. The inferred concentration of fullerenes is most likely on the order of a few ppm. Using the same experimental conditions, we were unable to detect fullerenes in the Murchison and Allende carbonaceous chondrites. 
Further investigation of the carbonaceous phase in which the fullerenes are found is essential to improve the sensitivity of the detection and to constrain the formation scenarios of these fullerenes. Since the UPB was disrupted by a giant impact and the re-accumulated debris then experienced further smaller impacts, it is possible that the fullerenes in AhS originated from synthesis driven by impact shocks. However, there are no experimental data demonstrating that this can be achieved. We put forward the alternative possibility that the fullerenes detected in this work could be of interstellar heritage. The remarkable thermal stability of fullerenes, both in the gas phase and in solid particles, for temperatures up to at least 2500~K \citep{Sommer1996,Biennier2017} may have allowed these primitive species to survive the igneous processing conditions of the parent body as well as the impact shock events experienced by the ureilites. These fullerenes may have formed in the late phases of a massive (WR) star that ended its life near the parent molecular cloud of the SS. The ideas discussed here deserve further investigation. In particular, efforts to search for fullerenes, especially in the most primitive meteorites, should be renewed. Studying carbon-enriched phases to look for fullerenes could still reveal these species in primitive CCs and confirm previous work by Becker et al., who used such a strategy in their measurements. The detection of fullerenes may have implications not only for SS formation but also for our understanding of the origin of fullerenes in astrophysical environments, a question widely debated in the literature and one that will certainly progress with the upcoming James Webb Space Telescope observations. 
\section*{acknowledgments}\label{sec:acknowledgments} The authors thank the colleagues who kindly provided some of the samples used in this study, most notably Jes\'us Mart\'inez Fr\'ias and Jos\'e Cernicharo for the Allende sample, and Jos\'e \'Angel Mart\'in-Gago for HPOG. Funding: The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) ERC-2013-SyG, Grant agreement N$^{o}$610256 NANOCOSMOS. MC also acknowledges support from La Region Occitanie, Grant N° 15066466. CAG and PJ are supported by NASA Emerging Worlds program grant 80NSSC19K0513 and its pilot study 00A062101. \bibliography{AhSfullerenes}{} \bibliographystyle{aasjournal}
Title: Simulating the collapse of rotating primordial gas clouds to study the survival possibility of Pop III protostars
Abstract: It has been argued that the low-mass primordial stars ($m_{\rm Pop III}\,\leq 0.8\,M_\odot$) are likely to enter the main sequence and hence possibly be found in the present-day Galaxy. However, due to limitations in existing numerical capabilities, current three-dimensional (3D) simulations of disk fragmentation are capable of following only a few thousands of years of evolution after the formation of the first protostar. In this work we use a modified version of {\sc Gadget}-2 smoothed particle hydrodynamics(SPH) code to present the results of non-linear collapse of the gas clouds associated with various degrees of initial solid body rotation (parameterized by $\beta$) using a piecewise polytropic equation of state. The 3D simulations are followed till the epoch when 50$M_{\odot}$ of mass has been accreted in protostellar objects, which is adequate enough to investigate the dynamics of the protostars with the surrounding gaseous medium and to determine the mass function, accretion rate and survival possibility of these protostellar objects till present epoch. We found that evolving protostars that stay within slow-rotating parent clouds can become massive enough due to accretion in the absence of radiative feedback, whereas $10-20 \%$ of those formed within a fast-rotating clouds ($\beta \ge 0.1$) have the possibility to get ejected from the gravitational bound cluster as low mass stars.
https://export.arxiv.org/pdf/2208.10789
\title{Simulating the collapse of rotating primordial gas clouds to study the survival possibility of Pop III protostars} \author{Shubham P. Raghuvanshi} \affil{Harish-Chandra Research Institute (HRI), Chhatnag Road, Jhusi, Prayagraj, 211019, Uttar Pradesh, India} \author[0000-0002-6903-6832]{Jayanta Dutta} \affil{Harish-Chandra Research Institute (HRI), Chhatnag Road, Jhusi, Prayagraj, 211019, Uttar Pradesh, India} \keywords{--Pop III stars -- Smoothed particle hydrodynamics -- {\sc Gadget}-2 -- sink particles} \section{Introduction} \label{sec:intro} The age of the Universe and the expected time at which the very first stars formed make direct observations a difficult prospect \citep[see recent surveys, e.g.,][]{frebel19,sdb20,suda21,hartwig22, finkelstein22}. Theoretical predictions of the $\Lambda$CDM model show that the entire process is led by the gravitational collapse of dark matter halos as a consequence of hierarchical structure formation \cite[see the latest results, e.g.,][]{springel20,wang20,bohr20,may21,latif22}. At the time of collapse the primordial gas in halos is very hot and remains spread out due to its high pressure \citep{barrow17,chon18,barkana18}. The gas cools by radiating away energy and collapses to form a thin rotating circumstellar disk that grows over time and fragments due to gravitational and spiral-arm instabilities \citep{iy20,wollenberg20,chiaki22}. Some of the fragments that go on to become stars are not isolated and continue to interact with the surrounding gas. This interaction leads to an increase in the mass of the fragments as well as to changes in their orbits. This raises a very basic question: what is the final fate of these evolving fragments in the cluster? Do they merge with the central star \citep{kulkarni19,klessen19}, or do they move away from the cluster after their dynamical interaction with each other and with the surrounding gas \citep{sharda19,sugimura20}? 
It may happen that a fraction of them either become massive due to rapid accretion \citep{umeda16,woods17,fukushima20}, such that the resulting stars explode as (pair-instability) supernovae \citep{whalen14,welsh19,jeon21} or collapse to black holes \citep{mr01,matsumoto15}, or may lead to the formation of a supermassive black hole \cite[SMBH:][]{alister20,herrington22}. There might also exist a fraction that remain as low-mass protostars and hence can survive to the present day, provided their mass remains as low as 0.8 $M_{\odot}$ \citep{marigo01,ishiyama16,susa19,dutta20}. Thus, the mass function of these fragments remains unclear and needs more investigation \cite[see review by][]{haemmerle20}. While it is possible to run detailed simulations of a few systems, it is difficult to explore a wide range of parameter space with this approach. When the numerical integration over the density regime is computed beyond the formation of the first protostellar core, the collapse tends to be chaotic and highly nonlinear, and it becomes difficult to follow the dynamical system for a long time. As a consequence, current simulations lack the ability to follow the evolution of fragments over a sufficient number of orbital revolutions within the disk. In this paper, we aim to develop a model, building upon our earlier work \citep{dutta16} on the fragmentation of the unstable disk centred within the rotating collapsing gas clouds, using a {\it modified version} of the {\sc Gadget}-2 SPH code and a piecewise polytropic equation of state, in order to put an upper bound on the final mass of the protostars after long-time evolution of the gas. In the next section, \S \ref{sec:nm}, we describe in detail the initial conditions, the implementation of the polytropic index profile in the mathematical model, and the modified numerical scheme. The details of the dynamics are outlined in \S \ref{sec:result}, with an emphasis on fragments that stay below the critical mass for surviving until the present day. 
We summarise the work in \S \ref{sec:summary}. \section{Numerical Methodology} \label{sec:nm} We start our discussion by considering uniform density spheres of gas with number density $n = 10^4\,\text{cm}^{-3}$ and temperature $T = 250$ K, with initial solid body rotation. The gas density is represented by SPH particles. The clouds are numerically designed to model the local thermodynamic equilibrium (LTE) conditions for primordial gas and to study the effects of rotation on the time scales associated with the collapse and subsequent fragmentation. \subsection{Initial condition} \label{subsec:ic} The gas clouds are modelled with approximately 2 million SPH particles, each with mass $m_{\rm gas}= 4.639 \times 10^{-4}$ $M_{\odot}$, uniformly distributed inside a sphere of radius equal to the Jeans radius at LTE, i.e. $R$ = $R_{\rm J}$ $\approx$ 0.857 pc, with total mass $M$ = $M_{\rm J}$ $\approx$ 940$M_{\odot}$. This numerical set-up allows us to follow the collapse accurately for about 10 orders of magnitude in density, up to the number density $5 \times 10^{14} \text{ cm}^{-3}$, till the formation of the first central sink (i.e., central hydrostatic core), and about 4 orders of magnitude in size, down to about 10 AU. The mass resolution for $N_{\rm ngb}=100$ SPH neighbours is about 0.04639$M_{\odot}$. This implies that the rotating gas clouds are numerically well-resolved up to a critical number density $n_{\rm crit} = 2.02 \times 10^{15} \text{ cm}^{-3}$, given by \begin{equation} n_{\rm crit} = \left(\frac{3}{ 4\pi}\right) \left( \frac{5K_BT}{G} \right)^3 \left(\frac{1}{\mu m_p}\right)^4 \left( \frac{1}{m_{\rm gas} N_{\rm ngb}} \right)^2\,, \label{eqn_ncrit} \end{equation} for temperature $T=1300$ K. Here $\mu$ is the mean molecular weight of the gas, $m_{\rm p}$ is the mass of the proton and all other symbols have their usual meaning. 
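Equation~(\ref{eqn_ncrit}) is the density at which the local Jeans mass falls to the resolved mass $m_{\rm gas} N_{\rm ngb}$; a minimal numerical sketch in CGS units follows. The value of $\mu = 1.22$ (atomic primordial gas) is our assumption here, and with it the result comes out at a few $10^{15}\,\text{cm}^{-3}$, the same order as the quoted $2.02 \times 10^{15}\,\text{cm}^{-3}$ (the exact number depends on the adopted $\mu$):

```python
import math

# CGS constants
k_B  = 1.380649e-16   # erg/K
G    = 6.674e-8       # cm^3 g^-1 s^-2
m_p  = 1.6726e-24     # g
Msun = 1.989e33       # g

def n_crit(T, mu, m_gas_msun, N_ngb):
    """Critical number density (cm^-3) at which the Jeans mass equals the
    resolved mass m_gas * N_ngb, following Eq. (1) of the text."""
    m_res = m_gas_msun * Msun * N_ngb
    return (3 / (4 * math.pi)) * (5 * k_B * T / G) ** 3 \
           / (mu * m_p) ** 4 / m_res ** 2

# T = 1300 K, m_gas = 4.639e-4 Msun, N_ngb = 100 (values from the text)
print(f"{n_crit(1300, 1.22, 4.639e-4, 100):.2e} cm^-3")
```

The strong $\mu^{-4}$ dependence makes the resolved density limit quite sensitive to the assumed chemical state of the gas.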
The ratio of the free-fall time to the sound-crossing time, $t_{\rm ff}/t_{\rm sc} \sim 0.1$, for the clouds confirms the validity of the initial conditions for some degree of gravitational collapse. Initial velocities are assigned to the SPH particles depending upon the angular velocity $(\Omega)$ of the clouds, in addition to the thermal distribution of velocities, with no internal turbulent motion. In the absence of internal turbulent motions, the degree of rotation (i.e., strength of centrifugal support) of the clouds is modelled by estimating the rotational energy over the total gravitational potential energy \cite[quantified by the parameter $\beta = R^3 \Omega^2/3GM$, as depicted in][]{sdz03}. We also model the distribution of angular momentum that originates either from distortion of the clouds or from their nonaxisymmetric nature due to differential rotation between the high- and low-density regimes \citep{larson84,meynet02}. The gravitational forces from the dark matter are negligible compared to the self-gravity of the gas on the length scales of our simulation; therefore, for the sake of simplicity, we do not consider dark matter or the expansion of space itself in our simulations. Although our calculation is based on an initial condition that assumes a spherical cloud with a uniform density distribution, it is to be noted that molecular clouds are in general very irregular in shape. This is because, in reality, the formation of molecular clouds through the continual accumulation of baryonic matter within a dark matter halo involves different types of forces acting simultaneously \citep{larson72, shu77,bodenheimer81}. For example, the interplay between the self-gravity of the cloud and the internal pressure of the infalling gas can introduce asymmetries or inhomogeneities in the clouds, which will be amplified during the collapse at a later time. 
This is why cosmological simulations show that the initial density distribution of molecular clouds, before they reach LTE, is inhomogeneous and closer to a non-singular isothermal sphere \citep{suto88, on98}. In addition, as a gas cloud possesses a small amount of angular momentum, it soon develops differential rotation between the layers of the gas, due to which the cloud becomes slightly centrally condensed. As a consequence, non-axisymmetric features are likely to appear at a very early phase of the formation of molecular clouds, and even tend to wind up into trailing spiral patterns at later stages of collapse. \citet{larson84} elaborated on the fact that these tiny irregularities during the formation of a molecular cloud can develop non-radial gravitational forces in a non-axisymmetric mass distribution. The associated gravitational/tidal torques, which transfer angular momentum outward on an orbital time-scale, are another reason for the inhomogeneities seen in cosmological simulations. Besides turbulent motion, viscosity and even magnetic fields can also play a role in shaping the clouds during their initial dynamical phase \citep{Truelove98, McKee02, Krumholz05}. \subsection{Modelling the polytropic equation of state} \label{subsec:polyt} In order to account for the various heating and cooling processes happening simultaneously, which become important at certain number densities during the collapse, we model the thermal behaviour of the gas clouds \cite[following the discussion in][]{jappsen05} with a piecewise polytropic equation of state \begin{equation} T_i(n) = a_i n^{\gamma_i -1 } \text{ , } i = 1,2,3\,. \end{equation} Here the polytropic index $\gamma_i$ changes its value in a piecewise constant manner as a function of the number density $n$.
The constant of proportionality $a_i$, which is initially determined from the thermal conditions of the gas, is rescaled in order to maintain continuity of temperature across certain intermediate densities $n_{\rm int}$, \begin{equation} T_i(n_{\rm int}) = T_{i+1}(n_{\rm int})\,, \end{equation} according to the following equation: \begin{equation} a_{i+1} = a_i n_{\rm int}^{ \gamma_i - \gamma_{i+1} }\,. \label{eqn_gamma} \end{equation} The intermediate densities and the values of the polytropic index in the different density intervals are chosen carefully in order to reproduce the temperature-density profile resulting from the primordial chemistry \citep{dnck15,pallottini17,bsg19}. Furthermore, as fast-rotating clouds have longer collapse time scales and tend to have lower rates of compressional heating, they are significantly colder than their slow-rotating counterparts \citep{dutta16}. Therefore the values of the polytropic index also depend on the degree of rotation of the clouds. Table~1 summarizes the chosen values of the polytropic indices for all the clouds, where the piecewise polytropic index profile is divided into three regions separated by the intermediate densities $n_{\rm int} = 10^9, 10^{12} \text{ cm}^{-3} $. In order to implement a general polytropic process in the publicly available {\sc Gadget}-2, we have added a polytropic index variable that controls the rate of change of the entropic function, in the same way as the adiabatic index does in the original code. In addition, we have identified the original adiabatic index variable in the code with the quantity $1 + 1/C_V$, where $C_V$ is the specific heat at constant volume of the gas. The implementation is explained in detail in the appendix. This is necessary in order to accurately model the thermal and chemical evolution of the gas, and it also reduces the computational cost, allowing the simulations to be followed for a long period of time.
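The piecewise polytropic law and the rescaling of equation~\ref{eqn_gamma} can be sketched in a few lines. This is an illustration, not the simulation code; it assumes the $\gamma$ values of the $\beta = (0.0, 0.04)$ row of Table~1 and an initial state $T = 250$ K at $n = 10^4 \text{ cm}^{-3}$.

```python
# Piecewise polytropic T(n) with continuity at the interval boundaries.
n_int  = [1.0e9, 1.0e12]              # intermediate densities (cm^-3)
gammas = [1.1362, 0.9874, 1.0363]     # gamma_1, gamma_2, gamma_3 (Table 1)

# a_1 from the initial thermal state; a_{i+1} = a_i * n_int^(g_i - g_{i+1})
a = [250.0 / 1.0e4 ** (gammas[0] - 1.0)]
for nb, g_lo, g_hi in zip(n_int, gammas, gammas[1:]):
    a.append(a[-1] * nb ** (g_lo - g_hi))

def temperature(n):
    """Evaluate T_i(n) = a_i * n^(gamma_i - 1) in the interval containing n."""
    i = sum(n > nb for nb in n_int)   # 0, 1 or 2
    return a[i] * n ** (gammas[i] - 1.0)

for n in (1.0e4, 1.0e9, 1.0e12, 1.0e14):
    print(f"n = {n:8.1e} cm^-3 -> T = {temperature(n):7.1f} K")
```

By construction the temperature is continuous at $10^9$ and $10^{12} \text{ cm}^{-3}$; with this row of the table, $T$ approaches $\approx 1300$ K at $10^{14} \text{ cm}^{-3}$, consistent with the sink-formation temperature quoted later.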
\begin{table}
\begin{tabular}{|c ||c |c|c |}
\hline
 & $\gamma_1$ & $\gamma_2$ & $\gamma_3$ \\ \cline{2-4}
$\beta$ & $10^4$-$10^9 \textrm{cm}^{-3}$ & $10^9$-$10^{12}\textrm{cm}^{-3}$ & $10^{12}$-$10^{14} \textrm{cm}^{-3} $ \\ \hline \hline
(0.0, 0.04) & 1.1362 & 0.9874 & 1.0363 \\ \hline
(0.05, 0.09) & 1.0684 & 1.0865 & 1.0422 \\ \hline
(0.10, 0.12) & 1.0408 & 1.1174 & 1.0591 \\ \hline
(0.13, 0.15) & 1.0214 & 1.1326 & 1.0781 \\ \hline
\end{tabular}
\caption{The values of the polytropic index $\gamma$ used in the three density intervals (given by the number density in cm$^{-3}$) of the collapsing gas, for different degrees of rotational support parametrized by $\beta$.}
\end{table}
\subsection{Simulation details} \label{subsec:sim} Once the central density reaches the critical value given by equation~\ref{eqn_ncrit}, the total mass enclosed in a single kernel volume $\left(m_{\rm gas}N_{\rm ngb}\right)$ becomes greater than the local Jeans mass, which limits the density resolution of SPH simulations. Furthermore, the adaptive time steps for the integration near the critical value become of the order of 0.01 year, which is too small to follow the simulations for any reasonable amount of time after the formation of the central core, and hence no fragmentation could be seen. To circumvent this problem, we search across all the processors for the highest-density particle every ten time steps once the number density ($n$) first exceeds $5 \times 10^{13} \text{ cm}^{-3}$. Since the particles are distributed over a number of processors according to the domain decomposition, we broadcast the information of this highest-density particle to all the processors, which then check for neighbours in their own domains.
We then dynamically replace the entire region, centred on the highest-density particle, with $n \ge 5 \times 10^{14} \text{ cm}^{-3}$ and $T \ge 1300$ K by a non-gaseous sink particle upon satisfaction of the {\it sink formation criteria} given in \citet{bate97}: the particle is on the current time step, and the divergence of both the velocity and the acceleration is negative in the vicinity of this particle. In addition, the magnitude of the total potential energy within two smoothing lengths must be greater than the sum of the thermal and rotational kinetic energies, so that the region is virially unstable. The sink particles are formed from about 50 neighbouring gas particles within one smoothing length and thereafter interact with the rest of the gas only gravitationally. A sink can accrete gas particles falling within an accretion radius $r_{\rm acc}$ that we fix at about 8 AU. This is done provided the particle is on the current time step, the total energy of the gas particle relative to the sink is negative, i.e. the particle is gravitationally bound to the candidate sink, and the specific angular momentum of the gas particle around the sink is less than that required to form a circular orbit at $r_{\rm acc}$. When a gas particle is accreted, its mass and linear momentum are added to the sink particle, and the sink particle is shifted to the centre of mass of the two. The accreted gas particles are removed from the simulation and their effect is taken into account using appropriate boundary conditions near the accretion region. In addition to the accretion radius, we also define an outer accretion radius $r_{\rm outeracc} = 1.25 r_{\rm acc}$, such that gas particles falling within this outer accretion radius are evolved only gravitationally until they reach the accretion radius and are possibly accreted by the candidate sink. The gas particles may also leave the outer accretion region in the course of their motion.
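The boundedness and angular-momentum checks for accretion can be illustrated schematically. The following is a simplified sketch of the \citet{bate97}-style test, not the actual {\sc Gadget}-2 implementation; the particle data in the demonstration calls are hypothetical.

```python
import math

G = 6.674e-8  # cgs

def would_accrete(r_vec, v_vec, M_sink, r_acc):
    """Schematic accretion test for a gas particle relative to a sink:
    inside r_acc, gravitationally bound, and with specific angular
    momentum below that of a circular orbit at r_acc."""
    r = math.sqrt(sum(x * x for x in r_vec))
    v2 = sum(x * x for x in v_vec)
    if r > r_acc:
        return False
    bound = 0.5 * v2 - G * M_sink / r < 0.0          # total energy < 0
    # specific angular momentum |r x v| vs circular-orbit value at r_acc
    lx = r_vec[1] * v_vec[2] - r_vec[2] * v_vec[1]
    ly = r_vec[2] * v_vec[0] - r_vec[0] * v_vec[2]
    lz = r_vec[0] * v_vec[1] - r_vec[1] * v_vec[0]
    l = math.sqrt(lx * lx + ly * ly + lz * lz)
    return bound and l < math.sqrt(G * M_sink * r_acc)

AU, M_sun = 1.496e13, 1.989e33
print(would_accrete((4 * AU, 0, 0), (0, 1.0e5, 0), M_sun, 8 * AU))  # True: slow, bound, inside
print(would_accrete((4 * AU, 0, 0), (0, 1.0e8, 0), M_sun, 8 * AU))  # False: unbound
```

A slow particle well inside $r_{\rm acc}$ passes all three checks, while a fast-moving one fails the binding criterion.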
We prevent sink particles from forming within $2r_{\rm outeracc}$ of each other, in order to prevent the spurious formation of sink particles from gas that would eventually have been accreted by the candidate sink. As a check, we also keep track of global quantities of the gaseous system, such as the total energy, angular momentum and entropy, throughout the simulation. The sink particles are created at protostellar densities and temperatures, and are subsequently identified with growing protostars. The gravitational softening for the sink particles is set equal to $r_{\rm acc}$, while for the gas particles we use a variable gravitational softening length proportional to their SPH smoothing length. This greatly reduces the runtime of the simulations. Besides, following the discussion in \citet{clark11}, we also implement a constant external pressure boundary, in addition to the vacuum and periodic boundary conditions available in {\sc Gadget}-2. To this end, we modify the original SPH momentum equation, \begin{equation} \frac{dv_i}{dt} = - \sum_{j} m_j \left[ f_i \frac{P_i}{\rho_i^2} \nabla_iW_{ij}(h_i) + f_j \frac{P_j}{\rho_j^2} \nabla_iW_{ij}(h_j) \right], \end{equation} by subtracting the contribution of the external pressure, $P_{\rm ext} = 2.5 \times 10^6 \, K_B \text{ K cm}^{-3} $, from both $P_{\rm i}$ and $P_{\rm j}$. All the other symbols have their usual meanings. \subsection{Check for self-similar solution} \label{subsec:self} In this section we briefly check the gas distributions at different epochs during the runaway phase of collapse of the clouds for various degrees of initial solid-body rotation. For non-rotating $\left(\beta=0\right)$ clouds, the density remains spherically symmetric throughout and follows the well-known power-law profile $r^{-2.2}$, while the central density increases monotonically with time as $\propto 1/(t-t_{\rm ff} )^2$.
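The external-pressure modification described above has a simple sanity check: a uniform medium sitting exactly at $P_{\rm ext}$ should feel no net pressure force. The sketch below illustrates this with the scalar part of the pairwise momentum term only (the kernel gradients are omitted); it is an illustration of the idea, not the actual {\sc Gadget}-2 code.

```python
def momentum_bracket(P_i, P_j, rho_i, rho_j, f_i, f_j, P_ext=0.0):
    """Scalar part of the pairwise SPH momentum term with the external
    pressure P_ext subtracted from both P_i and P_j (kernel gradients
    omitted; a sketch of the modification, not the Gadget-2 code)."""
    return (f_i * (P_i - P_ext) / rho_i**2 +
            f_j * (P_j - P_ext) / rho_j**2)

# A uniform medium at exactly the external pressure exerts no net force,
# which is the point of subtracting P_ext from both P_i and P_j.
P_ext = 2.5e6 * 1.3807e-16            # 2.5e6 K_B K cm^-3 in erg cm^-3
print(momentum_bracket(P_ext, P_ext, 1e-20, 1e-20, 1.0, 1.0, P_ext))  # 0.0
```

Gas above the boundary pressure still feels the usual positive pressure term, so the interior dynamics are unchanged.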
The initial phase of collapse is likely to remain self-similar at different epochs of the free-fall time. This means that the collapsing gas distribution is invariant, i.e., looks similar in every density regime. This can be seen in Figure \ref{fig:self}, consistent with conventional studies \citep{shu77,suto88,on98} in which the gas distribution is self-similar with $\gamma_{\rm eff} \sim 1.09$. Clouds with various degrees of rotational support also follow roughly the same power-law density profile \citep{matsumoto99,momi08}. However, as the collapsing gas is redistributed and accumulated near the centre of mass of the cloud, the degree of rotational support also increases. This causes the density and its gradient to be slightly lower, and the gas temperature to be a little lower, near the centre for clouds with a higher degree of rotation \citep{saigo08,by11,meynet13,dutta15}. \section{Results} \label{sec:result} The transport of angular momentum to smaller scales results in the formation of rotationally supported spiral arms, {\it the so-called circumstellar disk or disk-like structures}, around the central hydrostatic core. Up to this point the collapse has been studied rigorously \citep{greif12,hirano14,dutta16am,riaz18}; see the appendix for the runaway collapse phase. Here we follow the simulation further, until the epoch at which a total of $50 M_{\odot}$ has been accreted onto the dynamically created sink particles as a consequence of instabilities within the spiral arms and their fragmentation. We study the dynamics associated with multiple sinks and their interaction with the ambient gas. \subsection{Gas distribution during disk fragmentation} \label{subsec:fragment} Figures~\ref{fig:img} \& \ref{fig:tmp} show snapshots of the logarithmically scaled density and temperature distributions of the gas in the equatorial plane of the circumstellar disk for the sixteen values of the rotation parameter $\beta$ = 0.0 - 0.15.
All the images show that the collapsed gas in the spiral arms becomes unstable as it accretes mass from the surroundings, and is hence prone to fragmentation. This may also be a consequence of the interplay between the gravitational torque and the pressure gradient throughout the {\it layers} of the spiral arms. In all these snapshots, the white dots represent the sink particles in the simulation and the circle shows the scale in AU at which the fragmentation takes place. The size of this circle generally grows with increasing $\beta$-parameter. For example, the central region for $\beta = 0.02$ is approximately 600 AU across, whereas for $\beta = 0.1$ it is $\sim$ 3000 AU. The snapshots also show that all the clouds with non-zero rotation form a small $N$-body protostellar system immersed in the ambient medium near the centre of the cloud. As expected, the fast-rotating clouds are likely to develop noticeably dense spiral arms in which the protostellar mass evolves substantially, and they tend to fragment more, with $N \sim 8 - 11$ for $\beta > 0.05$, as seen in Figs.~\ref{fig:img} \& \ref{fig:tmp}. This is consistent with the conservation of angular momentum during the gas evolution \citep{hirano18}. In contrast, the slow-rotating clouds contain a relatively small number of protostars, $N \sim 4-7$, indicating the possibility of a high accretion rate, as predicted by theoretical calculations \citep{md13,liu21a}. We have limited our calculations to this epoch because it is extremely difficult to follow the simulations for the fast-rotating clouds with $\beta > 0.15$ beyond this stage of evolution. Another feature to be noted in Figure \ref{fig:img} is that the gas distribution is highly complicated and non-linear, and can be thought of as a supersonic, compressible flow coupled to the accreting sinks in the gravitationally bound protostellar system.
Due to the interaction with the ambient medium, the evolving sinks experience strong friction, also known as drag forces \citep{dutta20iau}, which can change the motion and orbits of the sinks \cite[similar to what is seen in X-ray binary systems,][]{bdc17,park21}. \subsection{Evolution of the sinks} \label{subsec:evolution} The fragmentation of the rotating unstable spiral arms within the circumstellar disk has significant implications for the final mass of the evolving sinks. Figures \ref{fig:mass} \& \ref{fig:acc} show the masses of the sinks in our simulations as a function of time. The first feature to be noted is that all clouds tend to fragment on a mass scale that evolves up to $\sim$0.001 - 20 $M_{\odot}$, depending on the strength of the rotational support of the parent cloud. Because the fragmentation takes place as a consequence of gravitational instability, the characteristic mass scale may be substantially smaller. Second, as the gas continues to collapse to higher densities, the spiral arms keep developing instabilities, which herald the successive formation of secondary sinks within the circumstellar disk. We see that most of the fragmentation takes place within $\sim$100 - 200 years of the formation of the central core for clouds with $\beta \le 0.05$. The higher the degree of rotation, the longer the time $t_{\rm frag}$ taken by the gas to become gravitationally unstable to fragmentation. This is expected, as the sinks within slow-rotating clouds become Jeans unstable much earlier due to the very high accretion rate, $\sim 1 M_{\odot}/\text{yr}$ (as seen in Figure~\ref{fig:acc}). In addition, we see that sinks moving with lower radial velocity within a dense ambient medium are likely to have a high accretion rate. Thus, even if the sinks had low masses at the time of formation, their masses can increase by roughly an order of magnitude relative to their initial values.
As a consequence, the sinks then experience more gravitational drag due to their increased mass, and are therefore likely to change their orbits. This also drives the sinks to become more centrally condensed and to continue accreting gas, ending up as massive protostars within a few thousand years of evolution. We also see that the sinks for low $\beta$-values are quite strongly gravitationally bound. However, clouds with a higher degree of rotation tend to fragment more vigorously, due to spiral-arm instabilities on larger scales, and contain both low- and high-mass sinks (some of which even have quite high radial velocities compared to the escape velocity of the cloud). For example, a number of protostars with $m_{*} < 1\,M_{\odot}$ are formed in the clouds with $\beta \ge 0.1$. This is consistent with the existence of a smallest fragmentation scale of $\sim$$0.03$ AU with $\sim$$0.01\,M_\odot$ \citep{becerra15,hirano17}. We conclude that the formation of the sinks and their dynamical interaction with the ambient medium depend on the history of the collapse (i.e., the evolution history of chemical/thermal changes, turbulence and angular momentum conservation). \subsection{Histogram of mass function and radial velocity} \label{subsec:hist} Here we try to understand the basic properties of the sinks, such as mass, radial velocity and rotational velocity, as shown in Figure \ref{fig:prop}. Figure \ref{fig:hist} depicts histograms of the mass function (top) and of the ratio of the radial velocity of the sink particles to the local escape velocity of the cloud (bottom) at the end of the simulations, i.e., when $50 M_{\odot}$ has been accreted in total. The newly formed sinks in their parent clouds tend to have a wide range of velocities; the typical values of both the radial ($v_{\rm rad}$) and rotational ($v_{\rm rot}$) components lie within a span of roughly $\sim$$0.01 - 25$ km s$^{-1}$.
This is consistent with previous studies \cite[e.g.,][]{greif11,dutta16}. In addition, we see that in the absence of radiative feedback, relatively high-mass sinks are likely to form irrespective of the rotation of the clouds. Another interesting aspect of the simulations is that a tiny fraction of the low-mass sinks for $\beta$ = 0.1 - 0.15 move with relatively high radial velocities compared to the others. They can therefore travel to the periphery at a later stage of evolution, and can even escape the potential well of the gaseous system once their radial velocity exceeds the escape velocity \cite[$v_{\rm escape} \approx 10 - 12$~km s$^{-1}$, as seen in][]{dutta20}. Due to their high velocities, the ejected sinks accrete negligible mass from their surroundings. There are subtle differences between the rotationally supported clouds. From the analysis, it is clear that the mass function for fast-rotating clouds peaks at a lower mass, and also that the central sink can become quite massive for fast-rotating clouds due to the longer time scale for fragmentation. The radial velocities of these sinks can also reach 2-3 times their escape velocity, depending on the strength of the cloud's rotation. \subsection{Survival possibility} \label{subsec:survival} We now address a very important issue: the survival possibility of such evolving sinks, which may be identified as primordial protostars. Note that the lifetime of a star decreases steeply with the mass that it contains, whether a combination of hydrogen, helium or gas of higher metallicity \cite[see e.g.,][for a detailed calculation of stellar dynamics]{binney08}. In this scenario, a star can survive for billions of years provided its accretion rate remains minimal, so that its estimated final mass is as low as $\sim 0.8$~M$_\odot$ \citep{komiya09,kirihara19}.
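The ejection criterion used above, $v_{\rm rad} > v_{\rm esc}$, can be sketched as follows. The sink data in this snippet are hypothetical, for illustration only; the escape velocity is the standard $v_{\rm esc} = \sqrt{2GM(<r)/r}$, evaluated locally.

```python
import math

G = 6.674e-8
km_s, M_sun, pc = 1.0e5, 1.989e33, 3.086e18

def v_escape(M_enclosed, r):
    """Local escape velocity from radius r: v_esc = sqrt(2 G M(<r) / r)."""
    return math.sqrt(2.0 * G * M_enclosed / r)

# hypothetical sinks: (radial velocity, enclosed mass, radius)
sinks = [(2.0 * km_s, 900.0 * M_sun, 0.5 * pc),
         (15.0 * km_s, 900.0 * M_sun, 0.5 * pc)]

for v_rad, M_enc, r in sinks:
    tag = "ejection candidate" if v_rad > v_escape(M_enc, r) else "bound"
    print(f"v_rad = {v_rad / km_s:5.1f} km/s, "
          f"v_esc = {v_escape(M_enc, r) / km_s:5.1f} km/s -> {tag}")
```

For these illustrative numbers only the fast sink is flagged; in the simulations the relevant comparison uses the local enclosed mass at each sink's position.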
In Figure \ref{fig:mejected}, we plot the masses of the sinks that are ejected from the multi-scale $N$-body cluster for rotating clouds parametrized by $\beta$. As expected, ejections are more likely for the faster-rotating gas clouds (e.g. $\beta > 0.1$; see Figure \ref{fig:prop} for the overall mass distribution of the protostars). These realisations show that, due to the conservation of angular momentum, a fraction of the protostars are spread out to the outer periphery of the clouds, of which only a tiny fraction (10 - 12\%) are ejected as low-mass stars. As they continue to have a negligible accretion rate for a long period of time, their final mass remains as low as 0.8$M_{\odot}$, and hence they are likely to survive until today. The dotted horizontal line in Figure \ref{fig:mejected} marks the threshold mass ($0.8 M_{\odot}$) for the ejected protostars; the others can become massive. There is hence a high possibility that a fraction of the sinks within the fast-rotating clouds escape the cluster as low-mass sinks. There is a considerable chance that they reach the main sequence as low-mass stars and survive for billions of years until the present epoch. This also supports the theoretical prediction of the existence of first-generation stars (whether Pop III/Pop II stars or extremely metal-poor stars), provided they had very low masses ($\le 1$~M$_\odot$) when entering the ZAMS \citep{andersen09}. Following the recent study of the Bondi-Hoyle accretion flow by \citet{dutta20}, one can estimate mass-velocity relations for the protostars, in which a high initial speed ensures that the mass accretion is relatively small. There is hence a good possibility of their existence even in our Galaxy (either in the bulge or in the halo).
Recent state-of-the-art observational tools and wide-ranging surveys have suggested the existence of extremely metal-poor stars that could be possible candidates for low-mass Pop III/Pop II stars \cite[e.g.][]{komiya10,deb17,husain21}. Therefore, searching for more such low-mass stars, and determining where they are located in our Galaxy, has become one of the primary interests of present-day observations \citep{johnson15,elbadry18,griffen18,susa19,lb20}. \section{Summary and Discussion} \label{sec:summary} In this work, we performed a suite of 3D simulations using our {\it modified} version of the {\sc Gadget}-2 SPH code to follow the gravothermal evolution of a number of primordial gas clouds with degrees of rotation spanning roughly two orders of magnitude. The heating and cooling processes that arise during the chemical and thermal evolution of the collapsing gas are approximated with a piecewise polytropic equation of state appropriate for the primordial chemistry. Below we outline the main modifications to the SPH code and a summary of our results, along with open issues. \subsection{Code modification} \begin{itemize} \item In addition to the {\sc ``adiabatic''} and {\sc ``isothermal''} modes available in the publicly available version of {\sc Gadget}-2, we added a {\sc ``polytropic''} mode, which, when enabled in the makefile, evolves the gas system with a general polytropic equation of state (EOS). This is done by introducing a polytropic exponent and appropriately modifying the formulas for the internal energy, the entropy and the rate of change of entropy. \item We have implemented the sink particle technique in the original {\sc Gadget}-2, following the discussion in \citet{bate97}, along with the sink boundary conditions near the accretion surface.
We have also defined an outer accretion radius $r_{\rm outeracc} \approx 1.25 r_{\rm acc}$ for the sinks, such that the gas particles falling within this outer accretion radius are evolved only gravitationally. Sink particles can be activated in the code by enabling the {\sc sink} mode in the makefile. In this mode, the simulations run until 50 {\it units of mass} (here, 50 $M_{\odot}$) have been accreted onto the sinks in total, and the code writes the data for each sink (e.g. mass, position, velocity, internal spin) to its own text file. \end{itemize} \subsection{Summary of results} \begin{itemize} \item Irrespective of the cloud's rotation, most of the sinks start to accrete from the ambient gaseous medium while orbiting the central region. In the absence of radiation, the continued accretion increases the masses of the evolving sinks, which are likely to turn into high-mass protostars, or even more massive objects, depending on their dynamical evolution. The more mass they accrete, the more friction they experience due to gravitational drag, which in turn slows them down and constrains them either to change their orbital motion or to stall. This is more noticeable in slow-rotating clouds, where protostars form within a few hundred AU of the central region and keep evolving in the absence of radiative feedback. \item Clouds with a higher degree of rotational support tend to fragment more vigorously, far from the central region and on longer time scales. After a few thousand years of evolution, a small fraction of these sinks still remain low-mass protostellar objects. In this case, they remain loosely gravitationally bound to each other and may possess radial velocities larger than the escape velocity of the cloud. There is thus considerable plausibility that these protostars are ejected from their parent clouds and remain on the main sequence for a long time.
\item The detailed calculation arguably speaks to the long-standing debate between two completely opposite views regarding the mass function of the first generation of stars -- whether they are high-mass or low-mass. Based on our simulations, we find that the primordial stars may have formed with a broad range of masses, depending on the dynamical history of the collapse and the initial configuration of the accreting protostars. From these results one can anticipate that the ejected protostars are likely to survive until the present epoch provided their mass remains as low as $m_{\rm Pop III} \le 0.8 M_{\odot}$. The lowest probable mass range of the protostars from our model is comparable to recent observations \citep{schlaufman18}. \end{itemize} We conclude that the survival rate of a primordial star depends on the initial degree of rotation of the parent gas cloud. Hence, simulating the long-term evolution of the primordial protostellar system is indispensable for estimating the initial mass function (IMF) of Pop III protostars. However, a more rigorous investigation is needed, including more sophisticated physics such as magnetic fields, radiative feedback and a full primordial chemistry. \subsection{Effect of magnetic fields} A number of studies have confirmed the impact of primordial magnetic fields on the thermodynamics of Pop III star formation \citep{sur10} and on the 21 cm emission line of atomic hydrogen \citep{schleicher09}. Other magneto-hydrodynamic (MHD) simulations \cite[for example,][]{price07,machida08,schleicher2010} show the contribution of so-called magnetic flux \citep{maki07} and magnetic braking \citep{meynet11}. Besides, magnetic fields can also be amplified by orders of magnitude over their initial cosmological strengths by a combination of small-scale dynamo action and field compression \citep{doi11,sur12,turk12,md13}.
Magnetic fields may also provide support against fragmentation \citep{peters14} and drive outflows \citep{hirano19}. Hence, the inclusion of magnetic fields may substantially influence the gas dynamics, especially the disk evolution and fragmentation \cite[see e.g.,][]{mckee20,stacy22}. \subsection{Effect of radiative feedback} Radiation-hydrodynamics (RHD) simulations, on the other hand, show that the radiation from protostars can significantly change the accretion process \citep{whalen04,whalen08rad,wise08,wise12,susa09} and can even evaporate the disk \citep{hosokawa11,johnson13,hirano13}. In general we expect feedback to lower the accretion of mass and metals onto protostars \citep{suzuki18}, and hence our estimates can be regarded as upper bounds on these masses \citep{barkana16,barrow17,chon18}. However, radiative feedback becomes important only after $\sim$$10^{4}$ years have elapsed since the formation of the first protostar. The results of our calculations clearly demonstrate that protostars with $v_{\rm r} \ge v_{\rm esc}$ are able to escape the cluster within a few times $10^{4}$ years. Hence they accrete a negligible amount of mass, which implies that the radiation emitted from the surfaces of these stars is unlikely to affect their mass accretion. On the other hand, feedback effects clearly need to be included for those protostars moving with $v_{\rm r} \ll v_{\rm esc}$. However, analysing the fate of these protostars is beyond the scope of the present study. \subsection{Effect of the primordial chemical network} In a realistic collapse scenario, various chemical species undergo numerous concurrent reactions. This results in the formation of a number of other molecules, depending on the local thermodynamic state of the gas \cite[see the reviews by][]{bl_01,cf05}.
The thermodynamic balance among these primordial chemical species then determines the net rate of compressional heating and radiative cooling in the gas. Therefore, a detailed knowledge of the possible chemical reactions and of the mass fractions of all the chemical species is required in order to specify the overall chemo-thermal state of the gas \citep{oy03,ripamonti07hd,dutta15}. Hence one has to follow the entire network in detail in order to determine the thermodynamic evolution of the gas, which is fairly complicated to do in simulations even for primordial gas with the relatively simple chemical network of hydrogen and helium. \subsection{Combined 3D large-scale simulation with the Bondi-Hoyle semi-numerical approach} As discussed above, the protostellar system can numerically be considered a classical $N$-body problem, in which the evolving protostars accrete the ambient gas and couple gravitationally while orbiting the central gas cloud. In reality, however, this complex phenomenon is difficult to follow in full 3D simulations \citep{bagla09tree,stacy10}, as there is a huge difference in density gradient between the layers of the spiral arms, which results in a substantial computational cost. With some approximation, the protostellar system can instead be modelled semi-analytically using Bondi-Hoyle accretion \citep{bondi44,bondi52}, which may provide a satisfactory outcome for the study of stellar dynamics \cite[see e.g.,][] {edgar04,lee14,bdc17,xu19}. It is therefore important to focus on combining 3D simulations with the Bondi-Hoyle semi-numerical approach \citep{dutta20,park21,keto22} to study the long-term evolution of the gas, especially the instabilities that grow during the build-up of the circumstellar disk and the complex fragmentation of its spiral arms (in preparation).
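The Bondi-Hoyle prescription referred to here has a simple closed form, $\dot{M} = 4\pi G^2 M^2 \rho / (c_s^2 + v^2)^{3/2}$; the following sketch (with illustrative parameter values, not taken from the simulations) shows how a high relative speed suppresses accretion, which underlies the mass-velocity relation discussed above.

```python
import math

G, M_sun = 6.674e-8, 1.989e33

def bondi_hoyle_rate(M, rho, c_s, v):
    """Interpolated Bondi-Hoyle accretion rate (g/s):
    Mdot = 4 pi G^2 M^2 rho / (c_s^2 + v^2)^(3/2)."""
    return 4.0 * math.pi * G**2 * M**2 * rho / (c_s**2 + v**2)**1.5

# illustrative values: 1 M_sun sink in gas of rho = 1e-16 g/cm^3, c_s = 2 km/s
M, rho, c_s = 1.0 * M_sun, 1.0e-16, 2.0e5
yr = 3.156e7
for v in (0.0, 2.0e5, 2.0e6):   # relative speeds: 0, 2 and 20 km/s
    mdot = bondi_hoyle_rate(M, rho, c_s, v)
    print(f"v = {v / 1e5:5.1f} km/s -> Mdot = {mdot * yr / M_sun:.3g} M_sun/yr")
```

A sink moving at ten times the sound speed accretes roughly three orders of magnitude more slowly than one at rest, which is why ejected high-velocity protostars can retain low masses.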
\subsection{Evidence for metal-poor stars} In the recent past, a number of studies have predicted that there is a substantial possibility of finding first-generation stars, possible candidates for Pop III or extremely metal-poor (EMP) stars, in the present-day local Universe (\citealt{ws00,tumlinson10,schneider12,gibson13,bsw15,komiya16,ishiyama16}). The search for EMP or very metal-poor stars is at the cutting edge of current research, and with the advent of modern telescopes and forthcoming surveys there have been a number of pioneering observational studies that provide crucial information about the early Universe \citep{dawson04,eisenstein05,tegmark06,fob06,cooke09,bkp09,caffau11,caffau12,sdss12,dawson13,rydberg13,mac13,frebel14,mirocha17}. There have been numerous observational analyses of the evolution of baryonic matter, both from a cosmological viewpoint \citep{wells21} and as statistical interpretations of how the Universe has evolved into its present state \cite[see the recent studies by][for a preliminary understanding of this evolution]{planck11,planck14,planck16,planck20}. Besides, Pop III stars can also be major progenitors of both merging binary black holes and EMP stars, as shown in \citet{tanikawa22}. Recent studies have also shown the possibility of detecting primeval galaxies at redshifts $z \ge 10$ with JWST observations \citep{jb19,riaz22}, which may constrain the mass function of Pop III stars. Interestingly, Hubble has recently detected a very old magnified star of mass $\sim 50 - 100 M_{\odot}$ at redshift 6.2 \citep{welch22}. With the help of state-of-the-art observations from the Atacama Large Millimeter/submillimeter Array, \citet{tokuoka22} have also confirmed possible systematic rotation in the mature stellar population of a $z \sim 9.1$ galaxy. \section*{Acknowledgements} All the simulations were run on the HPSC cluster at the Harish-Chandra Research Institute (HRI), Prayagraj.
We thank Jasjeet Singh Bagla, Athena Stacy, Sharanya Sur and Sukalpa Kundu for their constructive comments on the manuscript. This research has made use of NASA's Astrophysics Data System Bibliographic Services. \appendix{} \section{Implementation of polytropic equation of state in {\sc Gadget}-2} The standard model of the thermodynamic behaviour of primordial gas clouds is well understood \cite[see, e.g.,][]{abn02,yoha06,gs09,tao09}. The primordial chemical network primarily contains numerous concurrent reactions between hydrogen and helium. For example, at low densities $( n_H \approx 1-10^4 \,\text{cm}^{-3})$, hydrogen atoms combine with free electrons to produce the negative hydrogen ion $H^-$, which in turn combines with hydrogen atoms to form a small abundance of $H_2$. The gas then cools through $H_2$ rotational and vibrational line emission down to a temperature of about 200 K. However, the small abundance of $H_2$ is not sufficient to cool the gas further, and the gas begins to heat up with increasing density up to a number density of about $10^8 \,\text{cm}^{-3}$. At this stage of the collapse, hydrogen atoms are converted to molecules via three-body reactions, which again cool the gas through line emission. The cloud becomes optically thick to the strongest of the $H_2$ emission lines beyond a number density of $10^{11} \,\text{cm}^{-3}$ \citep{ra04,clark11,dnck15}. However, if we are only interested in the thermodynamic evolution of the gas and not in the detailed balance between the abundant chemical species, then we can make substantial simplifications and use a general polytropic equation of state for all the chemical processes that involve transfer of heat \cite[following the discussion in][]{jappsen05}: \begin{equation} T = a \rho^{\eta-1}\,, \end{equation} where $\eta$ is the polytropic index. 
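The regime changes described above translate into a temperature-density relation that is piecewise polytropic. The sketch below is a minimal illustration, not the prescription used in the simulations: the break densities follow the regimes quoted above, while the $\eta$ values and the normalisation are placeholders. The coefficients $a_i$ are chained so that $T = a\,\rho^{\eta-1}$ remains continuous across each break:

```python
import numpy as np

# Break densities (cm^-3) from the regimes described in the text; the eta
# values and normalisation below are illustrative placeholders, not the
# values adopted in the simulations.
N_CRIT = np.array([1e4, 1e8, 1e11])
ETA = np.array([0.9, 1.1, 1.0, 1.33])  # cooling, heating, three-body, opaque

def chained_coefficients(a0=1.0):
    """Coefficients a_i making T = a_i * n**(eta_i - 1) continuous at N_CRIT."""
    a = [a0]
    for i, nc in enumerate(N_CRIT):
        # continuity: a_i * nc**(eta_i - 1) == a_{i+1} * nc**(eta_{i+1} - 1)
        a.append(a[-1] * nc ** (ETA[i] - ETA[i + 1]))
    return np.array(a)

def temperature(n):
    """Piecewise polytropic T = a * n**(eta - 1), in arbitrary units."""
    a = chained_coefficients()
    i = np.searchsorted(N_CRIT, n)
    return a[i] * n ** (ETA[i] - 1.0)
```

The same chaining applies to the pressure constants in $P = B\rho^{\eta}$, since $P$ and $T$ differ only by a factor of the density under the ideal gas law.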
Therefore, following the above discussion, we use a polytropic equation of state with a piecewise constant polytropic index appropriate for the thermodynamic evolution of primordial gas clouds, where the polytropic index $\eta$ changes its value at the critical number densities identified above. The publicly available version of {\sc Gadget}-2 can be used to evolve an ideal gas system with the adiabatic equation of state \begin{equation} P = A \rho^{\gamma}\,, \end{equation} where $A$ is a constant and $\gamma = C_P/C_V $ is the adiabatic index. The internal energy per unit mass, $u$, of the gas as calculated in the code is \begin{equation} u = \left(\frac{A}{\gamma-1 }\right) \rho^{\gamma -1 }. \label{inten} \end{equation} In order to model a general polytropic process, which can involve heat transfer as well, we write the following identity from the first law of thermodynamics: \begin{equation} NK_BC \Delta T = P\Delta V + NK_BC_V \Delta T\,, \label{firstlaw} \end{equation} where $C$ is the specific heat of the process in units of $K_B$ per particle (for an adiabatic process $C=0$), $N$ is the total number of gas particles and the other symbols have their usual meaning. From the above equation and the ideal gas law $PV = NK_BT$ we can derive the polytropic equation of state \begin{equation} P = B \rho^{\eta}\,, \label{poleqs} \end{equation} where $B$ is a constant and $\eta = \left( C-C_P \right)/ (C - C_V) $ is the polytropic index. Unlike the adiabatic index $\left(\gamma\right)$, the polytropic index $\left(\eta\right)$ can be greater than, smaller than, or equal to 1, and the two are related by: 
\begin{equation} \gamma = \eta + \frac{C}{C_V} \left( 1 - \eta \right). \end{equation} Therefore in the code we choose $\gamma = 1 + 1/C_V = 7/5$ for a diatomic gas and replace $\gamma$ with $\eta$ at the appropriate places; for example, the internal energy formula in Eq.~\ref{inten} is modified to \begin{equation} u = \left(\frac{B}{\gamma-1 }\right) \rho^{\eta -1 }. \label{intenmod} \end{equation} With these modifications the code can handle general polytropic processes in which the temperature can increase $\left( \eta > 1\right)$ or decrease $\left( \eta < 1\right)$ with density, and it does not require special treatment for the isothermal $\left( \eta = 1\right)$ case. \section{Runaway collapse phase} In this section we describe the gas distributions, velocity profiles and time scales associated with the initial phase of collapse of the clouds for various degrees of initial solid-body rotation, up to the formation of the central core in our simulations, as illustrated in Figure \ref{fig:temp}. All physical quantities calculated from our simulations are radially averaged in logarithmically spaced bins. The estimated free-fall time being shorter than the sound-crossing time justifies the gravitational collapse of the primordial gas, for which the density distribution obeys a power-law profile with $n \sim r^{-2.2}$ irrespective of the rotational strength of the clouds. This also confirms that the collapse is a self-similar process \citep{susa98}. Therefore the density profile of the collapsed clouds in the outer part is nearly the same as that of the inner regime until the core forms at the centre of the clouds \cite[see, e.g.,][for detailed discussion]{meynet13,stacy13,latif15,dutta16am}. Due to the high strength of rotation, the radial velocities are considerably lower, as expected for clouds with $\beta \ge 0.05$. The radial component of the velocity is thus less dominant and gradually becomes comparable to the rotational component near the centre of mass, down to about 100 AU. 
This is because the infalling gas loses angular momentum near the centre as it is accreted by the central core. The loss of angular momentum near the centre is compensated by transport of angular momentum farther from the centre. Hence, the angular momentum transport is more noticeable for the clouds with a higher degree of rotational support. This is evident from the distribution of the rotational component of the velocities. It also supports the formation of larger rotationally supported spiral arms in fast rotating clouds. The flow outside the spiral arms remains sub-Keplerian, i.e. $v_{\rm rot}(r) \le v_{\rm kep}(r) $, as seen in the middle panel of Figure \ref{fig:temp}. In order to quantify the effects of rotation on the accretion phenomenon and the associated time scales, we estimate the mass accretion rate, $\dot{M}(r) = 4 \pi r^2 \rho(r) v_{\rm rad}(r)$, and the accretion time, $t_{\rm acc} = M_{\rm enc}(r) / [4\pi \rho(r) v_{\rm rad}(r) r^2] $, and plot them as a function of radial distance for different values of the $\beta$-parameter. As can be seen from Figure \ref{fig:temp}, the mass accretion rate reaches a maximum value of about 0.1 $M_{\odot}$ per year at about 20 AU from the centre of mass. For distances smaller than this, the sound-crossing time tends to become comparable to the free-fall time, which decreases the mass accretion rate near the centre. Because the accretion phenomenon is directly related to the instability within the gas, we also check the degree of Jeans instability in our clouds by measuring the two quantities $ t_{\rm acc}(r)/t_{\rm ff}(r) $ and $t_{\rm frag}(r)/t_{\rm ff}(r) $ as a function of the radial distance. Here, the fragmentation time, $t_{\rm frag}(r)$, has been estimated as the ratio of the Jeans mass to the mass accretion rate of the core \citep{dnck15}. This provides an approximation of the fragmentation of the gas for different strengths of rotation of the clouds. 
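The accretion diagnostics above are straightforward to evaluate for any binned profile. The following sketch assumes the power-law density profile $n \sim r^{-2.2}$ found in the collapse phase; the normalisation, radius scale and infall velocity are illustrative placeholders, not values measured from the simulations:

```python
import numpy as np

M_SUN = 1.989e33   # g
AU = 1.496e13      # cm
YR = 3.156e7       # s
M_H = 1.67e-24     # g, hydrogen atom mass

def rho(r, n0=1e12, r0=20 * AU, slope=-2.2):
    """Mass density for a power-law profile n ~ r^-2.2 (n0, r0 illustrative)."""
    return n0 * M_H * (r / r0) ** slope

def mdot(r, v_rad):
    """Mass accretion rate Mdot(r) = 4 pi r^2 rho(r) v_rad(r), in g/s."""
    return 4.0 * np.pi * r**2 * rho(r) * v_rad

def t_acc(r, m_enc, v_rad):
    """Accretion time t_acc = M_enc(r) / Mdot(r), in s."""
    return m_enc / mdot(r, v_rad)

# For this profile Mdot scales as r^(2 - 2.2) = r^-0.2: it rises slowly
# towards the centre until pressure support flattens it, as in the text.
```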
As the infalling gas is redistributed near the centre of mass of the cloud, the rotational support near the centre also increases, heralding the formation of spiral-arm or disk-like structures. These spiral arms are likely to become unstable and prone to fragmentation as they accrete mass near their boundaries. The fragmentation time scales and the sizes of the disks are proportional to their degree of rotation and are considerably larger for fast rotating clouds ($\beta \ge 0.05$) than for their slowly rotating counterparts for a fixed polytropic index profile.
Title: Laue and Fresnel lenses
Abstract: The low-energy gamma-ray domain is an important window for the study of the high energy Universe. Here matter can be observed in extreme physical conditions and during powerful explosive events. However, observing gamma-rays from faint sources is extremely challenging with current instrumentation. With techniques used at present collecting more signal requires larger detectors, leading to an increase in instrumental background. For the leap in sensitivity that is required for future gamma-ray missions use must be made of flux concentrating telescopes. Fortunately, gamma-ray optics such as Laue or Fresnel lenses, based on diffraction, make this possible. Laue lenses work with moderate focal lengths (tens to a few hundreds of metres), but provide only rudimentary imaging capabilities. On the other hand, Fresnel lenses offer extremely good imaging, but with a very small field of view and a requirement for focal lengths $\sim$10$^8$ m. This chapter presents the basic concepts of these optics and describes their working principles, their main properties and some feasibility studies already conducted.
https://export.arxiv.org/pdf/2208.12362
\title*{Laue and Fresnel lenses} \author{Enrico Virgilli\thanks{corresponding author}, Hubert Halloin, and Gerry Skinner} \institute{ Enrico Virgilli \at Istituto Nazionale di Astrofisica INAF-OAS, Via Piero Gobetti, 93/3, 40129 Bologna - Italy\\ \email{enrico.virgilli@inaf.it} \and Hubert Halloin\at Université de Paris, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France\\ \email{hubert.halloin@apc.in2p3.fr} \and Gerald Skinner\at University of Birmingham, Birmingham B15 2TT, UK\\ \email{gerald.k.skinner@gmail.com} } \section*{Abstract} \label{abstract} The low-energy gamma-ray domain is an important window for the study of the high energy Universe. Here matter can be observed in extreme physical conditions and during powerful explosive events. However, observing gamma-rays from faint sources is extremely challenging with current instrumentation. With techniques used at present collecting more signal requires larger detectors, leading to an increase in instrumental background. For the leap in sensitivity that is required for future gamma-ray missions use must be made of flux concentrating telescopes. Fortunately, gamma-ray optics such as Laue or Fresnel lenses, based on diffraction, make this possible. Laue lenses work with moderate focal lengths (tens to a few hundreds of metres), but provide only rudimentary imaging capabilities. On the other hand, Fresnel lenses offer extremely good imaging, but with a very small field of view and a requirement for focal lengths $\sim$10$^8$ m. This chapter presents the basic concepts of these optics and describes their working principles, their main properties and some feasibility studies already conducted. \section{Introduction} \label{introduction} The `low-energy' gamma-ray band from $\sim100$~keV to a few tens of MeV is of crucial importance in the understanding of many astrophysical processes. It is the band in which many astrophysical systems emit most of their energy. 
It also contains the majority of gamma-ray lines from the decay of radioactive nuclei associated with synthesis of the chemical elements and also the 511~keV line tracing the annihilation of positrons. However, observations at these energies are constrained in ways that those at lower and higher energies are not. At lower energies grazing incidence optics enable true focusing of the incoming radiation, forming images and concentrating power from compact sources onto a small detector area. At higher energies the pair production process allows the direction of the incoming photon to be deduced. But in the low energy gamma-ray band grazing incidence optics are impractical (the graze angles are extremely small) and the dominant Compton interaction process provides only limited directional information. Detector background due to particle interactions and to photons from outside the region of interest is a major problem in gamma-ray astronomy. A large collecting area is essential because the fluxes are low, but unless a means is found to concentrate the radiation this implies a large detector and hence a lot of background. Shielding helps but it is imperfect and the materials in the shield themselves produce additional background. If the flux from a large collecting area $A$ can be concentrated with efficiency $\eta$ onto a small area $A_d$ of detector then for background-dominated observations there is an advantage in sensitivity of $\sqrt{\eta A/A_d}$ compared with a `direct-view' instrument of area $A$ having the same background per unit area, energy band and observation time. At energies where grazing incidence optics are not viable, only two technologies are available for concentrating gamma-rays. Both use diffraction. Laue lenses use diffraction from arrays of crystals while Fresnel lenses utilise diffraction from manufactured structures. Both types of lens can provide a high degree of concentration of flux from a compact on-axis source. 
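The quoted sensitivity advantage $\sqrt{\eta A/A_d}$ is easy to evaluate; in this sketch the collecting area, detector area and efficiency are illustrative numbers only:

```python
import math

def sensitivity_gain(area_cm2, det_area_cm2, eta):
    """Background-dominated sensitivity advantage sqrt(eta * A / A_d) of a
    concentrating optic over a direct-view instrument of the same area."""
    return math.sqrt(eta * area_cm2 / det_area_cm2)

# e.g. a 1 m^2 collecting area concentrated with 30% efficiency onto
# 10 cm^2 of detector (illustrative values):
gain = sensitivity_gain(1.0e4, 10.0, 0.3)   # ~17
```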
Fresnel lenses provide true imaging, albeit with chromatic aberrations, whereas the Laue lens is a `single-reflection' optic, where the off-axis aberrations are severe. MeV astrophysics is now eagerly awaiting the launch of NASA's COSI mission \cite{Tomsick2019}, which is scheduled for 2025. This will be a survey mission with a large field of view - and consequently a relatively high background. Nevertheless, {\textcolor{green}{COSI}} is expected to be a factor of 10 more sensitive than {\textcolor{green}{Comptel}}, the pioneering instrument for the MeV gamma-ray range \cite{shoenfelder93}. A focusing telescope using the techniques discussed here may improve the sensitivity for the study of individual sources by another large factor, allowing studies not possible with scanning, high-background instruments. Laue lenses and Fresnel lenses are discussed separately below. \section{Laue lenses} The concept of a Laue telescope is shown in Figure \ref{fig:LLconcept1}. The essential element is a `{\textcolor{green}{Laue Lens}}' containing a large number of high-quality crystals. Each crystal must be correctly oriented to diffract radiation in a narrow spectral range from a distant source towards a detector located at a common focus behind the lens. The crystals are used in the Laue mode (transmission), since the very small diffraction angles make it impractical to rely on surface reflections. Excellent reviews of previous work on this topic can be found in \cite{frontera10a} and \cite{Smither2014}. The term `Laue lens' is actually a misnomer and it would be more correct to refer to a Laue mirror. Such a `lens' relies on the mirror \emph{reflection} of gamma-rays from the lattice planes in the crystals. The reflective power of the electrons bound in atoms in a single lattice plane is very small, but the power increases with the square of the number of planes - or electrons - acting coherently. 
\subsection{Laue lenses basic principles: Bragg's law} The requirement for coherent diffraction is both the strength and the weakness of Laue lenses. It provides for the possibility of high reflectivity, but at the same time it imposes a strict dependence of the diffraction angle, $\theta_B$, on the wavelength, $\lambda$ (or the energy, $E$), of the radiation. This dependence is expressed by {\textcolor{green}{Bragg's law}}: \begin{equation} \sin\theta_B = \ n \frac{\lambda}{2 d_{hkl}} \hspace{3em}{\rm with: }\hspace{1em} n=1,2,3,... \ \label{eq:t_bragg} \end{equation} where $n$ is the diffraction order, $d_{hkl}$ is the spacing of the crystal lattice planes and $\lambda$ is the wavelength of the gamma-rays. The first order ($n$ = 1) contributions are by far the most significant. For the energies which concern us here the Bragg angles are always small $(\simeq 1^{\rm o} )$, so we can approximate $\tan\theta \simeq \sin\theta \simeq \theta$. In the following we shall often prefer to speak in terms of energy, $E$, rather than wavelength. Bragg's law then takes the form: \begin{equation} \sin\theta_B = \ n \frac{h c}{2 d_{hkl} E} \ , \label{eq:E_bragg} \end{equation} where $h$ is Planck's constant and $c$ the velocity of light ($\lambda$ (Å) $= 12.39 / E$ (keV)). From the Bragg equation for the first diffraction order, simple geometrical considerations show that there is a relation between the distance $r_i$ of a crystal from the Laue lens axis and the diffracted energy $E_i$: \begin{equation} E_i = \frac{hc F}{d_{hkl} r_i}, \label{e_r} \end{equation} where $F$ is the focal length of the Laue lens. Eq.~\ref{e_r} shows that, for a given focal length, if crystals with a fixed $d_{hkl}$ are used, those placed closer to the lens axis are dedicated to the highest energies while those positioned further from the axis diffract the lower energies. 
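Bragg's law and the radius-energy relation of Eq.~\ref{e_r} can be explored numerically. In the sketch below the Cu(111) plane spacing ($d = 2.087$ Å, a standard tabulated value) and the 100 m focal length are illustrative choices:

```python
import math

HC = 12.39  # h*c in keV * Angstrom, as in the text

def bragg_angle(energy_keV, d_A, n=1):
    """Bragg angle theta_B (radians), first form of Bragg's law in energy."""
    return math.asin(n * HC / (2.0 * d_A * energy_keV))

def ring_energy(r_m, focal_m, d_A):
    """First-order energy (keV) diffracted at radius r:
    E_i = h c F / (d_hkl r_i); the Angstrom/metre factors cancel."""
    return HC * focal_m / (d_A * r_m)

# Cu(111): d = 2.087 A. On a lens with F = 100 m, a crystal placed at
# r = 0.594 m from the axis diffracts photons of roughly 1 MeV.
E = ring_energy(0.594, 100.0, 2.087)
```

In the small-angle regime the diffracted ray is deflected by $2\theta_B$, so a crystal at radius $r \simeq F\tan 2\theta_B$ indeed sends the beam to the focus.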
Consequently, at a given focal length there is a direct link between the energy band of a Laue lens and its spatial extent, r$_{in}$ to r$_{out}$. For a narrow band Laue lens, with the whole surface optimised for a single energy, it would be necessary to arrange that $d_{hkl}$ varies in the radial direction such that ${d_{hkl} r_{i}}$ is constant (an analogous condition will be seen when the zone widths of Fresnel lenses are discussed in section \nameref{sect:fresnel}). \subsection{Crystal diffraction} \label{CrystalDiff} \subsubsection{Ideal and mosaic crystals} Before going into details of the calculation of diffraction efficiencies it is useful to introduce a distinction between `ideal' and `mosaic' crystals. In ideal (defect free) crystals the crystalline pattern is continuous over macroscopic distances. Examples of such crystals are the highly perfect Silicon and Germanium crystals now commercially available thanks to their great commercial interest and the consequent intense development effort. `{\textcolor{green}{Mosaic crystals}}' on the other hand are described by a very successful theoretical model introduced by {\textcolor{green}{Darwin}} \cite{darwin1914,darwin1922} a century ago. Darwin modelled imperfect crystals as composed of a large number of small `crystallite' blocks, individually having a {\textcolor{green}{perfect crystal}} structure but slightly misaligned one to another. The spread of the deviations, $\omega$, of the orientation of a lattice plane in one block from the mean for the entire mosaic crystal is described by means of a probability distribution $\Omega '(\omega)$, the so-called mosaic distribution function. Darwin's model has been very useful and quantitatively describes many aspects of real crystals. 
The mosaic distribution function, $\Omega '$, can be found experimentally for a given crystal through observation of the `{\textcolor{green}{rocking curve}}', which is the measured reflectivity as a function of angle for an incident beam of parallel, monochromatic gamma-rays when a crystal is scanned through the angle corresponding to a Bragg reflection. The crystal sample should be thin enough that the reflectivity is never close to the saturation value. Examples of measured rocking curves are shown in Figure~\ref{fig:RockingCurves}. For good quality crystals the mosaic distribution can be well approximated by a Gaussian function. Its width, as observed through the rocking curve, is called the {\textcolor{green}{mosaic width}} of the crystal. Mosaic widths are specified as angular quantities, characterized by either the Full Width at Half Maximum (FWHM) or the standard deviation, $\sigma_\theta$, of the rocking curve. Rocking curves are measured at constant diffraction angle, hence constant energy. For Laue lens design, where the source direction is the fixed quantity, what is often of interest is the mosaic distribution as a function of the energy offset from the energy, $E_B$, corresponding to the {\textcolor{green}{Bragg angle}}. If the standard deviation of the distribution as a function of energy is $\sigma_E$, then the two quantities are related by: \begin{equation} \frac{\sigma_E}{E_B} = \frac{\sigma_\theta }{\theta_B} \label{dTheta-dE-relation} \end{equation} To be useful in a Laue telescope the rocking curve for each crystal should possess a single, narrow peak. Unless great care is taken in the growth of the crystals, the mosaic distribution may not be well behaved and the rocking curves may be broad or exhibit multi-peaked structures. 
\subsubsection{Diffraction efficiency} \label{diff_eff} According to Schneider \cite{Schneider1981b} the crystal reflectivity, $\nu (E)$, for mosaic crystals of macroscopic thickness in the Laue-case can be calculated from: \begin{equation} \nu (E) = \frac{1}{2} e^{-\mu (E) t} (1 - e^{-\Omega (E - E_0) R(E) t}). \label{eq:XtalRefl} \end{equation} Here $ \mu(E)$ is the linear attenuation coefficient for photons of energy $E$ and $t$ is the crystal thickness. $\Omega (E-E_0)$ is the mosaic distribution as function of energy, and $R(E)$ is the specific reflectivity (reflectivity per unit thickness). $E_0$ is defined as the energy where $\Omega$ is at its maximum. In the following we shall assume that the mosaic distribution $\Omega$ has a Gaussian shape: \begin{equation} \Omega (E-E_0) = \frac {1}{\sqrt{2\pi}\sigma} e^{-(E-E_0)^2 / {2 {\sigma^2}}} . \label{Omega} \end{equation} We then get for the peak reflectivity: \begin{equation} \nu (E_0) = \frac{1}{2}e^{-\mu (E_0) t} (1 - e^{-\frac{R(E_0) t}{\sqrt{2\pi} \sigma}}). \label{PeakRefl} \end{equation} The value of the crystal thickness which maximizes the peak reflectivity is \begin{equation} t_{max} = \frac{\ln{(1+\alpha})}{\mu (E_0)\alpha} \label{MaxThick} \end{equation} with: \begin{equation} \alpha = \frac{R(E_0)}{\mu (E_0)} \frac{1}{\sqrt{2\pi} \sigma} \label{MaxThickAlpha} \end{equation} and the corresponding peak reflectivity is \begin{equation} \nu_m(E_0) = \frac{1}{2} \alpha (1+\alpha )^{-\frac{1+\alpha }{\alpha }}. \label{MaxRefll} \end{equation} Note that the specific reflectivity, R, and the attenuation coefficient, $\mu$, are characteristic of a particular material and set of crystalline planes, whereas the mosaic width, $\sigma$, depends on the method of manufacture and subsequent treatment of the crystals. It is therefore reasonable to start by seeking crystals that maximise the value of $R / \mu$ and leave the choice of the mosaic width to the detailed lens design. 
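The interplay between absorption and diffraction in Eqs.~\ref{PeakRefl}-\ref{MaxRefll} can be verified numerically. In the sketch below the values of $\mu$, $R$ and $\sigma$ are arbitrary placeholders in mutually consistent units, chosen only to exercise the formulae:

```python
import math

SQ2PI = math.sqrt(2.0 * math.pi)

def peak_reflectivity(t, mu, R, sigma):
    """nu(E0) = 0.5 exp(-mu t) (1 - exp(-R t / (sqrt(2 pi) sigma)))."""
    return 0.5 * math.exp(-mu * t) * (1.0 - math.exp(-R * t / (SQ2PI * sigma)))

def best_thickness(mu, R, sigma):
    """t_max = ln(1 + alpha) / (mu alpha), alpha = R / (mu sqrt(2 pi) sigma)."""
    alpha = R / (mu * SQ2PI * sigma)
    return math.log(1.0 + alpha) / (mu * alpha), alpha

def max_reflectivity(alpha):
    """nu_m(E0) = 0.5 alpha (1 + alpha)**(-(1 + alpha)/alpha)."""
    return 0.5 * alpha * (1.0 + alpha) ** (-(1.0 + alpha) / alpha)

# placeholder values: mu = 0.5 per cm, R = 2.0, sigma = 0.5
t_opt, alpha = best_thickness(0.5, 2.0, 0.5)
```

Evaluating the reflectivity at thicknesses slightly above or below `t_opt` confirms that it is indeed the maximum, and that the peak value equals $\nu_m(E_0)$.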
The specific reflectivity is given by: \begin{equation} R(E) = 2r_e^2 \frac{\lambda ^3}{\sin(2\theta _B)} \Big (\frac{F_{struct}(x)}{V}\Big )^2 e^{-2Bx^2} \label{SpecifRefl} \end{equation} with $x = \sin (\theta_B) / \lambda$. Here $r_e$ is the classical electron radius, $V$ is the volume of the unit cell and $F_{struct}$ is the `structure factor' for the crystal unit cell. The structure factor depends on the crystal structure type (e.g. body centred cubic, face centred cubic), on the atoms, and on the choice of lattice planes involved, described by the Miller indices (h,k,l). The exponential factor describes the reduction in the diffraction intensity due to the thermal motion of the diffracting atoms. Considering only crystals of the pure elements, Eq.~\ref{SpecifRefl} can be rewritten as: \begin{equation} R(E) \propto E^{-2} a^{-5/3} (f_1(x,Z))^2 e^{-B n^2 / (2d^2)}, \label{SpecifElemRefl} \end{equation} where $a$ is the atomic volume, $d$ is the interplanar distance of the diffracting planes, $f_1(x,Z)$ is the atomic form factor and $n$ is the diffraction order. Here use has been made of the approximations $\sin (2\theta_B) \sim 2 \sin (\theta_B)$ and $ x \sim n / 2d$. The linear attenuation coefficient, $\mu (E)$, can be expressed as a function of the total atomic cross section, $\kappa (E)$, and the atomic volume: \begin{equation} \mu (E) = \frac{\kappa (E)}{a} \label{eq:LinearAtten} \end{equation} thus \begin{equation} \frac{R}{\mu} \propto E^{-2} a^{-2/3} \frac{f_1(x,Z)^2}{\kappa (E)} e^{-B n^2/(2d^2)} . \label{eq:R_mu_ratio} \end{equation} Since at high energies both $f_1(x,Z)$ and $\kappa (E)$ are roughly proportional to $Z$, it is clear that for gamma-ray energies a high atomic number and a small atomic volume (a high atomic density) are important for maximizing R/$\mu$. For energies below ${\sim100}$ keV photoelectric absorption may rule out the use of crystals of the heaviest elements for Laue lenses. 
The atomic density of crystals of the pure elements varies systematically with the atomic number, $Z$, as illustrated in Figure \ref{fig:AtomicDensity}. The most suitable elements for Laue lens crystals are found near the peaks in this plot, that is near $Z = 13$ (Al), $Z = 29$ (Ni, Cu), $Z = 45$ (Mo, Ru, Rh, Ag) or $Z = 76$ (Ta, W, Os, Ir, Pt, Au). The atomic form factors are tabulated in the literature \cite{IntTblXCryst1977} and software for their calculation is publicly available \cite{XOP-paper}. The thermal factor, $B$, turns out to be anti-correlated with the atomic density, thereby strengthening somewhat the case for a high atomic density \cite{Warren1969}. \subsubsection{Extinction effects} \label{sect:Extinction} In the above derivation of the diffraction efficiencies only the attenuation by incoherent scattering processes was explicitly considered, in equation \ref{eq:LinearAtten}. However, losses due to coherent effects also occur. These are termed `extinction effects' \cite{Chandrasekhar1960}. One type of extinction loss is due to the diffraction process itself and occurs in both mosaic and perfect crystals. Photons are removed from both the incoming beam and from the diffracted beam by diffraction. (The diffracted beam is a mirror image of the incoming beam with respect to the lattice planes and fulfills the Bragg condition just as well.) This dynamic interaction is termed `secondary extinction' and accounts for the factor $\frac{1}{2}$ in \ref{eq:XtalRefl}. A more subtle extinction effect, which is only present if phase coherence is maintained through multiple diffractions, is termed `primary extinction'. Every diffraction instance is associated with a phase shift of $\pi /2$. Consequently, after two coherent diffraction processes the photon has accumulated a phase shift of $\pi$ and destructively interferes with the incoming beam. In the same way, three coherent diffraction processes will cause destructive interference in the diffracted beam. 
This effect only occurs in perfect crystals or in mosaic crystals in which the size of the crystallites is large enough that there is a significant probability of multiple successive coherent scatterings. The critical dimension here is the `extinction length' (see \cite{zachariasen}), which can be estimated as: \begin{equation} t_{ext} \approx \frac{1}{r_0} \frac{V}{F_{struct}(x) \lambda}\,, \label{eq:extinct_length} \end{equation} where $r_0$ is the classical electron radius. The extinction length for the Cu(111) reflection at 412 keV is 66 $\mu$m \cite{Schneider1981b}. As $t_{ext}$ is proportional to the energy, it is at the lower gamma-ray energies that extinction effects may become noticeable. For Laue lenses it is important to find or develop crystals in which the defect density is high enough to keep the crystallite size below $t_{ext}$ at the lowest energies where the crystals are to be used. \subsection{Focusing elements} \subsubsection{Classical perfect crystals} \label{perfect_crystals} Perfect crystals, in which the ideal lattice extends over macroscopic dimensions, are not particularly suitable for use in Laue lenses for astrophysics because they are too selective in photon energy, even for Laue lenses intended for narrow-line studies. Perfect crystals diffract with high efficiency, but only for an extremely narrow range of energy/angle combinations. For example, at 511 keV a perfect Germanium crystal will have an angular width of the diffraction peak (the `Darwin width') of only 0.25~arc-seconds. This should be compared to the Bragg angle, which at this energy is 750~arc-seconds, {\emph{i.e.}} about one part in 3000. The corresponding energy bandwidth is then only 0.14~keV! Thus, perfect crystals are not preferred for observations of astrophysical sources. \subsubsection{Classical mosaic crystals} \label{mosaic_crystals} Fortunately, perfect crystals are not the norm. Most artificial crystals grow as mosaic crystals. 
According to the Darwin model, such crystals can be viewed as ensembles of perfect micro-crystals with some spread in their angular alignments. Photons of a specific energy may traverse hundreds of randomly oriented crystallites with little interaction and still be strongly diffracted by a single crystallite oriented correctly for this energy. Mosaic crystals generally perform much better than perfect crystals in the context of Laue lenses. The internal disorder, the mosaic width, may be controlled to some extent during the crystal growth or by subsequent treatment. Mosaic widths of some arc-minutes can be obtained with relative ease for a range of crystal types. For the lenses described below a mosaic width of about 0.5 arc-minutes is typical. Such values can be obtained, but this has required substantial development effort \cite{courtois2005}. Copper crystals in particular have attracted interest because of the need for large, high-quality copper crystals for use in low-energy neutron diffraction. It must be kept in mind that Bragg's law is always strictly valid, even for mosaic crystals. As illustrated in Figure \ref{fig:LaueOption}-(a), after diffraction from a mosaic crystal a polychromatic beam of parallel gamma-rays with a spread of energies will emerge as a rainbow-coloured fan. Its angular width will be twice the angular mosaic width of the crystal. Even if the crystal is oriented so that the central ray of the emerging beam hits the detector, the extreme rays of the fan may miss it. At a given energy, the radiation diffracted from a flat mosaic crystal forms in the detector plane a projected image of the crystal. It is important to note that this projected image does not move if the crystal tilt is changed. Its position is fixed by Bragg's law; only its intensity changes. 
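As a rough numerical check on the extinction length of Eq.~\ref{eq:extinct_length}: using the copper lattice constant for the unit-cell volume and approximating the Cu(111) structure factor as four times an atomic form factor $f_1 \approx 22$ (a value assumed here for illustration, not taken from the text), one recovers the order of magnitude of the 66 $\mu$m quoted above:

```python
R0 = 2.818e-15        # classical electron radius, m
A_CU = 3.615e-10      # copper lattice constant, m
V_CELL = A_CU ** 3    # unit-cell volume, m^3

def extinction_length(F_struct, lam_m, V=V_CELL):
    """t_ext ~ (1/r_0) * V / (F_struct * lambda), Eq. (eq:extinct_length)."""
    return V / (R0 * F_struct * lam_m)

# 412 keV -> lambda = 12.39 / 412 Angstrom; FCC Cu(111): F_struct ~ 4 * f1,
# with f1 ~ 22 an assumed, approximate form factor at this momentum transfer.
lam = (12.39 / 412.0) * 1e-10          # m
t_ext = extinction_length(4.0 * 22.0, lam)  # of order tens of microns
```

With these assumptions $t_{ext}$ comes out at roughly 60 $\mu$m, consistent with the quoted value given the approximate form factor; the linear scaling with $\lambda^{-1}$, i.e. with $E$, is exact.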
\subsubsection{Crystals with {\textcolor{green}{curved lattice planes}}} As well as perfect and mosaic crystals, a third group has attracted interest as potential diffractive elements for Laue lenses. These are perfect crystals with curved lattice planes. The curvature of the lattice planes has two remarkable effects: i) the secondary {\textcolor{green}{extinction}} may be suppressed, and ii) the energy pass-band is no longer constrained to the Darwin width but is defined by the total range of lattice directions, {\emph{i.e.}} in principle something under our control. If the lattice curvature is correctly chosen relative to the photon energy, secondary extinction may be suppressed and the diffraction efficiency can approach 100\%, ignoring incoherent absorption, which is always present. Different methods of creating such lattice curvature have been proposed and experimentally demonstrated. One involves imposing a thermal gradient on the crystal along its thickness, such that the hot side expands and the cold side contracts. This method is very convenient in the laboratory because both the degree of bending (the bending radius) and the `sign' of the bending can be changed with minimal change in the experimental set-up \cite{smither05b}. Unfortunately, this method cannot be used in space due to the significant power dissipation required to maintain the thermal gradient. A second method relies on specific pairs of elements (or compounds) which can form perfect crystals across a range of component fractions. Stable, curved lattice planes exist in these binary crystals in the regions where a composition gradient is present \cite{Abrosimov05}. Silicon/Germanium composition-gradient crystals have been proposed for the `MAX' Laue lens described in Section \nameref{MAXproject}. A further bending method relies on externally applied mechanical forces to bend the crystals. 
Mechanically bent crystals are used in several laboratory applications, including monochromators. However, the mass of the structures necessary to maintain the bending forces is unlikely to be acceptable for a space experiment. Controlled permanent bending of Silicon and Germanium wafers by surface scratching has been developed by the Institute of Materials for Electronics and Magnetism (IMEM-CNR) in Parma~\cite{Buffagni11} in connection with the Italian {\it{Laue}} project~\cite{virgilli2014}. The lapping procedure introduces defects in a superficial layer of a few microns, providing a high compressive stress resulting in a convex surface on the worked side. In such \textit{transversally bent crystals} the orientation of the diffraction planes with respect to the incident radiation changes continuously in the direction of the curvature of the crystal. If the bending radius is equal to twice the focal length of the lens, the effect is to produce achromatic focusing, as illustrated in Figure \ref{fig:LaueOption}. A spectrum of finite extent may be focused into an area which is considerably smaller than the crystal cross-section. In this way the overall Point Source Response Function (PSF) of a Laue lens can be narrower than that achievable with flat crystals of the same size. Transversally bent crystals were studied in the {\it{Laue}} project, in which a number of different aspects of Laue lens technology were addressed, from the production of suitable crystals to the definition of an accurate and fast method for their alignment. It was demonstrated through simulations and experimental tests \cite{virgilli13} that a transversally bent crystal focuses a fraction of the radiation arriving on its surface into an area which, depending on the mosaicity, can be smaller than the cross-section of the crystal itself. For some crystals and crystallographic orientations, if an external (primary) curvature is imposed through external forces, a secondary curvature may arise. 
This effect is a result of crystalline anisotropy. It has been termed quasi-mosaicity~\cite{ivanov05} and leads to an increased diffraction efficiency and angular acceptance \cite{camattari11}. \begin{comment} \subsubsection{Enlarging the pass-band of perfect crystals} The introduction of curved lattice planes also allows perfect crystals to diffract a range of photon energies, similar to the situation with mosaic crystals. However, only a limited set of crystals are known for which the mechanical properties are such that the perfect crystal structure is preserved when the crystal is mechanically deformed. Among the best known are Silicon, Germanium and Quartz. In practice only thin plates (wafers) can be bent, and this puts constraints on the application of such bent crystals. \end{comment} A promising manufacturing technology for bent crystals which may overcome some of the limitations of flat crystal diffraction optics is based on so-called Silicon Pore Optics (SPO)~\cite{bavdaz2012}. This is a bonding technology for silicon wafers which is being developed for the ESA {\athena} mission. It has made possible the development of novel units for focusing gamma-rays called Silicon Laue Components (SiLCs~\cite{ackermann13, girou17}). These components are being developed at the {\sc Cosine} company (The Netherlands) in collaboration with the University of California at Berkeley. They are self-standing Silicon diffracting elements which can focus in both the radial and the azimuthal directions. SiLCs consist of a stack of thin Silicon wafers with a small wedge angle between adjacent plates such that the diffracted rays from all the plates converge at a common focus. The incidence angle of the radiation is small, so the radiation passes through only one plate.
The wafer angle with respect to the optical axis of the telescope is selected such that the mean angle enables diffraction at energy $E$, and the overall range of wedge angles between the wafers dictates the energy bandpass around the centroid $E$. As can be seen in Figure~\ref{fig:silcs}, the curvature of the wafers allows focusing in the ortho-radial direction. \subsection{Laue lens optimization} The response of a Laue lens is strongly energy dependent. For a given focal length and crystal plane spacing, the area diffracting a particular energy pass-band is inversely proportional to $E$. Furthermore, the diffraction efficiency of the crystals adopted for realizing these optics decreases with energy. These two reasons, combined with the fact that the gamma-ray emission of astrophysical sources typically decreases with energy according to a power law, make observations at high energies even more challenging. The dependence of the effective area on energy is a geometric effect and can be mitigated only at particular energies in narrow pass-band Laue lenses. The decrease in diffraction efficiency with energy can be mitigated by choosing crystals to maximize the reflectivity. A number of parameters can be tuned in the optimization of a Laue lens. They are mainly related to the crystal properties (mosaicity, crystal material, diffraction planes, crystallite size, crystal thickness), or to the overall lens structure (lens diameter, focal length, inner and outer radius, geometrical configuration). The optimization is complex and depends on the Laue lens requirements (lens bandwidth, point spread function extension, total weight of the lens). The main factors involved in the optimization are described in the following sections.
\subsubsection{Crystal selection} As discussed in Section~\nameref{diff_eff}, crystals with high atomic density and high atomic number are generally preferable as Laue lens elements except at the lowest energies where photo-electric absorption may render their use less attractive. The technical difficulties involved in the fabrication and handling of crystals of the different chemical elements are also important factors. These difficulties vary significantly among the elements. The mechanical properties of the crystals are an important issue - for example Silicon and Germanium are quite hard and rugged whereas Copper, Silver and Gold crystals are soft and require special care in treatment and handling. As already observed in Sect.~\nameref{sect:Extinction}, the crystallite thickness plays an important role in the reflectivity optimization. For given values of the mosaicity and crystal thickness, the highest reflectivity is obtained for a crystallite size much smaller than the extinction length of the radiation. At the energies of interest this thickness must be of the order of a few $\mu$m. The crystal mosaicity also has a primary role in the Laue lens optimization. The higher the mosaicity, the larger is the integrated reflectivity, and thus the effective area, but the broader is the signal on the focal plane detector. These two effects act in opposite senses in the optimization of the sensitivity. The crystal thickness is also an important factor for the optimization of the crystal reflectivity and therefore for the maximization of the Laue lens sensitivity. Eq.~\ref{MaxThick} provides the thickness that maximizes the reflectivity for a given material and for fixed diffraction planes. As the best thickness is also a function of energy, it is expected that, depending on the adopted geometry, crystals dedicated to high energies will be thicker than those used to diffract low energies.
It must also be taken into account that the choice of the thickness maximizing the reflectivity would often lead to a mass unacceptable for a satellite-borne experiment. A trade-off between lens throughput and mass is then necessary. \subsubsection{Narrow- and broad-band Laue lenses} Depending on the scientific goal to be tackled, Laue lenses can be designed, or adjusted, with two different optimizations: lenses for a broad energy pass-band (e.g. 100~keV--1~MeV) or those configured to achieve a high sensitivity over one or more limited range(s) of energy. The latter can be valuable for studying gamma-ray lines, or narrow band radiation. Relevant energies of interest might be the 511~keV $e^+/e^-$ annihilation energy or the 800-900 keV energy range for its importance in supernova emission. The two classes of Laue lenses need different optimizations and dispositions of the crystals over the available geometric area. \\ For a \textbf{narrow energy pass-band} Laue lens as many as possible of the crystals should be tuned to the same energy. According to Eq.~\ref{e_r}, the d-spacing of the crystals should ideally decrease in inverse proportion to their radial distance from the focal axis in order to keep the diffracted energy fixed. \\ The energy range of a \textbf{broadband Laue lens} follows from Eq. \ref{e_r}. With the focal length and d-spacing of the crystalline diffraction planes both fixed, the energy range will be from $$E_{min} = \frac{hc~F}{d_{hkl}~R_{max}} \: \: \textrm{ to } \: \: E_{max} = \frac{hc~F}{d_{hkl}~R_{min}},$$ where the radial extent of the lens is R$_{min}$ to R$_{max}$. If the inner and outer radii are fixed, the simultaneous use of different materials, and thus of different d$_{hkl}$, would allow enlarging the Laue lens energy pass-band compared with a single-material Laue lens. Equivalently, for a given energy pass-band and focal length, the use of multi-material crystals would allow a more compact lens.
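As an illustrative numerical check of these pass-band limits, the sketch below evaluates $E_{min}$ and $E_{max}$ for an assumed, not mission-specific, configuration: Ge(111) planes with $d_{hkl} = 3.266$~\AA, a focal length of 20~m and lens radii from 0.3~m to 1.5~m.

```python
HC_KEV_ANGSTROM = 12.398  # h*c in keV Angstrom

def passband(f_m, d_hkl_angstrom, r_min_m, r_max_m):
    """Broadband Laue lens energy limits: E = h*c*F / (d_hkl * R)."""
    e_min = HC_KEV_ANGSTROM * f_m / (d_hkl_angstrom * r_max_m)
    e_max = HC_KEV_ANGSTROM * f_m / (d_hkl_angstrom * r_min_m)
    return e_min, e_max

# Assumed example: Ge(111), d = 3.266 Angstrom, F = 20 m, radii 0.3-1.5 m
e_min, e_max = passband(20.0, 3.266, 0.3, 1.5)
print(f"{e_min:.0f}-{e_max:.0f} keV")  # roughly 51-253 keV
```

The inner radius sets the upper energy bound and the outer radius the lower one, which is why multi-material designs widen the band at fixed radii.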
\subsubsection{Tunable Laue lens} \label{tunable} \label{sect:tunable} A classical Laue lens with fixed inner/outer radius and focal length has a pass-band which is uniquely defined by the d-spacing of the crystals used. The red curve in Figure~\ref{fig:tunable} shows the effective area of an example 100~m focal length Laue lens configured to cover a 300--800~keV energy pass-band. If all of the crystals could be retuned for different focal lengths, the Laue lens could be made sensitive to different pass-bands. Furthermore, as shown in Figure~\ref{fig:tunable}, the larger the focal length, the broader the pass-band and the higher the integrated effective area. The adjustment in orbit is not trivial - both the orientation of thousands of crystals and the lens to detector separation must be changed and verified. The former requires thousands of actuators and a sophisticated optical system. An innovative mechanism for the adjustment of the orientation of a crystal, along with an optical system for monitoring alignment of each one, has nevertheless been proposed~\cite{lund2021a, lund2021b}. The mechanism is based on a miniature piezo-actuator coupled with a tilt pedestal and does not require power once a crystal has been correctly oriented. It is assumed that the lens and detector are on separate spacecraft that can be manoeuvred to adjust their separation. \subsubsection{Multiple layer Laue lenses} \label{sect:multiplelayer} A possible way to increase the flux collection from a lens is to use two or more layers of crystals covering the same area. For instance, two layers of crystals can be used, one on each side of the lens structure. In order to focus at the same position, crystals placed at the same radius but in different layers must diffract at the same angle, so different crystals or Bragg planes must be used to diffract different energies.
In a simulation \cite{lund2021a}, two layers of thin crystals made with Ag(111) and Ag(200) increased the effective area by about 65\% (see Fig. \ref{fig:multilayerLaue}). A third layer did not further increase the lens throughput. It must be stressed that with multiple layers, the diffracted radiation from any one layer will be attenuated by all of the other layers. The number of layers maximizing the effective area will depend on the crystal parameters (thickness, mosaicity, diffraction efficiency). \subsubsection{Flux concentration and imaging properties of Laue lenses} \label{PSF} The sensitivity of a telescope using a Laue lens depends on the effective area over which flux is collected, but because observations will almost always be background limited it is also a function of the extent to which the collected flux is concentrated into a compact region in the detector plane. For a given lens design the collecting area at a particular energy is just the sum of the areas of the crystals multiplied by their reflecting efficiency at that energy. Obviously only those crystals for which the incidence angle is close to the Bragg angle need be considered. In practice this means that the only crystals that contribute are those with centres that fall inside an annulus whose width depends on the extent, $\Delta\theta$, of the rocking curve. For broadband lenses the diffracted energy is a simple inverse function of radius from the axis. Consequently, crystals at the centre of the band will contribute most, with the response decreasing towards the edges of the annulus. Making the small angle approximation $\theta = r/(2F)$, the width of the annulus is given by: \begin{eqnarray} \Delta r = \frac{\Delta\theta}{d\theta/dr} = 2F \Delta \theta . \end{eqnarray} As the radius of the annulus is also proportional to $F$ this means that its area is proportional to $F^2$.
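The $F^2$ scaling of the diffracting annulus can be sketched numerically; the Bragg angle of 0.1$^\circ$, 100~m focal length and 30~arcsec rocking-curve width used below are illustrative assumptions, not a specific design.

```python
import math

ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

def annulus(f_m, theta_rad, dtheta_rad):
    """Radius, width and area of the diffracting annulus: r = 2*F*theta, dr = 2*F*dtheta."""
    r = 2 * f_m * theta_rad
    dr = 2 * f_m * dtheta_rad
    area = 2 * math.pi * r * dr  # both r and dr scale with F, so area scales as F^2
    return r, dr, area

# Assumed: Bragg angle 0.1 deg, rocking-curve width 30 arcsec
theta = math.radians(0.1)
r1, dr1, a1 = annulus(100.0, theta, 30 * ARCSEC)
r2, dr2, a2 = annulus(200.0, theta, 30 * ARCSEC)
print(f"dr = {dr1*100:.1f} cm, area ratio = {a2/a1:.1f}")  # doubling F quadruples the area
```

For these values the annulus is only a few centimetres wide, which is why the smooth radial distribution of crystals assumed in the PSF discussion matters.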
Because each crystal simply changes the direction of the incoming parallel beam it will illuminate a region in the detector plane identical in size and shape to the profile it presents to the incoming radiation. This immediately means that the focal spot can never be smaller than the size of the crystals. Moreover, crystals that are not at the centre of the illuminating annulus will have a `footprint' in the detector plane that is offset from the instrument axis by a corresponding distance. Thus to a good approximation the radial form of the on-axis PSF of a broadband Laue lens is the convolution of the (scaled) rocking curve and a flat-topped function corresponding to the crystal size\footnote{The azimuthal extent of the crystals has been ignored and it is assumed that the crystals are smoothly distributed in radius.}. As is usual with telescopes, other things being equal, the best signal to noise ratio will be obtained with a compact PSF. \begin{itemize} \item{ In circumstances where the use of smaller crystals is feasible and would lead to a significantly narrower PSF then choosing them will always offer an advantage.} \item{ If the PSF width is dominated by the rocking curve width the situation is more complex. Decreasing the mosaicity will then narrow the PSF but also decrease the area of the diffracting annulus, reducing the advantage to be gained. } \item{ Lastly, because of the $F^2$ dependence of the area of the lens over which crystals diffract a given energy, a wider PSF can actually provide an advantage if it is the result of an increased focal length - although the signal will be spread over a detector area proportional to $F^2$, the uncertainty due to background only increases as $F$. This of course assumes that such a design is feasible given the cost and mass of the larger lens and detector.} \end{itemize} The above discussion concerns the response to an on-axis source.
As Laue lenses are single reflection optics, they do not fulfill the Abbe sine condition and therefore for off-axis sources they are subject to coma. The aberrations are very severe as illustrated in Figure~\ref{fig:ImageAberration}. The integrated flux is little affected by off-axis pointing, but importantly the image is smeared over a larger area and thus the signal-to-noise ratio is significantly reduced. \subsection{Technological challenges} \label{challenges} The development of Laue lenses presents several technological challenges. These can be divided into two categories. The first is associated with the search for, and production of, suitable materials and components. Highly reproducible crystals with high reflectivity are needed, as are thin, rigid and low Z substrates and structures to minimize the absorption. The second category is related to the required accuracy of positioning and alignment, both for the mounting of the crystals in the Laue lens and for the alignment of the lens with respect to the focal plane detector. Some of the main issues that are being faced in studies and development of Laue lenses are described below. \subsubsection{I. Production of proper crystals and substrate} \label{crystals} In order to cover a competitive geometric area a Laue lens must contain a large number of crystal tiles. The production of large quantities of crystals having the optimal properties for providing a reflectivity that is as high as possible is still problematic. Crystal growth has been described as an art as well as a science. Advanced technologies present their own problems. For instance, it has been shown that bent crystals reduce the width of the PSF compared with flat crystals, but the advantages depend on very accurate control of the curvature. \subsubsection{II a. Crystal mounting methods and accuracy} In a Laue lens it is obviously important that each crystal is properly oriented.
It is useful to consider three angles describing the orientation of a generic crystal: \begin{itemize} \item (i) a rotation $\alpha$ about an axis parallel to the instrument axis. If there is an error in $\alpha$, diffraction will occur with the expected efficiency but the photons will arrive in the focal plane displaced by a distance $R \Delta\alpha$, where $R$ is the distance of the crystal from the instrument axis. This displacement should be kept small compared with the spatial scale of the PSF. \\ \item (ii) a rotation $\phi$ about an axis normal to the diffracting crystal planes. To first order an error in $\phi$ will not have any effect on efficiency or imaging.\\ \item (iii) a rotation $\theta$ about an axis lying in the crystal plane and orthogonal to the instrument axis. If $\theta$ does not have the intended value then the energy at which the reflectivity is highest will change. The position in the focal plane where photons of a given energy arrive will not be altered. However, the number of photons diffracted by the crystal may either increase or decrease depending on the energy considered. \end{itemize} The probable effect is a spreading of the PSF: the response of crystals that are not at the optimum radius for diffracting photons of a given energy towards the centre of the focal spot is enhanced, at the expense of that of crystals that are. To avoid such spreading, errors in $\theta$ should be kept much smaller than the rocking curve width. The most obvious mounting method is to use adhesives to bond the crystals to a supporting structure at their proper position and orientation. However, due to glue deformation during the polymerization phases it has not proven easy to maintain the necessary precision \cite{2014NIMPA.741...47B,2015SPIE.9603E..08V}.
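The focal-plane displacement in case (i) above is easy to quantify. The crystal radius of 1.5~m and the 10~arcsec error used below are purely illustrative assumptions:

```python
import math

ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

def focal_plane_shift(r_m, dalpha_arcsec):
    """Displacement R*dalpha of the diffracted spot for a rotation error about the optical axis."""
    return r_m * dalpha_arcsec * ARCSEC

# Assumed: crystal 1.5 m off-axis, 10 arcsec orientation error
shift = focal_plane_shift(1.5, 10.0)
print(f"{shift*1e3:.3f} mm")  # ~0.07 mm, small compared with a mm-scale PSF
```

An error of tens of arcseconds thus stays well below a millimetre-scale PSF, whereas errors in $\theta$ must instead be compared with the rocking-curve width.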
The amount of misalignment that is introduced depends on the type of adhesive used and on the polymerization process (two-component epoxy, UV-curable, thermal polymerization, etc.). In the {\claire} experiment the effects of uncertainties in the bonding process were avoided by the use of a manual adjustment mechanism (Sect.~\nameref{sect:claire}), but this technique is probably not appropriate for space instrumentation. \subsubsection{II b. Laue lens alignment} Because the MeV sky is poorly known at present, the issue of how to verify the correct alignment of gamma-ray diffraction instruments after launch and during operations is important. It is therefore suggested that some optical means of verifying the shape of the large-scale structure of the lens should be incorporated from the start of the project. One possible way is to install mirror reflectors on the lens structure, the orientation of which can be monitored, perhaps from a separate detector spacecraft. If the attachment technique for these mirrors is similar to that used for the Laue crystals the long-term stability of the mirror alignment will also give some confidence in the stability of the crystal mounting. \subsection{Examples of Laue lens projects} In this section we will review the Laue lens experiments that have been realized or proposed from the early 2000s until the present. The CLAIRE project is the only Laue lens instrument that has actually flown. Other experiments have been realized in the laboratory as R\&D projects and were directed at the advancement of well-defined aspects of the Laue lens technology (mainly crystal production and tile alignment). Studies have been conducted of several possible space missions based on Laue lenses.
\subsubsection{The {\textcolor{green}{CLAIRE}} balloon project (2001)} \label{sect:claire} A pioneering proof-of-principle Laue telescope, CLAIRE, was built and successfully flown as a balloon payload by the Toulouse group in 2000 and 2001 \cite{laporte2000, halloin04}. The balloon payload is shown in Figure~\ref{claire:instrum} during flight preparations. The {\claire} experiment is an example of a narrow energy pass-band instrument. It was designed for focusing photons with energy $\sim 170$~keV, with a pass-band of $\sim$ 4~keV at a focal distance of 2.8~m. The lens (Figure~\ref{fig:claire:lens}) consisted of 659 Germanium/Silicon mosaic crystals arranged in 8 concentric rings providing a collecting area of 511 cm$^2$ and a field of view ({\sc fov}) of 90~arcseconds. The mosaicity of the crystals selected was in the range 60 to 120 arc-seconds, leading to an angular resolution for the instrument of 25-30 arcseconds. Two crystal sizes were used: 10 $\times$ 10 mm$^2$ and 7 $\times$ 10 mm$^2$. The crystals focused the radiation onto a Germanium detector with 9 elements, each 15 $\times$ 15 $\times$ 40~mm$^3$, having an equivalent volume for background noise of 18 cm$^3$. The fine tuning of the lens utilized a mechanical system capable of tilting each crystal tile until the correct diffracted energy was detected. The crystals were mounted via flexible lamellae on a rigid Titanium frame. The tuning of the individual crystals was done manually with adjustment screws. Due to the finite distance (14~m) of the X-ray source during the crystal tuning phase, the gamma-ray energy used was lowered from 170~keV to 122~keV. Moreover, also due to the finite source distance used for the crystal alignment, only a limited fraction of each crystal was effectively diffracting at the tuning energy, their subtended angle (about 150 arc-seconds, as seen from the source) being much larger than the crystal mosaicity.
The performance of the complete lens was verified using a powerful industrial X-ray source at a distance of 200~m (hence a diffracted energy of 165.4 keV). Seen from a distance of 200~m each crystal subtended an angle of about 10~arc-seconds, i.e. significantly less than the crystal mosaic width. The {\claire} lens was flown twice on balloon campaigns in 2000 and 2001. In both flights the target source was the Crab Nebula. The observed diffracted signal at 170~keV was found to be consistent with that expected from the Crab Nebula given the lens peak efficiency, estimated at about 8\% from ground measurements \cite{vonballmoos05}. \subsubsection{The MAX project (2006)} \label{MAXproject} Following up on the successful {\claire} project, the French Space Agency, CNES, embarked on the study of a project, `MAX', for a satellite mission with a Laue lens. MAX was planned as a medium-sized, formation-flying project with separate lens and detector spacecraft launched together by a single launcher. The scientific aims were (i) the study of type Ia supernovae (through observations of the gamma-ray lines at 812 and 847~keV), (ii) a search for localized sources of electron-positron annihilation radiation at 511~keV and (iii) a search for 478~keV line emission from $^7$Be-decay associated with novae. These objectives could be met by a Laue lens with two energy passbands: 460--530~keV and 800--900~keV.
The left panel of Figure \ref{fig:MAXlensLayout} shows the lens proposed for MAX, which would contain nearly 8000 crystal tiles of 15 $\times$ 15 mm$^2$ with a mosaic spread of 30 arc-seconds. The total mass of the Laue crystals was expected to be 115 kg. A focal length of 86~m was foreseen. The predicted response of the MAX lens is shown in the right panel of Figure \ref{fig:MAXlensLayout}. In the end, however, CNES decided not to continue the development of MAX. \subsubsection{GRI: the Gamma-Ray Imager (2007)} \label{gri} The Gamma Ray Imager (GRI) mission concept was developed by an international consortium and proposed to the European Space Agency in response to the `Cosmic Vision 2015--2025' plan. GRI consisted of two coaligned telescopes, each with a focal length of 100~m: a hard X-ray multilayer telescope working from 20~keV up to 250~keV and a Laue lens with a broad pass-band from 220~keV to 1.3~MeV. The low energy limit of the GRI Laue lens was driven by the anticipated upper limit of the multilayer technology. The NuSTAR mission has demonstrated the capability of multilayer telescopes to focus photons up to 70-80 keV. Further developments are expected to allow multilayers to be used up to 200--300~keV. The two optics were proposed to share a single solid state detector using CZT (Cadmium Zinc Telluride) crystals, which are attractive as they can be used without cooling. Thanks to the 3-d capability of pixellated CZT, GRI could also be exploited for hard X-/soft gamma-ray polarimetry. Due to the long focal length, a two spacecraft, formation flying mission was proposed. With these features, GRI was expected to achieve 30~arcsec angular resolution with a field of view of 5~arcmin. Unfortunately, the mission was not selected by CNES or ESA for further assessment.
\subsubsection{ASTENA: an Advanced Surveyor of Transient Events and Nuclear Astrophysics (2019)} \label{astena} Within the European AHEAD project (integrated Activities in the High Energy Astrophysics Domain) \citep{natalucci18}, a mission was conceived to address some of the current issues in high energy astrophysics: a high sensitivity survey of transient events and the exploitation of the potential of gamma-ray observations for Nuclear Astrophysics. This mission concept, named ASTENA (Advanced Surveyor of Transient Events and Nuclear Astrophysics) \citep{frontera19wp, guidorzi19wp}, has been proposed to the European Space Agency in response to the call ``Voyage 2050''. It consists of a Wide Field Monitor with both spectroscopic and imaging capabilities (WFM-IS), and a Narrow Field Telescope (NFT) based on a broad energy pass-band (60--700 keV) Laue lens with a field of view of about 4~arcmin and an angular resolution of $\sim$30 arcsec. The Laue telescope exploits bent tiles of Si and Ge using the (111) reflecting planes. The tiles are 30 $\times$ 10 mm$^2$ (the longer dimension being in the focusing direction) and have a bending curvature radius of 40~m. The focal length of the lens is 20~m. The crystals are arranged in 18 rings with an outer diameter of $\sim$ 3~m, giving an outstanding geometric area of about 7 m$^2$. The focal plane detector consists of 4 layers of CZT drift strip detectors \citep{kuvvetli05}, each layer having a cross section of 80 $\times$ 80~mm$^2$ and a thickness of 20~mm.
\section{Fresnel Lenses} \label{sect:fresnel} {\index{Diffraction limited}} Fresnel lenses have been extensively discussed and studied for use in astronomy at X-ray energies (\emph{e.g.} \cite{1996SPIE.2805..224D,2004ApOpt..43.4845S,2004SPIE.5168..411G, 2006ExA....21..101B,2008SPIE.7011E..0UG, 2012ApOpt..51.4638B,2018ApOpt..57.1857B}) and missions exploiting them in that band have been proposed \cite{2008SPIE.7011E..0TS,2012SoPh..279..573D,2020SPIE11444E..7VK}. For a review see \cite{2010XROI.2010E..16S}. The circumstances in which such lenses offer diffraction limited resolution are discussed in \ref{willingale}. Although the idea of \textcolor{green}{Fresnel lenses} \index{Fresnel lenses} for gamma-rays was introduced at least as far back as 2001 \cite{2001A&A...375..691S,2002A&A...383..352S}, when their potential for micro-arc-second imaging was pointed out, the idea has rested largely dormant. It will be seen below that the main reason for this is the extremely long focal lengths of such lenses. A secondary reason is that although they offer effective areas far greater than any other technique, with a simple lens the bandwidth over which this is achieved is narrow because of chromatic aberration. However, if those difficulties can be overcome, gamma-ray Fresnel lenses offer some unique possibilities. Like Laue lenses, Fresnel lenses provide a way of concentrating incoming gamma-rays onto a small, and hence low background, detector. A gamma-ray Fresnel lens could focus the flux incident on an aperture that could be many square metres into a millimetre scale spot with close to 100\% efficiency. Moreover, such a lens would also provide true imaging in the sense that there is a one-to-one correspondence between incident directions and positions in the focal plane. At photon energies above the limits of grazing incidence optics no other technique can do this.
Finally, the imaging can be diffraction-limited, which in the gamma-ray band with a metre scale aperture means sub-micro-arcsecond resolution. What is more, even if missions employing them present challenges, gamma-ray Fresnel lenses are, \emph{per se}, low-technology items. A conventional refractive lens, operating for example in the visible band, focuses radiation by introducing a radius-dependent delay such that radiation from different parts of the lens arrives at the focal spot with the same phase (Figure \ref{fig:1}a). A Fresnel lens\footnote{Strictly the term used should be ``Phase Fresnel Lens'' as the shorter form can also be used for stepped lenses in which coherence is not maintained between steps.} (Figure \ref{fig:1}b) achieves the same phase-matching by taking advantage of the fact that the phase of the incoming radiation never needs to be changed by more than $2\pi$. Consequently the maximum thickness of the lens can be reduced to that necessary to produce a phase change of $2\pi$, a thickness termed here $t_{2\pi}$. It is usual to write the complex \textcolor{green}{refractive index} \index{Refractive index} as $ n = 1 - \delta - i\beta $ where for gamma-rays both $\delta$ and $\beta$ are small and positive. The imaginary component describes absorption and does not affect the phase, so $t_{2\pi} = \lambda/\delta$ where $\lambda$ is the wavelength. The fact that $\delta$ is positive for gamma-rays means that a converging lens has a concave profile and a Fresnel lens has the form illustrated in Figure \ref{fig:1}c. It is a more efficient form of a \textcolor{green}{`Phase Zone Plate'} \index{Phase Zone Plate} (Figure \ref{fig:1}d) in which the thickness profile has just two levels differing in phase shift by $\pi$.
The parameter $\delta$ is given in terms of the atomic scattering factor $f_1(x,Z)$ discussed in Section \nameref{CrystalDiff} by: \begin{equation} \delta = \frac{r_e\lambda^2}{2\pi}n_a f_1(x,Z) \label{eq:delta} \end{equation} where $r_e$ is the classical electron radius and $n_a$ is the atomic density. For lenses of the type considered here $x$ is essentially zero. Well above all absorption edges $f_1$ approaches the atomic number $Z$ and so is constant. Thus $\delta$ is proportional to $\lambda^2$, or inversely proportional to the square of the photon energy $E$. In principle, in the region of the 1.022 MeV threshold for pair production in the nuclear electric field and above, Delbr\"uck scattering should be taken into consideration. Early reports of an unexpectedly large contribution to $\delta$ from this effect \cite{2012PhRvL.108r4802H} turned out to be mistaken \cite{2017PhRvL.118p9904H}, but did lead to experimental confirmations of predicted gamma-ray refractive indices at energies up to 2 MeV \cite{2017PhRvA..95e3864G, 2017PhLA..381.3129K}. Like those from Delbr\"uck scattering, contributions from nuclear resonant scattering will also be negligible in most circumstances. Although $\delta$ is extremely small at gamma-ray energies, the wavelength is also extremely short, for example 1.24 pico-metres at 1 MeV. Using $\delta$ from Eq. \ref{eq:delta} results in the following expression for $t_{2\pi}$ in terms of some example parameters: \begin{equation} t_{2\pi} = \frac{\lambda}{\delta} = 2.98 \biggl(\frac{Z}{A}\biggr)^{-1} \biggl(\frac{E}{1 \textrm{ MeV}} \biggr) \biggl(\frac{\rho}{1 \textrm{ g cm}^{-3}}\biggr)^{-1} \textrm{ mm}. \end{equation} Noting that for materials of interest $Z/A$ is in the range 0.4 to 0.5, this means that a gamma-ray Fresnel lens need have a thickness of only the order of millimetres. The period of the sawtooth profile, which is smallest at the edge of the lens, is a crucial parameter.
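Before turning to the lens period, the thickness scaling just derived can be cross-checked from first principles. The sketch below recomputes $t_{2\pi} = \lambda/\delta$ with $f_1 \approx Z$; the choice of aluminium and its material constants is an illustrative assumption.

```python
import math

R_E = 2.8179e-15       # classical electron radius, m
N_A = 6.0221e23        # Avogadro's number, 1/mol
HC_KEV_M = 1.23984e-9  # h*c in keV m

def t_2pi_mm(e_kev, z, a, rho_g_cm3):
    """Thickness for a 2*pi phase shift: t = lambda/delta, delta = r_e*lambda^2*n_a*Z/(2*pi)."""
    lam = HC_KEV_M / e_kev                # wavelength, m
    n_e = rho_g_cm3 * 1e6 / a * N_A * z   # electron density n_a*Z, 1/m^3
    delta = R_E * lam**2 * n_e / (2 * math.pi)
    return lam / delta * 1e3              # mm

# Assumed example: aluminium (Z=13, A=26.98, rho=2.70 g/cm^3) at 1 MeV
print(f"{t_2pi_mm(1000.0, 13, 26.98, 2.70):.2f} mm")  # about 2.3 mm
```

The result agrees with the closed-form scaling $2.98\,(Z/A)^{-1}(E/1\,\textrm{MeV})(\rho/1\,\textrm{g cm}^{-3})^{-1}$~mm to within rounding.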
Again in terms of example parameters, for a lens of diameter $d$ and focal length $F$ the minimum period is given by \begin{equation} p_{min} = 2\lambda \frac {F}{d} = 2.48 \biggl( \frac{E}{1\; \textrm{ MeV}}\biggr)^{-1} \biggl( \frac{F}{10^9 \textrm{ m}}\biggr) \biggl( \frac{d}{1\; \textrm{ m}}\biggr)^{-1} \textrm{ mm}. \label{eqn:pmin} \end{equation} The difficulties lie in the $F$ term. For reasonable values of $p_{min}$ extremely long focal lengths are required. $p_{min}$ is an important parameter in another respect. The PSF will be an Airy function with a FWHM of $1.03\lambda/d$, so the focal spot size using this measure will be given by \begin{equation} w = 1.03 \frac{F \lambda}{d} = 0.501 p_{min}. \label{eqn:focalspot} \end{equation} That is to say the size of the focal spot is about half the period at the periphery of the lens. The corresponding angular resolution is $w/F$ and is given in micro-arcseconds ($\mu"$) by \begin{equation} \Delta \theta_d = 0.263 \left( \frac {E}{1\: \textrm{MeV}} \right)^{-1} \left( \frac {d}{1\: \textrm{m}} \right)^{-1}. \label{eqn:diff_lim} \end{equation} Thus gamma-ray Fresnel lenses have the potential to form images with an angular resolution better than that available in any other waveband and would be capable of resolving, for example, structures on the scale of the event horizons of extra-galactic massive black holes. \subsection{Construction} \label{sect:construct} If the full potential of a Fresnel lens is to be realised, then Eq. \ref{eqn:focalspot} implies that $p_{min}$ should be larger than the detector spatial resolution and so should be of the order of millimetres or more. Except at energies above 1 MeV the required thickness is also of the order of millimetres (Fig. \ref{fig:2}a), so high aspect ratios are not then needed. Almost any convenient material can be used -- the nuclei only serve to hold in place the cloud of electrons that produces the phase shifts! 
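The scalings in Eqs. \ref{eqn:pmin} and \ref{eqn:diff_lim} can be evaluated for the parameters of the reference design used later in this chapter (500 keV, $F = 10^6$ km, $d = 2$ m). This is a sketch: the factor 1.03 corresponds to the FWHM measure used above, while 1.22 gives the Rayleigh-criterion figure quoted elsewhere in the chapter.

```python
import math

MUAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1e6  # micro-arcseconds per radian

def wavelength_m(e_mev):
    """Photon wavelength in metres (1.24 pm at 1 MeV)."""
    return 1.23984e-12 / e_mev

def p_min_mm(e_mev, f_m, d_m):
    """Minimum profile period p_min = 2 lambda F / d, in mm."""
    return 2.0 * wavelength_m(e_mev) * f_m / d_m * 1e3

def diff_limit_muas(e_mev, d_m, factor=1.03):
    """Angular resolution factor * lambda / d (1.03: FWHM, 1.22: Rayleigh)."""
    return factor * wavelength_m(e_mev) / d_m * MUAS_PER_RAD

# Reference design: 500 keV, F = 1e6 km = 1e9 m, d = 2 m
print(p_min_mm(0.5, 1e9, 2.0))          # ~2.5 mm
print(diff_limit_muas(0.5, 2.0))        # ~0.26 micro-arcsec (FWHM measure)
print(diff_limit_muas(0.5, 2.0, 1.22))  # ~0.31 micro-arcsec (Rayleigh)
```

The millimetre-scale $p_{min}$ and sub-micro-arcsecond resolution follow directly.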
Dense materials have the advantage of minimising thickness but also tend to be of higher $Z$ and hence more lossy at low and high energies. Over much of the gamma-ray band all reasonably low $Z$ materials have similarly low losses (Figure \ref{fig:2}b). Because the refractive index is so close to unity, dimensional tolerances are relatively slack. Using the Mar\'echal formula \cite{1982JOSA...72.1258M}, if the \textit{rms} errors in the profile are kept within 3.5\% of the maximum lens thickness (assumed to be $t_{2\pi}$) the loss in Strehl ratio (on-axis intensity) will be less than 5\%. If the lens is assembled from segments then the precision needed in their alignment is only at a similar level. Thus a range of constructional technologies can be considered, including diamond point turning, vapour-phase deposition, photo-chemical etching and 3-d printing. \subsection{The focal length problem} As argued above, if the best possible angular resolution is sought then detector considerations drive one to $p_{min}$ of the order of millimetres. When this is coupled with a requirement for a reasonable collecting area, Eq. \ref{eqn:pmin} implies a focal length of the order of $10^5{\mbox{--}}10^6$ km. This sort of focal length clearly demands \textcolor{green}{formation flying} \index{Formation flying} of two spacecraft, one carrying the lens and the other a focal plane detector. Minimising station-keeping propulsion requirements suggests locations near to a Sun-Earth Lagrangian point. The line of sight from detector to lens must be controlled with sufficient accuracy to keep the image on the detector, so the precision needed depends on the size of the detector. This could be from a few cm up to perhaps 1 m. Changes in the direction of that line of sight need to be determined with an accuracy corresponding to the angular resolution aimed for, which could be sub-micro-arcsecond. 
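The Mar\'echal tolerance quoted earlier in this section can be checked directly. The sketch below assumes the leading-order form of the Mar\'echal approximation, $S \approx 1 - (2\pi\sigma)^2$ for an \textit{rms} wavefront error of $\sigma$ waves; profile errors of 3.5\% of $t_{2\pi}$ correspond to $\sigma = 0.035$ waves.

```python
import math

def strehl_marechal(rms_waves):
    """Marechal approximation: S ~ 1 - (2 pi sigma)^2, sigma in waves."""
    return 1.0 - (2.0 * math.pi * rms_waves) ** 2

# Profile rms errors of 3.5% of t_2pi correspond to 0.035 waves of wavefront error
loss = 1.0 - strehl_marechal(0.035)
print(loss)  # ~0.048: just under the 5% Strehl-ratio loss quoted above
```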
Detailed studies have been performed of missions that individually meet each of these requirements. No single study meets them all together, but on the other hand the complexity of the instrumentation and spacecraft was in each case much greater than that for a gamma-ray Fresnel telescope. New Worlds Observer requires a 50~m star shade and a 4~m visible-light diffraction-limited telescope to be separated by $8\times 10^4$~km \cite{2009SPIE.7436E..06C}. The transverse position of the telescope must be maintained to within a few cm. The \textcolor{green}{MAXIM} {\index{MAXIM}} X-ray interferometer mission calls for a fleet of 26 spacecraft distributed over an area 1 km in diameter to be separated from a detector craft $2\times 10^4$ km away \cite{2003ExA....16...91C, 2005AdSpR..35..122C}. The requirement for a diffraction-limited gamma-ray Fresnel lens mission to track changes in the orientation of the configuration in inertial space at the sub-micro-arcsecond level is similar to the corresponding requirement for MAXIM, though only two spacecraft are needed. MAXIM studies envisaged a `super star tracker'. Such a device could locate a beacon on the lens spacecraft relative to a stellar reference frame. In terms of separation distance, the requirement for $\sim 10^6$ km separation can be compared with the needs of the LISA gravitational wave mission \cite{2021AdSpR..67.3868J}, for which 3 spacecraft must each be separated from the others by $2.5\times 10^6$ km, though in this case it is the rate of change of inter-spacecraft distance that requires strict monitoring rather than the orientation. 
In Figure \ref{fig:2}b the mean transmission of a lens with a maximum thickness of $t_{2\pi}$ and a typical profile is shown for some example materials. Transmissions in excess of 95\% should generally be possible. For an Airy disc 84\% of the radiation falls within the central peak, that is to say within a radius equal to the resolution according to the Rayleigh criterion. As noted in Section \nameref{sect:construct}, allowing for profile errors with an \textit{rms} level of 3.5\% of the maximum height could reduce the Strehl ratio by 5\%. If the errors are random, the form of the PSF will be little changed and so the effective area will be reduced in proportion. Combining all these factors together indicates a focussing efficiency of about 75\%. We take as a reference design a 2~m diameter lens made from polycarbonate with a nominal working energy of 500 keV and focal length $10^6$~km. The baseline active thickness is taken to be $t_{2\pi} = 2.4$~mm and absorption in a 2 mm substrate has been allowed for. Additional support against diaphragm mode vibrations would obviously be needed during launch but out of plane distortions during operation have little effect. Allowing for the above factors, an effective area over 20000~cm$^2$ should be attainable with a simple Fresnel lens having these parameters. The diffraction-limited angular resolution of such a lens would be 0.31~micro-arc-seconds. \subsection{Chromatic aberration} Unfortunately the above indication of a possible effective area applies only at the particular energy and focal distance for which the lens has been designed. An important limit to the performance of a Fresnel lens is the chromatic nature of the imaging. The bandwidth over which good focussing is achieved is very narrow. For even quite small deviations from the wavelength for which the profile has been designed, the focal length changes and for a fixed detector position the focal spot rapidly becomes blurred. 
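Before quantifying the chromatic bandwidth, the effective-area budget quoted above can be sketched as a simple product of factors (the three efficiency factors are those given in the text; absorption in the substrate is neglected here, so the result slightly exceeds the 20000~cm$^2$ figure).

```python
import math

d_m = 2.0                                    # lens diameter
geom_cm2 = math.pi * (d_m * 100.0 / 2) ** 2  # geometric area, cm^2 (~31416)

transmission = 0.95  # mean transmission of the profile (Figure 2b)
airy_core = 0.84     # fraction of the Airy pattern inside the central peak
strehl = 0.95        # allowance for 3.5% rms figure errors

a_eff = geom_cm2 * transmission * airy_core * strehl
print(round(a_eff))  # ~24000 cm^2 before substrate absorption
```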
The FWHM of the on-axis intensity as a function of energy is approximately \begin{equation} \frac{\delta \lambda}{\lambda} = \frac{\delta E}{E} = \frac{1.79}{N_F} \label{eqn:deltaE} \end{equation} where $N_F$ is the \textcolor{green}{Fresnel number}, \index{Fresnel number} $r^2/(f\lambda)$, with $r$ the lens radius and $f$ the focal length. When the lens thickness is $t_{2\pi}$ then $N_F$ is equal to twice the number of rings. If the flux within a focal spot the size of the ideal Airy peak is used as a measure of the response, then the bandwidth is somewhat larger, with the numerical factor in Eq. \ref{eqn:deltaE} increased to about 2.95, and if the flux from a larger detector region is accepted, then the bandwidth increases further as seen in Figure \ref{fig:3}. The improvement will be at the expense of poorer angular resolution and increased detector background. Note that the diffraction-limited angular resolution will anyway only be available if the detector energy resolution is good enough to select only those photons within $\Delta E$. For a High Purity Germanium detector (see Volume 2 of this work, \ref{germanium}), $\frac{\delta E}{E} \sim 4\times 10^{-3}$ at 500 keV, effectively setting a limit of about 700 on the useful Fresnel number of a diffraction-limited telescope using a simple gamma-ray Fresnel lens. Over a much wider band, good focussing can be recovered by adjusting the detector position (Figure \ref{fig:3}), but only radiation within the narrow bands can be focussed efficiently for any given position. There is a direct analogy with the tunable Laue lenses described in \nameref{sect:tunable}, but for a Fresnel lens no adjustment is required to the lens, only a change in focal distance. 
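For the reference design, which has $N_F = 400$, these bandwidth numbers work out as follows (a sketch; the factor 2.95 is the focal-spot-flux value quoted above).

```python
def frac_bandwidth(n_f, factor=1.79):
    """Fractional bandwidth dE/E = factor / N_F (factor ~2.95 for focal-spot flux)."""
    return factor / n_f

print(frac_bandwidth(400))        # ~0.0045: FWHM bandwidth for N_F = 400
print(frac_bandwidth(400, 2.95))  # ~0.0074 using the focal-spot flux measure
# Germanium resolution dE/E ~ 4e-3 at 500 keV caps the useful Fresnel number:
print(2.95 / 4e-3)                # ~700
```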
For visible and IR radiation it has been shown possible to increase the bandpass of Fresnel lenses by correcting the chromaticity of one diffractive lens with that of another one of opposite power, but such systems cannot produce a real image as is required for a gamma-ray telescope \cite{1976ApOpt..15..542B, 2010XROI.2010E..16S}. Similarly, schemes used in those wavebands to make ``Multiorder'' or ``Harmonic'' Fresnel lenses by increasing the maximum thickness of the lens so that it becomes a (different) integer multiple of $t_{2\pi}$ at each wavelength \cite{1995ApOpt..34.2462F, 1995ApOpt..34.2469S} do not work in the gamma-ray band. A small part of the surface of such a lens can be regarded as a diffraction grating blazed to direct radiation into the appropriate order, but as $t_{2\pi}$ for gamma-rays is almost universally proportional to $\lambda^{-1}$, the blaze angle cannot be correct at different energies. However, various possibilities do exist for alleviating the chromaticity problem: \begin{enumerate} \item{ \textcolor{green}{`Axilenses'} \index{Axilenses} in which the pitch is varied in ways other than the $r^{-1}$ variation of a regular Fresnel lens. This can produce an enlarged bandwidth at the expense of efficiency \cite{2009SPIE.7437E..0JS}. The integral over energy of the effective area is close to that of a corresponding classical Fresnel lens. } \item{\textcolor{green}{Achromatic doublets.} \index{Achromatic doublets} While the focal length of a diffractive lens for X-rays or gamma-rays is proportional to $E$, that of a refractive lens is even more strongly dependent on energy, being proportional to $E^2$. 
In various contexts \cite{2002A&A...383..352S, 2003SPIE.4851..599G, 2003Natur.424...50W} it has been pointed out that consequently diffractive and refractive lenses for which the focal lengths are \begin{equation} f_d = \biggl(\frac{E}{E_0}\biggr) \frac{f_0}{2} \hspace{10mm} f_r = -\biggl(\frac{E}{E_0}\biggr)^2 {f_0} \end{equation} can be combined to form an achromatic doublet for which the combined focal length is to first order independent of wavelength. In the gamma-ray band the absorption losses for a full refractive lens with useful parameters are likely to be prohibitive, but steps that are many times $t_{2\pi}$ can be introduced. This leads to a lens having a profile corresponding to the combination of those shown in Figure \ref{fig:4}(a) and (b). Figure \ref{fig:4}(c) is a zoom on parts of the fine structure in the diffractive component, (b). At a given radius only the total thickness is important, so the two components can be separate as shown, or back to back in a single component, or the diffractive profile can be superimposed on that of the refractive one. With such a lens, focusing can then be achieved at a number of wavelengths over an extended band (Figure \ref{fig:5}(a)). The peak effective area is less than that for a single lens but the integral can be several times higher. The mass of the lens will be many times that of a simple lens and the finest scale of the structure must be a factor of two smaller. } \item{\textcolor{green}{Multi-wavelength lenses}. \index{Multi-wavelength lenses} Recently, a number of papers have been published describing the design of `achromatic diffractive lenses' for UV/visible/IR radiation (e.g. \cite{BanerjiSWIR}). The approach used is to optimise the thickness of a number of concentric rings so that the imaging performance over a number of wavelengths is as good as possible based on some figure of merit. The same technique can be used to design gamma-ray lenses. 
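As an aside, the first-order achromatism of the doublet combination above can be verified numerically by treating the pair as thin lenses in contact, $1/f = 1/f_d + 1/f_r$ (an assumption of this sketch).

```python
def f_combined(e, e0=1.0, f0=1.0):
    """Thin-lens pair: diffractive f_d = (E/E0) f0/2, refractive f_r = -(E/E0)^2 f0."""
    f_d = (e / e0) * f0 / 2.0
    f_r = -((e / e0) ** 2) * f0
    return 1.0 / (1.0 / f_d + 1.0 / f_r)

# At E = E0 the pair focuses at f0, and the focal length is stationary there:
print(f_combined(1.0))                     # 1.0
print(f_combined(0.99), f_combined(1.01))  # both ~1.0001: deviation is second order
```

A 1\% energy offset shifts the combined focal length by only $\sim 10^{-4}$, whereas a simple diffractive lens would shift by the full 1\%.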
At first sight the reported performance of the IR lens designs is remarkably good with, for example, 91\% `average efficiency' reported over wavelengths spanning a factor of 2. On detailed examination, though, the efficiency is low except at a few specific wavelengths for which the design was optimised, and analysis suggests that even at these wavelengths the efficiency may have been overestimated \cite{2021Optic...8..834E}. If the maximum thickness is restricted to be $\sim t_{2\pi}$ then the integrated effective area of a lens designed in this way is found to be similar to that of a simple Fresnel lens. If larger thicknesses are allowed then the performance and limitations are similar to those for an achromat of the same total thickness, but there is the possibility of choosing the energies at which the performance is optimised. An example is illustrated in Figure~\ref{fig:5}~(b), which shows the effective area of a lens designed to work simultaneously at the energies of 4 astrophysical gamma-ray lines. The approach used is basically that described by Doskolovich \emph{et al.} \cite{2020OExpr..2811705D}, which maximises the Strehl ratio averaged over the design energies. However, an additional step was performed in which different target phases were tried, the one selected being that which gave the highest value of the minimum effective area over the four energies, thus avoiding solutions in which a single energy dominated. } \begin{table}[ht] \caption{Parameters of example lens designs. All the designs have the following in common -- nominal energy: 500 keV, diameter: 2 m, focal distance: $10^6$ km, Fresnel number: 400, diffraction-limited angular resolution: 0.3 micro-arc-seconds, material: polycarbonate. A substrate of 2 mm is allowed for in the mass and in calculating the effective area but not included in the `active thickness'. The effective areas quoted are those for the lens and do not include detector efficiency. 
Those quoted here assume that the flux is collected from a 12 mm diameter region in the image plane. \emph{rms} figure errors of 3.5\% of $t_{2\pi}$ have been allowed for. } \label{table:1} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Figure} & \multicolumn{2}{c|}{Active thickness $t$} & Mass & \multicolumn{2}{c|}{Effective Area} \\ \cline{3-4}\cline{6-7} & & mm & $t/t_{2\pi}$ & kg & Peak (cm$^2$) & Integral (cm$^2$ MeV) \\ \hline Simple Fresnel & \ref{fig:3}a & 2.4 & 1 & 12 & 26735 & 331 \\ Achromatic Doublet & \ref{fig:5}a(i) & 50 & 21 & 100 & 15768 & 720 \\ " & \ref{fig:5}a(ii) & 191 & 81 & 332 & 5224 & 440 \\ Multiwavelength & \ref{fig:5}b & 24 & 10 & 52 & 17064 & 642 \\ \hline \end{tabular} \end{center} \end{table} \end{enumerate} \newpage \section{Detector issues for focused gamma-rays} A discussion of detectors for gamma-ray astronomy can be found in Volume \ref{germanium} of this work. Here we present some specific considerations for detectors for focusing gamma-ray telescopes. \begin{itemize} \item {\bf Low Background.} Gamma-ray observations are almost always limited by background from diffuse sky emission and events arising directly or indirectly from cosmic rays. It is imperative to suppress these as far as possible. Shielding is heavy and even active shielding is imperfect, some background actually being created by the shielding itself. \item {\bf Sensitivity to Photon Arrival Direction.} A key method of background reduction in the low energy gamma-ray band involves extracting information about the direction from which a photon arrived. Where the first interaction of a photon in the detector is a Compton scattering and the 3-d locations of that and subsequent interactions can be recorded, a Compton kinematics analysis allows constraints to be placed on the arrival direction. 
Although there are ambiguities and in the MeV range the precision is limited to a few degrees, background can be drastically reduced by selecting only events whose direction is consistent with having passed through the lens. Obviously such analysis is not possible for all events, but it is possible to identify a subset of events with low detection efficiency but very low background and give these a high weight during analysis. \item{\bf Position Resolution.} In an imaging system it is strictly the position where a photon first interacts in the detector that is important, though the location of the centroid of all energy deposits may provide an adequate approximation. Even when the instrument is used as a flux-collector the position information can be used to define a region of the detector within which events are counted or, better, to attribute to each event a weight depending on the noise level and the expected response at its location. The latter amounts to `PSF fitting'. Of course Compton kinematics analysis requires recording the positions in 3-d of subsequent interactions as well as the first. \item {\bf Spatial Extent.} For a Laue lens a large detector will allow all of the events in the broad wings of the PSF to be captured. With the long focal lengths associated with Fresnel lenses, fields of view tend to be very small and are limited by how large a detector is feasible. In either case a large detector can provide regions that can be used for contemporaneous background estimation, obviating or reducing the need for `off-source' observations. \item {\bf Energy Resolution.} As gamma-ray lenses operate over a limited band, good energy resolution reduces the amount of background accompanying the signal. Fresnel lenses, particularly those that are large or have less extreme focal lengths, suffer from severe chromatic aberration, so this is crucial where they are used. 
More generally, insufficient energy resolution will degrade the angular resolution of the Compton kinematic reconstruction and so increase the background. \item{\bf Detection Efficiency.} Detection efficiency is a major problem in the MeV region. The photons are very penetrating and a non-negligible fraction traverse the detector without interacting at all. \end{itemize} The constraints resulting from these objectives are often conflicting. A dense detector is desirable to maximise efficiency, but for Compton reconstruction low density materials are favoured. Compton reconstruction requires measuring positions and energy deposits at multiple sites, but on the other hand electronic read-out noise has least impact on energy resolution if all of the charge is fed to a single preamplifier. High Purity Germanium detectors provide good efficiency and currently the best energy resolution. Germanium-based Compton cameras are well advanced and demonstrated \cite{Tomsick2019}. Cadmium-Zinc-Telluride (CZT) elements have an energy resolution that is only slightly inferior, do not need cooling and can be assembled in large arrays. More complex detectors using a 3-d imaging silicon scattering column surrounded by a 3-d imaging CZT shield absorbing the scattered photons may reach a higher efficiency, but are currently at a development stage \cite{kuvvetli05}. \section{Conclusions} \label{conclusions} Diffractive gamma-ray optics is a field yet to be explored and exploited. Nevertheless, Laue and Fresnel lenses offer unique possibilities. The examples shown in this review illustrate both their capabilities and their limitations. An important consideration in choosing a lens design is the expected signal and how it compares with the likely event rate due to background in the detector. Increasing the effective area is not necessarily an advantage if the background noise increases. 
For photon-starved observations of continuum sources, though, large integrated effective areas are needed. For applications in which lower effective area or poorer resolution are adequate, reduced diameter or shorter focal lengths can be considered. In some cases an array of smaller lenses can provide the best solution. Observational MeV astrophysics is still in its infancy. We only know of a handful of low energy gamma-ray point sources. This is fewer even than the number known at 100~MeV and above from the SAS-2 and COS-B missions in the 1970s and 1980s. Today high energy gamma-ray sources are counted in the thousands. For MeV astrophysics we now await results from the COSI mission \cite{tomsick19}, which in a few years will give us a much clearer picture of the sky at a few MeV; this will be very important in designing future gamma-ray telescopes. \section{Cross References} \begin{enumerate} \item {Willingale, Richard. Handbook of X-ray and Gamma-ray Astrophysics, Volume 1: X-ray Experimental Techniques and Missions, ``Diffraction-limited optics and techniques'' \label{willingale}} \item{ TBD, Handbook of X-ray and Gamma-ray Astrophysics, Volume 2: Gamma-ray Experimental Techniques, Observatories, and Missions, ``Germanium detectors for gamma-ray astronomy'' \label{germanium}} \end{enumerate} \bibliographystyle{ws-book-van} \bibliography{Laue_Fresnel_arxiv} \printindex
Title: Origin of nontopological soliton dark matter: solitosynthesis or phase transition
Abstract: This work demonstrates that nontopological solitons with large global charges and masses, even above the Planck scale, can form in the early universe and dominate the dark matter abundance. In solitosynthesis, solitons prefer to grow as large as possible under equilibrium dynamics when an initial global charge asymmetry is present. Their abundance is set by when soliton formation via particle fusion freezes out, and their charges are set by the time it takes to accumulate free particles. This work improves the estimation of both quantities, and in particular shows that much larger-charged solitons form than previously thought. The results are estimated analytically and validated numerically by solving the coupled Boltzmann equations. Without solitosynthesis, phase transitions can still form solitons from particles left inside false-vacuum pockets and determine their present-day abundance and properties. Even with zero charge asymmetry, solitons formed in this way can have very large charges on account of statistical fluctuations in the numbers of (anti)particles inside each pocket.
https://export.arxiv.org/pdf/2208.12290
\setlength{\parskip}{0.2ex} \thispagestyle{empty} \newpage \setcounter{page}{1} \begingroup \hypersetup{linkcolor=black,linktocpage} \tableofcontents \endgroup \newpage \section{Introduction} \label{sec:intro} Nontopological solitons (NTSs) are fascinating macroscopic states that may exist in theories containing scalar or fermion fields with a conserved global symmetry and nonlinear interactions. Once its global charge is above a minimum charge, an NTS could be stable at the quantum level if it is energetically forbidden to decay into states with smaller charges. Because of their longevity at the cosmological scale, NTSs could account for all or part of the dark matter in our Universe. Their properties and detection methods are also different from ordinary dark matter searches for point-like particles, which make them interesting objects to study. Historically, NTSs were proposed by G. Rosen~\cite{Rosen:1968mfz}, T. D. Lee~\cite{Friedberg:1976me}, and S. Coleman~\cite{Coleman:1985ki} and their collaborators. Examples of NTSs exist in the Minimal Supersymmetric Standard Model (MSSM) where there are nonlinear interactions between squarks or sleptons which carry baryon or lepton numbers~\cite{Kusenko:1997zq,Kusenko:1997si}. In Higgs-portal dark matter models, there may exist NTSs of the dark scalar fields inside which the electroweak symmetry is restored even after the electroweak phase transition~\cite{Ponton:2019hux}. An NTS can also have gauge charges~\cite{Lee:1988ag,Gulamov:2015fya,Brihaye:2015veu,Heeck:2021zvk,Heeck:2021bce} or even topological charges~\cite{Bai:2021mzu} in the presence of a gauge group. Other studies include \cite{Kusenko:1997ad,Dvali:1997qv,Kusenko:1997vi,Berkooz:2005sf,Bishara:2017otb,Heeck:2020bau,Bai:2021xyf,Almumin:2021gax,Lennon:2021zzx,Pearce:2022ovj}; see \cite{Lee:1991ax,Nugaev:2019vru} for reviews. 
Cosmologically, NTSs can be formed through a first- or second-order phase transition (FOPT or SOPT, respectively)~\cite{Frieman:1988ut,Griest:1989cb,Frieman:1989bx,Macpherson:1994wf,Hong:2020est} sometimes referred to as ``solitogenesis,'' where the matter in the unbroken phase is swept together by the bubble wall and condenses into small pockets. In this circumstance, the two states of the same scalar field, the free scalar particles in the broken phase and the NTSs in the unbroken phase, will co-exist. The interactions between free particles and NTSs could then possibly drive the cosmic properties of the NTSs away from ``the initial condition'' right after the phase transition (PT). In other words, the NTSs may absorb or release free scalar particles, or be formed through fusion of the free particles during the cosmic evolution, such that the typical charge and mass of the NTS system evolves with time. This is known as ``solitosynthesis.'' The properties of NTSs depend strongly on the interactions, and therefore the results of solitosynthesis can be very different between models. In this work, we study the cosmic evolution of NTSs with and without solitosynthesis for different globally charged scalar models. Assuming an initial global charge asymmetry, we set up Boltzmann equations for the free-particle--NTS system and examine which species dominates the total charge or energy of the system. This enables the estimation of two crucial temperatures of solitosynthesis: the NTS {\it charge-domination temperature} $T_D$ when the charge abundance of NTSs dominates that of free particles, and the {\it freeze-out temperature} $T_F$ when the NTS number density goes out of equilibrium. If the former temperature is higher, NTSs become the dominant component of charge in the dark sector, which we refer to as ``efficient'' solitosynthesis. 
For a given minimum stable charge $\Qmin$ and maximum attainable charge $\Qmax$, the minimum global charge asymmetry for efficient solitosynthesis follows a universal scaling relationship with very little model dependence. This degeneracy of parameter space is broken when the relic abundance of the dark sector is taken into account, as the mass spectrum of NTSs is model dependent. The upshot is that there exists a range of model parameters where NTSs can dominate the charge and/or energy density of the dark sector and be all of dark matter. The cosmic evolution of macroscopic states in general has been discussed in several earlier works. Among them, Ref.~\cite{Griest:1989bq} coined the term and pioneered the study of ``solitosynthesis'' (see also \cite{Frieman:1989bx,Kusenko:1997si,Postma:2001ea}). One finding of our work is that Ref.~\cite{Griest:1989bq} used an inappropriate estimate for the freeze-out temperature for solitosynthesis that leads to inaccurate estimates for the typical NTS charge, mass, and abundance. Their estimate relies on the freeze out of free particle annihilation, whereas our updated estimate relies on the freeze out of the NTS number density. Additionally, we find that the maximum NTS charge accessible during solitosynthesis is underestimated by Ref.~\cite{Griest:1989bq}, leading them to exclude far too pessimistically the possibility of NTS domination from solitosynthesis. Further, unlike prior works we distinguish between the charge domination and energy density domination of solitons. Though we include only scalar NTSs in this paper, our discussions can be easily generalized to fermionic macroscopic states. Examples of these include dark nuclei~\cite{Wise:2014jva,Wise:2014ola,Gresham:2017zqi,Gresham:2017cvl}, (dark) quark nuggets~\cite{Bai:2018vik,Bai:2018dxf,Liang:2016tqc}, and fermi-balls/fermion solitons \cite{Lee:1986tr,Macpherson:1994wf,Hong:2020est}. 
In Ref.~\cite{Gresham:2017cvl}, the coagulation of dark nuclei---heavy states of fermionic dark nucleons described by the liquid drop model whose mass and radius spectra in terms of the dark nucleon number are very similar to those of NTSs---is discussed and provides a rough analog to solitosynthesis. Despite some model similarities, we find that the NTS evolution proceeds in a qualitatively different way, mostly relying on absorption of free particles as in \cite{Griest:1989bq} instead of mergers of larger bound states as in \cite{Gresham:2017cvl}. Still, like \cite{Gresham:2017cvl}, we find that states with large charges are formed. On the other hand, if solitosynthesis is not efficient---which may occur when the free particles are out of equilibrium from NTSs, \eg, when the free-particle mass is much heavier than the PT temperature---then NTSs do not appreciably evolve after the PT. In this scenario, we examine how the PT parameters determine the typical charge, mass, and abundance of the NTSs, building on prior works. The organization of this paper is as follows. We first review several different scalar theories containing NTSs in Sec.~\ref{sec:Qball-model}. In Sec.~\ref{sec:solitosynthesis}, we study the solitosynthesis for two different NTS models and derive the parameter space where solitosynthesis is efficient and reproduces the observed dark matter relic abundance. Then, in Sec.~\ref{sec:Q_ball_from_PT}, we explore the model parameter space in the absence of solitosynthesis, where the cosmic properties of the dark sector are determined by the PT. Finally, we discuss the related phenomenology and conclude in Sec.~\ref{sec:conclusion}. Several detailed calculations for the NTS properties and PTs are included in Appendices. Throughout the paper we use the reduced Planck mass $\Mpl=1/\sqrt{8\pi G} = 2.44 \times 10^{18}~\GeV$ with $G$ the Newton constant. 
Also, for simplicity we will follow the convention of~\cite{Coleman:1985ki} and use the terms ``Q-balls'' and ``nontopological solitons'' interchangeably. \section{Q-ball models and properties} \label{sec:Qball-model} \subsection{Representative models}\label{sec:EWSDMB} A general Q-ball state requires the theory to have a good global symmetry and a nonminimal potential~\cite{Coleman:1985ki}. The mass per charge for a stable Q-ball state is smaller than a free particle carrying a unit charge in the vacuum of the potential. In this paper, we use a scalar boson constituent as a representative model and consider two renormalizable potentials for a theory of two gauge-singlet scalar fields. The first model, from Ref.~\cite{Frieman:1988ut}, is the one adopted in an earlier study of solitosynthesis~\cite{Griest:1989bq} with a real scalar $\sigma$ and complex scalar $S$ with scalar potential \beqa V(S, \sigma) = \frac{1}{8} \lambda\, (\sigma^2 - \sigma_0^2)^2 + \frac{1}{3} \lambda_2\, \sigma_0 (\sigma-\sigma_0)^3 + \frac{m_S^2}{(\sigma_- - \sigma_0)^2} |S|^2 (\sigma - \sigma_0)^2 + \Lambda \, , \label{eq:VGK} \eeqa where $\sigma_-$ is the true zero-temperature minimum, $\Lambda$ is set so that $V=0$ at the zero-temperature minimum ($\sigma=\sigma_-$ and $S=0$), and for simplicity $\lambda_2=0.15 \lambda$ is used. The free-particle mass in the true vacuum is $m_S$. The quartic term $|S|^4$ was explicitly chosen to be zero in this model. We will call this ``Model A.'' This model admits a Q-ball solution for the field $S=e^{-i \omega t}\,\sigma_0\,s(r)/\sqrt{2}$ with $s(0)=s_0$ and $s(\infty)=0$, where $\sigma'(0)=0$ and $\sigma(\infty)=\sigma_-$. Its charge is $Q=i \int d^3 x \, \left(\Sbar\, \partial_t\, S - S\, \partial_t \,\Sbar \right) = 4 \pi \,\Omega \int_{0}^{\infty} d\rb \, \rb^2 s^2$, with $\Omega \equiv \omega/\sigma_0$ and $\rb \equiv \sigma_0\,r$. Q-balls in this model have mass $m_Q=4 \pi \sqrt{2} \,Q^{3/4} \Lambda^{1/4}/3$. 
By requiring their binding energy $B_Q=Q m_S - m_Q > 0$, it can be shown that the minimum stable charge is $\Qmin=1231 \Lambda m_S^{-4}$. Thus, the Q-ball mass and radius are, \begin{align} \label{eq:M_GK} m_Q &= 5.15 \sigma_0 \lambda^{1/4} Q^{3/4} \, , \\ \label{eq:R_GK} R_Q &= 0.8 \lambda^{-1/4} \sigma_0^{-1} Q^{1/4} \, . \end{align} The free particle mass in terms of these free parameters is $m_S=5.15 \sigma_0 \lambda^{1/4} \Qmin^{-1/4}$. The second model, from Ref.~\cite{Ponton:2019hux}, has two complex scalars with scalar potential \beqa V(S, \phi) = \frac{1}{4} \lphi (|\phi|^2 - v^2)^2 + \frac{1}{4} \lc |S|^2 |\phi|^2 + \ls |S|^4 + \ms^2 |S|^2 \, , \label{eq:V} \eeqa with $\lphi, \lc, \ls > 0$. There are two global symmetries for this potential $U(1)_S$ and $U(1)_\phi$, where the $U(1)_S$ is responsible for the Q-ball charge. The other complex field $\phi$ is introduced to provide a nontrivial potential for the $S$ field. Similar results would be obtained if $\phi$ were a real scalar with $\mathbb{Z}_2$ parity ($\phi \rightarrow - \phi$) symmetry or a scalar multiplet under a larger symmetry group (including even $\phi$ being the Standard Model Higgs boson doublet as in \cite{Ponton:2019hux}). The zero-temperature free $S$ particle mass is $m_S^2 = \frac{1}{4} \lc v^2 + \ms^2$, and for simplicity we work in the limit $\ms=0$. We will call this ``Model B.'' The main difference between Model A and B is whether the self-quartic coupling for the $S$ field is zero or not. This model also admits a Q-ball solution for the field $S=e^{-i \omega t}\,v\,s(r)/\sqrt{2}$ with $s(0)=s_0$ and $s(\infty)=0$, where $\phi = v\, f(r)$ satisfies $f'(0)=0$ and $f(\infty)=1$, provided $\lc^2 > 2 \lphi \ls$~\cite{Ponton:2019hux}. For sufficiently small $Q$, the $\ls$ term is negligible because the solutions have small $s_0$. 
Therefore, for small $Q$ the profile solutions can be approximated by $s(r) = s_0[1- \tanh^2(\omega' r)]$ and $f= 1 - \pi_0[1- \tanh^2(\omega' r)]$ for the case with $\pi_0 \ll 1$ ($\omega'$ is a parameter that is determined by minimizing the energy of the solution; see Appendix~\ref{appedix:smallQ} for details). The mass as a function of $Q$ is derived to be \beqa \label{eq:M_small} m_{Q, {\rm small} }\approx \left( \frac{1}{2}\,\lc^{1/2}\,+ \lphi\,\lc^{-1/2} \right) \, v\, Q - \frac{(\pi^2 - 9)}{2048\,\pi^2(\pi^2 - 6)^3}\,\lc^{5/2}\,v\,Q^3 ~. \eeqa The radius of the profile is $R_{Q, \rm small} \equiv \frac{1}{\omega'} \approx 64\pi(\pi^2 - 6)/(\lc^{3/2}\,v\,Q)$. Stability against decay to $Q$ free particles requires $m_Q< Q m_S= Q\,\lc^{1/2} v/2$. The minimum (quantum-level) stable charge is \beqa \label{eq:Qmin} Q_s \approx 32\pi\,\left( \frac{2(\pi^2-6)^3}{\pi^2-9} \right)^{1/2}\,\frac{\lphi^{1/2}}{\lc^{3/2}}\, . \eeqa In practice, $Q_s$ must be determined by numerically solving the classical equations of motion for $f(r)$ and $s(r)$. There is an additional charge $Q_c<Q_s$ which is the lowest possible Q-ball charge. Solutions do not exist for $Q<Q_c$, and in the range $Q_c<Q<Q_s$ the Q-balls are metastable \cite{Levkov:2017paj}. The minimum (meta)stable quantized charge $\Qmin$ available during solitosynthesis should satisfy $\lceil Q_c \rceil \leq \Qmin \leq \lceil Q_s \rceil$, where metastability is defined by comparing to time scales relevant for solitosynthesis and $\lceil ... \rceil$ denotes the ceiling function. As we will see in the next section, a small $\Qmin$ would be preferred for efficient solitosynthesis, and hence the $Q_s$ of our concern will be of $\mc{O}(1)$, a value for which its difference from $Q_c$ may be less than one and hence unimportant. For quartic couplings of order unity, $\lambda_\phi = \mathcal{O}(1)$ and $\lambda_{\phi S} = \mathcal{O}(1)$, the minimum stable charge is $\Qmin = \mathcal{O}(10^3)$. 
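The closed form of $Q_s$ can be cross-checked by imposing $m_{Q,\rm small} = Q\, m_S = Q\,\lc^{1/2} v/2$ directly on \eqref{eq:M_small}: the linear terms cancel and only the $Q^3$ coefficient survives. A short numerical sketch (our own check; the coupling values $\lphi=0.01$, $\lc=10$ are the Model B benchmark used in the figures):

```python
import math

pi = math.pi
# Coefficient of the Q^3 term in eq. (M_small):
c3 = (pi**2 - 9) / (2048 * pi**2 * (pi**2 - 6) ** 3)

def Q_s_from_Msmall(lam_phi, lam_phiS):
    # m_{Q,small} = Q m_S cancels the linear terms, leaving
    # lam_phi * lam_phiS^(-1/2) = c3 * Q^2 * lam_phiS^(5/2),
    # i.e. Q_s^2 = lam_phi / (c3 * lam_phiS^3).
    return math.sqrt(lam_phi / (c3 * lam_phiS**3))

def Q_s_closed(lam_phi, lam_phiS):
    # Closed form of eq. (Qmin)
    return (32 * pi * math.sqrt(2 * (pi**2 - 6) ** 3 / (pi**2 - 9))
            * math.sqrt(lam_phi) / lam_phiS**1.5)

# Benchmark couplings: Q_s ~ 3.67, whose ceiling reproduces Qmin = 4;
# order-unity couplings instead give Q_s of order 10^3.
print(round(Q_s_from_Msmall(0.01, 10.0), 2))  # -> 3.67
```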
On the other hand, for a flatter $\phi$ potential with $\lambda_\phi \ll 1$, a smaller $\Qmin$ is anticipated. For $\lambda_\phi$ saturating a minimum value from a one-loop n\"aive dimensional analysis, $\lambda_\phi \sim \lc^2/(16\pi^2)$, one has $\Qmin = \mathcal{O}(1)$ from \eqref{eq:Qmin}. At large charge when $\ls>0$, the relations between mass, radius, and charge are calculated to be \begin{align} \label{eq:M_large} m_{Q, {\rm large}} & \approx v\,Q \left[\left(\ls \lphi\right)^{1/4} + c_2 Q^{-1/3} \right]\, , \\ \label{eq:R_large} R_{Q, \rm large} & \approx \frac{3^{1/3} \, \ls^{1/12} }{(4\pi)^{1/3}\lphi^{1/4}\,v}\,Q^{1/3}\, . \end{align} We include the subleading $Q^{2/3}$ surface energy term for the Q-ball mass with $c_2$ as a dimensionless number depending on couplings in the potential (see Appendix \ref{appedix:surfaceE} for the derivation and the approximate formula for $c_2$). A nonzero repulsive self-interaction $\ls$ plays an important role in the parametrics at sufficiently large $Q$. If $\ls$ were exactly zero, then $m_Q \propto Q^{3/4}$ and $R_Q\propto Q^{1/4}$ at large $Q$ \cite{Ponton:2019hux}, the same scaling as in Model A. However, such a $\ls$ term must be present as it is generated by radiative corrections---\eg, in this theory, it generically satisfies $\ls \gtrsim \lc^2/(16 \pi^2)$ without fine tuning. In Fig.~\ref{fig:MQ-Q}, we show the Q-ball mass as a function of $Q$ in Model B, where we fix the model parameters to be $\lambda_\phi = 0.01$, $\lambda_{\phi S} = 10$, and $\lambda_S = 0.2$. In the left panel, the numerically calculated results agree well with the analytic formulas in \eqref{eq:M_small} and \eqref{eq:M_large} for the small $Q$ and large $Q$ regions, respectively. In the right panel, we zoom in on the behaviors in the small $Q$ region and demonstrate that a Q-ball is quantum-level stable if $Q \ge Q_{\rm min} = 4$ (which is well approximated by (\ref{eq:M_small}) and (\ref{eq:Qmin})). 
A higher-energy Q-cloud solution is also displayed in the right panel, which is truncated in the figure but should extend to $Q=\infty$. We will not consider Q-clouds further in this work (see Refs.~\cite{ALFORD1988323,Nugaev:2015rna,Levkov:2017paj} for relevant studies), though it is possible that some Q-balls could initially form as Q-clouds and then later relax to lower-energy Q-balls or evaporate to free particles, affecting fusion and capture cross sections discussed in the following subsection. There are other types of models which can generate a small enough $Q_{\rm min}$. One example is related to the Coleman-Weinberg potential~\cite{Coleman:1973jx} for the $\phi$ field. Instead of a tree-level potential for $\phi$, the self-interacting potential could be replaced by \beqa V(S, \phi) \supset \lambda_\phi \, |\phi|^4 \left[ 4 \log\left( \frac{|\phi|}{f}\right) - 1 \right] ~, \eeqa where $\lambda_\phi>0$ is loop-factor suppressed and the constant 4 in front of the logarithm is chosen to have the $\phi$ vacuum expectation value $\langle |\phi|\rangle = f$. A second example is motivated by the flat directions (with vanishing tree-level $F$-term and $D$-term potential) in the MSSM~\cite{Dine:1995kz}. For instance, the phenomenological potential in the $u^c d^c d^c$ direction is \beqa V(S, \phi) \supset m_{\phi,0}^2 \left[ 1 - \tilde{c}_1\,\frac{\alpha_s}{8\pi}\,\frac{M_{\tilde{g}}^2 }{m_{\phi,0}^2} \,\log\left(\frac{M^2_{\tilde{g}} + \tilde{c}_2\,g_s^2\, |\phi|^2 }{M_X^2} \right) \right]|\phi|^2 + \frac{|\phi|^{2d}}{\Lambda^{2d -4 }} ~. \eeqa Here, $\tilde{c}_1$ and $\tilde{c}_2$ are order-one numbers related to $SU(3)_c$ group representations; $M_{\tilde{g}}$ is the gluino mass; $M_X$ is a high reference scale to define the soft masses $m_{\phi,0}$ and $M_{\tilde{g}}$. The last higher-dimensional operator with $d > 2$ makes the potential bounded from below. 
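The statement that the constant 4 in the Coleman-Weinberg-type potential above fixes $\langle |\phi|\rangle = f$ can be verified by differentiating a radial slice of the potential. A short numerical check (ours; the values of $\lambda_\phi$ and $f$ are arbitrary illustrations):

```python
import math

lam_phi, f = 0.1, 1.0   # illustrative values; lam_phi is loop suppressed

def V(x):
    # Radial slice |phi| = x of the Coleman-Weinberg-type potential
    return lam_phi * x**4 * (4.0 * math.log(x / f) - 1.0)

# Analytically, dV/dx = 16 lam_phi x^3 log(x/f): the "-1" cancels the
# derivative of the logarithm, so the extremum sits exactly at x = f,
# and the positive second derivative confirms it is a minimum.
eps = 1e-6
dV = (V(f + eps) - V(f - eps)) / (2 * eps)
d2V = (V(f + eps) - 2 * V(f) + V(f - eps)) / eps**2
print(abs(dV) < 1e-6, d2V > 0)  # -> True True
```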
In the remainder of our paper, we use Model A in \eqref{eq:VGK} and Model B in \eqref{eq:V} as representative models to discuss early-universe formation of NTSs. Unless stated otherwise, in all figures we use the benchmark parameters: \begin{subequations} \label{eq:benchmark} \begin{align} \label{eq:benchmark-A} \lambda=1 \, , \lambda_2=0.15 \lambda \, , \; \; \; \; & \text{(Model A)}~, \\ \lambda_\phi=0.01\, , \lambda_{\phi S}=10\, , \lambda_S=0.2\, , \ms=0 \, , \; \; \; \; & \text{(Model B)}~. \end{align} \end{subequations} For Model A, $\Qmin$ remains a free parameter which determines $m_S$, while for Model B, these parameters fix $\Qmin=4$. It can be numerically confirmed that decreasing $\lc$ or $\ls$ or increasing $\lphi$ or $\ms$ will result in an increase to $\Qmin$. Also, we may at times use $\sigma_0$ and $v$ interchangeably. \subsection{Q-ball interactions} \label{sec:Qball_interactions} Various processes could be important for the evolution of Q-balls with different charges. For processes with two initial states in the forward direction, one has \begin{subequations}\label{eq:proc-main} \beqa \label{eq:proc-SS} S + \Sbar &\leftrightarrow& \phi + \phi^\dagger ~, \\ \label{eq:proc-capture} (Q) + S &\leftrightarrow& (Q+1) + X ~, \\ \label{eq:proc-Qsbar} (Q) + \Sbar &\leftrightarrow& (Q-1) + X ~, \\ \label{eq:proc-Qmin} (Q_{\rm min}) + \Sbar &\leftrightarrow& \underbrace{S + S + \cdots +S}_{Q_{\rm min} - 1} + X ~. \\ \label{eq:proc-QQ} (Q_1) + (Q_2) &\leftrightarrow& (Q_1 + Q_2) + X ~, \\ \label{eq:proc-QantiQ} (Q_1) + (-Q_2) &\leftrightarrow& \left\{ \begin{array}{l l} (Q_1 - Q_2) + X ~\, ~~ & \text{for} ~ Q_1 - Q_2 \ge Q_{\rm min} ~, \vspace{2mm} \\ \underbrace{S + S + \cdots +S}_{Q_1-Q_2} + X ~\, ~~ & \text{for} ~ \Qmin > Q_1 - Q_2 \geq 0 ~. \end{array} \right. \eeqa \end{subequations} Here, $X$ represents the degrees of freedom in $\phi$, $\sigma$, or the SM if their masses are smaller than the binding energy in the process. 
States with parentheses like $(Q)$ represent Q-balls with charge $Q$. All $Q$ are taken positive, while the equivalent processes for negative $Q$ can be inferred. For the first process in \eqref{eq:proc-SS}, the final state could be the radial or the Goldstone boson mode inside the complex field $\phi$ in Model B. In the limit of $m_S \gg \sqrt{\lambda_\phi} \,v$, the annihilation rate to both $\phi$ degrees of freedom is \beqa \label{eq:SS-xsec} \sigma v_{\rm rel} (S + \Sbar \rightarrow \phi + \phi^\dagger) = \frac{1}{2^{11}\,\pi}\,\frac{\lc^2}{m_S^2} = \frac{1}{128\pi\,v^2} ~, \eeqa where $v_{\rm rel}$ is the relative velocity of the two initial-state particles. \footnote{An interesting case appears when $\phi$ is a real scalar field and more massive than $S$. If $S$ cannot kinematically annihilate to other particles, then an NTS can be made of both $S$ and $S^\dagger$ simultaneously \cite{Frieman:1989bx}.} In Model A the annihilation rate of the corresponding process is estimated as $\sigma v_{\rm rel}=0.014(1-\Qmin^{1/2}/12.5)^{1/2}/(\Qmin^{1/2}\sigma^2_0)$~\cite{Griest:1989bq}. For a large $Q$, the elastic scattering cross section of $(Q) + S \rightarrow (Q) + S$ can be approximated by a geometric cross section, $\pi R_{Q, \rm large}^2$. For the capture process in \eqref{eq:proc-capture}, the cross section formula depends on how the binding energy is released into $X$ and is hence model dependent. The detailed calculation is beyond the scope of the current paper (see Ref.~\cite{Bai:2019ogh} for one example of a radiative capture cross section calculation; the current model has self-quartic interactions of the free $S$ particle with the $S$-constituents inside the Q-ball, which will change the capture cross section relative to \cite{Bai:2019ogh}). 
To simplify our discussion, we choose $\sigma v_{\rm rel}$ for the capture processes in (\ref{eq:proc-capture}) and (\ref{eq:proc-Qsbar}) to be the geometric cross section $\pi R_Q^2$, with $R_Q$ given by (\ref{eq:R_GK}) for Model A and by $R_{Q, \rm large}$ in (\ref{eq:R_large}) for Model B. We will also apply these cross section formulas to the region close to $Q_{\rm min}$, although the cross section is unlikely to be geometric because only a handful of bound states mediate the scattering. For the process in \eqref{eq:proc-Qmin}, depending on the value of $Q_{\rm min}$ and the abundance of free particles, detailed balance may be difficult to reach. The forward Q-ball destruction process could happen much more easily than the backward fusion process of free $S$ particles forming a Q-ball state. For the small $Q_{\rm min}$ region of interest in Sec.~\ref{sec:solitosynthesis}, detailed balance is easier to maintain. We will see in the later analysis that this process plays an important role in determining when the Q-ball number density goes out of equilibrium. The case where the fusion process is absent is considered in \cite{Griest:1989bq,Frieman:1989bx} with largely negative results for solitosynthesis. The last two processes in \eqref{eq:proc-QQ} and \eqref{eq:proc-QantiQ} could be important for changing the charge distribution of Q-balls, but are not generally important for changing the total $S$-number in Q-ball or free-particle states. They will not change the equilibrium distributions of the Q-balls (to be discussed in Sec.~\ref{subsec:TD}). Further, because Q-ball charges will tend towards $\Qmax$ (defined in Sec.~\ref{sec:maximum-charge}) during the later stages of solitosynthesis, these processes will not have much effect on the final charge distribution. There may be a small effect on the freeze-out temperature. 
Additionally, in \cite{Bai:2021mzu}, it was estimated (by comparing the interaction rate $\Gamma = n_Q \sigma v_\text{rel} \sim n_Q \pi R^2 \sqrt{T/m_Q}$ with the Hubble parameter $H$) that $(Q)+(Q)$ interactions are only important when $Q \lesssim 6 \times 10^4 (v/10^3~\GeV)^{-12/5}$ for the case of Model B near temperatures $T\sim v$ assuming those Q-balls make up all of dark matter. Notice that the rate is suppressed by both the Q-ball number density $n_Q$ and nonrelativistic velocity when $Q$ is large. A similar estimate for Model A gives $Q \lesssim 2\times 10^3 (\sigma_0/10^3~\GeV)^{-16/5}$. Because (i) this is well below $\Qmax$, and (ii) the Q-balls in this range typically have much smaller comoving density than the dark matter density throughout the course of solitosynthesis, we can safely neglect these interactions. When there is an asymmetry present, as will be required for solitosynthesis but not necessarily PT formation, \eqref{eq:proc-QantiQ} can also generally be neglected in the formation process because the abundance of anti-Q-balls is suppressed. Relatedly, Ref.~\cite{Gresham:2017cvl} includes processes analogous to $(Q_1)+(Q_2)\leftrightarrow (Q_3)+(Q_4)$ in the evolution, where $Q_1+Q_2=Q_3+Q_4$. However, it is also argued in~\cite{Gresham:2017cvl} that under the on-shell scheme, in the process $(Q_1)+(Q_2)\to (Q_1+Q_2)^\ast\to (Q_3)+(Q_4)$ the intermediate excited state $(Q_1+Q_2)^\ast$ prefers to decay to $(Q_1+Q_2) + X$. We expect similar behavior in our models. Therefore the net effect of this process will be similar to the combination of our \eqref{eq:proc-capture} and \eqref{eq:proc-QQ}, and other charge combinations in the final state can generally be neglected. \section{Q-balls from solitosynthesis} \label{sec:solitosynthesis} Cosmologically, Q-balls may be conveniently formed from PTs, while their final abundances may not necessarily be determined merely by the PT. 
If solitosynthesis is efficient---wherein Q-balls form from the fusion of $\Qmin$ free $S$ particles and grow or shrink by absorbing or emitting $S/\Sbar$ particles---the initial abundance of Q-balls after the PT would be irrelevant. The interactions within the dark sector would generate an equilibrium system up to a certain large charge and finally decouple from the thermal bath due to the cosmic expansion. Thus, to know the final properties and the relic abundance of the dark sector, one needs to track its evolution through the solitosynthesis process, which we examine in this section. The $U(1)_S$ charge asymmetry $\eta$ plays an important role in the course of solitosynthesis. If $\eta=0$, the Q-ball abundance is quickly depleted in equilibrium and the free $S$ particles dominate the global charge. Thus, for the purposes of this section, we assume $\eta>0$ and determine how small $\eta$ can be while still having Q-balls dominate the global charge. \subsection{The maximum charge in the Q-ball system} \label{sec:maximum-charge} Assuming that the solitosynthesis processes are not immediately frozen out after the PT, Q-balls of all possible charges could be successively produced and reach a thermal distribution through direct fusion or absorption/emission of free quanta of the global charge. As the universe cools down, the global charges will gradually congregate into the Q-balls due to the gain of binding energy. Similar to the analysis in~\cite{Griest:1989bq}, we can determine the temperature at which the majority of the global charges are in Q-balls, but to do this we need to understand the maximal Q-ball global charge accessible during solitosynthesis. For Models A and B presented in Sec.~\ref{sec:Qball-model}, there is no intrinsic upper bound on the Q-ball charge. 
Nevertheless, having an upper bound $\Qmax$ on the charge of the system is not only necessary for numerical computations, but also realistic in terms of statistical equilibrium, since it takes time for the Q-balls not produced from the PT to be formed and thermalized. In statistical equilibrium, $\Qmax$ can be estimated by comparing the time required for a Q-ball to absorb that many $S$ particles and the Hubble time. For statistical equilibrium to hold, we expect that a Q-ball should be able to charge from $\Qmin$ to $\Qmax$ within a few Hubble times. The charge time is \begin{equation} \label{eq:charge-shuffle-time} \tau_{\Qmin \to \Qmax} = \sum_{Q=\Qmin}^{\Qmax} \frac{1}{n_S \, (\sigma v_{\rm rel})_Q} \, , \end{equation} where $n_S$ is the number density of $S$ particles. For Model A where $R_Q \simeq 0.8 (Q/\lambda)^{1/4}/\sigma_0$, the summation can be approximated [using the geometric cross section $(\sigma v_{\rm rel})_Q \approx \pi R_Q^2$] as $\tau_{\Qmin \to \Qmax} \simeq \lambda^{1/2} \sigma_0^2 \Qmax^{1/2} / n_S$. Requiring $\tau_{\Qmin \to \Qmax} \lesssim H^{-1}$ with $H=\sqrt{\pi^2 g_*/90}\, T^2/\Mpl$, \begin{equation}\label{eq:Qmax_bound} \Qmax \lesssim \left( \frac{3\,n_{S}\, \Mpl}{g^{1/2}_\ast\,\lambda^{1/2}\, \sigma_0^2\, T^2} \right)^2 \,, \end{equation} where $g_\ast$ is the relativistic degrees of freedom in the thermal plasma. For Model B, the corresponding upper limit is \begin{align}\label{eq:Qmax_bound_2} \Qmax\lesssim \left(\dfrac{3^{2/3} \ls^{1/6} }{(4\pi)^{2/3}\lphi^{1/2}}\dfrac{\pi\uu n_S \Mpl}{g^{1/2}_\ast v^2 T^2}\right)^3\,, \end{align} using the radius in \eqref{eq:R_large}. These limits are less constraining for a smaller energy scale $\sigma_0$ or $v$. A similar condition for an initial population of Q-balls following the PT to discharge in order to reach equilibrium could be considered by replacing $n_S$ with $n_{\Sbar}$, the number density of $S^\dagger$. 
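The intermediate step from the discrete sum in \eqref{eq:charge-shuffle-time} to the quoted closed form for Model A can be checked numerically. A sketch (ours, in arbitrary units; only the $Q$-scaling matters):

```python
import math

# Model A: R_Q = 0.8 (Q/lambda)^(1/4) / sigma0, so the geometric cross
# section is (sigma v)_Q ~ pi R_Q^2 = 0.64 pi sqrt(Q/lambda) / sigma0^2.
lam = sigma0 = n_S = 1.0          # arbitrary units for the scaling check
Qmax = 10**5

tau_sum = sum(1.0 / (n_S * 0.64 * math.pi * math.sqrt(Q / lam) / sigma0**2)
              for Q in range(1, Qmax + 1))
tau_approx = math.sqrt(lam) * sigma0**2 * math.sqrt(Qmax) / n_S
print(tau_sum / tau_approx)       # close to 1, since 2/(0.64*pi) ~ 0.995
```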
The maximum charge for an NTS that is capable of being discharged down to $\Qmin$ should not differ much from the bound on $\Qmax$ estimated above. This is because $S S^\dagger$ pair creation in the reverse process of (\ref{eq:proc-SS}) is still active at temperatures $T \sim v$ or $\sigma_0$ right after the PT, so $n_S \sim n_{S^\dagger}$. Note that the estimation of $Q_{\rm max}$ here is different from Ref.~\cite{Griest:1989bq}, where the Q-ball number density $n_{\Qmin}$ is used in place of $n_S$ to estimate the charge shuffling time in the right-hand side of (\ref{eq:charge-shuffle-time}). We have also numerically checked the time for a perturbed system of number densities to return to equilibrium and found agreement with Eq.~\eqref{eq:charge-shuffle-time}. \subsection{Q-ball domination in equilibrium} \label{subsec:TD} The number density of the free global charge quanta $S$, $\Sbar$ and NTSs of charge $Q$ and $-Q$ in kinetic equilibrium are given by $n_{i=S,\Sbar,Q,-Q}^{\text{eq}}$, with \begin{align}\label{eq:number_density} n_i^{\text{eq}} = \frac{1}{2\pi^2}T\,m^2_i\exp(\mu_i/T)K_2(m_i/T)\,, \end{align} where $K_2(x)$ is a modified Bessel function. In chemical equilibrium, the chemical potentials satisfy [based on (\ref{eq:proc-main})] \begin{subequations} \label{eq:mu} \begin{align} \mu_{\Sbar} &= -\mu_S\,,\\ \mu_Q &= Q\,\mu_S\,,\\ \mu_{-Q} &= -\mu_{Q}\,. \end{align} \end{subequations} If $\eta=0$, the chemical potentials are identically zero, and the Q-ball abundances are Boltzmann suppressed such that solitosynthesis is inefficient. In the following, we take $\eta>0$. Letting $Z_i\equiv n_i/(\eta \, n_\gamma)$, the conservation of global charge indicates \begin{align}\label{eq:Q_conservation} Z_S-Z_{\Sbar}+\sum^{Q_{\rm max}}_{Q=Q_{\rm min}}Q\,(Z_{Q}-Z_{-Q}) = 1\, . \end{align} For a given mass spectrum $m_i$ and asymmetry $\eta$, this equation uniquely determines the chemical potential $\mu \equiv \mu_S$ as a function of temperature. 
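The root-finding step implied by the charge-conservation condition can be sketched as follows. This is our own illustrative implementation, not the code used for the figures: it adopts the $K_2(x\gg1)$ asymptotic form quoted below, a toy Model-A-like spectrum with $\Qmin=1$, and arbitrary toy values of $T$, $\eta$, and $\Qmax$.

```python
import math

def Z(m, mu_i, T, eta):
    """Z_i = n_i/(eta n_gamma), using K_2(x) ~ sqrt(pi/(2x)) e^{-x} for x >> 1."""
    c_gamma = 2 * 1.2020569 / math.pi**2            # n_gamma = c_gamma T^3
    n = (m * T / (2 * math.pi)) ** 1.5 * math.exp((mu_i - m) / T)
    return n / (eta * c_gamma * T**3)

def charge(mu, T, eta, m_S, m_of_Q, Qmin, Qmax):
    """Left-hand side of eq. (Q_conservation)."""
    tot = Z(m_S, mu, T, eta) - Z(m_S, -mu, T, eta)
    for Q in range(Qmin, Qmax + 1):
        mQ = m_of_Q(Q)
        tot += Q * (Z(mQ, Q * mu, T, eta) - Z(mQ, -Q * mu, T, eta))
    return tot

def solve_mu(T, eta, m_S, m_of_Q, Qmin, Qmax):
    # charge(mu) increases monotonically with charge(0) = 0 < 1; for these
    # toy numbers the root lies below m_S, so we bisect on [0, m_S].
    lo, hi = 0.0, m_S
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if charge(mid, T, eta, m_S, m_of_Q, Qmin, Qmax) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy Model-A-like spectrum with Qmin = 1: m_Q = m_S Q^(3/4)
m_S = 1.0
mu = solve_mu(T=0.2, eta=1e-3, m_S=m_S, m_of_Q=lambda Q: m_S * Q**0.75,
              Qmin=1, Qmax=10)
print(0.0 < mu < m_S)  # -> True
```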
For models of interest to us, the global charge will be mainly aggregated in Q-balls of charge $\Qmax$ at low temperature as long as the Q-ball system stays in equilibrium. To see this, we parametrize the mass spectrum of the Q-ball system to be $m_Q=m_1 Q^p$. We expect $p\leq 1$, such that $m_Q/Q$ will be smaller than the free charge quanta mass for sufficiently large $Q$. Defining a ratio $r=(Q+1)n_{Q+1}/(Q\,n_Q)$ with \eqref{eq:number_density} and using $K_2(x)\approx \sqrt{\pi/(2x)}e^{-x}$ for $x \gg 1$, \begin{align} \label{eq:rQ} r&=\left(\dfrac{Q+1}{Q}\right)^{\frac{3p}{2}+1}\exp\left(\dfrac{m_Q-m_{Q+1}+\mu}{T}\right)\,,\\ \label{eq:drdQ} \frac{dr}{dQ}&=\frac{2 p\,m_1\left(Q^p(Q+1)-(Q+1)^p Q\right)-(2+3p)T}{2\,Q^2\, T}\left(\frac{Q+1}{Q}\right)^{3p/2}\exp\left(\dfrac{m_Q-m_{Q+1}+\mu}{T}\right)\,. \end{align} The sign of $dr/dQ$ is determined by the numerator of the first term $2p\,m_1(Q^p(Q+1)-(Q+1)^p Q)-(2+3p)T$. For $p=1$, $dr/dQ$ will be negative, therefore the charge of the equilibrium system will concentrate in NTSs with a specific charge $Q<\Qmax$ for sufficiently large $\Qmax$. For $p<1$, on the other hand, $r \xrightarrow[]{Q\to\infty} e^{\mu/T}>1$. Therefore, as long as $\Qmax$ is large enough, the charge of the system will finally concentrate in the largest Q-balls $(\Qmax)$ following the equilibrium evolution. Of course, we usually do not expect the Q-ball mass spectrum to have a simple power-law behavior, as we have seen in Sec.~\ref{sec:Qball-model}. However, as long as the growth of $m_Q$ versus $Q$ is ``slower'' than the linear power, one effectively has the $p<1$ scenario with $(\Qmax)$ finally dominating the global charge. In Fig.~\ref{fig:equilibrium} we show the equilibrium evolution of the global charge for Models A and B. In both models, the global charge stored in the largest Q-ball dominates over those in other species at late time. The overall shape of the curves in Fig.~\ref{fig:equilibrium} can be understood in the following way. 
At the earliest times, all interactions in (\ref{eq:proc-main}) are active and the chemical potential is near zero, favoring lighter states, \ie, free particles and smaller-charged Q-balls. Then, as the temperature drops, the reverse process in (\ref{eq:proc-SS}) becomes inefficient and the abundance of $S^\dagger$ decreases well below the abundance of $S$, so further annihilations via (\ref{eq:proc-SS}) do not appreciably change the $S$ abundance. The chemical potential increases, and anti-Q-ball abundances also become suppressed. Eventually it becomes kinematically inefficient to knock charges out of Q-balls in the reverse process of (\ref{eq:proc-capture}), so the only remaining processes are Q-ball fusion in the reverse of (\ref{eq:proc-Qmin}) and Q-ball captures in the forward process of (\ref{eq:proc-capture}). Because maximally-charged Q-balls are the lowest energy state per charge, they become the most abundant, leading to the bounce in their abundance at late times (as demonstrated in (\ref{eq:drdQ}), see also \cite{Puetter:2022ucx} for a discussion of this bounce in a simplified three-particle system). The Q-ball domination in Model B occurs much later in the presented examples because at small $Q$ the mass spectrum in Model B is still dominated by the linear term in $Q$ and thus has a smaller binding energy, as shown by Eq.~\eqref{eq:M_small} and Fig.~\ref{fig:MQ-Q}. To understand the equilibrium evolution of the system better, we estimate the temperature $T_D$ at which the Q-ball charge domination happens, following \cite{Griest:1989bq}. It is clear from \eqref{eq:mu} that $n_{\Sbar}$ and $n_{-Q}$ are suppressed by the chemical potential term compared with $n_S$ and $n_{Q}$. Therefore at $T=T_D$ we expect $Z_S \approx \Qmax Z_{\Qmax} \approx 1/2$. 
By approximating $K_2(x)\approx \sqrt{\pi/(2x)}e^{-x}$ for $x \gg 1$ in~\eqref{eq:number_density}, \begin{align} &\left(\frac{m_S\,T_D}{2\pi}\right)^{3/2} \exp\left(\frac{\mu-m_S}{T_D} \right) = \Qmax \left(\frac{m_{\Qmax}\,T_D}{2\pi}\right)^{3/2} \exp\left(\frac{\Qmax\mu-m_{\Qmax}}{T_D}\right)=\frac{1}{2}\eta c_\gamma T_D^3\,, \end{align} where $\mu\equiv\mu_S$, $B_{\Qmax}=\Qmax\,m_S-m_{\Qmax}$, and $c_\gamma = 2\zeta(3)/\pi^2$. Then, \begin{align}\label{eq:TD} T_D=\frac{B_{\Qmax}}{\log\left\{\frac{1}{\Qmax}\left[\frac{2}{\eta c_\gamma}\left(\frac{m_S}{2\pi T_D}\right)^{\frac{3}{2}}\right]^{\Qmax-1}\left(\frac{m_S}{m_{\Qmax}}\right)^{\frac{3}{2}}\right\}}\,. \end{align} This analytic estimate matches the crossing of the $n_S^\text{eq}$ and $\Qmax n_{\Qmax}^\text{eq}$ curves in Fig.~\ref{fig:equilibrium} very well. As the system usually evolves to a very large $\Qmax$, one can obtain the asymptotic expression of $T_D$ in the limit of $\Qmax\to\infty$ for different models. For the Q-balls in Model A, we parametrize the spectra $m_S=m_0\Qmin^{-1/4}$ and $m_{\Qmax}=m_0 \Qmax^{3/4}$ with $m_0 = 5.15 \sigma_0 \lambda^{1/4}$, and Eq.~\eqref{eq:TD} can be further rewritten as \begin{align} T_D\xrightarrow[]{\Qmax\to\infty}\frac{m_0\,\Qmin^{-1/4}}{\log\left\{\frac{2}{\eta c_\gamma}\left(\frac{m_0}{2\pi T_D}\right)^{\frac{3}{2}}\,Q^{-3/8}_{\rm min}\right\}}\,, \qquad \mbox{(Model A)} ~. \end{align} For Model B, on the other hand, at large $\Qmax$ we expect the Q-ball mass to be $m_{\Qmax}=\Qmax\Omega_c v$ at the leading order, and the free global charge quanta mass to be $m_S=\sqrt{\lambda_{\phi S}}\,v/2$. Therefore, \begin{align} \label{eq:TD-B} T_D\xrightarrow[]{\Qmax\to\infty} \frac{v\,(\sqrt{\lambda_{\phi S}}/2-\Omega_c)}{\log\left\{\frac{2}{\eta c_\gamma}\left(\frac{\sqrt{\lambda_{\phi S}}v}{4\pi}\right)^{\frac{3}{2}}T^{-\frac{3}{2}}_D\right\}}\,, \qquad \mbox{(Model B)} ~, \end{align} where $\Omega_c \equiv (\ls \lphi)^{1/4}$. 
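Equation \eqref{eq:TD} is transcendental in $T_D$ but converges quickly under fixed-point iteration. A sketch with toy numbers (ours; the spectrum and the values of $\Qmax$ and $\eta$ are illustrative, not tied to any figure):

```python
import math

# Toy Model-A-like spectrum (m_S = 1, Qmin = 1) with Qmax = 10, eta = 1e-3
m_S, Qmax, eta = 1.0, 10, 1e-3
m_Qmax = m_S * Qmax ** 0.75
B = Qmax * m_S - m_Qmax                        # binding energy of (Qmax)
c_gamma = 2 * 1.2020569 / math.pi**2

def log_arg(T):
    # Logarithm in the denominator of eq. (TD), expanded to avoid overflow
    # from the (Qmax - 1) power inside the braces
    return ((Qmax - 1) * math.log((2 / (eta * c_gamma))
                                  * (m_S / (2 * math.pi * T)) ** 1.5)
            + 1.5 * math.log(m_S / m_Qmax) - math.log(Qmax))

T = 0.1 * m_S                                  # initial guess
for _ in range(100):                           # fixed-point iteration
    T = B / log_arg(T)
print(round(T, 3))  # -> 0.047
```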
From the Q-ball domination we can also understand how the chemical potential $\mu$ evolves at late time. At $T<T_D$, we expect almost all the global charge to be concentrated in $(\Qmax)$, \ie, $n_{\Qmax}\simeq \eta\, n_\gamma / \Qmax$ using (\ref{eq:Q_conservation}), which gives an approximate analytic expression of $\mu$: \begin{equation}\label{eq:mu_analytic} \mu \simeq \frac{1}{Q_\text{max}} \left(m_{\Qmax} + T \log \left[\frac{\eta\,c_\gamma}{\Qmax} \left(\frac{2 \pi T}{m_{\Qmax}}\right)^{3/2} \right] \right) \, , \; \; \; \; T < T_D \, . \end{equation} A final remark is in order before moving on to the out-of-equilibrium evolution of the system: the temperature $T_D$ defined here is the temperature at which the Q-balls dominate over the free particles in {\it charge}, but not necessarily {\it energy density}. Setting the two energy densities equal, by approximating $n_{\Qmax} \simeq \eta\,n_\gamma / \Qmax$ and using \eqref{eq:mu_analytic}, the energy density domination temperature is \begin{align}\label{eq:Trho} \Trho=\dfrac{m_{\Qmax}-\Qmax m_S}{(\Qmax-1)\log\left[\dfrac{\eta\uu c_\gamma}{\Qmax}\left(\dfrac{2\pi\uu \Trho}{m_S}\right)^{3/2}\right]+\left(\Qmax+\dfrac{3}{2}\right)\log\left(\dfrac{m_{\Qmax}}{m_S}\right)}\,, \end{align} which in the large-$\Qmax$ limit becomes \begin{align}\label{eq:Trho2} \Trho\xrightarrow[]{\Qmax\to\infty}\dfrac{m_S-m_{\Qmax}/\Qmax}{\log\left[\dfrac{\Qmax m_S}{m_{\Qmax}}\dfrac{1}{\eta\uu c_\gamma}\left(\dfrac{m_S}{2\pi\uu \Trho}\right)^{3/2}\right]}\,. \end{align} For Model A, $\Qmax/m_{\Qmax}\propto \Qmax^{1/4}$, and therefore Q-balls dominate the energy density at a rather late time for large $\Qmax$ compared to the charge-dominance temperature $T_D$. For Model B, on the other hand, the situation is different. Because $m_{\Qmax}\sim \Qmax\, \Omega_c \,v$ for large $\Qmax$, $\Trho$ does not depend on $\Qmax$ and will be closer to $T_D$. 
However, whether the Q-ball energy density domination can happen depends on when the system goes out of equilibrium, which we discuss in the next subsection. \subsection{The freeze out and relic abundance of Q-balls} The evolution of the free-particle--NTS system will have to freeze out sometime after the PT due to the cosmic expansion. In Ref.~\cite{Griest:1989bq}, the freeze-out temperature was estimated by considering the freeze out of process \eqref{eq:proc-SS}. However, we have numerically checked that this estimation does not capture the Q-ball dynamics properly because it is not directly related to Q-ball evolution.~\footnote{Ref.~\cite{Postma:2001ea} considered the freeze out of an individual species $n_Q$ rather than the sum $n_\text{NTS}$, which is not precise. One needs to account for $(Q-1)+S$ and $(Q)+S$ processes simultaneously, which adds to and removes from the abundance of $(Q)$ at similar rates. Ref.~\cite{Frieman:1989bx} largely disregards the fusion process in the reverse of (\ref{eq:proc-Qmin}).} Instead, one should examine the total Q-ball number density: \begin{equation} n_\text{NTS} \equiv \sum_{Q=\Qmin}^{\Qmax} n_{Q} \, . 
\end{equation} We start with the Boltzmann equations of (\ref{eq:proc-capture}--\ref{eq:proc-Qmin}) for the individual Q-ball number densities, \begin{equation} \label{eq:boltz-single} \begin{aligned} \dot{n}_{Q} + 3 H n_{Q} = & - \delta_{Q,Q_\text{min}} (\sigma v_{\rm rel})_{Q_\text{min}} \left(n_{\Qmin} n_{\Sbar} - n_{\Qmin}^\text{eq} n_{\Sbar}^\text{eq} \left( \frac{n_S}{n_S^\text{eq}} \right)^{\Qmin - 1} \right) \\ & - (1-\delta_{Q,Q_\text{max}}) (\sigma v_{\rm rel})_{Q} \left(n_{Q} n_S - n_{Q}^\text{eq} n_S^\text{eq} \left( \frac{n_{Q+1}}{n_{Q+1}^\text{eq}} \right) \right) \\ & + (1-\delta_{Q,Q_\text{min}}) (\sigma v_{\rm rel})_{Q-1} \left(n_{Q-1} n_S - n_{Q-1}^\text{eq} n_S^\text{eq} \left( \frac{n_{Q}}{n_{Q}^\text{eq}} \right) \right) \\ & - (1-\delta_{Q,Q_\text{min}}) (\sigma v_{\rm rel})_{Q} \left(n_{Q} n_{\Sbar} - n_{Q}^\text{eq} n_{\Sbar}^\text{eq} \left( \frac{n_{Q-1}}{n_{Q-1}^\text{eq}} \right) \right) \\ & + (1-\delta_{Q,Q_\text{max}}) (\sigma v_{\rm rel})_{Q+1} \left(n_{Q+1} n_{\Sbar} - n_{Q+1}^\text{eq} n_{\Sbar}^\text{eq} \left( \frac{n_{Q}}{n_{Q}^\text{eq}} \right) \right) \, , \end{aligned} \end{equation} where $\delta_{i,j}$ are the Kronecker delta functions and $Q>0$ is assumed ($Q<0$ equations are obtained by swapping $S$ and $S^\dagger$). The cross sections are given by the Q-ball geometric cross section: $(\sigma v_\text{rel})_{Q} \approx \pi R_{Q}^2$ (although see discussion of this assumption in Sec.~\ref{sec:Qball_interactions}).~\footnote{Here, we have taken $v_\text{rel} \sim 1$ because the $S$ particles are semi-relativistic during solitosynthesis. We have verified that taking into account the additional velocity dependence does not change the result appreciably.} All of the terms on the right-hand side of Eq.~(\ref{eq:boltz-single}) except the $(\sigma v_\text{rel})_{\Qmin}$ term are related to the internal evolution of the NTS system, but do not change $n_\text{NTS}$. 
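This cancellation can be verified mechanically: summing the right-hand side of \eqref{eq:boltz-single} over $Q$ with arbitrary positive stand-ins for the densities and cross sections leaves only the $(\sigma v_{\rm rel})_{\Qmin}$ fusion term. A sketch (our own check; the numerical values are random placeholders, not physical inputs):

```python
import random

random.seed(1)
Qmin, Qmax = 2, 8

# Random positive stand-ins for densities and cross sections
n = {Q: random.uniform(0.1, 1.0) for Q in range(Qmin, Qmax + 1)}
neq = {Q: random.uniform(0.1, 1.0) for Q in range(Qmin, Qmax + 1)}
nS, nSbar, nSeq, nSbareq = (random.uniform(0.1, 1.0) for _ in range(4))
sv = {Q: random.uniform(0.1, 1.0) for Q in range(Qmin, Qmax + 1)}

def rhs(Q):
    """Collision terms of eq. (boltz-single) for a single charge Q."""
    total = 0.0
    if Q == Qmin:   # fusion/destruction of the minimal Q-ball
        total -= sv[Qmin] * (n[Qmin] * nSbar
                             - neq[Qmin] * nSbareq * (nS / nSeq) ** (Qmin - 1))
    if Q < Qmax:    # loss via (Q)+S -> (Q+1)
        total -= sv[Q] * (n[Q] * nS - neq[Q] * nSeq * n[Q + 1] / neq[Q + 1])
    if Q > Qmin:    # gain via (Q-1)+S -> (Q)
        total += sv[Q - 1] * (n[Q - 1] * nS - neq[Q - 1] * nSeq * n[Q] / neq[Q])
    if Q > Qmin:    # loss via (Q)+Sbar -> (Q-1)
        total -= sv[Q] * (n[Q] * nSbar - neq[Q] * nSbareq * n[Q - 1] / neq[Q - 1])
    if Q < Qmax:    # gain via (Q+1)+Sbar -> (Q)
        total += sv[Q + 1] * (n[Q + 1] * nSbar - neq[Q + 1] * nSbareq * n[Q] / neq[Q])
    return total

total = sum(rhs(Q) for Q in range(Qmin, Qmax + 1))
fusion = -sv[Qmin] * (n[Qmin] * nSbar
                      - neq[Qmin] * nSbareq * (nS / nSeq) ** (Qmin - 1))
print(abs(total - fusion) < 1e-12)  # -> True: internal terms cancel pairwise
```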
So, by summing the Boltzmann equations for each value of $Q$ together, a simpler Boltzmann equation for the total Q-ball abundance is obtained via the cancellation of terms on the right-hand side: \begin{equation}\label{eq:boltz-NTS} \dot{n}_\text{NTS} + 3\,H\,n_\text{NTS} = - (\sigma v_{\rm rel})_{Q_\text{min}} \left(n_{\Qmin} n_{\Sbar} - n_{\Qmin}^\text{eq} n_{\Sbar}^\text{eq} \left( \frac{n_S}{n_S^\text{eq}} \right)^{Q_\text{min} - 1} \right) \, . \end{equation} Thus, the freeze-out temperature of the NTS number density can be estimated from \begin{equation}\label{eq:TF} H\,n_\text{NTS} \sim (\sigma v_{\rm rel})_{Q_\text{min}}\, n_{\Qmin}\,n_{\Sbar}\,\left| \right._{T=T_F} \, . \end{equation} Estimated in this way, $T_F$ provides a good proxy for comparison to $T_D$. This is because, for $T<T_D$, the freeze out of $n_\text{NTS}$ is equivalent to the freeze out of the dominant charge component of the system $(\Qmax)$, since $n_\text{NTS}^\text{eq} \simeq n_{\Qmax}^{\rm eq}$ has already stopped evolving significantly (see Fig.~\ref{fig:equilibrium}). This approximate equality enables us to find a simplified expression for $T_F$ from~\eqref{eq:TF} by substituting the equilibrium number densities, \begin{align} \label{eq:TF2} T_F=\dfrac{(\Qmin-1-\Qmax)\mu-(m_S+m_{\Qmin}-m_{\Qmax})}{\log\left[\dfrac{\pi\,g^{1/2}_\ast \,T_F^{1/2}[2\pi\,m_{\Qmax} /(m_S\,m_{\Qmin})]^{3/2}}{\sqrt{90}\Mpl\,(\sigma v_{\rm rel})_{\Qmin} } \right]}\, , \; \; \; \;\mbox{if}~T_F < T_D\,. \end{align} In Fig.~\ref{fig:Boltzmann}, we show the numeric solutions to the full set of Boltzmann equations for two benchmark points of Model A,~\footnote{For Model B, it is not possible to probe large enough $\Qmax$ numerically to get a useful result where (\ref{eq:M_large}) determines the mass.} as well as the freeze-out temperature analytically estimated by Eq.~\eqref{eq:TF2}. 
The plots show good agreement between our estimated $T_F$ and the point where $S$ starts to deviate from its equilibrium distribution, indicating that our estimate captures the essence of the freeze-out of the system. The $S$ abundance appears to ``freeze out'' at $T_F$ because fusions of $S$ particles in (\ref{eq:proc-Qmin}) have stopped and---although the forward processes in (\ref{eq:proc-SS}) and (\ref{eq:proc-capture}) are still active---the $n_{S^\dagger}$ and $n_{Q<\Qmax}$ abundances are subdominant to $n_S$ and thus have little effect on a logarithmic scale. Using Eq.~\eqref{eq:mu_analytic} to account for the $T$ dependence of the chemical potential and equating $T_F = T_D$ via \eqref{eq:TD} and \eqref{eq:TF2}, we determine, in terms of $\eta$ and the other NTS-related parameters, the boundary of the parameter region in which Q-balls are the dominant component of dark matter, \begin{align}\label{eq:contour} \log\eta=\frac{m_S+m_{\Qmin}}{m_{\Qmin}}\log\left[\dfrac{2}{c_\gamma}\left(\dfrac{m_S}{2\pi \TFD}\right)^{\frac{3}{2}}\right] + \frac{m_S}{m_{\Qmin}} \log\left[\dfrac{\pi g^{1/2}_\ast c_\gamma \,\TFD^{1/2}}{\sqrt{90}\Qmax\Mpl\, (\sigma v_{\rm rel})_{\Qmin}}\left(\dfrac{4\pi^2 \TFD}{m_S\,m_{\Qmin}}\right)^{\frac{3}{2}}\right] ~. \end{align} To further understand how $\eta$ scales with $v$ or $\sigma_0$, we note that $\TFD/v$ in~\eqref{eq:contour} can be treated as approximately constant for this freeze-out system. Indeed, we have verified numerically that $T_F/v \approx 5~\text{to}~10$, largely insensitive to the other parameters in the system.
Because all the scales are proportional to one another, $T_F \propto v \propto m_S \propto m_{\Qmin}$,~\footnote{We expect the system under discussion to involve only a single energy scale, \ie, the PT temperature should be roughly the same as $v$ or $\sigma_0$.} the right-hand side of \eqref{eq:contour} depends on $v$ only through the second term, which dominates over the first as $\Qmax$ becomes large. Therefore, at large $\Qmax$, we expect $\eta$ to scale as \begin{align}\label{eq:eta-v-Qmax} \eta\propto \left[\frac{v}{\Qmax\,\Mpl}\right]^{\frac{m_S}{m_{\Qmin}}}\,. \end{align} Note that $m_S/m_{\Qmin} \approx 1/\Qmin$, which gives the power-law dependence of the boundary. In Fig.~\ref{fig:contours} we show the contours of $T_D=T_F$ for various $\Qmax$. Above the contours are the regions where we expect efficient solitosynthesis to occur and NTSs to be the dominant component of charge in the dark sector. The behavior of the contours confirms that, as $\Qmax$ becomes large, their scaling indeed follows Eq.~\eqref{eq:eta-v-Qmax}. Meanwhile, comparing the contours with the same $\Qmax$ in the two panels, we find that the boundaries of efficient solitosynthesis are similar for the two models at large $\Qmax$. This is, again, because the second term in \eqref{eq:contour} determines the contour behavior at large $\Qmax$, with only a small model-dependent effect inside the logarithm. Note that for the same set of $\Qmin$ and $\Qmax$, we predict a smaller parameter space for efficient solitosynthesis than Ref.~\cite{Griest:1989bq}, because their estimate yields a lower $T_F$. However, since a much larger value of $\Qmax$ is justified in Sec.~\ref{sec:maximum-charge}, the available parameter space for efficient solitosynthesis turns out to be much larger than that in~\cite{Griest:1989bq}.
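The power-law scaling in Eq.~\eqref{eq:eta-v-Qmax} can be checked directly by evaluating the right-hand side of \eqref{eq:contour}. The sketch below does so for a deliberately simplified toy spectrum ($m_S = v$, $m_{\Qmin} = \Qmin\, v$ with binding energy neglected, $(\sigma v_{\rm rel})_{\Qmin} \sim \pi/v^2$, $T_F/v$ fixed at 7); all of these inputs are illustrative assumptions, not the Model A/B spectra:

```python
import math

MPL = 2.4e18  # reduced Planck mass in GeV (assumed convention)

def log_eta_boundary(v, Qmax, Qmin=4, TF_over_v=7.0, g_star=100.0,
                     c_gamma=0.24):
    """Evaluate log(eta) from Eq. (contour) for a toy spectrum:
    m_S = v, m_Qmin = Qmin*v, geometric cross section ~ pi/v^2."""
    mS, mQmin = v, Qmin * v
    TFD = TF_over_v * v
    sigv = math.pi / v**2
    term1 = (mS + mQmin) / mQmin * math.log(
        2.0 / c_gamma * (mS / (2.0 * math.pi * TFD))**1.5)
    term2 = (mS / mQmin) * math.log(
        math.pi * g_star**0.5 * c_gamma * TFD**0.5
        / (math.sqrt(90.0) * Qmax * MPL * sigv)
        * (4.0 * math.pi**2 * TFD / (mS * mQmin))**1.5)
    return term1 + term2
```

With these assumptions the slope of $\log\eta$ against $\log v$ (at fixed $\Qmax$) and against $\log\Qmax$ (at fixed $v$) comes out to exactly $\pm\, m_S/m_{\Qmin} = \pm 1/\Qmin$, matching Eq.~\eqref{eq:eta-v-Qmax}.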
With the freeze-out temperature determined, we can now derive the relic abundance of the dark sector (free particles plus Q-balls), which, combined with the solitosynthesis boundary, gives the viable parameter space of the models. In Fig.~\ref{fig:relic_abundance_SS} we show the parameter space where the relic abundance of the dark sector matches that of the dark matter, $\Omega_{\rm DM} h^2=0.12$~\cite{Planck:2018vyg}. For each combination of $\eta$ and $\sigma_0$ or $v$ where solitosynthesis can happen, there is a unique $\Qmax$ that satisfies the relic abundance condition, denoted by the solid black lines (above these lines, the dark sector overcloses the Universe for fixed $\Qmax$). On the boundary for efficient solitosynthesis (the gray line separating the NTS charge-dominated region from the $S$ charge-dominated region), a smaller charge asymmetry requires a larger $\Qmax$ and energy scale $v$ or $\sigma_0$. On the other hand, $\Qmax$ is bounded from above by the equilibrium requirement in \eqref{eq:Qmax_bound} and \eqref{eq:Qmax_bound_2} (excluding the blue shaded region), which is more stringent for larger $\sigma_0$ or $v$. As a result, large $\sigma_0$ or $v$ is excluded, with the exact boundary being model dependent due to the different $Q$ dependence of the cross section. Each relic abundance curve at fixed $\Qmax$ exhibits two knees. The one at larger $\eta$ (around the intersection with the red line) corresponds to the boundary between Q-ball and free-particle dominance of the dark sector energy density, as explained at the end of Sec.~\ref{subsec:TD}.
Above this knee, the relic abundance is dominated by Q-balls and can thus be calculated as \begin{align} \Omega_{\Qmax,0}=\dfrac{m_{\Qmax}n_{\Qmax,0}}{\rho_{c,0}}=\dfrac{m_{\Qmax}n_{\Qmax,T_F}}{\rho_{c,0}}\left(\dfrac{T_0}{T_F}\right)^3=\dfrac{(m_{\Qmax}/\Qmax)\eta\uu c_\gamma T^3_0}{\rho_{c,0}}\propto\dfrac{\eta\uu m_{\Qmax}}{\Qmax}\,, \end{align} where we have approximated $n_{\Qmax,T_F}\approx\eta\uu n_\gamma/\Qmax=\eta\uu c_\gamma T^3_F/\Qmax$, and the subscript ``0'' denotes present-day values. This explains why the relic abundance curves of different $\Qmax$ converge above this knee for Model B: since $m_{\Qmax} \propto \Qmax$, the $\Qmax$ dependence cancels in the expression when $\Qmax$ is large. For Model A, because $m_{\Qmax} \propto \Qmax^{3/4}$, the curves for different $\Qmax$ remain distinct. The second knee occurs at a smaller $\eta$, where the relic abundance is completely determined by the free particle $S$, \ie, $\Omega_S=\Omega_{\rm DM}$. The transition should be smooth, although in Fig.~\ref{fig:relic_abundance_SS} it is drawn as a sharp, dashed kink. This is because we have used the approximation \eqref{eq:mu_analytic} in the calculation, which breaks down in the $S$ charge-domination region, so the results are not reliable in the light gray shaded region. However, the general knee behavior is anticipated.~\footnote{For very small $\eta$ one should expect the situation to match the case of symmetric dark matter, which means that the $\Omega_S=\Omega_{\rm DM}$ curves should not continue as they are in Fig.~\ref{fig:relic_abundance_SS}, but have a unitarity bound in $\sigma_0$ or $v$. This bound is beyond the range plotted and therefore is not shown here.} In Fig.~\ref{fig:relic_abundance_SS} we have mainly used $\Qmin=4$. For larger $\Qmin$, maintaining efficient solitosynthesis at a given $\eta$ and $v$ or $\sigma_0$ requires increasing $\Qmax$ as well, as can be seen from Eq.~\eqref{eq:eta-v-Qmax}.
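The scaling $\Omega_{\Qmax,0} \propto \eta\, m_{\Qmax}/\Qmax$ above is simple enough to evaluate numerically. A toy sketch, using the standard values of today's photon temperature and critical density, $c_\gamma = 2\zeta(3)/\pi^2$, and (as in the estimate above) ignoring entropy-dilution factors from changes in $g_*$; the input mass-per-charge value in the test is illustrative, not a model prediction:

```python
import math

def omega_Qball_h2(eta, mass_per_charge):
    """Omega h^2 ~ (m_Qmax/Qmax) * eta * c_gamma * T0^3 / rho_c, following
    the dilution estimate n_0 = n_{T_F} (T0/T_F)^3 with g_* changes ignored.

    mass_per_charge = m_Qmax/Qmax in GeV. For Model B this ratio is
    Qmax-independent at large Qmax, which is why those curves converge;
    for Model A it falls as Qmax^{-1/4}, so the curves stay distinct.
    """
    c_gamma = 2.0 * 1.2020569 / math.pi**2   # n_gamma = c_gamma T^3
    T0 = 2.35e-13                            # GeV, photon temperature today
    rho_c_over_h2 = 8.1e-47                  # GeV^4, critical density / h^2
    return mass_per_charge * eta * c_gamma * T0**3 / rho_c_over_h2
```

For instance, with $\eta = 10^{-10}$, a mass-per-charge of a few tens of GeV lands near $\Omega h^2 = 0.12$, consistent with the GeV-scale benchmarks shown in Fig.~\ref{fig:relic_abundance_SS}.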
Constraints from the maximum attainable $\Qmax$ become more stringent and shift to smaller $v$ or $\sigma_0$ in this case. The thin blue dashed line shows the corresponding $\Qmax$ upper bound for $\Qmin=7$.~\footnote{The gray shaded region would also shift for a different $\Qmin$, but we have checked that the shift is hardly noticeable compared with the case of $\Qmin=4$.} The $\Omega_{\rm NTS}+\Omega_S=\Omega_{\rm DM}$ contours of fixed $\Qmax$ also shift as $\Qmin$ becomes larger, so the thin blue dashed line should not be compared to the black lines in the figure. For Model A, the intersection of the blue solid line and the light gray line corresponds to $\Qmax=6\times 10^{29}$, while for the blue dashed line of $\Qmin=7$ it is $\Qmax=5\times 10^{36}$, illustrating how the required $\Qmax$ grows with $\Qmin$. For Model B, the corresponding $\Qmax$ values are $1\times 10^{42}$ for $\Qmin=4$ and $2\times 10^{52}$ for $\Qmin=7$. For both models, a larger $\Qmin$ prefers a smaller energy scale, which could be constrained by Big Bang nucleosynthesis observables, depending on additional model details. Note that microlensing searches exclude compact objects with mass $m \gtrsim 10^{23}~\text{g}$ from making up all of dark matter \cite{Niikura:2017zjd,Smyth:2019whb,Macho:2000nvd,EROS-2:2006ryy,Wyrzykowski:2011tr,Griest:2013aaa,Oguri:2017ock}. Our Model A is not constrained by this bound because the masses $m_Q \propto Q^{3/4}$ are too small. For Model B with $\Qmax$ given by \eqref{eq:Qmax_bound_2}, microlensing requires $v\gtrsim 10^2$ GeV for benchmark (\ref{eq:benchmark}).
\section{Q-balls from a phase transition} \label{sec:Q_ball_from_PT} Q-balls can form from either an FOPT or SOPT~\cite{Frieman:1988ut,Griest:1989cb,Frieman:1989bx,Macpherson:1994wf,Hong:2020est,Bai:2018dxf,Ponton:2019hux,Bai:2021mzu} when the vacuum expectation value of $\sigma$ changes from $\sigma_0$ to $\sigma_-$ in Model A, or that of $\phi$ changes from zero to $v$ in Model B. When a region of false vacuum contains a charge $Q >\Qmin$, it can be energetically preferable for the region to remain in the false vacuum, producing a Q-ball. This charge could come about either from some initial asymmetry in the $S$ sector, or simply from statistical fluctuations in the difference between the numbers of $S$ particles and antiparticles within the relevant volumes \cite{Griest:1989cb}. Once formed, these Q-balls could serve as the initial seeds for solitosynthesis, as discussed in the previous section. But if chemical equilibrium between free $S$ particles and the Q-balls cannot be reached, this initial population could remain essentially unchanged after the end of the PT. The initial conditions after a PT can be modified in two ways, possibly simultaneously: (a) by the build-up of Q-balls starting from the fusion of free particles into $(\Qmin)$, or (b) by the evolution of the Q-balls formed during the PT. We are interested in the possibility that the initial conditions are not modified. There are various reasons for (a) not to occur. For example, it is plausible that the PT temperature $T_f$ is lower than $T_F$, such that there is not enough time for chemical equilibrium to be established. Note that $T_F$ is usually smaller than $\sigma_0$ or $v$ by an $\mc{O}(1)$ factor, so the potential might not have to be very fine-tuned to achieve a lower PT temperature. Regarding (b), a sufficient condition for no evolution of the PT-produced Q-balls is that the number of free particles in the true vacuum regions is highly suppressed after the PT.
This can occur if (i) the free particles remain inside the false vacuum bubbles due to energetics and bubble wall dynamics, (ii) the free particles are not thermally produced in the reverse of (\ref{eq:proc-SS}), and (iii) the thermal bath temperature is low enough that it cannot dislodge particles from Q-balls in the reverse of (\ref{eq:proc-capture}) or (\ref{eq:proc-Qsbar}). All three conditions can be true provided $m_S/T_f$ is large enough, perhaps requiring special model engineering. For (i), in an FOPT, the proportion of particles trapped in the false vacuum can be order unity for $m_S/T_f \gtrsim \mathcal{O}(10)$, depending on the bubble wall velocity \cite{Hong:2020est}. In an SOPT particles are initially distributed randomly in true and false vacuum pockets, but then may be expected to rearrange to favor the false vacuum pockets. Because they are heavier inside true vacuum pockets, the probability to remain there should be suppressed by a Boltzmann factor $\sim e^{-m_S/T}$. There may also be additional bubble wall dynamics similar to the FOPT as false vacuum pockets shrink~\cite{Frieman:1988ut}. For (ii), similar to \cite{Baker:2019ndr}, we require $(\sigma v_\text{rel}) n_S^{\rm eq} \lesssim H$ using (\ref{eq:SS-xsec}) and (\ref{eq:number_density}) with $\mu_S=0$ at $T=T_f$, giving $m_S/T_f \gtrsim 31 + (3/2)\log [m_S/(31 T_f)] + \log (T_f\cdot\text{TeV}/v^2)$. Finally, for (iii), because the binding energy per $S$ particle in a large-charge Q-ball is generically of order the free particle mass in Model B [barring a fine tuning with $(\ls \lphi)^{1/4}$ very close to $m_S/v$] and could be significantly larger in Model A, it is energetically unlikely for $S$ or $S^\dagger$ particles to be kicked out of Q-balls. 
Specifically, detailed balance lets us estimate that these processes are irrelevant when $(\sigma v_\text{rel})_{\langle Q \rangle} n_S^{\rm eq}<H$ with $\mu_S=0$, giving $m_S/T_f \gtrsim 50 + (3/2)\log [m_S/(50\,T_f)] + \log (T_f\cdot\text{TeV}/v^2) + (1/2)\log(\langle Q \rangle/10^{10})$ for Model A and $m_S/T_f \gtrsim 53 + (3/2)\log [m_S/(53\,T_f)] + \log (T_f\cdot\text{TeV}/v^2) + (2/3)\log(\langle Q \rangle/10^{10})$ for Model B, where $\langle Q \rangle$ is the typical charge produced from the PT. In this case, chemical equilibrium is unlikely to be achieved following the PT, and the solitosynthesis story in Sec.~\ref{sec:solitosynthesis} need not apply. If these conditions do not hold, we must carefully consider how the charge and abundance of Q-balls and free particles evolve. To estimate the typical Q-ball charge immediately following a phase transition, we begin with the number density of $S$ and $S^\dagger$ particles in a given Hubble volume near the temperature $T_f$: $n_S = (2 \zeta(3)/\pi^2) T_f^3$, where $T_f$ refers to the bubble nucleation temperature $T_n$ (when the true vacuum occupies $1-e^{-1}$ of the total volume) for an FOPT, or the Ginzburg temperature $T_G$ (the temperature at which thermal fluctuations between the true and false vacua freeze out) for an SOPT. These $S$ particles are divided up into a number of potential pockets for Q-ball formation. For an FOPT, the number of potentially formed Q-balls approximately equals the number of bubble nucleation sites. This number is determined by the temperature-dependent bounce action for the field to transition from the false to the true vacuum (see Appendices~\ref{appendix:bounce_action} and \ref{appendix:nucleation_sites} for a more detailed discussion). Once bubbles have nucleated, $S$ particles are ``snowplowed'' by the bubble walls owing to their smaller mass inside the false vacuum. Where bubble walls meet, $S$ particles can collect and form Q-balls.
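The thresholds in conditions (ii) and (iii) are implicit in $x = m_S/T_f$, since $x$ appears inside its own logarithm; as before, fixed-point iteration converges quickly. An illustrative sketch of the generic form $x = c + \tfrac{3}{2}\ln(x/c) + \ln(T_f\cdot\text{TeV}/v^2) + \text{extra}$, where $c = 31$, $50$, or $53$ as quoted in the text and `extra` stands for any $\langle Q\rangle$-dependent logarithms (all inputs in GeV; this is a numerical aid, not a derivation):

```python
import math

def threshold_mS_over_Tf(c, Tf, v, extra=0.0, iters=200):
    """Solve x = c + 1.5*ln(x/c) + ln(Tf * 1e3 / v**2) + extra for x = m_S/T_f.

    c: the leading constant (31, 50, or 53 in the text's conditions).
    Tf, v: in GeV; 'extra' collects additional logs (e.g. the <Q> terms).
    The map is a contraction for x > 1.5, so iteration converges.
    """
    x = c
    for _ in range(iters):
        x = c + 1.5 * math.log(x / c) + math.log(Tf * 1e3 / v**2) + extra
    return x
```

At $T_f = v = 1$ TeV the extra logarithms vanish and the threshold reduces exactly to the quoted constant.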
Parametrizing the bounce action of the FOPT as $S_3/T = a/\epsc^2$, where $\epsc = (T_c-T)/T_c$ and $T_c$ is the temperature at which the two vacua are degenerate, the number density of Q-balls formed at the bubble nucleation temperature $T_n$ is (see Appendix~\ref{appendix:nucleation_sites}) \begin{equation} n_\text{Q-ball}(T_n) \sim n_\text{nuc} \approx (4 \pi v_\text{sh}^3 a^{1/2})^{-1} H_n^3 \left(\log \left[\frac{v_\text{sh}^3\, \epsn^9\, T_n^4}{8 \sqrt{2\pi}\, a^{5/2} \, H_n^4}\right]\right)^{3/2} \, , \; \; \; \; \text{(FOPT)} . \label{eq:nQball_FOPT} \end{equation} Here, the subscript $n$ denotes quantities evaluated at $T=T_n$, and we have taken $\epsn = \epsc|_{T=T_n} \lesssim 1$. For an SOPT, the number of Q-ball-forming sites depends on the correlation length $\xi$ of the PT and the probability for each correlated region to be in the false vacuum \cite{Frieman:1988ut}. The latter depends on the energy difference between the false and true vacua at the Ginzburg temperature $T_G$. The probability ratio is $p_\text{false}/p_\text{true} \sim \exp [-\Delta V(T_G) \, (2\xi)^3 / T_G]$. The correlation length also depends on the Ginzburg temperature as $\xi \simeq (\lphi T_G)^{-1}$ for Model B (with $\lphi$ replaced by $\lambda$ for Model A), where $T_G \simeq \lphi^{-1/2} v$~\cite{Frieman:1988ut} and $\Delta V(T_G) \sim \lphi v^4 /4$. Thus, $p_\text{false}/p_\text{true} \sim e^{-2}$. This gives the number density of potentially Q-ball-forming correlated regions \begin{align} n_\text{Q-ball} (T_G) \sim \frac{1}{1+p_\text{true}/p_\text{false}} \xi^{-3} \sim 10^{-1} \lphi^{3/2} v^3 \, , \; \; \; \; \text{(SOPT)}\,. \end{align} Notice that this is generally orders of magnitude larger than the number density from an FOPT because $v \gg H_n \sim v^2/\Mpl$. Thus, an SOPT will produce more numerous but smaller-charged Q-balls. The number of $S$ particles and antiparticles within the proto-Q-ball is $N_S^\text{Q-ball} \sim p_\text{in} \, n_S/ n_\text{Q-ball}$.
The factor $p_\text{in}$ accounts for the probability for each of the $S$ particles to remain inside the false vacuum regions and form Q-balls. The typical Q-ball charge will be the greater of the asymmetric or statistical fluctuation components: $\langle Q \rangle \sim \max \left[\eta N_S^\text{Q-ball}, \, (N_S^\text{Q-ball})^{1/2} \right]$, where $\eta \approx |n_S - n_{S^\dagger}| / (n_S + n_{S^\dagger})$. \footnote{The approximation is exact when $S$ particles are relativistic so $n_S+n_{S^\dagger} = n_\gamma$. If more exact results are required, simply define $\eta$ using $n_S+n_{S^\dagger}$ instead of $n_\gamma$ in the denominator for the purposes of this section.} Notice that if the asymmetric component (first term) dominates $\langle Q \rangle$, then most Q-balls will have same-sign charge, whereas if the statistical fluctuations (second term) dominate, then both positively and negatively charged Q-balls result in equal proportion. The Q-ball abundance can also be calculated as $Y_\text{Q-ball} \sim p_{Q>\Qmin} n_\text{Q-ball}\,s^{-1}$. The factor $p_{Q>\Qmin}$ accounts for the probability for each proto-Q-ball to have large enough charge to be stable. Often, $\langle Q \rangle \gg \Qmin \sim \mathcal{O}(1\text{ to } 10^3)$, so this factor can be $\mathcal{O}(1)$. The factor $s=(2\pi^2/45) g_{*S} T^3$ is the entropy density with $g_{*S} \sim 100$. This can be compared to the observed dark matter abundance $Y_\text{DM} = (3.6 \times 10^{-10}) (\GeV/m_Q)$. 
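To make these estimates concrete, the sketch below evaluates the FOPT nucleation-site density of Eq.~\eqref{eq:nQball_FOPT} and the resulting typical charge $\langle Q \rangle \sim \max(\eta N_S^\text{Q-ball},\, \sqrt{N_S^\text{Q-ball}})$. All numerical inputs ($v_\text{sh}$, $a$, $\epsilon_n$, $g_*$, the radiation-era Hubble rate, $p_\text{in}=1$) are illustrative assumptions:

```python
import math

MPL = 2.4e18  # reduced Planck mass, GeV (assumed convention)

def n_Qball_FOPT(Tn, v_sh=0.3, a=0.1, eps_n=0.1, g_star=100.0):
    """Eq. (nQball_FOPT): nucleation-site number density in GeV^3,
    with the standard radiation-era H_n = sqrt(pi^2 g_*/90) Tn^2 / Mpl."""
    Hn = math.sqrt(math.pi**2 * g_star / 90.0) * Tn**2 / MPL
    l = math.log(v_sh**3 * eps_n**9 * Tn**4
                 / (8.0 * math.sqrt(2.0 * math.pi) * a**2.5 * Hn**4))
    return Hn**3 * l**1.5 / (4.0 * math.pi * v_sh**3 * math.sqrt(a))

def typical_charge(eta, Tn, p_in=1.0, **kw):
    """<Q> ~ max(eta*N, sqrt(N)), N = p_in * n_S / n_Qball, n_S ~ 0.24 Tn^3."""
    N = p_in * 0.24 * Tn**3 / n_Qball_FOPT(Tn, **kw)
    return max(eta * N, math.sqrt(N))
```

The crossover between the asymmetric and statistical components occurs at $\eta \sim (N_S^\text{Q-ball})^{-1/2}$; since a Hubble volume at $T_n$ contains an enormous number of $S$ particles per nucleation site, even a tiny $\eta$ typically puts the system in the asymmetry-dominated regime.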
For an FOPT and the mass spectrum of $m_Q = 5.15 \sigma_0 \lambda^{1/4} \langle Q\rangle^{3/4}$, \begin{align} \frac{Y_\text{Q-ball}}{Y_\text{DM}} \sim & (1.3 \times 10^{-5}) p_{Q>\Qmin} g_{*s}^{-1} \lambda^{1/4} \left(\frac{\sigma_0}{\GeV}\right)^{7/4} a^{-1/8} \\ & \times \max \left( \eta \,p_\text{in}^{3/4} g_*^{3/8} l^{3/8} v_\text{sh}^{-3/4} \, , \, \, (6.4 \times 10^{-23}) \left(\frac{\sigma_0}{\GeV}\right)^{9/8} \frac{p_\text{in}^{3/8} g_*^{15/16} l^{15/16}}{a^{3/16} v_\text{sh}^{15/8}} \right) ~, \nonumber \end{align} while for the mass spectrum $m_Q = \langle Q \rangle v (\lphi \ls)^{1/4}$, \begin{align} \frac{Y_\text{Q-ball}}{Y_\text{DM}} \sim & (1.5\times 10^{9}) p_{Q>\Qmin} g_{*s}^{-1} (\lphi \ls)^{1/4}\frac{v}{\GeV} \\ & \times \max \left(\eta \, p_\text{in} \, , \, \, (2.6 \times 10^{-30}) p_\text{in}^{1/2} \left(\frac{v}{\GeV}\right)^{3/2} \left(\frac{g_*^3\uu l^3}{v_\text{sh}^6 a}\right)^{1/4} \right) \, , \nonumber \end{align} where $l\equiv \log \left[v_\text{sh}^3\, \epsn^9\, T_n^4/(8 \sqrt{2\pi}\, a^{5/2} \, H_n^4)\right]$. Meanwhile, for an SOPT and the mass spectrum of $m_Q = 5.15 \sigma_0 \lambda^{1/4} \langle Q\rangle^{3/4}$, \begin{align} \frac{Y_\text{Q-ball}}{Y_\text{DM}} \sim & (1.1 \times 10^{10}) p_{Q>\Qmin} g_{*s}^{-1} \lambda \frac{\sigma_0}{\GeV} \max \left(\frac{\eta\, p_\text{in}^{3/4}}{(1+p_\text{true}/p_\text{false})^{1/4}} \, , \, \, \frac{1.7 \lambda^{9/8} p_\text{in}^{3/8}}{(1+p_\text{true}/p_\text{false})^{5/8}} \right)~, \end{align} while for the spectrum of $m_Q = \langle Q \rangle v (\lphi \ls)^{1/4}$, \begin{align} \frac{Y_\text{Q-ball}}{Y_\text{DM}} \sim & (1.5 \times 10^{9}) p_{Q>\Qmin} g_{*s}^{-1} (\lphi \ls)^{1/4}\frac{v}{\GeV} \max \left(\eta\, p_\text{in} \, , \, \, 2 \lphi^{3/2} \sqrt{\frac{p_\text{in}}{1+p_\text{true}/p_\text{false}}} \right) ~. 
\end{align} In Figs.~\ref{fig:1OPT} and \ref{fig:2OPT} (corresponding to FOPTs and SOPTs, respectively), we show contours of parameters that give the proper abundance of Q-balls in the left panels, assuming $p_\text{in}=p_{Q>\Qmin}=1$. The right panels give the corresponding typical charges $\langle Q \rangle$ for those parameters (Model A curves stop at large $\langle Q\rangle$ when $\eta=1$). At larger $\eta$, the asymmetric component dominates the Q-ball properties. However, at smaller $\eta$ where the contours become vertical in the left panels and dots in the right panels, the symmetric component dominates, equivalent to the $\eta \to 0$ limit. For FOPTs, $a \lesssim 100$ for the expansion in $\epsc$ to hold, but there is in principle no lower bound. Note that because the correlation lengths for SOPTs tend to be much smaller than the bubble separation lengths of FOPTs, SOPTs tend to form a higher number density of Q-balls with smaller charges and therefore prefer lower energy scales compared to FOPTs. Additionally, Model A prefers higher charges and energy scales than Model B because Model A contains lighter Q-ball states with $m_Q \propto Q^{3/4}$. \section{Discussion and conclusions} \label{sec:conclusion} Before concluding, we present some useful benchmarks for Q-ball properties formed from solitosynthesis or PTs. A plausible value for the amount of charge asymmetry is $\eta\sim 10^{-10}$, similar to the observed baryon asymmetry in the Standard Model sector. In the case of efficient solitosynthesis and for the $\Qmin=4$ benchmark presented in Fig.~\ref{fig:relic_abundance_SS}, the $\Qmax$ upper bound could give a reasonable estimate for the typical NTS charge $\langle Q \rangle$. For the PT formation, on the other hand, it is most interesting to note that $\eta=0$ gives viable parameter space. 
Using these assumptions, benchmark values are given in Table~\ref{tab:benchmarks} for solitosynthesis, an FOPT, and an SOPT, for both Models A and B, with Q-balls making up all of dark matter (except for solitosynthesis in Model A, in which $S$ particles dominate the energy density but Q-balls dominate the global charge for the parameter choice $\eta=10^{-10}$; see Fig.~\ref{fig:relic_abundance_SS}). Comparing these three examples, solitosynthesis produces Q-balls with the largest charges, masses, and radii. FOPTs prefer the largest values of $v$ or $\sigma_0$, giving Q-balls the smallest radii---despite not having the smallest charges---as well as macroscopic masses. SOPTs prefer the smallest charges and the smallest values of $v$ or $\sigma_0$, giving Q-balls microscopic masses. Because its Q-ball states are less compact, Model B tends to produce larger-radius Q-balls than Model A. Much larger masses and radii are accessible with larger $\eta$, as demonstrated in Table~\ref{tab:benchmarks} by choosing a benchmark with $v$ near the MeV scale for solitosynthesis or an FOPT in Model B. Note that microlensing searches exclude compact objects with mass $m \gtrsim 10^{23}~\text{g}$ from making up all of dark matter \cite{Niikura:2017zjd,Smyth:2019whb,Macho:2000nvd,EROS-2:2006ryy,Wyrzykowski:2011tr,Griest:2013aaa,Oguri:2017ock}, and there are possibilities to probe down to much lower masses in the future \cite{Katz:2018zrn,Bai:2018bej,Jung:2019fcs}. Indeed, some of the points in Table~\ref{tab:benchmarks} and Figs.~\ref{fig:relic_abundance_SS} and \ref{fig:1OPT} are already constrained.
Other detection strategies at still lighter mass \cite{Carney:2022gse}, such as multiple-scatter searches \cite{Bramante:2018qbc,DEAPCollaboration:2021raj}, rely on model-dependent couplings to the Standard Model \cite{Ponton:2019hux,Bai:2019ogh,Bai:2021mzu}, although NTSs with masses near the Planck scale could eventually be discovered through gravitational interactions alone \cite{Carney:2019pza,Windchime:2022whs}. \begin{table}[t] \renewcommand{\arraystretch}{1.3} \addtolength{\tabcolsep}{3pt} \centering \begin{tabular}{l| l|l||l | l| l |l} \hline \hline Mechanism & Model & $\eta$ & $m_Q$ (g) & $R_Q$ (m) & $\langle Q \rangle$ & $\sigma_0$ or $v$ (GeV) \\ \hline \multirow{3}{*}{Solitosynthesis} & A & $10^{-10}$ & 3 & $3 \times 10^{-10}$ & $6 \times 10^{29}$ & 10 \\ & B & $10^{-10}$ & $5\times 10^{22}$ & $2\times 10^{-3}$ & $1 \times 10^{45}$ & $1\times 10^2$ \\ & B & $10^{-6}$ & $6\times 10^{30}$ & $3\times 10^5$ & $1\times 10^{57}$ & $1 \times 10^{-2}$ \\\hline \multirow{3}{*}{FOPT} & A & 0 & $9\times 10^{-6}$ & $5\times 10^{-23}$ & $3 \times 10^{11}$ & $2\times 10^9$ \\ & B & 0 & $2\times 10^{-3}$ & $4 \times 10^{-19}$ & $1 \times 10^{14}$ & $4 \times 10^7$ \\ & B & $10^{-4}$ & $8 \times 10^{26}$ & $1 \times 10^{5}$ & $7 \times 10^{53}$ & $3 \times 10^{-3}$ \\\hline \multirow{2}{*}{SOPT} & A & 0 & $2 \times 10^{-20}$ & $3\times 10^{-15}$ & $5 \times 10^4$ & $7 \times 10^{-1}$ \\ & B & 0 & $1 \times 10^{-20}$ & $5 \times 10^{-13}$ & $5 \times 10^4$ & $2 \times 10^{-2}$ \\ \hline \hline \end{tabular} \caption{ Benchmark Q-ball properties in Models A and B as produced from solitosynthesis or a PT making up all of dark matter (except solitosynthesis of Model A, see text). For the FOPT, $a=10^{-1}$ is assumed. Model parameters are chosen as in (\ref{eq:benchmark}) with $\Qmin=4$, except the SOPT where $\lambda=\lphi=10^{-3}$ is used. 
The benchmark points with the largest masses for solitosynthesis and the FOPT were chosen to demonstrate the full range of possible properties, but they are excluded by microlensing searches from making up all of dark matter \cite{Niikura:2017zjd,Smyth:2019whb,Macho:2000nvd,EROS-2:2006ryy,Wyrzykowski:2011tr,Griest:2013aaa,Oguri:2017ock}. } \label{tab:benchmarks} \end{table} As Q-balls are usually massive macroscopic objects, it is natural to inspect the possibility that they collapse into black holes~\cite{Lee:1986ts,Friedberg:1986tq,Lee:1991ax}. We find that the solitons from both solitosynthesis and PTs do not collapse into black holes. A simple criterion for collapse is to compare the Q-ball radius with the Schwarzschild radius of a black hole of the same mass, \ie, requiring $R_Q \gtrsim G\,m_Q/2$ for stability. This defines a critical charge $Q_\text{BH}$ above which Q-balls collapse into black holes. For solitosynthesis, one may compare this with the upper bounds given in \eqref{eq:Qmax_bound} and \eqref{eq:Qmax_bound_2}. Requiring $Q_\text{BH} > \Qmax$ for Model A leads to $n^2_S(T)/T^4\lesssim 0.8g^{1/2}_\ast/G$, which sets $\sigma_0\lesssim 1.9\times 10^{21}$ GeV when taking $T=\sigma_0$ and $g_\ast=100$. Thus, Q-balls will not collapse into black holes for sub-Planckian values of $\sigma_0$. For Model B, the corresponding bound [again, taking $m_Q=(\lambda_S\lambda_\phi)^{1/4}v\uu Q$] is $n^2_S(T)/(v^2 T^4) \lesssim 64 g_\ast \lambda^{1/2}_\phi/(3\lambda^{1/2}_S)$, which instead sets a bound on the model parameters: $\lambda^2_{\phi S}K^2_2(\lambda^{1/2}_{\phi S}/2)<4096\pi^4 g_\ast \lambda^{1/2}_\phi/(3\lambda^{1/2}_S)$. The left-hand side of this inequality has a maximum value of 64 at $\lambda_{\phi S}=0$, so the constraint is easily satisfied unless $\lambda_\phi$ is as small as $10^{-10}$.
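The collapse criterion is easy to check against the benchmarks of Table~\ref{tab:benchmarks}. A sketch using the text's criterion $R_Q \gtrsim G\,m_Q/2$ with $G = 1/\Mpl^2$ in natural units (the unit conversions are standard; the benchmark numbers in the test are the Model A and Model B solitosynthesis entries of the table):

```python
GRAM_TO_GEV = 5.61e23       # 1 gram in GeV
METER_TO_INV_GEV = 5.07e15  # 1 meter in GeV^-1
MPL_GEV = 1.22e19           # non-reduced Planck mass, so G = 1/MPL^2

def collapses_to_bh(m_Q_grams, R_Q_meters):
    """True if the text's collapse criterion R_Q < G * m_Q / 2 is met."""
    m = m_Q_grams * GRAM_TO_GEV          # mass in GeV
    R = R_Q_meters * METER_TO_INV_GEV    # radius in GeV^-1
    return R < m / (2.0 * MPL_GEV**2)
```

For the tabulated benchmarks the Q-ball radii exceed the critical radius by many orders of magnitude, consistent with the conclusion that these solitons do not collapse.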
An assumption we made in the analysis of solitosynthesis is that the evolution results in a single thermalized system of free particles and Q-balls with charges from $\Qmin$ to $\Qmax$. There is another possibility, in which the Q-balls formed from PTs are so large that they can only discharge down to a charge $Q_{\rm low}$ larger than $\Qmax$, the maximum charge attainable through fusion. In other words, there is a gap in the Q-ball charge spectrum, and the system consists of two subsets that are not thermalized with each other. Either subset could end up dominating the charge or energy density. Such a situation invalidates the chemical potential relationships \eqref{eq:mu}, as well as the ingredients needed to derive the Boltzmann equation \eqref{eq:boltz-NTS}. In this case the analysis is more complicated and beyond the scope of this work. However, given the typical charges shown in Table~\ref{tab:benchmarks}, it may be more likely that $Q_\text{low}<\Qmax$ and there are not two separate subpopulations. Neither of the two models we discussed has an intrinsic upper bound on the Q-ball charge; such a bound, if present, would change the soliton evolution calculations in this work. An upper bound of this kind does exist in some models, for instance because large Q-balls could destabilize the false vacuum~\cite{Kusenko:1997hj} or because a repulsive interaction within large Q-balls is too strong for them to exist~\cite{Lee:1988ag}. If solitosynthesis pushes the charges beyond this upper bound, there may be unique signatures coming from, \eg, the induced phase transition \cite{Croon:2019rqu} or the collapse of these large Q-balls in these respective examples. To conclude, in this work we have studied the cosmic evolution of Q-balls, considering scenarios both with and without solitosynthesis.
In the case that solitosynthesis is efficient, we estimate which species dominates the total charge or energy of the free-particle--NTS system, and examine both analytically and numerically the evolution of the system through a set of coupled Boltzmann equations. We then derive the parameter space where solitosynthesis is efficient enough that Q-balls dominate the charge and/or energy density of the dark sector, while the dark sector makes up all of the dark matter. Our calculations refine previous estimates of the maximum attainable charge in the system and of the freeze-out temperature of the NTS number density, and we thus find a much larger parameter space for efficient solitosynthesis. Without solitosynthesis, Q-balls do not appreciably evolve after the PT. We discuss the possible reasons for this scenario and examine how the PT parameters determine the typical charge, mass, and abundance of the Q-balls. Our work is restricted to scalar theories, but the discussion can be generalized to fermionic macroscopic states as well. The results of this work demonstrate that NTSs can be copiously produced in the early universe and can serve as one type of macroscopic dark matter. \subsubsection*{Acknowledgements} The work of YB is supported by the U.S. Department of Energy under the contract DE-SC-0017647. The work of SL is supported in part by the Israel Science Foundation under Grant No. 1302/19, and also by the Area of Excellence (AoE) under the Grant No. AoE/P-404/18-3 issued by the Research Grants Council of Hong Kong S.A.R. The work of NO is supported by the Arthur B. McDonald Canadian Astroparticle Physics Research Institute.
We are grateful to the Munich Institute for Astro- and Particle Physics (MIAPP), which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy-EXC-2094-390783311, and the Mainz Institute for Theoretical Physics (MITP) of the Cluster of Excellence PRISMA+ (Project ID 39083149) for their hospitality and partial support during the completion of this work. \appendix \section{Analytic approximations for small $Q$} \label{appedix:smallQ} For Model B in (\ref{eq:V}), it has been pointed out that the ansatz $s(r) \approx s_0 \sin (\omega r)/(\omega r)$ for $r<\pi/\omega$ provides a reasonable approximation to the solution of the field equations of motion \cite{Ponton:2019hux,Bai:2021mzu}. Indeed, this is the approximate solution when $f=0$ and $\ls$ is negligible (for nonzero $\ms$, simply replace $\omega \to \sqrt{\omega^2-\ms^2}$). However, this ansatz captures neither the tail at large radius nor the fact that $f(0)>0$ is more likely at small $Q$. Here, we propose a refined small-$Q$ ansatz: $s(r) = s_0 [1-\tanh^2 (\omega' r)]$ and $\Delta f = 1 - f = \pi_0 [1-\tanh^2(\omega' r)]$. We determine $\omega$, $\omega'$, $s_0$, and $\pi_0$ by eliminating some of them in favor of parameters such as $Q$, and minimizing the mass $m_Q$ with respect to the rest. The equations of motion are not analytically tractable for this ansatz, so we use an expansion in powers of $r$. For simplicity, we take $\ms=0$ here; it can easily be restored by modifying $\Omega \equiv \omega/v$. We also define $\Omega' = \omega'/v$ in analogy with $\Omega$. First, use $Q = 4 \pi \Omega \int_{0}^{\infty} d \bar{r} \bar{r}^2 s^2$ to obtain $s_0$: \begin{equation} s_0 = 3 \sqrt{\frac{Q \, \Omega'^3}{(2\pi^3-12 \pi) \Omega}} \, .
\end{equation} Next, use the leading order solution to the EOM $\Delta f'' + \frac{2}{\bar{r}} \Delta f' - \frac{1}{2} \frac{\partial V}{\partial{\Delta f}} =0$~\footnote{The final factor of $1/2$ on the left-hand side arises because $\Delta f$ does not have a canonically normalized kinetic term.} near $r=0$ to obtain \begin{equation} \pi_0 = \frac{\lc \, s_0^2}{48 \, \Omega'^2} \, . \end{equation} Using these results in the formula for the mass \beqa m_Q/v = 4\pi \int d \bar{r} \bar{r}^2 \left(\frac{1}{2} s'^2 + f'^2 + \frac{1}{2} \Omega^2 s^2 + \frac{1}{8}\, \lc\,s^2 \,f^2 + \frac{1}{4} \lphi (1 - f^2 )^2 + \frac{1}{4} \ls s^4 + \frac{1}{2}\ms^2 s^2 \right)\,, \eeqa and keeping contributions up to $\mathcal{O}(Q^3)$ leads to \begin{equation} \begin{aligned} m_Q/v \supset & 4 \pi \int_{0}^{\infty} d \bar{r} \bar{r}^2 \left(\frac{1}{2} s'^2 + \Delta f'^2 + \frac{1}{2} \Omega^2 s^2 + \frac{1}{8} \lc s^2 - \frac{1}{4} \lc s^2 \Delta f + \lphi \Delta f^2 \right) \\ =& \frac{2 \pi Q \Omega'^2}{5(\pi^2-6) \Omega} + \frac{\pi \lc^2 Q^2 \Omega'}{640 (\pi^2-6)^2 \Omega^2} + \frac{Q \Omega}{2} + \frac{\lc Q}{8 \Omega} + \frac{3(15-2\pi^2) \lc^2 Q^2 \Omega'}{320 \pi (\pi^2-6) \Omega^2} + \frac{\lphi \lc^2 Q^2}{512 \pi (\pi^2-6) \Omega^2 \Omega'} \, , \end{aligned} \end{equation} where the terms in the second line correspond in order to those in the first line. A simple solution for $\Omega'$ can be determined by neglecting the last term and minimizing $m_Q$ with respect to $\Omega'$ using the remaining terms, leading to \begin{equation} \Omega' = \frac{(11\pi^2 - 90) Q \lc^2}{512 \pi^3 (\pi^2-6) \Omega} \, . \end{equation} A slightly more complicated expression for $\Omega$ (not displayed here) results from minimizing the same terms with respect to $\Omega$ after substituting for $\Omega'$. It approaches its upper limit $\Omega < \sqrt{\lc}/2$ \cite{Ponton:2019hux} as $Q \to 0$ and is relatively independent of $Q$ in the small-$Q$ regime where our approximations are valid. 
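The normalization of $s_0$ can be checked by direct quadrature of the charge integral. The sketch below is ours, with purely illustrative values of $Q$, $\Omega$, and $\Omega'$ (not taken from the paper):

```python
import numpy as np

# Sanity check of the small-Q normalization: with the ansatz
# s(rbar) = s0 * sech^2(Omega' * rbar), the charge integral
#   Q = 4*pi*Omega * Int_0^inf rbar^2 s^2 drbar
# should be inverted by s0 = 3*sqrt(Q*Omega'^3 / ((2*pi^3 - 12*pi)*Omega)).
Q, Omega, Omega_p = 5.0, 0.4, 0.3

s0 = 3.0 * np.sqrt(Q * Omega_p**3 / ((2.0 * np.pi**3 - 12.0 * np.pi) * Omega))

# midpoint rule on a grid wide enough that sech^2 has fully decayed
n, r_max = 400_000, 80.0
dr = r_max / n
r = (np.arange(n) + 0.5) * dr
s = s0 / np.cosh(Omega_p * r) ** 2
Q_numeric = 4.0 * np.pi * Omega * np.sum(r**2 * s**2) * dr

print(Q_numeric)  # recovers Q ~ 5.0
```

The quadrature reproduces the input charge, confirming the prefactor $2\pi^3-12\pi = 4\pi \cdot 18^{-1}(\pi^2-6)^{-1}$ bookkeeping implied by $\int_0^\infty x^2\,\mathrm{sech}^4 x\,dx = (\pi^2-6)/18$.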
The last term in $m_Q/v$ plays a crucial role in setting the minimum charge $\Qmin$. After some algebra, it can be shown that the mass goes as \begin{equation} m_Q/v \sim Q \frac{\sqrt{\lc}}{2} + Q \lphi a_1 - Q^3 a_2 + \mathcal{O}(Q^5) \, , \end{equation} where $a_1$ and $a_2$ are positive constants given in (\ref{eq:M_small}). The second term in this expression comes directly from the $\lphi \Delta f^2$ term in the mass integral, \ie, the vacuum energy from the $\phi$ field. Notice that at sufficiently small $Q$, $m_Q/Q > v \sqrt{\lc}/2 =m_S$. Thus, the Q-ball is unstable at small $Q$. However, at large enough $Q$, the $Q^3$ term dominates over the $a_1$ term and makes the Q-ball stable against evaporation into $Q$ free particles. Setting $m_Q/Q = m_S$ in this expansion gives $\Qmin \simeq \sqrt{\lphi \, a_1/a_2}$ at leading order, so $\Qmin$ is particularly sensitive to $\lphi$. This approach gives highly accurate predictions for $\Qmin$ compared to the full numerical solutions, as shown for example in Fig.~\ref{fig:MQ-Q}. \section{Analytic approximations for large $Q$ surface energy} \label{appedix:surfaceE} In the large $Q$ limit, it is useful to estimate the surface energy contribution $c_2$ in (\ref{eq:M_large}). (For a simpler derivation of the leading order contributions, see, \eg, \cite{Ponton:2019hux,Bai:2021mzu}.) For a single-field model like Coleman's Q-ball, there is a simple estimate available in the thin-wall limit. If one neglects the friction term $2 s'/\overline{r}$, then the $s$ equation of motion would be \begin{equation} \label{eq:EOM_surface_1field} s'' + \frac{\partial U_\text{eff}}{\partial s} = \frac{1}{s'} \left(\frac{1}{2} (s')^2 + U_\text{eff}\right)' = 0 \, , \end{equation} where primes denote derivatives with respect to the radial coordinate $\overline{r}$, and $U_\text{eff} = \Omega^2 s^2/2 - V(s)$. 
Then, the gradient energy contribution near the surface of the Q-ball can be approximated by \begin{equation} \label{eq:M_surface_1field} \frac{m_Q}{4 \pi v} \supset \int_0^\infty d\overline{r} \overline{r}^2 \frac{1}{2} s'^2 \approx \frac{1}{2} R^2_Q \int_0^{s_0} ds \, s' \approx \frac{1}{2} R^2_Q \int_0^{s_0} ds \, \sqrt{-2 U_\text{eff}} \, . \end{equation} Unfortunately, this approach does not work for multifield potentials like that of (\ref{eq:V}), which contains another degree of freedom $f$ in $U_\text{eff}$. Because the radial field derivative in the middle term of (\ref{eq:EOM_surface_1field}) also picks up an $f' (\partial U_\text{eff} / \partial f)$ term, no simple substitution for $s'$ exists in (\ref{eq:M_surface_1field}). Thus, to estimate $c_2$ in the model with potential (\ref{eq:V}), we will instead resort to the variational method using the following analytic ansatz for the field solutions in the large-$Q$ limit: \begin{subequations} \label{eq:largeQ_surface_ansatz} \beqa s &= s_0 \left(1 - \tanh \left(\frac{r-R_Q}{a} \right) \right) ~, \\ f &= \frac{1}{2} \left(1 + \tanh \left(\frac{r-R_Q}{a} \right) \right) ~. \eeqa \end{subequations} In principle, $R_Q$ and $a$ could be different for the two field profiles to provide a better fit, but keeping them the same simplifies the expressions. These profiles provide a reasonable fit to the numerical results. As with any application of the variational method, they will slightly overestimate the true ground state energy of the system. Integrating these field profiles yields expressions for $Q$ and $m_Q$ in terms of polylogarithms. We replace the polylogs by their leading-order asymptotic approximations at infinity: $\text{Li}_2(x) \sim -(3 \log^2(x) + \pi^2)/6$ and $\text{Li}_3(x) \sim -(\log^3(x) + \pi^2 \log(x))/6$, checking at each step that these approximations match the full expression to high precision. 
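The leading-order large-$Q$ stationary point derived below, $\Omega_{Q\to\infty} = (\lphi \ls)^{1/4}$ and $R_{Q\to\infty} = (3/(4\pi))^{1/3} \lphi^{-1/4} \ls^{1/12} Q^{1/3}$ for the bulk terms $m \sim Q\Omega/2 + \pi\lphi R_Q^3/3 + 3\ls Q^2/(16\pi R_Q^3\Omega^2)$, can be cross-checked numerically. This is our own sketch with illustrative coupling values:

```python
import math

# Cross-check that the analytic stationary point of the leading-order
# large-Q mass function is a local minimum. Couplings are illustrative.
l_phi, l_s, Q = 0.5, 0.3, 1.0e4

def m(Omega, R):
    return (Q * Omega / 2
            + math.pi * l_phi * R**3 / 3
            + 3 * l_s * Q**2 / (16 * math.pi * R**3 * Omega**2))

Omega_star = (l_phi * l_s) ** 0.25
R_star = (3 / (4 * math.pi)) ** (1 / 3) * l_phi ** (-0.25) * l_s ** (1 / 12) * Q ** (1 / 3)

m0 = m(Omega_star, R_star)
# the analytic point should beat nearby perturbed points along both axes
for eps in (0.99, 1.01):
    assert m0 < m(Omega_star * eps, R_star)
    assert m0 < m(Omega_star, R_star * eps)
print(Omega_star, R_star, m0)
```

Each one-dimensional slice of $m(\Omega, R)$ is strictly convex at positive arguments, so passing these perturbation checks confirms the stationary point is the minimum.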
Then, we substitute the expression for $Q$ to eliminate the variable $s_0$ in the expression for $m_Q$. Leading-order expressions for $\Omega$ and $R_Q$ can be determined by ignoring (surface) terms with $a$ and expanding about $R_Q \to \infty$ at leading order, resulting in the expression $m_{Q\to\infty} \sim Q \Omega / 2 + \pi \lphi R^3_Q /3 + 3 \ls Q^2 / (16 \pi R^3_Q \Omega^2)$. Minimizing with respect to $\Omega$ and $R_Q$ gives $\Omega_{Q\to\infty} = (\lphi \ls)^{1/4}$ and $R_{Q\to\infty} = (3/(4 \pi))^{1/3} \lphi^{-1/4} \ls^{1/12} Q^{1/3}$.\footnote{This agrees with the large-$Q$ result in \cite{Bai:2021mzu} after accounting for differences in the definitions of couplings in (\ref{eq:V}).} Next, for the surface terms, additional subleading terms in $R_Q$ are added to the expression for $m_Q$ in the previous paragraph: $m_Q = m_{Q\to\infty} + 3 a \ls Q^2 / (64 \pi R^4_Q \Omega^2) + \pi R^2_Q (12 a \lphi + 192 a^{-1})/144$. A new minimized value for $\Omega$ (now including $a$) is obtained. This new $\Omega$ is substituted into $m_Q$, as well as $R_Q = R_{Q\to\infty} + \Delta R_Q$. The expression is then minimized with respect to $\Delta R_Q$, which in the $Q \to \infty$ limit becomes $\Delta R_Q \sim - 2/(3 a \lphi)$. After further simplification in the $Q \to \infty$ limit and minimizing with respect to $a$, we obtain \begin{equation} c_2 = \left(\frac{\pi}{6}\right)^{1/3} \frac{2 \sqrt{4 \ls^{3/2} \lphi +\ls \lc \sqrt{\lphi}} + \sqrt{4 \lphi^2 \sqrt{\ls} + \lphi^{3/2} \lc}}{2\ls^{1/3} \sqrt{2 \lphi \sqrt{\ls} + \lphi^{3/2}}} \, . \end{equation} In practice, we find numerically that this estimate is reliable only to about a factor of two. \section{Series expansion near $T_c$} \label{appendix:bounce_action} For an FOPT, define a temperature $T_c$ where the two minima of the potential are degenerate, and let $\epsc(T) \equiv (T_c-T)/T_c$. We wish to determine the $\epsc$ dependence of relevant quantities in the PT for small $\epsc$. 
For simplicity, start with a one-field model of the PT with scalar field $\phi$. Then, expanding the potential $V(\phi,T)$ in $\epsc$ for $T \approx T_c$, \begin{equation} V(\phi,T) = V(\phi,T_c) - \epsilon_c(T) f(\phi) \, . \label{eq:Vexp} \end{equation} The three-dimensional bounce action for the PT at fixed temperature near $T_c$ is thus (where $\phi=\phi(r)$) \begin{equation} \begin{aligned} S_3 &= 4 \pi \int r^2 dr \left[\frac{1}{2} \left(\frac{\partial\phi}{\partial r}\right)^2 + V(\phi,T) \right] \\ &= 4 \pi \int r^2 dr \left[\frac{1}{2} \left(\frac{\partial\phi}{\partial r}\right)^2 + V(\phi,T_c) - \epsc f \right] \\ & \equiv \frac{4 \pi}{2} R^2 S_1 - 4 \pi \epsc \int f\, r^2 dr \\ & \approx \frac{4 \pi}{2} R^2 S_1 - \frac{4 \pi}{3} R^3 \epsc f(\phi_2) \, . \end{aligned} \end{equation} In the last two lines, the thin wall approximation has been assumed---namely, that $f(\phi)\approx f(\phi_2)$ for $r<R$ and $f(\phi) \approx f(\phi_1) \approx 0$ for $r>R$, where $\phi_{1,2}$ are the $T$-dependent minima of $V(\phi,T)$ corresponding to the low-temperature false and true vacua, respectively. The first two terms in the integral in the second line define $S_1$---those terms are only nonzero near the wall at $r \approx R$ and thus contribute a surface term proportional to $R^2$. Minimizing $S_3$ with respect to $R$, the radius is $R = S_1/(\epsc f(\phi_2))$, and the action is \begin{equation} S_3 = \frac{4 \pi\, S_1^3}{6 \,\epsc^2\, f^2(\phi_2)} \, . \end{equation} Ultimately, we would like a series expansion of $S_3$ in terms of $\epsilon_c$. We have seen above that we expect the leading term to go as $\epsc^{-2}$ in the thin wall approximation. Thus, we expect the expansion around small $\epsc$ to go as \begin{equation} \label{eq:S3T} \frac{S_3}{T} = \frac{a}{\epsc^2} + \frac{b}{\epsc} + ... 
\end{equation} To get a sense of how this may look, consider the thermal potential in Ref.~\cite{Dine:1992vs} \begin{equation}\label{eq:V_DLHLL} V(\phi, T) = D \,(T^2-T_0^2) \,\phi^2 - E \,T\, \phi^3 + \frac{\lambda}{4}\, \phi^4 \, . \end{equation} Here, $T_c = T_0 \sqrt{\lambda D / (\lambda D - E^2)}$, which requires $\lambda D > E^2$. That reference gives a good approximation for the bounce action \begin{equation}\label{eq:S3_DLHLL} \frac{S_3}{T} \approx \frac{13.7 D^{3/2} (T^2-T_0^2)^{3/2}}{E^2 T^{3}} \left[1+ \frac{\alpha}{4} \left(1+ \frac{2.4}{1-\alpha} + \frac{0.26}{(1-\alpha)^2}\right) \right] \, \end{equation} with $\alpha \equiv \lambda D (T^2-T_0^2)/(E^2 T^2)$. A series expansion of this in powers of $\epsc$ reveals the coefficients $a,b$ as \begin{align}\label{eq:S3oT_ab} a = \frac{0.22 E^5}{\lambda^{3/2} (\lambda D - E^2)^2} ~, \qquad \qquad b = \frac{3.00 E^3 (\lambda D - 1.22 E^2)}{\lambda^{3/2} (\lambda D - E^2)^2} ~. \end{align} In the model in \cite{Dine:1992vs}---which considers the electroweak PT---$D \propto g^2$ and $E \propto g^3$, where $g$ is the $SU(2)$ gauge coupling. A similar argument may apply to our model, \eg, if one of the scalar fields is gauged. Thus, for small $g$, $a\propto \lambda^{-7/2} g^{11}$. It can be checked that $(\epsn b)/a \propto g^{3/2}$ in the small $g$ limit, where we have used $\epsn \propto a^{1/2}$ derived in Appendix \ref{appendix:nucleation_sites}. Therefore, the expansion in (\ref{eq:S3T}) can hold to arbitrarily small $g$ and thus arbitrarily small $a$. For example, with $g \sim 10^{-3}$ one may have $a \sim 10^{-30}$. \section{Number density of nucleation sites} \label{appendix:nucleation_sites} With the parametrized bounce action $S_3/T$, it is straightforward to determine the number density of bubble nucleation sites. 
We start from the bubble nucleation rate per volume $\gamma$, which is written as \begin{align} \gamma \approx \, T^4 \, \left( \frac{S_3}{2\pi T}\right)^{3/2} \, e^{-S_3/T} \,, \end{align} where we have omitted an $\mc{O}(1)$ coefficient. The fraction of the volume in the false vacuum can then be calculated as \begin{align} h(t) = \mathrm{exp}\Bigl[ - \frac{4\pi}{3} \int^t_{t_c} \! dt' \, v_\mathrm{sh}^3\,(t-t')^3\, \gamma(t') \Bigr] \,, \end{align} where $t_c$ is the time at which the plasma temperature equals $T_c$, and $v_{\rm sh}$ is the bubble wall velocity. As the spatial fraction of the unbroken phase becomes exponentially suppressed once the PT starts, we take $h(t_n)=1/e$ as the definition of the bubble nucleation time $t_n$ (the corresponding temperature $T_n$ is the bubble nucleation temperature). We use the saddle-point approximation to evaluate the integration in the exponential function, as it is saturated at late times. Specifically, we rewrite $\gamma$ as $\gamma(t^\prime) = \mathrm{exp}{[\log \gamma(t^\prime)]}$ and expand $\log\gamma$ to the next-to-leading order as $\log{\gamma(t^\prime)} \approx \log{ \gamma(t_n)} +\, (t^\prime - t_n)\, \xi$. Defining \begin{align} \beta \equiv - \frac{d(S_3/T)}{dt} = \left( \frac{\dot{T}/T}{-H} \right) \left( T \frac{d(S_3/T)}{dT} \right) H \,, \end{align} we express $\xi$ as \begin{align} \xi \equiv \frac{d}{dt} \log \gamma = \beta - \frac{3}{2}\, \frac{\beta}{(S_3 / T) } + 4 \, \frac{\dot{T}}{T}\approx \beta \,. \end{align} With these approximations, the bubble nucleation time can be determined as \begin{align}\label{eq:t_n} h(t_n)=1/e\ \Rightarrow\ \frac{4\pi}{3} \int_{t_c}^{t_n} \! d t' \, v_\mathrm{sh}^3 \, (t_n - t')^3\, \gamma(t_n) \, e^{(t^\prime - t_n) \beta} \ \approx \ 8\pi v_\mathrm{sh}^3 \gamma(t_n) \, \beta^{-4}\ \approx\ 1\,. \end{align} The number density of nucleation sites can correspondingly be estimated as \begin{align} n_\mathrm{nuc} = \int_{t_c}^{t_n} \! 
d t^\prime \, \gamma(t^\prime) \, h(t^\prime) \, \approx \, \bigl( 8 \pi v_\mathrm{sh}^3 \,\beta^{-3} \bigr)^{-1} ~. \end{align} With the parametrization in (\ref{eq:S3T}), it is more convenient to perform the calculations in terms of the supercooling parameter $\epsc$ instead of the physical time $t$. It is easy to show that $\beta/H\approx T\ d(S_3/T)/dT=(1-\epsc)(2a/\epsc^3+b/\epsc^2)$. The supercooling parameter $\epsc$ at the nucleation time can thus be determined from \eqref{eq:t_n} as \begin{align} 8\pi\, v^3_{\rm sh}\,T^4_n\left(\frac{a/\epsn^2+b/\epsn}{2\pi}\right)^{3/2}=\exp\left(\frac{a}{\epsn^2}+\frac{b}{\epsn}\right)\left[(1-\epsn)\left(\frac{2a}{\epsn^3}+\frac{b}{\epsn^2}\right)\right]^4 H^4_n\,, \end{align} where the subscript $n$ indicates that the corresponding quantity is evaluated at $t=t_n$. As we expect the exponential factor on the right-hand side to dominate, we can reorganize the equation as \begin{align} \frac{a}{\epsn^2}+\frac{b}{\epsn}\ =\ \log\left(\frac{8\pi\, v^3_{\rm sh}\,T^4_n\left(\frac{a/\epsn^2+b/\epsn}{2\pi}\right)^{3/2}}{\left[(1-\epsn)\left(\frac{2a}{\epsn^3}+\frac{b}{\epsn^2}\right)\right]^4 H^4_n}\right)\ \equiv\ c\,. \end{align} We therefore have $\epsn=(2a)/(-b\pm\sqrt{b^2+4ac})$. Note that there is still a mild $\epsn$ dependence in $c$. Under the thin-wall approximation, Appendix~\ref{appendix:bounce_action} showed that the bounce action satisfies $S_3/T\propto \epsc^{-2}$, {\it i.e.} $b=0$. It is worth checking to what extent this approximation is valid, or how much the $b/\epsc$ term contributes to the bounce action $S_3/T$ and the nucleation site number density $n_{\rm nuc}$. We consider the potential in \eqref{eq:V_DLHLL} and keep only the $1/\epsc^2$ and $1/\epsc$ terms in the fitted bounce action \eqref{eq:S3_DLHLL}. The results are shown in Fig.~\ref{fig:S3_NLO}. 
The difference in the number density of bubble nucleation sites introduced by the NLO term is generally within one order of magnitude of the LO-only result. For the limit $b=0$, a simpler expression results: $a = \epsn^2 \log [ v_\text{sh}^3 \epsn^9 T_n^4 / (8 \sqrt{2\pi} a^{5/2} H_n^4)]$ (where $\epsn \ll 1$ was assumed). Thus, for the expansion in powers of $\epsn$ to hold, $\epsn \lesssim 1$ implies $a \lesssim 100$ for $T_n \sim 100~\GeV$. Then, $n_\text{nuc} \approx (4 \pi v_\text{sh}^3 a^{1/2})^{-1} H_n^3 (\log [v_\text{sh}^3 \epsn^9 T_n^4 / (8 \sqrt{2\pi} a^{5/2} H_n^4)])^{3/2}$. Note that a larger value for $a$ gives a later PT with less bubble nucleation. Thus, larger $a$ is associated with a larger charge $Q$. 
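The self-consistent solution $\epsn=(2a)/(-b+\sqrt{b^2+4ac})$, with the mild $\epsn$ dependence residing in $c$, lends itself to a simple fixed-point iteration. The sketch below is ours; the values of $a$, $b$, $v_{\rm sh}$, and the stand-in for $\log(T_n^4/H_n^4)$ are illustrative only:

```python
import math

# Fixed-point solution of the nucleation condition a/eps^2 + b/eps = c(eps),
# where c(eps) is the logarithm on the right-hand side of the condition,
# iterated via eps = 2a / (-b + sqrt(b^2 + 4*a*c(eps))).
a, b, v_sh = 10.0, 5.0, 0.6
log_T4_over_H4 = 4 * math.log(1e16)   # illustrative stand-in for log(T_n^4/H_n^4)

def c(eps):
    S = a / eps**2 + b / eps                       # S_3/T at this eps
    W = (1 - eps) * (2 * a / eps**3 + b / eps**2)  # beta/H factor
    return (math.log(8 * math.pi * v_sh**3) + log_T4_over_H4
            + 1.5 * math.log(S / (2 * math.pi)) - 4 * math.log(W))

eps = 0.1  # starting guess
for _ in range(50):
    eps = 2 * a / (-b + math.sqrt(b**2 + 4 * a * c(eps)))

print(eps, a / eps**2 + b / eps - c(eps))  # residual ~ 0 at the fixed point
```

Because $c$ depends only logarithmically on $\epsn$, the iteration converges in a handful of steps.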
Title: Flexible Models for Galaxy Star Formation Histories Both Shift and Scramble the Optical Color-M/L Relationship
Abstract: The remarkably tight relationship between optical color and stellar mass-to-light ratio ($M_*/L$) in galaxies is widely used for efficient stellar mass estimates. However, it has remained unclear whether this low scatter comes from a natural order in the galaxy population, or whether it is driven by simple relationships in the models used to describe them. In this work we investigate the origins of the relationship by contrasting the derived relationship from a simple 4-parameter physical model with a more sophisticated 14-dimensional Prospector-$\alpha$ model including nonparametric star formation histories. We apply these models to 63,430 galaxies at $0.5<z<3$ from the 3D-HST survey and fit the results with a hierarchical Bayesian model (HBM) for the population distribution in the $(g-r)$--$\log(M/L_g)$ plane. We find that Prospector-$\alpha$ infers systematically higher $M_*/L$ by 0.12 dex, a result of the nonparametric star formation history producing older ages. Prospector-$\alpha$ also infers systematically redder rest-frame $(g-r)$ by 0.06 dex owing to nebular emission. Surprisingly, we observe similar average color--$M_*/L$ relationships for the two models due to the combined effects of the $M_*/L$ and $(g-r)$ offsets. Nevertheless, Prospector-$\alpha$ produces a much looser color-M/L relationship, with a scatter of 0.28 dex compared to 0.12 dex for the simple model. Also, unlike the simple model, the Prospector-$\alpha$ model shows a substantial redshift evolution in the relationship due to stellar aging. Finally, we demonstrate that the HBM produces substantial shrinkage in the individual posteriors for faint galaxies, an important first step towards using the observed galaxy population directly to inform the priors in galaxy SED-fitting.
https://export.arxiv.org/pdf/2208.12295
\begin{CJK*}{UTF8}{gbsn} \title{Flexible Models for Galaxy Star Formation Histories Both Shift and Scramble the Optical Color-M/L Relationship} \author[0000-0002-0682-3310]{Yijia Li (李轶佳)} \email{yzl466@psu.edu} \affiliation{Department of Astronomy \& Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA} \affiliation{Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA} \author[0000-0001-6755-1315]{Joel Leja} \affiliation{Department of Astronomy \& Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA} \affiliation{Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA} \affiliation{Institute for Computational \& Data Sciences, The Pennsylvania State University, University Park, PA 16802, USA} \shorttitle{color-\stellarML{}} \shortauthors{Li et al.} \keywords{Galaxy evolution, Galaxy masses, Galaxy colors, Spectral energy distribution, Hierarchical models} \section{Introduction} \label{sec:introduction} The stellar mass of a galaxy ($M_*$) encodes rich information about the formation of the galaxy itself. Stellar mass changes through internal star formation activity and external mergers and galaxy interactions. It is a stable property that allows one to connect galaxy populations across time when the galaxies themselves have disparate ages, dust contents, sSFRs, and colors. It plays a critical role in our understanding of the evolution of galaxies over cosmic time, as massive galaxies tend to form earlier and quench faster than low-mass galaxies (``downsizing''; \citealt{Cowie1996}; \citealt{massmet}). Also, $M_*$ correlates with many galaxy physical properties such as star formation rate (SFR), metallicity ($Z$), and galaxy size. By combining these scaling relationships, $M_*$ can provide us with a baseline knowledge of galaxy properties. 
Since $M_*$ is not an observable, its determination generally relies on spectral energy distribution (SED) fitting of broadband photometry or spectra (e.g., \citealt{Papovich2001, Shapley2001, Pforr2012, Conroy2013, Courteau2014}). In SED fitting, the galaxy spectrum is modeled as an assembly of stellar populations of coeval stars with (typically) homogeneous metallicity. In this composite stellar population, the total stellar mass of the galaxy is the integral of the SFH across time plus the effects of stellar mass loss. The SED-fitting process is complex and most informative when performed with large amounts of data, as it involves generating template galaxy SEDs and inferring the model parameters by comparing the model SEDs to the data. When we do not have sufficient data to perform informative SED modeling, a popular and efficient alternative is to estimate $M_*$ from the rest-frame optical color through the color--\stellarML{} relationship (CMLR). This empirical relationship allows remarkably accurate $M_*$ estimates without SED fitting. \citet{Bell&deJong2001} reported a tight linear relationship between the optical color $(B-R)$ and $\log M_*/L$ with a scatter of $\sim$0.2\,dex. Since then several studies have further explored the relationship using combinations of color and $M/L$ in different bands based on both stellar population synthesis (SPS) models and the results of performing SED fitting on the observations (e.g., \citealt{Bell2003}; \citealt{Portinari2004}, \citealt{Zibetti2009}; \citealt{Taylor2011GAMA}; \citealt{Into2013}; \citealt{Courteau2014}; \citealt{McGaugh2014}; \citealt{van-de-Sande2015}; \citealt{Garcia2019}; \citealt{Ge2021}). These studies confirmed the robustness of $M_*$ estimates from a single optical color but revealed a larger scatter in the relationship ($\sim$0.3\,dex). 
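In practice, applying a CMLR amounts to two lines of arithmetic. The sketch below is ours, not the calibration of any particular study: the coefficients $a$ and $b$ are invented for illustration, while the solar absolute $g$-band magnitude $M_g = 5.11$ is the value adopted later in this paper:

```python
# Illustrative sketch of estimating stellar mass from a single optical color
# via a CMLR of the form log10(M*/L_g) = a + b * (g - r). The coefficients
# a, b are hypothetical placeholders; real values come from fits such as
# Bell & de Jong (2001) or the hierarchical model in this paper.
a, b = -0.6, 1.5          # hypothetical CMLR coefficients
M_g_sun = 5.11            # absolute g-band magnitude of the Sun (AB)

def stellar_mass(abs_g_mag, g_minus_r):
    """Return log10(M*/Msun) from rest-frame absolute g magnitude and color."""
    log_Lg = 0.4 * (M_g_sun - abs_g_mag)   # g-band luminosity in solar units
    log_ML = a + b * g_minus_r             # CMLR prediction for log10(M*/L_g)
    return log_Lg + log_ML

print(stellar_mass(-20.0, 0.5))  # ~10.2 for these illustrative numbers
```

Only a redshift (for the rest-frame magnitude and color) is needed, which is what makes the CMLR so much cheaper than full SED fitting.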
In comparison to optical colors, near-infrared (NIR) colors are less predictive of $M_*/L$ (as we will show in Figure~\ref{fig:colorpriors}) and are more sensitive to the modeling of the asymptotic giant branch (AGB) phase of stellar evolution. Because the color can be estimated directly from the observations, the optical color--\stellarML{} relationship has been widely used to translate stellar light into stellar mass when the object redshift is known, e.g., in dynamical studies \citep[e.g., ][]{Nguyen2020}. The notorious dust-age-metallicity degeneracy actually helps shape this remarkably tight relationship. Increasing the age or metallicity, or adding more dust, makes the galaxy redder and at the same time raises the measured $M_*/L$. An important finding of \citet{Bell&deJong2001} is that the dust effect on color and \stellarML{} is parallel to the relationship. These stellar population parameters counteract each other's effects on optical color and \stellarML{} and the net result is a relationship with small scatter. In Figure~\ref{fig:colorpriors} we present the relationships between the priors of color and diffuse dust optical depth ($\hat\tau_2$), mass-weighted age derived from SFH ($t_\textrm{avg}$), and stellar metallicity (stellar $Z$) for the two SED models adopted in this paper\footnote{Note that, only for Figure~\ref{fig:colorpriors}, we match the $\hat\tau_2$ prior of the models to have a fair comparison of the color--\stellarML{} relationship.} (see Section 2.2 for the details and construction of Figure~\ref{fig:colorpriors}). As we have emphasized, a tight relation between optical color and $\logML$ is a fundamental outcome of the stellar physics and dust attenuation models. Figure~\ref{fig:colorpriors} shows that due to the strong degeneracies among parameters, we cannot infer the age, dust, and metallicity from the optical color \gr{} alone, as indicated by their broad distribution at a given color. 
However, we can predict the $\ML$ ratio with high confidence from \gr{}. By contrast, the $(\textsl{J}-\textsl{K})$ color spans a narrow range and is not sensitive to either $\ML$ or the stellar population parameters. This is because the degeneracy works differently for different bands. The shape of the degeneracy makes the optical \stellarML{} increase much faster than the NIR color as the stellar population parameters vary. Many components of SED modeling can potentially affect the accuracy of the mass determination from the empirical optical color--\stellarML{} relationship. Such components include the physical model, the initial mass function (IMF), the stellar isochrones, and the stellar spectral libraries. IMF variations can shift the entire relationship to higher or lower \stellarML{} but generally do not influence the variance, as most IMF assumptions only affect the low-mass stars, which emit a small fraction of the total light. Even with the same IMF, different choices of SPS model can still introduce a $\lesssim$\,0.2\,dex offset in \stellarML{}, as argued by several previous studies (e.g., \citealt{Zibetti2009}; \citealt{van-de-Sande2015}; \citealt{Ge2019}). The offset is strongest for optically blue stellar populations. There are many possible reasons for the offset, such as different treatment of the AGB phase of stellar evolution \citep{Zibetti2009, Kriek2010}, and the completeness of the covered parameter space of the stellar spectral libraries, particularly for young, metal-poor populations \citep{MIST1, Senchyna2019}. However, the influence of SPS models on the color--\stellarML{} relationship is not the topic of this paper. In this work, we want to evaluate the role of the simplified prescriptions in SED models, which has received comparatively little attention in the literature. 
Previous work on the color--\stellarML{} relationship mostly relies on relatively simple model assumptions like parametric SFHs, fixed dust attenuation curves, and often assumes fixed metallicity. These simple prescriptions have been used for a long time because they make the model straightforward to fit and easy to interpret. Nevertheless, SED-fitting models have become substantially more complex in recent years (see reviews by \citealt{Walcher2011}; \citealt{Conroy2013}) in accounting for extra important effects, such as freedom in SFHs, metallicity evolution, nebular emission, dust emission, and AGN, in concert with the development of machinery that creates self-consistent population evolution. It is unclear whether the color--\stellarML{} relationship from previous, simpler SED models also emerges from a substantially more sophisticated framework. In this paper, we employ a sophisticated physical model \prospector{} \citep{Prospector-1}, which allows a wide range of physics and has 14 free parameters. By comparing it with a simpler model, we will demonstrate that how we model the galaxy has a large impact on the resultant relationship. Instead of performing a linear regression on the optical color and $\log M_*/L$ best-fit data like most previous studies, we utilize a hierarchical Bayesian model (HBM) to derive the density distribution of the relationship. Our HBM characterizes the relationship with a population distribution that models the distribution of $\log M_*/L$ at a given color with explicit parameters. The hyperparameters that define the population model are fit to the individual color and \stellarML{} posteriors from the SED fits. The result of this two-level Bayesian inference is the full posterior distribution of the population parameters. 
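A toy version of this two-level inference can be sketched in a few lines. The sketch below is ours, not the paper's implementation: it draws mock posterior samples for each galaxy, assumes flat individual priors, and recovers Gaussian population hyperparameters (the mean and intrinsic scatter of $\log M_*/L$ at fixed color) by maximizing the sample-averaged population density on a grid:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical inference: each "galaxy" i supplies posterior samples of
# log(M*/L); the population model is a Gaussian with hyperparameters
# (mu, sigma). The hyper-likelihood is approximated by averaging the
# population density over each galaxy's posterior samples.
true_mu, true_sigma, obs_err = 0.2, 0.15, 0.1
n_gal, n_samp = 200, 64

truths = rng.normal(true_mu, true_sigma, n_gal)
obs = truths + rng.normal(0.0, obs_err, n_gal)
# mock posterior samples for each galaxy (Gaussian around the measurement)
post = obs[:, None] + rng.normal(0.0, obs_err, (n_gal, n_samp))

def log_hyper_like(mu, sigma):
    dens = np.exp(-0.5 * ((post - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.sum(np.log(dens.mean(axis=1)))

# coarse grid over the hyperparameters
mus = np.linspace(-0.2, 0.6, 41)
sigmas = np.linspace(0.05, 0.4, 36)
ll = np.array([[log_hyper_like(m, s) for s in sigmas] for m in mus])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print(mus[i], sigmas[j])  # should land near (0.2, 0.15)
```

Note that the recovered scatter is close to the intrinsic 0.15 rather than the broader spread of the noisy measurements: the hierarchy automatically deconvolves measurement error from population scatter, which is the property exploited in this paper.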
This method naturally corrects for the observational uncertainties of color and $M_*/L$ by utilizing the SED-fitting posteriors as weights, without any requirement for assuming Gaussian or uncorrelated posteriors. Traditional $\chi^2$ minimization can be biased by an uneven data distribution in the measurement plane, especially when the measurement uncertainties of the independent variable are comparable to its intrinsic population scatter \citep{Kelly2007}. The HBM is less prone to this bias, as it naturally distinguishes intrinsic scatter from the measurement errors in its hierarchical structure. Additionally, we pass likelihoods to the population model instead of posteriors by including priors in the weights. In this way we go from an informative prior to an ``uninformative'' (flat) prior, which is normally not the case in linear regression. Also, because the HBM assumes the fitted objects come from the same population and assigns a shared prior distribution (i.e., the population distribution) to the individual color and $M_*/L$ estimates, it shrinks the individual fits and makes them cluster around the population mean \citep{Loredo&Hendry2019}. This points to a major advantage of the HBM: we can learn new priors from the population and reapply them to individual SED fits. In this work, we will investigate the relationship between the optical color \gr{} and the $\textsl{g}$-band \stellarML{}, $\ML$, using two contrasting SED models. Our goals are to (1) diagnose how SED model assumptions affect the relationship, since the \prospector{} model is very different from previous models; (2) derive a relationship at higher redshifts than previous studies, which allows data-based \stellarML{} estimates when the full machinery of SED fitting is impractical. The paper is organized as follows. In Section \ref{sec:sample}, we review the sample properties and the key features of our SED models. In Section \ref{sec:model}, we introduce the algorithms of our hierarchical Bayesian modeling approach. 
We present the resultant relationships in Section \ref{sec:result}, where we also investigate the driving factors behind the \stellarML{} and color offsets between the two SED models. We compare our results to a few previous works and discuss what we learn from the HBM in Section \ref{sec:discussion}. In Appendix \ref{sec:appendixA} we present a mock test of our HBM. We assume a \citet{Chabrier2003} IMF in our analysis. All colors and \stellarML{} values are measured in the rest frame. We use AB magnitudes throughout the paper and adopt the absolute magnitude of the Sun in the $\textsl{g}$ band, $M_\textsl{g} = 5.11$ \citep{Willmer2018solarMags}. \section{Data from SED fitting} \label{sec:sample} In this work, we will contrast the optical color--\stellarML{} relations derived from two SED models. Both models are constructed within the SED-fitting framework \texttt{Prospector} (\citealt{Prospector-1}; \citealt{Prospector-2}). We select our sample from the \hst{} photometric catalogs \citep{Skelton2014}. In Section 2.1 we introduce the photometry from the \hst{} survey and our selection criteria. In Section 2.2 we describe the two SED models used in this work. In Section 2.3 we show the prior distributions of \gr{} and $\logML$, which will be used in deriving the hierarchical models. \subsection{\hst{} Sample} \hst{} \citep{Skelton2014} is a space-based grism survey covering $\sim$900\,arcmin$^2$ in five well-studied extragalactic fields. It provides between 17 (the UDS field) and 44 (the COSMOS field) bands of photometry at wavelengths 0.3--8 $\mu$m for hundreds of thousands of galaxies. The survey is supplemented with Spitzer/MIPS 24 $\mu$m photometry from \citet{2014ApJ...795..104W}. The \hst{} catalogs also provide photometric redshifts from the EAZY SED-fitting code \citep{2008ApJ...686.1503B}. 
Approximately 30\% of galaxies studied in this work have measured spectroscopic redshifts or grism redshifts, which are computed by fitting the photometry and spectrum simultaneously \citep{2016ApJS..225...27M}. In this paper, we adopt a subsample of \hst{} galaxies from \citet{2019ApJ...877..140L} and \citet{prospector-massfunc}, consisting of 63,430 galaxies. This sample is selected between $0.5<z<3.0$ above the stellar mass completeness limit of \hst{}, which is the mass of the least-massive galaxy detectable. It is critical when performing population-level inference to reduce or eliminate selection effects by working with a mass-complete sample, or alternatively to model the selection function very well. With the redshift cut, the observed photometry covers the rest-frame \gr{} color across the full redshift range. Details of the selection criteria and adjustments to the photometric zero-points from the default \hst{} catalogs are described in \citet{2019ApJ...877..140L}. \subsection{Two Contrasting Physical Models for SED fitting} We use the \texttt{Prospector} galaxy SED-fitting code to fit the \hst{} photometry. The \texttt{Prospector} inference framework is based on Bayesian forward modeling. The posterior parameter distribution is sampled using the dynamic nested sampling code \texttt{dynesty} \citep{dynesty}. For every galaxy in our sample, we fit to the photometry both a simple SED model that mimics the FAST settings \citep{2009ApJ...700..221K} used to derive the stellar population parameters in the \hst{} catalog \citep{Skelton2014}, and the more complex \texttt{Prospector-$\alpha$} SED model \citep{Prospector-1}. The \prospector{} fits have been performed in \citet{2019ApJ...877..140L} and \citet{prospector-massfunc}. 
We adopt the MESA Isochrones and Stellar Tracks (MIST; \citealt{MIST0}; \citealt{MIST1}) and the MILES stellar spectral libraries \citep{MILES} in the Flexible Stellar Population Synthesis (FSPS; \citealt{Conroy2009}) framework\footnote{A small source of uncertainty in the SED-fitting procedure comes from the interpolation among stellar metallicity grids in simple stellar population (SSP) models \citep{Mitchell2013}, since $Z$ is a complex function of stellar population properties and observed flux. Our color and \stellarML{} are sensitive at the $\sim$0.03\,mag and $\sim$0.03\,dex level to the metallicity interpolation scheme (i.e., triangular weighting versus a delta function). }. The simple model is constructed using basic assumptions that have been widely applied in SED modeling. It has four parameters: the stellar mass formed $M_{*, \mathrm{formed}}$, the diffuse dust optical depth $\hat\tau_2$, the galaxy age $t_\mathrm{age}$, and the star formation timescale $\tau$. The stellar metallicity is fixed to the solar value. We adopt the \citet{Calzetti2000} dust attenuation curve with a flat prior over $0 < \hat\tau_2 < 4$. The $\hat\tau_2$ parameter controls the normalization of the attenuation curve. We assume an exponentially declining SFH with a minimum $t_\mathrm{f}$ of 30\,Myr. The 14-parameter \prospector{} model incorporates many of the recent important improvements in SED modeling into a single consistent framework. Stellar and gas-phase metallicities are free parameters in the model. The stellar mass--stellar metallicity relationship modified from \citet{massmet} is implemented as a prior. We assume a flexible two-component dust attenuation model that accounts for birth-cloud and diffuse dust separately. For the diffuse dust component, we add a dust-index parameter to allow variations in the shape of the attenuation curve as in \citet{Noll2009}, and include the UV dust absorption bump. Dust emission, computed from energy balance, is also built into the model.
We use a step-function nonparametric SFH with seven time bins, which is more capable of capturing the diversity of galaxy SFHs than the exponentially declining SFH \citep{Leja2019a, Carnall2019, Lower2020}. The model includes both nebular line and continuum emission as implemented by \citet{Byler2017}, which are important for young stellar populations and high-redshift objects. Mid-infrared dust emission from a dust-enshrouded AGN is also permitted in the model. The priors on the \prospector{} model parameters are listed in \citet{2019ApJ...877..140L}. \subsection{Priors on Color and Mass-to-light Ratios} To fit a hierarchical model to the optical color--\stellarML{} relationship, we must first infer the SED-fitting priors on both color and $M_*/L$, so that we can correct for the priors in the HBM and not be biased by the assumptions used in our physical models. The color and $M_*/L$ priors are not specified explicitly; instead, they are specified implicitly by the choice of priors on SED model parameters including the SFH, dust, metallicity, etc. We infer them numerically, and their joint distributions are shown in Figure~\ref{fig:colorpriors}. Their marginal distributions are shown in Figure~\ref{fig:priors}. The resulting prior probability density distributions will be used in the next section for building hierarchical models. In Figure~\ref{fig:priors} we compare the \gr{} and $\logML$ priors with the distributions inferred for the 63,430 galaxies fit. The priors are not closely matched to the data, especially for the simple model. This is because the observed number density of galaxies is dominated by blue galaxies with small $M_*/L$, and the number densities of various subpopulations of galaxies are not yet typically taken into account in galaxy SED-fitting priors. Figure~\ref{fig:priors} shows that the \prospector{} priors describe the data better than the simple model, with a narrower range and a peak closer to the data.
This is mostly owing to the different dust priors adopted, since galaxies with a high $\hat\tau_2$ will be redder and correspond to higher mass-to-light ratios. The simple model assumes a uniform $\hat\tau_2$ prior from 0 to 4, following the standard for the field (e.g., \citealt{daCunha2008}, \citealt{Marchesini2009}, \citealt{Muzzin2013ApJS}, \citealt{Skelton2014}), whereas the \prospector{} model has a more informative $\hat\tau_2$ prior: a Gaussian with a mean of 0.3 and a standard deviation of 1, truncated to the range 0--4. Consequently, the simple-model priors typically favor more dust and are redder. We further show the redshift dependence of the priors in Figure~\ref{fig:priors}. As the redshift decreases, the prior distributions shift to the redder regime. The priors at different redshifts share the same blue end, implying that young galaxies are permitted even in the nearby universe. Overall, the redshift effects on the \gr{} and $\logML$ priors are not strong. However, we will show later that the observed redshift dependence of the color--\stellarML{} relationship is strong. \section{A hierarchical model for the color--\stellarML{} relationship} \label{sec:model} In this section, we construct a hierarchical Bayesian model to fit the relation between the rest-frame \gr{} color and the stellar mass-to-light ratio using the data described in Section \ref{sec:sample}. Since this work aims to understand the dependence of \stellarML{} on optical colors and the intrinsic scatter in this relation, in the HBM we parameterize the distribution of $\logML$ at a given \gr{} with hyperparameters $\brho$. The model computes posteriors on the population parameters $\brho$ from the individual galaxy fits $\btheta$, where $\btheta = \{\logML_i, \gr{}_i\}$. Our approach is similar to \citet{prospector-massfunc} in the context of modeling the stellar mass function (see also \citealt{Nagaraj2022} for HBMs on the relationship between dust attenuation and galaxy properties).
We employ the dynamic nested sampling package \texttt{dynesty} \citep{dynesty} to sample the model posteriors. We model the probability density function (PDF) of $\logML$ at a fixed \gr{} and redshift $z$ with a skewed generalized Student's t (sgt) distribution: \begin{flalign}\label{eq:ProbML} P(\btheta_i|\brho) &= \frac{p}{2 \sigma q^{1 / p} \beta\left(\frac{1}{p}, q\right)} \nonumber \\ &\times \left(\frac{|\logML_i-\mu|^p}{q \sigma^p(\lambda \operatorname{sgn}(\logML_i-\mu)+1)^p}+1\right)^{-(\frac{1}{p}+q)}, \end{flalign} where $\beta$ denotes the beta function and $\operatorname{sgn}$ is the sign function defined by \begin{equation} \operatorname{sgn} (x) = \begin{cases}-1 & \text { if } x<0, \\ 0 & \text { if } x=0, \\ 1 & \text { if } x>0.\end{cases} \end{equation} This sgt distribution has five parameters: $\mu$ is the mode of the distribution, $\lambda$ controls the skewness, $p$ and $q$ are the kurtosis parameters, and $\sigma$ sets the width of the distribution. We model the evolution of $\mu$, $\lambda$, $p$, and $q$ with quadratics in color, and, motivated by the redshift dependence in Figure~\ref{fig:priors}, we additionally include a linear dependence on redshift for the location parameter $\mu$, which accounts for potential shifts of the \gr{}--$\logML$ relation with redshift (e.g., \citealt{Szomoru2013}). \begin{equation}\label{eq:Parametrization} \begin{aligned} &\mu = a_0 + a_1 (\textsl{g}-r)_i + a_2 (\textsl{g}-r)_i^2 + a_3 z, \\ &\lambda = b_0 + b_1 (\textsl{g}-r)_i + b_2 (\textsl{g}-r)_i^2, \\ &p = c_0 + c_1 (\textsl{g}-r)_i + c_2 (\textsl{g}-r)_i^2, \\ &q = d_0 + d_1 (\textsl{g}-r)_i + d_2 (\textsl{g}-r)_i^2. \end{aligned} \end{equation} Here $a_{0,1,2,3}$, $b_{0,1,2}$, $c_{0,1,2}$ and $d_{0,1,2}$ are the polynomial coefficients. Together with $\sigma$, the model has 14 degrees of freedom to control the population-wide behavior of $\logML$ and \gr{}.
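As a concrete illustration, the sgt density of Equation~\ref{eq:ProbML} can be evaluated numerically. The sketch below is our own illustrative code, not the authors' implementation; the parameter names follow the conventions stated above.

```python
import numpy as np
from scipy.special import beta as beta_fn


def sgt_pdf(x, mu=0.0, sigma=0.1, lam=0.0, p=2.0, q=10.0):
    """Skewed generalized Student's t density (Equation 1, illustrative).

    x is log(M/L); mu is the mode, sigma the width, lam in (-1, 1) the
    skewness, and p, q (> 0) the kurtosis parameters.
    """
    x = np.asarray(x, dtype=float)
    # Normalization: p / (2 sigma q^{1/p} B(1/p, q))
    norm = p / (2.0 * sigma * q ** (1.0 / p) * beta_fn(1.0 / p, q))
    # Skewed core: |x - mu|^p / (q sigma^p (lam sgn(x - mu) + 1)^p) + 1
    core = (np.abs(x - mu) ** p
            / (q * sigma ** p * (lam * np.sign(x - mu) + 1.0) ** p)
            + 1.0)
    return norm * core ** (-(1.0 / p + q))
```

With $\lambda = 0$ the density reduces to a symmetric generalized Student's t; the density integrates to unity for any admissible parameter values.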
Such a flexible formalism is necessary for fitting the skewness and the heavy tails of the $\logML$ density distribution. Owing to the flexibility in the skew and kurtosis parameters, we can achieve a good fit with a fixed value of the variance parameter $\sigma$. This is also motivated by the shape of the observed distribution, where the scatter does not change significantly with color. The sgt distribution requires $-1<\lambda<1$, $p>0$, and $q>0$. In practice, we therefore reparametrize the quadratic coefficients $b_{0,1,2}$, $c_{0,1,2}$ and $d_{0,1,2}$ using the values of $\lambda$, $p$, $q$ at $(\textsl{g}-\textit{r}) = -0.3, 0.5, 1.3$, respectively. The reparameterization process is similar to Appendix B of \citet{prospector-massfunc}. Using anchor points such as $\lambda_{-0.3}, \lambda_{0.5}, \lambda_{1.3}$ instead of $b_{0,1,2}$ makes it easier to set the priors: we can satisfy the upper and lower limits of the sgt parameters directly by defining the proper prior ranges. The anchor points are selected to roughly cover the \gr{} range, and the choice of the specific anchor points should not alter the results. To summarize, our HBM has 14 parameters to describe the distribution of the \gr{}--$\logML$ relationship. We assume uniform priors for these hyperparameters: \begin{equation} \begin{aligned} a_0, a_1, a_2, a_3 &\sim \mathrm{Uniform}(-10, 10), \\ \lambda_{-0.3}, \lambda_{0.5}, \lambda_{1.3} &\sim \mathrm{Uniform}(-1, 1), \\ p_{-0.3}, p_{0.5}, p_{1.3} &\sim \mathrm{Uniform}(0, 200), \\ q_{-0.3}, q_{0.5}, q_{1.3} &\sim \mathrm{Uniform}(0, 1500),\\ \sigma &\sim \mathrm{Uniform}(0, 20). \end{aligned} \end{equation} In Appendix \ref{sec:appendixA} we generate mock data from this population model and validate that we can recover the distribution. Now that we have defined our model for the distribution of \stellarML{} at a fixed color, we describe how to write down the likelihood for our HBM.
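The anchor-point reparameterization above amounts to solving a small linear system: given the values of a shape parameter at the three anchor colors, the quadratic coefficients follow from the Vandermonde matrix of those colors. A minimal sketch, with illustrative function names:

```python
import numpy as np

# Anchor colors at which the sgt shape parameters are specified.
ANCHORS = np.array([-0.3, 0.5, 1.3])


def anchors_to_coeffs(values):
    """Convert parameter values at the three anchor colors into quadratic
    coefficients (e.g. b0, b1, b2 of Equation 3) by solving V @ c = values,
    where V is the Vandermonde matrix of the anchor colors."""
    V = np.vander(ANCHORS, 3, increasing=True)  # columns: 1, x, x^2
    return np.linalg.solve(V, np.asarray(values, dtype=float))


def coeffs_to_curve(coeffs, gr):
    """Evaluate the quadratic c0 + c1*(g-r) + c2*(g-r)^2."""
    return coeffs[0] + coeffs[1] * gr + coeffs[2] * gr ** 2
```

Sampling, e.g., $\lambda_{-0.3}, \lambda_{0.5}, \lambda_{1.3}$ uniformly in $(-1, 1)$ then guarantees the sgt constraints at the anchor colors by construction.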
The HBM is a straightforward extension of standard Bayesian analysis. Here, however, the input data are the posteriors from the individual galaxy fits, and the output is a model for the population distribution in \gr{} and $\logML$. By Bayes' theorem, \begin{equation} \posterior = \frac{ \likelihood P(\boldsymbol{\rho})} {P(\boldsymbol{D})}. \end{equation} Here, $\bD$ is the data vector, and $P(\bD)$ is a normalizing constant that can be ignored in our posterior sampling. $P(\brho)$ is the prior distribution. Let $\btheta_i$ represent the $(\logML_i, \gr{}_i)$ vector for galaxy $i$. We can therefore rewrite the likelihood $\likelihood$ in terms of $\btheta_i$: \begin{equation} \likelihood = \int d^N \btheta P(\bD | \left\{\btheta_1, \ldots, \btheta_N\right\}) P(\left\{\btheta_1, \ldots, \btheta_N\right\} | \brho), \end{equation} where $N$ is the total number of galaxies in our data. Because the fits are performed independently for each galaxy, \begin{equation}\label{eq:LikeIntegral} \begin{aligned} \likelihood &= \int d^N \btheta \prod_{i=1}^N P(\bD_i | \btheta_i) P(\btheta_i | \brho) \\ &= \prod_{i=1}^N \int d\btheta_i P(\bD_i | \btheta_i) P(\btheta_i | \brho), \end{aligned} \end{equation} where $P(\btheta_i | \brho)$ is the probability density of $\logML_i$ at $\gr{}_i$. Here our model probability density is weighted by the likelihood $P(\bD_i | \btheta_i)$ from the SED fits performed by \texttt{Prospector}. The model likelihood for each galaxy, $P(\bD_i | \brho)$, represents the weighted average of the model density distribution over all possible values of $\btheta_i$. To calculate the integral $P(\bD_i | \brho) = \int P(\bD_i | \btheta_i) P(\btheta_i | \brho) d\btheta_i$ from Equation~\ref{eq:LikeIntegral}, we estimate the likelihood $P(\bD_i | \btheta_i)$ from the posteriors and priors we compute during the SED fits in Section \ref{sec:sample}.
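In practice, the per-galaxy integral $P(\bD_i | \brho)$ is approximated by importance sampling over posterior draws, as detailed next in the text. A minimal sketch of the resulting log-likelihood, with illustrative names (`prior_pdf` and `model_pdf` stand in for the SED-fitting prior and the sgt population density; this is not the authors' code):

```python
import numpy as np


def hbm_log_likelihood(samples, prior_pdf, model_pdf, rho):
    """Monte Carlo estimate of ln L = sum_i ln( sum_j w_ij P(theta_ij | rho)
    / sum_j w_ij ), using per-galaxy posterior draws.

    samples   : list of (m, d) arrays of posterior draws, one per galaxy
    prior_pdf : callable returning the SED-fit prior density of each draw
    model_pdf : callable returning P(theta | rho), e.g. the sgt density
    """
    lnlike = 0.0
    for theta_i in samples:
        w = 1.0 / prior_pdf(theta_i)   # importance weights (inverse prior)
        p = model_pdf(theta_i, rho)    # population density at each draw
        lnlike += np.log(np.sum(w * p) / np.sum(w))
    return lnlike
```

Dividing out the prior is what allows the population model to replace the original per-galaxy prior without double-counting it.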
We draw $m$ samples $\left\{\btheta_{i, 1}, \ldots, \btheta_{i, m}\right\}$ from the posterior $P(\btheta_i | \bD_i)$ of every galaxy. We choose $m=50$ samples; such a sample size is enough to resolve the posteriors while allowing the fit to remain computationally tractable. We assign each posterior sample an importance weight: \begin{equation} w_{i, j}=\frac{1}{P(\btheta_{i, j})}, \end{equation} where $P(\btheta_{i, j})$ is the marginalized prior on $\logML_{i,j}$ shown in Figure~\ref{fig:priors}. Accordingly, the model likelihood for each galaxy can be expressed in terms of $P(\btheta_{i, j} | \boldsymbol{\rho})$ from Equation~\ref{eq:ProbML} and $w_{i, j}$: \begin{equation} P(\bD_i | \brho) \approx \frac{\sum_{j=1}^m w_{i, j} P\left(\btheta_{i, j} | \brho\right)} {\sum_{j=1}^{m} w_{i, j}}. \end{equation} Therefore, our full log-likelihood becomes \begin{equation} \ln \likelihood \approx \sum_{i=1}^N \ln \left(\frac{\sum_{j=1}^m w_{i, j} P(\btheta_{i, j} | \brho)} {\sum_{j=1}^{m} w_{i, j}}\right). \end{equation} \section{Model results} \label{sec:result} While Figure~\ref{fig:colorpriors} shows that our model priors predict a tight relation between the optical color and $M_*/L$, in this section we use the real data to determine whether the observed relationship is similarly tight. We first compare the \gr{}--$\logML$ relationships derived from the simple model and from the sophisticated \prospector{} model as described in Section \ref{sec:sample}. We then explore the origin of their \stellarML{} differences and color differences. Finally, we apply the new color--\stellarML{} relationship from the \prospector{} HBM fit as a prior and show new color and \stellarML{} estimates. \subsection{Relation Between \gr{} and the Stellar $M/L$ Ratios} \label{sec:HBMrelation} Figure~\ref{fig:posteriors} shows the \gr{}--$\logML$ relation for the simple and \prospector{} models.
We present the average relationships from the priors, the individual SED fits, and the distribution derived from our HBM. We marginalize the HBM over redshift when we project the 3D model distribution onto the 2D \gr{}--$\logML$ plane. Table~\ref{tab:modelparams} lists the posteriors of the 14 parameters of the sgt distribution used to describe the distribution of $\logML$ at fixed color. The model parameters are overall well constrained, except for the quadratic parameters of the kurtosis $q$. To verify that our HBM fits the data well, we compare the \gr{}--$\logML$ posterior distribution from the HBM and the sum of the SED-fitting posteriors of the entire sample for the \prospector{} model in Figure~\ref{fig:residuals}. Because the HBM input is the SED-fitting likelihoods for the whole sample, we need to compare the full posterior distribution instead of point estimates, which do not represent the correlated, high-dimensional posterior distribution on redshift, \gr{}, and $\logML$. It is challenging to define a ``residual'' between the data, i.e., the SED-fit posteriors, and a best-fit HBM, nor does the HBM aim to minimize any kind of residual. We process the HBM posterior distribution in two steps to make it comparable to the data. First, we weight the HBM posteriors by the redshift and \gr{} distribution of the data to obtain the marginal 1D $\logML$ distribution at a given redshift and color. Second, we convolve the weighted HBM posteriors with the median uncertainties of the observed \gr{} and $\logML$ in each bin of Figure~\ref{fig:residuals} using Gaussian kernels. The final product is shown in the left column of Figure~\ref{fig:residuals}. In the right column of Figure~\ref{fig:residuals}, we use the fractional difference between the model posteriors described above and the SED-fitting posterior sum to diagnose how well the HBM fits the data.
We observe a close match between the model and data density where most of the data reside, as shown by the light-yellow color. This result suggests that our HBM works as expected: since the population model is constrained by the observed galaxies, it should agree with them in the region where most of them reside. The difference is large where there is little data, especially at the upper edge of the relationship, where the HBM posterior has a much higher density (the deep blue region) than the data posterior sum. The comparison indicates that our HBM describes the main trend of the \gr{}--$\logML$ relationship well but is not able to capture the behavior of some outliers. In Figure~\ref{fig:posteriors}, we find that the two physical models have similar average relationships from the HBM (see the red and magenta lines in the right column). The simple relationship, however, has a larger curvature: its slope is steep for blue galaxies and then flattens for red galaxies. In contrast, \prospector{} has a roughly constant slope for all galaxies. Based on Equation~\ref{eq:Parametrization} and the median posteriors of $a_{0,1,2,3}$ in Table~\ref{tab:modelparams}, the mode of the simple relationship is \begin{equation}\label{eq:simplerelationship} \logML = -0.891 + 2.068(\textsl{g}-r) - 0.503(\textsl{g}-r)^2 - 0.019 z; \end{equation} and the mode of the \prospector{} relationship is \begin{equation}\label{eq:prospectorrelationship} \logML = -0.659 + 1.541(\textsl{g}-r) + 0.149(\textsl{g}-r)^2 - 0.121 z. \end{equation} Readers may use these equations to estimate the \gr{}--$\logML$ relationship from this work. Note that the \gr{} color here refers to the median of the SED-fitting posteriors (see Section~\ref{sec:deltagr} for a discussion of different methods of color measurement). We report the mode of the relationship instead of the mean to show where most galaxies lie.
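For convenience, the two mode relationships (Equations~\ref{eq:simplerelationship} and \ref{eq:prospectorrelationship}) can be evaluated directly. The function names below are our own; the coefficients are transcribed from the equations above:

```python
def logml_simple(gr, z):
    """Mode of the simple-model (g - r) -- log(M/L_g) relation (Eq. 9)."""
    return -0.891 + 2.068 * gr - 0.503 * gr ** 2 - 0.019 * z


def logml_prospector(gr, z):
    """Mode of the Prospector-alpha relation (Eq. 10).

    gr is the rest-frame (g - r) color (median of the SED-fitting
    posteriors); the relation is untested at z < 0.5.
    """
    return -0.659 + 1.541 * gr + 0.149 * gr ** 2 - 0.121 * z
```

For example, a galaxy with $(\textsl{g}-\textsl{r}) = 0.5$ at $z = 1$ has a \prospector{} mode of $\logML \approx 0.03$, versus $\approx 0.00$ from the simple relation.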
The mean and mode typically differ by around 0.1\,dex because the $\logML$ distribution is skewed at a given \gr{}. We caution that our relationship is untested at $z < 0.5$. Also, in Section~\ref{sec:addri} we provide a tentative way to reduce the scatter of the relationship using an additional \ri{} color (Equation~\ref{eq:ri-residual}). At fixed color, both models show a distribution increasingly skewed toward lower $\logML$. This trend is strongest for blue galaxies, with a $\lambda$ parameter closer to $-1$ (see Table~\ref{tab:modelparams}). Because the distribution is skewed, it is more challenging to estimate the \stellarML{} of blue galaxies, and estimates can be strongly biased if the skew is not accounted for. The most notable difference between the two SED models is the scatter around the average relation. The dispersion of the relationship is significantly larger for the \prospector{} model, as is evident from the larger posterior estimates of the $\sigma$ parameter in Table~\ref{tab:modelparams}. As we introduced in Section \ref{sec:introduction}, the relationship exists in the first place because the effects of dust, age, and metallicity cancel out: variations in these parameters move galaxies more or less parallel to the main relationship. Nevertheless, our results suggest that the net effect of these variations can alter the direction of the change in the \gr{}--$\logML$ plane when we consider a more realistic model. Hence we need to be careful about using a simple linear relation to predict $\log M/L$ ratios from optical colors. A distinctive feature of the simple SED fits is the hook at the bluest end. This is due to the similar SFHs of the bluest galaxies in the simple fits, marked by large $t_\mathrm{f}/t_\mathrm{age}$ ratios. As found by \citet{Bell&deJong2001}, recent bursts of star formation bias \stellarML{} to low values.
The exponentially declining SFH suffers strongly from the ``outshining'' effect \citep{Papovich2001, Maraston2010}, in which young, bright stars outshine the old, dim ones in the emitted light. As a result, the exponentially declining SFH only characterizes the bulk of the most recent star formation and is not sensitive to underlying old populations in galaxies with recent bursts. In this sense, the simple fits may make old star-forming galaxies look like young star-forming galaxies. The lack of flexibility of the SFH model biases the \gr{}--$\logML$ relationship at the bluest colors. The principal point to be made here is that we cannot constrain the \gr{}--$\logML$ relation with simple exponentially declining SFHs. The hook at the bluest color also makes the simple fits difficult to model. As shown in Figure~\ref{fig:posteriors}, our median relation from the HBM is higher than the SED fits for these blue galaxies, since the HBM causes regression toward the mean. The diagonal edge of the simple fits is due to the lower bound of the $t_\mathrm{age}$ prior: the youngest age allowed in the simple model is 10\,Myr.
\begin{deluxetable}{ccc} \tablenum{1} \tablecaption{1-$\sigma$ Posteriors for the hierarchical model parameters}\label{tab:modelparams} \tablewidth{0pt} \tablehead{ \colhead{Parameter} & \colhead{Simple} & \colhead{\prospector{}} % } \startdata $a_0$ & $-0.8912^{+0.0004}_{-0.0004}$ & $-0.659^{+0.002}_{-0.002}$\\ $a_1$ & $2.068^{+0.004}_{-0.003}$ & $1.541^{+0.008}_{-0.006}$\\ $a_2$ & $-0.503^{+0.004}_{-0.005}$ & $0.149^{+0.008}_{-0.010}$\\ $a_3$ & $-0.0190^{+0.0002}_{-0.0002}$ & $-0.121^{+0.001}_{-0.001}$\\ $\lambda_{-0.3}$ & $-0.999^{+0.002}_{-0.001}$ & $-0.989^{+0.006}_{-0.006}$\\ $\lambda_{0.5}$ & $-0.450^{+0.006}_{-0.006}$ & $-0.451^{+0.005}_{-0.005}$\\ $\lambda_{1.3}$ & $0.995^{+0.003}_{-0.006}$ & $-0.330^{+0.027}_{-0.020}$\\ $p_{-0.3}$ & $0.413^{+0.005}_{-0.005}$ & $1.259^{+0.063}_{-0.123}$\\ $p_{0.5}$ & $0.604^{+0.012}_{-0.015}$ & $0.826^{+0.022}_{-0.036}$\\ $p_{1.3}$ & $0.372^{+0.010}_{-0.011}$ & $0.512^{+0.007}_{-0.007}$\\ $q_{-0.3}$ & $79.267^{+13.187}_{-5.987}$ & $0.360^{+0.088}_{-0.088}$\\ $q_{0.5}$ & $5.508^{+0.900}_{-0.544}$ & $19.653^{+5.260}_{-4.332}$\\ $q_{1.3}$ & $139.118^{+27.817}_{-14.611}$ & $77.119^{+23.018}_{-19.031}$\\ $\sigma$ & $0.0100^{+0.0005}_{-0.0006}$ & $0.062^{+0.003}_{-0.004}$\\ \enddata \end{deluxetable} \subsection{Redshift Evolution of the Color-$M/L$ Relationships} In addition to the color dependence discussed above, our HBM parameterizes the redshift evolution, prescribed by the $a_3$ parameter (see Table~\ref{tab:modelparams}). The bottom panel of Figure~\ref{fig:zevolv} demonstrates the redshift evolution of the model relationships from $z=1$ to $z=3$. The simple model has no perceptible redshift evolution. Conversely, as the redshift increases, the \prospector{} relationship shifts downwards. At given \gr{}, low-$z$ galaxies have higher \stellarML{} than those of high-$z$ galaxies because the stars in these galaxies are on average older. 
The slope of the \prospector{} relationship does not vary as a function of redshift, which means the redshift evolution produces a similar effect for galaxies at all colors. The strong redshift dependence of the \prospector{} \gr{}--$\logML$ relationship matches the simple expectation from passive stellar aging: as the universe ages, the average age of the stars increases, and so does their $M_*/L$. The simple relationship agrees with the \prospector{} relationship at the blue and red ends at $z=3$, but agrees with the main locus of the \prospector{} relationship at $z=1$. The weak redshift dependence of the simple model implies that it does not reproduce the simple expectation that stars in galaxies get older as the universe gets older. This is because the exponentially declining SFH only models the light from the bright stars and is not sensitive to old stellar populations, as we discussed in Section~\ref{sec:HBMrelation}. Previous studies such as \citet{ALHAMBRA2019} found no change in the relationship with $z$, which is also likely due to their use of parametric SFHs. \subsection{What Drives the \stellarML{} Offset Between the Two Models} \label{sec:deltaML} So far we have shown how the optical color--\stellarML{} relationships derived from the two SED models differ. According to \citet{prospector-massfunc}, \prospector{} yields among the highest \stellarML{} of various SED models when fitting photometry, and this remains true when fitting spectra \citep{Tacchella2022}. The next question is which physical properties drive the differences between the models. Much of the complexity in interpreting the relationship comes from the fact that the model parameters are correlated with each other. To disentangle the degeneracies quantitatively and identify the most relevant driver of the higher inferred \stellarML{} values, we exploit the feature importance measurement provided by random forests (RFs).
We train an RF to predict the $\logML$ difference between the \prospector{} model and the simple model. We use the {\sl scikit-learn} package \citep{Pedregosa2011scikit} to construct an RF model of 1000 trees. We adopt 80\% of our sample as the training set and 20\% as the test set. The training features are extracted from the SED fits, and include the model differences in \gr{}, the diffuse dust optical depth $\hat\tau_2$, the total-formed-mass-weighted age $t_\textrm{avg}$, and the SFR over the last 100\,Myr $\textrm{SFR}_\textrm{100\,Myr}$, as well as the \prospector{} measurements of \gr{}, $\hat\tau_2$, $t_\textrm{avg}$, the specific star formation rate $\textrm{sSFR}_\textrm{100\,Myr}$, the stellar metallicity, and the gas-phase metallicity. The RF regression is evaluated with an accuracy score that reflects the goodness of fit, with 1 being a perfect prediction. Our selected features predict $\Delta \logML$ for the test set with a mean accuracy score of 91\% and a 0.03\,dex root-mean-square error. The prediction accuracy is even better if we only consider high signal-to-noise ratio ($S/N$) galaxies. This suggests that we have included sufficient training features. Figure~\ref{fig:rf} shows the importance of each input feature, computed from the reduction in model performance when that feature is dropped. Figure~\ref{fig:rf} immediately demonstrates that the difference in mass-weighted age is the primary driving factor for the $\logML$ offsets. $t_\textrm{avg}$ is far more predictive than the other features, with a relative importance of 35\%. This means that the \prospector{} fits have systematically higher $\ML$ ratios mostly because they are older, as a result of using nonparametric SFHs (see \citealt{Lower2020} for a discussion of the $M_*$ estimates from different SFHs). This agrees with the redshift trend in Figure~\ref{fig:zevolv}.
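A minimal sketch of this RF setup with {\sl scikit-learn} and an 80/20 split is shown below. The feature names are placeholders; for brevity we rank features with the impurity-based \texttt{feature\_importances\_}, whereas the drop-out importance described above corresponds to \texttt{sklearn.inspection.permutation\_importance}. This is an illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split


def rank_features(X, y, names, n_trees=1000, seed=0):
    """Fit an RF to predict Delta log(M/L) and rank feature importances.

    X : (n_galaxies, n_features) array of SED-fit features
    y : Delta log(M/L) between the two models
    Returns the held-out R^2 score and features sorted by importance.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed)      # 80/20 split
    rf = RandomForestRegressor(
        n_estimators=n_trees, random_state=seed, n_jobs=-1)
    rf.fit(X_tr, y_tr)
    score = rf.score(X_te, y_te)                     # R^2 on the 20% test set
    order = np.argsort(rf.feature_importances_)[::-1]
    return score, [(names[i], rf.feature_importances_[i]) for i in order]
```

The first entry of the returned ranking then plays the role of the dominant driver ($t_\textrm{avg}$ in our analysis).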
The bulk of the difference between the model relationships comes from galaxies at lower redshifts, because the age differences permitted there are larger \citep{2019ApJ...877..140L}. The principal role of age is also supported by studies based on SPS libraries. For example, \citet{Into2013} demonstrated that both colors and $M/L$ ratios increase continuously with SSP age regardless of the metallicity. Other factors have smaller influences than $t_\textrm{avg}$ on the $\Delta \logML$ estimates, with the dust index being the second most important feature. The dust index is a power-law modification to the shape of the \citet{Calzetti2000} dust attenuation curve \citep{Noll2009}, where the Calzetti law has a dust index of 0. Galaxies with a larger dust index have a flatter curve, i.e., less attenuation in the UV and more attenuation in the NIR. The dust index does not directly affect $L_\textsl{g}$, because $\hat\tau_2$ is measured within the $\textsl{g}$ band, but it will influence $M_*$, and thus $M_*/L_\textsl{g}$. As the NIR SED closely traces the bulk of $M_*$ from the old, low-mass stars, a large dust index permits galaxies to add a substantial amount of mass through increased attenuation in the NIR without changing their optical colors \citep{Salmon2016, Malek2018}. We find that the dust index is the most distinguishing factor for red galaxies when we repeat the RF analysis for the red subsample. Since dusty galaxies on average have a flatter dust attenuation curve than galaxies with little dust \citep{Chevallard2013, dustlaw2020review}, having a free dust index allows red galaxies to be either dusty star-forming galaxies with flatter dust attenuation curves (e.g., ultraluminous infrared galaxies) or nearly dust-free quiescent galaxies with steeper curves. In contrast, the simple model assumes a zero dust index for all galaxies. The distinct dust indices between the two models lead to large $\logML$ offsets at the red end. The sSFR also has a second-order effect on $\Delta \logML$.
The simple fits do not model dust emission in the IR, even though the IR emission traces recent star formation. This generally biases the SFRs to lower values, since dust-obscured star formation is preferentially missed \citep{Wuyts2011}. Our results confirm that the $\logML$ offsets are only marginally influenced by metallicity, especially the gas-phase metallicity. \subsection{Color Systematics}\label{sec:deltagr} The results discussed so far concern one component of the relationship, the mass-to-light ratio. The other part, the rest-frame optical color, is usually considered a well-measured quantity. Nevertheless, systematics arise when we convert the observed photometry at different redshifts to rest-frame colors. In our work we synthesize colors from the models generated during SED fitting. Some studies and surveys adopt K-corrections to convert the observed-frame photometry in one given band to the desired rest-frame photometry in the same or a different band (e.g., \citealt{Hogg2002}; \citealt{Blanton2007}). K-correction from an observed band to its rest-frame counterpart is an efficient way to estimate colors at low redshift, since the bandpass has shifted very little in wavelength. At higher redshift, the dependence of same-band K-corrections on the galaxy SED increases, although some surveys mitigate this by using the observed bandpass closest to the redshifted rest-frame bandpass. K-corrections are usually calculated by fitting the observed photometry with linear combinations of a limited number of SED templates (\citealt{Blanton2007}; some works instead use a single best-fit SSP to calculate K-corrections, \citealt{Chilingarian2010}). Although the calculation of K-corrections still involves SED fitting at some level, the resulting color is likely to be less model-dependent than one from full SED fitting. Unlike K-corrections, however, colors from SED fitting can easily be calculated in a consistent fashion for galaxies at any redshift.
Also, SED fitting infers other galaxy properties beyond colors, including ages, metallicities, dust properties, and detailed SFHs. In Figure~\ref{fig:colordiff}, we contrast the median model rest-frame colors and spectra of several example galaxies between the two SED model fits. The typical color differences between the models are around 0.1\,mag, with the \prospector{} results systematically redder. In the upper left panel, we show galaxies with typical color differences, where the corresponding $\textsl{g}$-band and $\textsl{r}$-band fluxes are shown as triangles. In the upper right panel, we show examples of galaxies with the most extreme color differences (all $>0.5$\,mag). The comparison makes it clear that the color offsets between the two models are mostly due to emission lines. The simple model does not include nebular emission. In particular, if a galaxy has bright UV fluxes and shows no prominent Balmer break, the simple model cannot fit the data well, because the Balmer break is its only sharp spectral feature. Conversely, \prospector{} is more flexible in modeling observed photometry with large variations. It will add strong emission lines in this situation, which produces a lower $\chi^2$ value when fitting the photometry as compared to the simple model (see the upper right panel of Figure~\ref{fig:colordiff}). Combining this with the discussion in the last section: the median 0.12\,dex shift in $\logML$ due to nonparametric SFHs and the median 0.06\,mag shift in \gr{} due to nebular emission together move the average \prospector{} relationship roughly parallel to the average simple relationship. Remarkably, the galaxies with the extreme color differences in Figure~\ref{fig:colordiff} all show an inverse Balmer break. The inverse Balmer break comes from strong nebular continuum emission.
This feature is uncommon for galaxies at lower redshifts but more likely to be prevalent in the early universe, marked by very high SFRs, strong nebular emission, and very low metallicities. Our finding suggests that a subsample of \hst{} galaxies are candidates for being extreme emission line galaxies (EELGs, e.g., \citealt{Maseda2018}). We examine the H$\alpha$ equivalent width (EW) for the 20 galaxies with the most extreme color differences. All of them have H$\alpha$ EW $\gtrsim 400$\,\AA, high sSFR, and low metallicity, consistent with the properties of observed EELGs. \subsection{Shrinkage Effects in the HB method}\label{sec:HBMshrinkage} Here we will discuss a key feature of HBMs called shrinkage -- the reduction in posterior width that occurs when applying the population model as a prior to individual fits. % In our problem, the shrinkage estimators introduce a decrease in variance around the average \gr{}--$\logML$ relationship. Because the simple model is restrictive and its curvature at the bluest colors makes it hard to model, we focus on the \prospector{} model for the shrinkage analysis in this section. We calculate the shrinkage of every galaxy by multiplying the individual SED-fit posterior $P(\btheta_i|\bD)$ by a factor $P(\brho_i|\bD)/P(\btheta_i)$, where $P(\brho_i|\bD)$ is the HBM posterior at this galaxy's redshift and $P(\btheta_i)$ is the prior probability we show in Figure~\ref{fig:priors}. In this way, we are reapplying the HBM posterior as a prior to the individual likelihoods from SED fitting. Figure~\ref{fig:shrinkage} presents the shrinkage effects on the \prospector{} model. We compare the posterior mean of each galaxy before and after shrinkage, where the original SED fits are plotted in black and the data after shrinkage are in green. A close inspection shows a more concentrated posterior distribution after shrinkage. 
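In practice, this reweighting can be applied directly to posterior samples: each sample's weight is multiplied by the ratio of the HBM population posterior to the original prior. A toy one-dimensional sketch with Gaussian stand-ins (all distributions and numbers are invented for illustration, not the paper's actual posteriors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical individual SED-fit posterior samples of log(M/L) for one galaxy.
theta_samples = rng.normal(loc=-0.6, scale=0.3, size=20000)

def log_gauss(x, mu, sigma):
    """Log of a normal pdf."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Wide prior used in the SED fit, and a narrower HBM population posterior
# at this galaxy's redshift (both Gaussians are made-up stand-ins).
logp_prior = log_gauss(theta_samples, mu=0.0, sigma=1.0)
logp_hbm   = log_gauss(theta_samples, mu=-0.3, sigma=0.15)

# Shrinkage: reweight by P(rho|D) / P(theta), i.e. swap the original
# prior for the HBM population posterior.
log_w = logp_hbm - logp_prior
w = np.exp(log_w - log_w.max())
w /= w.sum()

mean_before = theta_samples.mean()
mean_after = np.sum(w * theta_samples)
# The reweighted (shrunk) mean moves toward the population mean.
```

The shrunk estimate lands between the individual fit and the population mean, weighted by their relative widths, which is exactly the tightening seen in Figure~\ref{fig:shrinkage}.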
The model shifts most galaxies with high $\logML$ toward the median relationship. We compare the shrinkage effects between galaxies with different $S/N$ ratios in different panels. The full sample is divided into two subsamples using $S/N = 19$ as a cut, which gives a roughly equal number of galaxies in each subsample. It is clear that the individual fits experience stronger shrinkage in the $S/N < 19$ subsample, where the galaxies are dimmer. Because such faint galaxies have a smaller effect on the inferred hyperparameters, and we assume the faint galaxies are distributed similarly to the bright galaxies, we are borrowing information from bright galaxies when we perform hierarchical shrinkage on the properties of faint galaxies. This demonstrates a strength of the HBM: it provides a framework whereby a subset with high-quality data explicitly benefits the full sample. \section{Discussion} \label{sec:discussion} Here we have learned that the chosen SED model has a large effect on the optical color, $M_*/L$, and the relationship between the two. Now we will discuss three outstanding issues. First, we will make a comparison to the color--\stellarML{} relationships in other work. Second, we will discuss the implications of the HBM shrinkage. Finally, we will explore the possibility of constraining a much tighter relationship with an additional color \ri{}. \subsection{Comparison to Other Color--\stellarML{} Relationships} The \prospector{} model predicts higher galaxy masses and smaller SFRs than other SED models \citep{prospector-massfunc}, and we expect it to also produce a different color--\stellarML{} relationship. In Figure~\ref{fig:comparison}, we compare the median \prospector{} $(\textsl{g}-\textsl{i})$--$\log M/L_\textsl{i}$ relationship to two empirical relationships and two theoretical relationships from other works. We switch to $(\textsl{g}-\textsl{i})$ and $\log M/L_\textsl{i}$ for consistency with these works. 
We follow the same procedure as in Section \ref{sec:model} to calculate the distribution of $\log M/L_\textsl{i}$ at given $(\textsl{g}-\textsl{i})$ by fitting the HBM. We have accounted for the offsets due to the choice of different IMFs. % The relationships from \citet{Zibetti2009} and \citet{Into2013} are derived from SPS libraries. \citet{Zibetti2009} estimate their relationship by marginalizing over a Monte Carlo library of SPS models, with an SFH composed of an exponentially declining continuum and random SF bursts. \citet{Into2013} use the Padova isochrones to show the trends in $M/L$ ratios and colors as a function of ages, metallicities, and SFH birthrate parameters. These relationships depend entirely on the physical model and should in fact be compared to our priors. Similar to the priors in Figure~\ref{fig:colorpriors}, these relationships have steeper slopes than our relationship from observational data. The empirical relationships from \citet{Bell2003} and \citet{Taylor2011GAMA} are more aligned with our relationship. \citet{Bell2003} fit the relationship using data from the Sloan Digital Sky Survey (SDSS) and the Two Micron All Sky Survey (2MASS). They use an exponentially declining SFH and evolve their dust-free relationship to $z=0$, where they assume the galaxies are 12\,Gyr old. Our offset from their relationship can be partly explained by the inclusion of dust, and by the difference in age due to our nonparametric SFH and higher central redshift (see redshift evolution in Figure~\ref{fig:zevolv}). % \citet{Taylor2011GAMA} obtain their relation using intermediate-redshift ($z<0.65$) galaxies in the Galaxy And Mass Assembly (GAMA) survey, also with an exponentially declining SFH. Their relationship is the closest to ours. The \citet{BC2003} SPS models that they choose can introduce a $\sim$0.05\,dex offset in $M_*$ with respect to the FSPS SPS models used in \prospector{} \citep{prospector-massfunc}. 
The relationships from the literature in Figure~\ref{fig:comparison} are all based on parametric SFHs. We have argued that nonparametric SFHs are crucial to providing the flexibility needed to model \stellarML{} estimates for blue galaxies. Recently, \citet{Ge2021} also use nonparametric SFHs for SED fitting to study the pixel-by-pixel color--$M/L$ relationship of MaNGA galaxies. However, their $M/L$ estimates are systematically higher than ours, and we speculate that this is because individual pixels are allowed to have a more bursty SFH. Since galaxies represent the integrated light from many pixels, the bursty features of individual pixels are likely smoothed out. Also note that we assume a continuity SFH prior from \citet{prospector-massfunc}, while they assume a much burstier prior from \texttt{PPXF} \citep{PPXF2017}. Therefore, our relationship cannot be compared to theirs directly; this highlights the strong effect of the nonparametric SFH priors on the inferred SFH. % To summarize, there are many ways that the \prospector{} color--\stellarML{} relationship may differ from the other empirical results. First, \prospector{} is a sophisticated physical model that adds nebular emission and dust emission, and uses a nonparametric SFH. All of these model assumptions may affect the relationship. As we have highlighted in Section~\ref{sec:result}, the nonparametric SFH shifts the whole relationship as the stellar population ages. This means that if our sample is centered at a different redshift than other samples, the resultant average relationship will be different. Second, each work assumes different model priors. Consequently, the parameter space covered in our (other) model may not be allowed in other (our) models. For example, the \citet{BC2003} SPS model covers a wider range of metallicities than ours, which can lead to smaller \stellarML{} for blue galaxies because they are allowed to have lower metallicity. 
Despite the differences between our color--\stellarML{} relationship and others, we do not know which relationship is correct. Our updated relationship between color and \stellarML{} can potentially impose constraints on galaxy dynamical masses ($\Mdyn$). The difference between dynamical and stellar masses often depends on how far from the galaxy center we probe. If we have spatially resolved data to take the galaxy radius into account explicitly, we may correlate the $\Mdyn/L$ with the \stellarML{} of a galaxy (e.g., \citealt{Taylor2010}; \citealt{deGraaff2021}). % For example, \citet{Taylor2010} showed that the ratio of $M_*$ and $\Mdyn$ is independent of stellar population parameters and observables such as apparent magnitude and redshift after excluding the effect of galaxy structure. A few studies observe that a strong empirical relationship exists between $\log \Mdyn/L$ and color (e.g., \citealt{van-de-Sande2015}), similar to that with $\log M_*/L$. Since $\Mdyn$ is composed of stellar, gas, and dark matter components, our \stellarML{} relationship can set a lower limit on $\Mdyn/L$. Specifically, the relationship can potentially put radius-dependent constraints on $\Mdyn/L$ at high redshift, where galaxy kinematic studies are challenging. Note that there is much complexity in the interpretation of dynamical masses, such as the galaxy inclination and the mass--anisotropy degeneracy, and therefore one needs to be cautious about linking $\Mdyn$ with $M_*$. \subsection{Learning New SED-fitting Priors from the HBM Shrinkage} While \prospector{} likely allows more freedom in its parameters than galaxies actually occupy, the HBM provides us with one way to update the SED model to make it better describe the real data. A key aspect of the population model is to link galaxies of the same population and facilitate predictions of unobserved galaxies of this population. 
Potentially, we can infer from the shrinkage estimates how we should tune the priors to make them better fit the underlying population distribution. Deriving new priors from the shrinkage may not be a straightforward process \citep{Loredo&Hendry2019}, but understanding the HBM shrinkage is a first step toward building better priors. Figure~\ref{fig:prior_shrinkage_comparison} shows the probability density function of the priors, the median posteriors from individual SED fits, the median estimates of every galaxy after HBM shrinkage in the \gr{}--$\logML$ plane, and the difference in $\logML$ between the median after shrinkage and the median after SED fitting for each galaxy. For the majority of galaxies, the shrinkage estimates are more tightly constrained to the average relationship than the individual SED fits. A large amount of shrinkage is observed for the blue galaxies, where the HBM increases their extremely low $\logML$ values. The shrinkage analysis suggests that most galaxies do live within a narrower range than our priors, especially blue galaxies. If we apply this tighter color--$M/L$ relationship from the HBM to future photometric surveys like the Legacy Survey of Space and Time (LSST), it may enable a more accurate selection of color and redshift for the mass range covered than those from the current model priors (see Figure~\ref{fig:priors}). Despite the significant shrinkage at the blue end and in the middle color range shown in Figure~\ref{fig:prior_shrinkage_comparison}, we observe an opposite effect for some red faint galaxies. For these outliers, the HBM pushes them even farther from the average by lowering their $\logML$. This is because the prior is too strong and washes out the effect of the HBM posterior. The different results for blue and red galaxies from the shrinkage analysis imply that our assumption that faint and bright galaxies are drawn from the same population may not hold, and that we may need more population components in our model. 
% In order to test the efficacy of our model on specific galaxy subpopulations, it will be important to have deeper, higher spectral resolution data across the entire color range or a larger sample size. The shrinkage analysis also supports our choice of the population model for the distribution of galaxies in the optical color--$\log M_*/L$ plane. We choose a flexible functional form to model the population distribution, as described in Section \ref{sec:model}. The purpose of using a flexible model is to avoid biases introduced by model choices. Adding free parameters to the population model will usually lead to greater shrinkage toward the average relationship as the population model tries to avoid overfitting. Figure~\ref{fig:prior_shrinkage_comparison} only shows a moderate amount of reduction in scatter. Given that the amount of shrinkage is determined by $P(\brho_i|\bD)/P(\btheta_i)$, if galaxies in fact lived far from the SED model predictions or the population model were unnecessarily complex, the amount of shrinkage would be much greater. In addition to learning from data using the hierarchical model, numerical simulation is another approach to building more informed priors \citep{Pacifici2013}. However, galaxy evolution is very complicated, and simulations need to consider many ingredients, such as the interstellar medium, dark matter halos, active galactic nuclei, stellar feedback, gas cooling, and magnetic fields. It is a challenge to ensure that all the physical parameters we are interested in are well predicted in simulations \citep{Somerville&Dave2015}, especially when considering different subgrid physics (e.g., \citealt{Crain2015}). As a result, each galaxy formation model encodes a different set of SFHs. What we learn from the simulations will vary widely depending on which simulation we choose. 
For example, the predicted SFH power spectral densities show a large diversity among simulations \citep{Iyer2020}, and $M_*$ from SED fitting shows systematic offsets from the true mass of the simulated stellar particles \citep{Lower2020}. Unlike simulations, the HBM guarantees that we learn how to update our SED model to describe the data better, but it suffers from observational uncertainties and degeneracies. We can utilize the observations and simulations together to better constrain the SED priors. \subsection{Constraining the \stellarML{} with Two Colors}\label{sec:addri} So far our discussion has been focused on inferring \stellarML{} from one optical color, but is it possible to significantly reduce the uncertainty of \stellarML{} estimates with one or two more colors? In principle, the derived \stellarML{} will certainly be closer to the SED-fit results when we use more colors, or equivalently, more photometry, as this is what the SED models are conditioned upon. Our question is whether there is one color that provides significant added information on the $\ML$ in addition to \gr{}, so that the combination of the two colors can constrain a much tighter relationship with $\ML$. We calculate rest-frame colors from 10 filters in the optical and NIR, including \textsl{u, g, r, i, z, J, H, K, IRAC1, IRAC2}. We examine whether any of them are correlated with the $\logML$ residuals between individual SED fits and the HBM \gr{}--$\logML$ relationship from Equation~\ref{eq:prospectorrelationship}. Of these colors, we find that \ri{} has the strongest correlation with the residuals (see the left column of Figure~\ref{fig:ri}); the Pearson correlation coefficient is 0.44. This correlation indicates that including \ri{} may help reduce the residuals of $\logML$ estimates. We perform a linear regression between \ri{} and the residuals of $\logML$ estimates. Readers may use the following relation to evaluate an \ri{}-dependent correction to the $\logML$ estimates from our HBM: 
\begin{equation}\label{eq:ri-residual} \logML~\textrm{residuals} = -0.11 + 0.40 (\textsl{r}-\textsl{i}). \end{equation} Note that this relation has not been tested through our full HBM machinery. As a simple test to quantify the improvement of \stellarML{} estimates after adding \ri{}, we perform two linear fits on the relationship between color(s) and $\logML$ using the median posteriors from SED fits. In one of the linear models, $\logML$ has a linear dependence on \gr{} and redshift, and in the other model, $\logML$ depends on \gr{}, \ri{}, and redshift. The two models predict a similar average relationship between \gr{} and $\logML$, while the model with \ri{} has a smaller scatter around the regression line. The right column of Figure~\ref{fig:ri} shows the median and 1$\sigma$ range of the $\logML$ residuals of the linear fits. The linear model with two colors has residuals closer to 0 and a smaller scatter of $\sim$0.15\,dex, compared to $\sim$0.26\,dex for the model with only \gr{}. Our results imply that it may be worthwhile to take \ri{} into account, besides the single color \gr{}, to better constrain \stellarML{}. We do not perform a full HBM here, since the purpose of this section is only to provide a possible path forward for future analyses. \ri{} is potentially useful for constraining a tighter color--\stellarML{} relationship because it helps to identify sSFR, which can be degenerate with metallicity or dust at a fixed \gr{}. In the left column of Figure~\ref{fig:ri}, we show that both the $\logML$ residuals and \ri{} correlate with $\textrm{sSFR}_\textrm{100\,Myr}$, which means that sSFR is the major driver for the potential of \ri{} in reducing \stellarML{} residuals. At a fixed \gr{} color, a bluer \ri{} color is associated with younger star-forming galaxies, whereas a redder \ri{} color is typically associated with older quiescent systems. 
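The gain from a second color can be illustrated with a toy least-squares comparison on synthetic data (all coefficients below are invented for illustration, not fits to our catalog): when the target genuinely depends on a second predictor, including it shrinks the residual scatter of the fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Synthetic "galaxies": log(M/L) depends on two colors plus noise
# (made-up coefficients, purely illustrative).
gr = rng.normal(0.3, 0.15, n)
ri = rng.normal(0.1, 0.08, n)
logml = -0.8 + 1.5 * gr + 0.4 * ri + rng.normal(0.0, 0.05, n)

def residual_scatter(predictors, y):
    """Least-squares linear fit with an intercept; return rms residual."""
    A = np.column_stack([np.ones(len(y))] + list(predictors))
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.std(y - A @ coef))

s_one = residual_scatter([gr], logml)       # g-r only
s_two = residual_scatter([gr, ri], logml)   # g-r and r-i
# s_two < s_one: the second color absorbs part of the residual scatter
```

Both fits recover a similar average slope against the first color; the difference appears only in the residual scatter, mirroring the behavior of the two linear models described above.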
We speculate that \ri{} correlates with $\textrm{sSFR}_\textrm{100\,Myr}$ because the \textsl{r}-band photometry likely puts constraints on the H$\rm{\alpha}$ emission line, an indicator of the most recent SFR. If this is true, adding \ri{} may only benefit the color--\stellarML{} relationship for SED models that include nebular emission. The right column of Figure~\ref{fig:ri} indicates that including \ri{} has the largest effect on red galaxies with $(\textsl{g}-\textsl{r}) \gtrsim 0.3$. For galaxies with red \gr{} colors, \ri{} may thus be useful in differentiating dusty star-forming galaxies (blue \ri{}) from quiescent galaxies (red \ri{}). % This agrees with our finding that \ri{} has a strong correlation with $\hat\tau_2$. The fact that \ri{} anticorrelates with sSFR and reduces the scatter of $\logML$ estimates for red galaxies likely suggests that \ri{} helps distinguish between age and dust. \section{Summary} \label{sec:summary} The empirical rest-frame optical color--\stellarML{} relationship is greatly influenced by the physical model used in SED fits. In this work, we contrast the \gr{}--$\logML$ relationship between a simple 4-parameter SED model and a sophisticated 14-parameter \prospector{} model using 63,430 \hst{} galaxies up to $z=3$. We utilize a hierarchical Bayesian model to fit the relationship, which allows us to account for the SED-fit posteriors and priors explicitly. We show the distinction between the two model relationships and identify the galaxy properties that drive the difference. Furthermore, by taking advantage of HBM shrinkage, we learn more about the true distribution of the \gr{} color and $\ML$. Our main findings are as follows: \begin{itemize} \item The two contrasting SED models derive similar average \gr{}--$\logML$ relationships but with significantly different scatters. 
The more sophisticated \prospector{} model predicts a looser and less skewed relationship than the simple model (Figure~\ref{fig:posteriors}), with a 1$\sigma$ uncertainty of 0.28\,dex compared to 0.12\,dex. The \stellarML{} and \gr{} offsets between the two SED models are mostly attributed to the nonparametric SFHs and nebular emission in \texttt{Prospector-$\alpha$}, respectively. However, the two effects together shift the average relationship parallel to itself, toward higher \stellarML{} and redder color. The simple model is too restricted and not sufficient for describing all galaxies, especially the blue ones, because the exponentially decaying SFH does not allow galaxies to have both old stars and optically blue colors. % \item Stellar age is the major driver of the difference between the two model relationships (Figure~\ref{fig:rf}). For the \prospector{} model, we find a significant redshift evolution of the relationship (Figure~\ref{fig:zevolv}). The average relationship for low-$z$, older galaxies has higher \stellarML{} than that of high-$z$, younger galaxies. The \prospector{} nonparametric SFH adds more scatter to the relationship through a more flexible treatment of SFHs. We do not recover a redshift evolution in the simple model due to the limited ability of exponentially declining SFHs to capture outshining effects. The shape of the dust attenuation curve is an important driver of the \stellarML{} offset in red galaxies. \item The nebular line and continuum emission enabled in \prospector{} is the main reason for the different color measurements (Figure~\ref{fig:colordiff}). The objects with extreme color offsets are likely to be EELGs. \item The shrinkage analysis of the HBM motivates us to learn better galaxy SED-fitting priors from the data. Our current priors are slightly too wide for most galaxies but should be weaker for some red galaxies (Figure~\ref{fig:prior_shrinkage_comparison}). 
\end{itemize} Our work shows that the optical color--\stellarML{} relationship suffers from systematic errors depending on the choice of the SED model. The skewness of the distribution of \stellarML{} at a given color may cause a bias when estimating $M_*$ for a specific galaxy using the average relationship. Given the redshift evolution we found, future work needs to be careful when applying the empirical relationship determined locally to high-redshift galaxies. Our results can be expanded in several ways. The HBM in this work is a path forward to informing better priors for SED fitting using various hierarchical models. Dynamical masses will provide useful constraints on the relationship, and it may help to include dynamical measurements in the HBM. Ideally, if we have a more flexible model, such as an HBM fitting both the $M/L$ distribution at a given color and the color distribution at a given $M/L$, we can better parameterize the $M/L$ distribution at different colors. Finally, it is possible that the scatter in the relationship can be significantly reduced by adding only one or two more colors, such as the \ri{} color. \acknowledgments We thank Eric Bell for an insightful review. We thank Marijn Franx, Charlie Conroy, and Ben Johnson for help in the early stages of this project. We further thank Rachel Bezanson for thoughtful comments and discussion on future science. We also thank Pieter van Dokkum for helpful discussion. Computations for this research were performed on the Pennsylvania State University's Institute for Computational and Data Sciences' Roar supercomputer. \software{Prospector \citep{Prospector-1, Prospector-2}, FSPS \citep{Conroy2009}, PYTHON-FSPS \citep{python-fsps}, dynesty \citep{dynesty}, MIST \citep{MIST0, MIST1}, MILES \citep{MILES}, NumPy \citep{harris2020array}, SciPy \citep{2020SciPy-NMeth}, scikit-learn \citep{Pedregosa2011scikit}, Matplotlib \citep{Hunter2007}, Astropy \citep{astropy:2013, astropy:2018}. 
\\~\\} \appendix \section{Mock Test} \label{sec:appendixA} We perform mock tests to validate our population model described in Equation~\ref{eq:ProbML}. We generate $10^5$ mock galaxies and model them with the sgt distribution discussed in Section~\ref{sec:model}. The \gr{} color is sampled from a truncated normal distribution, which is similar to the data distribution in Figure~\ref{fig:priors}. For each object, we choose a random redshift between 0.5 and 3. We then generate their $\logML$ from the most likely parameters of the HBM modeling of the \prospector{} SED fits. We build 15 samples for each object, and for each sample we add small intrinsic errors drawn from normal distributions with a scale of 0.01\,mag for \gr{} and 0.01\,dex for $\logML$. Figure~\ref{fig:mock} shows that our HBM successfully reproduces the mock data. We compare here the PDF of the true sgt model from which we simulated the mock data to the posterior of the HBM in the \gr{}--$\logML$ plane. They are very similar, except that the HBM result has less scatter at the blue and red ends. We find that the likelihood for the best-fit parameters of the HBM fit is slightly higher than for the true solution. This implies that we are still sensitive to the number of galaxies and to the errors on \gr{} and $\logML$; our population model needs more data to recover the exact parameters. Regardless, the distribution of $\logML$ can be measured to relatively high precision. \bibliography{main}{} \bibliographystyle{aasjournal} \end{CJK*}
Title: The Differential Assembly History of the Centers and Outskirts of Main Sequence Galaxies at $z\sim2.3$
Abstract: We present a study of spatially-resolved star formation histories (SFHs) for 60 $z\sim2.3$ main-sequence, star-forming galaxies selected from the MOSDEF spectroscopic survey in the GOODS-N field. Photometry is decomposed into a central and outer spatial component using observed $z_\mathrm{F850LP}-H_\mathrm{F160W}$ colors. The Prospector code is used to model spectral energy distributions for the centers, outskirts, and integrated galaxy using HST/ACS and WFC3, Spitzer/IRAC, and ground-based photometry, with additional constraints on metallicity and spectroscopic redshift from MOSDEF spectroscopy. For the low-resolution bands, spatially-resolved photometry is determined with an iterative approach. The reconstructed SFHs indicate that the majority of galaxies with $\log(M_\star/M_\odot)<10.5$ are observed while their central regions undergo relatively recent ($<100$ Myr) bursts of star formation, while the outskirts have a smooth, quasi-steady SFH. The enhanced star formation activity of the central parts is broadly consistent with the idea that it is produced by highly dissipative gas compaction and accretion. The broad dispersion of central density and size observed in the sample suggests that for the selected galaxies this process has started but is still far from being completed. The implication would be that selecting star-forming galaxies at cosmic noon frequently includes systems in an "evolved" evolutionary phase where the centers have recently started a burst of star formation activity that will likely initiate inside-out quenching in the next several hundred million years.
https://export.arxiv.org/pdf/2208.01653
\title{The Differential Assembly History of the Centers and Outskirts of Main Sequence Galaxies at $z\sim2.3$} \correspondingauthor{Sam E. Cutler} \email{secutler@umass.edu} \author[0000-0002-7031-2865]{Sam E. Cutler} \affiliation{Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA} \author[0000-0002-7831-8751]{Mauro Giavalisco} \affiliation{Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA} \author[0000-0001-7673-2257]{Zhiyuan Ji} \affiliation{Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA} \author[0000-0001-8551-071X]{Yingjie Cheng} \affiliation{Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA} \keywords{galaxies: evolution -- galaxies: star formation -- galaxies: SED fitting} \section{Introduction} A fundamental question in galaxy evolution is the formation history of the dense stellar cores sometimes associated with galactic bulges. In the canonical picture, galaxy growth and quenching are inside-out processes. Massive galaxies ($>10^{11}~M_\odot$) build their outer regions (i.e., increase in size) at lower redshift \citep{vanDokkum10,Whitney19,Mosleh20,Cutler22,Ji22}, while the central parts of these galaxies are in place as early as $z\sim2$ \citep{Carrasco10}. Similar studies have shown this inside-out growth is prominent in galaxies across the main sequence and even below the main sequence \citep[e.g.,][]{vanderWel14b,Nelson16,Dimauro22}. The prominence of galactic bulges is correlated both with the stellar mass of the galaxy and the scale length of the disk \citep{Shen03,vanderWel14b}, indicating that the growth of the disk is tied to the formation and structure of the bulge. 
Similarly, the growth of the central supermassive black hole in galaxies is also correlated with the bulge strength \citep{Haring04,Kormendy13}, which suggests that the bulge may play a role in active galactic nuclei (AGN) strength and corresponding quenching processes \citep{Chen20}. Bulges have also been tied to slow, inside-out quenching processes, in which massive bulges stabilize the gas in the disk and prevent it from collapsing \citep[so-called ``morphological quenching'',][]{Martig09}. Further studies suggest that the bending of the star-forming main sequence (SFMS) to lower specific star formation rates ($\mathrm{sSFR}\equiv\mathrm{SFR}/M_\star$) at higher stellar mass is evidence of the presence of old bulges in massive galaxies \citep{Abramson14}, though this is disputed \citep[e.g.,][]{Guo15,Schreiber16,Dimauro22}. Dense stellar cores have been directly observed in the most massive galaxies as early as $z\sim2.5$ \citep[e.g.,][]{Carrasco10,vanDokkum14,Genzel17,Genzel20}. Much of the mass evolution in these galaxies occurs in the outer stellar envelope, while the central regions maintain a roughly constant mass \citep{vanDokkum14}. However, these galaxies are likely the progenitors of today's giant ellipticals \citep{Genzel17,Genzel20}, due to their location on the SFMS, and there is no clear observational evidence for the direct detection of distinct centers in less massive, potential Milky Way progenitors at higher redshifts. In contrast to the results of massive galaxy studies, some observations suggest that cores are not fully formed in Milky Way progenitors at high redshift and that significant mass evolution occurs at all radii \citep{vanDokkum13}. In this scenario, bulges likely form alongside disks through migration of star-forming clumps \citep{Dekel09a,Dekel09}. This is driven by the accretion of cold gas filaments into the disk \citep[as seen in][]{Keres05}. 
The massive amount of gas entering the disk subsequently fragments into dense clumps through violent disk instabilities, and these clumps become sites of intense bursts of star formation. Dynamical friction then causes these clumps to migrate to the center of the galaxy and merge to form a bulge \citep{Dekel09a,Dekel09,Ceverino15,Mandelker17,Renzini20}. Observations support this formation pathway, as clumps closer to galactic centers tend to be older and less active \citep{Guo12}, though other secular evolution processes \citep[e.g., bar instabilities,][]{Kormendy04} may also play a role. In another scenario, a compaction event causes the dissipative collapse of cold gas into the galactic center \citep{Dekel14,Zolotov15,Tacchella18}. This wet disk contraction triggers a burst of star formation in the center of the galaxy, resulting in a so-called ``blue nugget''. Compaction events can be caused by a number of physical mechanisms, including violent disk instabilities triggered by cold gas accretion \citep{Dekel14}, mergers \citep[which can also cause disk instabilities, see][]{Zolotov15,Inoue16}, and collisions of counter-rotating streams \citep{Dekel19}. The removal of gas from the disk, as well as other internal (star formation and AGN feedback) or external (halo heating of CGM gas) processes, then leads to the galaxy quenching. However, the galaxies may only stay quenched permanently if they have reached a certain cutoff in halo mass and the CGM is heated to the point where cold mode accretion stops. If this threshold is not reached, these galaxies may be able to regrow their disks \citep{Tacchella16}. 
This could explain the discrete bimodal distribution seen in chemical diagrams of the Milky Way center \citep{Queiroz20}: $\alpha$-rich, metal-poor populations (the bulge and chemical thick disk) are formed in an early burst caused by gas contraction, which briefly quenches the galaxy and ejects gas out of the galactic center; cold mode accretion then resumes, forming an $\alpha$-poor, metal-enhanced population in the thin disk that mixes into the bulge/thick disk over time. The progenitors of today's Milky Way/$L^*$ galaxies are less massive than the galaxies studied in \cite{Genzel17} or \cite{Genzel20}. Due to their location in the middle of the SFMS, Lyman-break galaxies (LBGs) have been identified as potential candidates for progenitors of Milky-Way-like galaxies \citep{Giavalisco96,Steidel96,Papovich01,Shapley01,Giavalisco02,Williams14,Steidel16}. Moreover, chemical and kinematic studies of the bulge of the Milky Way in \cite{Queiroz21} suggest that the bulge is most likely an old, pressure-supported component, which formed around $z\sim2-3$. As such, observations of LBGs between $2<z<3$ could reveal forming bulges and help constrain the physical processes behind bulge formation in lower mass galaxies. In this paper, we search for these objects in a sample of LBG-like galaxies, i.e., UV-bright, low-obscuration star-forming galaxies, at $z\sim2.3$ in the Great Observatories Origins Deep Survey-North (GOODS-N) field. It is important to note that throughout this paper the term ``bulges'' is used liberally to refer to the often compact and dense central regions of the galaxies under consideration, rather than in the strict sense of a classical bulge (or pseudobulge) used in connection with local galaxies. In the Milky Way this central component includes the chemical thick disk, which is also believed to form around $2<z<3$ and is confined to the inner few kpc of the galaxy \citep{Queiroz21, Miglio21}. 
Thus, the word ``bulge'' in this paper refers to the central regions of the galaxies with no consideration, in defining the term, of their star formation activity, age, metallicity, light profile or dynamical state, and likely includes significant contributions from a forming bulge and thick disk. In fact, the primary goal of this paper is to investigate whether the central regions of star-forming galaxies at cosmic noon that are plausible progenitors of today's MW-like galaxies are already characterized by a different star formation history than the outer regions. Similarly, the term ``disk'' refers to the outer regions of the galaxy only, though this likely covers the thin disk with some contamination from the outer thick disk. We reconstruct the star formation history of our target galaxies using the fully Bayesian \textsc{Prospector} code \citep{Johnson21}, which we use to fit the sensitive CANDELS \citep{Grogin11,Koekemoer11} panchromatic spectral energy distributions (SEDs) of the two resolved, color-selected subcomponents, the center or ``bulge'' and the outskirts or ``disk'', to stellar population synthesis models. To obtain age measurements that are as accurate as possible, we also use spectroscopic redshift and gas-phase metallicity measurements for our galaxies from the MOSFIRE Deep Evolution Field (MOSDEF) spectroscopic survey \citep{Kriek15} as strong priors in the fits. MOSDEF is also biased towards UV-selected galaxies (due to a lower spectroscopic success rate for red galaxies), which provides ideal targets for this study. Metallicity measurements, along with coverage in the observed infrared (IR) from \textit{Spitzer}/IRAC, can help break the age-metallicity degeneracy, while the rest-frame optical spectroscopy also provides accurate redshifts for SED fitting. 
The primary goal of this study is to reconstruct the star formation history of the central regions (bulges) and outskirts (disks) of our targets to help provide empirical constraints on the mechanisms of their assembly. If the central regions of these galaxies have been following a substantially different evolutionary path than the outskirts, we should see this in the resulting star formation histories (SFHs): centers could form earlier than or coevally with the outskirts, but most models expect their formation to be bursty and to decline faster than in the outer regions \citep{Dekel09,Guo12,Franco20,Dimauro22}, possibly in an inside-out fashion, i.e. the centers should show a declining and sharply peaked SFH when compared to the disks. In Section \ref{sec:data}, we describe the photometric and spectroscopic data used in SED fitting and core decomposition. Section \ref{sec:analysis} explains the techniques used to decompose galaxies into resolved central and outer components and deal with unresolved photometry ($K$-band/IRAC), as well as the \textsc{Prospector} model we use to fit SEDs. We discuss the resulting SFHs and their impact on our understanding of galaxy formation in Section \ref{sec:sfhs}. A summary of the conclusions is presented in Section \ref{sec:summary}. Throughout the paper we assume a flat $\Lambda$CDM cosmology with $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, and $\mathrm{H}_0=70\mathrm{~km~s}^{-1}~\mathrm{Mpc}^{-1}$, as well as a Kroupa initial mass function \citep{Kroupa01} for stellar masses. All magnitudes are in the AB system. \section{Data and Sample Selection}\label{sec:data} Our sample consists of 60 $z\sim2.3$ galaxies from the GOODS-N field. GOODS-N is chosen because it has both rest-frame optical spectroscopy from MOSDEF \citep{Kriek15} and photometric measurements ranging from the rest-frame ultraviolet (UV) to mid-IR (MIR). 
An initial sample of galaxies is first chosen from the multiwavelength \textit{Hubble Space Telescope} (HST) $H_\mathrm{F160W}$-selected CANDELS/SHARDS catalogs of \cite{Barro19}. The sample is then limited to only galaxies with MOSDEF spectroscopic measurements and high quality spectroscopic redshifts. Potential AGN are removed from the sample using mid-IR (MIR) \textit{Spitzer}/IRAC selections presented in \cite{Coil15}. Finally, these galaxies are matched to a sample of robust metallicity measurements from MOSDEF \citep{Sanders18} at $z\sim2.3$. \subsection{Photometric Data}\label{sec:phot} Photometric data is taken directly from the CANDELS/SHARDS catalogs \citep{Barro19} for each cross-matched galaxy. In the UV, we include Kitt Peak North \textit{U}-band photometry from the Hawaii Hubble Deep Field North survey \citep{Capak04}. The optical photometry is composed of HST/ACS observations in the F435W, F606W, F775W, and F850LP filters from GOODS \citep{Giavalisco04}, as well as F814W from CANDELS \citep{Grogin11,Koekemoer11}. CANDELS data is also used in the near-IR (NIR) with the HST/WFC3 F105W, F125W, and F160W filters, with additional NIR measurements coming from HST/WFC3 F140W from the AGHAST survey (GO 11600; PI: B. Weiner) and Subaru/MOIRCS \textit{K}-band \citep{Kajisawa11}. The photometry is rounded out by MIR \textit{Spitzer}/IRAC observations in the 3.6, 4.5, 5.8 and 8.0 $\mu$m filters \citep{Ashby13,Dickinson03}. Stellar masses from the catalog are also included as initial estimates for the stellar mass of the galaxy. These masses are derived with the FAST code \citep{Kriek09}. \subsection{Spectroscopic and Metallicity Data} We first attempt to incorporate \textit{H}-, \textit{J}-, and \textit{K}-band (rest-frame optical) spectroscopic measurements from MOSDEF \citep{Kriek15} into the sample of galaxies from GOODS-N. MOSDEF is ideal for a sample of LBGs, as the survey is biased towards galaxies in the middle of the SFMS in particular \citep[see Fig. 
16 in][]{Kriek15}. Spectra are first required to be measured in GOODS-N, be a primary target of the survey (not a serendipitous object in the slit), and have a high quality spectroscopic redshift (i.e. measured from at least one emission line with S/N$\geq2$ and within the 95\% confidence interval of a photometric redshift). The MOSDEF spectra are then matched via ID to each galaxy in the photometric sample. Due to the low S/N continuum of the MOSDEF spectra, these spectra proved unsuitable for direct use in our SED fitting methods. Nevertheless, the redshifts and metallicities from MOSDEF are extremely useful in setting accurate priors on the free parameters in the SED fits. The aforementioned metallicities are measured from MOSDEF spectra at $z\sim2.3$ by \cite{Sanders18}. This sample contains galaxies with robustly measured redshifts at $2\leq z\leq2.7$, $\log(M_\star/M_\odot)\geq9$, and high S/N H$\alpha$ and H$\beta$ measurements. AGN are also excluded from this sample with a combination of IR, X-ray, and emission line diagnostics. Multiple indicators are used to measure the oxygen abundance of each galaxy. We convert oxygen abundances to metallicities using a solar oxygen abundance of $12+\log(\mathrm{O/H})_\odot=8.69$ \citep{Asplund09}. \section{Analysis}\label{sec:analysis} In this section we discuss the methods used to measure the SEDs of resolved subcomponents of galaxies. Galaxies are decomposed into ``bulge'' and ``disk'' components with color selections in Section \ref{sec:decomp}. In Section \ref{sec:prospect}, we discuss the \textsc{Prospector} code \citep{Johnson21} and the various settings and templates used. Lastly, Section \ref{sec:iter} includes a description of the iterative method used to estimate the unresolved IRAC photometry of the bulge and disk components. 
\subsection{Center and Outskirts Decomposition}\label{sec:decomp} In the present Universe, the Milky Way bulge and thick disk are known to be older and redder than the surrounding regions of the galaxy \citep[e.g.,][]{Bensby17,Barbuy18,Queiroz21,Miglio21}. As such, a forming bulge/stellar core can potentially be identified via its rest-frame optical colors. Using resolved, observed-frame $z_\mathrm{F850LP}-H_\mathrm{F160W}$ colors, we decompose each galaxy in our sample into separate central and outer components. This specific color is chosen because it spans the 4000\AA{} break, which is a known age indicator and can help identify older stellar populations \citep[e.g.,][]{Kauffmann03,Wild09}. Image cutouts of each galaxy in both filters and segmentation map cutouts are made with Montage v6.0\footnote{\url{http://montage.ipac.caltech.edu}}. Nearby objects are masked and the centroid in F160W is measured. A series of 50 circular apertures with radii ranging from 1 to 10 pixels (0.06\arcsec{} to 0.6\arcsec{}) are placed at the centroid and aperture photometry is measured in both F850LP and F160W. Fixed circular apertures are used because any detectable bulge at this redshift would likely be small enough to fall within the point spread function (PSF) FWHM. We then choose the center aperture that maximizes the color (i.e. the aperture containing the reddest flux). To do this, we impose a condition that chooses a local maximum of the color growth curve if the following local minimum (at a larger aperture radius) is less than 0.01 times the local maximum color, and the global maximum otherwise. This prevents overestimation of the center aperture by secondary peaks containing background flux or other contamination. An example bulge decomposition is shown in Figure \ref{fig:decomp} in a color image (F160W/F125W/F850LP for the red, green, and blue filters, respectively), as well as F160W and F850LP individually. 
The central aperture is shown with a green circle and highlighted by a red point in the inset $z_\mathrm{F850LP}-H_\mathrm{F160W}$ growth curve. The chosen centers are then confirmed visually, and in cases where a distinct center is not visually identifiable we rely on the identification from color selection. As another check, we examine the residuals of best-fit exponential disk models for a subsample of 10 galaxies. In theory, subtracting an exponential disk model from the galaxy should leave behind light from the bulge (or other clumpy regions of star formation, which we discuss later). Exponential disk models are fit to the F160W cutouts of the subsample using GALFIT \citep{Peng02,Peng10} and the best-fitting model is subtracted from the image cutout. For all galaxies in the subsample, the color-selected center aperture encloses the majority of the residual flux, suggesting that these selections are capturing the bulge and dense stellar core. The center fluxes are then computed via aperture photometry for each HST band. Photometric uncertainties are computed by summing in quadrature the sigma image ($1/\sqrt{w}$, where $w$ is the weight image) within the aperture. The corresponding outskirts fluxes are computed by subtracting the center from the total catalog fluxes for each filter. Outskirts uncertainties are computed by subtracting the center photometric error from the catalog flux error in quadrature. One potential contaminant in our color selection of galaxy centers is star-forming regions that are reddened due to high levels of dust attenuation. In star-forming regions, the dustiest phases occur early on, in the young, initial part of a burst of star formation. As galaxies quench, they clear their interstellar medium (ISM) of dust and other material \citep[e.g.,][]{Ciesla21}, so these centers should be reddened due to age and not dust given the expected stellar populations of galactic centers. 
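The center/outskirts decomposition described above can be sketched schematically as follows. This is a numpy-only illustration under our reading of the aperture-selection rule (taking the quoted 0.01 threshold at face value), and the function names are ours, not those of the actual pipeline.

```python
import numpy as np

def choose_center_radius(radii, color):
    """Pick the aperture radius maximizing the z-H color growth curve.

    A local maximum is adopted if the local minimum that follows it (at a
    larger radius) drops below 0.01 times the local-maximum color;
    otherwise the global maximum is used. This guards against secondary
    peaks from background flux or other contamination.
    """
    color = np.asarray(color, dtype=float)
    for i in range(1, len(color) - 1):
        if color[i] > color[i - 1] and color[i] > color[i + 1]:
            j = i + 1  # walk down the curve to the next local minimum
            while j < len(color) - 1 and color[j + 1] < color[j]:
                j += 1
            if color[j] < 0.01 * color[i]:
                return radii[i]
    return radii[int(np.argmax(color))]

def outskirts_flux(f_total, f_center, err_total, err_center):
    """Outskirts flux and uncertainty: total minus center, with the
    center error removed from the catalog error in quadrature."""
    return f_total - f_center, np.sqrt(err_total**2 - err_center**2)
```

The same radius-selection logic applies unchanged to any monotonically sampled growth curve; only the 50 aperture radii and the two HST bands are specific to our setup.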
However, galaxies undergoing mergers may have bursts of dusty star formation throughout the galaxy, including the outskirts. The reconstructed SFH should capture this possibility and, in any case, the larger sample size may help reduce the impact of such outliers. As we shall see, however, in most of our galaxies the SFH of the outskirts is substantially smoother and less ``bursty'' than that of the centers. As mentioned earlier, these centers also likely include a significant contribution from the chemical thick disk. However, we are interested in the formation and evolution of the galactic center as a whole, so investigating the evolution of both these components (bulge and thick disk) is useful in understanding the formation history of the galaxy. Moreover, most quenched galaxies at $z\sim2$ tend to be fast rotators with large velocity dispersions \citep{Newman18}, and thus are most likely dominated by chemical thick disks, so understanding the formation of this component as well as the rest of the galactic center might help explain this behavior. \subsection{\textsc{Prospector} Inputs}\label{sec:prospect} For our SED fitting analysis, we use the \textsc{Prospector} code \citep{Johnson21}. \textsc{Prospector} forward models observed data (spectra and photometry) of composite stellar populations (CSPs) given a set of parameters describing the CSP (e.g. mass formed, metallicity, and dust extinction) and other observational effects (e.g. filters and redshift), which can either be fixed or varied. Using these models, likelihood and posterior probabilities are computed via comparison to observed data and noise properties. CSPs are generated with the Flexible Stellar Population Synthesis code \citep[FSPS,][]{Conroy09,Conroy10a,Conroy10b} using \textsc{python}-FSPS \citep{ForemanMackey14}. Priors are also important ingredients in \textsc{Prospector}, given the Bayesian forward-modeling approach this code takes. 
Since significant parts of SED parameter space can be highly degenerate (e.g. the age-dust-metallicity degeneracy), priors can help shape the posterior distribution and capture more accurate stellar population properties. Moreover, the choice of prior is crucial in determining nonparametric (piecewise step function) SFHs with lower quality data. In a non-Bayesian framework (using regularization), \cite{Ocvirk06} determine that only 8 episodes of star formation can be recovered with high quality spectroscopy ($R=10,000$, S/N$=100$). However, \cite{Leja19} find that with \textsc{Prospector}, nonparametric SFHs can be useful with photometry or low-quality spectra if the SFH prior is well tuned. In this work, the redshift is fixed at the spectroscopic redshift measured from MOSDEF. The metallicity ($\log Z_\star$) is allowed to vary and strong priors are applied using MOSDEF metallicity measurements \citep{Sanders18} for SED fits to the total galaxy and disk. The priors are only applied to the disk component because we assume that the metallicity of the galaxy should be dominated by the metallicity of the disk. This is supported by the fact that the Milky Way bulge consists of an older, metal-poor population \citep[e.g.,][]{Queiroz21} and that bulges of star-forming galaxies at $z\sim2.3$ should be more metal-poor than the rest of the galaxy. Dust and nebular emission are incorporated with the $V$-band optical depth ($\tau_V=1.086A_V$) and ionization parameter ($U_\mathrm{neb}$) allowed to vary. SFHs are modeled nonparametrically using the continuity prior described in \cite{Leja19}. This prior fits directly for the ratio of star formation rates (SFRs) between bins ($\Delta\log(\mathrm{SFR})$), which weights against discontinuities in the SFH (via sharp jumps or drops in the SFR). 
Each individual ratio ($\log(\mathrm{SFR}_i/\mathrm{SFR}_{i+1})$) is drawn from a Student's $t$-distribution, as in \cite{Leja19}, with an initial value of $\log(\mathrm{SFR}_i/\mathrm{SFR}_{i+1})=0$ for all SFR bins $i$. The SFH is computed over 7 fixed age bins (in lookback time from the observed redshift): \begin{align} 0<&t<30~\mathrm{Myr}\nonumber\\ 30<&t<99~\mathrm{Myr}\nonumber\\ 99<&t<218~\mathrm{Myr}\nonumber\\ 218<&t<479~\mathrm{Myr}\\ 479~\mathrm{Myr}<&t<1.06~\mathrm{Gyr}\nonumber\\ 1.06<&t<2.32~\mathrm{Gyr}\nonumber\\ 2.32<&t<\sim2.7~\mathrm{Gyr},\nonumber \end{align} where the last bin is adjusted to account for the age of the universe at the observed redshift. Only 5 age bins are used when decomposing the IRAC and $K$-band photometry, as discussed in the next section. The smaller size of the oldest bin also allows for a maximally old population \citep[see][]{Leja19}. The total mass formed in the best-fit SFH ($M_F$) is left as a free parameter and an initial guess is taken from the CANDELS/SHARDS measurements from FAST \citep{Barro19}. For subcomponents, this initial guess is scaled by the fraction of light in F160W: \begin{align} M_{x}=M_\mathrm{tot}\left(\frac{f_\mathrm{F160W,x}}{f_\mathrm{F160W,tot}}\right), \end{align} where $x$ refers to either the bulge or disk component. The mass-weighted age ($t_\mathrm{M}$ or $\langle t_{\star}\rangle_M$) is determined from the total mass formed and the SFH, and the total mass formed is corrected to a stellar mass ($M_\star$) using the approximation \begin{align} \log M_\star=\log M_F+1.06-0.24\log(t_\mathrm{M})+0.01\log(t_\mathrm{M})^2 \end{align} with masses in solar units and ages in years \citep[Eq. 2]{Leja13}. This mass accounts for the total mass in stars and stellar remnants at observation (i.e. excluding mass lost during supernovae, winds, etc.), rather than all the mass formed throughout the galaxy's history. 
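The two bookkeeping steps above, scaling the initial mass guess by the F160W light fraction and converting mass formed to surviving stellar mass, can be written as minimal helpers; the function names are ours, chosen for illustration:

```python
import numpy as np

def initial_mass_guess(m_tot, f160w_component, f160w_total):
    """Scale the total-galaxy mass guess by the component's F160W light fraction."""
    return m_tot * (f160w_component / f160w_total)

def log_stellar_mass(log_m_formed, t_m_yr):
    """Convert log total mass formed to log surviving stellar mass using
    the approximation quoted in the text (Leja et al. 2013, Eq. 2), with
    masses in solar units and the mass-weighted age t_m_yr in years."""
    log_t = np.log10(t_m_yr)
    return log_m_formed + 1.06 - 0.24 * log_t + 0.01 * log_t**2
```

For example, a component with $\log M_F=10$ and a mass-weighted age of 1 Gyr would have $\log M_\star\approx9.71$, i.e. roughly half the formed mass surviving in stars and remnants.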
SEDs are fit to all filters described in Section \ref{sec:phot} where available for the total galaxy. For subcomponents, the $U$-band is not fit because it cannot be resolved (the unresolved $K$-band and IRAC data is dealt with in an iterative process explained in Sec. \ref{sec:iter}). Best-fit parameters are first determined by pure maximum-likelihood estimation via Levenberg-Marquardt (LM). Because much of the likelihood space in SED fitting is non-Gaussian and ill-conditioned, optimization methods like LM are not recommended for actually determining parameter values \citep{Johnson21}. Instead, the LM-determined parameters are used as an initial guess for Monte Carlo sampling of the posterior probability distribution function (PDF). Monte Carlo sampling is done using Markov Chain Monte Carlo (MCMC) with the \textsf{emcee} code \citep{ForemanMackey13}, using 64 walkers and 256 iterations (32 and 128 when decomposing IRAC photometry, see Sec. \ref{sec:iter}). Best-fit parameters are then determined by finding the maximum a posteriori (MAP) sample, with uncertainties from the 16th and 84th percentiles of the chain. Example \textsc{Prospector} results for the total galaxy as well as bulge and disk components are shown in Figures \ref{fig:SED_ex} (SEDs) and \ref{fig:SFH_ex} (SFHs), and in Appendix \ref{sec:excov} (covariances). \subsubsection{Impact of Adopted Assumptions of Metallicity Priors} We use global metallicity measurements from \cite{Sanders18} as a prior in the integrated SED. Since these measurements are derived from the integrated optical spectrum, using them as priors for the resolved components requires specific assumptions. In the previous section, we describe how the metallicity prior is applied to the outer component only, and the central metallicity is allowed to vary over a top-hat prior from $-3<\log(Z/Z_\odot)<0.2$. 
This was justified by observations of predominantly older, metal-poor populations in the galactic center of the Milky Way \citep{Queiroz21}. However, results from the SED fitting (presented in Sec. \ref{sec:sfhs}) suggest that the central populations actually form more recently than the outskirts. This would imply that the integrated metallicity may better reflect the central metallicity than that of the outskirts. To test this, we refit a subsample of galaxies that exhibited central populations with the youngest ages, instead applying the MOSDEF metallicity prior to both the center and outskirts. Applying the prior to both components is supported by the predominance of flat or slightly positive metallicity gradients in star-forming galaxies \citep{Simons21}. Applying these priors has no significant effects on the resulting star formation histories or derived metallicities. Thus, we consider the results where the metallicity prior is applied to the total and outer photometry only, as in the original procedure. \subsection{Iterative Method for Decomposed IRAC Photometry}\label{sec:iter} The resolution of HST imaging allows for direct measurements of the decomposed bulge and disk fluxes. However, data from other instruments, namely ground-based $K$-band from Subaru/MOIRCS and 4 MIR bands from \textit{Spitzer}/IRAC, is of too low resolution for resolved measurements. While fitting to the HST photometry only is possible, the constraints provided by the observed NIR/MIR are crucial in measuring the stellar mass and age, since this wavelength regime traces the rest-frame optical and NIR at $z\sim2.3$. In order to incorporate these important filters into the decomposed SEDs, we use a simple iterative method to estimate the $K$-band and IRAC flux of both the bulge and disk. This also motivates our use of a simple 2-component decomposition in lieu of more complicated techniques \citep[e.g. 
Voronoi tessellation, as in][]{Fetherolf20}, as the system of equations becomes more difficult to solve with more components. Note that the $U$-band photometry is excluded from the decomposed components and this iterative method. First, SEDs are fit using fewer walkers and iterations (32 and 128, respectively) to the HST filters only (the number of SFH bins is reduced to 5 for this step so that the number of data points is greater than the number of free parameters) in order to get MAP photometry estimates for all HST filters as well as the $K$-/IRAC bands. We then define the corrected $K$-band/IRAC photometry as the MAP predicted photometry multiplied by the ratio of the observed to MAP F160W flux (the longest wavelength HST filter available): \begin{align} f_\mathrm{x,corr}(\lambda)=f_\mathrm{x,MAP}(\lambda)\left(\frac{f_\mathrm{x,obs}(\mathrm{F160W})}{f_\mathrm{x,MAP}(\mathrm{F160W})}\right), \end{align} where $\lambda$ refers to one of the $K$-band or IRAC filters and $x$ indicates the bulge or disk subcomponent. The ``MAP'' and ``obs'' labels indicate that these are fluxes estimated by \textsc{Prospector} or observed photometries, respectively. The corrected $K$-band/IRAC fluxes are then scaled to agree with the observed $K$-band/IRAC fluxes of the total galaxy: \begin{align} f'_\mathrm{x}(\lambda)=f_\mathrm{x,corr}(\lambda)\left(\frac{f_\mathrm{obs}(\lambda)}{\sum_xf_\mathrm{x,corr}(\lambda)}\right), \end{align} where the summation is over the bulge and disk components and $f'_\mathrm{x}(\lambda)$ is the ``observed'' flux of the subcomponents to be fit in the next iteration. The uncertainty in these ``observed'' fluxes is given by \begin{align} \delta f'_\mathrm{x}(\lambda)=\delta f_\mathrm{obs}(\lambda)\sqrt{\frac{f_\mathrm{obs}(\lambda)}{f_\mathrm{x,corr}(\lambda)}}. \end{align} The SED is then fit again, this time including the ``observed'' bulge and disk photometries and uncertainties. 
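One pass of this correction-and-renormalization step, for a single unresolved band, can be sketched as follows; this is a numpy-only illustration, and the function name and array layout are ours:

```python
import numpy as np

def update_component_fluxes(f_map, f_map_f160w, f_obs_f160w, f_obs, f_obs_err):
    """One iteration of the unresolved-band decomposition.

    f_map        : MAP model fluxes of each component in the band, shape (n_comp,)
    f_map_f160w  : MAP model F160W fluxes of each component, shape (n_comp,)
    f_obs_f160w  : observed (resolved) F160W fluxes of each component, shape (n_comp,)
    f_obs        : observed total flux of the galaxy in the band
    f_obs_err    : uncertainty on f_obs

    Returns the new "observed" component fluxes f' and their uncertainties,
    following the three equations in the text.
    """
    f_corr = f_map * (f_obs_f160w / f_map_f160w)  # anchor to the resolved F160W flux
    f_new = f_corr * (f_obs / f_corr.sum())       # renormalize to the observed total
    f_new_err = f_obs_err * np.sqrt(f_obs / f_corr)
    return f_new, f_new_err
```

By construction, the updated component fluxes sum to the observed total in each band, so the fit-and-update cycle drives the model toward consistency with both the resolved HST and unresolved $K$/IRAC photometry.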
The process is repeated until the estimated total $K$-band/IRAC flux changes by $<5\%$ in all but one filter (this allows for flexibility when dealing with bands with very large photometric errors). This convergence is generally quick, though due to the longer computational times required to run \textsc{Prospector}, we limit this method to a maximum of 10 iterations. Only 2 of the 60 galaxies reach this limit. The behavior of the fractional change in the total $K$-band/IRAC flux over 10 iterations is shown in Figure \ref{fig:iter} (solid lines and points) and compared to the fractional photometric error in each band (dashed lines). After 4 iterations, all bands are below the $5\%$ threshold for the change in flux. At the same time, all bands except for the $K$-band have changed by less than the corresponding fractional error, indicating that further refining the photometry in future iterations is unnecessary. Finally, \textsc{Prospector} is run with the full number of walkers, iterations, and SFH bins (64, 256, and 7, respectively) on the components using these converged values for the $K$-band/IRAC flux. The summed bulge and disk photometry in the NIR and MIR is shown with black crosses in Figure \ref{fig:SED_ex}. In general, there is good agreement between the observed $K$-band/IRAC flux of the total galaxy and the predicted photometry from this iterative analysis. \section{Star Formation Histories}\label{sec:sfhs} The measured SFHs of our bulge sample may provide insight into how bulges in Milky Way progenitors formed. If bulges in these lower mass systems formed first, followed by inside-out growth of the surrounding disk \citep[e.g.,][]{vanDokkum10,Carrasco10,Nelson16}, then we would expect to see significant levels of star formation early on. 
In the cold mode accretion/clump merger scenario \citep{Dekel09}, or in other scenarios where the bulge grows coevally with the rest of the galaxy \citep[e.g.,][]{Kormendy04}, star formation rates should be more constant and the SFH should have a similar shape to that of the disk, though an increase in the clump accretion rate \citep[e.g.,][]{vanDokkum13} could result in a sharp peak in the bulge SFH. Similarly, a burst of star formation could imply a rapid growth of the bulge via wet disk contraction \citep{Dekel14,Zolotov15,Tacchella18}, and the galaxies with the strongest bursts of star formation should appear the most compact. Figure \ref{fig:sfh_dens} shows the SFHs (sSFR and fraction of mass formed) of the bulge (top), disk (middle), and total galaxy (bottom) for all 60 galaxies in the sample. The shading indicates how many galaxies have a similar SFR in a given age bin. A prominent feature is a strong burst of star formation between lookback times of 30 and 100 Myr before $z\sim2.3$. This indicates that bulges in these galaxies are younger and have not been built up slowly over time. For most of the galaxies, both the disk and total galaxy build up more of their mass earlier on than the bulge, and all three show a quenching event 0-10 Myr before observation. This quenching event agrees with the theory that bulge/core formation can morphologically quench galaxies by stabilizing the disk against future star formation \citep{Martig09} or that rapid gas consumption/AGN fueling can temporarily suppress star formation after compaction \citep{Tacchella16}. In Figure \ref{fig:sfh_dens_class}, we show the same SFHs, now as individual SFRs, separated in rows by the total stellar mass of the galaxy. This highlights the diversity of SFHs in this sample. A strong burst of SFR in the bulge is more prominent in lower mass galaxies, which also show relatively constant SFR in the disk and a rising SFH for the integrated galaxy. 
At higher masses, a single burst of star formation in the bulge is less common, with higher star formation rates usually distributed over a wider range of lookback times. The total SFR is also higher and more constant in these galaxies, compared to the increasing total SFHs of lower mass galaxies. This suggests that these galaxies may be undergoing inside-out growth \citep{vanDokkum10,Carrasco10,vanDokkum14,Nelson16}, forming a larger fraction of the bulge mass at earlier times. Figure \ref{fig:sfh_dens_class} also illustrates the difference between the SFHs of the centers and outskirts of these galaxies. In most of the sample, the center exhibits a large burst of star formation at late times while the outskirts form stars more steadily and have higher SFRs in general. This highlights the differential formation histories of the inner and outer regions of star-forming galaxies and establishes the existence of a formation pathway distinct from the canonical inside-out growth mechanism used for massive galaxy formation. \subsection{On the Robustness of the Reconstruction of the SFH} The robustness of the reconstruction of the SFH in non-parametric form made by \textsc{Prospector} has been extensively tested in a number of previous works \citep{Johnson21,Leja21,Tacchella21,Ji22}. In particular, these works have used synthetic galaxies from the IllustrisTNG cosmological hydrodynamical simulations \citep{Springel18,NelsonD18,Pillepich18,Naiman18,Marinacci18,NelsonD19,Pillepich19} to directly compare the \textsc{Prospector} SFH output to the input galaxies' SFH. They have also tested the stability of the results against assumed priors, in particular priors on the time dependence of the discretized SFH itself (e.g. continuity vs. Dirichlet priors). 
The general conclusions from these works are that the non-parametric SFHs derived by \textsc{Prospector} are robust and stable when good quality photometry covering a broad range of the rest-frame SED, from the UV to the near-IR, is available, as is the case here, and when some parameters, such as spectroscopic redshift and metallicity, are independently known and not left as free parameters during the fitting procedure. Here, although we do not repeat their tests, and follow them in adopting the continuity prior when deriving the SFH, we do test the stability of our results against the input photometry, the time binning adopted for the SFH reconstruction, and the adoption of the metallicity prior. In Appendix \ref{sec:sfhcomp} we compare the SFHs of the integrated galaxies shown in Figure \ref{fig:sfh_dens_class} with SFH measurements obtained from a much expanded photometric data set that comprises 42 photometric bands, with different time binning (9 time bins vs. our adopted 7 bins), and with and without assuming a strong metallicity prior. As discussed in Appendix \ref{sec:sfhcomp}, we conduct our test for the integrated SFHs only, and not for the centers and outskirts as well, because the extended photometric data is derived from ground-based images in natural seeing. The complexity of such a data set, therefore, prevents us from running our decomposition analysis in a robust way in this case. The conclusion from our test is that the integrated SFH is robust and stable, which lends strong support to the robustness of the decomposed SFHs of centers and outskirts, obtained with the same data sets, \textsc{Prospector} settings, and priors. \subsection{Star Formation Timescales}\label{sec:timescales} To further illustrate the different formation histories of galactic centers and outskirts, we compare various star formation timescales for the two components. 
In particular, we compute two values: the total galaxy timescale (equivalent to twice the standard deviation of the SFH) via \begin{align}\label{eq:tau} \tau_\mathrm{tot}=2\left[\int_{t_\mathrm{obs}}^{t_\mathrm{univ}}(t-t_\mathrm{age})^2\frac{\mathrm{SFR}(t)}{M_{\star,\mathrm{tot}}}dt\right]^{1/2}, \end{align} and the skewness of the SFH \begin{align}\label{eq:skew} \mathrm{Skew}=\frac{8}{\tau_\mathrm{tot}^3}\int_{t_\mathrm{obs}}^{t_\mathrm{univ}}(t-t_\mathrm{age})^3\frac{\mathrm{SFR}(t)}{M_{\star,\mathrm{tot}}}dt. \end{align} The total timescale describes how concentrated the star formation activity in the galaxy is, where a short timescale indicates the star formation is concentrated in a single burst while a long timescale represents gradual, continuous build-up of stellar mass. The skewness describes how late or early the star formation occurs, with a large positive skewness indicating that the star formation occurs late in the galaxy's history, while a large negative skewness indicates significant early star formation. Conversely, a skewness near zero implies the star formation rate is evenly distributed throughout the galaxy's history. In Figure \ref{fig:timescales} we compare the skewness of galaxy centers (circles) and outskirts (squares) over a range of total stellar masses, with the total timescale ($\tau_\mathrm{tot}$) as a third parameter. Generally, galaxy centers have a large positive skewness, indicating that most of the star formation in the core occurs at later times. Conversely, the outskirts have low skewness for all masses. The difference between the inner (red) and outer (blue) regions is also apparent in the inset histogram. The short timescales on which this star formation occurs in the cores are also indicative of a late burst of star formation, which is reflected in the SFHs in Figures \ref{fig:sfh_dens} and \ref{fig:sfh_dens_class}. 
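On the binned SFHs these two statistics reduce to SFR-weighted moments. A minimal sketch (the function name is ours), with $\tau_\mathrm{tot}$ taken as twice the SFR-weighted standard deviation of the formation times, as described above:

```python
import numpy as np

def sfh_timescale_and_skewness(t, sfr):
    """Discrete analogs of the timescale and skewness statistics.

    t   : representative times of the SFH bins (assumed uniformly spaced)
    sfr : star formation rate in each bin

    tau is twice the SFR-weighted standard deviation of t, and the
    skewness is the third SFR-weighted moment normalized by (tau/2)^3.
    """
    t = np.asarray(t, dtype=float)
    w = np.asarray(sfr, dtype=float)
    w = w / w.sum()                  # SFR weights, playing the role of SFR(t)/M_star
    t_age = np.sum(w * t)            # mass-weighted mean formation time
    var = np.sum(w * (t - t_age) ** 2)
    tau = 2.0 * np.sqrt(var)
    skew = np.sum(w * (t - t_age) ** 3) / var ** 1.5
    return tau, skew
```

For a symmetric SFH the skewness vanishes, while a sharply peaked, one-sided SFH gives a short $\tau_\mathrm{tot}$ and a large $|\mathrm{Skew}|$, the regime occupied by the bursting centers.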
Conversely, the outskirts appear to form in a much more gradual fashion, assembling their stellar mass over longer times, crudely $\approx5\times$ longer than the centers, with slowly increasing SFR. Notably, the skewness of the centers decreases on average with increasing mass (red, open circles) while that of the outskirts stays constant (blue, open squares), further suggesting that higher-mass star-forming galaxies in the sample have a different central formation history than their low-mass counterparts. \subsection{Recent Star Formation and Compaction}\label{sec:recent_sf} The SFH of the centers often shows large enhancements of SFR in the two most recent time bins at 10 and 100 Myr, indicative of a burst, during which time they form a substantial fraction of their stellar mass (72\% on average). During the same time period, for most galaxies the SFR of the outskirts remains approximately constant and their stellar mass increases substantially less, by only 16\%. In other words, in most galaxies the centers become proportionally more massive than the outskirts, with a mass growth rate increased by 450\% relative to the outskirts, and thus become more compact. This is consistent with the general features of the ``compaction'' phenomenon predicted, for example, in the presence of dissipative gas accretion in unstable disks \citep[e.g.,][]{Dekel14,Zolotov15,Tacchella16,Tacchella18}. To further examine the possibility that we are observing the centers of our galaxies during a compaction event, we compare the mass-weighted age and sSFR in the most recent 100 Myr with stellar mass, size $r_e$, and projected stellar mass density $\Sigma_1$, used as a measure of compactness, in Figure \ref{fig:cpt_ages}. In the first column, we compare the mass-weighted age (top) and recent sSFR (bottom) with the stellar mass of the bulge (red), disk (blue), and total galaxy (black). 
For the majority of galaxies, the centers are indeed substantially younger than the outskirts, with the age of the integrated galaxies being intermediate. The sSFR mirrors this behavior, with the centers having the largest values and the outskirts the lowest. More massive galaxies also tend to have lower sSFR in the past 100 Myr and older stellar populations, in both centers and outskirts, as well as for the integrated system. In the second and third columns, we compare the age and sSFR to the size and central density of the integrated systems, both quantities acting as proxies of compactness. Galaxies are separated into three different bins of total stellar mass, each indicated by different symbols: diamonds for low mass ($\log(M_\star/M_\odot)<9.5$), pentagons for intermediate mass ($9.5<\log(M_\star/M_\odot)<10.5$), and squares for high mass ($\log(M_\star/M_\odot)>10.5$). The color of a galaxy indicates its $\alpha$, where $\alpha\equiv\mathrm{SFR(100~Myr)}-\mathrm{SFR(10~Myr)}$. A larger value of $\alpha$ reflects a greater decrease in the SFR, i.e., the onset of a decline that potentially leads to quenching, while a smaller $\alpha$ indicates little change in the SFR (or an increase if $\alpha<0$). Although there is considerable scatter in these plots, some trends seem discernible. The top central panel does not show any overall correlation between stellar age and size. Since such a correlation has been observed for quiescent galaxies \citep{Williams14,Fagioli16,Ji22}, this would suggest that these galaxies are not close to initiating the quenching phase. The color-coding suggests, however, that the galaxies for which $\alpha$ is larger, i.e., the SFR in the most recent time bin is smaller than that in the previous bin, have younger stellar populations, which would be consistent with bursty behavior. 
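As a concrete reading of the color parameter, a toy sketch (the SFR values are purely illustrative, not from the catalog):

```python
def alpha(sfr_100myr, sfr_10myr):
    """alpha = SFR(100 Myr) - SFR(10 Myr), as defined in the text.
    alpha > 0: the SFR has dropped toward observation (post-burst decline);
    alpha < 0: the SFR is still rising."""
    return sfr_100myr - sfr_10myr

assert alpha(80.0, 50.0) > 0   # declining after a burst
assert alpha(20.0, 45.0) < 0   # still rising
```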
The bottom central panel shows that galaxies with large $\alpha$ are at the top of the sSFR distribution and span the full range of observed sizes, which would be expected for bursty systems that have not yet started the structural transformations that appear to accompany the quenching process. We note that galaxies with low $\alpha$ seem to preferentially populate the low-sSFR, large-size region of the plot. Both the bottom central and the bottom right panels show a decrease of sSFR with increasing size and density, respectively. This is likely a mass effect, as more massive galaxies intrinsically have denser centers \citep[e.g.,][]{Tacchella21}, are larger (due to the size--mass relation), and have lower sSFRs for a given SFR (since sSFR$\propto M_\star^{-1}$). The top right panel does not appear to contain any trend. Thus, the question of whether the bursts of star formation that occurred in the centers during the $\sim100$ Myr prior to observation result in more compact galaxies, as expected during a gas compaction event \citep{Dekel14,Zolotov15,Tacchella16,Tacchella18} or the merging of star-forming clumps \citep{Dekel09,vanDokkum13}, cannot be answered by this analysis. Compaction may still result in the increased SFRs and the build-up of a dense and compact central structure; our sample and our analysis simply do not provide any evidence for or against it, even if we do appear to systematically detect bursting centers surrounded by more steadily star-forming outskirts. Finally, note that more massive galaxies also tend to be older and experience less of a drop-off in sSFR after the increased star formation levels. As evidenced by Figure \ref{fig:sfh_dens_class}, these galaxies may be interlopers from a more massive population, which tend to grow inside out \citep{vanDokkum10,Carrasco10,vanDokkum14}. \subsection{Main Sequence Evolution} From the SFH we can also predict the evolutionary paths of our galaxies in the SFR vs. 
$M_*$ plane, and their positions relative to the Main Sequence. This is done in Figure \ref{fig:msevol}, which shows the evolution of our sample galaxies over the last two SFH time bins. Most galaxies appear slightly above the SFMS (red shaded region) at 100 Myr prior to observation (green points), consistent with being caught during a substantial burst of star formation. Subsequently, in the next time bin, as the burst subsides they evolve onto the main sequence (yellow points). The majority of galaxies in the sample are also found on the SFMS close to observation (10 Myr bin). This behavior is similar to the confinement of galaxies within the SFMS shown in \cite{Tacchella16}. During the so-called ``blue-nugget'' (compact, star-forming) phase, galaxies move up to the top of the main sequence as SFRs increase, which is seen in the 100 Myr SFRs. \cite{Tacchella16} find that this is followed by movement down to the lower end of the SFMS as gas consumption and feedback stall future star formation. They suggest this behavior occurs on timescales of roughly $\sim100$ Myr, which is supported by the location of our galaxies on the SFMS in the 10 Myr bin, roughly 100 Myr after the increase in SFRs. The predominance of main-sequence galaxies, combined with the notable burst in star formation just before observation, suggests that the appearance of the burst in most of these galaxies is a selection effect of studying star-forming galaxies. The fact that the galaxies in our sample are solidly within the MS, even after most of them experienced a burst which brought them above the MS, perhaps explains why we find no evidence of the structural transformations, i.e., the shrinking of size and increase of central stellar density, that appear to accompany galaxies as they descend below the MS \citep{Cheung12,Barro17,Whitaker17,Lee18,Ji22}: our galaxies are not yet quenching. 
\subsection{Star Formation Rate Gradients}\label{sec:gradients} Using the two-bin, spatially-resolved SFHs of our galaxies, we can also attempt to measure the time evolution of the SFR gradients. SFR density gradients are computed by dividing the SFR of each component by the area of that component. For this purpose, we define the inner and outer components as concentric circular regions bounded by the bulge radius $R$ and the Kron radius \citep{Kron80}, respectively. Figure \ref{fig:gradients} shows the SFR density gradients (colored lines) for all galaxies in the sample across all seven time bins. The sample median gradients are also shown as black dashed lines. As Figure \ref{fig:gradients} suggests, for much of a star-forming galaxy's history the gradients are relatively flat; namely, star formation builds up the inner and outer parts at approximately equal rates. This agrees with previous studies that find significant mass evolution at all radii in Milky Way progenitors \citep[e.g.,][]{vanDokkum13}. However, significant negative gradients in star formation rate density are present across the main sequence at $z\sim1$ \citep{Nelson16}, a significant difference from the roughly equal growth at all radii that we seem to be observing at higher redshifts. In our sample of main-sequence galaxies, we find a dramatic shift to negative SFR gradients $\sim100$ Myr prior to observation (2nd panel, Fig. \ref{fig:gradients}a), which we associate with a rapid build-up of the central regions of the galaxies. This is also reflected in Figure \ref{fig:gradients}b, where on average the gradients become significantly negative at a lookback time of $\sim 100$ Myr. These negative gradients, with approximately the same slope as the gradients reported in \cite{Nelson16}, persist even after the largest increase in SFR at 100-300 Myr lookback time observed in several of our sample galaxies. 
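The two-component gradient measurement can be sketched as follows; the dex-per-kpc slope form and the example numbers are our assumptions (the text specifies only that each component's SFR is divided by its area):

```python
import numpy as np

def sfr_density_gradient(sfr_in, sfr_out, r_bulge, r_kron):
    """Two-point SFR surface-density gradient (dex/kpc) between a circular
    inner region (radius r_bulge) and the annulus out to r_kron.
    SFRs in Msun/yr, radii in kpc."""
    sigma_in = sfr_in / (np.pi * r_bulge ** 2)
    sigma_out = sfr_out / (np.pi * (r_kron ** 2 - r_bulge ** 2))
    return (np.log10(sigma_out) - np.log10(sigma_in)) / (r_kron - r_bulge)

# A center-dominated burst gives a negative gradient (illustrative numbers).
g = sfr_density_gradient(sfr_in=30.0, sfr_out=10.0, r_bulge=1.0, r_kron=5.0)
```

When the two components have equal surface densities, the gradient is flat (zero), matching the "roughly equal growth at all radii" regime described above.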
Figure \ref{fig:gradtrends} examines the evolution of the gradients versus that of the integrated stellar mass, SFR, and sSFR. The negative gradients are steepest when the galaxies experience the largest SFR which, as we have seen from the SFH, happens at relatively recent lookback times, 100-300 Myr before observation. By this time, the galaxies have also assembled nearly all of the stellar mass found at the time of observation. As the bottom panel of Figure \ref{fig:gradtrends} shows, however, this is not the time when the galaxies have the largest sSFR, but rather an intermediate value (in log space). The change in SFR gradient is also consistent with observed gas-phase metallicity gradients in star-forming galaxies. \cite{Simons21} find that the vast majority of star-forming galaxies at $0.6<z<2.6$ have flat or slightly positive metallicity gradients. In order to achieve these flat gradients, most of the mass in the galaxy must be built up evenly across the galaxy, since metals are produced in stars and through stellar evolution. We measure flat SFR gradients on average across $\sim96\%$ of the galaxies' lifetimes, which supports the formation of a flat metallicity gradient. Moreover, \cite{Simons21} find that galaxies at $z\sim0$ have negative metallicity gradients at most masses, which suggests these galaxies must also have negative metallicity gradients for much of their lifetimes. In this case, these galaxies may have shifted to a negative gradient and started to build up their centers, producing increased gas-phase metallicities in the center. \section{Summary}\label{sec:summary} In this paper, we search for detections of forming or recently formed bulges in star-forming galaxies at $z\sim2.3$, an epoch that coincides with the formation of the bulge and chemical thick disk in the MW \citep{Queiroz21,Miglio21}. 
A sample of 60 galaxies is selected from MOSDEF \citep{Kriek15,Sanders18} and the GOODS-N CANDELS/SHARDS photometric catalogs \citep{Barro19} with accurate photometry, spectroscopic redshifts, and metallicities. Galaxies are decomposed into central and outer components via a color-selected circular aperture, from which resolved HST photometry is measured. Using the \textsc{Prospector} code \citep{Johnson21}, we fit SEDs to each galaxy in the sample using an iterative method to account for unresolved light in the ground-based $K$-band and \textit{Spitzer}/IRAC bands. The formation histories of these components provide an interesting insight into the differential formation of ``normal'' galaxies near the peak of cosmic star formation. All but the most massive galaxies show strongly peaked SFHs. While this means we may be observing the inside-out growth of galaxies above $\log(M_\star/M_\odot)>10.5$, it also suggests that a rapid increase in the SFR is responsible for the formation of the centers of lower-mass galaxies. Analysis of the timescales and skewness (Eqs. \ref{eq:tau} and \ref{eq:skew}, respectively) of the galaxies indicates that outskirts tend to have more uniform SFHs and much longer total timescales. Conversely, galactic centers have much more uneven SFHs with short timescales, indicative of a formation history dominated by a late burst of star formation. This increase in SFR may be due to a gas compaction event \citep{Dekel14,Zolotov15,Tacchella16,Tacchella18} or an increase in the rate at which star-forming clumps are accreted into the galactic center \citep{Dekel09,vanDokkum13}, both of which should be reflected in the smaller sizes and larger central densities of galaxies with high sSFRs 100 Myr prior to observation. However, we find no trends in the age or sSFR of the central parts of galaxies with size and central density. 
Compaction of gas or increased clump accretion may still be possible mechanisms, but the subsequent change in morphology would not result in a more compact system. Analysis of the SFR gradients also reveals flat gradients on average for the majority of the galactic lifetimes, in support of previous studies of the mass evolution of Milky Way progenitors \citep{vanDokkum13}. These galaxies then transition to a steep negative gradient at $\sim100$ Myr before observation, mirroring the inside-out growth found in studies of resolved H$\alpha$ emission \citep{Nelson16}. This evolution in the SFR gradient may also provide an explanation for the mostly flat observed metallicity gradients in main-sequence galaxies \citep{Simons21}, which may be due to the long period of time over which these galaxies have flat SFR gradients. \acknowledgements We are grateful to Avishai Dekel and Cristina Chiappini for reading the manuscript and for very useful comments. We acknowledge use of observations with the NASA/ESA \textit{Hubble Space Telescope} obtained from the MAST Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for Program number HST-AR-15798 was provided through a grant from the STScI under NASA contract NAS5-26555. \bibliographystyle{aasjournal} \bibliography{references} \appendix \restartappendixnumbering \section{Verifying the Robustness of \textsc{Prospector} Star Formation Histories}\label{sec:sfhcomp} We have tested the stability and robustness of our reconstruction of the integrated SFHs, i.e., the SFH of each galaxy as a whole and not of centers and outskirts, for our samples against the input photometry, the adoption of the metallicity prior, and the sampling of the lookback time: the 6 bins adopted in the primary runs vs. the 9 bins adopted here. 
Specifically, for each galaxy in the sample we have re-run \textsc{Prospector} (test runs) utilizing the expanded CANDELS photometric catalogs in the GOODS-N field which, in addition to the {\it HST} data taken during the GOODS \citep{Giavalisco04} and CANDELS \citep{Grogin11,Koekemoer11} surveys and the ground- and space-based ancillary data from UV to FIR, include the 25 medium-band optical photometric points acquired during the SHARDS survey \citep{PerezGonzalez13} with the OSIRIS instrument at the 10.4-m Gran Telescopio Canarias (GTC). The photometric apertures of the SHARDS data have been matched to those of the existing CANDELS data \citep{Barro19}. The total number of photometric bands used in the \textsc{Prospector} SED modeling for the test runs is 42. We have then compared the SFHs obtained in these test runs with the corresponding integrated SFHs obtained with the same settings used for the SFHs of the centers and outskirts (primary runs). We have done two test runs, namely with and without the prior on gas-phase metallicity, using the MOSDEF measures that we used for the primary runs. Also, for all test runs we have sampled the SFH in 9 bins of look-back time to test the stability of the SFH with respect to the choice of the time bins. To compare the 9-bin SFHs of the test runs with the 6-bin ones from the primary runs, we have performed a linear interpolation of the SFH of the former at the central value of each bin of the latter. Figure \ref{fig:sfh_xphoto} shows the SFHs derived from the test runs for the whole sample and also in three mass bins, the same adopted for the primary runs. A visual comparison with the integrated SFHs derived from the primary runs shows that the shapes of the SFHs from the two sets of runs are in good qualitative agreement, suggesting that the derivation is robust. 
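The 9-bin-to-6-bin comparison reduces to a linear interpolation at the primary-run bin centers; a minimal sketch, with made-up bin centers and SFRs since the actual bin edges are not quoted in the text:

```python
import numpy as np

# Hypothetical bin centers (lookback time, Myr) and SFRs (Msun/yr) for the
# 9-bin test runs, and the 6-bin primary-run centers; values illustrative only.
t9 = np.array([5., 30., 80., 200., 500., 1000., 1500., 2200., 3000.])
sfr9 = np.array([40., 55., 70., 35., 20., 12., 8., 5., 3.])
t6 = np.array([10., 100., 300., 800., 1600., 2800.])

# Linear interpolation of the 9-bin SFH at the 6-bin centers,
# as done to compare test and primary runs.
sfr_on_6 = np.interp(t6, t9, sfr9)
```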
Figure \ref{fig:sfh_diff} quantifies the difference between the outputs of the two sets of runs: the left panel shows the absolute difference of the SFH of each galaxy in the two sets of runs, while the right panel shows the fractional difference. It can be seen that the median difference between the two sets of runs is essentially zero for all values of the look-back time, while the scatter remains small for the three most recent time bins and increases only at large values of the look-back time, highlighting the difficulty of reconstructing the earliest phases of the SFH. Fortunately, the key results of this work are based on the difference of the SFHs of ``centers'' and ``outskirts'' during later stages of their evolution, at look-back times close to the time of observation. Overall, the agreement between the primary and test runs appears to be very good, demonstrating that the overall shape of the SFH is insensitive to the details of the input SED. Finally, Figure \ref{fig:sfh_zprior} illustrates the sensitivity of the output SFH to the gas-phase metallicity prior. This test is relevant for this work because during the primary runs the same metallicity prior is used for both the centers and the outskirts. A strong dependence of the SFH on the metallicity prior would have diminished the significance of the difference that we have observed for the two regions of the galaxies. As the figure illustrates, the test reveals only small differences in the output SFH, suggesting little sensitivity to the metallicity prior. These differences are substantially smaller than the differences between the SFHs of centers and outskirts, suggesting that the latter are very unlikely to be the result of the over-simplification of assigning the same metallicity to both regions of the galaxies. Figure \ref{fig:metal_diff} shows the difference between the SFHs derived adopting a strong prior on the gas-phase metallicity and without such a prior. 
No systematic difference is observed, on average, between the two cases, with the scatter of the fractional difference (left panel) remaining approximately constant with look-back time. Overall, the test runs show that the reconstruction of the SFHs of the sample galaxies is robust against the input photometry and the assumption of the metallicity prior, supporting the validity of our conclusions. \restartappendixnumbering \section{Example Posterior Distributions and Covariances}\label{sec:excov} Figures \ref{fig:covar_ex_bulge}, \ref{fig:covar_ex_disk}, and \ref{fig:covar_ex_tot} show posterior distributions for the inner, outer, and integrated galaxy components for the example galaxy SED in Figure \ref{fig:SED_ex}. In general, all stellar population parameters are well constrained for both components and the total galaxy.
Title: A 2D Model for Coronal Bright Points: Association with Spicules, UV bursts, Surges and EUV Coronal Jets
Abstract: Coronal Bright Points (CBPs) are ubiquitous structures in the solar atmosphere composed of hot small-scale loops observed in EUV or X-rays in the quiet Sun and coronal holes. They are key elements for understanding the heating of the corona; nonetheless, basic questions regarding their heating mechanisms, the chromosphere underneath, or the effects of flux emergence in these structures remain open. We have used the Bifrost code to carry out a 2D experiment in which a coronal-hole magnetic nullpoint configuration evolves perturbed by realistic granulation. To compare with observations, synthetic SDO/AIA, Solar Orbiter EUI-HRI, and IRIS images have been computed. The experiment shows the self-consistent creation of a CBP through the action of the stochastic granular motions alone, mediated by magnetic reconnection in the corona. The reconnection is intermittent and oscillatory, and it leads to coronal and transition-region temperature loops that are identifiable in our EUV/UV observables. During the CBP lifetime, convergence and cancellation of its underlying opposite polarities take place at the surface. The chromosphere below the CBP shows a number of peculiar features concerning its density and the spicules in it. The final stage of the CBP is eruptive: magnetic flux emergence at the granular scale disrupts the CBP topology, leading to different ejections, such as UV bursts, surges, and EUV coronal jets. Apart from explaining observed CBP features, our results pave the way for further studies combining simulations and coordinated observations in different atmospheric layers.
https://export.arxiv.org/pdf/2208.04308
\title{\Large{A 2D Model for Coronal Bright Points:\\ Association with Spicules, UV bursts, Surges and EUV Coronal Jets}} \correspondingauthor{D. N\'obrega-Siverio} \email{dnobrega@iac.es} \author[0000-0002-7788-6482]{D. N\'obrega-Siverio} \affiliation{Instituto de Astrof\'isica de Canarias, E-38205 La Laguna, Tenerife, Spain} \affiliation{Universidad de La Laguna, Dept. Astrof\'isica, E-38206 La Laguna, Tenerife, Spain} \affiliation{Rosseland Centre for Solar Physics, University of Oslo, PO Box 1029 Blindern, 0315 Oslo, Norway} \affiliation{Institute of Theoretical Astrophysics, University of Oslo, PO Box 1029 Blindern, 0315 Oslo, Norway} \author{F. Moreno-Insertis} \affiliation{Instituto de Astrof\'isica de Canarias, E-38205 La Laguna, Tenerife, Spain} \affiliation{Universidad de La Laguna, Dept. Astrof\'isica, E-38206 La Laguna, Tenerife, Spain} \keywords{magnetohydrodynamics (MHD) --- methods: numerical --- Sun: atmosphere --- Sun: chromosphere --- Sun: corona --- Sun: transition region} \section{INTRODUCTION}\label{sec:intro} Coronal Bright Points (CBPs) are a fundamental building block in the solar atmosphere. Scattered over the whole disc, CBPs consist of sets of coronal loops linking opposite polarity magnetic patches in regions of, typically, from $4$ to $43$ Mm transverse size, with heights ranging from $5$ to $10$ Mm \citep[see the review by][]{Madjarska:2019}. One of their most striking features is the sustained emission, for periods of several hours up to a few days, of large amounts of energy, which lend them their enhanced extreme-ultraviolet (EUV) and X-ray signatures \citep[e.g.,][]{Golub_etal:1974}. A significant fraction of the CBPs are observationally found to be formed as a consequence of chance encounters of opposite magnetic polarities at the surface \citep[e.g.,][]{Harvey:1985,Webb_etal:1993,Mou_etal:2018}. 
The first theoretical explanations of this mechanism came in the 1990s through analytical models under the name of Converging Flux Models \citep{Priest_etal:1994,Parnell_Priest:1995}. There, the approaching motion in the photosphere of two opposite polarities that are surmounted by a nullpoint leads to reconnection and heating of coronal loops. Since then, this idea has been extended and studied using magnetohydrodynamics (MHD) experiments \citep[e.g.,][]{Dreher_etal:1997,Longcope:1998,Galsgaard_etal:2000,von-Rekowski_etal:2006a,Santos_Buchner:2007, Javadi_etal:2011,Wyper_etal:2018b,Priest_etal:2018, Syntelis_etal:2019}. However, the available CBP models are idealized: they rely on ad-hoc driving mechanisms that do not reflect the stochastic granular motions, lack radiative transfer to model the lower layers of the atmosphere, and/or miss optically thin losses and/or thermal conduction to properly capture the CBP thermodynamics. To understand the physics of CBPs, realistic numerical experiments are needed to address important open questions such as (a) the CBP energization, focusing on whether the granulation is enough to drive and sustain the reconnection at coronal heights to explain the CBP lifetimes; (b) the role of magnetic flux emergence, especially at the granular scale, to know whether it can disrupt the CBP topology and originate an eruption; and (c) the chromosphere underneath a CBP, with the aim of unraveling the unexplored impact of CBPs on the spicular activity and vice versa. In this letter, we model a CBP through the evolution of an initial fan-spine nullpoint configuration. We choose this configuration because CBPs often appear above photospheric regions with a parasitic magnetic polarity embedded in a network predominantly of the opposite polarity, which typically leads to a nullpoint structure in the corona \citep{Zhang_etal:2012,Galsgaard_etal:2017,Madjarska_etal:2021}. 
The 2D experiment is carried out with the Bifrost code \citep{Gudiksen_etal:2011}, which self-consistently couples the different layers of the solar atmosphere and incorporates several physical mechanisms not included in CBP modeling in the past. To provide a direct link to observations, we calculate synthetic EUV images for SDO/AIA \citep[][]{Pesnell_etal:2012,Lemen_etal:2012}, and Solar Orbiter (SO)/EUI-HRI \citep[][]{Muller_etal:2020,Rochus_etal:2020}, and UV images for IRIS \citep[]{De-Pontieu_etal:2014}. \section{METHODS}\label{sec:methods} \subsection{Code}\label{sec:code} The experiment has been performed using Bifrost, a radiation-MHD code for stellar atmosphere simulations \citep{Gudiksen_etal:2011}. The code includes radiative transfer from the photosphere to the corona; the main losses in the chromosphere by neutral hydrogen, singly-ionized calcium and magnesium; thermal conduction along the magnetic field lines; optically thin cooling; and an equation of state with the 16 most important atomic elements in the Sun. \subsection{Initial Condition}\label{sec:initial_condition} \subsubsection{Background Stratification}\label{sec:background} The initial condition has been constructed using as background a preexisting 2D numerical simulation with statistically stationary convection in the photosphere and below. It encompasses from the uppermost layers of the solar interior to the corona ($-2.8$~Mm $\leq z \leq 67.0$~Mm, $z=0$ being the solar surface). The horizontal extent is $0.0$~Mm $\leq x \leq 64.0$~Mm. The grid is uniform with $4096\times4096$ cells, yielding a very high spatial resolution of $\Delta x=15.6 $~km and $\Delta z=17.0 $~km. The boundary conditions are the same as for the experiment by \cite{Nobrega-Siverio_etal:2016}. The top panel of Figure~\ref{fig:01} shows the horizontal averages of temperature and density of the background stratification. 
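The quoted grid spacings follow directly from the domain sizes and cell counts; a quick consistency check (pure arithmetic, no simulation data involved):

```python
# Uniform grid: 4096 cells over 64.0 Mm horizontally and over the
# 69.8 Mm vertical extent (from z = -2.8 Mm to z = 67.0 Mm).
nx = nz = 4096
dx_km = 64.0e3 / nx               # 15.625 km, quoted as 15.6 km
dz_km = (67.0 + 2.8) * 1e3 / nz   # ~17.04 km, quoted as 17.0 km
```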
\subsubsection{Nullpoint Configuration}\label{sec:nullpoint} Over the previous snapshot, we have imposed a potential magnetic nullpoint configuration as shown in the bottom panel of Figure \ref{fig:01}. The potential field was calculated from a prescribed distribution at the bottom boundary, $z=-2.8$~Mm. The nullpoint is located at $(x_0, z_0) = (32, 8)$~Mm and the field asymptotically becomes vertical in height with $B_z=-10$~G, mimicking a coronal hole structure. The photospheric field contains a positive parasitic polarity at the center on a negative background. At $z=0$, the total positive flux and maximum vertical field strength are $\Phi^{+}=2.2\times10^{10}$~G~cm and $B_z=41.3$~G, respectively; the parasitic polarity covers 9.7~Mm; and the fan surface extends for 28.1~Mm. We will refer to the closed-loop domains on either side of the inner spine as \textit{chambers}. \section{RESULTS} \label{sec:results} \subsection{Overview of the Experiment}\label{sec:overview} Figure~\ref{fig:02} and associated movie provide an overview of the system evolution using temperature maps and synthetic observables for coronal and transition region (TR) temperatures (see Appendix \ref{app:synthetic}). In the experiment, the granulation quickly distorts the imposed magnetic field and reconnection is triggered as a consequence of perturbations at the nullpoint. We distinguish two well-defined phases: main stage and eruptive stage, summarized in the following and described in Sections~\ref{sec:main_stage} and \ref{sec:emergence}, respectively. The main stage (from $t=0$ to $t\approx 65$~min) covers the appearance of post-reconnection hot loops that leads to a CBP. For instance, at $t=40$~min (top row of Figure~\ref{fig:02}), our CBP is discernible as a set of hot coronal loops with enhanced SDO/AIA~193 and SO/EUI-HRI~174 emission above TR temperature loops visible in IRIS \ion{Si}{4}. 
This matches the SDO/IRIS observations by \cite{Kayshap_Dwivedi:2017}, where CBPs are inferred to be composed of hot loops overlying cooler smaller ones. Our CBP is found in the left chamber, indicating that there is a preferred reconnection direction, and it gets more compact with time before vanishing, as reported in observations \citep{Mou_etal:2018}. In the eruptive stage, the CBP comes to an end. Around $t=67$~min in the animation, a first hot ejection results from reconnection between emerging plasma at granular scale and the magnetic field of the right chamber, resembling the UV burst described by \cite{Hansteen_etal:2019}. More episodes of flux emergence and reconnection take place subsequently, dramatically destabilizing the system and producing more ejections. As an example, at $t=85.84$~min (bottom row of Figure \ref{fig:02}) a large and broad hot coronal jet is seen next to a cool surge, reminiscent of previous results by \cite{Yokoyama_Shibata:1996,Moreno-Insertis_Galsgaard:2013,Nobrega-Siverio_etal:2016}. The synthesis clearly reflects the high spatial resolution of SO/EUI-HRI for studying coronal jets and shows that the surroundings of the surge have significant \ion{Si}{4} emission, similar to observations \citep[e.g.,][]{Nobrega-Siverio_etal:2017,Guglielmino_etal:2019}. \subsection{Main Stage}\label{sec:main_stage} \subsubsection{Magnetic Reconnection and Association with CBP Features}\label{sec:reconnection} To illustrate the crucial role of coronal reconnection for the CBP during the main stage, Panel (a) of Figure \ref{fig:03} contains the magnetic field strength with the SDO/AIA 193 response superimposed. Soon after starting the experiment (see animation), a current sheet (CS) is formed (yellow dots in the panel). 
Focusing on the region delimited by the purple rectangle, we distinguish three clear patterns: (a) the reconnection site slowly drifts to the left; (b) the reconnection and associated heating behave in a bursty way; and (c) the reconnection is oscillatory. The reconnection site's displacement is shown in the top frame of Panel (b). The horizontal position of the CS center, $x_{\mathrm{CS}}$ (black), moves 7.4 Mm to the left in 65 minutes, while its vertical position, $z_{\mathrm{CS}}$ (blue), first rises from 8.0~Mm to 9.2 Mm, and later descends to 7.6 Mm. These results are akin to observations, where CBP nullpoints are inferred to rise, descend, or show both types of behavior in addition to important horizontal displacements \citep{Galsgaard_etal:2017}. The bursty behavior is depicted in the bottom frame of Panel (b). The CS length, $L_{\mathrm{CS}}$, abruptly changes over times of minutes, reaching a maximum of $3.3$~Mm. These fluctuations are well correlated with the Joule heating released in the reconnection site, $Q_{\mathrm{Joule}}$: the Pearson correlation coefficient between both curves is 0.93. Note that when the diffusion region does not have an elongated CS (orange background in the panel), the Joule heating is minimal. This intermittent heating seems to be consistent with observed CBP emission variations over timescales of minutes \citep[see, e.g.,][]{Habbal_Withbroe:1981,Ugarte-Urra_etal:2004,Kumar_etal:2011,Ning_Guo:2014,Chandrashekhar_Sarkar:2015,Gao_etal:2022}. The elongated CS also changes its angle, $\theta_{\mathrm{CS}}$ (defined anticlockwise with respect to the $x$-axis), several times, from, approximately, $45$ to $-45$ degrees, indicating oscillatory reconnection. Panel (b) shows a white/green background for the intervals when $\theta_{\mathrm{CS}}$ is positive/negative. 
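The quoted CS-length/heating correlation is a plain Pearson coefficient between the two time series; a sketch with synthetic stand-in curves (illustrative only, not simulation output):

```python
import numpy as np

# Illustrative time series standing in for the CS length L_CS and the
# Joule heating Q_Joule at the reconnection site.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 65.0, 200)                        # min
l_cs = 1.5 + 1.5 * np.abs(np.sin(0.4 * t))             # Mm, bursty
q_joule = 2.0 * l_cs + 0.1 * rng.normal(size=t.size)   # tightly coupled + noise

# Pearson correlation coefficient between the two curves
# (0.93 is quoted for the simulation's L_CS and Q_Joule).
r = np.corrcoef(l_cs, q_joule)[0, 1]
```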
Most of the time, the angle is positive: the reconnection inflows come from the right chamber and the upper-left part of the external field, while the outflows are located in the left chamber and the upper-right part of the external field. This explains why the left chamber is the one showing the CBP, as hot post-reconnection loops are being deposited predominantly in this region. The predominance seems to be associated with the concentration near the inner spine of the structure, indicated with a blue-dashed line in Panel (a), which mainly moves to the left (see animation). Oscillatory reconnection is also found in CBP observations \citep{Zhang_etal:2014}. \subsubsection{The Photospheric Magnetic Field} \label{sec:main_stage_photospheric} Figure~\ref{fig:03} contains space-time magnetograms near the photosphere ($z=0.15$~Mm) with the actual resolution of the simulation (Panel (c)) and with the resolution reduced to the level of the SDO/HMI instrument (Panel (d), see Appendix \ref{app:synthetic}). To emphasize the CBP evolution, Panel (e) contains a space-time diagram showing the synthesized intensity of SDO/AIA 193 integrated along the vertical line of sight. Panels (a), (c) and (d) show that the magnetic field around the photospheric base is quickly concentrated by the granular motions, leading to strong magnetic patches at the solar surface. In the figure, we have highlighted with colored dashed lines the concentrations that have collected the field lines near the fan surface on each side (orange and magenta) and near the inner spine (blue). The magnetic field concentrations are buffeted and dragged by the granular motions while being substantially deformed in the convection zone. In fact, the one related to the inner spine gets bent several times underneath the surface and develops horizontal magnetized structures (see animation of Panel (a) from $t\approx20$ to $30$~min at $z\approx-1$~Mm and $x$ between 30 and 32~Mm). 
Simultaneously with the convergence at the photosphere of the two strong opposite concentrations (orange and blue lines), the post-reconnection loops in the corona start to brighten up in the EUV bands (see the superimposed SDO/AIA 193 response in Panel~(a) and the space-time diagram of Panel~(e)), marking the appearance of our CBP at around $t=30$~min. From observations, convergence is frequently involved in CBP formation \citep[see, e.g.,][and references therein]{Mou_etal:2016,Mou_etal:2018}. The convergence continues, and the CBP goes through a quiet phase until around $t=65$~min, when the eruptive phase starts, triggered by flux emergence, as explained in the next section. \subsection{Eruptive Stage}\label{sec:emergence} \subsubsection{Magnetic Flux Emergence at the Surface} The strong and complex subphotospheric magnetic structures developing around the inner spine mentioned in the previous section become buoyant and rise. They reach the surface to the right of the inner spine from $t=42$~min onwards, leading to anomalous granulation and increasing the unsigned vertical magnetic flux. The enhanced magnetic pressure in the anomalous granules further pushes the strong positive patch to the left, accelerating the convergence of the two main opposite polarities of the CBP. Panel~(f) shows the total unsigned flux in the horizontal domain of Figure~\ref{fig:03}, i.e., \begin{equation} \Phi = \int_{x_0=14.0\, \mathrm{Mm}}^{x_f=47.5\, \mathrm{Mm}}{\left|B_z (z=0)\right|dx}, \label{eq:flux} \end{equation} with the integral calculated using the $B_z$ distributions in the experiment (black curve) and in the reduced-resolution counterparts for SST/CRISP (green) and SDO/HMI (red). The unsigned flux grows by roughly a factor of two from $t=42$~min to $t=70.67$~min, when it reaches its maximum. Interestingly, in contrast to a high-resolution instrument like SST/CRISP, the reduced resolution of SDO/HMI would miss a significant fraction of the flux. 
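A minimal sketch (not the analysis code of the letter) of the unsigned-flux integral of Equation (\ref{eq:flux}), and of why a lower-resolution magnetogram measures less unsigned flux: block-averaging mixes opposite polarities, which cancel. The toy bipolar field below is purely illustrative.

```python
import numpy as np

def unsigned_flux(bz, dx):
    """Phi ~ sum(|B_z|) * dx, a discrete version of the integral of |B_z| dx."""
    return np.sum(np.abs(bz)) * dx

def degrade(bz, factor):
    """Crude resolution reduction: block-average B_z over `factor` cells."""
    n = (len(bz) // factor) * factor
    return bz[:n].reshape(-1, factor).mean(axis=1)

# Toy bipolar field on x in [14, 47.5] Mm: opposite-polarity patches partially
# cancel when smoothed, mimicking the flux missed by a low-resolution magnetogram.
x = np.linspace(14.0, 47.5, 672)
dx = x[1] - x[0]
bz = np.exp(-((x - 28.0) / 0.5) ** 2) - np.exp(-((x - 29.0) / 0.5) ** 2)

phi_full = unsigned_flux(bz, dx)
phi_lo = unsigned_flux(degrade(bz, 16), 16 * dx)  # smaller: smoothing cancels mixed polarity
```

Since block averaging preserves the signed flux but reduces $|B_z|$ wherever both polarities fall in one resolution element, the degraded estimate is systematically lower, as found for the SDO/HMI curve in Panel (f).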
From this time onwards, the unsigned magnetic flux at the surface decreases while the two main polarities continue converging. This could be interpreted as magnetic cancellation, a phenomenon frequently observed at the end of the lifetime of CBPs \citep[e.g.,][]{Mou_etal:2016,Mou_etal:2018}. \subsubsection{Eruptive Ejections} In the atmosphere, the emerging field quickly expands, interacting with the preexisting magnetic structure of the right chamber and leading to eruptive behavior with several ejections. The first one occurs at $t=67$~min, between $x=24$ and 30~Mm, and it resembles a UV burst because of its enhanced TR emission (see animation of Figure \ref{fig:02}). The evolution of the system becomes quite complex at this stage. The current sheet of the UV burst interacts with the CBP current sheet, disrupting its magnetic configuration and causing the end of our CBP (see the horizontal dark band at $t=71$~min in Panel (d) of Figure~\ref{fig:03}). Another ejection with EUV signatures occurs right after, around $t=74$~min, developing an Eiffel tower shape, followed by the ejection of the broad EUV jet (reaching up to 13~MK) and surge that are shown in the bottom row of Figure~\ref{fig:02}. \subsection{The Chromosphere and Spicular Activity}\label{sec:spicules} To analyze the chromosphere below the CBP, Figure \ref{fig:04} illustrates space-time maps for $\zmax(x)$, defined as the maximum height at which $T=10^4$~K for each $x$, and the corresponding density $\strut \rho(x, \zmax)$ at that point. A major distinction is apparent between the regions inside and outside the fan surface. To clearly separate them, red-solid lines have been drawn at the horizontal position where the fan surface cuts the plane~$z=1.5$~Mm. Those outside have an interesting flake or {\it scale}-like appearance. Each {\it scale} is flanked by a quasi-parabolic trajectory signaling the rise and fall of individual spicules: an example is indicated around $x=5$ Mm and $t\approx44$~min. 
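The $\zmax(x)$ diagnostic of Figure \ref{fig:04} can be sketched as follows; the two-column toy atmosphere (temperatures, heights, and densities) is purely illustrative and approximates the $T=10^4$~K surface by the highest cell at or below that temperature.

```python
import numpy as np

def zmax_and_density(temp, rho, z, t_chrom=1.0e4):
    """For each column, return the maximum height where T <= t_chrom (a proxy
    for the T = 10^4 K surface) and the density at that point (NaN if none)."""
    nx = temp.shape[0]
    zmax = np.full(nx, np.nan)
    rho_top = np.full(nx, np.nan)
    for i in range(nx):
        cool = np.nonzero(temp[i] <= t_chrom)[0]
        if cool.size:
            zmax[i] = z[cool[-1]]
            rho_top[i] = rho[i, cool[-1]]
    return zmax, rho_top

# Toy two-column atmosphere: chromospheric temperatures up to ~2 Mm in the
# first column and ~4 Mm in the second, with an exponential density fall-off.
z = np.linspace(0.0, 10.0, 101)                       # Mm
top = np.array([[2.05], [4.05]])                      # Mm, per-column cool layer
temp = np.where(z[None, :] <= top, 8.0e3, 1.0e6)      # K
rho = 1.0e-7 * np.exp(-z[None, :] / 0.5) * np.ones((2, 1))  # g cm^-3

zmax, rho_top = zmax_and_density(temp, rho, z)        # -> [~2.0, ~4.0] Mm
```

Applied per time step along $x$, maps of this kind yield the space-time diagrams discussed in the text.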
During the main stage, mainly from $t=30$ to $t=65$~min, the strong field in the CBP's opposite polarities (dashed tracks) lowers the height of the chromospheric level below $2$~Mm. As these polarities converge, the chromosphere underneath the CBP (i.e., in the left chamber) gradually moves down, from $4$~Mm to 1~Mm, while the density roughly increases by an order of magnitude there. This region also shows some spicules, predominantly originating near the fan surface, with associated propagating disturbances that cross the magnetic field loops of this region (see examples of both phenomena indicated within the left chamber). The most striking characteristic of the right chamber is the enormous rarefaction during the main stage resulting from the predominant reconnection direction that extracts plasma from here (a fact also found in idealized 3D nullpoint simulations without flux emergence by Moreno-Insertis and Galsgaard, in preparation). In fact, the density can be as low as a few times $10^{-17}$ g cm$^{-3}$: these are coronal densities at cold temperatures. There are also spicular incursions in this chamber, mainly coming from regions around the fan surface (see example around $x=39$~Mm and $t=32$~min), but without associated propagating disturbances. In the eruptive stage, the leftmost part of the domain with $\zmax(x) > 8$~Mm corresponds to the ejection of the surge. Trajectories of multiple dense plasmoids expelled from the reconnection site are also visible with a fibril-like pattern. \section{DISCUSSION} \label{sec:discussion} In this letter we have shown that a wide class of CBPs may be obtained through magnetic reconnection in the corona driven by stochastic granular motions. 
Our numerical experiment has significant differences from previous CBP models: in contrast to, e.g., \citet{Priest_etal:1994,Priest_etal:2018} or \citet{Syntelis_etal:2019,Syntelis_Priest:2020}, the magnetic field topology consists of a nullpoint created by a parasitic polarity within a coronal hole environment; more importantly, the reconnection and creation of the CBP is self-consistently triggered by the convection that naturally occurs in the realistic framework provided by the Bifrost code. The latter feature also sets it apart from the idealized CBP coronal hole model of \citet{Wyper_etal:2018b}, in which a high-velocity horizontal driving at the photosphere is imposed to provide the energy released in the CBP. Our experiment shows striking similarities to observed CBP features in spite of the simplified 2D configuration. For instance, the CBP is composed of loops at different temperatures, with hotter loops overlying cooler smaller ones, and enhanced EUV/UV emission akin to observations \citep{Kayshap_Dwivedi:2017}. The projected length of these loops is around 8-10 Mm, which lies at the lower end of the range of CBP sizes \citep{Madjarska:2019}. We can also reproduce other distinguishable observational features such as the motion of the nullpoint \citep{Galsgaard_etal:2017}, the convergence of the CBP footpoints \citep[][and references therein]{Madjarska:2019}, the brightness variations over periods of minutes \citep{Habbal_Withbroe:1981,Ugarte-Urra_etal:2004,Kumar_etal:2011,Ning_Guo:2014,Chandrashekhar_Sarkar:2015,Gao_etal:2022}, as well as the oscillatory behavior \citep{Zhang_etal:2014}. Magnetic flux emergence is crucial for the formation of roughly half of the CBPs \citep{Mou_etal:2018} and it can enhance the activity of already existing CBPs \citep{Madjarska_etal:2021}. 
In our model, flux emergence plays a role in the final eruptive stage of the CBP: only a few granules were affected by the emergence, but the consequences for the CBP and the subsequent phenomena are enormous. Observations with high-resolution magnetograms (e.g. by SST/CRISP) are needed to explore this relationship and to discern whether hot/cool ejections in the late stage of CBPs \citep[e.g.,][]{Hong_etal:2014,Mou_etal:2018,Galsgaard_etal:2019,Madjarska_etal:2022} may also follow, directly or indirectly, from small-scale flux emergence episodes. The chromosphere underneath the CBP in the model shows a number of remarkable features. Spicules mainly originate from the fan surface (accompanied by propagating disturbances), which could perturb the CBP brightness (see, e.g., \citealp[][]{Madjarska_etal:2021} and Bose et al. in preparation). The chromospheric level at the CBP footpoints lies at low heights, while there is a chamber that gets greatly rarefied because of the reconnection, reaching coronal densities with chromospheric temperatures. These results are potentially interesting for unraveling the chromospheric counterpart of CBPs and nullpoint configurations in general and need to be explored using coordinated observations as well. \begin{acknowledgements} This research has been supported by the European Research Council through the Synergy Grant number 810218 (``The Whole Sun'', ERC-2018-SyG) and by the Spanish Ministry of Science, Innovation and Universities through project PGC2018-095832-B-I00. The authors acknowledge the computer resources at the MareNostrum supercomputing installation and the technical support provided by the Barcelona Supercomputing Center (BSC, RES-AECT-2021-1-0023), as well as the support by the International Space Science Institute (ISSI, Berne) to the team \textit{Unraveling surges: a joint perspective from numerical models, observations, and machine learning}. The authors thank Dr. 
Fr\'ed\'eric Auch\`ere for his help in computing the synthetic observables for SO/EUI-HRI and Luc Rouppe van der Voort for illuminating conversations related to SST/CRISP. The authors are also grateful to Drs. Klaus Galsgaard and Maria Madjarska as well as to the two referees for their interesting comments and advice. DNS acknowledges support by the Research Council of Norway through its Centres of Excellence scheme, project number 262622, and through grants of computing time from the Programme for Supercomputing. \end{acknowledgements} \appendix \section{SYNTHETIC OBSERVABLES}\label{app:synthetic} For the coronal and TR emissivity images, we assume statistical equilibrium and coronal abundances \citep{Feldman:1992}. Thus, the emissivity can be computed as \begin{equation} \epsilon = n_H\ n_e\ G(T, n_e)\quad [\mathrm{erg}\, \mathrm{cm}^{-3}\, \mathrm{s}^{-1}\, \mathrm{sr}^{-1}], \label{eq:emiss} \end{equation} where $n_e$ is the electron number density, $n_H$ is the hydrogen number density, and $G(T, n_e)$ is the contribution function. For the coronal 193 channel of the Atmospheric Imaging Assembly \citep[AIA;][]{Lemen_etal:2012} onboard the Solar Dynamics Observatory \citep[SDO;][]{Pesnell_etal:2012}, we generate a lookup table for $G(T, n_e)$ with {\tt aia\_get\_response.pro} from SSWIDL with the flags {\tt /temp} and {\tt /dn}, varying the electron number density from $10^6$ to $10^{13}$ cm$^{-3}$. Once we compute the emissivity, we degrade it to the SDO/AIA spatial resolution, which is 1\farcs5 \citep{Lemen_etal:2012}. To obtain the intensity map shown in Panel (e) of Figure~\ref{fig:03}, we integrate the emissivity along the vertical line-of-sight assuming no absorption from cool and dense features like the surge. In addition, we degrade the space-time map to the SDO/AIA cadence, which is 12~s. 
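The emissivity evaluation of Equation (\ref{eq:emiss}) amounts to a table lookup followed by a product of densities; the sketch below uses a placeholder contribution function (the real table comes from {\tt aia\_get\_response.pro} in SSWIDL) and, for brevity, omits the density dependence of $G$.

```python
import numpy as np

def emissivity(n_h, n_e, temp, log_t_grid, g_of_t):
    """eps = n_H * n_e * G(T)  [erg cm^-3 s^-1 sr^-1], with G interpolated in
    log10(T) from a lookup table (the density dependence is omitted here)."""
    g = np.interp(np.log10(temp), log_t_grid, g_of_t)
    return n_h * n_e * g

# Placeholder contribution function peaking near log T = 6.2, roughly where the
# SDO/AIA 193 channel responds; magnitudes are illustrative only.
log_t_grid = np.linspace(4.0, 8.0, 81)
g_of_t = 1.0e-24 * np.exp(-((log_t_grid - 6.2) / 0.2) ** 2)

eps = emissivity(1.0e9, 1.0e9, 10**6.2, log_t_grid, g_of_t)
```

Evaluating this per grid cell and summing along the vertical line-of-sight gives intensity maps such as Panel (e) of Figure~\ref{fig:03}.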
For the coronal 174 channel of the High Resolution Imager of the Extreme Ultraviolet Imager \citep[EUI-HRI;][]{Rochus_etal:2020} on Solar Orbiter \citep[SO;][]{Muller_etal:2020}, we use the contribution function privately provided by Dr. Fr\'ed\'eric Auch\`ere, member of the Solar Orbiter team, since, at the moment of writing this letter, the team is implementing the function in SSWIDL. The results are afterwards degraded to the EUI-HRI resolution, which corresponds to a $100$~km pixel size for perihelion observations \citep{Rochus_etal:2020}. For the Interface Region Imaging Spectrograph \citep[IRIS;][]{De-Pontieu_etal:2014} TR \ion{Si}{4} 1393.755 \AA\ line, we create a lookup table employing {\tt ch\_synthetic.pro} from SSWIDL with the flag {\tt /goft}, varying the electron number density from $10^6$ to $10^{13}$ cm$^{-3}$. The output of this routine is multiplied by the silicon abundance relative to hydrogen to obtain the contribution function. To transform from CGS units to IRIS counts, the emissivity is multiplied by $(A\, p\, w\, \lambda)/(k\, r^2\, h\, c)$, where $A = 2.2$ cm$^2$ pix$^{-1}$ is the effective area for wavelengths between $1389$ and $1407$~\AA, $p=0\farcs167$ is the spatial pixel size, $w=0\farcs33$ is the slit width, $\lambda=1393.755$~\AA\ is the wavelength of interest, $k=4$ is the number of photons per DN in the case of FUV spectra, $r=3600 \cdot 180/\pi$ is the number of arcsec per radian, $h$ is the Planck constant, and $c$ is the speed of light. Finally, we degrade the results to the IRIS spatial resolution of 0\farcs33 \citep[see][]{De-Pontieu_etal:2014}. Note that we have used statistical equilibrium as an assumption; however, nonequilibrium ionization effects are relevant for TR lines such as \ion{Si}{4} 1393.755 \AA. 
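The conversion factor $(A\, p\, w\, \lambda)/(k\, r^2\, h\, c)$ combines the pixel solid angle ($p\,w/r^2$, in sr), the effective area, and the photon energy $hc/\lambda$; a sketch of its numerical evaluation in CGS units, with the parameter values quoted above, follows.

```python
import math

H_PLANCK = 6.62607e-27   # erg s
C_LIGHT = 2.99792458e10  # cm s^-1

def iris_dn_factor(area=2.2, pix=0.167, slit=0.33,
                   wavelength=1393.755e-8, photons_per_dn=4.0):
    """The factor (A p w lambda)/(k r^2 h c): area in cm^2 pix^-1, pix and slit
    in arcsec, wavelength in cm; takes erg cm^-2 s^-1 sr^-1 to DN s^-1 pix^-1."""
    arcsec_per_rad = 3600.0 * 180.0 / math.pi
    solid_angle = pix * slit / arcsec_per_rad**2         # sr seen by one pixel
    photons_per_erg = wavelength / (H_PLANCK * C_LIGHT)  # invert E = h*c/lambda
    return area * solid_angle * photons_per_erg / photons_per_dn

factor = iris_dn_factor()  # ~0.05 for the Si IV 1393.755 A configuration
```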
In fact, in dynamic phenomena like surges, statistical equilibrium underestimates the real population of \ion{Si}{4} ions, so the actual emissivity could be larger than the one shown in Figure~\ref{fig:02} \citep[see][and references therein for details]{Nobrega-Siverio_etal:2018}. Regarding the SDO/HMI magnetogram of Panel (d) of Figure~\ref{fig:03}, we simply reduce the spatial/time resolution of Panel (c) to the instrumental HMI values: 1\arcsec $\approx$ 726~km at 6173\AA\ and $45$~s of time cadence \citep{Scherrer_etal:2012}. The same approach is used for SST/CRISP \citep{Scharmer_etal:2008}, which has a resolution of 0\farcs13 $\approx$~94 km at 6301\AA\ and a $6$~s cadence, to obtain the unsigned vertical magnetic flux shown in Panel (f) of the figure. The chosen height to illustrate the magnetograms, $z=0.15$~Mm, is an approximation of the formation height of the \ion{Fe}{1} lines in which HMI and CRISP observe. \section{CURRENT SHEET ANALYSIS}\label{app:current_sheet} To analyze the current sheet (CS) behavior, we focus on the inverse characteristic length of the magnetic field \begin{equation} L_B^{-1} = \frac{ \left| \nabla \times \hbox{\textbf{\textit{B}}} \right| }{ \left| \hbox{\textbf{\textit{B}}} \right|}. \label{eq:lb} \end{equation} This quantity allows us to locate where abrupt changes of $B$ occur. The analysis is limited to the region within $22 \leq x \leq 40$~Mm and $5.5 \leq z \leq 12$~Mm (purple rectangle in Panel (a) of Figure \ref{fig:03}), and to $0 \leq t \leq 65$~min to avoid secondary current sheets related to the flux emergence episode described in Section \ref{sec:emergence}. In this region, we take all the grid points with $L_B^{-1}\geq 0.01$~km$^{-1}$, so $L_B \leq 100$~km (yellow dots in Panel (a) of Figure \ref{fig:03}), and perform a linear fit to their spatial distribution. The goodness of the fit (the $r^2$ parameter) tells us whether there is a collapsed/elongated CS or not. 
We have selected $r^2\geq 0.8$ as the criterion for a collapsed/elongated CS. Thus, we obtain the horizontal and vertical center position of the CS ($x_{\mathrm{CS}}$ and $z_{\mathrm{CS}}$, respectively); its length ($L_{\mathrm{CS}}$); and its angle ($\theta_{\mathrm{CS}}$), defined in the anticlockwise direction with respect to the $x$ axis, which is helpful to detect oscillatory reconnection. For example, at $t=17.67$~min, the linear fit is $z = -0.988\, x + 38.8$~Mm, with $L_{\mathrm{CS}}=1.08$~Mm, $r^2 = 0.946$, and $\theta_{\mathrm{CS}}=-44.6$ degrees, meaning that there is a collapsed CS whose reconnection inflows come from the left chamber and the upper-right part of the external field. In contrast, at $t=40.00$~min, the time shown in Figure \ref{fig:03}, the linear fit is $z = 0.927\, x - 17.4$~Mm, with $L_{\mathrm{CS}}=1.94$~Mm, $r^2 = 0.964$, and $\theta_{\mathrm{CS}}=42.8$ degrees, indicating that there is also an elongated CS but with reconnection inflows now coming from the right chamber and the upper-left part of the external field. At $t=23.33$~min, instead, the $r^2$ parameter is 0.383, so there is no collapsed/elongated CS. \bibliography{references} \bibliographystyle{aasjournal}
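The detection procedure (threshold on $L_B^{-1}$, linear fit, $r^2$ criterion, then center, length, and angle) can be sketched as follows; the scattered test points, drawn around the fit quoted at $t=40$~min, are illustrative and not the simulation data.

```python
import numpy as np

def detect_current_sheet(x, z, inv_lb, thresh=0.01, r2_min=0.8):
    """Select points with L_B^-1 >= thresh (km^-1), fit z = a*x + b, and return
    (x_cs, z_cs, L_cs, theta_cs, r2), or None if no collapsed/elongated CS."""
    mask = inv_lb >= thresh
    xs, zs = x[mask], z[mask]
    if xs.size < 2:
        return None
    a, b = np.polyfit(xs, zs, 1)
    ss_res = np.sum((zs - (a * xs + b)) ** 2)
    ss_tot = np.sum((zs - zs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0
    if r2 < r2_min:
        return None
    span = xs.max() - xs.min()
    length = np.hypot(span, a * span)          # extent of the fitted segment
    theta = np.degrees(np.arctan(a))           # anticlockwise from the x axis
    return xs.mean(), zs.mean(), length, theta, r2

# Points scattered tightly about z = 0.927 x - 17.4 (the fit quoted at
# t = 40 min) should be flagged as an elongated CS with theta near 43 degrees.
rng = np.random.default_rng(1)
x_pts = np.linspace(30.0, 31.4, 50)
z_pts = 0.927 * x_pts - 17.4 + 0.01 * rng.normal(size=x_pts.size)
res = detect_current_sheet(x_pts, z_pts, np.full(x_pts.size, 0.02))
```

Points with no linear alignment yield a low $r^2$ and are rejected, mirroring the $t=23.33$~min case in the text.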
Title: Beam combiner for the Asgard/BIFROST instrument
Abstract: BIFROST will be a short-wavelength ($\lambda$ = 1.0 - 1.7$\mu$m) beam combiner for the VLT Interferometer, combining both high spatial ($\lambda$/2B = 0.8 mas) and spectral (up to R = 25,000) resolution. It will be part of the Asgard Suite of visitor instruments. The new window of high spectral resolution, short wavelength observations brings with it new challenges. Here we outline the instrumental design of BIFROST, highlighting which beam combiner subsystems are required and why. This is followed by a comparison of an All-In-One (AIO) beam combination scheme and an Integrated Optics (IO) scheme with ABCD modulation, both in terms of expected sensitivity and the practical implementation of each system.
https://export.arxiv.org/pdf/2208.04968
\keywords{optical interferometry, Asgard/BIFROST, beam combination, high spectral resolution, VLTI} \section{Introduction} \label{sec:intro} BIFROST is a planned visitor instrument for the Very Large Telescope Interferometer (VLTI) and part of the ASGARD suite of instruments (Martinod et al. 2022)\cite{asgard_martinod}. With its short wavelength (J band) and high spectral resolution (R~=~25,000), it will open up a new observational window at the VLTI. A few key science drivers include measuring precision dynamical masses and ages for GAIA binaries, measuring the spin-orbit alignment for directly imaged exoplanets and off-axis exoplanet spectroscopy (Kraus et al. 2022)\cite{bifrost_kraus}. The instrument itself is still in the design phase, a key part of which is identifying which subsystems are necessary for the instrument to function. In addition, the implications of the different beam combination schemes, namely an All-In-One (AIO) or an Integrated Optics (IO) beam combiner, must be considered. In this proceeding we detail our design considerations to date, outlining the need for an Atmospheric Dispersion Corrector (ADC), Longitudinal Dispersion Corrector (LDC), Lithium Niobate Plates (LNP) and a fiber injection unit. This is followed by a discussion of the trade-offs between an All-In-One (AIO) and Integrated Optics (IO) beam combination system. \section{Atmospheric dispersion corrector} \label{sec:adc} The variation of the refractive index of the atmosphere as a function of wavelength gives rise to angular dispersion. This is the effect whereby the observed position of an object on-sky changes as a function of wavelength. For an instrument such as BIFROST, which couples light through single mode fibers, to first order its effective field of view on-sky is the diffraction-limited PSF of the telescope, which is around 70 mas at $\lambda$~=~\SI{1.1}{\micro\meter} on the Unit Telescopes (UTs). 
If the light across one band is dispersed over an angular range larger than the field of view of the fiber then not all wavelengths will be coupled into the fiber simultaneously. In that case an ADC will be required to remove the atmospheric angular dispersion. To determine if an ADC is necessary for BIFROST we used the atmospheric model of SCIFYsim (Laugier et al. 2021)\cite{2021sf2a.conf..339L} to estimate the on-sky offset for light at the edges with respect to the centres of our bands for the reasonable worst case scenario, an object at an altitude of 30$^{\circ}$ observed on the UTs. The results are shown in figure~\ref{fig:BIFROST_ADC_requirement}. Figure~\ref{fig:BIFROST_ADC_requirement} shows the position of the centres and edges of our Y+J (\SI{1.0}{} - \SI{1.4}{\micro\meter}) and H (\SI{1.5}{} - \SI{1.7}{\micro\meter}) band modes. The circles represent the approximate size of the diffraction limited PSFs for the various wavelengths. The black dashed rings are the effective field of view of the fiber, centred on the central wavelength of each of the two bands. In the H band, when the fiber is positioned at the centre of the band, light from the edges does still fall within the field of view of the fiber. The edges of the band are displaced by 60\% of the radius of the fiber's field of view. Approximating the beam profile that couples into the fiber as a Gaussian, the beam intensity is given by \begin{equation} I(r) = I(0) \textrm{exp}\left(\dfrac{-2r^2}{\omega(0)^2}\right), \end{equation} where $I(0)$ is the intensity at the peak, $r$ the distance from the centre of the beam and $\omega(0)$ the Gaussian beam radius (the radius at which the intensity has decreased to $1/e^2$). Therefore, for a beam displaced by 60\% of the fiber radius (the fiber radius being taken as $\omega(0)$) the intensity of the beam at the core of the fiber is only 50\% of the peak of the beam, leading to significant coupling losses. 
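The 50\% figure follows directly from the Gaussian profile: $\exp(-2 \cdot 0.6^2) \approx 0.49$. A one-line sketch of this estimate:

```python
import math

def relative_intensity(displacement_frac):
    """Gaussian beam intensity at a radial offset given as a fraction of the
    beam radius omega(0): I(r)/I(0) = exp(-2 r^2 / omega(0)^2)."""
    return math.exp(-2.0 * displacement_frac**2)

# A beam displaced by 60% of the fiber's field-of-view radius delivers only
# about half of the on-axis intensity to the fiber core.
drop = relative_intensity(0.6)  # ~0.49
```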
For the Y+J mode the result is even more significant: as figure~\ref{fig:BIFROST_ADC_requirement} shows, the field of view of the fiber placed at the centre of the band does not overlap at all with the edges of the band, leading to zero flux being coupled into the fiber from wavelengths at the edges of the band. Given these results, an ADC will be required for BIFROST when observing with the UTs. Due to the smaller telescope diameters of \SI{1.8}{\meter} on the Auxiliary Telescopes (ATs), the fiber field of view will be larger, and hence an ADC may not be required. Observations at higher altitudes will also place looser constraints on the need for an ADC. One option would be to design an ADC that can be inserted into and removed from the beam path, such that it can be used for pointings at low altitudes, especially on the UTs, and removed for observations where the loss of throughput due to the glass of the ADC exceeds the gain in coupling efficiency into the optical fibers. \section{Longitudinal Dispersion Corrector} \label{Sec:LDC} Observations of targets not at zenith require a different path length through the delay lines for the different beams of starlight being combined to compensate for the geometric path delay. As the refractive index of air varies as a function of wavelength, if the optical path length of two beam lines is equalised at one wavelength (the fringe tracking wavelength) there will be a residual Optical Path Difference (OPD) error between the beam lines at other wavelengths. To demonstrate this we have built a model to estimate the visibility loss as a function of the difference in air path between the beam lines. As the difference in air path increases, so does the residual OPD error, which will push the interference fringes away from the centre of the coherence envelope, reducing the observed fringe contrast, in the worst case scenario removing the interference fringes entirely. 
A Longitudinal Dispersion Corrector (LDC) is designed to counteract the differential optical path length in air. The length of the coherence envelope is given by $L = \lambda^2/\Delta\lambda$ where $\lambda$ is the central wavelength of observation and $\Delta\lambda$ the range of wavelengths being interfered. Therefore an LDC is more likely to be needed at lower spectral resolutions where each spectral channel is more broadband. Haniff (2007)\cite{2007NewAR..51..565H} gives the maximum visibility due to the coherence envelope to be \begin{equation} V = \mathrm{sinc}\left(\dfrac{\pi D\Delta\lambda}{\lambda^2}\right), \end{equation} where $D$ is the OPD between the two beams being interfered. To establish if an LDC is required for BIFROST we plot the visibility as a function of the difference in air path between the two beam lines. The OPD is given by $\textrm{OPD} = l(n_{\textrm{obs}} - n_{\textrm{ft}})$, where $l$ is the difference in air path, $n_{\textrm{obs}}$ is the refractive index of air at the wavelength of observation and $n_{\textrm{ft}}$ that at the fringe tracking wavelength. For wavelengths in the range $\lambda$ = \SI{0.3}{} - \SI{1.69}{\micro\meter} we use the model of Ciddor (1996)\cite{1996ApOpt..35.1566C} and for $\lambda$ = \SI{1.7}{} - \SI{2.5}{\micro\meter} that of Mathar (2007)\cite{2007JOptA...9..470M}. Ambient conditions of temperature 15$^\circ$C and a pressure of 1014 mbar are assumed. The wavelength range of a spectral channel, $\Delta\lambda$, is estimated from the spectral resolution $R$. While fringe tracking for Asgard is performed in the K band, it is possible to use the differential delay lines of BIFROST to change the wavelength which has zero OPD error by applying a static offset from the main delay lines. Therefore we examine the fringe contrast loss at the edges of the bands for the two modes of BIFROST (Y+J, $\lambda$ = \SI{1}{} - \SI{1.4}{\micro\meter} and H, $\lambda$ = \SI{1.5}{} - \SI{1.7}{\micro\meter}) when the OPD error is set to zero at the centre of each band. 
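The coherence-envelope formula can be evaluated directly; a sketch, assuming $\Delta\lambda = \lambda/R$ per spectral channel and taking $D$ as the residual OPD error (in the model above, $D = l(n_{\textrm{obs}} - n_{\textrm{ft}})$), follows. Note that NumPy's `sinc` is normalised, $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$, so the factor of $\pi$ is absorbed into the argument.

```python
import numpy as np

def coherence_visibility(opd, lam, r_spec):
    """V = sinc(pi * D * dlam / lam^2) with dlam = lam / R for a channel of
    spectral resolution R; np.sinc already includes the factor of pi."""
    dlam = lam / r_spec
    return float(np.abs(np.sinc(opd * dlam / lam**2)))

# First null of the envelope: D * dlam / lam^2 = 1, i.e. D = R * lam. At R = 50
# and lam = 1.2 um that is only 60 um of residual OPD, while at R = 1000 the
# same residual OPD barely reduces the fringe contrast.
d_null = 50 * 1.2e-6                                   # m
v_null = coherence_visibility(d_null, 1.2e-6, 50)      # ~0
v_high_r = coherence_visibility(d_null, 1.2e-6, 1000)  # ~1
```

This illustrates why the low-resolution (R~=~50) mode is far more sensitive to residual dispersion than the R~=~1,000 mode.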
The results are shown in figure~\ref{fig:no_LDC_vis_loss}. As the figure shows, the visibility losses for the R~=~1,000 spectral mode are negligible; the loss in throughput from the extra glass of the LDC will likely result in a worse SNR than if no LDC is placed in the beam path. The results are more ambiguous for the lowest spectral mode planned for BIFROST, R~=~50. In the H band visibilities drop to 88\% for differential air paths of \SI{100}{\meter}. The effect is even more pronounced in the J band, where the fringe contrast drops to zero (the first null of the coherence envelope) for an air path of \SI{35}{\meter}. Such a large loss in visibility in the low resolution arm would make it unusable in the J band, making an LDC necessary for BIFROST. Now that the need for an LDC has been established, its location in the beam train must be considered. The first option is to put it in the BIFROST-only part of the Asgard beam train, which will maximise the throughput of the fringe tracker Heimdallr. However, it will lead to non-common path OPD errors between Heimdallr and BIFROST as the thickness of the LDCs is adjusted. This could in theory be compensated by modelling the change in OPD as the LDC glass thickness is adjusted and correcting this with the DDLs of BIFROST, but placing the LDC in the common path of Heimdallr and BIFROST would be the simpler implementation. \section{Lithium Niobate plates} \label{sec:Lithium_niobate_plates} The use of optical fibers can reduce the instrumental contrast if orthogonal polarisation states have unmatched phases. To minimise this effect we will follow the design of MIRC-X (Anugu et al. 2020)\cite{2020AJ....160..158A} and PIONIER (Lazareff et al. 2012)\cite{2012A&A...543A..31L} and place Lithium Niobate (LiNbO3) plates in the beam path which can be rotated to introduce a differential phase shift between the polarisation states. 
An example of the effects of these plates in the MIRC-X instrument is shown in figure~\ref{fig:MIRC_X_LNB}, which shows the significant increase in instrumental visibility as the phases of the two polarisation states are equalised. \section{Fiber injection unit} One of the biggest technical risks to the development of BIFROST is maintaining good injection into the single mode fibers, owing to the tight alignment tolerances involved in coupling atmospherically perturbed starlight into a single mode fiber with an $\sim$~\SI{6}{\micro\meter} core. The primary role of the fiber injection unit is to take the collimated beams of starlight and couple them into the fibers; however, it will also act as the Differential Delay Lines (DDLs) of BIFROST to enable us to cophase BIFROST with the other instruments on the Asgard table, as well as allowing us to apply global offsets to the path length as discussed in section~\ref{Sec:LDC}. On the Asgard table at the VLTI we will have one Deformable Mirror (DM) per beam line controlled by the Baldr instrument to correct any wavefront errors in the beams reaching the Asgard table. This should mean that once aligned with the beams coming from the DMs, the fiber injection into BIFROST will be stable, with only long term instrumental drifts of the optics between the DMs and the BIFROST fibers affecting the coupling. However, we must have a way to correct for these instrumental drifts, either by steering the beams towards the fibers, as is done in the MYSTIC combiner (Monnier et al. 2018)\cite{2018SPIE10701E..22M}, or by moving the fiber tips to the beams, as is done in the MIRC-X combiner (Anugu et al. 2020)\cite{2020AJ....160..158A}. The layout of one design we are considering is shown in figure~\ref{fig:MYSTIC_style_injection} and is the same layout as implemented by MYSTIC. Here the first optic is the DDL, a flat mirror that is motorised to move along one axis to add or remove optical path for a single beam. 
This is followed by a Fast Steering Mirror (FSM) which allows for tip-tilt corrections to steer the beam and optimise injection into the fiber. Finally, there is the Off-Axis-Parabola (OAP) and Single Mode Fiber (SMF) unit, which are mounted together to maintain the optical alignment between the two. The advantage of this scheme is that the OAP and SMF do not move. This ensures that the optical alignment of the OAP and SMF (which have very tight alignment tolerances with respect to each other, see Mortimer (2021)\cite{Mortimer_PhD_thesis}) will not be lost due to vibrations or the non-repeatability of moving mounts. The disadvantage of such a scheme is that it introduces an extra two reflections (the DDL and FSM). Taking a typical reflection loss for a protected-gold-coated mirror of 2.5\% in the infrared, this gives a 5\% reduction in throughput. An alternative layout is shown in figure~\ref{fig:simple_style_injection}. Here the dedicated DDL and FSM mirrors are removed, and instead the OAP and SMF common mount (represented by the grey box in figure~\ref{fig:simple_style_injection}) is motorised in all three axes: along the direction of the incoming beams to add additional path length, and within the plane of the incoming beam to centre the OAP on the beam. The advantage of this scheme is that it requires fewer optics, improving the throughput and reducing the wavefront error. The disadvantage of this scheme is that the OAP-fiber unit must move, which will change the fiber's shape. As an example, the MIRC-X DDLs have a range of \SI{14}{\milli\meter} (Anugu et al. 2020)\cite{2020AJ....160..158A}. More work is required to verify if moving the fibers by such distances could alter the differential phase of orthogonal polarisation states (which would need a subsequent correction by the LNP, section~\ref{sec:Lithium_niobate_plates}) or even change the optical path length through the fibers. 
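The quoted ~5\% throughput penalty is simply $1 - 0.975^2 \approx 0.049$; a one-line sketch of this estimate:

```python
def extra_reflection_loss(n_mirrors=2, reflectivity=0.975):
    """Fractional throughput loss from adding n protected-gold reflections,
    each with the quoted ~2.5% infrared reflection loss."""
    return 1.0 - reflectivity**n_mirrors

loss = extra_reflection_loss()  # ~0.049, i.e. the quoted ~5% penalty
```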
\section{Beam combination unit} Once the beams of starlight exit the single mode optical fibers they will need to be interfered with each other via the beam combiner, to produce interference fringes on all six baselines for our four telescope combiner. We are considering two different beam combination schemes, the AIO which has been utilised by a number of current and planned combiners (Monnier et al. 2018; Anugu et al. 2020; Mortimer et al. 2020)\cite{2018SPIE10701E..22M,2020SPIE11446E..0NA,2020SPIE11446E..0VM} as well as IO which is also utilised by a number of combiners (Le Bouquin et al. 2011; Gravity Collaboration et al. 2017; Pannetier et al. 2020)\cite{2011A&A...535A..67L,2017A&A...602A..94G,2020SPIE11446E..0TP}. In the following sections we present a comparison of the two combination methods against different metrics such as sensitivity and the practicalities of implementing and operating each system. \subsection{Sensitivity} Defining sensitivity by estimating a Signal to Noise Ratio (SNR) is not trivial in optical interferometry as it depends on many factors such as the number of photons reaching the detector, the visibility of the interference fringes, the number of telescopes being interfered and detector read noise. Here we calculate the ratio of the SNR for an AIO and IO four telescope combiner in two regimes, observations of very bright targets where we are photon noise dominated, and very faint targets where we are read noise dominated. 
Our SNR calculations are based on the equation presented in Mourard et al. (2017)\cite{2017JOSAA..34A..37M}, itself derived from Gordon \& Buscher (2012)\cite{2012A&A...541A..46G}, and given by \begin{equation} \label{equ:powerspectrum_SNR} \textrm{SNR} = \dfrac{\left(\dfrac{VF_{0}}{N_{\textrm{tel}}}\right)^2}{\sqrt{\textrm{PhotonNoise} + \textrm{ReadNoise} + \textrm{CoupledTerms}}}, \end{equation} where $\textrm{PhotonNoise}$ and $\textrm{ReadNoise}$ represent the variance due to photon noise and read noise respectively, and $\textrm{CoupledTerms}$ gives the coupling terms between the two noise sources. The $\textrm{PhotonNoise}$ term is given by \begin{equation} \textrm{PhotonNoise} = 2N_{\textrm{ph}}\left(\dfrac{VF_{0}}{N_{\textrm{tel}}}\right)^2 + N_{\textrm{ph}}^2, \end{equation} the $\textrm{ReadNoise}$ term by \begin{equation} \textrm{ReadNoise} = N_{\textrm{pix}}\sigma^2 + N_{\textrm{pix}}^2\sigma^4, \end{equation} and the $\textrm{CoupledTerms}$ by \begin{equation} \textrm{CoupledTerms} = 2\left(\dfrac{VF_{0}}{N_{\textrm{tel}}}\right)^2N_{\textrm{pix}}\sigma^2 + 2N_{\textrm{ph}}N_{\textrm{pix}}\sigma^2. \end{equation} Here $V$ is the measured visibility of the interference fringes, $F_0$ the number of stellar photons, $N_{\textrm{tel}}$ the number of telescopes being combined, $N_{\textrm{ph}}$ the total number of photons recorded in the interferogram (from the source, sky background and thermal photons), $N_{\textrm{pix}}$ the number of pixels the fringes were sampled over, and $\sigma$ the read noise of each pixel. \subsubsection{Photon noise dominated} \label{sec:photon_noise_dom} Following the methodology outlined in Mortimer (2021)\cite{Mortimer_PhD_thesis}, under the assumption that photon noise is the dominant source of noise (as is the case for very bright targets) and that the only source of photons are stellar photons (i.e. 
$N_{\textrm{ph}} = F_0$), the SNR in equation \ref{equ:powerspectrum_SNR} can be significantly simplified to \begin{equation} \textrm{SNR} = \dfrac{V\sqrt{F_0}}{\sqrt{2}N_{\textrm{tel}}}. \end{equation} It is then possible to take the ratio of the SNRs for the AIO and IO beam combination schemes, given by \begin{equation} \label{eq:snr_photon_noise} \dfrac{\textrm{SNR}_{\textrm{AIO}}}{\textrm{SNR}_{\textrm{IO}}} = \sqrt{\dfrac{F_{0_{\textrm{AIO}}}}{F_{0_{\textrm{IO}}}}}\dfrac{V_{\textrm{AIO}}N_{\textrm{tel}_{\textrm{IO}}}}{V_{\textrm{IO}}N_{\textrm{tel}_{\textrm{AIO}}}}. \end{equation} The first step is to estimate the relative number of stellar photons that are detected by comparing the throughput of the AIO and IO combination schemes ($F_{0_{\textrm{AIO}}}/F_{0_{\textrm{IO}}}$). A typical AIO combiner optimised for sensitivity can reach an optical throughput of $\sim$ 80\% (Mortimer 2021)\cite{Mortimer_PhD_thesis}, whereas an IO combiner can reach around 65\% (Benisty et al. 2009)\cite{2009A&A...498..601B}. There are, however, two caveats to these values. First, for the AIO combiner, as all the beams are combined into the same fringe packet, we will implement photometric channels similar to MIRC-X (Anugu et al. 2020)\cite{2020AJ....160..158A}, which take 20\% of the light, leaving around 64\% in the interferogram. Second, the 65\% for the IO chip is the sum of the flux at all outputs; however, as the combiner is pairwise (each interferogram combines only one baseline at a time), the light in each interferogram is only 1/3 of the total from each input, since it is split evenly between the three baselines each telescope goes on to form, giving a throughput of around 22\% per interferogram. Therefore $F_{0_{\textrm{AIO}}}/F_{0_{\textrm{IO}}} \approx 3$. The next term to consider is the instrumental fringe contrast: the visibility that would be recorded for an ideal point source. 
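As a sanity check, the full expression of equation~\ref{equ:powerspectrum_SNR} can be evaluated numerically and compared against the bright-source simplification. The sketch below does this in Python; the photon count, visibility, and pixel values are illustrative assumptions, not instrument parameters.

```python
import math

def power_spectrum_snr(V, F0, N_tel, N_ph, N_pix, sigma):
    """Power-spectrum SNR: signal (V*F0/N_tel)^2 over the root of the
    summed photon-noise, read-noise and coupled variance terms."""
    signal = (V * F0 / N_tel) ** 2
    photon_noise = 2 * N_ph * signal + N_ph ** 2
    read_noise = N_pix * sigma ** 2 + N_pix ** 2 * sigma ** 4
    coupled = 2 * signal * N_pix * sigma ** 2 + 2 * N_ph * N_pix * sigma ** 2
    return signal / math.sqrt(photon_noise + read_noise + coupled)

# Bright-source limit: only stellar photons (N_ph = F0), read noise negligible.
F0 = 1e8                       # illustrative photon count
full = power_spectrum_snr(V=0.8, F0=F0, N_tel=4, N_ph=F0, N_pix=60, sigma=0.3)
simplified = 0.8 * math.sqrt(F0) / (math.sqrt(2) * 4)
# For this bright a source the two agree to well under 1%.
```

This confirms that for large $F_0$ the $2N_{\textrm{ph}}(VF_0/N_{\textrm{tel}})^2$ term dominates the noise, which is exactly the term that produces the $V\sqrt{F_0}/(\sqrt{2}N_{\textrm{tel}})$ simplification.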
For IO combiners, PIONIER has been shown to have an instrumental contrast of 80\% in wideband observations, tending towards 100\% if spectral dispersion is used (Benisty et al. 2009)\cite{2009A&A...498..601B}, and GRAVITY has demonstrated 95\% for polarised light (Perraut et al. 2018)\cite{2018A&A...614A..70P}; we therefore adopt an estimate of 95\%. In an AIO combiner the interference fringes are each sampled at different spatial frequencies so that the signal from each baseline can be separated during data analysis; however, the spatial frequencies have to be sufficiently separated to avoid crosstalk, meaning that for a given number of telescopes the fringes have to span a certain range of spatial frequencies (Mortimer \& Buscher 2022)\cite{2022MNRAS.511.4619M}. This leads to competing requirements: one wishes to minimise the number of pixels read out in the interferogram to minimise read noise, but also to sample the highest-spatial-frequency fringes well enough that there is no significant loss in contrast due to poor pixel sampling. A compromise is to sample the highest-frequency fringes at three pixels per fringe cycle. The instrumental contrast in this case can be estimated by $V = \textrm{sinc}(\pi f)$, where $f$ is the fringe frequency in cycles/pixel. For three pixels/cycle this gives $V$ = 82\%, which we adopt here. Finally, $N_{\textrm{tel}_{\textrm{AIO}}}$ = 4 as all four telescopes are combined simultaneously, whereas $N_{\textrm{tel}_{\textrm{IO}}}$ = 2 as it is a pairwise combiner. Entering the above values into equation~\ref{eq:snr_photon_noise} we estimate $\textrm{SNR}_{\textrm{AIO}}/\textrm{SNR}_{\textrm{IO}}$ = 0.71 in this photon-noise-dominated regime, giving IO a marginal but not insignificant advantage over AIO despite the three-times-higher throughput of AIO. This is mainly because the SNR scales linearly with the telescope-number term, which favours the pairwise scheme, but only as the square root of the throughput ratio, which favours AIO. 
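The calculation of equation~\ref{eq:snr_photon_noise} with the values above is simple enough to script. The sketch below reproduces it in Python; note that the result depends on how the intermediate quantities are rounded, so it lands near, rather than exactly on, the 0.71 quoted.

```python
import math

def snr_ratio_photon_noise(F0_ratio, V_aio, V_io, N_tel_aio=4, N_tel_io=2):
    """SNR_AIO / SNR_IO in the photon-noise-dominated regime:
    sqrt(F0 ratio) * (V_AIO * N_tel_IO) / (V_IO * N_tel_AIO)."""
    return math.sqrt(F0_ratio) * (V_aio * N_tel_io) / (V_io * N_tel_aio)

# V_AIO = sinc(pi*f) with f = 1/3 cycles/pixel (3 pixels per fringe cycle)
V_aio = math.sin(math.pi / 3) / (math.pi / 3)   # ~0.82
ratio = snr_ratio_photon_noise(F0_ratio=3.0, V_aio=V_aio, V_io=0.95)
# ratio comes out at ~0.7-0.75 depending on the rounding of the inputs
```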
\subsubsection{Read noise dominated} \label{sec:read_noise_dominated} Read noise in detectors used for optical interferometry has improved dramatically in recent years, from around 10 e-/pixel (Le Bouquin et al. 2011)\cite{2011A&A...535A..67L} to around 0.3 e-/pixel with the SAPHIRA detector (Lanthermann et al. 2019)\cite{2019A&A...625A..38L}. However, as Mortimer (2021)\cite{Mortimer_PhD_thesis} showed, read noise can still be significant for faint targets. The simplified SNR equation in this case is found by assuming that only read noise contributes to the noise, so the SNR is given by \begin{equation} \textrm{SNR} = \dfrac{\left(\dfrac{VF_{0}}{N_{\textrm{tel}}}\right)^2}{\sqrt{N_{\textrm{pix}}\sigma^2 + N_{\textrm{pix}}^2\sigma^4}}. \end{equation} Again taking the ratio of the SNRs for the AIO and IO combination schemes and rearranging, we arrive at \begin{equation} \dfrac{\textrm{SNR}_{\textrm{AIO}}}{\textrm{SNR}_{\textrm{IO}}} = \left(\dfrac{V_{\textrm{AIO}}F_{0_{\textrm{AIO}}}N_{\textrm{Tel}_{\textrm{IO}}}}{V_{\textrm{IO}}F_{0_{\textrm{IO}}}N_{\textrm{Tel}_{\textrm{AIO}}}}\right)^2 \sqrt{\dfrac{N_{\textrm{pix}_{\textrm{IO}}}\sigma^2 + N_{\textrm{pix}_{\textrm{IO}}}^2\sigma^4}{N_{\textrm{pix}_{\textrm{AIO}}}\sigma^2 + N_{\textrm{pix}_{\textrm{AIO}}}^2\sigma^4}}. \end{equation} The values of all the terms have already been defined in section~\ref{sec:photon_noise_dom}, except for the number of pixels the interferogram is read out over. For IO, in theory only four pixels need to be read out to sample the four outputs per baseline; however, we assume this is raised to eight pixels (two per output) to minimise the risk posed to the instrument if the light happens to land on a bad pixel. The number of pixels read out for an AIO combiner can be significantly larger. 
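The read-noise-dominated ratio can be scripted in the same way. In the sketch below the IO interferogram uses the eight pixels discussed above and $\sigma = 0.3$ e-/pixel as for the SAPHIRA detector; the AIO pixel count is left as a parameter, with 60 pixels used purely as an assumed example value.

```python
import math

def snr_ratio_read_noise(V_aio, V_io, F0_ratio, N_tel_aio, N_tel_io,
                         n_pix_aio, n_pix_io, sigma):
    """SNR_AIO / SNR_IO in the read-noise-dominated regime."""
    signal = (V_aio * F0_ratio * N_tel_io / (V_io * N_tel_aio)) ** 2
    # Read-noise variance for an interferogram read out over n pixels
    read = lambda n: n * sigma ** 2 + n ** 2 * sigma ** 4
    return signal * math.sqrt(read(n_pix_io) / read(n_pix_aio))

V_aio = math.sin(math.pi / 3) / (math.pi / 3)   # ~0.82, as before
ratio = snr_ratio_read_noise(V_aio, 0.95, 3.0, 4, 2,
                             n_pix_aio=60, n_pix_io=8, sigma=0.3)
# With these rounded inputs the ratio comes out near 0.3,
# consistent with the ~0.29 quoted in the text.
```

Because the signal term enters squared here, rather than linearly as in the photon-noise regime, the throughput and contrast differences are amplified, and the smaller IO pixel count adds a further advantage.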
We start by stating the conditions for sampling our interference fringes: we wish to sample at least three pixels/cycle on the highest-frequency fringes and at least four cycles of the lowest-spatial-frequency fringes (as is the case for MIRC-X (Anugu et al. 2020)\cite{2020AJ....160..158A}). We can relate these conditions by looking at the ideal non-redundant spacing for a four-telescope interferometer. Following the methodology discussed in Mortimer \& Buscher (2022)\cite{2022MNRAS.511.4619M}, the beam spacing that minimises the range of spatial frequencies while also minimising crosstalk in the power spectrum is shown in figure \ref{fig:BIFROST_power_spectrum}, where the highest-spatial-frequency fringes are five times higher than the lowest. Therefore, if three pixels/cycle are sampled on the highest-spatial-frequency fringes, then 15 pixels/cycle will be sampled on the lowest; multiplying this by four to sample four cycles results in 60 pixels being read out in the AIO scheme. It is worth noting that this beam spacing is not necessarily the configuration that would be used in practice, but instead represents the most compact layout (hence requiring the fewest pixels read out) that could be used while still avoiding the worst effects of crosstalk. This gives $\textrm{SNR}_{\textrm{AIO}}/\textrm{SNR}_{\textrm{IO}}$ = 0.29, making the IO combination scheme approximately 3.5$\times$ more sensitive in this read-noise-dominated regime. \subsection{Implementation} In addition to sensitivity, the practical implementation of each beam combination scheme must be considered, as both schemes have their advantages and disadvantages. With regards to procurement, AIO combiners can be cheaper and faster to deliver as they can be built from mostly off-the-shelf optical components. 
This removes any design or tooling costs involved in making low-production-run custom optical components, and they can be delivered much faster as there is no development lead time. The opposite is true for IO combiners: owing to the low volume and unique use case, each IO chip to date has been designed and fabricated specifically for each combiner. Considering the practical implementation of such systems, IO offers the advantage that once the chip has been fabricated the alignment should be much more stable, as it is a monolithic unit, whereas the AIO system requires many individual optics to be carefully aligned with respect to each other to work. This advantage of IO could be significant for BIFROST, which is to be a visitor instrument at the VLTI, as the alignment of the instrument will not be maintained between observing runs; this could lead to a significant overhead if the instrument has to be realigned at the start of each observing run. Another advantage of IO to consider is its compact size: a four-telescope IO chip is of order a few centimetres in size (Benisty et al. 2009)\cite{2009A&A...498..601B}, compared to of order \SI{50}{\centi\meter} for an AIO combiner (Mortimer 2021)\cite{Mortimer_PhD_thesis}. This reduced footprint could be significant for BIFROST, which will share one optical table with four other interferometric instruments as part of the Asgard Suite. The final consideration is the chromaticity of the combiners. AIO offers the advantage that it can be made from mostly reflective components, making it fairly achromatic; combiners have been built that are capable of operating over broad wavelength ranges such as $\lambda$~=~\SI{1.1}{} - \SI{2.4}{\micro\meter} (Mortimer 2021)\cite{Mortimer_PhD_thesis}. 
IO combiners, on the other hand, are chromatic; early discussions with vendors suggest that BIFROST will require two IO combiners, one designed to operate in the Y+J band ($\lambda$ = \SI{1.0}{}~-~\SI{1.4}{\micro\meter}) and one in the H band ($\lambda$ = \SI{1.5}{}~-~\SI{1.7}{\micro\meter}). Such a two-chip system will add cost and complexity to the design of BIFROST. \section{Future work} In this proceeding we have outlined the critical subsystems required for the operation of BIFROST at the VLTI, as well as presented a comparison between the AIO and IO beam combination schemes. The next steps in the development of the instrument will be to design the critical subsystems and progress towards a preliminary design review of BIFROST in Q1 of 2023. This will be followed by the final design review and integration of the instrument at the University of Exeter before commissioning at the VLTI in late 2024. \acknowledgments % We acknowledge support from ERC Consolidator Grant (Grant Agreement ID 101003096) and STFC Consolidated Grant (ST/V000721/1). \bibliography{report} % \bibliographystyle{spiebib} %
Title: Incorporating baryon-driven contraction of dark matter halos in rotation curve fits
Abstract: The condensation of baryons within a dark matter (DM) halo during galaxy formation should result in some contraction of the halo as the combined system settles into equilibrium. We quantify this effect on the cuspy primordial halos predicted by DM-only simulations for the baryon distributions observed in the galaxies of the SPARC database. We find that the DM halos of high surface brightness galaxies (with $\Sigma_{\rm eff}\gtrsim100$ $L_\odot$ pc$^{-2}$ at 3.6 $\mu$m) experience strong contraction. Halos become more cuspy as a result of compression: the inner DM density slope increases with the baryonic surface mass density. We iteratively fit rotation curves to find the balance between initial halo parameters (constrained by abundance matching), compression, and stellar mass-to-light ratio. The resulting fits often require lower stellar masses than expected for stellar populations, particularly in galaxies with bulges: stellar mass must be reduced to make room for the DM it compresses. This trade off between dark and luminous mass is reminiscent of the cusp-core problem in dwarf galaxies, but occurs in more massive systems: the present-epoch DM halos cannot follow from cuspy primordial halos unless (1) the stellar mass-to-light ratios are systematically smaller than expected from standard stellar population synthesis models, and/or (2) there is a net outward mass redistribution from the initial cusp, even in massive galaxies widely considered to be immune from such effects.
https://export.arxiv.org/pdf/2208.04326
\longtab[1]{ \begin{longtable}{lccccccccc} \caption{Best-fit parameters for stellar disks, bulges, primordial NFW halos and compressed halos. $V_{200}$ and $C_{200}$ are for the primordial NFW halos; $\rho_s$, $\gamma$ and $\beta$ are the parameters of the compressed halos in the ($\alpha$, $\beta$, $\gamma$) models, where the transition parameter has been fixed at $\alpha=1$; $r_s$ is the shared scale length of the prior-compression and post-compression halos given it is the unit in which the adiabatic contraction is calculated.} \label{tab:parameters}\\ \hline\hline {Galaxy name} & {$\Upsilon_{\rm disk}$} & {$\Upsilon_{\rm bulge}$} & {$V_{200}$} & {$C_{200}$} & {$r_s$} & {$\log(\rho_s)$} & {$\gamma$} & {$\beta$} & $\chi^2_\nu$ \\ {}& {($M_\odot/L_\odot$)} & {($M_\odot/L_\odot$)} & {(km/s)} & {} & {(kpc)} & {[$M_\odot$ kpc$^{-3}$]} & {} & {} \\ \hline \endfirsthead \caption{continued}\\ \hline {Galaxy name} & {$\Upsilon_{\rm disk}$} & {$\Upsilon_{\rm bulge}$} & {$V_{200}$} & {$C_{200}$} & {$r_s$} & {$\log(\rho_s)$} & {$\gamma$} & {$\beta$} & $\chi^2_\nu$\\ {}& {($M_\odot/L_\odot$)} & {($M_\odot/L_\odot$)} & {(km/s)} & {} & {(kpc)} & {[$M_\odot$ kpc$^{-3}$]} & {} & {} \\ \hline \endhead \hline \endfoot \hline \endlastfoot DDO161 & 0.16 & \dots & 60.70 & 4.35 & 19.11 & 5.97 & 1.14 & 3.45 & 5.31\\ DDO170 & 0.13 & \dots & 48.34 & 7.54 & 8.78 & 6.64 & 0.97 & 3.37 & 5.52\\ ESO079-G014 & 0.77 & \dots & 214.72 & 3.38 & 87.14 & 5.65 & 1.11 & 3.11 & 4.94\\ ESO116-G012 & 0.25 & \dots & 88.09 & 10.59 & 11.39 & 6.91 & 1.08 & 3.22 & 5.46\\ ESO563-G021 & 0.75 & \dots & 447.78 & 3.08 & 199.13 & 5.34 & 1.23 & 2.15 & 21.58\\ F561-1 & 0.10 & \dots & 57.28 & 3.57 & 21.97 & 5.91 & 1.03 & 3.86 & 3.12\\ F563-1 & 0.98 & \dots & 80.53 & 8.45 & 13.06 & 6.67 & 1.09 & 3.18 & 2.02\\ F563-V1 & 0.10 & \dots & 49.34 & 2.52 & 26.82 & 5.47 & 1.10 & 3.90 & 7.19\\ F563-V2 & 1.00 & \dots & 81.58 & 8.97 & 12.46 & 6.81 & 1.08 & 3.48 & 2.13\\ F565-V2 & 1.00 & \dots & 63.84 & 8.40 & 10.41 & 6.66 & 1.07 & 
3.17 & 2.28\\ F567-2 & 0.10 & \dots & 52.53 & 5.85 & 12.30 & 6.44 & 0.92 & 3.47 & 1.85\\ F568-1 & 1.00 & \dots & 96.21 & 8.48 & 15.54 & 6.72 & 1.11 & 3.29 & 1.98\\ F568-3 & 1.00 & \dots & 118.81 & 2.26 & 71.91 & 5.34 & 1.09 & 4.00 & 4.72\\ F568-V1 & 0.98 & \dots & 84.43 & 10.26 & 11.27 & 6.89 & 1.13 & 3.21 & 0.27\\ F571-8 & 0.18 & \dots & 131.47 & 7.54 & 23.90 & 6.43 & 1.14 & 2.90 & 11.71\\ F571-V1 & 0.38 & \dots & 65.47 & 7.76 & 11.56 & 6.61 & 1.06 & 3.29 & 1.12\\ F574-1 & 0.55 & \dots & 85.18 & 6.28 & 18.57 & 6.40 & 1.11 & 3.35 & 2.42\\ F574-2 & 0.10 & \dots & 54.72 & 3.29 & 22.79 & 5.93 & 0.91 & 4.11 & 8.72\\ F579-V1 & 0.39 & \dots & 83.32 & 11.33 & 10.08 & 7.05 & 1.06 & 3.27 & 0.61\\ F583-1 & 0.68 & \dots & 69.06 & 5.82 & 16.25 & 6.33 & 1.04 & 3.39 & 2.57\\ F583-4 & 0.17 & \dots & 57.31 & 8.16 & 9.62 & 6.65 & 1.03 & 3.24 & 0.52\\ NGC0024 & 0.90 & \dots & 84.77 & 8.20 & 14.17 & 6.55 & 1.31 & 3.25 & 0.79\\ NGC0055 & 0.15 & \dots & 72.95 & 5.67 & 17.62 & 6.26 & 1.11 & 3.38 & 7.81\\ NGC0100 & 0.22 & \dots & 71.13 & 7.96 & 12.24 & 6.61 & 1.08 & 3.21 & 2.61\\ NGC0247 & 1.00 & \dots & 120.71 & 2.83 & 58.47 & 5.51 & 1.13 & 3.68 & 1.82\\ NGC0289 & 0.44 & \dots & 133.15 & 5.66 & 32.20 & 6.30 & 1.22 & 3.45 & 2.02\\ NGC0300 & 0.27 & \dots & 78.46 & 8.94 & 12.02 & 6.70 & 1.09 & 3.14 & 1.66\\ NGC1003 & 0.58 & \dots & 108.99 & 3.77 & 39.57 & 5.85 & 1.11 & 3.53 & 2.65\\ NGC1090 & 0.37 & \dots & 124.18 & 5.89 & 28.89 & 6.39 & 1.17 & 3.65 & 3.05\\ NGC1705 & 1.00 & \dots & 62.22 & 10.30 & 8.28 & 6.44 & 1.54 & 2.12 & 0.26\\ NGC2403 & 0.29 & \dots & 99.56 & 10.38 & 13.13 & 6.82 & 1.21 & 3.07 & 10.92\\ NGC2683 & 0.46 & 0.20 & 109.07 & 8.19 & 18.24 & 6.40 & 1.72 & 2.93 & 2.54\\ NGC2841 & 1.00 & 0.49 & 330.63 & 3.28 & 138.21 & 4.91 & 1.77 & 0.87 & 1.34\\ NGC2915 & 0.29 & \dots & 58.67 & 15.77 & 5.10 & 7.23 & 1.17 & 2.91 & 1.02\\ NGC2976 & 0.73 & \dots & 81.44 & 1.58 & 70.47 & 5.16 & 1.18 & 10.97 & 2.24\\ NGC2998 & 0.57 & \dots & 172.89 & 4.15 & 57.10 & 5.98 & 1.16 & 3.53 & 2.76\\ 
NGC3198 & 0.50 & \dots & 123.29 & 5.35 & 31.56 & 6.19 & 1.21 & 3.37 & 2.06\\ NGC3726 & 0.36 & \dots & 139.52 & 4.29 & 44.53 & 5.90 & 1.26 & 3.32 & 2.97\\ NGC3769 & 0.24 & \dots & 89.60 & 9.18 & 13.37 & 6.71 & 1.25 & 3.14 & 0.76\\ NGC3877 & 0.30 & \dots & 133.60 & 6.57 & 27.85 & 6.67 & 1.05 & 4.71 & 5.11\\ NGC3893 & 0.36 & \dots & 133.12 & 8.16 & 22.34 & 6.46 & 1.43 & 2.98 & 1.05\\ NGC3917 & 0.89 & \dots & 157.22 & 2.82 & 76.38 & 5.54 & 1.11 & 3.85 & 3.92\\ NGC3949 & 0.29 & \dots & 102.30 & 8.41 & 16.66 & 6.43 & 1.56 & 2.95 & 0.56\\ NGC3953 & 0.38 & \dots & 155.87 & 6.69 & 31.94 & 6.36 & 1.39 & 3.44 & 0.79\\ NGC3972 & 0.48 & \dots & 101.03 & 7.03 & 19.68 & 6.57 & 1.14 & 3.88 & 3.79\\ NGC3992 & 0.76 & \dots & 254.00 & 2.79 & 124.68 & 5.30 & 1.33 & 2.94 & 0.91\\ NGC4010 & 0.26 & \dots & 96.48 & 7.60 & 17.40 & 6.57 & 1.16 & 3.38 & 5.13\\ NGC4013 & 0.20 & 1.00 & 146.62 & 4.81 & 41.78 & 5.86 & 1.48 & 2.91 & 1.58\\ NGC4085 & 0.22 & \dots & 91.73 & 9.58 & 13.12 & 6.76 & 1.28 & 3.40 & 10.04\\ NGC4088 & 0.23 & \dots & 124.44 & 5.88 & 29.01 & 6.23 & 1.37 & 3.35 & 0.75\\ NGC4100 & 0.52 & \dots & 118.39 & 7.02 & 23.11 & 6.21 & 1.66 & 2.86 & 1.20\\ NGC4157 & 0.39 & 0.10 & 154.57 & 4.95 & 42.78 & 6.11 & 1.20 & 3.43 & 0.45\\ NGC4183 & 0.36 & \dots & 80.52 & 8.66 & 12.73 & 6.75 & 1.11 & 3.33 & 0.23\\ NGC4217 & 1.00 & 0.10 & 153.42 & 7.96 & 26.39 & 6.45 & 1.35 & 3.04 & 3.50\\ NGC4389 & 0.10 & \dots & 77.98 & 6.55 & 16.31 & 6.46 & 1.15 & 3.99 & 9.85\\ NGC4559 & 0.20 & \dots & 88.53 & 8.69 & 13.95 & 6.72 & 1.15 & 3.25 & 0.27\\ NGC5005 & 0.39 & 0.39 & 239.94 & 6.07 & 54.17 & 5.93 & 1.53 & 2.90 & 0.09\\ NGC5585 & 0.10 & \dots & 71.62 & 8.48 & 11.58 & 6.66 & 1.08 & 3.16 & 7.25\\ NGC5985 & 0.19 & 0.88 & 189.24 & 17.65 & 14.69 & 7.44 & 1.16 & 3.09 & 2.69\\ NGC6015 & 0.55 & \dots & 114.75 & 6.74 & 23.33 & 6.42 & 1.29 & 3.36 & 7.42\\ NGC6503 & 0.29 & \dots & 87.13 & 10.01 & 11.92 & 6.64 & 1.43 & 2.84 & 1.58\\ NGC6674 & 0.61 & 0.97 & 282.57 & 2.15 & 179.68 & 4.37 & 1.95 & 0.99 & 5.46\\ 
NGC6789 & 1.00 & \dots & 48.95 & 11.63 & 5.76 & 6.88 & 1.23 & 3.88 & 8.75\\ NGC7793 & 0.40 & \dots & 83.80 & 7.03 & 16.32 & 6.48 & 1.23 & 3.81 & 1.24\\ NGC7814 & 0.93 & 0.33 & 166.31 & 7.22 & 31.55 & 5.91 & 1.84 & 1.77 & 0.50\\ UGC00128 & 0.33 & \dots & 98.38 & 8.73 & 15.44 & 6.83 & 0.94 & 3.40 & 11.47\\ UGC00191 & 0.26 & \dots & 62.71 & 9.15 & 9.39 & 6.79 & 1.07 & 3.28 & 5.50\\ UGC00634 & 0.45 & \dots & 81.44 & 8.56 & 13.03 & 6.72 & 1.07 & 3.26 & 23.03\\ UGC00731 & 0.84 & \dots & 55.33 & 7.99 & 9.48 & 6.73 & 0.97 & 3.42 & 0.33\\ UGC00891 & 0.50 & \dots & 54.05 & 6.87 & 10.77 & 6.42 & 1.12 & 3.22 & 38.20\\ UGC01230 & 0.41 & \dots & 80.13 & 8.50 & 12.91 & 6.78 & 1.02 & 3.37 & 1.35\\ UGC01281 & 0.26 & \dots & 50.36 & 6.49 & 10.63 & 6.47 & 1.00 & 3.62 & 2.40\\ UGC02023 & 0.10 & \dots & 50.27 & 7.29 & 9.45 & 6.57 & 1.03 & 3.46 & 2.20\\ UGC02259 & 0.97 & \dots & 70.85 & 8.26 & 11.74 & 6.56 & 1.25 & 3.16 & 1.71\\ UGC02455 & 0.10 & \dots & 57.72 & 2.89 & 27.31 & 5.65 & 1.18 & 5.60 & 6.63\\ UGC02487 & 1.00 & 0.83 & 454.69 & 1.44 & 432.12 & 3.60 & 2.15 & -0.93 & 5.07\\ UGC02885 & 0.11 & 0.69 & 254.34 & 6.39 & 54.53 & 5.99 & 1.57 & 2.46 & 1.32\\ UGC03205 & 0.55 & 0.77 & 173.22 & 4.53 & 52.38 & 5.76 & 1.53 & 2.82 & 2.57\\ UGC03580 & 0.26 & 0.11 & 103.55 & 6.74 & 21.06 & 6.35 & 1.20 & 3.11 & 3.11\\ UGC04278 & 0.35 & \dots & 68.67 & 7.20 & 13.06 & 6.50 & 1.08 & 3.20 & 4.12\\ UGC04325 & 1.00 & \dots & 72.96 & 8.82 & 11.33 & 6.79 & 1.13 & 3.75 & 3.22\\ UGC04499 & 0.15 & \dots & 56.49 & 8.88 & 8.71 & 6.79 & 1.03 & 3.33 & 1.55\\ UGC05005 & 0.28 & \dots & 72.21 & 5.98 & 16.54 & 6.38 & 1.03 & 3.41 & 2.00\\ UGC05414 & 0.15 & \dots & 53.33 & 7.34 & 9.96 & 6.56 & 1.06 & 3.44 & 5.43\\ UGC05716 & 0.35 & \dots & 58.50 & 8.14 & 9.85 & 6.66 & 1.04 & 3.25 & 2.33\\ UGC05721 & 0.39 & \dots & 52.37 & 19.84 & 3.62 & 7.51 & 1.19 & 2.97 & 1.39\\ UGC05750 & 0.20 & \dots & 65.04 & 5.05 & 17.65 & 6.19 & 1.01 & 3.44 & 2.34\\ UGC05829 & 0.16 & \dots & 48.15 & 6.82 & 9.67 & 6.62 & 0.93 & 3.64 & 0.72\\ 
UGC05918 & 0.10 & \dots & 39.35 & 8.06 & 6.68 & 6.69 & 0.98 & 3.34 & 0.24\\ UGC05986 & 0.40 & \dots & 97.83 & 9.28 & 14.44 & 6.75 & 1.13 & 3.21 & 10.60\\ UGC05999 & 0.39 & \dots & 74.29 & 6.92 & 14.70 & 6.51 & 1.05 & 3.35 & 10.26\\ UGC06399 & 0.54 & \dots & 71.57 & 8.07 & 12.16 & 6.62 & 1.11 & 3.27 & 1.95\\ UGC06446 & 0.81 & \dots & 65.09 & 8.76 & 10.18 & 6.67 & 1.17 & 3.13 & 0.27\\ UGC06628 & 0.10 & \dots & 56.82 & 4.11 & 18.93 & 6.08 & 1.02 & 3.96 & 3.07\\ UGC06667 & 1.00 & \dots & 72.99 & 8.34 & 11.99 & 6.72 & 1.01 & 3.35 & 2.69\\ UGC06786 & 0.28 & 0.40 & 156.49 & 13.31 & 16.11 & 6.99 & 1.39 & 2.91 & 0.62\\ UGC06917 & 0.34 & \dots & 82.52 & 9.02 & 12.54 & 6.74 & 1.11 & 3.22 & 1.68\\ UGC06923 & 0.18 & \dots & 61.94 & 10.11 & 8.39 & 6.85 & 1.13 & 3.22 & 1.67\\ UGC06930 & 0.32 & \dots & 80.03 & 8.38 & 13.08 & 6.72 & 1.08 & 3.34 & 0.37\\ UGC06973 & 0.13 & 0.23 & 128.14 & 12.43 & 14.13 & 6.64 & 1.62 & 2.13 & 1.31\\ UGC06983 & 0.40 & \dots & 79.61 & 10.28 & 10.61 & 6.88 & 1.12 & 3.18 & 0.72\\ UGC07089 & 0.12 & \dots & 61.68 & 6.59 & 12.82 & 6.45 & 1.04 & 3.36 & 1.63\\ UGC07125 & 0.10 & \dots & 49.72 & 4.38 & 15.53 & 6.23 & 0.96 & 3.78 & 1.73\\ UGC07151 & 0.50 & \dots & 68.13 & 6.23 & 14.98 & 6.31 & 1.22 & 3.46 & 4.28\\ UGC07232 & 0.28 & \dots & 41.62 & 12.20 & 4.67 & 6.89 & 1.20 & 2.76 & 8.35\\ UGC07261 & 0.39 & \dots & 63.21 & 8.28 & 10.46 & 6.58 & 1.21 & 3.09 & 0.23\\ UGC07323 & 0.23 & \dots & 70.53 & 6.93 & 13.93 & 6.52 & 1.07 & 3.57 & 3.46\\ UGC07399 & 0.99 & \dots & 77.94 & 12.28 & 8.69 & 6.92 & 1.25 & 2.83 & 3.36\\ UGC07524 & 0.31 & \dots & 69.47 & 6.20 & 15.35 & 6.42 & 1.02 & 3.45 & 1.21\\ UGC07603 & 0.21 & \dots & 50.15 & 12.50 & 5.50 & 7.05 & 1.10 & 3.08 & 2.25\\ UGC07608 & 0.72 & \dots & 54.18 & 8.68 & 8.55 & 6.73 & 1.05 & 3.30 & 1.58\\ UGC07690 & 0.54 & \dots & 59.43 & 6.21 & 13.11 & 5.96 & 1.48 & 2.15 & 1.41\\ UGC08286 & 1.00 & \dots & 67.84 & 8.22 & 11.31 & 6.62 & 1.18 & 3.33 & 2.60\\ UGC08490 & 0.89 & \dots & 63.43 & 8.82 & 9.85 & 6.52 & 1.37 & 2.84 & 
0.40\\ UGC08550 & 0.55 & \dots & 50.96 & 8.60 & 8.12 & 6.57 & 1.20 & 2.94 & 1.06\\ UGC08699 & 0.38 & 0.46 & 133.09 & 7.83 & 23.27 & 6.27 & 1.64 & 2.70 & 0.80\\ UGC09037 & 0.13 & \dots & 128.52 & 5.61 & 31.36 & 6.27 & 1.13 & 3.37 & 2.14\\ UGC09992 & 0.10 & \dots & 41.04 & 6.25 & 9.00 & 6.39 & 1.06 & 3.43 & 2.58\\ UGC10310 & 0.22 & \dots & 58.49 & 8.87 & 9.04 & 6.80 & 1.01 & 3.37 & 0.91\\ UGC11557 & 0.10 & \dots & 69.84 & 5.61 & 17.07 & 6.33 & 1.07 & 3.57 & 1.95\\ UGC11820 & 0.40 & \dots & 70.61 & 5.52 & 17.51 & 6.24 & 1.08 & 3.33 & 5.24\\ UGC12506 & 0.53 & \dots & 159.40 & 10.38 & 21.04 & 7.00 & 1.10 & 3.36 & 0.33\\ UGC12632 & 0.16 & \dots & 53.61 & 9.27 & 7.92 & 6.85 & 0.98 & 3.28 & 0.46\\ UGC12732 & 0.40 & \dots & 68.84 & 7.65 & 12.33 & 6.62 & 1.04 & 3.28 & 0.58\\ UGCA442 & 0.80 & \dots & 52.78 & 6.32 & 11.44 & 6.32 & 1.13 & 3.32 & 8.39\\ \end{longtable} }
Title: Panchromatic evolution of three luminous red novae: Forbidden hugs in pandemic times -- IV
Abstract: We present photometric and spectroscopic data on three extragalactic luminous red novae (LRNe): AT2018bwo, AT2021afy, and AT2021blu. AT2018bwo was discovered in NGC45 (at about 6.8 Mpc) a few weeks after the outburst onset. During the monitoring period, the transient reached a peak luminosity of 10^40 erg/s. AT2021afy, hosted by UGC10043 (~49.2 Mpc), showed a double-peaked light curve, with an early maximum reaching a luminosity of 2.1x10^41 erg/s. Finally, for AT2021blu in UGC5829 (~8.6 Mpc), the pre-outburst phase was well-monitored by several photometric surveys, and the object showed a slow luminosity rise before the outburst. The light curve of AT2021blu was sampled with an unprecedented cadence until the object disappeared behind the Sun, and was then recovered at late phases. The light curve of LRN AT2021blu shows a double peak, with a prominent early maximum reaching a luminosity of 6.5x10^40 erg/s, half that of AT2021afy. The spectra of AT2021afy and AT2021blu display the expected evolution for LRNe: a blue continuum dominated by prominent Balmer lines in emission during the first peak, and a redder continuum consistent with that of a K-type star with narrow absorption metal lines during the second, broad maximum. The spectra of AT2018bwo are markedly different, with a very red continuum dominated by broad molecular features in absorption. As these spectra closely resemble those of LRNe after the second peak, probably AT2018bwo was discovered at the very late evolutionary stages. This would explain its fast evolution and the spectral properties compatible with that of an M-type star. From the analysis of deep frames of the LRN sites years before the outburst, and considerations of the light curves, the quiescent progenitor systems of the three LRNe were likely massive, with primaries ranging from about 13Mo for AT2018bwo, to 13-18Mo for AT2021blu, and over 40Mo for AT2021afy.
https://export.arxiv.org/pdf/2208.02782
\title{Panchromatic evolution of three luminous red novae} \subtitle{Forbidden hugs in pandemic times -- IV} \author{A. Pastorello\inst{1}\fnmsep\thanks{andrea.pastorello@inaf.it} \and G.~Valerin\inst{1,2} \and M.~Fraser\inst{3} \and A.~Reguitti\inst{4,5,1} \and N.~Elias-Rosa\inst{1,6} \and A.~V.~Filippenko\inst{7,8} \and C. Rojas-Bravo\inst{9} \and L.~Tartaglia\inst{1} \and T.~M.~Reynolds\inst{10,11} \and S.~Valenti\inst{12} \and J.~E. Andrews\inst{13} \and C.~Ashall\inst{14} \and K.~A.~Bostroem\inst{15} \and T.~G.~Brink\inst{7} \and J.~Burke\inst{16,17} % \and Y.-Z.~Cai\inst{18,19,20} \and E.~Cappellaro\inst{1} \and D.~A.~Coulter\inst{9} \and R.~Dastidar\inst{4,5} % \and K.~W.~Davis\inst{9} \and G.~Dimitriadis\inst{21} % \and A.~Fiore\inst{22,23} \and R.~J.~Foley\inst{9} \and D.~Fugazza\inst{24} \and L.~Galbany\inst{6,25} \and A.~Gangopadhyay\inst{26,27} \and S.~Geier\inst{28,29} \and C.~P.~Guti\'errez\inst{30,10} \and J.~Haislip\inst{31} \and D.~Hiramatsu\inst{16,17,32,33} \and S.~Holmbo\inst{34} \and D.~A. Howell\inst{16,17} \and E.~Y.~Hsiao\inst{35} \and T.~Hung\inst{9} \and S.~W.~Jha\inst{36} \and E.~Kankare\inst{10,37} \and E.~Karamehmetoglu\inst{34} \and C.~D. 
Kilpatrick\inst{38} \and R.~Kotak\inst{10} \and V.~Kouprianov\inst{31} \and T.~Kravtsov\inst{10} \and S.~Kumar\inst{35} \and Z.-T.~Li\inst{39,40} \and M.~J.~Lundquist\inst{41} \and P.~Lundqvist\inst{42} \and K.~Matilainen\inst{10} \and P.~A.~Mazzali\inst{43,44} \and C.~McCully\inst{16} \and K.~Misra\inst{26} \and A.~Morales-Garoffolo\inst{45} \and S.~Moran\inst{10} \and N.~Morrell\inst{46} \and M.~Newsome\inst{16,17} \and E.~Padilla~Gonzalez\inst{16,17} \and Y.-C.~Pan\inst{47} \and C.~Pellegrino\inst{16,17} \and M.~M.~Phillips\inst{46} \and G.~Pignata\inst{4,5} \and A.~L.~Piro\inst{48} \and D.~E.~Reichart\inst{31} \and A.~Rest\inst{49,50} \and I.~Salmaso\inst{1,2} \and D.~J.~Sand\inst{51} \and M.~R.~Siebert\inst{9} \and S.~J.~Smartt\inst{52} \and K.~W.~Smith\inst{52} \and S.~Srivastav\inst{52} \and M.~D.~Stritzinger\inst{34} \and K.~Taggart\inst{9} \and S.~Tinyanont\inst{9} \and S.-Y.~Yan\inst{20} \and L.~Wang\inst{53} \and X.-F.~Wang\inst{20,54} \and S.~C.~Williams\inst{30,10} \and S.~Wyatt\inst{51} % \and T.-M.~Zhang\inst{39,40} \and T.~de~Boer\inst{55} \and K.~Chambers\inst{55} \and H.~Gao\inst{55} % \and E.~Magnier\inst{55} % } \institute{INAF - Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, I-35122 Padova, Italy\\ \email{andrea.pastorello@inaf.it} \and Universit\'a degli Studi di Padova, Dipartimento di Fisica e Astronomia, Vicolo dell’Osservatorio 2, 35122 Padova, Italy \and School of Physics, O’Brien Centre for Science North, University College Dublin, Belfield, Dublin 4, Ireland \and Instituto de astrofísica, Facultad de Ciencias Exactas, Universidad Andres Bello, Fern\'andez Concha 700, Las Condes, Santiago, Chile \and Millennium Institute of Astrophysics (MAS), Nuncio Monsenor S\'otero Sanz 100, Providencia, Santiago, 8320000, Chile \and Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans s/n, E-08193, Barcelona, Spain \and Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA \and Miller 
Institute for Basic Research in Science, University of California, Berkeley, CA 94720, USA \and Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064, USA \and Tuorla Observatory, Department of Physics and Astronomy, University of Turku, FI-20014 Turku, Finland \and Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, 2200 K\o{}benhavn N, Denmark \and Department of Physics and Astronomy, University of California, Davis, 1 Shields Avenue, Davis, CA 95616-5270, USA \and Gemini Observatory, 670 North A'ohoku Place, Hilo, HI 96720-2700, USA \and Department of Physics, Virginia Tech, 850 West Campus Drive, Blacksburg, VA 24061, USA \and DIRAC Institute, Department of Astronomy, University of Washington, 3910 15th Avenue NE, Seattle, WA 98195-0002, USA \and Las Cumbres Observatory, 6740 Cortona Dr. Suite 102, Goleta, CA 93117, USA \and Department of Physics, University of California, Santa Barbara, Santa Barbara, CA 93106, USA \and Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216, China \and Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences, Kunming 650216, China \and Physics Department and Tsinghua Center for Astrophysics (THCA), Tsinghua University, Beijing 100084, China \and School of Physics, Trinity College Dublin, The University of Dublin, Dublin 2, Ireland \and European Centre for Theoretical Studies in Nuclear Physics and Related Areas (ECT$^\star$), Fondazione Bruno Kessler, I-38123, Trento, Italy \and INFN-TIFPA, Trento Institute for Fundamental Physics and Applications, Via Sommarive 14, I-38123 Trento, Italy \and INAF – Osservatorio Astronomico di Brera, via E. 
Bianchi 46 I-23807, Merate, Italy \and Institut d’Estudis Espacials de Catalunya (IEEC), E-08034, Barcelona, Spain \and Aryabhatta Research Institute of observational sciencES, Manora Peak, Nainital 263 002, India \and Hiroshima Astrophysical Science Centre, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526, Japan \and Gran Telescopio Canarias (GRANTECAN), Cuesta de San Jos\'e s/n, 38712 Bre\~na Baja, La Palma, Spain \and Instituto de Astrof\'isica de Canarias, V\'ia L\'actea s/n, 38200 La Laguna, Tenerife, Spain \and Finnish Centre for Astronomy with ESO (FINCA), University of Turku, FI-20014 Turku, Finland \and Department of Physics and Astronomy, University of North Carolina, 120 East Cameron Avenue, Chapel Hill, NC 27599, USA \and Center for Astrophysics \textbar{} Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138-1516, USA \and The NSF AI Institute for Artificial Intelligence and Fundamental Interactions \and Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, 8000 Aarhus C, Denmark \and Department of Physics, Florida State University, 77 Chieftan Way, Tallahassee, FL 32306, USA \and Department of Physics and Astronomy, Rutgers, The State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854, USA \and Turku Collegium for Science, Medicine and Technology, University of Turku, FI-20014 Turku, Finland \and Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), Northwestern University, Evanston, IL 60208, USA \and Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China \and School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 101408, China \and W.~M.~Keck Observatory, 65-1120 Ma-malahoa Highway, Kamuela, HI 96743-8431, USA \and The Oskar Klein Centre, Department of Astronomy, Stockholm University, AlbaNova, SE-10691 Stockholm, Sweden \and Astrophysics 
Research Institute, Liverpool John Moores University, ic2, 146 Brownlow Hill, Liverpool L3 5RF, UK \and Max-Planck Institut f\"ur Astrophysik, Karl-Schwarzschild-Str. 1, D-85741 Garching, Germany \and Department of Applied Physics, University of C\'adiz, Campus of Puerto Real, 11510 C\'adiz, Spain \and Las Campanas Observatory, Carnegie Observatories, Casilla 601, La Serena, Chile \and Graduate Institute of Astronomy, National Central University, 300 Zhongda Road, Zhongli, Taoyuan 32001, Taiwan \and The Observatories of the Carnegie Institution for Science, 813 Santa Barbara St., Pasadena, CA 91101, USA \and Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA \and Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA \and Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA \and Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, BT7 1NN, UK \and Chinese Academy of Sciences, South America Center for Astronomy, National Astronomical Observatories, CAS, Beijing 100101, China \and Beijing Planetarium, Beijing Academy of Science and Technology, Beijing 100044, China \and Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA\\ } \date{Received August 03, 2022; accepted Month dd, 202X} \abstract {We present photometric and spectroscopic data on three extragalactic luminous red novae (LRNe): \object{AT~2018bwo}, \object{AT~2021afy}, and \object{AT~2021blu}. \object{AT~2018bwo} was discovered in \object{NGC~45} (at about 6.8~Mpc) a few weeks after the outburst onset. During the monitoring period, the transient reached a peak luminosity of $10^{40}$~erg~s$^{-1}$. \object{AT~2021afy}, hosted by \object{UGC~10043} ($\sim 49.2$~Mpc), showed a double-peaked light curve, with the two peaks reaching a similar luminosity of $2.1 (\pm 0.6) \times 10^{41}$~erg~s$^{-1}$. 
Finally, for \object{AT~2021blu} in \object{UGC~5829} ($\sim 8.6$~Mpc), the pre-outburst phase was well monitored by several photometric surveys, and the object showed a slow luminosity rise before the outburst. The light curve of \object{AT~2021blu} was sampled with an unprecedented cadence until the object disappeared behind the Sun, and it was then recovered at late phases. The light curve of LRN \object{AT~2021blu} shows a double peak, with a prominent early maximum reaching a luminosity of $6.5 \times 10^{40}$~erg~s$^{-1}$, roughly one-third of that of \object{AT~2021afy}. The spectra of \object{AT~2021afy} and \object{AT~2021blu} display the expected evolution for LRNe: a blue continuum dominated by prominent Balmer lines in emission during the first peak, and a redder continuum consistent with that of a K-type star with narrow absorption metal lines during the second, broad maximum. The spectra of \object{AT~2018bwo} are markedly different, with a very red continuum dominated by broad molecular features in absorption. As these spectra closely resemble those of LRNe after the second peak, \object{AT~2018bwo} was probably discovered at very late evolutionary stages. This would explain its fast evolution and the spectral properties compatible with those of an M-type star. 
From the analysis of deep frames of the LRN sites years before the outburst, and considerations of the light curves, the quiescent progenitor systems of the three LRNe were likely massive, with primaries ranging from about 13~M$_\odot$ for \object{AT~2018bwo}, to $14_{-1}^{+4}$~M$_\odot$ for \object{AT~2021blu}, and over 40~M$_\odot$ for \object{AT~2021afy}.} \keywords{binaries: close --- stars: winds, outflows --- stars: individual: AT~2018bwo --- stars: individual: AT~2021afy --- stars: individual: AT~2021blu} \section{Introduction} \label{sect:intro} Luminous red novae (LRNe) are optical transients that are thought to result from a close binary interaction leading to the ejection of a common envelope, eventually followed by the coalescence of the stellar cores \citep[e.g.][]{iva17}. LRNe span an enormous range of luminosities, but they have a surprisingly similar spectral evolution. About five orders of magnitude separate the peak luminosity of the faintest Galactic objects such as \object{OGLE~2002-BLG-360} \citep{tyl13} and \object{V1309~Sco} \citep{mas10,tyl11} from bright extragalactic events \citep[$0 \gtrsim M_V \gtrsim -15$~mag; see, e.g., the sample presented by][]{pasto19a}. The latter objects, with luminosities intermediate between those of classical novae and core-collapse supernovae, are ``gap transients'' \citep[][]{kas12,pasto19,cai22a}. LRNe have structured light curves, with a phase of slowly rising luminosity lasting months to years, followed by a major outburst. The outburst is usually characterised by a short-duration early peak, during which the object has a blue colour, followed by a plateau or a second broad peak with a redder colour. While no LRN has yet been spectroscopically monitored during the slow pre-outburst brightening, spectra taken during the early peak show a blue continuum with prominent Balmer emission lines, similar to those of other gap transients. 
At later phases (during the plateau or the second peak), the spectral continuum of LRNe becomes progressively redder, the Balmer lines become weaker, and many narrow absorption lines of metals appear. At very late phases, the optical spectrum resembles that of intermediate to late M-type stars, dominated by prominent absorption bands of TiO and VO \citep{mar99,kim06,bar07,bar14,tyl15}. While not all of the physical processes leading to LRN outbursts have been fully understood \citep{iva13a,iva13b}, significant progress has been made in the last decade from the observational side. In particular, follow-up observations of \object{V1309~Sco} revealed the signature of unstable mass transfer in a binary system when the primary filled its Roche lobe. The process may have led to the ejection of a common envelope and the loss of angular momentum of the binary \citep{pej16a,pej17,mcl18,mcl20}. Regardless of the mass, a binary stellar system after the ejection of the common envelope can either evolve to a new stable and closer binary configuration \citep[][and references therein]{jon20}, or to a final coalescence \citep{tyl06}, as happened for \object{V1309~Sco} \citep{tyl11}. The merging event is the most popular scenario to explain the LRN observables \citep[e.g.][]{kam21}. The double-peaked light curves and most of the observational properties of LRN outbursts are fairly well explained by gas outflow following the coalescence of the two stellar cores \citep[e.g.][for V838~Mon]{tyl06}, and subsequent shock interaction with the outer envelope. This scenario has been successfully modelled by a number of authors \citep{sha10,nan14,met17,mcl17}. However, the merger scenario for LRNe has been occasionally challenged by late-time observations of the remnant \citep[e.g. in the case of V838~Mon;][]{gor20}. 
Regardless of the fate of the system (merger or surviving binary), the mass accretion onto an equatorial disk may power polar jets colliding with the slow-moving envelope, which may account for the properties of LRNe \citep{kas16,sok16a,sok16b,sok20,sok21}. From an observational point of view, most LRNe display an early blue peak in the light curve resulting from the outflow of hot material ejected in the merging process. However, in some LRNe the initial blue colour is not detected \citep[e.g. in \object{AT~2015dl} and \object{AT~2020hat};][]{bla17,pasto21a}. This can be due to an observational bias, as the early blue peak is a short-duration event. Alternatively, the lack of an initial blue phase can be due to an expanded, red giant (or supergiant) primary star. In this paper, we report extensive datasets for three LRNe. First, we present new data for \object{AT~2018bwo}, which complement those released by \citet{bla21}. \object{AT~2018bwo} is an object whose observations do not show evidence of an early blue phase, but its explosion epoch is poorly constrained. Furthermore, we present optical and near-infrared (NIR) data for two LRNe discovered in 2021: \object{AT~2021afy} and \object{AT~2021blu}. In the case of the latter, Sloan $g$- and $r$-band light curves were also presented by \citet{sora22}. In contrast to the monitoring campaigns of other LRNe in our programme \citep{pasto21a,pasto21b}, the follow-up campaigns of these two objects were not significantly affected by the COVID-19 pandemic restrictions on our access to observational facilities. This study is complemented by a companion paper on another LRN monitored during the same period, AT~2021biy \citep{cai22}. We provide the basic information for the three transients and their host galaxies in Sect. \ref{Sect:hosts}. The photometric data are presented in Sect. \ref{sect:photo}; the evolution of their physical parameters (bolometric luminosity, photospheric radius, and temperature) is illustrated in Sect. 
\ref{sect:TLR}; and their spectral evolution is described in Sect. \ref{sect:spec}. The nature of the progenitors, the mechanisms producing LRN outbursts, and the updated version of the correlations presented by \citet{pasto19a} and \citet{pasto21a} are discussed in Sect. \ref{Sect:discussion}. A brief summary follows in Sect. \ref{Sect:conclusions}. \section{\object{AT~2018bwo}, \object{AT~2021afy}, \object{AT~2021blu}, and their host galaxies} \label{Sect:hosts} \object{AT~2018bwo}\footnote{The object is also known as DLT18x, ATLAS18qgb, and Gaia18blv.} was discovered by the DLT40 survey \citep{tar18} on 2018 May 22.93 \citep[UT dates are used throughout this paper;][]{val18}.\footnote{As mentioned by \citet{bla21}, the discovery unfiltered magnitude reported by \protect\citet{val18}, 16.44 (AB mag scale), is incorrect; see Sect. \ref{sect:2018bwo_lc}.} Its coordinates are $\alpha = 00^h 14^m 01\fs69$ and $\delta = -23\degr11\arcmin35\farcs21$ (J2000). \citet{cla18} noted the similarity of its spectrum with that of an F-type star and, also taking into account the faint absolute magnitude of the object, proposed an LRN classification for \object{AT~2018bwo}. The host galaxy, \object{NGC~45}, is a nearly face-on SABd spiral. Although the object lies in the outskirts of \object{NGC~45} ($31\farcs7$ W and $39\farcs7$ S of the host-galaxy nucleus), it is very close to contaminating background sources (Fig. \ref{Fig:21afy_21blu_FC}, top panel). While it is at odds with estimates from other methods (``sosie'' galaxies\footnote{See, e.g., \protect\citet{bot85} for a description of the method.}, Tully-Fisher, kinematic), for the host-galaxy distance ($d$) we adopt the most recent value, based on the tip of the red giant branch method \citep{sab18}: $d = 6.79\pm0.49$~Mpc, corresponding to a distance modulus of $\mu = 29.16 \pm 0.36$~mag. This value of $\mu$ is similar to that adopted by \citet{bla21}, $\mu = 29.11 \pm 0.10$~mag. 
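As a sanity check on the distance moduli quoted here and in the following sections, $\mu$ follows directly from $\mu = 5\log_{10}(d/10\,\mathrm{pc})$. A minimal sketch (the function name is ours, not from the paper):

```python
import math

def distance_modulus(d_mpc: float) -> float:
    """Distance modulus mu = 5 log10(d / 10 pc), with d given in Mpc."""
    return 5.0 * math.log10(d_mpc * 1e6 / 10.0)

# TRGB distance adopted for NGC 45 (host of AT 2018bwo)
print(round(distance_modulus(6.79), 2))   # -> 29.16
# Tully-Fisher distance adopted for UGC 10043 (host of AT 2021afy)
print(round(distance_modulus(49.2), 2))   # -> 33.46
```

The same relation reproduces $\mu = 29.68$~mag for the $d = 8.64$~Mpc adopted for \object{UGC~5829}.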
A (modest) average reddening within the host galaxy was estimated by \citet{mora07}, $E(B-V) = 0.04$~mag. \citet{bla21} adopted an even lower value ($E(B-V) = 0.01$~mag) based on the spectral energy distribution (SED) of the LRN progenitor. The peripheral location of \object{AT~2018bwo} and the presence of very blue sources in its vicinity suggest a negligible contribution of the host galaxy to the total line-of-sight reddening. For this reason, hereafter we assume that the total reddening towards \object{AT~2018bwo} is entirely due to the Milky Way contribution \citep[$E(B-V)_{\rm MW} = 0.02$~mag;][]{sch11}. \object{AT~2021afy}\footnote{The object is also known as ZTF21aaekeqd.} was discovered by the Zwicky Transient Facility \citep[ZTF;][]{bel19,gra19} survey on 2021 January 10.52, at a magnitude of $r = 20.48$ \citep{mun21}. An alert of the discovery was released by the ALeRCE broker\footnote{\url{http://alerce.online/object/ZTF21aaekeqd}.} \citep{car20}. The coordinates of the transient are $\alpha = 15^{h}48^{m}43\fs172$ and $\delta = +21\degr51\arcmin09\farcs62$ (J2000). The object lies above the disk plane of the edge-on spiral (Sbc-type) galaxy \object{UGC~10043} (Fig. \ref{Fig:21afy_21blu_FC}, middle panel). For the host galaxy, a Tully-Fisher distance of about 49.2~Mpc was inferred by \citet[][with H$_0 = 73$~km~s$^{-1}$~Mpc$^{-1}$, and assuming $\Omega_{\rm matter} = 0.27$ and $\Omega_{\rm vacuum} = 0.73$]{tul16}. Hence, the adopted distance modulus is $\mu = 33.46 \pm 0.45$~mag\footnote{The distance to \object{UGC~10043} is debated, as Tully-Fisher values reported in the literature range from about 40 to almost 60 Mpc, but are still within the (large) error bars adopted in the \protect\citet{tul16} estimate.}. 
While the Milky Way reddening towards \object{AT~2021afy} is modest, $E(B-V)_{\rm MW} = 0.05$~mag, the detection of prominent absorption of the interstellar Na~I doublet (Na~I\,D) $\lambda\lambda$5890,~5896 in the transient's spectra at the redshift of the host galaxy (see Sect. \ref{sect:spec}) suggests significant reddening, which is unexpected given the peripheral location of \object{AT~2021afy} relative to the nucleus of \object{UGC~10043}. For this reason, we speculate that the gas and dust cloud is circumstellar, or located in the proximity of the object. Accounting for the contribution of the host-galaxy reddening (see Sect. \ref{sect:2021afy_spec} for details), we infer a total line-of-sight colour excess of $E(B-V)_{\rm tot} = 0.43 \pm 0.11$~mag. \object{AT~2021blu}\footnote{Alternative survey names are ATLAS21dic, ZTF21aagppzg, PS21akb, and Gaia21cwl.} was discovered by the Asteroid Terrestrial-impact Last Alert System \citep[ATLAS;][]{ton18,smi20} survey on 2021 February 1.47, at an ATLAS-orange magnitude of $o = 18.486$ \citep{ton21}. The coordinates of the transient are $\alpha = 10^{h}42^{m}34\fs340$ and $\delta = +34\degr26\arcmin14\farcs60$ (J2000). The object lies in a remote location of the irregular (Im type) galaxy \object{UGC~5829}. While a distance of 8~Mpc (with Hubble constant H$_0 = 75$~km~s$^{-1}$~Mpc$^{-1}$) was estimated by \citet{tf88}, the kinematic distance corrected for Local Group infall onto Virgo gives $d = 8.64 \pm 0.61$~Mpc \citep[computed adopting H$_0 = 73$~km~s$^{-1}$~Mpc$^{-1}$;][]{mou00}, and a distance modulus of $\mu = 29.68 \pm 0.15$~mag. The site of \object{AT~2021blu} is shown in Fig. \ref{Fig:21afy_21blu_FC} (bottom panel). The remote location of the transient in the host galaxy and the nondetection of the Na~I\,D narrow interstellar feature at the redshift of \object{UGC~5829} suggest no reddening due to host-galaxy dust. 
For this reason, we assume that extinction is only due to the Galactic contribution, $E(B-V)_{\rm MW} = 0.02$~mag \citep{sch11}. We remark that \object{AT~2021blu} was initially classified as a luminous blue variable outburst by \citet{uno21}. However, as we detail in the next sections, follow-up data indicate that both this object and \object{AT~2021afy} are LRNe. \section{Photometric data} \label{sect:photo} Basic information on the instrumental configurations used for the photometric campaigns of the three LRNe is provided in Appendix \ref{Appendix:A}. The reduction of the optical photometric data collected with our facilities was carried out with the {\sc SNOoPY}\footnote{{\sc SNOoPY} is a package for supernova photometry using point-spread-function (PSF) fitting and/or template subtraction developed by E. Cappellaro. A package description can be found at \url{http://sngroup.oapd.inaf.it/ecsnoopy.html}.} package. Science frames were first bias-subtracted and flatfield-corrected. {\sc SNOoPY} allows us to carry out the astrometric calibration of the images, and PSF-fitting photometry of the target after template subtraction, if required. Owing to the remote locations of \object{AT~2021afy} and \object{AT~2021blu} in their host galaxies, simple PSF-fitting photometry was used for these two objects, while template subtraction was necessary for \object{AT~2018bwo}. Deep template images of the \object{AT~2018bwo} explosion site (with Johnson $U$, $B$, $V$ and Sloan $g$, $r$, $i$ filters) were obtained on 2021 July 7 with one of the 1~m telescopes of the Las Cumbres Observatory global telescope network. The instrumental magnitudes in the Sloan filters were then calibrated using zero points and colour-term corrections with reference to the Sloan Digital Sky Survey (SDSS) catalogue. As the field of \object{AT~2018bwo} was not sampled by SDSS, the Sloan-filter photometry of this LRN was calibrated using reference stars taken from the Pan-STARRS catalogue. 
A catalogue of comparison stars to calibrate photometry in the Johnson-Cousins filters was obtained by converting Sloan and Pan-STARRS magnitudes to Johnson-Cousins magnitudes using the transformation relations of \citet{chr08}. Finally, for the $o$- and $c$-band ATLAS data, we directly used the template-subtracted forced photometry \protect\citep{ton18,smi20} released through the ATLAS data-release interface\footnote{\url{https://fallingstar-data.com/forcedphot/queue/}.}. {\it Swift} optical and ultraviolet (UV) magnitudes (see Appendix \ref{Appendix:A}) were measured with the task {\sc uvotsource}, included in the {\sc UVOT} software package within the {\sc HEAsoft}\footnote{\url{https://heasarc.gsfc.nasa.gov/docs/software/heasoft/}.} distribution v. 6.28. We performed aperture photometry using a $3''$ radius, while the sky contribution was computed in an annulus between $5''$ and $10''$ from the source. NIR images required some preliminary processing steps. We first constructed sky images for each filter by median-combining several dithered science frames. The contribution of the bright NIR sky was then subtracted from individual science frames. To improve the signal-to-noise ratio (S/N), we finally combined the sky-subtracted frames. For the NOTCam data (see Appendix \ref{Appendix:A}), the above steps were performed using a version of the NOTCam Quicklook v2.5 reduction package\footnote{\url{http://www.not.iac.es/instruments/notcam/guide/observe.html}.} with a few functional modifications (e.g., to increase the field of view of the reduced image). The following steps (astrometric calibrations, PSF-fitting photometry, and zero-point corrections) were performed using {\sc SNOoPY} and the same prescriptions as for the optical images. Reference stars from the Two Micron All-Sky Survey (2MASS) catalogue \citep{skr06} were used for the photometric calibration. 
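The aperture-plus-annulus scheme used for the {\it Swift}/UVOT measurements can be sketched as follows. This is a toy {\sc numpy} implementation in pixel units (the actual measurements use the {\sc uvotsource} task with radii in arcsec); the frame and source values are invented for illustration:

```python
import numpy as np

def aperture_photometry(img, x0, y0, r_ap=3.0, r_in=5.0, r_out=10.0):
    """Sum the counts in a circular aperture of radius r_ap and subtract
    the median sky level, measured per pixel in the annulus [r_in, r_out]."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    ap = r <= r_ap
    ann = (r >= r_in) & (r <= r_out)
    sky_per_pixel = np.median(img[ann])
    return img[ap].sum() - sky_per_pixel * ap.sum()

# Toy frame: flat sky of 10 counts/pixel plus a 500-count point source
frame = np.full((41, 41), 10.0)
frame[20, 20] += 500.0
flux = aperture_photometry(frame, 20, 20)
print(flux)  # -> 500.0
```

Because the median sky in the annulus is exactly the background level here, the net flux recovers the injected source counts.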
The final magnitudes of \object{AT~2018bwo}, \object{AT~2021afy}, and \object{AT~2021blu} in the optical bands are given in Tables A1, A2, and A3, respectively\footnote{The tables are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via \url{http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/}.}. The light curves of \object{AT~2018bwo} and \object{AT~2021afy} are shown in Figs. \ref{Fig:18bwo_lc} and \ref{Fig:21afy_lc}, respectively. The long-term light curves of \object{AT~2021blu} from the UV to the NIR are shown in Fig. \ref{Fig:21blu_lc} (top panel). The bottom-left panel of Fig. \ref{Fig:21blu_lc} displays in detail the UV light curves of the \object{AT~2021blu} outburst during the first peak, while the bottom-right panel shows the evolution of the optical and NIR light curves during the LRN outburst, before the seasonal gap. \subsection{AT~2018bwo} \label{sect:2018bwo_lc} Although \object{AT~2018bwo} was discovered on 2018 May 22, the object was also visible in DLT40 images taken eight and six days prior, at a comparable brightness. Earlier images are not available as the object was in solar conjunction. These early DLT40 images are unfiltered, but were calibrated to match the Johnson-Cousins $R$ band. In all these frames, the brightness remains nearly constant at $R \approx 18.1$--18.2~mag. The lack of earlier images prevents us from setting a stringent limit on the LRN onset. Monthly unfiltered DLT40 stacked images obtained from June to August 2017 do not show signs of the LRN down to a limit of $R \approx 21.8$~mag. A closer nondetection is provided by the {\it Gaia} Alert team\footnote{\url{http://gsaweb.ast.cam.ac.uk/alerts/home}.}, which reports that no source is visible at the location of the object on 2018 January 15, i.e., about four months before the discovery. Therefore, we can only estimate a lower limit for the LRN outburst duration. 
The last positive detection of the LRN is $\sim 2.5$ months after the discovery, while observations at later epochs only provide upper detection limits. We find some differences between our Sloan-band light curves and those presented by \citet{bla21}. Our data have smaller scatter, and they appear to be $\sim 0.15$~mag brighter in the $g$ band and 0.2~mag fainter in the $r$ band, while there is fair agreement in the $i$ band. As both datasets were obtained after template subtraction, this mismatch is puzzling. We note, however, that we used the Pan-STARRS reference catalogue for the calibration. Other possible explanations are the low S/N of the source in individual images taken with 1~m-class telescopes, or inaccurate colour-term corrections. Our optical data reveal that the LRN remained in a sort of plateau for over three weeks after the discovery, at average magnitudes of $g = 19.78 \pm 0.27$ and $V = 19.02 \pm 0.17$~mag, which provide absolute magnitudes of $M_g = -9.45 \pm 0.45$ and $M_V = -10.14 \pm 0.45$~mag. We also obtain the intrinsic colours in this phase, $g-r = 1.44 \pm 0.28$~mag and $B-V = 1.90 \pm 0.24$~mag. The plateau is followed by a luminosity drop in all bands. In its initial phase, the light-curve decline is relatively slow, but it becomes very steep $\sim 50$ days after the discovery. As for most LRNe, the object leaves the plateau earlier in the bluer bands than in the redder bands. The overall shape of the light curve of \object{AT~2018bwo} is reminiscent of those of LRNe during the late plateau phase (or soon after the broad, red light-curve maximum). This similarity is corroborated by spectroscopic clues, as the observed spectra of \object{AT~2018bwo} resemble the late-time spectra of canonical LRNe (see Sect. \ref{sect:spec}). \citet{bla21} suggested that the merger's photosphere was initially at a much lower temperature and with a larger radius than in typical LRNe. 
However, this statement is not supported by stringent observational constraints. In particular, from the available data, we can presume that the outburst onset occurred a few months before the LRN discovery, and we cannot rule out that the intrinsic colour was initially much bluer than that observed. Consequently, while we agree that the red colour of \object{AT~2018bwo} is an indication of a more expanded and cooler photosphere, this is possibly due to the late discovery of the transient \citep{pasto19a,pasto19b,cai19,pasto21a,pasto21b}. \subsection{AT~2021afy} \label{sect:2021afy_lc} The light curve of the \object{AT~2021afy} outburst is well sampled in the optical and NIR bands (Fig. \ref{Fig:21afy_lc}). In contrast, only limited information is available for the pre-outburst phases. To better constrain the epoch of the LRN outburst onset, we analysed the ZTF DR3 images, finding a weak ($\sim 2.7\sigma$) detection of the transient at $r = 20.77\pm0.49$~mag on 2021 January 7.555, three days before the discovery announced by \citet{mun21}. However, nondetections are registered on the same day in the $g$ band ($> 20.72$~mag) and at earlier epochs. To increase the S/N, we also stacked\footnote{Information on the pre-outburst stacked images of the \object{AT~2021afy} field is given in Appendix \ref{Appendix:A.1} (Table \ref{Table:A1.4}).} the highest-quality images obtained in December 2020, and no source was detected down to $r > 21.1$~mag. No activity was revealed in earlier images provided by ZTF. In particular, we stacked images in the $g$, $r$, and $i$ bands obtained over several months in mid-2018, and no source was detected at the LRN location to the following limits: $g \gtrsim 20.95$, $r \gtrsim 22.05$, and $i \gtrsim 21.33$~mag. The available data allow us to constrain a first light-curve rise, which lasts at least 10 days. 
The $g$-band maximum, derived through a low-order polynomial fit to the light curve, is reached on MJD = $59231.7 \pm 1.6$, at $g = 20.63 \pm 0.03$~mag. Hereafter, this epoch will be used as a reference for \object{AT~2021afy}. The $r$-band peak is reached 0.7~days later. Accounting for the total line-of-sight extinction, the intrinsic colour at the first maximum is $g-r \approx 0.16$~mag. After the first peak, the light curves decline in all bands for about two weeks, reaching a relative minimum 0.3--0.4 mag fainter, followed by a second, broader maximum about one month later. At the second peak, we measure $g = 20.59 \pm 0.02$~mag, while $g-r$ is similar to the colour of the first peak. This broad peak is then followed by a rapid decline in all bands, and the colours rapidly become much redder ($g-r \approx 1.1$~mag, $\sim 90$~days after the first peak). We note that the minimum between the two light-curve peaks is more pronounced in the blue bands than in the red bands, while the NIR light curves show a sort of long-lasting plateau after the first maximum, although they are not well sampled. Regardless of the filter, and in contrast with the behaviour of other LRNe, the luminosity of \object{AT~2021afy} at the time of the first maximum is very similar to that of the second peak. Accounting for the reddening and the distance adopted in Sect. \ref{Sect:hosts}, we obtain the following $g$-band absolute magnitudes at the two peaks: $M_{g,{\rm max1}} = -14.46 \pm 0.63$ and $M_{g,{\rm max2}} = -14.48 \pm 0.63$~mag. \subsection{AT~2021blu} \label{sect:2021blu_lc} \begin{table*} \caption{\label{tab:photo_pre} Archival data, obtained from December 2006 to May 2017, of the source at the \object{AT~2021blu} location. 
} \centering
\begin{tabular}{ccccc}
\hline\hline
UT Date & MJD & Filter & Magnitude & Instrumental configuration\\
\hline
2006-12-25 & 54094.15 & $B$ & 23.50 (0.14) & INT + WFC \\
2006-12-25 & 54094.13 & $V$ & 23.03 (0.26) & INT + WFC \\
2006-12-25 & 54094.12 & $r$ & 22.98 (0.12) & INT + WFC \\
2010-03-21 to 2013-02-12 & 55801.07$^\ast$ & $g$ & 23.27 (0.21) & PS1 (stack) \\
2011-03-14 to 2014-02-09 & 56246.25$^\ast$ & $r$ & 22.98 (0.18) & PS1 (stack) \\
2011-05-16 to 2015-01-12 & 56246.96$^\ast$ & $z$ & 22.83 (0.27) & PS1 (stack) \\
2010-12-31 to 2015-01-22 & 56271.52$^\ast$ & $y$ & $>$21.86 & PS1 (stack) \\
2011-03-14 to 2015-01-12 & 56582.35$^\ast$ & $i$ & 22.86 (0.20) & PS1 (stack) \\
2016-03-09 & 57456.33 & $g$ & 23.03 (0.34) & Bok + 90prime \\
2016-02-03 & 57421.37 & $r$ & $>$22.73 & Bok + 90prime \\
2016-02-04 & 57422.40 & $r$ & 22.79 (0.44) & Bok + 90prime \\
2016-02-04 & 57422.40 & $z$ & 22.51 (0.35) & KPNO4m + Mosaic3 \\
2016-02-06 & 57424.38 & $z$ & 22.50 (0.33) & KPNO4m + Mosaic3 \\
2016-02-15 & 57433.38 & $r$ & $>$22.70 & Bok + 90prime \\
2017-03-22 & 57834.34 & $z$ & 22.03 (0.18) & KPNO4m + Mosaic3 \\
2017-03-25 & 57837.31 & $z$ & 22.06 (0.16) & KPNO4m + Mosaic3 \\
2017-05-17 & 57890.21 & $g$ & $>$22.60 & Bok + 90prime \\
\hline
\end{tabular} \\
\tablefoot{Johnson-Bessell $B$ and $V$ data are in the Vega magnitude scale, while Sloan $g$, $r$, $i$, $z$ and Pan-STARRS $y$ data are in the AB magnitude scale. 
The Pan-STARRS data were obtained after stacking individual images collected from March 2010 to January 2015.\\ $(\ast)$ Average MJD of the stacked image.} \end{table*}
\begin{table*}
\caption{\label{tab:photo_maxima} Epochs (MJDs) and apparent magnitudes of the two light-curve peaks of \object{AT~2021blu} in the different filters.}
\centering
\begin{tabular}{cccccc}
\hline\hline
Filter & $\lambda_{\rm mean}$~(\AA) & MJD~(peak~1) & Magnitude~(peak~1) & MJD~(peak~2) & Magnitude~(peak~2) \\
\hline
$UVW2$ & 2140 & $59258.81 \pm 0.27$ & $17.07 \pm 0.02$ & -- & -- \\
$UVM2$ & 2273 & $59258.86 \pm 0.21$ & $16.86 \pm 0.04$ & -- & -- \\
$UVW1$ & 2688 & $59258.85 \pm 0.32$ & $16.63 \pm 0.05$ & -- & -- \\
$U$ & 3416 & $59258.86 \pm 0.22$ & $16.04 \pm 0.04$ & $59393.2 \pm 1.8$ & $18.51 \pm 0.05$ \\
$B$ & 4313 & $59258.88 \pm 0.08$ & $16.90 \pm 0.02$ & $59399.3 \pm 2.7$ & $18.34 \pm 0.03$ \\
$g$ & 4751 & $59258.89 \pm 0.10$ & $16.69 \pm 0.02$ & $59401.8 \pm 5.3$ & $17.89 \pm 0.02$ \\
$cyan$ & 5409 & $59258.89 \pm 0.18$ & $16.69 \pm 0.04$ & $59403.9 \pm 15.4$ & $17.62 \pm 0.11$ \\
$V$ & 5446 & $59258.91 \pm 0.07$ & $16.69 \pm 0.02$ & $59404.2 \pm 5.4$ & $17.48 \pm 0.03$ \\
$r$ & 6204 & $59258.96 \pm 0.16$ & $16.51 \pm 0.02$ & $59410.0 \pm 8.9$ & $17.18 \pm 0.06$ \\
$orange$ & 6866 & $59258.97 \pm 0.13$ & $16.61 \pm 0.02$ & $59412.8 \pm 10.6$ & $17.08 \pm 0.07$ \\
$i$ & 7519 & $59258.99 \pm 0.07$ & $16.56 \pm 0.02$ & $59412.3 \pm 4.0$ & $16.93 \pm 0.02$ \\
$z$ & 8992 & $59259.16 \pm 0.17$ & $16.66 \pm 0.02$ & $59415.6 \pm 7.0$ & $16.82 \pm 0.03$ \\
$J$ & 12350 & $>59264.11$ & $<16.20$ & $59419.7 \pm 10.8$ & $15.91 \pm 0.08$ \\
$H$ & 16620 & $>59264.12$ & $<16.03$ & $59421.6 \pm 10.3$ & $15.81 \pm 0.08$ \\
$K$ & 21590 & $>59264.12$ & $<15.74$ & $59425.3 \pm 8.7$ & $15.59 \pm 0.09$ \\
\hline
\end{tabular} \\
\tablefoot{Johnson-Bessell $U$, $B$, and $V$, UV, and NIR magnitudes are in the Vega system, while Sloan $g$, $r$, $i$, $z$ data are in the AB magnitude system. 
} \end{table*} \subsubsection{Pre-outburst data} \label{sect:pre2021blu} The field of \object{AT~2021blu} was extensively observed in the last few years. We inspected images released by the main surveys through public archives. To increase the S/N, we created periodic stacks\footnote{Information on the pre-outburst ZTF stacked images of the \object{AT~2021blu} field is provided in Appendix \ref{Appendix:A.1} (Table \ref{Table:A1.5}).} using good-quality ZTF images, and a source was detected at the location of the LRN already in 2018. In addition, very deep images taken with ground-based, mid-sized telescopes in 2006 and early 2016 show a source of $\sim 23$~mag at the LRN location (Fig. \ref{Fig:HST}). In particular, Johnson-Bessell $B$ and $V$, and Sloan $r$ images taken in December 2006 with the Isaac Newton Telescope (INT) equipped with the wide-field camera (WFC) reveal a faint source at the LRN position, with $B = 23.50 \pm 0.14$, $V = 23.03 \pm 0.26$, and $r = 22.98 \pm 0.12$~mag. The source is also detected in deep PS1 reference images obtained by stacking frames from March 2010 to January 2015. Specifically, the stacked PS1 $r$-band frame shows the source at the same magnitude as in 2006, with an intrinsic colour of $g-r \approx 0.27$~mag. The magnitudes of the source at the position of \object{AT~2021blu} in the 2006--2017 period are reported in Table \ref{tab:photo_pre}. We further discuss these archival data in Sect. \ref{Sect:progenitors_archive}, as they likely provide us with the most stringent information on the quiescent progenitor of \object{AT~2021blu}. We note, however, that the low spatial resolution and the relatively low S/N of these images do not allow us to rule out the presence of contaminating sources in the proximity of the LRN location. 
Furthermore, archive frames in the Sloan $g$, $r$ and $z$ filters obtained in 2016 with the 2.3~m Bok and the 4~m Mayall telescopes (both hosted at the Kitt Peak Observatory) equipped with mosaic cameras still show the putative progenitor of \object{AT~2021blu}. Over the decade, this source experienced modest magnitude evolution, and in February 2016 it had marginally brightened by $\sim 0.15$--0.2~mag in the $g$ and $r$ bands (see Fig. \ref{Fig:HST}, and Table \ref{tab:photo_pre}). More recent images show this source becoming progressively more luminous: within one year (by March 2017) it had brightened by $\sim 0.5$~mag in the $z$ band, and the object was repeatedly detected at later epochs. The $r$-, $w$- and $i$-band light curves from December 2019 to January 2021 (approximately $-420$~d to $-40$~d with respect to the $g$-band maximum) show some luminosity fluctuations superposed on a global, slow luminosity rise (Fig. \ref{Fig:21blu_lc}, top panel), very similar to those observed in other LRNe \citep{bla17,bla20,pasto19b,pasto21a,pasto21b}. The $r-i$ colour remains at about 0.15--0.2~mag during that period. As for other members of this family, this slow luminosity rise follows the ejection of the common envelope, and it is possibly powered by collisions between circumbinary shells. \subsubsection{Photometric evolution of the outburst} \label{sect:outburstlc2021blu} The object is later observed in outburst (in early February 2021) by ATLAS on MJD = 59246.49 (at an $o$-band magnitude of $18.73 \pm 0.15$). The object experiences a fast rise, reaching the first (blue) maximum light in slightly less than two weeks. The epoch of the $g$-band maximum is MJD = $59258.89 \pm 0.10$, which is used hereafter as a reference for \object{AT~2021blu}. 
From the apparent magnitudes at the first peak, $g = 16.69 \pm 0.02$~mag ($V = 16.69 \pm 0.02$~mag), we estimate the following absolute magnitudes and intrinsic colours: $M_{g,{\rm pk1}} = -13.07 \pm 0.15$ and $M_{V,{\rm pk1}} = -13.06 \pm 0.15$~mag, with $g-r = 0.16 \pm 0.03$ and $B-V = 0.19 \pm 0.03$~mag. The UV light curves obtained with {\it Swift} rapidly reach maximum brightness at nearly the same time as the $g$-band peak, at magnitudes between 16.5 and 17 (depending on the {\it Swift} UV filter; Fig. \ref{Fig:21blu_lc}, bottom-left panel). The first peak is followed by a luminosity decline which lasts about 75~days, during which \object{AT~2021blu} fades by $\sim 4.5$~mag in the $U$ band, 3.1~mag in the $B$ band, 2~mag in the $V$ band, 2.7~mag in the $g$ band, 1.6~mag in the $r$ band, and 1.5~mag in the $i$ band (see Fig. \ref{Fig:21blu_lc}, bottom-right panel). A decline similar to that of the red optical bands is also observed in the NIR domain, although this phase was not well sampled. The UV light curves exhibit a very rapid post-peak decline, more rapid than the one observed in the $U$ band, with the LRN fading below the detection threshold of {\it Swift}/UVOT about three weeks after maximum. Later, the luminosity rises again in all bands, reaching a second, broader peak, earlier in the blue filters. In particular, the light curve reaches the second $g$-band maximum on MJD~=~$59401.8 \pm 5.3$, about 143~days after the first peak. The apparent magnitude at the second maximum is $g = 17.89 \pm 0.02$~mag, which provides an absolute magnitude of $M_{g,{\rm pk2}} = -11.87 \pm 0.16$~mag, while the reddening-corrected colour at this epoch is $g-r = 0.71 \pm 0.08$~mag. The second peak is reached slightly later (on MJD~=~$59404.2 \pm 5.4$) in the $V$ band, at a magnitude of $V = 17.48 \pm 0.03$~mag ($M_{V,{\rm pk2}} = -12.26 \pm 0.15$~mag). At this epoch, we determine a reddening-corrected colour of $B-V = 0.84 \pm 0.04$~mag.
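The conversion from apparent to absolute peak magnitude is standard distance-modulus arithmetic, $M = m - \mu - A$. The sketch below is illustrative only: the distance modulus and Galactic extinction are assumed values (chosen to be consistent with the numbers quoted above; the adopted values are those of the section on the host galaxies, not reproduced in this excerpt).

```python
# Apparent-to-absolute magnitude conversion for the first g-band peak of
# AT 2021blu: M = m - mu - A.  mu and A_g below are ASSUMED, illustrative
# values, picked to reproduce the M_g = -13.07 mag quoted in the text.
m_g = 16.69   # apparent g magnitude at the first peak (from the text)
mu = 29.69    # assumed distance modulus of the host galaxy (mag)
A_g = 0.07    # assumed g-band extinction (mag)

M_g = m_g - mu - A_g
print(f"M_g(pk1) = {M_g:+.2f} mag")
```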
The times and the apparent magnitudes of the two light-curve peaks were estimated through a low-order polynomial fit to the photometric data, and the resulting values for the different filters are reported in Table \ref{tab:photo_maxima}. While the first light-curve maximum is observed nearly at the same time in the different bands, the second maximum is reached earlier in the blue filters than in the red and NIR ones, as expected from a cooling photosphere. The object then disappeared behind the Sun soon after the second maximum, and it was recovered two months later, showing a very fast decline in all the bands lasting about 100~days, with a slower decline rate in the NIR bands. After a faint minimum at $i = 23.06 \pm 0.31$~mag ($M_i = -6.66 \pm 0.35$~mag), the luminosity shows a short-duration hump lasting about 30--40~days in the red-optical and NIR bands, which is $\sim 0.5$~mag brighter than the minimum. Finally, the light curves settle to nearly constant magnitude in all bands ($i = 22.80 \pm 0.18$~mag, hence $M_i = -6.92 \pm 0.23$~mag). We note that a similar red hump was observed at late phases in other LRNe, including \object{AT~2017jfs} \citep{pasto19b} and \object{AT~2021biy} \citep{cai22}. A comparison of the Sloan $r$ absolute light curves of the three LRNe presented in this paper with that of \object{AT~2021biy} \citep{cai22} is shown in Fig. \ref{Fig:LRN_absolute}. While the late-time red hump is evident in \object{AT~2021biy}, it is a lower-contrast but more persistent feature in the light curve of \object{AT~2021blu}. Although the nature of these bumps has not been convincingly explained so far, extra energy radiated by ejecta collisions with circumstellar shells is a plausible explanation. We note that the two LRNe with short-lasting outbursts in Fig. \ref{Fig:LRN_absolute}, \object{AT~2021afy} and \object{AT~2018bwo}, display a fast-declining light curve without evident late brightenings.
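The peak-fitting procedure described above can be sketched in a few lines: fit a low-order polynomial to the photometry around a maximum and take its extremum. The light-curve shape and noise level below are synthetic (hypothetical), chosen only to mimic the first $g$-band peak of AT~2021blu.

```python
import numpy as np

# Synthetic photometry: a parabolic light curve peaking at MJD 59258.9
# with g = 16.69 mag, plus 0.01 mag Gaussian scatter (all hypothetical).
rng = np.random.default_rng(0)
mjd = np.linspace(59250.0, 59268.0, 19)
g = 16.69 + 0.004 * (mjd - 59258.9) ** 2 + rng.normal(0.0, 0.01, mjd.size)

# Low-order (quadratic) polynomial fit; x is offset to keep polyfit stable
coeff = np.polyfit(mjd - 59250.0, g, 2)
t_peak = 59250.0 - coeff[1] / (2.0 * coeff[0])   # vertex of the parabola
g_peak = np.polyval(coeff, t_peak - 59250.0)
print(f"t(g max) = MJD {t_peak:.2f}, g = {g_peak:.2f} mag")
```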
\subsubsection{{\it Hubble Space Telescope} imaging of AT~2021blu} \label{sect:HSTima} We used a deep ($15 \times 60$~s) Liverpool Telescope (LT) plus IO:O $r$-band image of \object{AT~2021blu} as a reference to search for a possible progenitor in archival {\it Hubble Space Telescope} ({\it HST}) ACS-WFC data\footnote{Program GO-15922, PI R. B. Tully.} taken on 2019 December 29, and available through the Mikulski Archive for Space Telescopes\footnote{\url{https://archive.stsci.edu/}.}. A second epoch\footnote{Program GO-16691, PI R. J. Foley.} of {\it HST} imaging of the \object{AT~2021blu} location was obtained on 2022 February 24, $\sim 1$~yr after the LRN outburst. Unfortunately, \object{AT~2021blu} lies at the edge of the available ACS image obtained in December 2019. In order to match the pre- and post-outburst images, we had to align the two frames using sources in the field that were situated east of \object{AT~2021blu}. Furthermore, only very few point sources were detected in both the LT and {\it HST} data. We hence used a collection of sources in the field to align the images, including compact clusters (that were unresolved by LT) and background galaxies. Using 11 such sources, the position of \object{AT~2021blu} on the {\it HST}+ACS $F606W$ image was localised with a root-mean-square uncertainty of 67~mas (see Fig. \ref{Fig:HST}, bottom-left panel). Within this region, we find a single, bright source which we suggest to be the progenitor. The {\sc DOLPHOT} package \citep{Dolphin00} was used to perform PSF-fitting photometry on the progenitor candidate. While $2 \times 380$~s exposures were taken with ACS in each of the $F606W$ and $F814W$ filters, these images were dithered, and the location of \object{AT~2021blu} lies outside the field of view of one of the dither positions. We are hence left with only a single 380~s image in each of the $F606W$ and $F814W$ filters. We carefully examined this image for cosmic rays, but found that our photometry is unaffected by them.
The following magnitudes are measured for the progenitor candidate: $F606W = 21.826 \pm 0.008$ and $F814W = 21.226 \pm 0.009$~mag (in the Vega magnitude system). All other sources within $1''$ of this candidate are much fainter, and their integrated flux is about $6\%$ and $10\%$ of that of the \object{AT~2021blu} precursor in the $F606W$ and $F814W$ filters, respectively. Given the distance and extinction values adopted in Sect. \ref{Sect:hosts}, we obtain the following absolute magnitudes for the precursor of \object{AT~2021blu}: $M_{F606W} = -7.91 \pm 0.15$ and $M_{F814W} = -8.49 \pm 0.15$~mag. Assuming a 5800~K blackbody consistent with the observed colour, we used the {\sc IRAF} task {\sc SYNPHOT} to calculate a conversion to Sloan filters, which provides $r = 21.82$ and $i = 21.66$~mag (AB system). These magnitudes are significantly brighter than earlier detections from ground-based telescopes, suggesting that the system was already in a pre-eruptive phase. For this reason, these {\it HST} data do not provide stringent information on the nature of the quiescent progenitor system. The second epoch of {\it HST} observations of the \object{AT~2021blu} field was obtained about 26 months later, when the LRN was very faint after the long luminosity decay following the second peak, and before the short-duration hump discussed at the end of Sect. \ref{sect:outburstlc2021blu}. The source at the LRN location was much redder than at the first {\it HST} epoch, with $F606W = 23.392 \pm 0.016$ and $F814W = 21.700 \pm 0.012$~mag (in the Vega system). At this epoch, the integrated flux contribution of all faint sources within a radius of $1''$ from the transient is $33\%$ in $F606W$ and $11\%$ in $F814W$ of the \object{AT~2021blu} flux. This may help to quantify the contamination from background sources in late-time photometry of the LRN obtained with low-spatial-resolution, ground-based facilities.
Finally, applying the same strategy as above to convert magnitudes from the {\it HST} to the Sloan systems, we infer $r = 23.26$ and $i = 22.38$~mag (in the AB system). \section{Luminosity, radius, and temperature evolution} \label{sect:TLR} Adopting an approach similar to that of other LRN studies \citep[see, e.g.][]{cai19,bla20,bla21,pasto21b}, we now estimate the bolometric light curves and the evolution of the temperature and the radius for the three objects. The broad-band light curves illustrated in Sect. \ref{sect:photo} were used to infer the bolometric ones for the three LRNe. To obtain the bolometric luminosity at a selected epoch, we fit the reddening-corrected SED of the object at that epoch with a blackbody function. If no observation in a given band is available at that epoch, its flux contribution is estimated by interpolating the available photometric data at adjacent epochs. Blackbody fits to the data of \object{AT~2021blu} at some selected epochs are shown for illustrative purposes in Fig. \ref{Fig:BBfits}. The bolometric flux and the blackbody temperature ($T_{\rm bb}$), along with their uncertainties, are determined through Monte Carlo simulations, as detailed by \citet{val22}. The procedure is repeated for all epochs with multi-band observations. The resulting bolometric curves of the three LRNe are shown in Fig. \ref{Fig:TLR} (bottom panel) and are compared with those of six well-studied LRNe. \object{AT~2021afy} is one of the brightest objects in our sample. The two bolometric peaks have very similar luminosities, $L_{\rm bol} \approx 2.1 (\pm 0.6) \times 10^{41}$~erg~s$^{-1}$ (where the uncertainty accounts for the errors in the host-galaxy distance and the reddening estimate; see Sect. \ref{Sect:hosts}). Only \object{AT~2017jfs} is more luminous than \object{AT~2021afy}. In contrast with the expectation for such a bright LRN, \object{AT~2021afy} remains luminous for a relatively short time ($\sim 3$ months).
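The core of this SED-fitting step can be sketched as follows: fit a Planck function to the dereddened broad-band fluxes and convert the fitted solid angle and temperature into a luminosity. This is a minimal sketch only, without the Monte Carlo error propagation of \citet{val22}; the sampling wavelengths, distance, and blackbody parameters are illustrative values chosen to resemble the first peak of AT~2021blu.

```python
import numpy as np
from scipy.optimize import curve_fit

# Physical constants in cgs units
H, C, KB, SIGMA = 6.626e-27, 2.998e10, 1.381e-16, 5.670e-5

def bb_flux(wave_cm, T, scale):
    """Blackbody flux density: scale * B_lambda(T), where the free
    parameter scale = pi * (R/D)^2 absorbs the photospheric solid angle."""
    x = H * C / (wave_cm * KB * T)
    return scale * 2.0 * H * C**2 / wave_cm**5 / np.expm1(x)

# Synthetic dereddened SED sampled at optical/NIR effective wavelengths,
# generated from a blackbody chosen to resemble the first peak of
# AT 2021blu (T ~ 8800 K, L ~ 6.5e40 erg/s); all values illustrative.
wave = np.array([3600., 4400., 5500., 6600., 8000., 12500., 22000.]) * 1e-8  # cm
T_true, scale_true = 8800.0, 7.0e-23
flux = bb_flux(wave, T_true, scale_true)

(T_fit, s_fit), _ = curve_fit(bb_flux, wave, flux, p0=(6000.0, 1e-23), maxfev=20000)

# L = 4 pi D^2 F_bol, with F_bol = scale * sigma * T^4 / pi
D = 2.65e25  # assumed distance (cm), roughly 8.6 Mpc, illustrative
L_bol = 4.0 * D**2 * s_fit * SIGMA * T_fit**4
print(f"T_bb = {T_fit:.0f} K, L_bol = {L_bol:.2e} erg/s")
```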
\object{AT~2021blu} has a quite luminous first peak, with $L_{\rm bol} \approx 6.5 \times 10^{40}$~erg~s$^{-1}$, followed five months later by a fainter, second, broad maximum at $L_{\rm bol} \approx 3.1 \times 10^{40}$~erg~s$^{-1}$. The overall bolometric evolution resembles that of \object{AT~2018hso} \citep{cai19}, which is only marginally brighter than \object{AT~2021blu}. As already mentioned in Sect.~\ref{sect:2018bwo_lc}, we could not follow the early-time evolution of \object{AT~2018bwo}. Consequently, we cannot precisely constrain the time of the early maximum, nor the duration of the LRN outburst. However, we argue that the outburst onset occurred several weeks before the discovery. We arbitrarily fixed the epoch of the early maximum at 40~days before our earliest detection. The object already appears to be on the plateau (or on a low-contrast second broad peak), with an average bolometric luminosity slightly exceeding $L_{\rm bol} \approx 10^{40}$~erg~s$^{-1}$. In this phase, it is marginally brighter than \object{AT~2020hat}, an object that did not show a high-contrast early peak, and one order of magnitude brighter than \object{AT~2019zhd}, the lowest-luminosity object of the sample. \begin{table*} \caption{\label{tab:speclog}Log of spectroscopic observations of the three LRNe discussed in this paper. } \centering \begin{tabular}{lcccccc} \hline\hline UT Date & MJD & Phase & Instrumental configuration & Exp. time (s) & Range (\AA) & Res.
(\AA) \\ \hline \multicolumn{7}{c}{AT~2018bwo} \\ \hline 2018-05-23 & 58261.18 & +8.3 & $11.1\times9.8$~m~SALT + RSS + PG0900 & 900 & 3640-7260 & 6 \\ 2018-05-26 & 58263.42 & +10.5 & 8.1~m Gemini-South + Flamingos-2 + JHG5801 & 2400 & 8920-18000 & $\cdots$\\ 2018-06-05 & 58274.41 & +21.5 & 6.5~m Magellan/Baade + FIRE & 2029 & 8200-24680 & $\cdots$ \\ 2018-06-18 & 58287.30 & +34.4 & 4.1~m SOAR + Goodman + grt.400 & 1200 & 3700-7120 & 6.4 \\ 2018-07-11 & 58310.59 & +57.7 & 10~m~Keck-I +LRIS +600/4000 & 1108 & 5600-10200 & 6 \\ % 2018-08-25 & 58355.15 & +102.2 & 10.4~m~GTC+OSIRIS + R1000B + R1000R & 2400+2400 & 3630-9800 & 7,8 \\ 2018-09-12 & 58373.26 & +120.4 & 6.5~m Magellan/Baade + FIRE & 1522 & 8200-24680 & $\cdots$ \\ 2018-09-17 & 58380.90 & +128.0 & 10~m~Keck-I +LRIS +600/4000 & 3600 & 5700-10200 & 6 \\ % \hline \multicolumn{7}{c}{AT~2021afy} \\ \hline 2021-01-25 & 59239.26 & +7.6 & 10.4~m GTC + OSIRIS + R1000B & 3000 & 3630-7870 & 7 \\ 2021-02-17 & 59262.25 & +30.6 & 10.4~m GTC + OSIRIS + R1000B & 3600 & 3640-7870 & 7 \\ 2021-02-24 & 59269.19 & +37.5 & 10.4~m GTC + OSIRIS + R1000R & 3600 & 5080-10200 & 8 \\ 2021-04-06 & 59310.10 & +78.4 & 10.4~m GTC + OSIRIS + R1000R & 3600 & 5100-10400 & 8 \\ 2021-04-23 & 59327.02 & +95.3 & 10.4~m GTC + OSIRIS + R1000R & 2700 & 5100-10400 & 8 \\ \hline \multicolumn{7}{c}{AT~2021blu} \\ \hline 2021-02-06 & 59251.45 & $-$7.4 & 2.0~m FNT + FLOYDS & 3600 & 3500-10000 & 15 \\ 2021-02-07 & 59252.30 & $-$6.7 & 3.05~m Shane + Kast + 600/4310+300/7500 & 2460+2400 & 3620-10700 & 5,9 \\ 2021-02-10 & 59255.02 & $-$3.9 & 2.56~m NOT + ALFOSC + gm4 & 1800 & 3400-9650 & 14 \\ 2021-02-10 & 59255.46 & $-$3.4 & 2.0~m FNT + FLOYDS & 2700 & 3500-10000 & 15 \\ 2021-02-11 & 59256.24 & $-$2.6 & 3.05~m Shane + Kast + 452/3306+300/7500 & 1230+1200 & 3300-10300 & 5,9 \\ 2021-02-16 & 59261.05 & +2.2 & 1.82~m Copernico + AFOSC + VPH7 & 2700 & 3250-7270 & 14 \\ 2021-02-16 & 59261.05 & +2.2 & 3.6~m DOT + ADFOSC + 676R & 900 & 3550-8850 & 12 \\ 2021-02-18 & 
59263.02 & +4.1 & 2.56~m NOT + ALFOSC + gm4 & 2440 & 3400-9700 & 14 \\ 2021-02-18 & 59263.29 & +4.4 & 3.05~m Shane + Kast + 600/4310+300/7500 & 2160+2100 & 3630-10740 & 5,10 \\ 2021-02-21 & 59266.12 & +7.2 & 3.6~m DOT + ADFOSC + 676R & 1200 & 3800-8880 & 11.5 \\ 2021-02-23 & 59268.37 & +9.5 & 10.0~m Keck-II + NIRES & & 9640-24660 & $\cdots$ \\ 2021-02-25 & 59270.96 & +12.1 & 10.4~m GTC + OSIRIS + R1000B & 540 & 3630-7880 & 7 \\ 2021-03-02 & 59275.18 & +16.3 & 3.6~m DOT + ADFOSC + 676R & 1800 & 3700-8870 & 11.5 \\ % 2021-03-05 & 59278.04 & +19.2 & 3.6~m DOT + ADFOSC + 676R & 1800 & 3900-8890 & 11.5 \\ 2021-03-07 & 59280.39 & +21.5 & 3.05~m Shane + Kast + 600/4310+300/7500 & 3060+3000 & 3620-10730 & 5,9 \\ 2021-03-14 & 59287.02 & +28.1 & 1.82~m Copernico + AFOSC + VPH7 & 3600 & 3350-7270 & 15 \\ 2021-03-15 & 59288.45 & +29.6 & 2.0~m FNT + FLOYDS & 2700 & 3500-10000 & 15 \\ 2021-03-18 & 59291.09 & +32.2 & 3.58~m TNG + LRS + LRB/LRR & 1800+1800 & 3350-9700 & 10,10 \\ 2021-03-30 & 59303.47 & +44.6 & 2.0~m FNT + FLOYDS & 2700 & 4000-10000 & 15 \\ 2021-04-02 & 59306.92 & +48.0 & 2.56~m NOT + ALFOSC + gm4 & 3600 & 3400-9680 & 14 \\ 2021-04-08 & 59312.46 & +53.6 & 2.0~m FNT + FLOYDS & 3600 & 3500-10000 & 15 \\ 2021-04-19 & 59323.89 & +65.0 & 10.4~m GTC + OSIRIS + R1000B + R1000R & 1500+1500 & 3630-10400 & 7,8 \\ 2021-05-05 & 59339.99 & +81.1 & 3.58~m + TNG+ LRS + LRB/LRR & 5400+3600 & 3400-9600 & 10,10 \\ 2021-05-12 & 59346.32 & +87.4 & 10.0~m Keck-I + LRIS + 600/400+400/8500 & 900+900 & 3150-10150 & 5,6 \\ 2021-05-18 & 59352.98 & +94.1 & 2.56~m NOT + ALFOSC + gm4 & 3800 & 3400-9600 & 18 \\ 2021-05-30 & 59364.97 & +106.1 & 2.56~m NOT + ALFOSC + gm4 & 3000 & 3400-9650 & 14 \\ 2021-06-04 & 59369.28 & +110.4 & 3.05~m Shane + Kast + 452/3306+300/7500 & 1230+1200 & 3400-10000 & 5,9 \\ 2021-06-15 & 59380.92 & +122.0 & 10.4~m GTC + OSIRIS + R2000B/R2500R & 1200+900 & 3850-7680 & 3.1,3.4 \\ 2021-06-29 & 59394.91 & +136.0 & 10.4~m GTC + OSIRIS + R1000B & 900 & 3630-7870 & 7 \\ 
2021-07-08 & 59404.91 & +146.0 & 10.4~m GTC + OSIRIS + R1000B & 1200 & 3630-7870 & 7 \\ \hline \end{tabular} \tablefoot{For \object{AT~2018bwo}, the phases are computed from the first LRN detection (2018 May 14; MJD = 58252.905). The phases for the other two objects (\object{AT~2021afy} and \object{AT~2021blu}) are computed with respect to their $g$-band light-curve peaks, which occurred on MJD = $59231.7\pm1.6$ and MJD = $59258.89\pm0.10$, respectively. The resolutions reported here are the FWHM values of the night-sky lines. For further information on the instruments, and identification of the acronyms, see Appendix \ref{Appendix:B}.} \end{table*} The evolution of $T_{\rm bb}$ is shown in the top-left panel of Fig.~\ref{Fig:TLR}. \object{AT~2021blu} is one of the hottest objects in the sample. The lack of simultaneous observations in multiple bands before the LRN outburst makes the $T_{\rm bb}$ estimates very uncertain. However, during the slow luminosity rise of the pre-outburst phase, $T_{\rm bb}$ remains between 7000 and 8000~K. Then, the temperature rises while the LRN approaches the first maximum. At peak, $T_{\rm bb} \approx 8800$~K; it then declines to a relative minimum ($T_{\rm bb} \approx 4300$~K) three months after the bolometric maximum. During the photometric rise to the second broad peak, the temperature grows again and reaches a maximum of $T_{\rm bb} \approx 5000$~K. Finally, it declines monotonically to $T_{\rm bb} \approx 2600$~K at $\sim 300$~d, and then more rapidly, reaching $\sim 1800$~K one month later, when the bolometric light curve reaches a local minimum before the red hump discussed in Sect. \ref{sect:outburstlc2021blu}. One may wonder whether the assumption of a thermal continuum at such late phases is appropriate in the case of \object{AT~2021blu}. However, although \object{AT~2021blu} was not observed spectroscopically beyond $\sim 5$ months past maximum (see Sect.
\ref{sect:2021blu_spec} and Table \ref{tab:speclog}), the SED is still consistent with a blackbody (Fig. \ref{Fig:BBfits}). Furthermore, LRN \object{AT~2021biy} \citep{cai22} showed a similar behaviour in the late-time light curve and in the temperature evolution, while its spectra resembled those of intermediate M-type stars. All of this makes the assumption of thermal radiation at very late epochs plausible also for \object{AT~2021blu}. \object{AT~2021afy} has a smoother temperature evolution. From the first days after the outburst onset up to maximum light, $T_{\rm bb}$ remains nearly constant, at $\sim 7000$~K. Then, two weeks after maximum, it slowly declines to a minimum of $T_{\rm bb} \approx 6000$~K, and rises again up to $T_{\rm bb} \approx 6700$~K at the time of the second light-curve maximum. The late phases are characterised by a linear temperature decline, down to $T_{\rm bb} \approx 2800$--2900~K at phase 110~d. Finally, the blackbody temperature of \object{AT~2018bwo} slowly declines from $T_{\rm bb} \approx 3700$~K to $\sim 2500$~K over the course of the follow-up campaign, similar to \object{AT~2020hat} during the plateau \citep{pasto21a} and \object{AT~2015dl} at the time of the second light-curve peak \citep{bla17}. Regardless of the exact outburst epoch, \object{AT~2018bwo} appears to have a cooler photosphere than the other comparison objects, in agreement with \citet{bla21}. We can then estimate the evolution of the photospheric radius ($R_{\rm ph}$) for the three LRNe (Fig. \ref{Fig:TLR}, top-right panel). The $R_{\rm ph}$ value for \object{AT~2021afy} initially rises from 3300~R$_\odot$ to 5000~R$_\odot$ at the first maximum. After maximum, it increases more slowly, reaching $R_{\rm ph} \approx 8000$--9000~R$_\odot$ over three months later. The radius evolution of \object{AT~2018bwo} is somewhat similar, with $R_{\rm ph}$ ranging from about 3800~R$_\odot$ to 6500~R$_\odot$ over the two months following the discovery.
We note that both \object{AT~2021afy} and \object{AT~2018bwo} were observed in the optical bands at phases later than 110--120~d, but these observations mostly provide detection limits. In contrast, the two LRNe are clearly detected in the NIR bands, indicating that the emission is predominantly in the IR domain \citep{bla21}. This incomplete SED information makes the bolometric luminosities inferred from single-blackbody fits uncertain and, consequently, the resulting temperature and radius values unreliable at very late phases. The well-sampled panchromatic light curve of \object{AT~2021blu} allows us to study in detail how its $R_{\rm ph}$ evolves with time. In the pre-outburst phase, $R_{\rm ph}$ remains in the range 260--350~R$_\odot$. At this phase, we expect the photosphere to be located in the common envelope. From about $-11$~d to maximum light, $R_{\rm ph}$ rises from 1000~R$_\odot$ to 1900~R$_\odot$. After the peak, the photospheric radius initially declines to a local minimum observed two weeks after maximum ($R_{\rm ph} \approx 1500$~R$_\odot$) and then rises again until $\sim 105$~d, reaching a value of $R_{\rm ph} \approx 3750$~R$_\odot$. This is followed by a shallow dip ($R_{\rm ph} \approx 3300$~R$_\odot$ at nearly 120~d) and a further increase. The radius, in fact, reaches a new maximum ($R_{\rm ph} \approx 4200$~R$_\odot$) soon after the broad light-curve peak; the photosphere had then receded by a few hundred solar radii by the time the object was re-observed after the seasonal gap. This phase is followed by a new increase of the photospheric radius, which is initially slow but later accelerates, with $R_{\rm ph}$ rapidly rising after $\sim 310$~d to reach $\approx 6500$~R$_\odot$ at $\sim 330$~d, when the light curve reaches a minimum before the very late red and NIR hump. This feature, also noticed in \object{AT~2021biy} \citep{cai22} at a similar phase, can result from an additional source of energy, such as ejecta collisions with circumstellar material.
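The photospheric radii quoted above follow from the blackbody relation $L = 4\pi R_{\rm ph}^2 \sigma T_{\rm bb}^4$. As a consistency check, the sketch below plugs in the first-peak values of AT~2021blu quoted in the text and recovers, to within the rounding of $L$ and $T_{\rm bb}$, the $\sim 1900$~R$_\odot$ reported at maximum.

```python
import math

SIGMA = 5.670e-5   # Stefan-Boltzmann constant (erg cm^-2 s^-1 K^-4)
R_SUN = 6.957e10   # solar radius (cm)

def photospheric_radius(L_bol, T_bb):
    """Radius (cm) from the blackbody relation L = 4 pi R^2 sigma T^4."""
    return math.sqrt(L_bol / (4.0 * math.pi * SIGMA * T_bb**4))

# First peak of AT 2021blu: L_bol ~ 6.5e40 erg/s, T_bb ~ 8800 K (from the text)
R_ph = photospheric_radius(6.5e40, 8800.0) / R_SUN
print(f"R_ph ~ {R_ph:.0f} R_sun")
```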
The comparisons in Fig. \ref{Fig:TLR} suggest not only that LRNe span a wide range of luminosities, but also that there is evident heterogeneity in the bolometric light-curve shapes, with some objects showing a luminous early peak, while others (including \object{AT~2020hat} and, to a lesser extent, \object{AT~2021afy}) have a low-contrast first peak. The same heterogeneity is observed in the evolution of the temperature and radius at the photosphere; if LRNe are produced by the coalescence of the stellar cores in a binary system, this diversity can be considered an indication that the two stellar components span a wide range of parameters. \section{Spectroscopic data} \label{sect:spec} \citet{bla21} presented some optical and NIR spectra of \object{AT~2018bwo}. We complement their observations with an additional set of spectra obtained from a few days after the LRN discovery to $\sim 5$ months later. The spectra cover three phases of the LRN evolution: soon after the discovery, at the end of the plateau, and at very late phases when most of the LRN flux is emitted in the IR domain. We obtained five epochs of spectroscopy for \object{AT~2021afy}. All observations were performed after the first light-curve peak, until $\sim 95$~d. Given the faint apparent magnitude of the object, all spectra were obtained using the 10.4~m Gran Telescopio Canarias (GTC) with the Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy (OSIRIS). Finally, \object{AT~2021blu} has more extensive spectroscopic coverage, ranging from one week before maximum light to $\sim 146$~d. The instruments used for the spectroscopic observations of the three objects are listed in Appendix \ref{Appendix:B}, and basic information on the spectra is provided in Table~\ref{tab:speclog}. All spectra were taken at the parallactic angle \citep{fil82}, except those obtained at Keck-I, where an atmospheric dispersion corrector is employed.
The spectra were reduced using tasks in {\sc IRAF}\footnote{{\sc IRAF} was distributed by the National Optical Astronomy Observatory, which was operated by the Association of Universities for Research in Astronomy (AURA), Inc., under a cooperative agreement with the National Science Foundation (NSF).} or with dedicated pipelines such as the {\sc FOSCGUI}\footnote{{\sc FOSCGUI} is a {\sc Python}-based graphic user interface (GUI) developed by E. Cappellaro, aimed at extracting supernova spectroscopy and photometry obtained with FOSC-like instruments. A package description can be found at \url{http://sngroup.oapd.inaf.it/foscgui.html}.} tool. The different tools perform the usual preliminary reduction steps, including bias subtraction and flatfield correction of the two-dimensional images. Then, the spectra are calibrated in wavelength using comparison-lamp spectra and the night-sky lines, and one-dimensional spectra are optimally extracted. The spectra are flux-calibrated using spectra of standard stars taken during the night, and the calibration is finally checked against the available photometry. Finally, the broad absorption bands of O$_2$ and H$_2$O due to Earth's atmosphere are removed using the spectra of early-type stars, which are characterised by a nearly featureless continuum at the wavelengths of the telluric bands. \subsection{AT~2018bwo} \label{sect:2018bwo_spec} The spectra of \object{AT~2018bwo}, shown in Fig. \ref{Fig:specseq18bwo}, have a red continuum with a number of molecular bands (primarily TiO), prominent both in the optical and the NIR regions. While our spectra complement those available in the literature, for a detailed line identification we direct the reader to \citet{bla21}. Our spectra are corrected only for Milky Way reddening, as specified in Sect. \ref{Sect:hosts}. Hereafter, the phases will be given with reference to the time of the earliest LRN detection (MJD = 58252.9).
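The wavelength-calibration step mentioned above amounts to fitting a low-order polynomial dispersion solution (pixel to wavelength) through the measured centroids of identified arc-lamp or night-sky lines. A minimal sketch follows; the pixel positions, the assumed dispersion relation, and the centroiding errors are all hypothetical, standing in for a real line list.

```python
import numpy as np

# Hypothetical arc-lamp line list: detector-pixel centroids and the
# corresponding (synthetic) reference wavelengths in Angstrom.
pix = np.array([150.0, 400.0, 700.0, 1000.0, 1400.0, 1800.0])
lam_true = 3600.0 + 1.65 * pix + 2.0e-5 * pix**2                   # assumed dispersion
lam = lam_true + np.array([0.05, -0.08, 0.03, 0.06, -0.04, 0.02])  # centroiding errors

# Low-order polynomial dispersion solution, pixel -> wavelength
disp = np.polynomial.Polynomial.fit(pix, lam, deg=2)
rms = float(np.sqrt(np.mean((disp(pix) - lam) ** 2)))
wavelength_axis = disp(np.arange(2048))  # wavelength of every detector pixel
print(f"dispersion-solution RMS = {rms:.3f} A")
```

In real pipelines the fit residuals (here the RMS) are inspected and outlier lines rejected before the solution is applied to the science frame.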
Our first optical spectrum, at phase $+8.3$~d, is noisy; hence, the narrow metal lines in absorption typical of LRNe in this phase cannot be discriminated from noise patterns. We detect narrow emission lines (H, [O~II], [O~III], [N~II], [S~II]) caused by contamination from host-galaxy H~II regions, along with some bumps in the continuum which are possibly due to the emerging TiO features (in particular at 5200--5400~\AA). The second spectrum was obtained two days later, and covers only the NIR domain (Fig. \ref{Fig:specseq18bwo}, top panel). It is characterised by a strong red continuum, but a few broad absorption features are observed at $\sim 10,900$--11,300~\AA\ (a known combination of CN and TiO features), and at $\sim 12,250$--12,650~\AA\ due to AlO and TiO, as proposed by \citet{bla21}. Combining the $+8.3$~d optical spectrum with the NIR spectrum at $+10.5$~d, we measure the continuum temperature with a blackbody fit and find it to be $T_{\rm bb} = 3750 \pm 250$~K. A NIR spectrum was also obtained at $+21.5$~d (Fig. \ref{Fig:specseq18bwo}, top panel); it shows most of the features detected before, along with a prominent absorption band of TiO at 9100--9850~\AA\ \citep{val98}. The AlO plus TiO blend at $\sim 12,250$--12,650~\AA\ is now less evident. The temperature of the continuum, $T_{\rm bb} = 3850 \pm 300$~K, is similar to that observed 11~days earlier, and is also consistent with those reported in Fig. \ref{Fig:TLR} at a similar phase. A second optical spectrum of \object{AT~2018bwo}, with higher S/N, was obtained at $+34.4$~days. In this case, we see a red continuum ($T_{\rm bb} = 3600 \pm 400$~K), a forest of narrow unresolved metal lines \citep[also detected by][]{bla21}, along with some TiO bands, with the strongest being at 6100--6400~\AA. The clear detection of narrow absorption lines of Ba~II and Fe~II allows us to estimate the photospheric velocity at this phase, $\sim 220$~km~s$^{-1}$. 
The H$\alpha$ feature due to the LRN is barely visible, and cannot be disentangled from the narrow H$\alpha$ of a nearby H~II region. The third optical spectrum (phase $+57.7$~d), taken with the 10~m Keck-I telescope, has good S/N. It shows a remarkably red continuum ($T_{\rm bb} = 2750 \pm 200$~K) dominated by broad TiO absorption features. Metal lines (with Ba~II being particularly strong) are still clearly visible. From the position of the minimum of the metal absorption lines, we infer a photospheric velocity of $\sim 220$~km~s$^{-1}$, still constant, and consistent with (although marginally higher than) that reported by \citet{bla21} for an almost coeval spectrum. H$\alpha$ has a P~Cygni profile, with an unresolved emission component and an absorption which is blueshifted by $\sim 500$~km~s$^{-1}$. Very-late-time optical spectra (at $+102.2$ and $+128.0$~d; see Fig. \ref{Fig:specseq18bwo}, bottom panel) show continuum flux only above 7300~\AA, along with very pronounced absorption bands at 7600--8000~\AA, 8200--8750~\AA, 8850--9050~\AA, and above 9200~\AA\ due to TiO, VO, and CN, usually visible at these phases in LRNe \citep[e.g.][]{mar99}. We also obtained a third NIR spectrum at $+120.4$~d, which is very similar to the spectrum obtained 110.6~d after the first LRN detection\footnote{This phase corresponds to $+103.1$~d adopting their reference epoch.} shown by \citet{bla21}. We confirm the detection of a number of molecular bands (TiO, VO, CN, and AlO), along with that of the CO band heads. All of these features are in common with late M-type to early L-type cool stars, as reported by \citet{bla21}. However, while we confirm the detection of the broad molecular bands, our spectrum of \object{AT~2018bwo} does not convincingly support the detection of the numerous narrow metal lines identified in the late-time NIR spectrum of \citet[][see their Fig. 7]{bla21}.
\subsection{AT~2021afy} \label{sect:2021afy_spec} We obtained five GTC+OSIRIS spectra of \object{AT~2021afy}, spanning a period from one week to over three months after maximum brightness. The spectral sequence is shown in Fig. \ref{Fig:specseq21afy}. Deep interstellar Na~I\,D absorption is present at the host-galaxy redshift, which we attribute to material along the LRN line of sight. Assuming a standard gas-to-dust ratio, we expect significant extinction of the transient's light in the host galaxy. We measure this Na~I\,D absorption in the three higher-S/N spectra and find an equivalent width (EW) of $2.4\pm0.7$~\AA. Following \citet{tur03}, we obtain the amount of host-galaxy extinction from the relation between the EW of Na~I\,D and the colour excess, $E_{\rm host}(B-V) = 0.38 \pm 0.11$~mag. Accounting for the Milky Way reddening component, we obtain a total colour excess of $E_{\rm tot}(B-V) = 0.43$~mag (see Sect. \ref{Sect:hosts}). The five spectra, after the correction for the total reddening estimated above, show the typical evolution of LRNe \citep[see, e.g.][]{pasto19a}. The spectrum at $+7.6$~d shows a moderately blue continuum with $T_{\rm bb} = 8100 \pm 700$~K, and prominent lines of the Balmer series in emission with a Lorentzian profile and a full width at half-maximum (FWHM) velocity ($v_{\rm FWHM}$) of $560 \pm 100$~km~s$^{-1}$ (after correction for instrumental resolution). Line blanketing from metal lines (mostly Fe~II) is likely responsible for the flux drop at wavelengths shorter than $\sim 4500$~\AA. The second spectrum, at phase $+30.6$~d, is noisier. It appears to be slightly redder ($T_{\rm bb} = 7300 \pm 700$~K), and H$\alpha$ is significantly weaker but marginally broader, with $v_{\rm FWHM} \approx 640 \pm 190$~km~s$^{-1}$. A higher-S/N spectrum was taken at $+37.5$~d, and it shows a number of metal absorption lines (Fe~II, Ba~II, Sc~II), as observed in other LRNe during the second photometric peak \citep{pasto19a,pasto21a}.
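The EW-to-reddening step above can be sketched in a few lines. The linear coefficients below are those commonly quoted in the supernova literature for the \citet{tur03} relation and are reproduced from memory, so they should be checked against the original work; with the measured EW of 2.4~\AA\ they recover the $E_{\rm host}(B-V) \approx 0.38$~mag quoted above. The Milky Way term is simply the difference between the quoted total and host components.

```python
def ebv_from_naid(ew_angstrom):
    """Host-galaxy colour excess from the Na I D equivalent width, using
    the linear relation commonly attributed to Turatto et al. (2003):
    E(B-V) = 0.16 * EW - 0.01  (coefficients quoted from memory)."""
    return 0.16 * ew_angstrom - 0.01

ew_naid = 2.4                      # EW measured for AT 2021afy (Angstrom)
ebv_host = ebv_from_naid(ew_naid)  # ~0.38 mag, as quoted in the text
ebv_total = ebv_host + 0.05        # plus the Milky Way component (E_tot - E_host)
print(f"E(B-V)_host = {ebv_host:.2f} mag, E(B-V)_tot = {ebv_total:.2f} mag")
```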
The continuum temperature is $T_{\rm bb} = 6900 \pm 600$~K, and H$\alpha$ is still visible in emission, with $v_{\rm FWHM} \approx 655 \pm 165$~km~s$^{-1}$. The narrow metal absorption lines become more prominent at $+78.4$~d, the spectral continuum indicates a much lower temperature ($T_{\rm bb} = 5400 \pm 600$~K), and H$\alpha$ becomes more pronounced, although narrower (with $v_{\rm FWHM} \approx 490 \pm 100$~km~s$^{-1}$). Its profile cannot be well fitted by a Gaussian function, so its FWHM has been obtained through a Lorentzian fit. In this phase, the Ca~II NIR triplet is in emission, becoming the second-strongest spectral feature. The last spectrum (at phase $+95.3$~d; $T_{\rm bb} = 5200 \pm 900$~K) has lower S/N, and shows prominent NIR Ca~II lines and H$\alpha$, the latter with $v_{\rm FWHM} \approx 410 \pm 100$~km~s$^{-1}$. \subsection{AT~2021blu} \label{sect:2021blu_spec} Optical spectra of \object{AT~2021blu} were obtained from $\sim 1$~week before the first blue peak to $\sim 5$~months later, corresponding approximately to the time of the second (red) maximum. We collected almost thirty spectra, although not all of them have good S/N. The sequence with the best-quality spectra is shown in Fig. \ref{Fig:specseq21blu}. All spectra obtained during the first peak (from $-7.4$~d to $+12.1$~d) are very similar, being characterised by a blue continuum (with $T_{\rm bb}$ remaining in the range between 7500 and 8000~K over the entire period) and Balmer emission lines with Lorentzian profiles and a typical $v_{\rm FWHM} \approx 400$--500~km~s$^{-1}$. Paschen lines are also detected in the good-quality $+4.4$~d spectrum, along with numerous multiplets of Fe~II in emission. The H lines are marginally resolved, with $v_{\rm FWHM} = 460 \pm 90$~km~s$^{-1}$.
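The FWHM velocities quoted throughout this section are corrected for instrumental resolution, which for marginally resolved lines is the dominant correction. The standard approach is to subtract the instrumental width in quadrature before converting to a velocity; the sketch below uses illustrative numbers only (a hypothetical measured width and an R1000B-like resolution), chosen to land near the $v_{\rm FWHM}$ values quoted for these LRNe.

```python
import math

C_KMS = 2.998e5  # speed of light (km/s)

def fwhm_velocity(fwhm_obs, fwhm_inst, lam0):
    """FWHM velocity after removing the instrumental width in quadrature.
    Widths in Angstrom; lam0 is the rest wavelength of the line."""
    intrinsic = math.sqrt(fwhm_obs**2 - fwhm_inst**2)
    return C_KMS * intrinsic / lam0

# Illustrative numbers: an Halpha FWHM of 12.5 A measured at ~7 A
# resolution gives v_FWHM ~ 470 km/s, comparable to the quoted values.
v_fwhm = fwhm_velocity(12.5, 7.0, 6562.8)
print(f"v_FWHM ~ {v_fwhm:.0f} km/s")
```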
From about $+19.2$~d to $+32.2$~d, the spectra become progressively redder, the Fe~II emission lines are replaced by absorption features, and H$\alpha$ becomes fainter, although its profile always remains in pure emission. A residual Lorentzian profile still seems to persist, but the highest-resolution spectra in this phase are only marginally resolved (with $v_{\rm FWHM} < 460$~km~s$^{-1}$). In the two spectra at $+21.5$~d and $+22.2$~d, the continuum temperature declines to $T_{\rm bb} = 6500 \pm 600$~K and $T_{\rm bb} = 5950 \pm 250$~K, respectively. Other metal lines are now visible in absorption, including Fe~II, Sc~II, Ba~II, Na~I\,D, Ca~II (H\&K and the NIR triplet), and O~I. The subsequent spectra show even more pronounced metal lines (in particular, the Ba~II multiplets), while the continuum temperature continues its decline from $T_{\rm bb} = 5500 \pm 350$~K at $+48.0$~d to $T_{\rm bb} = 4350 \pm 350$~K at $+94.1$~d (see also Fig. \ref{Fig:TLR}, top-left panel). In this phase, the profile of the H$\alpha$ emission line becomes more asymmetric, with a redshifted emission peak. The FWHM velocity at $+65.0$~d obtained through a Lorentzian fit is $470 \pm 95$~km~s$^{-1}$. Thereafter, the continuum temperature rises again, reaching $T_{\rm bb} = 5300 \pm 450$~K at $+146.0$~d. At this phase (starting $\sim 100$~d after the blue peak), the light curve reaches the broader and redder second maximum. The spectra are dominated by a forest of metal lines, while H$\beta$, visible until now, disappears in the last available spectra (at phases above $\sim +130$~d). The H$\alpha$ profile also becomes more markedly asymmetric with time (see Fig. \ref{Fig:Halpha}, right panel). While our spectroscopic monitoring campaign of \object{AT~2021blu} stopped $\sim 5$~months after maximum brightness, an optical spectrum was obtained by \citet{sora22} $\sim 8$~months after maximum, showing the typical TiO bands observed in LRN spectra at late epochs.
The H$\alpha$ luminosity evolution of \object{AT~2021blu} is shown in the top panel of Fig. \ref{Fig:Halpha_flux}, while the evolution of the Balmer decrement (the H$\alpha$/H$\beta$ flux ratio) is reported in the middle panel. The values inferred for \object{AT~2021blu} are compared with those of \object{AT~2021afy}, while no measurement was performed on the \object{AT~2018bwo} spectra because of the strong contamination of the narrow lines by nearby H~II regions. We note that the evolution of the H$\alpha$ luminosity of both \object{AT~2021blu} and \object{AT~2021afy} roughly follows the global trend of the bolometric light curves. Except at the very early phases, when the H$\alpha$ luminosity of the two objects is comparable, the line is systematically fainter in \object{AT~2021blu}. The evolution of $v_{\rm FWHM}$ for \object{AT~2021blu} and \object{AT~2021afy}, obtained by fitting the H$\alpha$ line with a Lorentzian function, is shown in the bottom panel of Fig. \ref{Fig:Halpha_flux}. We note that $v_{\rm FWHM}$ evolves very slowly in both objects and tends to decrease with time. The Balmer decrement ($\beta$) of \object{AT~2021blu} (Fig. \ref{Fig:Halpha_flux}, middle panel) has a minimum value of $\beta \approx 2$ at around the time of the early light-curve peak, similar to that observed in the $+7.2$~d spectrum of \object{AT~2021afy}. These values are only slightly smaller than those expected from Case B recombination. The Balmer decrement of \object{AT~2021blu} increases up to $\beta \approx 11$--12 one month later, then declines to $\beta \approx 6$ about two months past maximum light, and finally remains nearly constant during the long-lasting light-curve minimum. The high-quality, moderate-resolution GTC spectrum obtained at phase $+122.0$~d reveals the nature of the asymmetry of H$\alpha$. The line is mostly in emission, but a narrow absorption component is observed, blueshifted by $110 \pm 20$~km~s$^{-1}$ (Fig.
\ref{Fig:Halpha}, left panel), similar to that observed in good-resolution spectra of LRNe \object{AT~2020hat} \citep{pasto21a} and \object{NGC4490-2009OT1} \citep{smi16}. This spectrum of \object{AT~2021blu} allows us to identify the forest of lines visible at the time of the second photometric peak (Fig. \ref{Fig:specid21blu}). For the 5600--7600~\AA\ region, we follow the identification performed in the \object{AT~2020hat} spectrum presented by \citet{pasto21a}, given the excellent match of the lines observed in the two spectra, while for the bluest spectral region, we use the transitions listed by \citet{moo45}. The forest of narrow features identified in the \object{AT~2021blu} spectrum in Fig. \ref{Fig:specid21blu} consists of real metal lines and not noise patterns, as the same features are also detected in the best-resolution spectra of other LRNe (see Fig. \ref{Fig:cfr_highres}) at a similar evolutionary stage. In particular, we find evidence for the presence of neutral and singly ionised Fe, Ti, Cr, Sc, V, Sr, Ba, and Y, along with Mn~II. While the detection of Ca~I lines is only tentative, the main Ca~II lines are outside the range of the $+122.0$~d spectrum. However, the H\&K and the NIR triplet of Ca~II are unequivocally detected in the low-resolution spectra at earlier and later epochs. The very strong absorption lines of Ba~II allow us to precisely estimate the photospheric velocity as $250 \pm 20$~km~s$^{-1}$. A NIR spectrum of \object{AT~2021blu} was obtained with the Keck-II telescope equipped with the Near-Infrared Echellette Spectrometer (NIRES; see Fig. \ref{Fig:specNIR21blu}, top panel) about 10~d after the first peak. The continuum matches that of a blackbody with $T_{\rm bb} = 6600 \pm 70$~K. Searching for individual features, we identify only H emission lines of the Paschen and Brackett series, with approximately Lorentzian profiles and a FWHM velocity of $160 \pm 20$~km~s$^{-1}$ (Fig. \ref{Fig:specNIR21blu}, bottom panel).
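Velocity measurements such as the $110 \pm 20$~km~s$^{-1}$ blueshift of the narrow H$\alpha$ absorption and the $250 \pm 20$~km~s$^{-1}$ photospheric velocity from the Ba~II minima follow from the non-relativistic Doppler relation; a minimal sketch:

```python
C_KMS = 2.99792458e5   # speed of light (km/s)
LAM_HALPHA = 6562.8    # rest wavelength of H-alpha (Angstrom)

def doppler_offset_aa(v_kms, lam_rest_aa=LAM_HALPHA):
    """Wavelength offset (Angstrom) for a line-of-sight velocity."""
    return lam_rest_aa * v_kms / C_KMS

# A 110 km/s blueshift displaces the H-alpha absorption by only ~2.4 A,
# which is why a moderate-resolution spectrum is needed to resolve it.
dlam = doppler_offset_aa(110.0)
```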
\section{Discussion} \label{Sect:discussion} From the data for the three transients discussed in this paper, it is evident that LRNe span a very wide range of observational parameters, as reported in previous studies \citep[see, e.g.][]{pasto19a,bla21}. In particular, the light curve of \object{AT~2021afy} exhibits a very small luminosity difference between the two peaks, while in \object{AT~2021blu} the early peak is much more luminous than the second one. Apparently, \object{AT~2018bwo} does not show an early blue peak, although the observations suggest that the object was discovered at a late stage of its evolution. In this small sample, \object{AT~2021blu} is the object with the best observational coverage: we constrained its long-lasting, slowly rising phase prior to the LRN outburst, the classical double-peaked light curve of the outburst itself, and finally a late-time hump in the red optical and NIR light curves. All of this makes \object{AT~2021blu} one of the rare LRNe with comprehensive information on the main photometric parameters along its entire evolution. For \object{AT~2018bwo} and \object{AT~2021blu}, we can also constrain the properties of the progenitor system through the inspection of archival images, when the stars were likely in quiescence. As we show in Sect. \ref{Sect:progenitors_archive}, this photometric information is crucial for inferring the progenitor mass. Other parameters of the progenitor can be estimated through simple models available in the literature (see Sect. \ref{Sect:merging}). Finally, correlations among observational parameters of LRNe are systematically investigated in Sect. \ref{sect:corr}. \subsection{Progenitors} \label{Sect:progenitors_archive} \citet{bla21} performed a detailed analysis of the nature of the stellar system that produced \object{AT~2018bwo}.
In particular, they found a yellow source at the location of \object{AT~2018bwo} in the {\it HST} images obtained in 2004, 14~yr before the LRN outburst. At that epoch, the progenitor system was assumed to be in a quiescent stage. The progenitor's photometry reported by \citet{bla21}, with our assumptions regarding the host-galaxy distance and reddening, provides $M_{F555W} = -5.85$~mag, and colours of $F435W-F555W = 0.49$~mag and $F555W-F814W = 0.67$~mag. Adopting the standard transformations between magnitudes in the natural {\it HST} photometric system and the Johnson-Bessell system (for an F6 star), we obtain $M_V = -5.92 \pm 0.36$~mag. With this absolute magnitude, the binary system producing \object{AT~2018bwo} belongs to the intermediate-luminosity population of LRN progenitors. As discussed by \citet{bla21}, the absolute magnitude of the quiescent progenitor and the luminosity of the LRN outburst are tightly correlated with the mass of the progenitor system. \citet{bla21} compared the photometric parameters of the progenitor of \object{AT~2018bwo} (adopting slightly different reddening assumptions) with both single and binary stellar evolution models, and found that the best-matching progenitor was a binary with a massive yellow supergiant primary, whose mass ranged from 11 to 16~M$_\odot$. The binary interaction then led to the ejection of a common envelope as massive as 0.15--0.5~M$_\odot$ \citep{bla21}. Unfortunately, the photometric evolution of the system after the ejection of the common envelope is poorly constrained, as only a shallow upper limit to the total magnitude of the system is available at that phase ($M_V \gtrsim -7.5$~mag, using the stacked unfiltered images obtained in mid-2017 by DLT40, and scaled to Johnson-Bessell $V$-band photometry). Furthermore, \object{AT~2018bwo} was not observed at early phases because of the gap caused by solar conjunction. 
\citet{bla21} suggested that if the object was not very old when discovered, a very expanded photosphere at the time of the coalescence was necessary to explain its initial red colour. However, we cannot rule out that the object was discovered when it was already at the red peak (or the plateau) phase. Our interpretation is supported by the detection of \object{AT~2018bwo} about 1~week before the discovery epoch (Sect. \ref{sect:2018bwo_lc}), at a similar magnitude. In this respect, \object{AT~2018bwo} was likely discovered at a similar evolutionary stage as LRN \object{UGC12307-2013OT1} presented by \citet{pasto19a}, where the early blue peak was missed owing to the seasonal gap. Given the relatively large distance of the host galaxy ($\sim 49.2$~Mpc), we have limited information about the \object{AT~2021afy} progenitor. {\it HST} imaged the LRN field\footnote{Program GO-8645, PI R. Windhorst.} on 2000 September 7. From an inspection of the available $F300W$ and $F814W$ images, no source is visible at the LRN location down to $\sim 23.6$~mag and $\sim 23.4$~mag, respectively. Furthermore, public stacked images obtained by Pan-STARRS several years before the outburst do not show sources at the location of \object{AT~2021afy}, with upper limits of $g = 23.05$, $r = 23.20$, $i = 23.40$, and $z = 22.81$~mag (Table \ref{Table:A1.4}). Adopting the Johnson-to-Sloan band transformation relations of \citet{jes05} for normal stars, we obtain an upper detection limit of $M_V > -11.66$~mag for the quiescent system. With the ZTF stacked images obtained in mid-2018, shallow upper limits for the slow pre-LRN rise are also derived ($g = 20.95$, $r = 22.05$, $i = 21.33$~mag). Again, using \citet{jes05} conversions, we infer a limit of $M_V > -13.20$~mag for the pre-LRN brightening. This phase is then followed by the classical LRN luminosity evolution, characterised by two peaks with almost the same luminosity, separated by a shallow minimum (see Sect. \ref{sect:2021afy_lc}). 
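The $M_V$ limits quoted above combine the distance modulus, the total extinction (with a standard $R_V = 3.1$), and the \citet{jes05} $g,r \to V$ transformation for normal stars, $V = g - 0.59\,(g-r) - 0.01$; a numeric cross-check with the values from this section:

```python
import math

def sloan_to_V(g, r):
    """Jester et al. (2005) g,r -> V transformation for normal stars."""
    return g - 0.59 * (g - r) - 0.01

def abs_mag(m_app, d_mpc, ebv, rv=3.1):
    """Absolute magnitude from apparent magnitude, distance (Mpc),
    and total colour excess E(B-V), assuming a standard R_V."""
    mu = 5.0 * math.log10(d_mpc * 1e6 / 10.0)   # distance modulus
    return m_app - mu - rv * ebv

D_MPC, EBV = 49.2, 0.43   # AT 2021afy host distance and total E(B-V)

# Pan-STARRS quiescent-progenitor limits (g = 23.05, r = 23.20 mag):
mv_quiesc = abs_mag(sloan_to_V(23.05, 23.20), D_MPC, EBV)   # ~ -11.66 mag

# ZTF pre-LRN stack limits (g = 20.95, r = 22.05 mag):
mv_prelrn = abs_mag(sloan_to_V(20.95, 22.05), D_MPC, EBV)   # ~ -13.20 mag
```

Both limits reproduce the values quoted in the text to within rounding.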
The information available for the quiescent progenitor of \object{AT~2021blu} is less robust than that of \object{AT~2018bwo}. The only pre-outburst {\it HST} observation was taken in December 2019 (Sect. \ref{sect:HSTima}), when the object was already in the slow brightening phase. For this reason, we inspected earlier images taken with ground-based telescopes and found a source with minor variability across the 2006 to 2016 decade (see Sect. \ref{sect:pre2021blu}). In 2006, we infer the following absolute magnitudes and intrinsic colours for the precursor of \object{AT~2021blu}: $M_V = -6.72 \pm 0.30$~mag, $B-V = 0.45 \pm 0.29$~mag, and $M_r = -6.76 \pm 0.19$~mag. We also inspected PS1 template images obtained by stacking numerous frames taken before early 2015, and we inferred the following reddening-corrected absolute magnitudes and colours: $M_g = -6.49 \pm 0.26$~mag, $g-r = 0.27 \pm 0.28$~mag, $r-i = 0.11 \pm 0.27$~mag, $i-z = 0.02 \pm 0.34$~mag, and $M_y > -7.84$~mag. \begin{sidewaysfigure*} \centering {\includegraphics[angle=270,width=10.2cm]{fig18a.ps} \includegraphics[angle=270,width=10.2cm]{fig18b.ps} \includegraphics[angle=270,width=10.2cm]{fig18c.ps} \includegraphics[angle=270,width=10.2cm]{fig18d.ps}} \caption{Location of the \object{AT~2021blu} progenitor in the HRD (cyan starred symbol), and comparison with different solar-metallicity evolutionary tracks. {\it Top-left panel:} Comparison with evolutionary tracks for single stars. The track of a 14~M$_\odot$ star is indicated with a blue solid line, those of 13 and 15~M$_\odot$ with dot-dashed lines, all other tracks with blue dotted lines. {\it Bottom-left panel:} Comparison with tracks for binary systems with initial orbital periods $P \approx 100$~days ($\log(P) = 2$). {\it Top-right panel:} Comparison with tracks for binary systems with initial orbital periods $P \approx 40$~days ($\log(P) = 1.6$).
{\it Bottom-right panel:} Comparison with tracks for binary systems with initial orbital periods $P \approx 15$~days ($\log(P) = 1.2$). Binary tracks with primaries of 12 to 20~M$_\odot$ are shown as lines with different colours. Binary tracks for different mass ratios ($q = M_1/M_2$, where $M_1$ indicates the mass of the primary star and $M_2$ that of the secondary star) are also shown. All tracks are taken from the BPASS database \protect\citep[][]{eld17,sta18}. The insets show a close-up view of the location of the \object{AT~2021blu} progenitor in the HRD, along with the tracks of different configurations of binaries of 13--18~M$_\odot$ that intersect the error bars of the \object{AT~2021blu} progenitor. } \label{Fig:HRDprogenitor} \end{sidewaysfigure*} First, we assume that the measured source is the progenitor star and that the flux contamination of nearby stars is negligible (see Sect. \ref{sect:HSTima}). The main parameters of this source can be estimated by fitting the SED with a blackbody function, as detailed in Sect. \ref{sect:TLR}. The best fit to the SED is obtained with a blackbody of $T_{\rm eff} = 6800 \pm 300$~K (Fig. \ref{Fig:SEDprogenitor}). Given the above colours and accounting for the error bars, the source at the progenitor's location corresponds to a star of F3--F4 spectral type. We also infer $L_{\rm bol} = (1.55 \pm 0.23) \times 10^{38}$~erg~s$^{-1}$ and $R_{0}= 144 \pm 14$~R$_\odot$ for the putative progenitor of \object{AT~2021blu}. We now discuss the issue of the flux contamination from nearby sources in the photometry of the \object{AT~2021blu} progenitor. In Sect. \ref{sect:HSTima}, we estimated that on 2019 December 29 the flux of the contaminating sources within a radius of $1''$ from the transient was $\sim6\%$ in $F606W$ and $\sim10\%$ in $F814W$ of the LRN precursor flux.
If we consider the flux of the quiescent progenitor in the Sloan $r$ and $i$ bands during the 2006--2016 period, the total flux of other stars within $1''$ is estimated to be about $18\%$ and $32\%$ (respectively) of the progenitor flux. Although {\it HST} did not observe the field of \object{AT~2021blu} in blue filters, we note that the contaminating sources are significantly redder than the \object{AT~2021blu} progenitor. For this reason, the contamination is expected to be modest in the blue bands. Removing the contribution of the contaminating source would probably make the progenitor slightly bluer, changing the intrinsic colour to $r-i \approx -0.04$~mag and thus shifting its classification towards an early-F star \citep[see, e.g.][]{fin07,fuk11}. However, given that precise information on contaminating sources is only available for two {\it HST} filters, hereafter we assume that the flux of the source observed from 2005 to 2016 is largely dominated by the progenitor's contribution, with the caveat that the progenitor is possibly slightly hotter ($T \approx 7200$~K) and fainter. To constrain the mass of the \object{AT~2021blu} progenitor, we made use of a grid of BPASS evolutionary-track models \footnote{The tracks are taken from \url{https://bpass.auckland.ac.nz/index.html}.} for single stars and binary systems at solar metallicity \citep{eld17,sta18}, and plotted them in the Hertzsprung-Russell diagram (HRD). Single-star models from 10 to 25~M$_\odot$ are shown in Fig.~\ref{Fig:HRDprogenitor} as blue dotted lines, along with binary models with primary stars having ZAMS mass ($M_1$) ranging from 12 to 20~M$_\odot$ (the tracks for the different stellar masses are shown with different colours). For each value of $M_1$, we report tracks computed for different mass ratios of the two members of the binary ($q^{-1} = M_2/M_1 = 0.1$, 0.3, 0.5, and 0.7) and for three indicative initial orbital periods ($P \approx 15$, 40, and 100~days). 
The cyan starred symbol in Fig.~\ref{Fig:HRDprogenitor} represents the photometric point of the \object{AT~2021blu} progenitor obtained without removing the contribution of stars within $1''$. Single-star models for $M_1 = 14 \pm 1$~M$_\odot$ provide an excellent match to the observed photometry of the candidate progenitor of \object{AT~2021blu}. Evolutionary tracks for binary systems also match the position of the observed progenitor in the HRD, in particular those whose primary has a mass between 13 and 16~M$_\odot$. We note, however, that even systems with more massive primaries are consistent with the progenitor's photometry when the initial orbital period decreases, as we can expect for the binary progenitor of \object{AT~2021blu}. Consequently, if we include systems with $\log(P) = 1.2$ (nearly 15~days), the range of possible masses for the primary star widens to 13--18~M$_\odot$ (see the insets in Fig.~\ref{Fig:HRDprogenitor}). Unfortunately, the mass of the secondary companion is poorly constrained, as the evolutionary tracks are less sensitive to its mass; hence, we can only provide crude limits to the total binary mass, which lie in the $13\leq M/M_\odot<36$ range\footnote{The lower binary mass limit is computed assuming $M_1 = 13$~M$_\odot$ and $q^{-1} \ll 0.1$, while the upper limit is computed assuming $M_1 = 18$~M$_\odot$ and $q^{-1} \lesssim 1$.}. We remark that the above mass estimates assume that the observed progenitor in quiescence is not affected by significant circumstellar reddening. Additional reddening would make the progenitor more luminous and hotter, hence leading to a larger mass. Furthermore, removing the flux of the contaminants within $1''$ would shift the location of the progenitor in the HRD to a slightly higher effective temperature and a marginally lower bolometric luminosity, without significantly changing the mass estimates.
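The track selection described above amounts to testing whether an evolutionary track crosses the progenitor's error box in the ($\log T_{\rm eff}$, $\log L$) plane. A minimal sketch of this test follows; the track points below are hypothetical placeholders for illustration only, not real BPASS output:

```python
import math

L_SUN = 3.828e33   # solar luminosity (erg/s)

# Progenitor of AT 2021blu (this section): T_eff = 6800 +/- 300 K,
# L_bol = (1.55 +/- 0.23) x 10^38 erg/s.
logT = math.log10(6800.0)
logL = math.log10(1.55e38 / L_SUN)
dT = math.log10(7100.0) - logT           # half-width of the box in log T
dL = math.log10(1.78e38 / L_SUN) - logL  # half-width of the box in log L

def crosses_error_box(track):
    """True if any (logT, logL) point of a track falls in the box."""
    return any(abs(t - logT) <= dT and abs(l - logL) <= dL
               for t, l in track)

# Hypothetical track points (NOT real BPASS data, illustration only):
track_14msun = [(4.35, 4.20), (3.83, 4.60), (3.55, 4.90)]
track_25msun = [(4.45, 5.00), (3.90, 5.30), (3.60, 5.50)]
ok14 = crosses_error_box(track_14msun)   # intersects the error box
ok25 = crosses_error_box(track_25msun)   # too luminous at every point
```

In practice the same test is run over the full grid of BPASS single and binary tracks, with finer sampling in time than the three points sketched here.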
\citet{koc14} noted the existence of a possible correlation between the LRN absolute magnitude at maximum brightness and the progenitor mass, which was later confirmed by \citet{bla21} based on a wider compilation of data from the literature. The analysis of \citet{bla21} was recently revised by \citet{cai22}, with different assumptions about the distance and the reddening, and after adopting Johnson-Bessell $V$ as the reference band. Finally, \citet{cai22} considered the magnitude of the second peak (or the plateau) instead of the magnitude of the first peak, as the former is likely dependent on the mass of the recombining hydrogen, while the latter is probably more sensitive to the parameters (mass and velocity) of the high-velocity gas ejected in the polar direction during the merging process \citep{met17}. The empirical relation between the absolute magnitudes in the $V$ band at the second peak (or the plateau) and the mass (weighted by the uncertainties) obtained by \citet{cai22} is \begin{equation} \label{eq1} \log\,\biggl(\frac{M_{\rm prog}}{{\rm M}_\odot}\biggr) = (-0.162\pm0.020)\,M_V - (0.701\pm0.048). \end{equation} \noindent This relation can be used to infer an independent estimate of the mass of the LRN progenitors when the early-time light curve is not available. The masses of progenitors of LRNe with known photometric information during the second peak (or the plateau) inferred using Eq. \ref{eq1} are reported in Table \ref{tab:param_LRNe} (Column 12). For \object{AT~2021blu}, we obtain $M_{\rm prog} = 19.3_{-9.5}^{+18.6}$~M$_\odot$, consistent (within the large uncertainties) with the mass derived through the comparison of the archival progenitor imaging with the evolutionary tracks discussed above. This strengthens our suggestion that the faint source imaged in archival frames at the \object{AT~2021blu} location was dominated by the flux of the LRN progenitor.
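Eq.~\ref{eq1} can be checked directly against the tabulated second-peak magnitudes; a minimal numeric sketch using the values from Table \ref{tab:param_LRNe}:

```python
def progenitor_mass_msun(mv_pk2):
    """Eq. 1 of this section (Cai et al. 2022): progenitor mass (Msun)
    from the V-band absolute magnitude of the second peak/plateau."""
    return 10.0 ** (-0.162 * mv_pk2 - 0.701)

m_21blu = progenitor_mass_msun(-12.26)   # ~19.3 Msun for AT 2021blu
m_21afy = progenitor_mass_msun(-14.57)   # ~46 Msun for AT 2021afy
```

The quoted asymmetric uncertainties come from propagating the errors on both the relation's coefficients and $M_V$, which the central-value sketch above does not attempt.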
As a consistency check, we note that \object{AT~2021blu} is marginally brighter than \object{AT~2015dl} \citep[whose progenitor system was estimated to have a primary of $18\pm1$~M$_\odot$;][]{bla17}; thus, a total mass of $\sim 19$~M$_\odot$ is a realistic estimate for the primary progenitor of \object{AT~2021blu}. For \object{AT~2021afy} we infer a much larger progenitor mass, as expected from the high luminosity of its second light-curve peak: $M_{\rm prog} = 46_{-25}^{+54}$~M$_\odot$. We note, however, that the large error in the $V$-band absolute magnitude at maximum (Sect. \ref{Sect:hosts}, and Table \ref{tab:param_LRNe}) makes this mass estimate quite uncertain. At the adopted distance, \object{AT~2021afy} is slightly less luminous than \object{SNhunt248} \citep{kan15,mau15,mau18} at the first brightness maximum, and is somewhat similar to \object{NGC4490-2011OT1} \citep{smi16,pasto19a} (see Table \ref{tab:param_LRNe}). For this reason, we expect that its binary progenitor system belongs to the massive edge of the LRN distribution. \subsection{The merger scenario} \label{Sect:merging} While there is a consensus that the LRN phenomenon is an outcome of common-envelope binary evolution \citep[][and references therein]{iva17,bar17,jon20}, whether the two stars merged or rearranged into a new stable binary configuration is more debated \citep{how20}. Convincing arguments favouring the final merging scenario were provided by the detailed study of the Galactic LRN \object{V1309~Sco} \citep{mas10,tyl11,mas22}. Specifically, as mentioned in Sect. \ref{sect:intro}, the decade-long follow-up observations of \object{V1309~Sco} performed by the OGLE survey, and the thorough observational studies by \citet{tyl11}, revealed the multiple stages leading to the LRN eruption. \object{V1309~Sco} showed a long-lasting phase of slow luminosity rise with a superposed binary variability, with a period of $\sim 1.44$~days. 
Then, the photometric period started to shorten as a consequence of the loss of systemic angular momentum, finally leading to the orbital decay. In 2007, the light curve showed a sudden decline, and the signatures of binary modulation disappeared. This was interpreted as the consequence of an outflow of material from the primary that generated an optically thick, expanding envelope. The common envelope engulfed the binary system, hiding the binary modulated variability. As a consequence of a continuous optically thick outflow \citep{pej14}, the photometric minimum was followed by a gradual luminosity rise which lasted about half a year. In that phase, \object{V1309~Sco} brightened by $\sim 4$~mag. A steep brightening by a further $\sim 4$~mag in about 10 days followed, and was attributed to the initial thermal energy from the outer, high-velocity, hot ejecta launched in the polar direction during the coalescence of the secondary's core onto the primary \citep[e.g.][]{met17}. Although such high-cadence monitoring is not available for other LRNe, fragments of the sequence of the physical processes leading to an LRN outburst were observed for a number of extragalactic objects\footnote{Extensive datasets of bright LRNe were provided by \protect\citet{bla17,mau15,mau18,kan15,wil15,kur15,smi16,gor16,lip17,bla17,bla20,bla21,cai19,cai22,pasto19a,pasto19b,pasto21a,pasto21b,str20}.}. All of them are intrinsically more luminous and longer lasting than \object{V1309~Sco}, but the general morphology of the light curve is similar. In particular, for the closest events, we monitored the slow brightening phase after the common-envelope ejection, which lasted up to a few years (see Fig. \ref{Fig:21blu_lc}, top panel, for \object{AT~2021blu}), followed by a rapid brightening to the first maximum, and a subsequent luminosity decline to a plateau (or a new rise to a second, much broader and redder maximum). 
This particular morphology of the light curve was discussed in a number of studies. \citet{mcl17} proposed that the first light-curve maximum is produced by a violent, merger-driven, high-velocity gas outflow. Then, a rapid luminosity decline is followed by a plateau or a second broad peak, usually explained by the recombination of the H-rich gas \citep{iva13b,mcl17}, with most of the LRN energy being radiated during the plateau phase. This interpretation is supported by the effective temperature showing a minor evolution in this phase, the H$\alpha$ emission component disappearing, and the spectrum becoming dominated by narrow absorption lines of metals. However, some scatter in the effective plateau temperatures can be noticed among the objects, with $T_{\rm eff}$ spanning from 3000~K to 6000~K, suggesting that shell-shell interaction or even a further mass ejection \citep{iva13b} can also contribute to sustaining the light curve during this phase. \begin{sidewaystable*} \centering \tiny \caption{\label{tab:param_LRNe} Photometric parameters for the complete LRN sample.} \begin{tabular}{lccccccccccc} \hline\hline LRN name & Host & $t_{\rm host}$ & Distance & $E(B-V)_{\rm tot}$ & $M_V$(prog) & $M_V$(CE) & $M_{\rm pk1}$ & $M_{\rm pk2}$ & $L_{\rm pk1}/L_{\rm pk2}$ & $\Delta~t_{1~{\rm dex}}$ & $M_{\rm prog}$ \\ & galaxy & & (Mpc) & (mag) & (mag) & (mag) & (mag) & (mag) & optical & (days) & (M$_\odot$) \\ \hline \object{AT~2021afy} & UGC~10043 & $4.2\pm0.7$ & $49.2\pm8.0$ & $0.43\pm0.11$ & $>-11.66\pm0.56$ & $>-13.20\pm0.56$ & $-14.44\pm0.57$ & $-14.57\pm0.56$ & 1.0 & 120 & $46_{-25}^{+64}$ \\ % \object{UGC12307-2013OT1} & UGC~12307 & $9.8\pm1.0$ & $39.7\pm2.8$ & $0.22\pm0.02$ & $>-11.88\pm0.17$ & -- & -- & $-15.03\pm0.42$ & -- & -- & $54_{-30}^{+67}$ \\ % \object{AT~2017jfs} & NGC~4470 & $1.9\pm2.1$ & $35.2\pm2.7$ & $0.02\pm0.01$ & $>-11.26\pm0.17$ & -- & $-15.46\pm0.46$ & $-14.38\pm0.17$ & 2.6 & 157 & $43_{-23}^{+50}$ \\ % \object{SNhunt248} & NGC~5806 & $3.2\pm0.8$ 
& $22.5\pm3.8$ & $0.04\pm0.01$ & $-8.99\pm0.36$ & $-11.18\pm0.36$ & $-14.87\pm0.36$ & $-14.07\pm0.36$ & 2.1 & 169 & $58\pm2~(\star)$ \\ \object{AT~2020kog} & NGC~6106 & $5.3\pm0.6$ & $22.5\pm4.7$ & $0.37\pm0.07$ & $>-9.82\pm0.50$ & $-11.17\pm0.53$ & $-13.17\pm0.51$ & $-12.68\pm0.51$ & 2.0 & $>$100 & $23_{-11}^{+23}$ \\ % \object{AT~2018hso} & NGC~3729 & $1.3\pm0.8$ & $21.3\pm0.6$ & $0.30\pm0.08$ & $-9.05\pm0.25$ & -- & $-13.89\pm0.28$ & $-12.16\pm0.26$ & 3.7 & 201 & $18.6_{-9.1}^{+17.7}$ \\ % \object{NGC3437-2011OT1} & NGC~3437 & $5.2\pm0.5$ & $20.9\pm4.2$ & $0.02\pm0.01$ & $>-9.98\pm0.43$ & $>-10.83\pm0.43$ & $-13.06\pm0.48$ & $-13.33\pm0.43$ & 0.9 & 174 & $29_{-15}^{+31}$ \\ % \object{AT~2014ej} & NGC~7552 & $2.4\pm0.7$ & $20.6\pm1.5$ & $0.31\pm0.15$ & $>-8.22\pm0.50$ & -- & $>-14.70\pm0.50$ & $-14.36\pm0.50$ & $>2.2$ & $>98$ & $42_{-23}^{+49}$ \\ % \object{NGC4490-2011OT1} & NGC~4490 & $7.0\pm0.4$ & $9.6\pm1.3$ & $0.32\pm0.32$ & $-7.32_{-1.03}^{+1.10}$ & $-9.18_{-1.10}^{+1.17}$ & $-14.35_{-1.00}^{+1.08}$& $-14.54_{-1.00}^{+1.08}$ & 0.9 & 200 & $30_{-22}^{+50}~(\star)$\\ \object{AT~1997bs} & NGC~3627 & $3.1\pm0.4$ & $9.2\pm0.3$ & $0.21\pm0.04$ & $-7.61\pm0.21$ & -- & $-13.34\pm0.15$ & $-11.51\pm0.17$ & 3.2 & 62 & $14.6_{-6.9}^{+13.1}$ \\ % \object{AT~2021blu} & UGC~5829 & $9.8\pm0.6$ & $8.64\pm0.61$ & $0.02\pm0.01$ & $-6.72\pm0.30$ & $-8.33\pm0.43$ & $-13.06\pm0.15$ & $-12.26\pm0.15$ & 2.4 & 242 & $19.3_{-9.5}^{+18.6}$ \\ % \object{AT~2021biy} & NGC~4631 & $6.5\pm0.7$ & $7.46\pm0.50$ & $0.27\pm0.02$ & $-7.93\pm0.17$ & $-8.78\pm0.20$ & $-13.81\pm0.16$ & $-12.65\pm0.16$ & 2.7 & 375 & $20.5\pm3.5~(\star)$ \\ \object{AT~2018bwo} & NGC~45 & $7.8\pm0.7$ & $6.79\pm1.13$ & $0.02\pm0.01$ & $-5.92\pm0.36$ & $-7.47\pm0.36$ & -- & $-10.14\pm0.45$ & -- & $>72$ & $13_{-2}^{+3}~(\star)$\\ % \object{AT~2015dl} & M~101 & $5.9\pm0.3$ & $6.43\pm0.57$ & $0.01\pm0.01$ & $-7.19\pm0.36$ & $-10.10\pm0.47$ & $-12.70\pm0.21$ & $-11.46\pm0.31$ & 2.0 & 229 & $18\pm1~(\star)$ \\
\object{AT~2020hat} & NGC~5068 & $6.0\pm0.4$ & $5.16\pm0.21$ & $0.09\pm0.01$ & $-2.99\pm0.09$ & $-8.87\pm0.76$ & $-10.72\pm0.27$ & $-10.08\pm0.26$ & 1.5 & 131 & $8.5_{-3.7}^{+6.6}$ \\ % \object{AT~2019zhd} & M~31 & $3.0\pm0.4$ & $0.785\pm0.009$ & $0.055\pm0.005$& $0.17\pm0.14$ & $-5.74\pm0.28$ & $-9.08\pm0.13$ & $-7.59\pm0.32$ & 5.1 & 27 & $3.4_{-1.3}^{+2.0}$ \\ % \object{M31LRN2015} & M~31 & $3.0\pm0.4$ & $0.785\pm0.009$ & $0.35\pm0.11$ & $-2.25\pm0.47$ & $-5.41\pm0.42$ & $-10.12\pm0.42$ & $-9.13\pm0.42$ & 2.4 & 62 & $4.0_{-1.0}^{+1.5}~(\star)$\\ \object{M31RV} & M~31 & $3.0\pm0.4$ & $0.785\pm0.009$ & $0.12\pm0.02$ & $-5.04\pm0.32$ & $>-7.04\pm0.15$ & $-9.54\pm0.15$ & $-8.66\pm0.15$ & 2.0 & 110 & $5.0_{-2.0}^{+3.3}$ \\ % \object{CK~Vul}$(\dag$) & Galaxy & -- &$3.2_{-0.6}^{+0.9}\times10^{-3}$& $0.80\pm0.15$ & $>-9.0_{-2.4}^{+1.3}$ & -- & $-12.0_{-2.4}^{+1.3}$ & $-12.4_{-2.4}^{+1.3}$ & 0.7 & 400 & -- \\ % \object{V838~Mon} & Galaxy & -- &$6.1(\pm0.6)\times10^{-3}$ & $0.85\pm0.02$ & $-1.29\pm0.22$ & $-6.67\pm0.22$ & $-9.76\pm0.22$ & $-9.43\pm0.22$ & 1.7 & 82 & $8\pm3~(\star)$ \\ % \object{V4332~Sgr} & Galaxy&--&$3.85_{-1.57}^{+4.65}\times10^{-3}$& $0.32\pm0.10$ & $3.94_{-1.72}^{+1.14}$ & -- & -- & $-5.21_{-1.93}^{+1.33}$ & -- & -- & $1.0\pm0.5~(\star)$ \\ % \object{V1309~Sco} & Galaxy & -- &$3.5(\pm1.5)\times10^{-3}$& $0.70\pm0.15$& $3.33\pm1.04$ & $-1.39\pm1.04$ & $-7.02\pm1.04$ & $-5.48\pm1.04$ & 3.1 & 29 & $1.54\pm0.5~(\star)$ \\ % \object{OGLE-2002-BLG-360} & Galaxy & -- &$8.20(\pm0.15)\times10^{-3}$&$1.0\pm0.2~(\ddag)$& $2.23\pm0.50$& $-1.43\pm0.54$ & $ 0.10\pm0.54$ & $ 0.79\pm0.65$ & 1.0 & 837 & $0.15_{-0.01}^{+0.01}$\\ % \object{OGLE-2002-BLG-360} & Galaxy & -- &$8.20(\pm0.15)\times10^{-3}$&$1.0\pm0.2~(\ddag)$& $1.13\pm0.28$& $-3.54\pm0.28$ & $-4.56\pm0.28$ & $-4.65\pm0.30$ & 1.0 & 837 & -- \\ \hline \end{tabular} \\ \tablefoot{The table reports the LRN name (Column 1); the host-galaxy name (Column 2) and its morphological type code (from Hyperleda; Column 3); 
the distance (Column 4); the total colour excess (Column 5); the $V$-band absolute magnitude of the quiescent progenitor (Column 6), the brightest $V$ absolute magnitude of the pre-outburst phase (Column 7), the first light-curve peak (Column 8), and the second light-curve peak (Column 9); the optical luminosity ratio of the first to the second light-curve peak (Column 10); the time taken by the LRN to decrease its luminosity by one order of magnitude from the peak (Column 11); and the progenitor mass estimate using Eq.~\ref{eq1} or taken from \protect\citet{cai22} (Column 12).\\ $(\star)$ Mass estimates obtained through the direct detection of the progenitor or via light-curve modelling, as adopted by \protect\citet{cai22}. $(\ddag)$ The Milky Way reddening towards \object{OGLE-2002-BLG-360} follows a non-standard reddening law \protect\citep[$E(B-V)_{\rm MW} = 1$~mag, with $R_V=2.5$;][]{nat13}. While the parameters for the $I$-band photometry are robust, those inferred for $V$ photometry are uncertain owing to the low-cadence follow-up observations in that filter. The $V$ magnitudes are obtained through an interpolation of the $V-I$ colour curve at the epochs of the $I$-band peaks. $(\dag$) In this table, the parameters for \object{CK~Vul} are taken from \protect\citet{ban20}.} \end{sidewaystable*} \citet{mat22} recently presented accurate one-dimensional models of LRN light curves which improve on previous studies based on \citet{pop93} approximations. The models of \citet{mat22} assume that the short-lasting initial blue peak is due to thermal energy release from the low-mass, fast outer ejecta dominated by radiation pressure, while the second long-duration red peak emission is powered by hydrogen recombination. This study offers a grid of light-curve models showing two luminosity peaks, remarkably similar to those observed for LRNe. Following \citet{mat22}, we can estimate the LRN parameters during the first peak.
In particular, the ejected mass ($M_{\rm ej}$) is inferred from \begin{equation} \label{eq2} \frac{M_{\rm ej}}{10^{-2}~{\rm M}_\odot} \approx \frac{v_{\rm ej}}{500~{\rm km~s}^{-1}} \times \biggl(\frac{t_{\rm pk1}}{6.7~{\rm d}}\biggr)^{2}, \end{equation} \noindent where $t_{\rm pk1}$ is the duration of the first peak and $v_{\rm ej}$ is the velocity of the outer ejecta. The launching radius ($R_0$) is given by \begin{equation} \label{eq3} \frac{R_0}{10~{\rm R}_\odot} \approx \frac{L_{\rm pk1}}{10^{39}~{\rm erg~s}^{-1}} \times \biggl(\frac{v_{\rm ej}}{500~{\rm km~s}^{-1}}\biggr)^{-2}, \end{equation} \noindent where $L_{\rm pk1}$ is the bolometric luminosity of the first light-curve peak. Finally, an upper limit to the energy radiated during the first peak ($E_{\rm pk1}$) can be obtained from \begin{equation} \label{eq4} E_{\rm pk1} \approx L_{\rm pk1} \times t_{\rm pk1}. \end{equation} We use Eqs.~\ref{eq2}--\ref{eq4} to infer the early physical parameters of the \object{AT~2021blu} ejecta, adopting the following values for the observed parameters: $v_{\rm ej} = 460$~km~s$^{-1}$ (see Sect. \ref{sect:2021blu_spec} for the ejecta velocity at early phases), $L_{\rm pk1} = 6.5 \times 10^{40}$~erg~s$^{-1}$, and $t_{\rm pk1} \approx 30$~days. We obtain $M_{\rm ej} \approx 0.18$~M$_\odot$, $R_0 \approx 770$~R$_\odot$, and $E_{\rm pk1} \approx 1.7 \times 10^{47}$~erg. We note that the above launching-radius estimate is reasonably similar to that inferred from the blackbody fit to the pre-outburst SED in Sect. \ref{sect:TLR} (see Fig. \ref{Fig:TLR}, top-right panel). The same calculation can be performed for \object{AT~2021afy}, taking into account that the distance towards \object{UGC~10043} adopted in this paper is affected by a large uncertainty (Sect. \ref{Sect:hosts}). 
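As a sanity check on the first-peak scaling relations (Eqs.~\ref{eq2}--\ref{eq4}), the short script below (our illustration, not part of the analysis pipeline) reproduces the \object{AT~2021blu} estimates from the quoted inputs:

```python
# Numerical check of the first-peak scaling relations (Eqs. 2-4 of the text),
# evaluated with the AT 2021blu values quoted above. Illustrative only.

DAY = 86400.0  # seconds per day

def m_ej_msun(v_ej_kms, t_pk1_day):
    """Ejected mass in solar masses (Eq. 2)."""
    return 1e-2 * (v_ej_kms / 500.0) * (t_pk1_day / 6.7) ** 2

def r0_rsun(l_pk1_erg_s, v_ej_kms):
    """Launching radius in solar radii (Eq. 3)."""
    return 10.0 * (l_pk1_erg_s / 1e39) * (v_ej_kms / 500.0) ** -2

def e_pk1_erg(l_pk1_erg_s, t_pk1_day):
    """Upper limit to the energy radiated in the first peak (Eq. 4)."""
    return l_pk1_erg_s * t_pk1_day * DAY

# AT 2021blu: v_ej = 460 km/s, L_pk1 = 6.5e40 erg/s, t_pk1 ~ 30 d
print(m_ej_msun(460, 30))     # ~ 0.18 M_sun
print(r0_rsun(6.5e40, 460))   # ~ 770 R_sun
print(e_pk1_erg(6.5e40, 30))  # ~ 1.7e47 erg
```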
From the observed parameters $v_{\rm ej} = 560$~km~s$^{-1}$, $L_{\rm pk1} = 2.05 (\pm 0.61) \times 10^{41}$~erg~s$^{-1}$, and $t_{\rm pk1} \approx 30$~days, we infer $M_{\rm ej} \approx 0.22$~M$_\odot$ and $R_0 = 1640\pm490$~R$_\odot$, while the upper limit to the energy radiated during the first peak is $E_{\rm pk1} = 5.3 (\pm 1.6) \times 10^{47}$~erg. Hence, although the mass of the fast and hot ejecta is similar in the two objects, the energy radiated during the first peak is at least a factor of two (and up to four) higher in \object{AT~2021afy} than in \object{AT~2021blu}. After the early peak, the light curve reaches a minimum before rising again to the second broad maximum, which is mostly powered by hydrogen recombination. We again follow \citet{mat22} to describe the recombination phase. The mass of the recombining hydrogen shell ($M_{\rm rec}$) is obtained through the relation \citep[equivalent to Eq.~16 in][assuming that H recombines at a characteristic constant density of $\rho_{\rm rec} \approx 10^{-11}$~g~cm$^{-3}$]{mat22} \begin{equation} \label{eq5} \frac{M_{\rm rec}}{{\rm M}_\odot} \approx \biggl(\frac{t_{\rm pk2}}{140~{\rm days}} \times \frac{v_{\rm rec}}{300~{\rm km~s}^{-1}}\biggr)^3. \end{equation} For \object{AT~2021blu}, we assume a recombination phase lasting $t_{\rm pk2} = 200$~days and a luminosity $L_{\rm pk2} \approx 3.1 \times 10^{40}$~erg~s$^{-1}$ during the second peak. The choice of the velocity of the recombining material ($v_{\rm rec}$) is less straightforward. If $v_{\rm rec} \approx v_{\rm FWHM}({\rm H}\alpha) = 360$~km~s$^{-1}$ (at the time of the second peak; see Sect. \ref{sect:2021blu_spec} and Fig. \ref{Fig:Halpha_flux}), we obtain $M_{\rm rec} \approx 5$~M$_\odot$. If we instead adopt the photospheric velocity from the minimum of the metal absorption lines (250~km~s$^{-1}$), we infer a much smaller mass of $M_{\rm rec} \approx 1.2$~M$_\odot$. 
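The recombination-shell scaling (Eq.~\ref{eq5}) can be checked in the same way; the sketch below (ours, illustrative only) evaluates it for the FWHM-based \object{AT~2021blu} case and for \object{AT~2021afy}, using the values quoted in the text:

```python
# Recombining-shell mass from Eq. 5 (which assumes a characteristic constant
# recombination density rho_rec ~ 1e-11 g/cm^3). Illustrative check only.

def m_rec_msun(t_pk2_day, v_rec_kms):
    """Mass of the recombining hydrogen shell in solar masses (Eq. 5)."""
    return ((t_pk2_day / 140.0) * (v_rec_kms / 300.0)) ** 3

# AT 2021blu: t_pk2 = 200 d, v_rec ~ v_FWHM(Halpha) = 360 km/s
print(m_rec_msun(200, 360))  # ~ 5 M_sun
# AT 2021afy: t_pk2 = 50 d, v_rec ~ 550 km/s
print(m_rec_msun(50, 550))   # ~ 0.3 M_sun
```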
We remark that the above mass estimates should be regarded as upper limits, obtained under the crude assumption that H recombination is the only source powering the light curve at this phase. In the case of \object{AT~2021afy}, we adopt the following parameters during the second peak: $t_{\rm pk2} = 50$~days, $L_{\rm pk2} \approx 2.1 \times 10^{41}$~erg~s$^{-1}$, and $v_{\rm rec} \approx 550$~km~s$^{-1}$. This last value is obtained from a weighted average of $v_{\rm FWHM}({\rm H}\alpha)$ measured in the spectra of \object{AT~2021afy} from +30 to +80~d after the first peak. We find that $M_{\rm rec} \approx 0.3$~M$_\odot$, about one order of magnitude smaller than the mass of the recombining material inferred for \object{AT~2021blu}. Although we have poor constraints on the epoch of the \object{AT~2018bwo} outburst onset, we can tentatively estimate the mass of the recombining gas from the observed parameters during the plateau. We adopt $v_{\rm rec} \approx 500$~km~s$^{-1}$ (Sect. \ref{sect:2018bwo_spec}) and $L_{\rm pk2} \approx 10^{40}$~erg~s$^{-1}$ (Sect. \ref{sect:TLR}), while the minimum plateau duration is $t_{\rm pk2} = 40$~days. With these assumptions, we obtain $M_{\rm rec} > 0.1$~M$_\odot$. With a plateau duration of at least 60~days, the mass rises to $M_{\rm rec} \approx 0.4$~M$_\odot$, which is within the range of ejected-mass extremes (0.02--2~M$_\odot$) determined by \citet{bla21} using different calibration methods (see their Sect.~5.3). Finally, using the values of $t_{\rm pk2}$ and $L_{\rm pk2}$ reported above, the upper limits to the total energy radiated during the second peak can be derived from \begin{equation} \label{eq6} E_{\rm rec} \approx L_{\rm pk2} \times t_{\rm pk2}. 
\end{equation} \noindent Applying Eq.~\ref{eq6} to \object{AT~2021blu}, we obtain $E_{\rm rec} \approx 5.3 \times 10^{47}$~erg, which is equivalent to the energy radiated during the first peak, and smaller than the value inferred for \object{AT~2021afy} ($E_{\rm rec} \approx 9 \times 10^{47}$~erg)\footnote{If we account for the error in the luminosity at the second peak, $L_{\rm pk2} = 2.1(\pm0.6) \times 10^{41}$~erg~s$^{-1}$, the upper limit to the total energy radiated by \object{AT~2021afy} in this phase ranges from $0.65$ to $1.15 \times 10^{48}$~erg.}. Assuming a plateau duration of 60~days, for \object{AT~2018bwo} we infer $E_{\rm rec} \approx 5 \times 10^{46}$~erg, over one order of magnitude less than that of \object{AT~2021blu}. Once hydrogen has fully recombined, the luminosity abruptly declines, analogous to what is observed in Type IIP supernovae at the end of the plateau. Without radioactive material powering the light curve, as happens in supernova explosions, we expect the LRN bolometric light curve to settle onto the nearly constant luminosity of the resulting merger, although late-time humps can occasionally be observed, especially in the NIR domain, probably as a consequence of late-time interaction with confined circumstellar shells. This was observed in \object{AT~2021blu} and \object{AT~2021biy} \citep[][and Fig. \ref{Fig:LRN_absolute}]{cai22}, as well as earlier in \object{AT~2017jfs} \citep{pasto19b}. \begin{table*} \caption{\label{tab:param_LRNe2} Additional observed parameters for a sub-sample of LRNe followed during the early blue peak. 
} \centering \begin{tabular}{lccccc} \hline\hline
LRN name & $L_{\rm bol,peak}$ & $L_{+7~{\rm d}}({\rm H}\alpha)$ & $v_{{\rm FWHM},+7~{\rm d}}({\rm H}\alpha)$ & $T_{\rm eff,peak}$ & $R_{\rm ph,peak}$ \\
 & ($10^{39}$~erg~s$^{-1}$) & ($10^{37}$~erg~s$^{-1}$) & (km~s$^{-1}$) & (K) & (au) \\
\hline
AT~2021afy & $205\pm61$ & $53.0\pm6.8$ & $560\pm110$ & $6910\pm810$ & $24\pm5$ \\
AT~2017jfs & $552\pm172$ & $390.6\pm36.9$ & $<820$ & $7190\pm260$ & $36\pm6$ \\
SNhunt248 & $309\pm110$ & $513.5\pm21.3$ & $440\pm100$ & $7310\pm70$ & $26\pm5$ \\
AT~2020kog & $89\pm13$ & $28.7\pm2.9$ & $460\pm50$ & $10860\pm1090$ & $6.3\pm1.0$ \\
AT~2018hso & $109\pm27$ & $38.5\pm9.2$ & $<625$ & $8060\pm280$ & $12.7\pm1.3$ \\
NGC3437-2011OT1 & $61\pm27$ & $95.7\pm12.5$ & $<740$ & $6090\pm300$ & $17\pm4$ \\
AT~2014ej & $300\pm20$ & $138.6\pm15.5$ & $690\pm10$ & $8200\pm320$ & $19\pm2$ \\
NGC4490-2011OT1 & $282\pm207$ & $157.9\pm51.2$ & $480\pm25$ & $11830\pm3310$ & $9.5\pm5.1$ \\
AT~1997bs & $63\pm5$ & $34.1\pm3.4$ & $585\pm40$ & $7500\pm640$ & $11.5\pm1.5$ \\
AT~2021blu & $65\pm9$ & $47.2\pm1.6$ & $500\pm100$ & $8730\pm140$ & $8.4\pm0.6$ \\
AT~2021biy & $159\pm130$ & $101.8\pm8.1$ & $<500$ & $11430\pm1410$ & $7.7\pm3.1$ \\
AT~2020hat & $7.8\pm0.7$ & $0.41\pm0.15$ & $<640$ & $4630\pm120$ & $10.3\pm0.6$ \\
AT~2019zhd & $2.09\pm0.17$ & $0.073\pm0.014$ & $130\pm30$ & $7030\pm210$ & $1.08\pm0.07$ \\
M31LRN2015 & $3.81\pm0.90$ & $0.21\pm0.05$ & $<640$ & $7940\pm1560$ & $2.45\pm0.75$ \\
V838~Mon & $3.15\pm0.30$ & $0.081\pm0.008$ & $230\pm20$ & $7920\pm170$ & $2.24\pm0.21$ \\
V1309~Sco & $0.29\pm0.08$ & $0.0046\pm0.0012$ & $150\pm15$ & $5970\pm260$ & $1.00\pm0.29$ \\
\hline
\end{tabular}
\tablefoot{The table reports the LRN name (Column 1); the bolometric luminosity at the first peak (Column 2); the H$\alpha$ luminosity (Column 3) and the FWHM velocity of H$\alpha$ (Column 4) at phase $\sim+7$~d; the effective temperature (Column 5) and the photospheric radius 
(Column 6) at the first maximum.} \end{table*} The three objects discussed in this paper follow the general evolutionary framework of the best-studied events in the Milky Way and M~31. For this reason, we believe that most (or even all) of them are the outcome of merging events \citep[but see][]{gor20}. However, the heterogeneity observed in the light-curve shape, luminosity, and duration suggests a wide range of the physical parameters involved. In particular, the early-time sharp blue peak observed in \object{AT~2021blu} and its higher temperature at maximum brightness suggest a smaller photospheric radius than that of \object{AT~2021afy}. The interpretation of the observational properties of \object{AT~2018bwo} is more difficult, as the object was likely discovered long after maximum brightness. However, even if the object was older at discovery than assumed by \citet{bla21}, the very low photospheric temperature implies an initially larger photospheric radius than that of \object{AT~2021blu}. While other considerations indicate that the progenitors of \object{AT~2021afy} and \object{AT~2021blu} were both massive systems (see Sect. \ref{sect:corr}), the above estimates suggest very different configurations for the two LRNe. A very expanded primary star was likely the progenitor of \object{AT~2021afy}, although a proportionally smaller amount of material was launched in this event. In contrast, the \object{AT~2021blu} precursor was characterised by a very compact initial configuration and more-massive ejecta. The parameters of \object{AT~2018bwo} lie in between, although its progenitor system was likely less massive than those of the other two LRNe. 
The enormous difference between the inferred parameters of these three objects can be explained by different fates of the systems: while the large ejected mass of \object{AT~2021blu} can only result from the coalescence of massive stars, two different scenarios can be invoked to explain the low ejected mass of \object{AT~2021afy}: either the massive primary merged with a very low-mass companion, or the system survived as a binary. However, as remarked by \citet{mat22}, the ejected mass and the radius strongly depend on the adopted velocity, and the presence of an extra heating source (such as shock interaction with circumbinary material) may severely affect the above estimates. \subsection{Correlations among physical parameters} \label{sect:corr} With the inclusion of the data presented in this paper \citep[plus AT~2021biy, studied in detail by][]{cai22}, we add four new objects to the diagrams showing possible correlations among the photometric parameters of LRNe presented by \citet{pasto21b}. The results are shown in Fig.~\ref{Fig:correlations} (top panels), while the parameters adopted for all objects are reported in Table \ref{tab:param_LRNe}. The new objects confirm the correlations between the absolute magnitude of the quiescent progenitor system and the absolute magnitude at the end of the slowly rising pre-outburst phase (panel B), at the blue peak (panel C), and at the broad red peak (or the plateau; panel D), with more-luminous LRN outbursts being produced by more-luminous (and, consequently, more-massive) stellar systems, as pointed out by \citet{koc17} and \citet{bla21}. To quantify the strength of the correlations, we carried out a Pearson test\footnote{The parameters of \object{OGLE-2002-BLG-360}, a very peculiar object, were excluded when running the Pearson test.}, obtaining the following $p$-values: $1.3 \times 10^{-5}$, $1.1 \times 10^{-6}$, and $6.9 \times 10^{-8}$ for panels B, C, and D (respectively). 
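The Pearson statistic behind these $p$-values is straightforward to compute; the sketch below (in Python, with hypothetical magnitude pairs rather than the actual values behind panels B--D) illustrates the $r$ coefficient, while the quoted $p$-values come from the associated two-sided $t$-test (as implemented, e.g., in scipy.stats.pearsonr):

```python
# Pearson correlation coefficient from its definition. The magnitude pairs
# below are hypothetical, mimicking the observed trend of brighter (more
# negative) progenitors producing brighter outbursts; they are NOT the
# values used for panels B-D of the paper.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

M_prog = [-3.0, -4.2, -5.1, -6.3, -7.0, -8.4]   # hypothetical progenitor mags
M_peak = [-7.5, -9.0, -10.1, -11.8, -12.4, -14.0]  # hypothetical peak mags
print(pearson_r(M_prog, M_peak))  # close to 1 for a strong linear trend
```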
We also note a weak correlation ($p$-value = 0.02) between the luminosity ratio of the two LRN maxima and the time during which the luminosity stays between $L_{\rm peak}$ and $0.1\,L_{\rm peak}$ (panel A). As noted by \citet{bla21}, dimmer objects seem to have a shorter duration, although \object{OGLE-2002-BLG-360} \citep{tyl13} appears to be an outlier, as it does not follow the general observational LRN trends. However, this object was observed mostly in the Johnson-Cousins $I$ band, had limited colour information, and showed a very peculiar, triple-peaked light curve which is challenging to interpret. We also remark that only quite limited photometric information is available for \object{CK~Vul}\footnote{The object erupted in June 1670 and brightened again in April 1671. Only uncertain visual observations are documented from historical records, collected by \protect\citet{sha85}.}; hence, the inferred quantities should be regarded as merely indicative. We inspect other possible correlations of physical parameters computed at the time of the early blue peak (Fig. \ref{Fig:correlations}, bottom panels; the values for individual objects are reported in Table \ref{tab:param_LRNe2}). In this analysis, we do not consider LRNe discovered after the early peak, such as AT~2018bwo and AT~2014ej, or objects whose photometric information is not accurate enough for a reliable estimate of $T_{\rm eff}$ and $R_{\rm ph}$ at that phase. In Fig.~\ref{Fig:correlations}, we report the bolometric luminosity at the first peak (obtained through a blackbody fit to the SED) versus $T_{\rm eff}$ (panel E) and $R_{\rm ph}$ (panel F) computed at the same phase. Again, there is a general trend, with dimmer LRNe having lower temperatures and smaller radii at the photosphere. Following the same approach as above, we performed a Pearson test to verify the robustness of the correlations in panels E and F, and obtained $p$-values of 0.02 and $2.6 \times 10^{-5}$, respectively. 
Finally, we inspect possible correlations of the bolometric luminosity at the blue peak with the H$\alpha$ luminosity $L_{+7~{\rm days}}$(H$\alpha$) (panel G) and the velocity of the expanding material $v_{\rm FWHM}$(H$\alpha$)\footnote{This value was computed for all objects through a Lorentzian fit to the line profile.} (panel H) inferred from spectra obtained $\sim 1$~week after the first maximum. We made this choice because only a very limited number of LRN spectra are available at maximum brightness. Panel G shows a clear trend between $L_{\rm bol,peak}$ and $L_{+7~{\rm days}}$(H$\alpha$) ($p$-value = $9.2 \times 10^{-5}$), with dimmer LRNe having fainter H$\alpha$ luminosity. On the other hand, when the FWHM velocity is considered (panel H), the Pearson test does not reveal a significant correlation ($p$-value = 0.48), although a correlation between peak luminosity and outflow velocity was predicted for LRNe by \citet[][see their Fig.~21]{pej16b}. The above correlations resemble those proposed by \citet{bla21} with slightly different parameters, but give very similar outcomes: globally, the most-luminous LRNe are longer-duration events, often showing an early light-curve peak. Hence, they are expected to initially have a hotter, more expanded, and higher-velocity photosphere, producing more-luminous H$\alpha$ spectral lines. \citet{bla21} suggested that the presence or absence of an early blue peak in the light curve is due to different ionisation states of the gas shell where the photosphere is located. This shell would be initially fully ionised in `hot' events, and only marginally ionised in `cold' LRNe. Hot LRNe are usually high-luminosity events produced by more-massive progenitor systems. \citet{bla21} proposed that energetic outflows generated by such massive binaries can efficiently ionise the circumstellar shell generated during a previous mass-loss event. 
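The FWHM velocities entering panel H come from Lorentzian fits to the H$\alpha$ profile; a minimal sketch of such a fit is shown below (synthetic, noiseless data and a simple grid search stand in for the real spectra and fitting machinery; illustrative only):

```python
# Sketch of an FWHM measurement via a Lorentzian profile fit, as done for the
# Halpha velocities in panel H. Synthetic data, illustrative only.

def lorentzian(v, amp, v0, fwhm):
    """Lorentzian profile in velocity space; FWHM = 2 * gamma."""
    gamma = fwhm / 2.0
    return amp * gamma**2 / ((v - v0) ** 2 + gamma**2)

# Synthetic Halpha profile: FWHM = 500 km/s, centred at zero velocity.
velocities = list(range(-2000, 2001, 25))
flux = [lorentzian(v, 1.0, 0.0, 500.0) for v in velocities]

# Simple least-squares grid search for the best-fitting FWHM
# (amplitude and centre held fixed for brevity).
def chi2(fwhm):
    return sum((f - lorentzian(v, 1.0, 0.0, fwhm)) ** 2
               for v, f in zip(velocities, flux))

best_fwhm = min(range(100, 1001, 10), key=chi2)
print(best_fwhm)  # recovers 500 km/s
```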
However, while the spectral appearance during the first photometric peak may support this explanation, it cannot be comfortably applied to all objects, including the faint-but-hot \object{AT~2019zhd} \citep[Fig. \ref{Fig:TLR}, and][]{pasto21a}. While \citet{sana12} predicted that mergers are a common endpoint of the evolution of massive stars in binary systems, precise rate estimates are still not available. We may expect that the LRN rates in different luminosity bins depend on the systemic mass. Using ZTF data, a volumetric rate of $7.8^{+6.5}_{-3.7} \times 10^{-5}$~Mpc$^{-3}$~yr$^{-1}$ was recently computed by \citet{kar22} in the absolute magnitude range $-16 \leq M_r \leq -11$~mag, hence considering intrinsically luminous events only. This is consistent with the larger volumetric rate of $8 \times 10^{-4}$~Mpc$^{-3}$~yr$^{-1}$ estimated in the local Universe by \citet[][including intrinsically faint LRNe]{how20}, with 1--2 events per decade being expected in the Galaxy. This estimate is consistent with the discovery of a handful of LRNe in the Galaxy over the past 30~yr. The rates of LRNe are broadly dominated by the dimmest events, as discussed by \citet{koc14}, implying that mergers of low-mass binaries are 2--3 orders of magnitude more common than those of massive binaries. We also note that a number of low-mass Galactic contact binaries have been proposed to be on the pathway to becoming mergers \citep[see, e.g.][]{wad20}. \section{Conclusions} \label{Sect:conclusions} We presented photometric and spectroscopic datasets for three new objects (\object{AT~2018bwo}, \object{AT~2021afy}, and \object{AT~2021blu}) that belong to the high-luminosity population of LRNe. All of them are most likely the outcome of merging events involving massive stars. 
However, they exhibit different properties, with \object{AT~2021blu} being initially hotter and having a smaller photospheric radius than \object{AT~2021afy} (and probably also \object{AT~2018bwo}, despite its epoch of outburst onset being poorly constrained). In addition, the duration of the \object{AT~2021blu} outburst is twice as long as that of the other two objects, suggesting a larger outflowing mass. Comparisons among observed parameters suggest that the three objects discussed here belong to the bright LRN population, with \object{AT~2021afy} being one of the most luminous events discovered to date. Making use of the correlation between the absolute magnitude of the outburst and the progenitor's mass presented by \citet{cai22}, we estimate that the progenitor of \object{AT~2021afy} has a mass likely exceeding 40~M$_\odot$ (although with a very large uncertainty). The binary progenitor system of \object{AT~2021blu} is characterised by a primary star of 13--18~M$_\odot$, slightly more massive than that of \object{AT~2018bwo} (11--16~M$_\odot$) reported by \citet{bla21}. Our study supports previous evidence \citep[e.g.][]{pasto19a} that LRNe span a very wide range of physical properties, and that most observational parameters are somewhat correlated. In particular, the peak luminosity of LRN light curves appears to be correlated with the outburst duration, the H$\alpha$ luminosity, the photospheric radius, the effective temperature, and (most importantly) the luminosity and mass of the progenitor stellar systems, as advocated by \citet{koc17} and \citet{bla21}. To increase our ability to characterise LRN variety, we need to expand the sample of events with excellent spectral and photometric coverage, and with available information regarding the quiescent progenitors. 
This will enable us to fine-tune the above correlations, making them a valuable tool for estimating the parameters of LRNe when only incomplete datasets are available, as well as for inferring the luminosity and mass of LRN binary progenitors without the need for a direct detection of the progenitor flux from archival pre-outburst images obtained with high-spatial-resolution facilities. \begin{acknowledgements} We thank Jorge Anais Vilchez, Abdo Campillay, Nahir Mu\~noz-Elgueta, Natalie Ulloa, and Jaime Vargas-Gonz\'alez for performing the observations on the Swope Telescope at Las Campanas Observatory, Chile; Takashi Nagao for his help with the NOT observations; and WeiKang Zheng for his help with Keck observations. We also thank Jun Mo for his help with the TNT data reduction. We acknowledge the support of the staffs of the various observatories where data were obtained.\\ MF is supported by a Royal Society -- Science Foundation Ireland University Research Fellowship. AR acknowledges support from ANID BECAS/DOCTORADO NACIONAL 21202412. GP and AR acknowledge support from the Chilean Ministry of Economy, Development, and Tourism’s Millennium Science Initiative through grant IC12009, awarded to the Millennium Institute of Astrophysics. EC, NER, and LT acknowledge support from MIUR, PRIN 2017 (grant 20179ZF5KS) ``\textit{The new frontier of the Multi-Messenger Astrophysics: follow-up of electromagnetic transient counterparts of gravitational wave sources}.'' NER also acknowledges partial support from the Spanish MICINN grant PID2019-108709GB-I00 and FEDER funds, and from the program Unidad de Excelencia María de Maeztu CEX2020-001058-M. TMR acknowledges the financial support of the Jenny and Antti Wihuri and the Vilho, Yrj{\"o} and Kalle V{\"a}is{\"a}l{\"a} Foundations. Research by SV is supported by NSF grants AST-1813176 and AST-2008108. 
Time-domain research by the University of Arizona team and DJS is supported by NSF grants AST-1821987, 1813466, 1908972, $\&$ 2108032, and by the Heising-Simons Foundation under grant \#2020-1864. KAB acknowledges support from the DIRAC Institute in the Department of Astronomy at the University of Washington. The DIRAC Institute is supported through generous gifts from the Charles and Lisa Simonyi Fund for Arts and Sciences, and the Washington Research Foundation. The LCO team is supported by NSF grants AST-1911225 and AST-1911151. JB is supported by NSF grants AST-1911151 and AST-1911225, as well as by National Aeronautics and Space Administration (NASA) grant 80NSSC19kf1639. YZC is funded by China Postdoctoral Science Foundation (grant 2021M691821). RD acknowledges funds by ANID grant FONDECYT Postdoctorado No. 3220449. LG acknowledges financial support from the Spanish Ministerio de Ciencia e Innovaci\'on (MCIN), the Agencia Estatal de Investigaci\'on (AEI) 10.13039/501100011033, and the European Social Fund (ESF) ``Investing in your future'' under the 2019 Ram\'on y Cajal program RYC2019-027683-I and the PID2020-115253GA-I00 HOSTFLOWS project, from Centro Superior de Investigaciones Cient\'ificas (CSIC) under the PIE project 20215AT016, and the program Unidad de Excelencia Mar\'ia de Maeztu CEX2020-001058-M. RK acknowledges support from the Academy of Finland (340613). KM acknowledges BRICS grant DST/IMRCD/BRICS/Pilotcall/ProFCheap/2017(G) for this work. AMG acknowledges financial support from the 2014–2020 ERDF Operational Programme and by the Department of Economy, Knowledge, Business and University of the Regional Government of Andalusia through the FEDER-UCA18-107404 grant. MDS is supported by grants from the VILLUM FONDEN (grant 28021) and the Independent Research Fund Denmark (IRFD; 8021-00170B). 
AVF's group at UC Berkeley has received support from the Miller Institute for Basic Research in Science (where AVF was a Miller Senior Fellow), the Christopher R. Redlich Fund, and numerous individual donors. LW is sponsored (in part) by the Chinese Academy of Sciences (CAS), through a grant to the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile. This work was supported in part by NASA Keck PI Data Award 2021A-N147 (PI: Jha), administered by the NASA Exoplanet Science Institute. This work is supported by National Natural Science Foundation of China (NSFC grants 12033003, 11633002), the Scholar Program of Beijing Academy of Science and Technology (DZ:BS202002), and the Tencent Xplorer Prize. This work is partially supported by China Manned Space Project (CMS-CSST-2021-A12).\\ This work is based on observations made with the Nordic Optical Telescope (NOT), owned in collaboration by the University of Turku and Aarhus University, and operated jointly by Aarhus University, the University of Turku and the University of Oslo, representing Denmark, Finland and Norway, the University of Iceland and Stockholm University at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias; the 10.4\,m Gran Telescopio Canarias (GTC), installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'isica de Canarias, on the island of La Palma; the 2.0\,m Liverpool Telescope operated on the island of La Palma by Liverpool John Moores University at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'isica de Canarias with financial support from the UK Science and Technology Facilities Council; the 3.58\,m Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundaci\'on Galileo Galilei of the Istituto Nazionale di Astrofisica (INAF) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'isica de Canarias; the 
4.1\,m Southern African Large Telescope (SALT) of the South African Astronomical Observatory, Sutherland, South Africa; the 6.5\,m Magellan-Baade Telescope located at the Las Campanas Observatory, Chile; the Southern Astrophysical Research Telescope and the Panchromatic Robotic Optical Monitoring and Polarimetry Telescopes (PROMPT) of the Cerro Tololo Inter-American Observatory (CTIO), on Cerro Pach\'on, Chile; the 3.05\,m Shane telescope of the Lick Observatory, University of California Observatories, USA; the 1.82\,m Copernico and the 67/92\,cm Schmidt telescopes of INAF --- Osservatorio Astronomico di Padova, Asiago, Italy; the 3.6\,m Devasthal Optical Telescope, the 1.3\,m Devasthal Fast Optical Telescope (DFOT), and the 1.04\,m Sampurnanand Telescope (ST) of the Aryabhatta Research Institute of Observational Sciences (ARIES), Manora Peak, Nainital, Uttarakhand, India; the 0.8\,m Tsinghua-NAOC Telescope at Xinglong Observatory (China); the 0.6\,m Rapid Eye Mount (REM) INAF telescope, hosted at ESO La Silla Observatory, Chile, under program ID 43308; and the 2.3\,m Bok Telescope, operated by Steward Observatory, Kitt Peak National Observatory, Arizona, USA. This work also makes use of observations from the Las Cumbres Observatory global telescope network (including the Faulkes North Telescope), and the 2.54\,m Isaac Newton Telescope operated by the Isaac Newton Group of Telescopes (ING), Roque de los Muchachos, La Palma, Spain. The NIRES data presented herein were obtained at the W. M. Keck Observatory from telescope time allocated to NASA through the agency's scientific partnership with the California Institute of Technology and the University of California. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. A major upgrade of the Kast spectrograph on the Shane 3\,m telescope at Lick Observatory was made possible through generous gifts from William and Marina Kast as well as the Heising-Simons Foundation. 
Research at Lick Observatory is partially supported by a generous gift from Google. The paper is also based on observations obtained at the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by AURA, Inc., under a cooperative agreement with the NSF on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigaci\'on y Desarrollo (Chile), Ministerio de Ciencia, Tecnolog\'ia e Innovaci\'on (Argentina), Minist\'erio da Ciência, Tecnologia, Inova\c{c}\~oes e Comunica\c{c}\~oes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). The data presented herein were obtained in part with ALFOSC, which is provided by the Instituto de Astrofisica de Andalucia (IAA) under a joint agreement with the University of Copenhagen and NOT.\\ This study is also based on observations made with the NASA/ESA {\it Hubble Space Telescope}, obtained from the Data Archive at the Space Telescope Science Institute (STScI), which is operated by AURA, Inc., under NASA contract NAS 5-26555. This work made use of data from the All-Sky Automated Survey for Supernovae (ASAS-SN), obtained through the Sky Patrol interface (\url{https://asas-sn.osu.edu/}). This research is based on observations made with the mission, obtained from the MAST data archive at STScI, which is operated by AURA, Inc., under NASA contract NAS 5–26555. These observations are associated with programs with IDs 8645, 15922, and 16691. This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. ATLAS is primarily funded to search for near-Earth objects through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. 
The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen's University Belfast, STScI, and the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile.\\ The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, STScI, NASA under grant NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, NSF grant AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.\\ This publication is partially based on observations obtained with the Samuel Oschin 48-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility (ZTF) project. ZTF is supported by the NSF under grant AST-1440341 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratories. 
Operations are conducted by COO, IPAC, and UW.\\ We acknowledge ESA Gaia, DPAC, and the Photometric Science Alerts Team (\url{http://gsaweb.ast.cam.ac.uk/alerts}). We acknowledge the use of public data from the {\it Swift} data archive. This research made use of the WISeREP database (\url{https://wiserep.weizmann.ac.il}). This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. This publication used data products from the Two Micron All-Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by NASA and the NSF. \\ This work used the Binary Population and Spectral Synthesis (BPASS) models as last described by Eldridge, Stanway, et al. (2017) and by Stanway, Eldridge, et al. (2018).\\ The authors wish to recognise and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. \end{acknowledgements} \begin{appendix} \section{Instruments used in the photometric campaigns} \label{Appendix:A} \object{AT~2018bwo} was monitored in the optical bands using the 0.41~m Prompt~3 (with Sloan $g$ and $r$ filters) and the 0.41~m Prompt~6 (with Johnson $V$ and $R$ filters), both hosted by the Cerro Tololo Inter-American Observatory (CTIO, Chile). Additional unfiltered photometry was obtained with the 0.41~m Prompt~5 (CTIO) telescope and the 0.41~m Prompt-MO-1 telescope (Meckering Observatory, south-western Australia), both operating in the framework of the DLT40 survey. 
Other data were obtained with the 1~m Swope Telescope of the Las Campanas Observatory, and several 1~m telescopes equipped with Sinistro cameras and $UBVgri$ filters, operating within the Las Cumbres Observatory global telescope network, in the framework of the Global Supernova Project. In particular, the telescopes used for the \object{AT~2018bwo} campaign are hosted at the Siding Spring Observatory (SSO; Australia), the South African Astronomical Observatory (SAAO, South Africa), and CTIO. Single-epoch observations were also obtained with the 3.56~m New Technology Telescope (NTT) equipped with EFOSC2, hosted at ESO-La Silla Observatory (Chile), and the 10.4~m GTC with OSIRIS, hosted at La Palma (Canary Islands, Spain). A small number of photometric data were provided by the {\it Gaia} survey in the {\it Gaia} $G$ band, and by ZTF. Four epochs taken by ZTF with the Sloan $g$ and $r$ filters\footnote{The images were retrieved from \url{https://www.ztf.caltech.edu/}; \protect\cite{mas19}.} during the late-time light-curve decline were measured by us using the template-subtraction technique. Finally, photometry in the orange ($o$) and cyan ($c$) bands is provided by the two 0.5~m ATLAS telescopes (ATLAS~1 is on Haleakal\"a and ATLAS~2 on Maunaloa, Hawaii, USA). \object{AT~2021afy} was followed in $g$ and $r$ by ZTF. Although ZTF publicly provides forced photometry, we re-analysed the ZTF images obtained through the Public Data Release 3 without applying a template subtraction. We made this choice because the object was located far from the host-galaxy centre, and the ZTF templates were not optimal (in particular, in the $g$ band). 
We complemented the ZTF data with multi-band observations obtained with the following instruments: the 2~m Liverpool Telescope (LT) equipped with IO:O; the 2.56~m Nordic Optical Telescope (NOT) equipped with the Alhambra Faint Object Spectrograph and Camera (ALFOSC) and the NOT near-infrared Camera and spectrograph (NOTCam); the 10.4~m GTC with OSIRIS; the 1.82~m Copernico Telescope with the Asiago Faint Object Spectrograph and Camera (AFOSC) and the 0.67/0.92~m Schmidt Telescope with a Moravian camera of the Padova Observatory (Istituto Nazionale di Astrofisica, INAF, hosted at Mt. Ekar, near Asiago, Italy); and the 0.6~m Rapid Eye Mount (REM) telescope with the ROSS2 and REMIR cameras, hosted by the European Southern Observatory (ESO) in La Silla (Chile). \object{AT~2021blu} was extensively observed by the following public surveys: the All-Sky Automated Survey for Supernovae \citep[ASAS-SN;][in the $g$ band]{sha14,koc17}\footnote{ASAS-SN photometry is publicly released through the Sky Patrol ASAS-SN interface (\url{https://asas-sn.osu.edu}).}, which operates through a network of small (0.14~m) telescopes at different sites worldwide; the 1.22~m Samuel Oschin Telescope at the Palomar Observatory (California, USA) serving the ZTF survey\footnote{In this case, we used public ZTF forced photometry, which is released through the Lasair (\url{https://lasair.roe.ac.uk/}) and ALeRCE (\url{https://alerce.online/}) brokers, and already shown by \protect\citet{sora22}.}; the two 0.5~m ATLAS telescopes, and the two 1.8~m Pan-STARRS\footnote{The acronym stands for Panoramic Survey Telescope $\&$ Rapid Response System.} \citep[PS1 and PS2;][]{cha19,fle20,mag20} telescopes at Haleakal\"a (Hawaii, USA). 
Multi-band data were also obtained with the same facilities used for the \object{AT~2021afy} campaign (except REM, owing to the northern declination of \object{AT~2021blu}), plus 0.4~m to 1~m telescopes of the Las Cumbres Observatory global telescope network equipped with SBIG STL6303 and Sinistro cameras, respectively, and hosted at the McDonald Observatory (Texas, USA) and the Teide Observatory (Tenerife, Canary Islands, Spain). Other telescopes used in the monitoring campaign of \object{AT~2021blu} are the 0.8~m Tsinghua-NAOC Telescope \citep[TNT;][]{wang08,hua12} with a PIXIS back-illuminated 1300B CCD camera at Xinglong Observatory (China); the 3.6~m Devasthal Optical Telescope (DOT) equipped with ADFOSC, the 1.3~m Devasthal Fast Optical Telescope (DFOT) and the 1.04~m Sampurnanand Telescope (ST) operated by Aryabhatta Research Institute of Observational Sciences (ARIES; India) with optical imagers. Data in the optical (with $u$, $b$, and $v$ filters) and UV (in the $uvw2$, $uvm2$, and $uvw1$ bands) domains were obtained by the {\it Neil Gehrels Swift Observatory} spacecraft \citep{geh04} equipped with UVOT \citep{rom05}. \subsection{Photometry tables} \label{Appendix:A.1} Tables A1, A2, and A3 are available in electronic form at the CDS, and contain the following information: the epoch and the MJD of the observation (Columns 1 and 2, respectively); the filter (Column 3); the magnitude and the error (Columns 4 and 5, respectively); the instrumental configuration (Column 6); additional notes (Column 7). 
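The column layout described above lends itself to simple programmatic access. Below is a minimal, hypothetical reader for a comma-separated export of that layout; the sample rows, column names, and helper functions are invented for illustration and do not reproduce the actual CDS files:

```python
import csv
import io

# Invented sample in the column order described in the text:
# epoch, MJD, filter, magnitude, error, instrument, notes.
SAMPLE = """epoch,mjd,filter,mag,err,instrument,notes
2021-01-08,59222.39,g,21.60,0.32,ZTF-c09,
2021-01-14,59230.45,r,21.35,0.28,ZTF-c09,stacked
"""

def read_photometry(text):
    """Parse the table into a list of dicts, converting numeric columns."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["mjd"] = float(row["mjd"])
        row["mag"] = float(row["mag"])
        row["err"] = float(row["err"])
        rows.append(row)
    return rows

def light_curve(rows, band):
    """Extract (MJD, magnitude) pairs for one filter, sorted by time."""
    return sorted((r["mjd"], r["mag"]) for r in rows if r["filter"] == band)

rows = read_photometry(SAMPLE)
g_curve = light_curve(rows, "g")
```

A reader for the real machine-readable tables would additionally need to handle upper limits (entries such as $>$21.98) and missing errors.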
\begin{table*} \setcounter{table}{3} \caption{\label{Table:A1.4} Information on the PS1 and ZTF stacked images obtained in the decade before the outburst of \object{AT~2021afy}, and detection limits.} \centering \begin{tabular}{cccccccc} \hline\hline Initial date & Initial MJD & Final date & Final MJD & Average MJD & Filter & Magnitude & CCD code\\ \hline 2011-05-31 & 55712.31 & 2012-05-16 & 56063.55 & 55888.13 & $g$ & $>$23.05 & PS1 \\ 2010-07-01 & 55378.42 & 2013-07-03 & 56476.40 & 56069.75 & $r$ & $>$23.20 & PS1 \\ 2011-05-19 & 55700.48 & 2014-07-11 & 56849.38 & 56396.19 & $i$ & $>$23.40 & PS1 \\ 2010-03-01 & 55256.57 & 2014-09-17 & 56917.23 & 55928.69 & $z$ & $>$22.81 & PS1 \\ 2018-03-26 & 58203.42 & 2018-09-08 & 58369.23 & 59286.82 & $g$ & $>$20.95 & ZTF-c15 \\ 2018-03-09 & 58186.48 & 2018-05-20 & 58258.25 & 59222.37 & $r$ & $>$22.05 & ZTF-c15 \\ 2018-04-20 & 58228.51 & 2018-09-06 & 58367.14 & 59297.82 & $i$ & $>$21.33 & ZTF-c15 \\ 2020-12-11 & 59194.55 & 2020-12-13 & 59196.57 & 59195.64 & $r$ & $>$21.11 & ZTF-c15 \\ 2020-12-15 & 59198.56 & 2020-12-18 & 59201.57 & 59199.56 & $r$ & $>$21.11 & ZTF-c15 \\ \hline \end{tabular} \tablefoot{The table reports the epoch and MJD of the first image (Columns 1 and 2), the epoch and MJD of the last image (Columns 3 and 4), the average MJD of the stacked image (Column 5), the filter (Column 6), the detection magnitude limits (Column 7), and the identification code of the stacked images (Column 8).} \end{table*} \begin{table*} \setcounter{table}{4} \caption{\label{Table:A1.5} Information on the ZTF stacked images and photometry of the pre-outburst source at the location of \object{AT~2021blu}.} \centering \begin{tabular}{cccccccc} \hline\hline Initial date & Initial MJD & Final date & Final MJD & Average MJD & Filter & Magnitude & CCD code\\ \hline 2018-04-25 & 58233.22 & 2018-06-23 & 58292.20 & 58260.72 & $g$ & $>$21.98 & ZTF-c12 \\ 2018-03-25 & 58202.26 & 2018-06-24 & 58293.18 & 58248.47 & $g$ & $>$21.85 & ZTF-c09 \\ 
2018-10-31 & 58422.50 & 2019-06-28 & 58662.21 & 58520.01 & $g$ & 22.20 (0.54) & ZTF-c09 \\ 2019-10-09 & 58765.51 & 2019-12-29 & 58846.52 & 58802.75 & $g$ & $>$22.17 & ZTF-c12 \\ 2019-10-02 & 58758.51 & 2019-12-29 & 58846.51 & 58790.41 & $g$ & $>$21.74 & ZTF-c09 \\ 2020-01-04 & 58852.52 & 2020-01-29 & 58877.38 & 58865.24 & $g$ & 22.10 (0.46) & ZTF-c12 \\ 2020-01-01 & 58849.48 & 2020-01-31 & 58879.27 & 58869.97 & $g$ & 22.04 (0.36) & ZTF-c09 \\ 2020-02-01 & 58880.42 & 2020-02-27 & 58906.29 & 58891.04 & $g$ & $>$22.09 & ZTF-c09 \\ 2020-02-01 & 58880.42 & 2020-02-27 & 58906.31 & 58895.65 & $g$ & $>$22.00 & ZTF-c12 \\ 2020-03-01 & 58909.28 & 2020-03-28 & 58936.27 & 58916.00 & $g$ & $>$21.55 & ZTF-c12 \\ 2020-03-04 & 58912.26 & 2020-03-31 & 58939.31 & 58925.76 & $g$ & $>$21.60 & ZTF-c09 \\ 2020-04-15 & 58954.17 & 2020-04-29 & 58968.24 & 58962.46 & $g$ & 22.04 (0.42) & ZTF-c09 \\ 2020-04-15 & 58954.28 & 2020-04-29 & 58968.27 & 58963.53 & $g$ & 22.07 (0.33) & ZTF-c12 \\ 2020-05-02 & 58971.25 & 2020-05-29 & 58998.18 & 58983.74 & $g$ & $>$21.90 & ZTF-c12 \\ 2020-05-03 & 58972.23 & 2020-06-05 & 59005.17 & 58987.19 & $g$ & $>$21.84 & ZTF-c09 \\ 2020-06-01 & 59001.17 & 2020-06-27 & 59027.20 & 59016.53 & $g$ & $>$21.69 & ZTF-c12 \\ 2020-10-18 & 59140.51 & 2020-11-05 & 59158.48 & 59149.21 & $g$ & $>$21.46 & ZTF-c09 \\ 2020-10-18 & 59140.51 & 2020-11-06 & 59159.52 & 59150.39 & $g$ & $>$21.74 & ZTF-c12 \\ 2020-11-12 & 59165.53 & 2020-11-29 & 59182.56 & 59174.27 & $g$ & 21.83 (0.49) & ZTF-c09 \\ 2020-11-12 & 59165.52 & 2020-12-01 & 59184.45 & 59175.00 & $g$ & 21.83 (0.50) & ZTF-c12 \\ 2020-12-02 & 59185.51 & 2020-12-22 & 59205.43 & 59196.32 & $g$ & 21.62 (0.66) & ZTF-c09 \\ 2020-12-05 & 59188.47 & 2020-12-22 & 59205.43 & 59198.29 & $g$ & 21.72 (0.46) & ZTF-c12 \\ 2021-01-01 & 59215.49 & 2021-01-17 & 59231.42 & 59222.69 & $g$ & 21.64 (0.48) & ZTF-c12 \\ 2021-01-08 & 59222.39 & 2021-01-18 & 59232.43 & 59227.44 & $g$ & 21.60 (0.32) & ZTF-c09 \\ 2018-04-08 & 58216.22 & 2018-06-15 & 
58284.18 & 58243.86 & $r$ & 21.62 (0.51) & ZTF-c09 \\ 2018-04-06 & 58214.20 & 2018-06-15 & 58284.17 & 58245.90 & $r$ & 21.74 (0.46) & ZTF-c12 \\ 2018-11-07 & 58429.52 & 2019-06-24 & 58658.18 & 58524.29 & $r$ & $>$21.72 & ZTF-c12 \\ 2018-10-31 & 58422.53 & 2019-07-05 & 58669.18 & 58509.83 & $r$ & 21.75 (0.54) & ZTF-c09 \\ 2019-09-25 & 58751.53 & 2019-12-29 & 58846.46 & 58800.39 & $r$ & $>$21.58 & ZTF-c09 \\ 2019-10-20 & 58776.51 & 2019-12-29 & 58846.46 & 58812.89 & $r$ & $>$21.61 & ZTF-c12 \\ 2020-01-01 & 58849.45 & 2020-01-29 & 58877.34 & 58861.95 & $r$ & $>$21.40 & ZTF-c12 \\ 2020-01-01 & 58849.44 & 2020-01-31 & 58879.34 & 58867.73 & $r$ & 21.74 (0.36) & ZTF-c12 \\ 2020-02-01 & 58880.32 & 2020-02-27 & 58906.23 & 58890.86 & $r$ & 21.75 (0.50) & ZTF-c09 \\ 2020-02-01 & 58880.34 & 2020-02-20 & 58899.34 & 58891.63 & $r$ & 21.71 (0.47) & ZTF-c12 \\ 2020-03-01 & 58909.32 & 2020-03-31 & 58939.21 & 58916.04 & $r$ & $>$21.68 & ZTF-c12 \\ 2020-04-15 & 58954.24 & 2020-04-29 & 58968.19 & 58961.56 & $r$ & 21.82 (0.33) & ZTF-c12 \\ 2020-04-15 & 58954.22 & 2020-04-29 & 58968.17 & 58962.46 & $r$ & 21.83 (0.33) & ZTF-c09 \\ 2020-05-01 & 58970.21 & 2020-05-29 & 58998.25 & 58981.72 & $r$ & 21.91 (0.42) & ZTF-c12 \\ 2020-05-03 & 58972.17 & 2020-06-27 & 59027.18 & 58990.65 & $r$ & 21.93 (0.52) & ZTF-c09 \\ 2020-06-04 & 59004.25 & 2020-06-27 & 59027.17 & 59017.06 & $r$ & $>$21.34 & ZTF-c12 \\ 2020-10-14 & 59136.52 & 2020-11-05 & 59158.53 & 59147.65 & $r$ & 21.74 (0.53) & ZTF-c09 \\ 2020-10-18 & 59140.53 & 2020-11-06 & 59159.49 & 59149.61 & $r$ & 21.61 (0.47) & ZTF-c12 \\ 2020-11-12 & 59165.49 & 2020-11-28 & 59181.49 & 59172.69 & $r$ & 21.48 (0.39) & ZTF-c12 \\ 2020-11-12 & 59165.49 & 2020-12-02 & 59185.47 & 59174.79 & $r$ & 21.26 (0.31) & ZTF-c09 \\ 2020-12-01 & 59184.51 & 2020-12-17 & 59200.52 & 59193.48 & $r$ & 21.30 (0.38) & ZTF-c12 \\ 2020-12-10 & 59193.52 & 2020-12-27 & 59210.43 & 59200.32 & $r$ & 21.29 (0.29) & ZTF-c09 \\ 2021-01-04 & 59218.45 & 2021-01-04 & 59218.45 & 59218.45 
& $r$ & 21.32 (0.30) & ZTF-c09 \\ 2021-01-01 & 59215.44 & 2021-01-05 & 59219.36 & 59219.42 & $r$ & 21.32 (0.42) & ZTF-c12 \\ 2021-01-01 & 59215.44 & 2021-01-17 & 59231.47 & 59222.68 & $r$ & 21.29 (0.37) & ZTF-c12 \\ 2021-01-07 & 59221.40 & 2021-01-11 & 59225.41 & 59223.43 & $r$ & 21.31 (0.30) & ZTF-c12 \\ 2021-01-08 & 59222.44 & 2021-01-12 & 59226.49 & 59224.47 & $r$ & 21.34 (0.26) & ZTF-c09 \\ 2021-01-13 & 59227.40 & 2021-01-17 & 59231.47 & 59229.43 & $r$ & 21.36 (0.34) & ZTF-c12 \\ 2021-01-14 & 59228.47 & 2021-01-18 & 59232.42 & 59230.45 & $r$ & 21.35 (0.28) & ZTF-c09 \\ 2018-04-24 & 58232.30 & 2018-05-28 & 58266.22 & 58251.60 & $i$ & $>$20.98 & ZTF-c12 \\ 2018-04-24 & 58232.30 & 2018-05-28 & 58266.22 & 58251.97 & $i$ & $>$21.06 & ZTF-c09 \\ \hline \end{tabular} \tablefoot{The table reports the epoch and MJD of the first image (Columns 1 and 2), the epoch and MJD of the last image (Columns 3 and 4), the average MJD of the stacked image (Column 5), the filter (Column 6), the magnitude (Column 7), and the CCD chip identification code of the ZTF images (Column 8).} \end{table*} \section{Instruments used in the spectroscopic campaigns} \label{Appendix:B} The spectra of \object{AT~2018bwo}, which cover four months of the LRN evolution, were taken with the $11.1\times9.8$~m Southern African Large Telescope (SALT) with the Robert Stobie Spectrograph (RSS; hosted near Sutherland, South Africa); the 8.1~m Gemini South Telescope equipped with FLAMINGOS2 and the 4.1~m Southern Astrophysical Research (SOAR) Telescope plus the Goodman spectrograph (both located on Cerro Pach\'on, Chile); the 6.5~m Magellan-Baade Telescope with the Folded-port InfraRed Echelette (FIRE)\footnote{FIRE data were reduced following the prescriptions detailed by \protect\citet{hsi19}.} spectrometer at the Las Campanas Observatory (Chile); and the 10~m Keck-I Telescope with the Low Resolution Imaging Spectrograph \citep[LRIS;][]{oke95} on Maunakea (Hawaii, USA). 
GTC, equipped with OSIRIS, was used for a late-time spectrum of \object{AT~2018bwo}, and all spectra of \object{AT~2021afy}. The following instruments were used in the spectroscopic campaign of \object{AT~2021blu}: the 2.0~m Faulkes North Telescope (FNT) with FLOYDS of the Las Cumbres Observatory node on Haleakal\"a (Hawaii, USA); the 3.05~m Shane telescope equipped with the Kast spectrograph (hosted at Lick Observatory, near San Jose, California, USA); Keck-I plus LRIS and Keck-II with NIRES; the 1.82~m Copernico Telescope plus AFOSC; the DOT plus ADFOSC; the GTC with OSIRIS; the NOT with ALFOSC; and the 3.58~m Telescopio Nazionale Galileo (TNG) with the Device Optimized for the LOw RESolution (DOLORES, or LRS). \end{appendix}
Title: Energy transport during 3D small-scale reconnection driven by anisotropic plasma turbulence
Abstract: Energy dissipation in collisionless plasmas is a longstanding fundamental physics problem. Although it is well known that magnetic reconnection and turbulence are coupled and transport energy from system-size scales to sub-proton scales, the details of the energy distribution and energy dissipation channels remain poorly understood. In particular, the energy transfer and transport associated with three-dimensional (3D) small-scale reconnection that occurs as a consequence of a turbulent cascade remain unknown. We use an explicit fully kinetic particle-in-cell code to simulate 3D small-scale magnetic reconnection events forming in anisotropic and Alfv\'enic decaying turbulence. We identify a highly dynamic and asymmetric reconnection event that involves two reconnecting flux ropes. We use a two-fluid approach based on the Boltzmann equation to study the spatial energy transfer associated with the reconnection event and compare the power density terms in the two-fluid energy equations with standard energy-based damping, heating and dissipation proxies. Our findings suggest that the electron bulk flow transports thermal energy density more efficiently than kinetic energy density. Moreover, in our turbulent reconnection event, the energy-density transfer is dominated by plasma compression. This is consistent with turbulent current sheets and turbulent reconnection events, but not with laminar reconnection.
https://export.arxiv.org/pdf/2208.02350
\begin{document} \title{Energy transport during 3D small-scale reconnection driven by anisotropic plasma turbulence} \correspondingauthor{Jeffersson Andres Agudelo Rueda} \email{jeffersson.agudelo.18@ucl.ac.uk} \author[0000-0001-5045-0323]{Jeffersson A Agudelo Rueda} \affiliation{Department of Physics and Astronomy, Dartmouth College, Hanover, NH, USA} \affiliation{Mullard Space Science Laboratory, UCL, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK} \author[0000-0002-0497-1096]{Daniel Verscharen} \affiliation{Mullard Space Science Laboratory, UCL, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK} \author[0000-0002-0622-5302]{Robert T. Wicks} \affiliation{Department of Mathematics, Physics and Electrical Engineering, Northumbria University, Newcastle upon Tyne, NE1 8ST, UK} \author[0000-0002-5982-4667]{Christopher J. Owen} \affiliation{Mullard Space Science Laboratory, UCL, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK} \author[0000-0003-3623-4928]{Georgios Nicolaou} \affiliation{Mullard Space Science Laboratory, UCL, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK} \author[0000-0003-2672-9249]{Kai Germaschewski} \affiliation{Space Science Center, University of New Hampshire, Durham NH 03824, USA} \author[0000-0003-3623-4928]{Andrew P. Walsh} \affiliation{European Space Astronomy Centre, Urb. Villafranca del Castillo, E-28692 Villanueva de la Ca\~nada, Madrid, Spain} \author[0000-0002-1682-1212]{Ioannis Zouganelis} \affiliation{European Space Astronomy Centre, Urb. Villafranca del Castillo, E-28692 Villanueva de la Ca\~nada, Madrid, Spain} \author[0000-0002-5999-4842]{Santiago Vargas Dom\'inguez} \affiliation{Universidad Nacional de Colombia, Observatorio Astron\'omico Nacional, Ed. 
413 Bogot\'a, Colombia} \keywords{Magnetic Reconnection, Turbulence.} \section{Introduction} \label{sec:intro} The solar wind in the inner heliosphere is a weakly collisional turbulent plasma in which the energy is transported from large ($\sim 10^9 $ km) to small ($\sim 10^{-1} $ km) scales via an active turbulent cascade \citep{coleman1968turbulence, marsch1990spectral}. Although the collisionless nature of the solar wind precludes classical viscous dissipation of these turbulent fluctuations, the non-adiabatic evolution of the solar wind \citep{gazis1982voyager,matteini2007evolution,hellinger2011heating} suggests the action of local heating mechanisms \citep{barnes1968collisionless,goldstein2015kinetic}. The plasma-physics processes responsible for this heating are not yet fully understood. The observed velocity distribution functions of the solar wind species often exhibit non-thermal features \citep[e.g.,][]{marsch1982solar,feldman1975solar,feldman1978characteristic,mccomas1992solar}. Important progress has been made to understand heating and energy dissipation \citep[e.g.,][]{gary1999collisionless,howes2017diagnosing,klein2017diagnosing,matthaeus2020pathways}. Landau damping, ion cyclotron damping and stochastic heating are considered collisionless dissipation mechanisms that transfer energy from the electromagnetic field to the plasma particles \citep{marsch2003ion, kasper2008hot, chandran2010perpendicular,chandran2013stochastic}. The dissipation occurs predominantly in intermittent structures which form in plasma turbulence \citep{matthaeus1999turbulence,kiyani2015dissipation}. Like turbulence, magnetic reconnection is a process that emerges on a broad range of scales and under a large variety of plasma conditions. Magnetic reconnection occurs when magnetic structures form regions in which the frozen-in condition is locally broken, allowing the exchange of particles between the magnetic structures \citep{hesse1988theoretical,schindler1988general}. 
Magnetic reconnection and turbulence are closely linked. Magnetic reconnection self-consistently occurs as a consequence of the turbulent cascade \citep{servidio2010statistics, loureiro2020nonlinear,agudelo2021three} and turbulence emerges in current sheets, exhaust flows, electron streamers, and shocks associated with reconnection events \citep{pucci2017properties,kowal2017statistics,lapenta2020local}. During magnetic reconnection, plasma particles are heated and accelerated while the magnetic field topology changes \citep{pontin2011three,zweibel2016,lazarian20203d}. The role of magnetic reconnection for the evolution of energy in collisionless plasmas is unclear. Although magnetic reconnection transports energy from large to small scales \citep{sundkvist2007dissipation,franci2017magnetic, loureiro2020nonlinear}, the details of the energy transport across scales and the role of reconnection in the turbulent cascade are a matter of ongoing research \citep{loureiro2017role,franci2017magnetic,PhysRevE.104.065206}. The energy transfer between fields and particles as well as the transfer between kinetic and thermal degrees of freedom during reconnection are the key objectives of this research area. The energy transfer and transport associated with magnetic reconnection has been addressed by previous studies that focus on idealized 2D Harris current-sheet reconnection \citep{yin2001hybrid,schmitz2006kinetic,wang2015comparison,pezzi2021dissipation}, 3D laminar collisionless reconnection in the context of magnetospheres \citep{wang2018electron} and 2D reconnection in turbulent plasma \citep{fadanelli2021energy}. In this work, we use particle-in-cell (PIC) simulations to study the energy transfer associated with 3D small-scale magnetic reconnection that self-consistently occurs as a consequence of an anisotropic turbulent cascade. In section \ref{sec:energy_dist}, we present our theoretical framework to study the energy transfer and transport in our plasma simulations. 
In section \ref{sec:Initialization_sim} we present our simulation results, emphasizing the presence of agyrotropy in section \ref{sec:Agirotropy} and the energy distribution in section \ref{sec:energy_distri}. In section \ref{sec:discussion}, we discuss the implications of our results, and in section \ref{sec:conclusions} we provide conclusions. \section{Energy transfer and transport} \label{sec:energy_dist} The total energy in a closed volume of plasma is partitioned amongst the particles and the electromagnetic fields. The bulk kinetic energy density of the particle species $s$ is associated with the first velocity moment of the particle velocity distribution function $f_{s} = f_{s}(\vec{x},\vec{v},t)$ and therefore with the bulk flux of the particles. The thermal energy density is associated with the second velocity moment and thus the pressure of the particles. The evolution of $f_{s}$ follows the Boltzmann equation \begin{equation} \frac{\partial f_{s}}{\partial t} + \vec{v} \cdot \nabla f_{s} + \frac{q_{s}}{m_{s}} \left(\vec{E} + \vec{v} \times \vec{B} \right) \cdot \nabla_{v}f_{s}= \left( \frac{\partial f_{s}}{\partial t} \right)_{c}, \label{eqn:boltz} \end{equation} \noindent where $\vec{v}$ is the velocity, $\vec{E}$ is the electric field, $\vec{B}$ is the magnetic field, $q_{s}$ is the charge and $m_{s}$ is the mass of a particle of species $s$. The term $(\partial f_{s} / \partial t )_{c}$ on the right-hand side represents the change in the distribution function due to collisions. This term includes individual correlations between fields and particles, based on the particles' individual Coulomb potentials \citep{klimontovich1997physics}. To study the energy transport, we derive a set of energy equations based on the Boltzmann equation (\ref{eqn:boltz}). 
We first define the density \begin{equation} n_{s}\equiv \int f_{s} d^{3}v, \label{eqn:density} \end{equation} \noindent the bulk velocity \begin{equation} \vec{u}_{s} \equiv \frac{1}{n_{s}} \int f_{s}\vec{v} d^{3}v, \label{eqn:bulkvelocity} \end{equation} \noindent and the pressure tensor \begin{equation} \tensndd{P}_{s} \equiv m_{s}\int f_{s}(\vec{v}-\vec{u}_s)(\vec{v}-\vec{u}_{s}) d^{3}v, \label{eqn:pressure_tensor} \end{equation} \noindent where $(\vec{v}-\vec{u}_s)(\vec{v}-\vec{u}_{s})$ is the dyadic product. We define the heat flux vector \begin{equation} \vec{h}_{s} \equiv \frac{1}{2} m_{s}\int f_{s}(\vec{v}-\vec{u}_{s})\cdot(\vec{v}-\vec{u}_s)(\vec{v}-\vec{u}_{s}) d^{3}v. \label{eqn:heatvector} \end{equation} \noindent We define the first moment of the collision term in Eq. (\ref{eqn:boltz}) as \begin{equation} \vec{\Xi}_{s}^{1} = \int \vec{v}\left( \frac{\partial f_{s}}{\partial t} \right)_{c} d^{3}v, \label{eqn:collint} \end{equation} \noindent and the second moment as \begin{equation} \tensndd{\Xi}_{s}^{2} = \int \vec{v}\vec{v}\left( \frac{\partial f_{s}}{\partial t} \right)_{c} d^{3}v. \label{eqn:collint2} \end{equation} \noindent With these definitions, we compute the first and second moments of Eq. (\ref{eqn:boltz}) (see Appendix \ref{app:Energy_equa} for details). The first moment of Eq. 
(\ref{eqn:boltz}) yields the kinetic energy equation \begin{equation} \frac{d \varepsilon_{s}^{k}}{d t} + \vec{u}_{s} \cdot \nabla \cdot \tensndd{P}_{s} + \varepsilon_{s}^{k}\nabla\cdot\vec{u}_{s} - q_{s}n_{s}(\vec{u}_{s} \cdot \vec{E}) = \Xi^{k}_{s}, \label{eqn:firstmomener_text} \end{equation} \noindent where $d/dt=\partial /\partial t + (\vec{u}_{s} \cdot \nabla)$ is the total time derivative, \begin{equation} \varepsilon^{k}_{s} = \frac{1}{2} n_{s}m_{s}(\vec{u}_{s} \cdot \vec{u}_{s}) \end{equation} \noindent is the kinetic energy density and \begin{equation} \Xi^{k}_{s} = m_{s} \vec{u}_{s} \cdot \vec{\Xi}_{s}^{1} \end{equation} \noindent represents the irreversible kinetic energy transfer. The terms $\vec{u}_{s} \cdot \nabla \cdot \tensndd{P}_{s}$, $\varepsilon_{s}^{k}\nabla\cdot\vec{u}_{s}$ and the advective term $(\vec{u}_{s}\cdot \nabla)\varepsilon^{k}_{s}$ are associated with the term $\vec{v} \cdot \nabla f_{s}$ in Eq. (\ref{eqn:boltz}). Therefore, these terms represent kinetic energy-density transport due to the free streaming of particles. Conversely, the term $-q_{s}n_{s}(\vec{u}_{s} \cdot \vec{E})$, associated with the electric field, represents the energy-density transfer between particle bulk flows and fields. The second moment of Eq. (\ref{eqn:boltz}) yields the thermal energy equation \begin{equation} \frac{d \varepsilon_{s}^{th}}{d t} + \nabla \cdot \vec{h}_{s} + \nabla\vec{u}_{s}:\tensndd{P}_{s} + \varepsilon_{s}^{th}\nabla \cdot \vec{u}_{s} = \Xi^{th}_{s}, \label{eqn:secondmomener_text} \end{equation} \noindent where \begin{equation} \varepsilon^{th}_{s} = \frac{1}{2} Tr (\tensndd{P}_{s}) \end{equation} \noindent is the thermal energy density and \begin{equation} \Xi^{th}_{s} = -m_{s} \vec{u}_{s} \cdot \vec{\Xi}_{s}^{1} + \frac{m_{s}}{2}Tr\left(\tensndd{\Xi}_{s}^{2}\right) \end{equation} \noindent represents the irreversible thermal energy transfer. 
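In a PIC analysis, the fluid quantities entering these equations are estimated from the macro-particles. The following is a minimal sketch of how the moments and energy densities defined above could be computed for a single cell; the drifting-Maxwellian test sample, unit mass, and unit cell volume are assumptions for illustration, not the PSC diagnostics themselves:

```python
import numpy as np

# Draw macro-particle velocities from an assumed drifting Maxwellian
# (bulk speed 1 along x, thermal spread sigma = 0.5, normalized units).
rng = np.random.default_rng(1)
m_s, volume = 1.0, 1.0
v = rng.normal(loc=[1.0, 0.0, 0.0], scale=0.5, size=(200_000, 3))

n_s = v.shape[0] / volume                    # zeroth moment: number density
u_s = v.mean(axis=0)                         # first moment: bulk velocity
w = v - u_s                                  # peculiar velocities v - u_s

# Second moment: pressure tensor P_s = m_s * sum_p w w / V.
P_s = m_s * np.einsum('ni,nj->ij', w, w) / volume

# Third moment: heat-flux vector h_s = (m_s/2) * sum_p |w|^2 w / V.
h_s = 0.5 * m_s * ((w**2).sum(axis=1)[:, None] * w).sum(axis=0) / volume

eps_k = 0.5 * n_s * m_s * (u_s @ u_s)        # bulk kinetic energy density
eps_th = 0.5 * np.trace(P_s)                 # thermal energy density, Tr(P)/2
```

For this Maxwellian sample, `eps_th` is close to $(3/2)\,n_{s}k_{B}T_{s}$ with $k_{B}T_{s} = m_{s}\sigma^{2} = 0.25$, and the heat flux vanishes to within sampling noise.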
The term $Tr$ stands for the trace of the tensor and $\nabla\vec{u}_{s}:\tensndd{P}_{s}$ is the double contraction of the strain tensor $\nabla\vec{u}_{s}$ and $\tensndd{P}_{s}$. The terms $\nabla \cdot \vec{h}_{s}$, $ \nabla\vec{u}_{s}:\tensndd{P}_{s}$ and $\varepsilon_{s}^{th}\nabla \cdot \vec{u}_{s}$, associated with $\vec{v} \cdot \nabla f_{s}$ in Eq. (\ref{eqn:boltz}), represent thermal energy-density transport due to the free streaming of particles. The terms on the left-hand sides of Eqs. (\ref{eqn:firstmomener_text}) and (\ref{eqn:secondmomener_text}) describe collisionless processes whereas the terms on the right-hand sides describe collisional processes in the plasma which generate an increase in the plasma entropy. {Equations (\ref{eqn:firstmomener_text}) and (\ref{eqn:secondmomener_text}) alone do not capture total energy conservation because they do not account for the rate of change in the electromagnetic energy density $\partial \varepsilon^{em}/\partial t$, nor for the electromagnetic energy flux $\nabla \cdot \vec{S}$, where} \begin{equation} \varepsilon^{em} = \frac{1}{2}\left( \frac{1}{\mu_{0}} \vec{B}\cdot\vec{B} + \epsilon_{0}\vec{E}\cdot\vec{E} \right) \end{equation} { is the electromagnetic energy density and $\vec{S}=\vec{E}\times\vec{B}/\mu_{0}$ is the Poynting vector. The expression that accounts for these changes is Poynting's theorem} \begin{equation} \frac{\partial \varepsilon^{em}}{\partial t} + \nabla \cdot \vec{S} +\vec{J}\cdot\vec{E}=0. 
\label{eqn:poynting_theo} \end{equation} {Nevertheless, Equations (\ref{eqn:firstmomener_text}) and (\ref{eqn:secondmomener_text}) are exact in their description of the kinetic and thermal energy-density transfer and transport, as well as the energy-density exchange between fields and particles.} Before tackling the energy transfer problem, we explicitly define the following three terms, which are often used interchangeably in the literature: \begin{itemize} \item \emph{Heating} is any increase in $\varepsilon^{th}_{s}$, and \emph{cooling} is any decrease in $\varepsilon^{th}_{s}$. Heating can be {either reversible or irreversible}. \item \emph{Damping} is any decrease in $\varepsilon^{em}$, and \emph{growth} is any increase in $\varepsilon^{em}$. Damping/growth can be {either reversible or irreversible}. \item \emph{Dissipation} is any irreversible energy transfer {leading to an increase in} $\varepsilon^{th}_{s}$. \end{itemize} Dissipation corresponds to an increase in entropy of the velocity distribution function, which is challenging to quantify directly both in space measurements and in simulations. Nonetheless, recent studies \citep{pezzi2019energy,matthaeus2020pathways,pezzi2021dissipation} show that in collisionless plasmas \emph{energy-based dissipation proxies} such as the Zenitani parameter \citep{zenitani2011new} \begin{equation} D_{z,s}=\vec{J}\cdot \left( \vec{E} + \vec{u}_{s}\times \vec{B} \right) - n_{s}q_{s}(\vec{u}_{s}\cdot \vec{E}), \label{eqn:zenitani} \end{equation} and the strain-pressure interaction $\nabla\vec{u}_{s}:\tensndd{P}_{s}$ \citep{yang2017energy} {are spatially correlated with dimensionless measures of non-thermal distribution functions \citep{kaufmann2009boltzmann,greco2012inhomogeneous,liang2019decomposition} and plasma agyrotropy \citep{scudder2008illuminating}}. In Eq.~(\ref{eqn:zenitani}), $\vec{J}=\sum_{s=i,e}{q_{s}n_{s}\vec{u}_{s}}$ is the electric current density. 
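The Zenitani parameter is straightforward to evaluate pointwise from the field and moment data. A small sketch in normalized units follows; all numerical values are invented for illustration, not simulation output:

```python
import numpy as np

# Pointwise Zenitani parameter:
# D_z,s = J . (E + u_s x B) - n_s q_s (u_s . E).
def zenitani(J, E, B, n_s, q_s, u_s):
    return np.dot(J, E + np.cross(u_s, B)) - n_s * q_s * np.dot(u_s, E)

# Assumed single-point values in normalized units.
J   = np.array([0.1, 0.0, 0.5])     # current density
E   = np.array([0.02, -0.01, 0.0])  # electric field
B   = np.array([0.0, 0.0, 1.0])     # magnetic field
u_e = np.array([0.2, 0.1, 0.0])     # electron bulk velocity

D_ze = zenitani(J, E, B, n_s=1.0, q_s=-1.0, u_s=u_e)  # ~ 0.015 here
```

Subtracting $n_{s}q_{s}(\vec{u}_{s}\cdot\vec{E})$ removes the reversible work done on the bulk flow, so $D_{z,s}$ measures the work done by the electric field in the frame co-moving with species $s$.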
These energy-based dissipation proxies are effectively \emph{power density} terms derived from the left-hand sides of our Eqs. (\ref{eqn:firstmomener_text}) and (\ref{eqn:secondmomener_text}). According to our definitions, $D_{z,s}$ is a damping measure since it quantifies the energy transfer from the electromagnetic fields into bulk kinetic energy and vice versa. The strain-pressure interaction has gyrotropic and agyrotropic contributions. We decompose the pressure tensor as ${P}_{ij,s}={p}_{s}\delta_{ij}+{\Pi}_{ij,s}$, where \begin{equation} p_{s}=\sum_{i=1}^{3} P_{ii,s}/3 \label{eqn:pi_pressure_d} \end{equation} \noindent is the isotropic scalar pressure and \begin{equation} \Pi_{ij,s} = (P_{ij,s} + P_{ji,s})/2 - p_{s}\delta_{ij} \label{eqn:PI_pressure_of} \end{equation} \noindent is the deviatoric pressure. Likewise, the strain-rate tensor $\nabla\vec{u}_{s}$ can be expressed as $\nabla{u}_{ij,s} = \theta_{s} \delta_{ij}/3 + D_{ij,s}$, where $\theta_{s}=\nabla\cdot\vec{u}_{s}$ represents the dilatation term and \begin{equation} D_{ij,s}=\frac{1}{2}\left(\frac{\partial u_{i,s}}{\partial x_{j}} + \frac{\partial u_{j,s}}{\partial x_{i}}\right) - \frac{1}{3}\theta_{s}\delta_{ij} \end{equation} \noindent represents the symmetric traceless strain-rate tensor \citep{yang2017energy}. Thus, the strain-pressure interaction, which is a heating/cooling proxy according to our definitions, is \begin{equation} \nabla\vec{u}_{s}:\tensndd{P}_{s} = p_{s}\theta_{s} + \Pi_{ij,s}D_{ij,s}, \label{eqn:double_duP} \end{equation} \noindent where the first term on the right-hand side is known as $p\theta_{s}$ and the second term is known as $\Pi \mh D_{s}$ \citep{yang2017energy}. 
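The decomposition of the strain-pressure interaction can be verified numerically: for a symmetric pressure tensor, the full contraction $\nabla\vec{u}_{s}:\tensndd{P}_{s}$ splits exactly into the compressive part $p_{s}\theta_{s}$ and the shear part $\Pi_{ij,s}D_{ij,s}$. A sketch with arbitrary illustrative tensors (not simulation data):

```python
import numpy as np

P = np.array([[1.2, 0.1, 0.0],
              [0.1, 0.9, 0.0],
              [0.0, 0.0, 1.0]])        # symmetric pressure tensor
gradU = np.array([[0.05, 0.02, 0.00],
                  [0.00, -0.01, 0.00],
                  [0.00, 0.00, 0.03]]) # velocity-gradient tensor

p = np.trace(P) / 3.0                              # isotropic scalar pressure
Pi = 0.5 * (P + P.T) - p * np.eye(3)               # deviatoric (traceless) part
theta = np.trace(gradU)                            # dilatation, div u
D = 0.5 * (gradU + gradU.T) - (theta / 3.0) * np.eye(3)  # traceless strain rate

p_theta = p * theta                    # compressive channel, "p theta"
PiD = np.einsum('ij,ij->', Pi, D)      # shear channel, "Pi-D"
full = np.einsum('ij,ij->', gradU, P)  # full contraction grad(u) : P
```

Because $\tensndd{P}$ is symmetric and both $\Pi$ and $D$ are traceless, the antisymmetric (rotational) part of $\nabla\vec{u}$ drops out and `p_theta + PiD` equals `full` to machine precision.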
For comparison with previous studies \citep{pezzi2021dissipation,bandyopadhyay2020statistics}, in section \ref{sec:about_dissi}, we compute $D_{ij,s}$, $p_{s}\theta_{s}$ and $\Pi_{ij,s}D_{ij,s}$ and compare them with the energy transfer and transport terms, $-n_{s}q_{s}(\vec{u}_{s}\cdot\vec{E})$ and $\nabla\vec{u}_{s}:\tensndd{P}_{s}$, in Eqs. (\ref{eqn:firstmomener_text}) and (\ref{eqn:secondmomener_text}). \section{Simulation results} \label{sec:Initialization_sim} \subsection{Simulation setup} We use the explicit Plasma Simulation Code \citep[PSC,][]{germaschewski2016plasma} to simulate anisotropic Alfv\'enic turbulence in an ion-electron plasma in the presence of a constant background magnetic field {$\vec{B}_{0} = B_{0}\uvec{z}$}. The simulation domain is an elongated box of size $L_{x} \times L_{y} \times L_{z} = 24d_{i}\times24d_{i}\times125d_{i}$ with spatial resolution $\Delta x =\Delta y = \Delta z = 0.06d_{i}$, where $d_{i}=c/\omega_{pi}$ is the ion inertial length, $c$ is the speed of light, $\omega_{pi}=\sqrt{n_{0}q_{i}^{2}/m_{i}\epsilon_{0}}$ is the ion plasma frequency and $n_{0}$ is the constant initial ion density. The ratio of the background ion Alfv\'en speed to the speed of light in our simulations is $v_{A,i}/c=0.1$, where $v_{A,i}=B_{0} / \sqrt{\mu_{0}n_{0}m_{i}}$ is the ion Alfv\'en speed. The number of macro-particles per cell is $100$ ions and $100$ electrons. We use a mass ratio of $m_{i}/m_{e} = 100$ so that $d_e = 0.1 d_{i}$. We set the initial thermal-to-magnetic energy ratio to $\beta_{s}=2 n_0 \mu_{0} k_{B}T_{s}/B_{0}^{2} = 1$, where $T_{s}$ is the temperature of species $s$ and $k_{B}$ is the Boltzmann constant. The details of the simulation setup and the overall simulation results are presented by \citet{agudelo2021three}, where the authors report a reconnection event that involves two reconnecting flux-ropes. 
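The stated parameters fix the remaining normalized quantities of the setup. The following quick consistency check uses units where $c = n_{0} = q_{i} = \epsilon_{0} = \mu_{0} = 1$; these unit choices are conventions for the sketch, not the code's internal normalization:

```python
import math

# Normalized units: c = n0 = q = eps0 = mu0 = 1.
c, n0, q, eps0, mu0 = 1.0, 1.0, 1.0, 1.0, 1.0
m_i, m_e = 1.0, 0.01                      # mass ratio m_i/m_e = 100

omega_pi = math.sqrt(n0 * q**2 / (m_i * eps0))  # ion plasma frequency
omega_pe = math.sqrt(n0 * q**2 / (m_e * eps0))  # electron plasma frequency
d_i, d_e = c / omega_pi, c / omega_pe           # inertial lengths

# v_A,i = B0 / sqrt(mu0 n0 m_i) = 0.1 c fixes B0; beta_s = 1 fixes k_B T_s.
B0 = 0.1 * c * math.sqrt(mu0 * n0 * m_i)
kT = B0**2 / (2.0 * n0 * mu0)                   # from beta_s = 2 n0 mu0 kT / B0^2 = 1
```

As expected from the mass ratio, $d_{e}/d_{i} = \sqrt{m_{e}/m_{i}} = 0.1$, consistent with the $d_{e} = 0.1\,d_{i}$ quoted above.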
\subsection{Reconnection event overview} Panel a) in Figure \ref{fig:three_composition} shows the volume rendering of the current density in our simulation domain at the simulated time $t=120 \omega_{pi}^{-1}$. Current filaments that form in the turbulent cascade are mostly elongated along the direction of the background magnetic field. At this time in the simulation, we apply the set of indicators presented by \citet{agudelo2021three} to identify and locate reconnection sites. We select one reconnection event that involves two reconnecting flux-ropes as shown in panel b) of Figure \ref{fig:three_composition}, where the magnetic-field lines are color-coded with $|\vec{B}|$. The magnetic flux-ropes contain an intense magnetic field, especially the lower flux-rope, which is more twisted and has a smaller radius than the upper flux-rope. Conversely, the magnetic field between the flux-ropes is weak. The cuts in panel b) show $J_{z}$ in the $xy$ simulation plane. For our analysis of this event, we apply a 2D cut in the $xy$-plane at $z=77 d_{i}$. Panel c) of Figure \ref{fig:three_composition} shows the magnetic-field lines of the field components in the $xy$-plane, i.e., $(B_{x},B_{y})$, as black contours. Panel c) illustrates the complexity of the magnetic topology in the region of interest. For our energy analysis we select a volumetric sub-region of size $(10\,d_{i})^{3}$ centered around the identified reconnecting region. The green square in panel c) highlights the intersection of the selected sub-region with the central 2D cut from panel b). Even though the background field is in the $z$-direction, the current structures are not exactly aligned with the $z$-direction. Instead, the geometric features of the reconnection event are aligned with the plane perpendicular to the current sheet that sustains the magnetic gradient. Therefore, we determine a reference frame that is aligned with the main axis of the current sheet.
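Such an axis-aligned frame can be constructed from the axis direction alone. The following is a minimal sketch (the axis vector used here is illustrative; in the analysis the axis comes from the measured inclination of the current structure):

```python
import numpy as np

# Sketch: given the unit vector a_hat along the current-sheet axis, build a
# right-handed orthonormal frame (r_hat, p_hat, a_hat), with p_hat an
# arbitrary direction perpendicular to a_hat.
def sheet_frame(axis):
    a_hat = np.asarray(axis, float)
    a_hat /= np.linalg.norm(a_hat)
    # pick any helper vector not parallel to a_hat to seed p_hat
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, a_hat)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    p_hat = np.cross(a_hat, helper)
    p_hat /= np.linalg.norm(p_hat)
    r_hat = np.cross(p_hat, a_hat)  # completes the right-handed triad
    return r_hat, p_hat, a_hat

# illustrative axis, slightly inclined with respect to z
r_hat, p_hat, a_hat = sheet_frame([0.1, 0.05, 1.0])
# rows of R transform (x, y, z) components into (r, p, a) components
R = np.vstack([r_hat, p_hat, a_hat])
```

Applying `R` to any vector field expressed in $(x,y,z)$ components yields its $(r,p,a)$ components.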
We determine the direction of the main axis of the current-sheet by 3D rendering $J_{z}$ and measuring the inclination of the coherent structure that crosses the point $x=13.5 d_{i}$ and $y=21.5 d_{i}$ in the $xy$-plane. We then apply a coordinate transformation from the reference frame (RF) ($x,y,z$) to a new RF ($r,p,a$) aligned with the main axis of the current-sheet. The unit vectors of this RF are ($\uvec{r},\uvec{p},\uvec{a}$). In this RF, $\uvec{a}$ is anti-parallel to the main axis of the current-sheet, $\uvec{p}$ is an arbitrary vector in the plane perpendicular to $\uvec{a}$, and $\uvec{r}$ is the vector that completes the right-handed coordinate system. Since the components $r$ and $p$ are in the plane perpendicular to the current structure, we denote them as the in-plane components. {In the following analysis we use the RF ($r,p,a$) and select a cubic region of size 10 $d_{i}$. Although the event is three-dimensional and the magnetic field lines extend in three dimensions, we select a 2D cut of the cubic region in the $rp$-plane similar to the green square in panel c) of Figure \ref{fig:three_composition}. This 2D cut is representative of the reconnection event as we show in Section \ref{sec:Agirotropy}.} Panel a) in Figure \ref{fig:Ba_uea_uia_2000} shows the magnetic-field magnitude in the region of interest normalized to $B_{0}$. {The black contours represent the in-plane magnetic-field lines, which we compute by creating an array of seed points placed on the vertices of a square grid in the $rp$-plane. Then, we use the in-plane magnetic-field vectors to create the streamlines. We propagate the numerical integration in both directions: forwards and backwards.} Panel b) in Figure \ref{fig:Ba_uea_uia_2000} {shows $(B_{a}-B_{a,0})/B_{0}$, where $B_{a}$ is the out-of-plane component of the magnetic field and $B_{a,0}=\vec{B}_{0}\cdot\uvec{a}$ is the projection of $B_{0}$ on the $a$-direction.
We subtract $B_{a,0}$ to improve the visibility of the multipolar configuration of this component.} The black arrows in this panel represent the in-plane magnetic-field vectors $\vec{B}_{rp} = B_{r} \uvec{r} + B_{p} \uvec{p}$. In order for reconnection to occur, the in-plane components of the magnetic fields of the reconnecting structures must have {different} directions. {The plotted in-plane magnetic-field lines suggest the presence of effective separatrices between regions of opposite $\vec{B}_{rp}$ within the black square.} The in-plane magnetic-field lines in panel a), along with the direction of the in-plane magnetic-field vectors, suggest the presence of two x-points, which we mark with two black stars, one located at {$r=5.58d_{i}$} and $p= 6.6d_{i}$ and the other at {$r=6.5d_{i}$} and $p=6.2d_{i}$. {We establish the position of the x-points by identifying the saddle points of the in-plane magnetic field.} The magnetic configuration is complex, and the black square outlines the central region in which the reconnection occurs. {Within this region, the magnetic field is non-uniform. The sub-region where $|\vec{B}| \approx 0$ represents a null region.} From now on, we refer to the region enclosing the x-points as the diffusion region. Since transverse 2D cuts of 3D magnetic flux-ropes resemble the geometry of magnetic islands, we refer to the magnetic-field lines that are quasi-circular in panel a) as magnetic islands. Panel c) shows the out-of-plane component of the electron velocity $u_{a,e}$ normalized to the ion Alfv\'en speed $v_{A,i}$. The red color indicates electrons moving out of the plane whereas the blue color indicates electrons moving into the plane. The black arrows of this panel represent the in-plane electron velocity vectors $\vec{u}_{rp,e} = u_{r,e} \uvec{r} + u_{p,e} \uvec{p}$. Within the region of interest, there are counter-streaming electrons following the separatrices.
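The saddle-point criterion used to establish the x-points can be illustrated on the Jacobian of the in-plane field: real eigenvalues of opposite sign mark an x-point (saddle), while a complex-conjugate pair marks an o-point around which field lines circulate. A toy sketch with synthetic fields (not simulation data):

```python
import numpy as np

# Sketch: classify an in-plane magnetic null as an x-point (saddle) or
# o-point (island center) from the eigenvalues of the Jacobian of (B_r, B_p).
def classify_null(jac, tol=1e-12):
    eig = np.linalg.eigvals(jac)
    if np.all(np.abs(eig.imag) < tol) and eig.real[0] * eig.real[1] < 0:
        return "x-point"  # real eigenvalues of opposite sign: saddle
    return "o-point"      # complex pair: field lines circulate

# B_r = p, B_p = r has a saddle (x-point) at the origin
assert classify_null(np.array([[0.0, 1.0], [1.0, 0.0]])) == "x-point"
# B_r = -p, B_p = r circulates around the origin (o-point)
assert classify_null(np.array([[0.0, -1.0], [1.0, 0.0]])) == "o-point"
```

In practice the Jacobian at a candidate null would be estimated with the same finite-difference stencils used for the other spatial derivatives.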
Likewise, we locate electrons streaming out of the plane through the diffusion region. Within the magnetic islands, the electrons stream into the plane. In most of the magnetic islands, the electrons follow quasi-circular orbits due to their magnetization. However, in the magnetic island centered at $r=4.5d_{i}$ and $p=2.2d_{i}$, the electrons demagnetize and traverse the magnetic island, connecting with the stream of electrons at the edge of the island. Panel d) shows the out-of-plane component of the ion velocity $u_{a,i}$ normalized to $v_{A,i}$. The black arrows in this panel represent the in-plane ion velocity vectors $\vec{u}_{rp,i} = u_{r,i} \uvec{r} + u_{p,i} \uvec{p}$. Within the diffusion region, the out-of-plane ion velocity is small, which suggests that the ion motion is mostly constrained to the plane. The in-plane motion, however, is considerable, and the ions move across the separatrices since they are demagnetized. \subsection{Particle agyrotropy in the diffusion region} \label{sec:Agirotropy} During the reconnection of magnetic flux-ropes, the plasma expansion/contraction is not isotropic. Therefore, at kinetic scales the plasma pressure of each species can develop anisotropy and agyrotropy. Figure \ref{fig:pepi_PI} shows our pressure terms according to Eqs. (\ref{eqn:pi_pressure_d}) and (\ref{eqn:PI_pressure_of}) for electrons and ions, normalized to the initial ion pressure $p_{0}=n_{0}m_{i}v_{A,i}^{2}$. Panels a) and e) show the isotropic scalar pressure for electrons $p_{e}$ and ions $p_{i}$, respectively. For both species, the scalar pressure is greater inside the magnetic islands than outside due to the large density of particles (not shown here). Likewise, $p_{e}$ and $p_{i}$ display gradients along and across the separatrices. We find that $p_{e}$ is lower in the region between the magnetic islands as well as between the x-points compared to inside the magnetic islands.
Panels b), c) and d) of Figure \ref{fig:pepi_PI} show the off-diagonal components of the electron pressure tensor according to Eq.~(\ref{eqn:PI_pressure_of}), $\Pi_{ra,e}$, $\Pi_{pa,e}$, and $\Pi_{pr,e}$, respectively. Here we introduce the notation $\langle ... \rangle$ for the spatial average of a quantity over a given domain. The spatial averages of $|\Pi_{ra,e}|$ and $|\Pi_{pa,e}|$ over the sub-domain, $\langle |\Pi_{ra,e}| \rangle$ and $\langle |\Pi_{pa,e}| \rangle$, are about 10$\%$ of $\langle p_{e} \rangle$. $\Pi_{ra,e}$ and $\Pi_{pa,e}$ exhibit a strong dipole-like configuration centered on the magnetic islands. There is a shallower, yet visible, gradient in $\Pi_{ra,e}$, $\Pi_{pa,e}$ and $\Pi_{pr,e}$ in the region between the islands as well as in the diffusion region. Conversely, $\Pi_{pr,e}$ exhibits a quadrupolar configuration within the magnetic islands. The non-zero values of $\Pi_{ra,e}$, $\Pi_{pa,e}$ and $\Pi_{pr,e}$ show that the plasma is agyrotropic, suggesting that small-scale kinetic processes occur. Similar patterns are reported along the separatrices of 2D collisionless reconnection \citep{yin2001hybrid,schmitz2006kinetic,wang2015comparison} and laminar 3D collisionless reconnection \citep{wang2018electron}. However, unlike previous studies, we observe the same patterns within the magnetic islands of turbulent 3D magnetic reconnection. This is a fundamental difference between reconnection that occurs in turbulence and steady-state reconnection that occurs in Harris current-sheet configurations. Panels f), g) and h) of Figure \ref{fig:pepi_PI} show the off-diagonal components of the ion pressure tensor according to Eq. (\ref{eqn:PI_pressure_of}), $\Pi_{ra,i}$, $\Pi_{pa,i}$ and $\Pi_{pr,i}$, respectively. The off-diagonal terms for the ions, unlike those for the electrons, have a less coherent pattern attached to the in-plane magnetic-field topology. The reason for this detachment lies in the demagnetization of the ions at these scales.
Nevertheless, there is a gradient of these terms suggesting agyrotropy effects in the ion dynamics as well. Figure \ref{fig:pe_PI_12} shows a magnification of the region enclosed by the black square in Figure \ref{fig:Ba_uea_uia_2000}. Panels a) to d) show a magnification of the electron pressure terms shown in panels a) to d) of Figure \ref{fig:pepi_PI}. To make a direct comparison with previous 2D studies, panels e) to h) show sketches summarizing known patterns of the electron pressure components that emerge from 2D collisionless reconnection in the absence of a guide field \citep{yin2001hybrid,schmitz2006kinetic,wang2015comparison}. In this region, unlike within the magnetic islands of Figure \ref{fig:pepi_PI}, the electron pressure patterns in our simulation match the sketches in panels e) to h) in the region where the magnetic field has a local minimum according to panel a) in Figure \ref{fig:three_composition}. However, below the x-point located at {$r=5.58 d_{i}$} and $p=6.6d_{i}$, the pattern no longer corresponds to the sketched expectations. Moreover, $\Pi_{pr,e}$ is less coherent, and we do not recognize a clear quadrupolar configuration as in the sketches for the 2D case. {Figure~\ref{fig:pei_PI} shows a 3D representation of the pressure components for electrons in panels a) through d), and for ions in panels e) through h). The 2D cut at $a=2.76d_{i}$ corresponds to the 2D cut in Figure~\ref{fig:pepi_PI}. The plotted 3D structures are isosurfaces of the pressure components depicted on the 2D planes. For a given quantity $\psi$, the value of the isosurfaces corresponds to $S_{\psi} = \pm (\langle |\psi| \rangle + 2\sigma_{|\psi|})$, where $\sigma_{|\psi|}$ is the standard deviation of $|\psi|$. The isosurfaces in panels a) through h) have the shape of elongated and thin surfaces with local curvatures along the $a$-axis.
The agyrotropic patterns in Figure~\ref{fig:pepi_PI} extend for $\sim 5d_{i}$ along the ${a}$-axis.} \subsection{Energy transfer and transport} \label{sec:energy_distri} We use the power density expressions for the kinetic energy in Eq. (\ref{eqn:firstmomener_text}) and thermal energy in Eq. (\ref{eqn:secondmomener_text}) to describe the energy transfer and transport associated with our reconnection event. To compute the partial time derivatives of a quantity, we use a central-difference approach. Since the Alfv\'en transit time is $\sim 100$ $\omega^{-1}_{pi}$, a time resolution of $6$ $\omega_{pi}^{-1}$ is sufficient to capture the relevant dynamics of interest. To estimate the spatial derivatives, we use a standard cell-centered first-neighbors approach. We calculate all scalar products cell-wise in the simulation domain. Panels a) to e) of Figure \ref{fig:power_e} show 2D cuts of each term in Eq. (\ref{eqn:firstmomener_text}) for electrons normalized to $\Delta \varepsilon_{0}=\omega_{pi}m_{i}v_{A,i}^{2}$. Panel a) shows, at the simulation time $t=120 \omega^{-1}_{pi}$, the total time derivative of the kinetic energy density ${d\varepsilon^{k}_{e}}/{dt}$. The domain exhibits considerable temporal changes of the kinetic energy density at the centers of the magnetic islands. We also detect negative ${d\varepsilon^{k}_{e}}/{dt}$ at the edge of the top left magnetic island and positive ${d\varepsilon^{k}_{e}}/{dt}$ in the diffusion region. Conversely, there is almost no change in $\varepsilon^{k}_{e}$ in the region between the x-points. Panel b) shows the scalar product $\vec{u}_{e}\cdot \nabla \cdot \overline{\vec{P}}_{e}$, which quantifies the change of kinetic energy due to the advection of the pressure tensor. This energy change is transported by the electron flow. The quantity $\vec{u}_{e}\cdot \nabla \cdot \overline{\vec{P}}_{e}$ is also known as the pressure work \citep{fadanelli2021energy}.
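The finite-difference stencils described at the start of this subsection amount to central differences in time between consecutive snapshots and cell-centered first-neighbor differences in space. A minimal sketch (uniform spacing assumed; `np.gradient` stands in for the cell-centered stencil):

```python
import numpy as np

# Sketch of the finite-difference stencils described above (uniform grid
# spacing dx and snapshot cadence dt assumed; not the PSC internals).
def ddt_central(f_prev, f_next, dt):
    # central difference in time between the snapshots before and after
    return (f_next - f_prev) / (2.0 * dt)

def grad_central(f, dx):
    # cell-centered first-neighbor (central) differences along each axis
    return np.gradient(f, dx, edge_order=1)

# example: d/dx of f(x) = x^2 on a 1D grid; central differences are exact
# for quadratics in the interior
x = np.linspace(0.0, 1.0, 101)
df = grad_central(x**2, x[1] - x[0])
```

For a snapshot cadence of $6\,\omega_{pi}^{-1}$, `dt` in `ddt_central` would be that output interval.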
There is a strong conversion of energy associated with the pressure work at the center of the magnetic islands. However, the energy change associated with this term is around 10 times greater than the local $d\varepsilon^{k}_{e}/dt$. Unlike $d\varepsilon^{k}_{e}/dt$, at the edge of the top left magnetic island, there is a strong gradient of $\vec{u}_{e}\cdot \nabla \cdot \overline{\vec{P}}_{e}$ from the left-hand side of the magnetic island to the right-hand side. In addition, $\vec{u}_{e}\cdot \nabla \cdot \overline{\vec{P}}_{e}$ has a local minimum in the region between the x-points. Panel c) shows $\varepsilon^{k}_{e}\nabla\cdot\vec{u}_{e}$, which represents the kinetic energy change due to divergent or convergent flow patterns in the electron bulk velocity. Like the previous terms, $\varepsilon^{k}_{e}\nabla\cdot\vec{u}_{e}$ is greater at the center of the magnetic islands than in the region between them. There is no noticeable gradient of this term between the x-points. Although panels a), b) and c) show similar patterns in their signs, there are local differences, especially in the diffusion region. Panel d) shows $-q_{e}n_{e}(\vec{u}_{e}\cdot\vec{E})$, which represents the energy exchange between the electrons and the electric field. We find a considerable energy conversion not only within the magnetic islands but also in the region between the islands as well as in the region between the x-points. In the region between the x-points, the electrons gain kinetic energy from the electric field. Along the separatrix next to the top left island, the electron bulk motion is decelerated by the electric field. Comparing panels b) and d), $\vec{u}_{e}\cdot \nabla \cdot \overline{\vec{P}}_{e}$ and $-q_{e}n_{e}(\vec{u}_{e}\cdot\vec{E})$ balance each other in the diffusion region. Panel e) shows $\Xi^{k}_{e}$, which we compute as the sum of all terms on the left-hand side of Eq. (\ref{eqn:firstmomener_text}).
There are regions with positive and negative $\Xi^{k}_{e}$ within the magnetic islands. On the contrary, $\Xi^{k}_{e}$ is predominantly positive within the diffusion region and along the separatrices. Although we do not include binary collisions in our code explicitly, we acknowledge that the finite number of macroparticles affects the system in a way similar to collisions and leads to an undersampling of non-thermal fine structure in the velocity distribution function, which generates a loss of information and thus an increase in entropy. In a real plasma, binary collisions between particles drive the system to a thermal equilibrium, thus smoothing out the distribution function. Similarly, a finite number of particles represents a low number of counts with which to compute the statistical measures. Therefore, when computing macroscopic quantities, the contribution from non-thermal particles is overshadowed by the core distribution. This is effectively a coarse-graining effect, similar to the actual effect of collisions, albeit on a different time scale. Although this effect occurs earlier in PIC simulations with a finite number of particles than in the real solar wind, we conjecture that the impact is ultimately comparable. Panels f) to j) of Figure \ref{fig:power_e} show 2D cuts of each term in Eq. (\ref{eqn:secondmomener_text}) normalized to $\Delta \varepsilon_{0}$. Panel f) depicts $d\varepsilon^{th}_{e}/dt$. As in the kinetic-energy case, $d\varepsilon^{th}_{e}/dt$ has local extrema associated with the magnetic islands. The main change in $d\varepsilon^{th}_{e}/dt$ is due to the advective term $(\vec{u}_{e}\cdot \nabla) \varepsilon^{th}_{e}$. By direct comparison with panel b), we note similar power density patterns between $d\varepsilon^{th}_{e}/dt$ and $\vec{u}_{e}\cdot \nabla \cdot \overline{\vec{P}}_{e}$.
For the heat-flux term $\nabla \cdot \vec{h}_{e}$, we do not directly compute $\nabla \cdot \vec{h}_{e}$ as a particle moment, but use a Hammett--Perkins approach \citep{hammett1990fluid} to estimate its contribution. This approach has been successfully applied in previous collisionless reconnection studies \citep{wang2015comparison,ng2015island,ng2017simulations}. In this framework, we estimate \begin{equation} \nabla \cdot \vec{h}_{e} \approx v^{th}_{e} \frac{1}{2} |k_{0}|\mathrm{Tr}\left[P_{ij,e} - \langle P_{ij,e}\rangle - \delta_{ij} (n_{e} - \langle n_{e} \rangle)\langle {T}_{e}\rangle\right], \label{eqn:divh_HP} \end{equation} where $v^{th}_{e} = \sqrt{2 k_{B}T_{e}/m_{e}}$ is the thermal speed of the electrons. The wave number $k_{0} = \sqrt{3}/|L_{s}|$ is a representative wave number associated with a sub-domain of volume $V_{s}=L_{s}^{3}$, where $L_{s}=10.08$ $d_{i}$, which we select as the region to study the energy conversion during the reconnection event. Panel g) shows our estimation of $\nabla \cdot \vec{h}_{e}$. There is a positive power density contribution from the particle heat flux inside the magnetic islands. Conversely, there is a negative contribution in the regions between the magnetic islands. Panel h) depicts the energy transfer $\nabla \vec{u}_{e}:\tensndd{P}_{e}$ between kinetic and thermal energies. This term has contributions from the diagonal elements of the tensors, which are associated with the isotropic energy transport, and from the off-diagonal elements, which quantify the agyrotropy in the plasma. There is positive $\nabla \vec{u}_{e}:\tensndd{P}_{e}$ in the region between the magnetic islands, which is associated with counter-streaming electrons. We locate an x-like structure centered in the region where the magnetic field strength exhibits a local minimum. In the region between the x-points as well as to the left of the diffusion region, $\nabla \vec{u}_{e}:\overline{\vec{P}}_{e}<0$.
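A sketch of how Eq. (\ref{eqn:divh_HP}) can be evaluated on gridded moments follows (synthetic arrays; $k_{B} = m_{e} = 1$ units assumed). For an isotropic pressure $P = n T\,\delta_{ij}$ with uniform $T$, the bracket vanishes identically, which serves as a sanity check:

```python
import numpy as np

# Sketch of the Hammett-Perkins estimate of div(h_e) on gridded moments.
# Units with k_B = m_e = 1; field shapes and values are illustrative.
def div_h_hp(P, n, T, v_th, k0):
    """P: (..., 3, 3) pressure tensor field; n: density; T: temperature."""
    P_mean = P.mean(axis=tuple(range(P.ndim - 2)), keepdims=True)
    dn = n - n.mean()
    # trace of the bracket: Tr[P - <P>] - 3 (n - <n>) <T>
    bracket_tr = np.trace(P - P_mean, axis1=-2, axis2=-1) - 3.0 * dn * T.mean()
    return 0.5 * v_th * abs(k0) * bracket_tr

# toy 2D field with isotropic pressure P = n T I and uniform T
n = 1.0 + 0.1 * np.random.default_rng(1).normal(size=(8, 8))
T = np.ones_like(n)
P = n[..., None, None] * np.eye(3)
L_s = 10.08
k0 = np.sqrt(3.0) / L_s
v_th = np.sqrt(2.0 * T.mean())   # v_th = sqrt(2 k_B T / m) with m = 1
divh = div_h_hp(P, n, T, v_th, k0)
```

For this isotropic, isothermal field the estimate is zero everywhere; anisotropic or agyrotropic pressure perturbations produce a non-zero contribution.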
Panel i) shows the thermal energy transport $\varepsilon^{th}_{e} \nabla \cdot \vec{u}_{e}$ associated with the compression/expansion of the electron flow. At first glance, the positive/negative patterns in $\varepsilon^{th}_{e} \nabla \cdot \vec{u}_{e}$ seem very similar to the patterns in $\nabla \vec{u}_{e}:\overline{\vec{P}}_{e}$. The reason for this similarity is that the main energy transport in $\nabla \vec{u}_{e}:\overline{\vec{P}}_{e}$ is associated with the contribution of the diagonal elements, as we show in Section \ref{sec:about_dissi}. However, we find local differences due to the agyrotropic contributions. Of all the terms on the left-hand sides of Eqs. (\ref{eqn:firstmomener_text}) and (\ref{eqn:secondmomener_text}), only the terms associated with the strain tensor present an extended asymmetric x-point-like structure in the diffusion region. Comparing panels c) and i), $\varepsilon^{th}_{e}\nabla \cdot \vec{u}_{e}$ is on average larger and forms broader structures than $\varepsilon^{k}_{e}\nabla \cdot \vec{u}_{e}$. Panel j) shows $\Xi^{th}_{e}$, which we compute as the sum of all terms on the left-hand side of Eq. (\ref{eqn:secondmomener_text}). This energy transfer is significant as the different terms on the left-hand side of Eq. (\ref{eqn:secondmomener_text}) do not sum to zero. In Figure \ref{fig:power_ei_1D}, we show vertical 1D cuts of the power density terms along the $p$-direction at $r = 5.58$ $d_{i}$ to visualize the relation between the different terms for the plasma electrons. We further show a magnification of the region delimited by the black square from Figure \ref{fig:Ba_uea_uia_2000}. Panel a) shows the kinetic power density terms in Eq. (\ref{eqn:firstmomener_text}).
We observe that the fluctuations in $d\varepsilon^{k}_{e}/dt$ (green line) and $\varepsilon^{k}_{e}\nabla \cdot \vec{u}_{e}$ (black line) are negligible compared with $\vec{u}_{e}\cdot\nabla\cdot\overline{\vec{P}}_{e}$ (red line) and $-q_{e}n_{e}\vec{E}\cdot\vec{u}_{e}$ (yellow line). However, there is a noticeable disturbance in all quantities in the range $p = 6.96$ $d_{i}$ to $p = 7.84$ $d_{i}$, which lies in the part of the diffusion region where the magnetic field is nearly zero. Along the 1D cut, $\vec{u}_{e}\cdot\nabla\cdot\overline{\vec{P}}_{e}$ and $-q_{e}n_{e}\vec{E}\cdot\vec{u}_{e}$ are anti-correlated. This anti-correlation breaks when the disturbances in $d\varepsilon^{k}_{e}/dt$ and $\varepsilon^{k}_{e}\nabla \cdot \vec{u}_{e}$ occur. In this panel, the curve of $\Xi^{k}_{e}$ (blue line) changes sign when crossing the x-point. Panel b) shows the thermal power density terms in Eq. (\ref{eqn:secondmomener_text}). Comparing panels a) and b), we observe that the fluctuations in the thermal power density terms are more pronounced than those in the kinetic power density terms. In panel b), the fluctuations in $d\varepsilon^{th}_{e}/dt$ (green line) and $\nabla \cdot \vec{h}_{e}$ (red line) are negligible compared with $\nabla \vec{u}_{e}:\overline{\vec{P}}_{e}$ (black line) and $\varepsilon^{th}_{e}\nabla \cdot \vec{u}_{e}$ (yellow line). Unlike in the kinetic power density case, the contributions from all terms in Eq. (\ref{eqn:secondmomener_text}) are either positive or negative at the same location, showing no anti-correlation between the dominant terms. We note that $\Xi^{th}_{e}$ (blue line), unlike $\Xi^{k}_{e}$, is positive on both sides of the x-point. {Figure \ref{fig:power3d} shows a 3D representation of the kinetic power density terms in panels a) through d), and of the thermal power density terms in panels e) through h). The 2D cut at $a=2.76d_{i}$ corresponds to the 2D cuts in Figure~\ref{fig:power_e}.
The plotted 3D structures are isosurfaces of the power density terms depicted on the 2D planes. Panels a) through d) show that the isosurfaces of ${d\varepsilon^{k}_{e}}/{dt}$, $\vec{u}_{e}\cdot\nabla\cdot \overline{\vec{P}}_{e}$, and $\varepsilon^{k}_{e}\nabla\cdot\vec{u}_{e}$ are mostly thin filaments, whereas the isosurfaces of $-q_{e}n_{e}\vec{u}_{e}\cdot\vec{E}$ consist of broad patches. Moreover, there are more regions with $-q_{e}n_{e}\vec{u}_{e}\cdot\vec{E}>0$ than with $-q_{e}n_{e}\vec{u}_{e}\cdot\vec{E}<0$. Panels e) and f) show that the isosurfaces of ${d\varepsilon^{th}_{e}}/{dt}$ and $\nabla \cdot \vec{h}_{e}$ are also filamentary. Conversely, panels g) and h) show that the isosurfaces of $\nabla\vec{u}_{e}:\overline{\vec{P}}_{e}$ and $\varepsilon^{th}_{e}\nabla \cdot \vec{u}_{e}$ are mainly thin sheets.} {Figure \ref{fig:xikinther_e} depicts isosurfaces of $\Xi^{k}_{e}$ in panel a) and $\Xi^{th}_{e}$ in panel b). The most evident isosurface of $\Xi^{k}_{e}$ is a filament located within the reconnecting flux-rope. Conversely, the isosurfaces of $\Xi^{th}_{e}$ are mostly thin sheets connected to the flux-ropes.} \subsection{Time evolution} PIC simulations are affected by finite particle size, a finite number of particles, and numerical integration errors that are effectively ``collisional'' contributions since they generate phase-space particle diffusion \citep{hockney1971measurements,dawson1983particle,klimontovich2013statistical,birdsall2018plasma,grovselj2019fully}. Although the right-hand sides of the power-density relations in Eqs.~(\ref{eqn:firstmomener_text}) and (\ref{eqn:secondmomener_text}) include the contribution from numerical sources such as round-off errors and numerical heating, they also include contributions from the averaged, secular (including quasi-linear) correlations between the fields and the particle distribution functions \citep{klein2016measuring,howes2017diagnosing,klein2017diagnosing}.
As shown by the field--particle correlation method, meaningful averages of the non-linear correlations between the fluctuating electric field and the fluctuating perturbation of the distribution function define the secular transfer of energy from the fields to the particles. Thus, even in a purely collisionless plasma, the right-hand sides of Eqs.~(\ref{eqn:firstmomener_text}) and (\ref{eqn:secondmomener_text}), after suitable averaging, are not exactly zero. In this interpretation, the averaging over higher-order field--particle correlations introduces the irreversibility and thus the dissipation into the kinetic description. All PIC systems share this behavior with statistical particle systems in reality. In this section, we present a time-evolution analysis of the energy-density terms in order to estimate the nature of $\Xi^{k}_{s}$ and $\Xi^{th}_{s}$. Figure~\ref{fig:time_evolution}a) shows the time evolution of the energy densities averaged over the full simulation domain (solid curves, subscript $full$) and averaged over the sub-domain (dashed curves, subscript $sub$). The curves are normalized to $\varepsilon_{0}=m_{i}v_{A,i}^{2}$. The total energy density is $\varepsilon^{T}=\varepsilon^{k}_{e}+\varepsilon^{k}_{i}+\varepsilon^{th}_{e}+\varepsilon^{th}_{i}+\varepsilon^{em}$. The averaged total energy densities $\langle \varepsilon^{T}_{full}\rangle$ and $\langle \varepsilon^{T}_{sub}\rangle$ (green curves) remain approximately constant. This suggests that numerical heating is negligible in our energy balance. The thermal energy densities of both the ions (black curves) and the electrons (magenta curves) are greater than the kinetic energy densities of both the ions (yellow curves) and the electrons (red curves). When averaged over the sub-domain, the energy densities present more variability due to energy density flowing into and out of the sub-domain through its boundaries.
Nevertheless, the time evolution of the quantities $\varepsilon_{full}$ and $\varepsilon_{sub}$ is approximately comparable. Figure \ref{fig:time_evolution}b) depicts the time evolution of the absolute values of the energy-density rates $\Delta \langle \varepsilon \rangle / \Delta t$ (dashed curves) and the dissipative power-density rates $\Xi$ (solid-dotted curves), now averaged over the full simulation domain and normalized to $\varepsilon^{T}_{full}$. The time difference $\Delta t = 6\omega^{-1}_{pi}$ is the difference between two consecutive output times of our simulation. As shown in panel b), in the case of the ions, the thermal energy-density rate (black dashed curve) and the kinetic energy-density rate (yellow dashed curve) are greater than the dissipative power-density terms $\Xi^{th}_{i}$ (black solid-dotted curve) and $\Xi^{k}_{i}$ (yellow solid-dotted curve). The same ordering applies to the electron case, in which the thermal energy-density rate (magenta dashed curve) and the kinetic energy-density rate (red dashed curve) are greater than $\Xi^{k}_{e}$ (red solid-dotted curve) and $\Xi^{th}_{e}$ (magenta solid-dotted curve). For both species, $\Delta \varepsilon^{th}$ increases faster than $\Delta \varepsilon^{k}$. During the initial phase of the simulation ($t\omega_{pi} \lesssim 100$), we find that $\langle \Xi^{k}_{e}\rangle > \langle \Xi^{th}_{e} \rangle$ when averaged over the full simulation domain. Afterwards, for $t\omega_{pi} \gtrsim 100$, we find that $\langle \Xi^{k}_{e}\rangle < \langle \Xi^{th}_{e} \rangle$ when averaged over the full simulation domain until the simulation ends. The time $t\omega_{pi} \approx 100$ corresponds to the moment at which the overall $J^{rms}$ reaches its global maximum in our simulation and significant magnetic reconnection sets in \citep{agudelo2021three}. The total energy-density rate $\Delta \varepsilon^{T}_{full}$ (green dashed curve) is lower than $\Delta \varepsilon^{k}$ and $\Delta \varepsilon^{th}$ for both species.
Moreover, $\langle \Xi^{th}_{e} \rangle$ and $\langle \Xi^{k}_{e} \rangle$ are negligible compared with the kinetic and thermal energy-density rates. This suggests that any irreversible energy transfer and thus numerical heating are negligible for the energy balance in our simulation. However, $\Xi^{th}_{e}$ and $\Xi^{k}_{e}$ are locally important near the reconnection region, see Figure \ref{fig:power_ei_1D}. \subsection{Comparison with damping and heating proxies} \label{sec:about_dissi} In recent studies \citep{yang2017energy,pezzi2019energy,matthaeus2020pathways,pezzi2021dissipation}, the collisionless energy dissipation problem is tackled by studying quantities such as the Zenitani parameter defined in Eq. (\ref{eqn:zenitani}) and the strain-pressure interaction defined in Eq. (\ref{eqn:double_duP}). We also explore these damping and heating proxies for comparison with our methods. Figure \ref{fig:D_pepi_PI_22} depicts 2D cuts in the $rp$-plane and 1D cuts of these damping and heating proxies. Panel a) shows $D_{ze}$. Similar to our kinetic and thermal power density terms, the magnetic islands present strong variations of $D_{ze}$. On the contrary, in the diffusion region, we see a coherent positive $D_{ze}$ signature. Panel b) shows $p\theta_{e}$. The positive/negative patterns of this quantity are almost identical to our patterns of $\nabla \vec{u}_{e}:\overline{\vec{P}}_{e}$ (panel h of Figure \ref{fig:power_e}). This similarity illustrates that the main contribution to the strain-tensor interaction comes from the diagonal elements of the strain tensor. Panel c) shows $\Pi \mh D_{e}$. Although the positive/negative patterns in $\Pi \mh D_{e}$ are similar to those in $p\theta_{e}$, $\Pi \mh D_{e}$ presents clear differences, especially near the null region where $\Pi \mh D_{e}$ has the opposite sign of $p\theta_{e}$ along the separatrices.
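For reference, the two proxy families compared in this subsection can be written compactly. The sketch below assumes the common nonrelativistic form of the Zenitani parameter, $\vec{J}\cdot(\vec{E}+\vec{u}_{s}\times\vec{B})-\rho_{c}(\vec{u}_{s}\cdot\vec{E})$ (our Eq. (\ref{eqn:zenitani}) is not reproduced here), together with the $p\theta$ and $\Pi \mh D$ split of Eq. (\ref{eqn:double_duP}):

```python
import numpy as np

# Sketch of the damping/heating proxies at a single grid cell. The Zenitani
# form below is the standard nonrelativistic expression, assumed here.
def zenitani(J, E, B, u, rho_c):
    # frame-corrected work of the electric field on the current, minus the
    # convective charge-density contribution
    return np.dot(J, E + np.cross(u, B)) - rho_c * np.dot(u, E)

def ptheta_and_PiD(P, gradu):
    # split grad(u):P into p*theta (compressive) and Pi-D (agyrotropic/shear)
    p = np.trace(P) / 3.0
    theta = np.trace(gradu)
    Pi = P - p * np.eye(3)
    D = 0.5 * (gradu + gradu.T) - theta * np.eye(3) / 3.0
    return p * theta, np.sum(Pi * D)

# example: with u = 0 the Zenitani proxy reduces to J.E
Dz = zenitani(np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]),
              np.array([0.0, 0.0, 1.0]), np.zeros(3), rho_c=0.5)
```

Evaluating these two functions cell-wise over the 2D cut yields maps directly comparable to panels a) through c).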
Moreover, along the separatrices, $|\Pi \mh D_{e}| > D_{ze}$ and they share the same sign, whereas in the region between the x-points $\Pi \mh D_{e} < 0$ and $D_{ze}>0$. Panel d) shows 1D cuts of $D_{ze}$ (blue line), $p\theta_{e}$ (red line) and $\Pi \mh D_{e}$ (black line). We find that $p\theta_{e}$ is highly variable and, on average, greater than $D_{ze}$ and $\Pi \mh D_{e}$. This differs considerably from the Harris current-sheet case \citep{pezzi2021dissipation}, in which $D_{ze}$ is the dominant energy-transfer proxy. However, this behavior is consistent with turbulent simulations \citep{pezzi2021dissipation} and with observations of turbulent reconnection \citep{bandyopadhyay2021energy}. \section{Discussion} \label{sec:discussion} The type of magnetic reconnection that arises from a turbulent cascade \citep{servidio2010statistics, loureiro2020nonlinear,fadanelli2021energy,agudelo2021three} presents a more complex geometry of the diffusion region compared to its laminar counterpart. Likewise, the geometry of the regions with enhanced energy transport and transfer is more complex. Moreover, in a 3D geometry, the particle motion along the out-of-plane direction allows energy transfer that a 2D geometry precludes. For instance, in 2D reconnection the agyrotropic patterns are located in the diffusion region outside the magnetic islands \citep{scudder2008illuminating}. Conversely, in our 3D case we observe agyrotropic patterns in the cross sections of the flux-ropes, which we call magnetic islands. Since the plasma density is greater in the centers of the magnetic islands, these regions exhibit a greater plasma pressure compared to outside the islands. Patterns of agyrotropic plasma pressure are present not only within the magnetic islands but also in the regions between them (Figure \ref{fig:pepi_PI}). The non-uniform guide magnetic field present in this reconnection event affects its geometry.
Despite the 3D nature of this event, for the diffusion region in which {$|\vec{B}| \leq 0.4B_{0}$}, we observe gyrotropic/agyrotropic patterns (section \ref{sec:Agirotropy}) similar to those observed in 2D laminar reconnection without guide field \citep{yin2001hybrid}. However, given the complex geometry of our event, we do not observe gyrotropic/agyrotropic patterns matching 2D reconnection in the part of the diffusion region below the x-points. Moreover, we do not observe a quadrupolar pattern of the in-plane component $\Pi_{pr,e}$ (Figure \ref{fig:pe_PI_12}d) within the diffusion region, which is characteristic of agyrotropy in 2D reconnection without guide field \citep{yin2001hybrid}. In the reconnection event that we analyze, although of turbulent nature, the out-of-plane electron motion is consistent with the 3D shape of electron diffusion regions observed in laboratory plasmas \citep{furno2005coalescence,yoo2013observation,yamada2014conversion}. In our event, ${d\varepsilon^{k}_{e}}/{dt}>0$ along the separatrices and ${d\varepsilon^{k}_{e}}/{dt}<0$ in the outer part of the reconnecting magnetic island (Figure \ref{fig:power_e}a). This corresponds to the acceleration of electrons along the separatrices (Figure \ref{fig:power_e}c) and the presence of a stagnation region. The shear between the flux ropes increases the electron thermal energy and pressure, while the bulk kinetic energy decreases at the stagnation point. At the locations of the separatrices, $\vec{u}_{e}\cdot \nabla \cdot \overline{\vec{P}}_{e}>0$ (Figure \ref{fig:power_e}b). This suggests electron streams that increase the electron pressure. Conversely, $\vec{u}_{e}\cdot \nabla \cdot \overline{\vec{P}}_{e}<0$ in the region between the x-points. This suggests electron streams that reduce the electron pressure and push the plasma within the diffusion region towards a local thermal equilibrium. 
While reconnection is occurring, high-pressure electrons fill in the diffusion region. Within the diffusion region, the electric field increases the electron kinetic energy density, and the work done by the electric field on the electrons, $-q_{e}n_{e}(\vec{u}_{e}\cdot\vec{E})$, partially balances with the advection of the electron pressure. This is consistent with previous studies \citep{fadanelli2021energy}. The irreversible electron energy-density change $\Xi^{k}_{e}$ (Figure \ref{fig:power_e}e) is non-zero everywhere in the vicinity of the reconnecting structures. The quantity $\Xi^{k}_{e}$ displays structures with positive and negative values within the magnetic islands, suggesting that collisional processes accelerate and decelerate electron bulk flows within the magnetic islands. Conversely, in the diffusion region, the positive value of $\Xi^{k}_{e}$ indicates that electrons are irreversibly accelerated. Unlike previous studies of turbulent reconnection \citep{fadanelli2021energy}, we estimate the electron thermal energy transfer associated with each term of Eq. (\ref{eqn:secondmomener_text}). Compared to the kinetic power density, the thermal power-density terms present stronger fluctuations. This is evident when comparing ${d\varepsilon^{k}_{e}}/{dt}$ (Figure \ref{fig:power_e}a) and ${d\varepsilon^{th}_{e}}/{dt}$ (Figure \ref{fig:power_e}f) as well as when comparing $\varepsilon^{k}_{e}\nabla\cdot\vec{u}_{e}$ (Figure \ref{fig:power_e}c) and $\varepsilon^{th}_{e}\nabla\cdot\vec{u}_{e}$ (Figure \ref{fig:power_e}i). This difference suggests that the electron bulk flows transport thermal energy density more efficiently than bulk kinetic energy density. The power-density terms associated with the compression/expansion of the flow, $\nabla \vec{u}_{e}:\overline{\vec{P}}_{s}$ and $\varepsilon^{th}_{e}\nabla\cdot\vec{u}_{e}$, exhibit a strong coherence with the electron motion along the reconnection separatrices. 
The electron streams gain thermal energy (i.e., heating) associated with the reconnection. This is consistent with simulations of fast collisionless reconnection at low $\beta$ \citep{loureiro2013fast} and observations of magnetospheric reconnection \citep{chasapis2017electron,holmes2019structure}. The most important contribution to $\nabla \vec{u}_{e}:\overline{\vec{P}}_{s}$ comes from the isotropic part of the strain pressure term. Correspondingly, $\varepsilon^{th}_{e}\nabla\cdot\vec{u}_{e}$ presents patterns similar to $\nabla \vec{u}_{e}:\overline{\vec{P}}_{s}$. Moreover, the contribution of the off-diagonal elements in $\nabla \vec{u}_{e}$ and $\overline{\vec{P}}_{e}$ to the thermal energy transport is less than the isotropic contribution, which is consistent with previous studies of turbulent reconnection \citep{fadanelli2021energy,bandyopadhyay2021energy}. The terms associated with electron compressibility ($\varepsilon^{th}_{e}\nabla\cdot\vec{u}_{e}$ and $\nabla \vec{u}_{e}:\overline{\vec{P}}_{s}$) are typically greater than the heat flux contribution ($\nabla \cdot \vec{h}_{s}$) suggesting that, for collisionless reconnection, compressible thermal energy-density transport is important for electrons. {On average, within the sub-domain, the electrons gain kinetic energy at the expense of the electric field (Figure~\ref{fig:power3d}d). The electrons both lose and gain thermal energy (Figures~\ref{fig:power3d}g and \ref{fig:power3d}h) predominantly along thin sheet-like structures.} Similar to the irreversible kinetic energy-density transfer $\Xi^{k}_{e}$, the irreversible thermal energy transfer $\Xi^{th}_{e}$ is non-zero within the reconnecting structures as well as within the diffusion region. Moreover, electrons irreversibly gain thermal energy density at the location of the separatrices and within the diffusion region. {The irreversible kinetic energy-density transfer is mainly confined to the flux-ropes in our simulation (Figure~\ref{fig:xikinther_e}a). 
Conversely, the irreversible thermal energy-density transfer (Figure~\ref{fig:xikinther_e}b) occurs in thin sheet-like structures that extend for over $5d_{i}$.} Although $\langle \Xi^{k}_{e} \rangle$ is negligible compared to $\Delta \varepsilon^{k}_{e}$ (Figure~\ref{fig:time_evolution}b), the fact that $\Xi^{k}_{e}$ is comparable to $q_{e}n_{e}\vec{E}\cdot \vec{u}_{e}$ and $\vec{u}_{e} \cdot \nabla \cdot \overline{\vec{P}}_{e}$ (Figure~\ref{fig:power_ei_1D}a) implies that $\Xi^{k}_{e}$ must be considered in the local kinetic energy transfer of electrons, as it includes important information about the oscillating energy associated with instantaneous field--particle correlations \citep{klein2016measuring,howes2017diagnosing,klein2017diagnosing}. Only meaningful averages of the non-linear correlations between the fluctuating electric field and the fluctuating perturbation of the distribution function define the secular transfer of energy from the fields to the particles. Therefore, we propose that an energy-balance analysis based on the energy-density expressions derived from the collisionless Vlasov equation is not entirely accurate for kinetic simulations. {Because numerical effects in kinetic simulations act as an effective collision operator, the energy-balance equations derived from the Vlasov equation without provision for the terms on the right-hand side of Eq. (\ref{eqn:boltz}) are not exactly satisfied.} Comparing our results with damping ($D_{ze}$) and heating ($p\theta_{e}$ and $Pi \mh D_{e}$) proxies \citep{pezzi2021dissipation}, we observe that fluctuations of $p\theta_{e}$ inside the diffusion region (Figure \ref{fig:D_pepi_PI_22}d) are typically greater than fluctuations of $D_{ze}$ and $Pi \mh D_{e}$. {Integrating over the sub-domain (not shown here), we find that $p\theta_{e}/\Delta\varepsilon_{0}|_{V} > |Pi \mh D_{e}|/\Delta\varepsilon_{0}|_{V}$. 
This suggests that, within the sub-domain, the electron heating is mostly due to compressive effects.} This is consistent with results from turbulent simulations \citep{pezzi2021dissipation} and observations of turbulent reconnection \citep{bandyopadhyay2021energy}, but not with results from simulations of laminar reconnection. The proxies $p\theta_{e}$ and $Pi \mh D_{e}$ share the same signs at most locations in our simulation domain. However, in the diffusion region near the null region, the opposite sign of $Pi \mh D_{e}$ and $p\theta_{e}$ suggests that agyrotropic heating mechanisms can emerge to compensate for any reduction or increase in the thermal energy density due to isotropic heating mechanisms. {Moreover, integrating over the sub-domain and over time, we find that $p\theta_{e}|_{V,t}=0.0137$ and $Pi \mh D_{e}|_{V,t}=-0.0190$. This suggests that $p\theta$ is greater than $Pi\mh D$ within the diffusion region {at the particular time selected but not throughout the whole simulation}, due to a local effect.} The positive values of $D_{ze}$ and the negative values of $p\theta_{e}$ and $Pi \mh D_{e}$ in the region between the x-points suggest that electrons gain kinetic energy density from the fields while losing thermal energy density. {Between the x-points, the electric field accelerates electrons (Figure~\ref{fig:power_e}d). The increase in the electrons' kinetic energy density may be due to Landau damping \citep{landau1946oscillations,howes2006astrophysical,li2016energy}. Conversely, the magnetic pressure (not shown here) increases near the region between the x-points. The total pressure balance requires a depletion of $p_{e}$ and $p_{i}$ (as confirmed by Figures~\ref{fig:pepi_PI}a and \ref{fig:pepi_PI}e) in the diffusion region. Plasma pressure depletion has been suggested to be responsible for the onset of fast reconnection in collisionless plasmas \citep{liu2022first}. 
Thus, the expansion and the consequent cooling of the electrons reduce their thermal energy. } \section{Conclusions} \label{sec:conclusions} We derive a framework to quantify the collision-like effects that lead to irreversible energy transfer and thus dissipation in PIC plasmas. We identify and locate magnetic reconnection as a key mechanism for heating, damping, and dissipation in plasma turbulence in low-collisionality systems like the solar wind. Previously, the transfer and transport of energy in plasmas with low collisionality have been studied separately in simulations of reconnection \citep{hesse1998electron,hesse2001collisionless,zenitani2011new,munoz2017turbulent,pucci2018energy,pezzi2019energy,pezzi2021dissipation} and turbulence \citep{wan2012intermittent,yang2017energy,li2019collisionless,pezzi2021dissipation}. Studies of the transfer and transport in magnetic reconnection that forms from a turbulent cascade have been limited to 2D geometries \citep{parashar2009kinetic,fadanelli2021energy} and observations \citep{chasapis2018energy,bandyopadhyay2020statistics}, while the 3D case has received little attention. We study, for the first time, the energy transport associated with 3D magnetic reconnection that occurs as a consequence of a turbulent cascade, at a high level of detail and including all power-density terms resulting from the full Boltzmann equation. We extend the analysis of similar studies \citep{fadanelli2021energy} by exploring the transfer and transport of thermal energy for electrons. The energy transfer and transport in collisionless plasmas are believed to be governed by non-thermal and kinetic mechanisms such as resonant \citep{marsch2003ion, kasper2008hot} and non-resonant heating processes \citep{chandran2010perpendicular,chandran2013stochastic}. However, the irreversible energy transport is ultimately associated with collisional effects \citep{schekochihin2009astrophysical}. 
The agyrotropy signatures present in the reconnection diffusion region as well as in the reconnecting magnetic structures allow for agyrotropic energy-transfer mechanisms, such as agyrotropy-driven instabilities, to take place not only near the electron diffusion region \citep{ricci2004influence,roytershteyn2012influence,graham2017instability} but also within the reconnecting magnetic structures. {These signatures are three-dimensional as they extend in the $a$-direction for over $5d_{i}$}. A study of the instabilities that occur during a 3D turbulent reconnection event would be worthwhile to enhance our understanding of collisionless energy dissipation. We show that the contribution to the energy-density transfer from collisions is not negligible. To determine the exact source of this contribution, future work must use a large number of particles while keeping the 3D geometry. In addition, the inclusion of a controllable collision operator would allow for a detailed study of collisions in 3D reconnection \citep{pezzi2017solar,donnel2019multi,boesl2020collisional,pezzi2021dissipation}. The general framework that we introduce is suitable for estimating the irreversible energy-density transfer of the particle species in the solar wind. For instance, Eqs. (\ref{eqn:firstmomener_text}) and (\ref{eqn:secondmomener_text}) can be applied to spacecraft data to study the radial evolution of energy as a function of heliospheric distance in the solar wind. This work would be of interest both for the energetics of solar-wind electrons \citep{scime1994regulation,innocenti2020collisionless} and for solar-wind protons \citep{matteini2007evolution,hellinger2011heating,adhikari2020turbulence}. \acknowledgments J.A.A.R~is supported by the European Space Agency's Networking/Partnering Initiative (NPI) program under contract 4000127929/19/NL/MH/mg and the Colombian program Pasaporte a la Ciencia, Foco Sociedad - Reto 3 under grant 3933061. 
D.V.~is supported by STFC Ernest Rutherford Fellowship ST/P003826/1. D.V., G.N. and C.J.O.~are supported by STFC Consolidated Grants ST/S000240/1 and ST/W001004/1. R.T.W.~is supported by STFC Consolidated Grant ST/V006320/1. K.G.~is supported by NSF grant AGS-1460190. This work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC Capital Grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations Grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. This work was discussed at the ``Joint Electron Project'' at MSSL. \appendix \section{Derivation of the equations for the energy densities} \label{app:Energy_equa} To derive the $n$th moment of the Boltzmann equation (\ref{eqn:boltz}), we take the dyadic product of Eq. (\ref{eqn:boltz}) with $\vec{v}^{n}$ on the left and integrate the equation over the entire velocity space. The zeroth moment leads to \begin{equation} \frac{\partial n_{s} }{\partial t} + \nabla \cdot (n_{s}\vec{u}_{s}) = \Xi_{s}^{0}. \label{eqn:zero_moment} \end{equation} \noindent The collision operator has the property $\Xi_{s}^{0} = 0$ as it must conserve the number of particles. In this case, Eq. (\ref{eqn:zero_moment}) is the continuity equation. The first moment leads to \begin{equation} \frac{\partial (n_{s}\vec{u}_{s})}{\partial t} + \frac{1}{m_{s}} \nabla \cdot \tensnd{P}_{s} - \frac{q_{s}}{m_{s}}n_{s}(\vec{E} + \vec{u}_{s}\times\vec{B}) = \vec{\Xi}_{s}^{1}, \label{eqn:frist-mom1} \end{equation} \noindent where \begin{equation} \tensnd{P}_{s} \equiv m_{s}\int f_{s}\vec{v}\vec{v} d^{3}v. \label{eqn:pressure_tensor2} \end{equation} \noindent We separate the second moment in Eq. 
(\ref{eqn:pressure_tensor2}) according to $ \nabla \cdot \tensnd{P} = \nabla \cdot \tensndd{P} + \nabla \cdot (nm\vec{u}\vec{u})$, where $\tensndd{P}$ is defined in Eq. (\ref{eqn:pressure_tensor}). Invoking Eq. (\ref{eqn:zero_moment}), Eq. (\ref{eqn:frist-mom1}) takes the form \begin{equation} \frac{d (n_{s}m_{s}\vec{u}_{s})}{d t} = - \nabla \cdot \tensndd{P}_{s} - (\nabla \cdot \vec{u}_{s})n_{s} m_{s}\vec{u}_{s} + q_{s}n_{s}(\vec{E} + \vec{u}_{s}\times\vec{B}) + m_{s}\vec{\Xi}_{s}^{1}. \label{eqn:firstmom} \end{equation} \noindent This equation describes the total change in time of the bulk momentum density for each species. The second moment of Eq. (\ref{eqn:boltz}) yields \begin{eqnarray} \frac{\partial \tensndd{P}_{s}}{\partial t} + \nabla \cdot \left[({Q}_{ijk,s} + u_{i,s}{P}_{jk,s} + {P}_{ij,s}u_{k,s} + u_{j,s}{P}_{ik,s})\vec{\hat{e}^{i}}\otimes\vec{\hat{e}^{j}}\otimes\vec{\hat{e}^{k}}\right] - \frac{q_{s}}{m_{s}}\left( \tensndd{P}_{s}\times\vec{B} - \vec{B}\times \tensndd{P}_{s} \right) \nonumber = \\ -\nabla \cdot\left(n_{s}m_{s}\vec{u}_{s}\vec{u}_{s}\vec{u}_{s}\right) - \frac{\partial (n_{s}m_{s}\vec{u}_{s}\vec{u}_{s})}{\partial t} + q_{s}n_{s}\left[ \vec{E}\vec{u}_{s} + \vec{u}_{s}\vec{E} + \frac{1}{m_{s}} \left( (\vec{u}_{s}\vec{u}_{s})\times\vec{B} - \vec{B}\times (\vec{u}_{s}\vec{u}_{s})\right) \right] + m_{s}\tensndd{\Xi}_{s}^{2}, \label{eqn:secondmom2} \end{eqnarray} \noindent where $Q_{ijk,s}$ represent the elements of the heat-flux tensor \begin{equation} \tensrdd{Q}_{s} \equiv m_{s}\int f_{s}(\vec{v}-\vec{u}_{s})(\vec{v}-\vec{u}_s)(\vec{v}-\vec{u}_{s}) d^{3}v. \end{equation} \noindent Eqs. (\ref{eqn:firstmom}) and (\ref{eqn:secondmom2}) are the exact first and second moments of Eq. (\ref{eqn:boltz}). We proceed to derive expressions for the energy densities $\varepsilon_{s}^{k}$ and $\varepsilon_{s}^{th}$. For this purpose, we take the scalar product of Eq. (\ref{eqn:firstmom}) with $\vec{u}_{s}$, which leads to Eq. (\ref{eqn:firstmomener_text}). 
To obtain an expression for the thermal energy $\varepsilon^{th}$, we take the trace of Eq. (\ref{eqn:secondmom2}). For the calculation of the trace of the cross-product terms in Eq. (\ref{eqn:secondmom2}), we use an element-wise approach. If $\vec{A}$ is a vector and $\tensnd{M}$ is a second-rank tensor, the cross product is defined as $\vec{A}\times\tensnd{M} = \epsilon_{lip}A_{i}M_{pq}\vec{\hat{e}^{l}}\otimes\vec{\hat{e}^{q}}$. It can be shown that $\tensnd{M} \times \vec{A} =-\left( \vec{A}\times \tensnd{M}^{T} \right)^{T}$, where $\tensnd{M}^{T}$ is the transpose of $\tensnd{M}$, and $Tr(\vec{A}\times\tensnd{M}) = \epsilon_{ijk}A_{i}M_{jk}$. Moreover, if $\tensnd{M}$ is a symmetric tensor, then $Tr(\vec{A}\times\tensnd{M}) = 0$. In addition, the trace of $\nabla \cdot \tensrdd{Q}$ corresponds to $2\nabla \cdot \vec{h}$. This procedure leads to \begin{eqnarray} \frac{d \varepsilon_{s}^{th}}{d t} + \frac{d \varepsilon_{s}^{k}}{d t} + \nabla \cdot \vec{h}_{s} + \nabla\vec{u}_{s}:\tensndd{P}_{s} + (\nabla \cdot \vec{u}_{s})\varepsilon_{s}^{th} + \vec{u}_{s}\cdot(\nabla \cdot \tensndd{P}_{s}) = \nonumber \\ - (\nabla \cdot \vec{u}_{s})\varepsilon_{s}^{k} + q_{s}n_{s}\vec{E}\cdot\vec{u}_{s} + \frac{1}{2}Tr\left( m_{s} \tensndd{\Xi}_{s}^{2}\right). \label{eqn:a20} \end{eqnarray} \noindent Combining Eqs. (\ref{eqn:firstmomener_text}) and (\ref{eqn:a20}), we obtain Eq. (\ref{eqn:secondmomener_text}). \bibliography{Energy_transport_reconnection}{} \bibliographystyle{aasjournal}
Title: Model independent bounds on Type Ia supernova absolute peak magnitude
Abstract: We put constraints on the peak absolute magnitude of type Ia supernova using the Pantheon sample for type Ia supernova observations and the cosmic chronometers data for the Hubble parameter by a model independent and non-parametric approach. Our analysis is based on the Gaussian process regression. We find percent level bounds on the peak absolute magnitude. For completeness and to check the consistency of the results, we also include the Baryon acoustic oscillation data and the prior of the comoving sound horizon from Planck 2018 cosmic microwave background observations. The inclusion of these two data gives tighter constraints on it at the sub-percent level. The mean values of peak absolute magnitude from all these data are consistent with each other and the values are approximately equal to -19.4.
https://export.arxiv.org/pdf/2208.14740
\title{Model independent bounds on Type Ia supernova absolute peak magnitude} \author{Bikash R. Dinda \orcid{0000-0001-5432-667X}} \email{bikashdinda.pdf@iiserkol.ac.in} \email{bikashd18@gmail.com} \affiliation{ Department of Physical Sciences, Indian Institute of Science Education and Research Kolkata, India. } \author{Narayan Banerjee} \email{narayan@iiserkol.ac.in} \affiliation{ Department of Physical Sciences, Indian Institute of Science Education and Research Kolkata, India. } \keywords{Type Ia supernovae observations, Hubble parameter, BAO, CMB} \date{\today} \section{Introduction} \label{sec-intro} The late-time cosmic acceleration was first discovered by the type Ia supernovae observations \citep{SupernovaSearchTeam:1998fmf,SupernovaCosmologyProject:1998vns,2011NatPh...7Q.833W}. These observations are based on the fact that type Ia supernovae are standard candles and that the peak absolute magnitude, $M_B$, of a fiducial mean type Ia supernova light curve is uniform. The discovery of the late-time cosmic acceleration led to the concept of dark energy (for details see \citep{Peebles:2002gy,SupernovaCosmologyProject:2008ojh}), where dark energy is considered to be an exotic matter component of the Universe that has an effective large negative pressure. The peak absolute magnitude, $M_B$, of type Ia supernovae plays an important role in the determination of the expansion history of the Universe, since cosmic distances, such as the luminosity distance of an astronomical object, are related to the relative amplitude of the type Ia supernova light curve. This relative amplitude depends both on the observed magnitude, $m$, and on the absolute magnitude, $M_B$. So, the determination of cosmic distances depends not only on the observed magnitude but also on the absolute magnitude \citep{Linden2009CosmologicalPE,Camarena:2019rmj,Pan-STARRS1:2017jku,Camlibel:2020xbn}. 
That is why the observational constraints on cosmological parameters like the deceleration parameter, the matter-energy density parameter, the dark energy density parameter, etc., are estimated based on the value of the absolute peak magnitude, $M_B$ \citep{Cao:2022ugh,Colgain:2022nlb}. Thus it is important to know the exact value of $M_B$. The determination of the peak absolute magnitude of type Ia supernovae is mainly based on anchors like stellar parallax \citep{vanLeeuwen:2007xw,2018ApJ...855..136R,Greene:2021shv}, detached eclipsing binary stars \citep{Pietrzynski:2013gia}, and maser emission from supermassive black holes \citep{Reid:2019tiq,Pihlstrom:2004vg,Gao:2015tqd}. These methods are mainly astrophysical and restricted to very low-redshift observations. For example, in the SHOES observations, the determination of $M_B$ is based on type Ia supernova data at redshift $z<0.15$ with the anchors mentioned above \citep{Riess:2016jrr,Riess:2020fzl}. It is also important to include higher-redshift type Ia supernova observations to determine the value of $M_B$. For this purpose, the Pantheon sample for type Ia supernova observations is useful, where the data have a redshift range up to nearly $2.2$ \citep{Pan-STARRS1:2017jku}. It is also important to add other cosmological data, like cosmic chronometers data \citep{Jimenez:2001gg,Pinho:2018unz}, baryon acoustic oscillation data \citep{eBOSS:2020yzd}, etc., to the type Ia supernova data to estimate $M_B$ and check the consistency of its value. In the literature, there are some attempts to compute $M_B$ from the cosmological point of view \citep{Camarena:2019rmj,Sapone:2020wwz,Kumar:2021djt,Camarena:2021jlr,Gomez-Valent:2021hda,Cai:2021weh}. 
These studies are mainly based on type Ia supernovae data like Pantheon \citep{Pan-STARRS1:2017jku} in combination with other data sets like cosmic microwave background (CMB) observations \citep{Planck:2015fie,Planck:2018vyg}, baryon acoustic oscillations (BAO) observations \citep{eBOSS:2020yzd}, etc. Some of these methods, like those in Refs.~\citep{Camarena:2019rmj,Camarena:2021jlr}, are not completely independent of astrophysical anchors like stellar parallax \citep{vanLeeuwen:2007xw,2018ApJ...855..136R,Greene:2021shv} and masers \citep{Reid:2019tiq,Pihlstrom:2004vg,Gao:2015tqd}. However, a few other methods, like those in Refs.~\citep{Sapone:2020wwz,Kumar:2021djt,Gomez-Valent:2021hda,Cai:2021weh}, depend completely on the cosmological data. These methods are either cosmological-model dependent or based on a parametrization of $M_B$. Thus it is worthwhile to consider a model-independent and non-parametric approach to estimate $M_B$ from the cosmological data, and this estimation should be independent of any astrophysical data or any other data. The motivation of this work is to compute the bounds on $M_B$ with a completely model-independent and parameter-free approach from the cosmological data only. For this purpose, we mainly consider the Pantheon sample for the supernova type Ia observations \citep{Pan-STARRS1:2017jku} and the cosmic chronometer data for the Hubble parameter \citep{Jimenez:2001gg,Pinho:2018unz}, because these data are independent of any fiducial cosmological model. The CMB \citep{Planck:2018vyg} and the BAO \citep{eBOSS:2020yzd} data, on the other hand, are dependent on a fiducial cosmological model. As the primary motivation of the present work is a model-independent estimation of $M_B$, we do not include these data sets to start with. However, we will see that, as the BAO and CMB data sets have significantly smaller error margins (standard deviations), their inclusion in the analysis helps obtain tighter constraints on $M_B$. 
It is important to note that this difference in the error margins is the only effect of the inclusion of the model-dependent data sets: as we will see, the mean value of $M_B$ is hardly affected by the addition of the model-dependent data to the analysis. This paper is organized as follows. In Sec.~\ref{sec-basic}, we mention basic equations related to the cosmological background dynamics. In Sec.~\ref{sec-data}, we mention some details of the observational data used in our analysis. In Sec.~\ref{sec-methods}, we present our model-independent and non-parametric methodology to obtain bounds on the $M_B$ parameter from these observational data. In Sec.~\ref{sec-result}, we present our results and discuss their significance. Finally, in Sec.~\ref{sec-summary}, we summarize the work. \section{Basics} \label{sec-basic} In our entire analysis, we consider that the Universe is spatially homogeneous and isotropic. We further assume that the Universe is spatially flat. With these two assumptions, the background geometry can be described by the flat Friedmann-Lema\^itre-Robertson-Walker (FLRW) metric given by $dS^{2} = - dt^{2} + a^{2} (t) dR^2$, where $dS$ is the line element of the space-time, $dR$ is the three-dimensional Euclidean line element, $t$ is the cosmic time and $a$ is the cosmic scale factor. In this scenario, the luminosity distance, $d_L$, is related to the Hubble parameter, $H$, through an integral equation given as \begin{equation} d_L(z) = c(1+z) \int_0^z \frac{d\tilde{z}}{ H(\tilde{z}) }, \label{eq:H_to_dL} \end{equation} \noindent where $z$ is the cosmological redshift, given as $1+z = \frac{a_0}{a}$ with $a_0$ the present value of $a$; $\tilde{z}$ is the corresponding dummy integration variable and $c$ is the speed of light in vacuum. 
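Eq.~\eqref{eq:H_to_dL} can be evaluated numerically from any tabulated $H(z)$ with a cumulative trapezoid rule. The sketch below is ours, not the paper's pipeline; in particular, the flat $\Lambda$CDM $H(z)$ used to generate the table is purely an illustrative stand-in (the paper deliberately avoids assuming such a model).

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def luminosity_distance(z_grid, H_grid, z_eval):
    """d_L(z) = c (1+z) * integral_0^z dz'/H(z'), cf. Eq. (eq:H_to_dL).

    z_grid, H_grid: tabulated Hubble parameter [km/s/Mpc], starting at z=0.
    Returns d_L in Mpc at the redshifts z_eval.
    """
    integrand = 1.0 / H_grid
    # Cumulative trapezoidal integral of dz'/H(z') up to each grid point.
    cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * np.diff(z_grid) * (integrand[1:] + integrand[:-1])))
    )
    comoving = C_KMS * np.interp(z_eval, z_grid, cum)  # comoving distance [Mpc]
    return (1.0 + z_eval) * comoving

# Illustrative tabulated H(z) from a flat LambdaCDM model (assumption for
# demonstration only; any reconstructed H(z) table can be used instead).
z = np.linspace(0.0, 2.2, 2000)
H = 70.0 * np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)
dL = luminosity_distance(z, H, np.array([0.1, 1.0]))
```

The same routine works unchanged with a non-parametrically reconstructed $H(z)$ table in place of the model curve.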
The observed luminosity distance of a type Ia supernova, located at a particular redshift, is related to the observed relative peak magnitude ($m$) of the supernova by a simple equation given as \begin{equation} m(z) - M_B = \mu(z) = 5 \log_{10}{ \left[ \frac{d_L(z)}{ \text{Mpc} } \right] } + 25, \label{eq:dL_to_m} \end{equation} \noindent where $M_B$ is the peak absolute magnitude of the same supernova and $\mu$ is called the distance modulus. The above equation is independent of any cosmological model and valid under the sole assumption that the Universe is spatially homogeneous and isotropic. \section{Observational data} \label{sec-data} As mentioned in the introduction, in our analysis, we mainly consider two types of observational data. The first one is the Pantheon compilation for the type Ia supernova observations. This compilation consists of data for $m(z)$ at 1048 redshift points \citep{Pan-STARRS1:2017jku}. These data are also binned over 40 redshift bins. We use these binned data in our analysis and denote them as 'SN' data. We do not explicitly write down all the $m(z)$ values of these data in this paper, because they are publicly available. To get an idea of the mean and standard deviation values of $m(z)$, see the black error bars in the top panel of Figure~\ref{fig:data_all}. The second one is the cosmic chronometer data for the Hubble parameter as a function of redshift \citep{Jimenez:2001gg,Pinho:2018unz}. We denote this as 'CC' data, and any quantity with subscript 'C' corresponds to the values of that quantity at the CC redshift points. These data contain 31 redshift points and the corresponding mean and standard deviation of the Hubble parameter. These are plotted in the middle panel of Figure~\ref{fig:data_all} as black bars with circle markers. The CC data cover the redshift range from $0.07$ to $1.965$. For the sake of completeness, we have also included the BAO data in our analysis. 
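Given an observed $m$ and an estimate of $d_L$ at the same redshift, Eq.~\eqref{eq:dL_to_m} can be inverted pointwise for $M_B$. A minimal sketch (the numerical inputs in the assertions and comments are illustrative, not the paper's values):

```python
import numpy as np

def distance_modulus(dL_mpc):
    """mu(z) = 5 log10(d_L / Mpc) + 25, cf. Eq. (eq:dL_to_m)."""
    return 5.0 * np.log10(dL_mpc) + 25.0

def absolute_magnitude(m, dL_mpc):
    """Invert Eq. (eq:dL_to_m): M_B = m - mu = m - 5 log10(d_L / Mpc) - 25."""
    return m - distance_modulus(dL_mpc)
```

In practice, each redshift point where both $m$ and $d_L$ (with uncertainties) are available yields one estimate of $M_B$, and the consistency of these estimates across redshift is itself a check of the method.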
The BAO data are not completely model independent, because the results have been obtained by considering a fiducial cosmological model. But they are useful, since the error bars in the BAO data are smaller than those in the CC data. The BAO observations consist of measurements in both the line-of-sight direction and the transverse direction \citep{eBOSS:2020yzd}. The line-of-sight data are closely related to the Hubble parameter through the quantity $D_H(z)/r_d$, where $r_d$ is the comoving sound horizon at the baryon-drag epoch and $D_H(z)=c/H(z)$. So, to use these data, we must know the value of $r_d$. We consider the $r_d$ value obtained from the Planck 2018 result, given by $r_d=147.09 \pm 0.26$ Mpc from Planck18:TT,TE,EE+lowE+lensing. In this way, we include the CMB data too. Using $r_d$ (from CMB) and $D_H(z)$ (from BAO), we get the corresponding $H(z)$. The details are given in the next section. These $H(z)$ values and the corresponding errors are plotted in the middle panel of Figure~\ref{fig:data_all} as blue bars with star markers. We also use the transverse BAO data, which are closely related to the comoving angular diameter distance, $D_M(z)$, through the quantity $D_M(z)/r_d$. Hence these data are related to $d_L$. The details are mentioned in the next section. The obtained values of $d_L(z)$ are plotted in Figures~\ref{fig:GPR_dL_vs_BAO_dL} and~\ref{fig:SN_dL_vs_BAO_dL} with black error bars. The obtained values of $d_L(z)$ and the corresponding errors are plotted in the bottom panel of Figure~\ref{fig:data_all} with black bars. Throughout this paper, the BAO data are denoted by 'BAO'. By the 'BAO' notation, we also mean that the value of $r_d$ from the Planck 2018 data has been used. Any quantity with subscript 'B' corresponds to the values of that quantity at the BAO redshift points. 
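The conversion from a line-of-sight BAO measurement to $H(z)$ can be sketched as follows. The $r_d$ prior is the Planck 2018 value quoted above; the example measurement $x = D_H/r_d = 19.8 \pm 0.5$ is a hypothetical number for illustration, and the quadrature error propagation assumes the BAO measurement and the $r_d$ prior are independent.

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def H_from_DH_over_rd(x, dx, rd=147.09, drd=0.26):
    """Convert a line-of-sight BAO measurement x = D_H/r_d into H(z).

    Since D_H = c/H, we have H = c / (r_d * x).  The fractional
    uncertainties of x and r_d (Planck 2018 prior) are added in
    quadrature, assuming the two are independent.
    Returns (H, Delta_H) in km/s/Mpc.
    """
    H = C_KMS / (rd * x)
    dH = H * np.sqrt((dx / x) ** 2 + (drd / rd) ** 2)
    return H, dH

# Hypothetical measurement, for illustration only.
H_bao, dH_bao = H_from_DH_over_rd(19.8, 0.5)
```

Because the fractional error on $r_d$ ($\approx 0.2\%$) is much smaller than typical BAO measurement errors, the propagated $\Delta H$ is dominated by the $D_H/r_d$ uncertainty.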
\section{Methodology} \label{sec-methods} If we know the observed $m$ and $d_L$ corresponding to a type Ia supernova, we can in principle find its peak absolute magnitude $M_B$ with the help of Eq.~\eqref{eq:dL_to_m}. For this purpose, we have type Ia supernova observations like the Pantheon compilation \citep{Pan-STARRS1:2017jku}, which provides us with data for $m(z)$. If we consider any theoretical model or any parametrization, we can get a functional form of $d_{L}(z)$, either directly or via the functional form of $H(z)$ through Eq.~\eqref{eq:H_to_dL}. Once we have the functional form of $d_L(z)$, we can put constraints on the parameter $M_B$ (along with the other parameters of that model or parametrization). In this case, in principle, we can get a constraint on the parameter $M_B$ from the type Ia supernova observations alone, but this constraint is degenerate with the constraints on the other parameters. For better constraints on $M_B$, one can add other data sets. However, in this analysis, we are not considering any model or any parametrization; rather, we want constraints on $M_B$ in a model-independent way. Without considering any model or any parametrization, we cannot compute $M_B$ from the type Ia supernova observations alone. We have to add at least one other type of observation, either related to the luminosity distance (or any other quantity closely related to it, like the angular diameter distance) or related to the Hubble parameter. For the first case, cosmological observations like BAO \citep{eBOSS:2020yzd} are useful. For the second case, observations like the cosmic chronometers \citep{Jimenez:2001gg,Pinho:2018unz} are useful. One can also combine all three of these data sets. In general, the $d_L(z)$ data (here the BAO data) and the $m(z)$ data (here the Pantheon compilation) are not at the same redshift points. For this reason, we cannot use Eq.~\eqref{eq:dL_to_m} to compute $M_B$ from the combination of these two data sets in a straightforward way. 
For similar reasons, we cannot use the Hubble parameter data (here the CC data) and the $m(z)$ data together to constrain $M_B$ in a straightforward way. One way to overcome these problems is the Gaussian process regression (GPR) technique \citep{Seikel_2012,Shafieloo_2012,Hwang:2022hla}. This technique is useful for constructing, in a model independent way, an approximate mean function of any relevant quantity, together with the corresponding standard deviation, from observations related to that quantity.\footnote{The Gaussian process regression technique is not completely model independent, because the analysis involves hyper-parameters related to the kernel. There are also parameters related to the approximate mean function used as prior knowledge in GPR, if one does not adopt the zero mean function. In this sense, GPR is not completely model or parametrization independent, but it remains useful because the results do not depend significantly on these parameters.} For example, from the $z$, $m$ and $\Delta m$ data points (obtained from the SN data), we can construct an approximate mean function for $m(z)$ (over a similar redshift range) and the corresponding standard deviation, $\Delta m(z)$. Throughout this paper, we denote the standard deviation of any quantity $X$ by $\Delta X$. In recent years, GPR has been widely used in cosmology \citep{Ruiz-Zapatero:2022xbv,Ruiz-Zapatero:2022zpx,Velasquez-Toribio:2021ufm,2022PDU....3600998M,Mehrabi_2022,Zhang_2022,Bernardo:2021cxi,Vazirnia:2021xuu,Li:2021onq,Escamilla-Rivera:2021rbe,Bernardo:2021mfs,Mukherjee:2021ggf,Keeley:2020aym,Bonilla:2020wbn,Zheng:2020tau,Mukherjee:2020ytg,Liu:2020pfa,Wang:2019yob,Mukherjee:2020vkx,Liao:2019qoc,Haridasu:2018gqm,Zhang:2018gjb,Wang:2016iij,Wang:2017jdm,Zhang:2016tto,Nair:2013sna,Seikel:2013fda}. Let us first focus on the SN and CC data sets only. We can use GPR to reconstruct $H$ and $\Delta H$ at the SN redshift points from the CC data.
With the reconstructed $H$, we can reconstruct $m$ as a function of $M_B$ using Eqs.~\eqref{eq:H_to_dL} and~\eqref{eq:dL_to_m}, but reconstructing $\Delta m$ is difficult because Eq.~\eqref{eq:H_to_dL} involves an integration, and there is no standard procedure for propagating uncertainty through an integration. On the other hand, we can reconstruct $m$ and $\Delta m$ at the CC redshift points from the SN data using GPR; in fact, GPR itself also provides the derivative of $m$ and the corresponding standard deviation. We therefore choose this second approach. The details of this method are given in the following subsection. \subsection{Obtaining constraints on $M_B$ from SN and CC data} Let us discuss, step by step, the methodology to obtain constraints on $M_B$ from the SN and CC data. \subsubsection{First step} In the first step, we use GPR to reconstruct $m(z)$, $\Delta m(z)$, $m'(z)$, $\Delta m'(z)$ and Cov$[m(z),m'(z)]$ from the SN data at the target redshift points, which in our case are the CC redshift points. Throughout the paper, we denote the covariance between any two quantities $A$ and $B$ as Cov$[A,B]$. Let us briefly describe GPR. In GPR, we assume that the observed data $Y$ (here, $m$ from the SN data) follow a multivariate normal distribution, described by a mean vector and a covariance matrix. The data can be expressed as a vector $Y=[y_1,y_2,...,y_n]^T$, where $y_1,y_2,...,y_n$ are the observed values at the data points $x_1,x_2,...,x_n$ respectively, and $n$ is the number of observed data points. The superscript 'T' denotes the transpose of a vector or a matrix. The data points can likewise be expressed as a vector $X=[x_1,x_2,...,x_n]^T$ (here, the redshift points of the SN data).
Throughout the discussion, we follow the notation that capital letters correspond to vectors or matrices and lowercase letters to single values. Any noise present in the observed data can be added to the covariance matrix in the distribution of $Y$. Another important assumption in GPR is that the predicted smooth mean function follows a joint multivariate distribution with the data. Theoretically, this distribution is infinite-dimensional, but in practice we do not need a smooth function everywhere; we only need the values of the predicted mean at the target points. For example, here we need the predicted mean values of $m$ at the CC redshift points. If the total number of target points is $n^*$, the joint distribution of the data and the predicted mean has dimension $n+n^*$. Let us denote the predicted quantity by $F^*$, which has mean vector $\overline{F^*}=[\overline{f^*_1},\overline{f^*_2},...,\overline{f^*_{n^*}}]^T$ and covariance matrix Cov$[F^*]$, an $n^* \times n^*$ matrix. To find these using GPR, we need another important quantity, the kernel covariance. Several forms of kernel covariance exist in the literature; among them, the squared-exponential kernel is the most widely used, one of the main reasons being that it is infinitely differentiable. For this kernel, the covariance elements are expressed as \begin{equation} k(x_1,x_2)=\sigma_f^2 \exp \left[ -\frac{|x_1-x_2|^2}{l^2} \right], \label{eq:kernel_SE} \end{equation} \noindent where $\sigma_f^2$ is the signal variance, which determines the average deviation of the function from its mean across the region of target points, and $l$ is the length scale over which the function changes significantly. These parameters are called hyper-parameters. We also need prior information on an approximate mean function, which is used both for the input data and for the predicted values.
In practice, many authors use the zero mean function, but we use the corresponding mean function from the flat $\Lambda$CDM model. Let us denote the vectors of approximate input values and approximate target values by $M$ (with $n$ elements) and $M^*$ (with $n^*$ elements) respectively. Once we have the approximate mean function and the kernel covariance matrix, the predicted mean vector $\overline{F^*}$ and the covariance matrix Cov$[F^*]$ are given as \citep{Seikel_2012,Shafieloo_2012,Hwang:2022hla} \begin{eqnarray} \overline{F^*} &=& M^* + K(X^*,X) \left[ K(X,X)+C \right]^{-1} (Y-M), \nonumber\\ \text{Cov}[F^*] &=& K(X^*,X^*) \nonumber\\ && - K(X^*,X) \left[ K(X,X)+C \right]^{-1} K(X,X^*), \label{eq:main_prediction} \end{eqnarray} \noindent respectively, where $X^*=[x_1^*,x_2^*,...,x_{n^*}^*]^T$ is the vector of target points (here, the redshift points of the CC data) and $C$ is the noise covariance matrix of the observed data. Note that the matrix Cov$[F^*]$ contains the covariances of all pairs of elements of $F^*$. The above equations depend on the values of the hyper-parameters of the kernel and on the parameters of the approximate mean function. We marginalize over all these parameters using the \textit{emcee} package \citep{emcee} with the log marginal likelihood (denoted $\log P(Y|X)$) given as \citep{Seikel_2012} \begin{eqnarray} \log P(Y|X) &=& -\frac{1}{2} (Y-M)^T \left[ K(X,X)+C \right]^{-1} (Y-M) \nonumber\\ && -\frac{1}{2} \log |K(X,X)+C| -\frac{n}{2} \log{(2 \pi)}, \label{eq:log_marginal_likelihood} \end{eqnarray} \noindent where $|K(X,X)+C|$ is the determinant of the matrix $K(X,X)+C$. In GPR, the derivatives of the function can also be computed, by assuming that the derivatives likewise follow a joint multivariate normal distribution with the observed data.
The mean vector and the covariance matrix corresponding to the first derivative are given as \citep{Seikel_2012} \begin{eqnarray} && \overline{F'^*} = M'^* \nonumber\\ && + [K'(X,X^*)]^T \left[ K(X,X)+C \right]^{-1} (Y-M), \nonumber\\ && \text{Cov}[F'^*] = K''(X^*,X^*) \nonumber\\ && - [K'(X,X^*)]^T \left[ K(X,X)+C \right]^{-1} K'(X,X^*), \label{eq:derivative_predictions} \end{eqnarray} \noindent where prime and double prime denote first and second derivatives of the corresponding function with respect to its argument (here, the redshift). The kernel derivatives $k'(x,x^*)$ and $k''(x^*,x^*)$ are given as \begin{eqnarray} k'(x,x^*) = \dfrac{\partial k(x,x^*)}{\partial x^*}, \qquad k''(x^*,x^*) = \dfrac{\partial ^2 k(x^*,x^*)}{\partial x^* \partial x^*}, \label{eq:kernel_derivatives} \end{eqnarray} \noindent respectively. In GPR, we can also obtain the covariances between the function and its derivatives. For example, the covariance matrix between the function and its first derivative is given as \citep{Seikel_2012} \begin{eqnarray} && \text{Cov}[F^*,F'^*] = K'(X^*,X^*) \nonumber\\ && - [K(X,X^*)]^T \left[ K(X,X)+C \right]^{-1} K'(X,X^*). \label{eq:covariance_fstar_fstarprime} \end{eqnarray} \noindent We have $\text{Cov}[F^*,F'^*] = \left[ \text{Cov}[F'^*,F^*] \right]^T = \text{Cov}[F'^*,F^*]$, since the covariance matrices are symmetric. Note that the reconstructed function may depend on the form of the kernel and on the approximate prior mean function, but the dependence is not significant because we marginalize over all the hyper-parameters and the parameters involved in the mean function; for details, see \citep{Hwang:2022hla}. For the marginalization and the MCMC analysis, we use the python package \textit{emcee} \citep{emcee}. In Figure~\ref{fig:GPR_m_vs_SN_m}, we show the reconstructed mean and standard deviation of $m(z)$ obtained by GPR from the observed SN data.
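As an illustration, the posterior prediction of Eq.~\eqref{eq:main_prediction} with the squared-exponential kernel of Eq.~\eqref{eq:kernel_SE} can be sketched in \textit{numpy}. For brevity, this sketch assumes a zero prior mean ($M=M^*=0$) and fixed hyper-parameters, whereas our analysis uses a flat $\Lambda$CDM mean function and marginalizes over the hyper-parameters with \textit{emcee}; the function names and toy data are hypothetical.

```python
import numpy as np

def sq_exp(x1, x2, sigma_f, l):
    """Squared-exponential kernel of Eq. (kernel_SE)."""
    return sigma_f**2 * np.exp(-(x1[:, None] - x2[None, :])**2 / l**2)

def gp_predict(X, Y, C, Xs, sigma_f, l):
    """Posterior mean and covariance at target points Xs, Eq. (main_prediction),
    with a zero prior mean (M = M* = 0) for brevity."""
    Kxx = sq_exp(X, X, sigma_f, l) + C          # K(X,X) + C
    Ksx = sq_exp(Xs, X, sigma_f, l)             # K(X*,X)
    mean = Ksx @ np.linalg.solve(Kxx, Y)        # K(X*,X)[K+C]^{-1} Y
    cov = sq_exp(Xs, Xs, sigma_f, l) - Ksx @ np.linalg.solve(Kxx, Ksx.T)
    return mean, cov

# toy check: near-noiseless smooth data are interpolated at the training points
X = np.arange(5.0)
Y = np.sin(X)
C = 1e-8 * np.eye(5)
mean, cov = gp_predict(X, Y, C, X, sigma_f=1.0, l=1.0)
```

The derivative predictions of Eq.~\eqref{eq:derivative_predictions} follow the same pattern, with the kernel replaced by its derivatives from Eq.~\eqref{eq:kernel_derivatives}.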
The black colored bars with circle markers correspond to the SN data from the Pantheon compilation. The red line and the shaded orange region correspond to the reconstructed mean and 1$\sigma$ confidence interval of $m(z)$ respectively, obtained by GPR. We plot the mean values and standard deviations of $m$ at the target CC redshift points with blue colored bars in star marker format. Since we shall later include the BAO data as well, we also plot $m$ and $\Delta m$ at the BAO redshift points with green colored bars in diamond marker format in the same figure. These reconstructed values at the target redshift points will be used in the following steps. \subsubsection{Second step} Inverting Eq.~\eqref{eq:dL_to_m}, we obtain the luminosity distance from $m$ as \begin{equation} d_L(z) = 10^{\frac{1}{5} \left[ m(z)-25-M_B \right] } \hspace{0.2 cm} \text{Mpc}. \label{eq:dL_from_m} \end{equation} \noindent The above equation can be rewritten as the product of a redshift independent part and a redshift dependent part. For the redshift independent part, we define a parameter $\beta$ given as \begin{equation} \beta = 10^{-\frac{M_B}{5}} \hspace{0.2 cm} \text{Mpc}. \label{eq:beta} \end{equation} \noindent For the redshift dependent part, we define a quantity $d_N$ given as \begin{equation} d_N(z) = 10^{\frac{1}{5} \left[ m(z)-25 \right] }. \label{eq:dN} \end{equation} \noindent With these definitions of $\beta$ and $d_N$, the luminosity distance can be rewritten as \begin{equation} d_L(z) = \beta d_N(z). \label{eq:dL_wrt_beta_dN} \end{equation} \noindent We see that $d_L$ is linear in $d_N$, and that $d_N$ is independent of $M_B$, because $M_B$ is absorbed into the constant parameter $\beta$. The purpose of this is to carry out the entire GPR analysis without involving the $M_B$ parameter; $M_B$ will appear only in the final step.
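The split of Eqs.~\eqref{eq:beta}--\eqref{eq:dL_wrt_beta_dN} is straightforward to verify numerically. A minimal sketch (the function names and sample values are hypothetical):

```python
import numpy as np

def d_N(m):
    """M_B-independent, redshift-dependent part of d_L, Eq. (dN)."""
    return 10.0 ** ((np.asarray(m, dtype=float) - 25.0) / 5.0)

def beta(M_B):
    """M_B-dependent, redshift-independent part, Eq. (beta), in Mpc."""
    return 10.0 ** (-M_B / 5.0)

# the product beta * d_N reproduces d_L of Eq. (dL_from_m):
m_sample, MB_sample = 24.0, -19.4          # hypothetical values
dL = beta(MB_sample) * d_N(m_sample)       # in Mpc
```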
This will become clear below. Besides $d_L$, we also need $d'_L = \frac{dd_L}{dz}$ to find $H$; we will see why in the next step. To compute $d'_L$, we differentiate Eq.~\eqref{eq:dL_wrt_beta_dN} with respect to $z$ and get \begin{equation} d'_L(z) = \beta d'_N(z), \label{eq:dLp_from_dNp} \end{equation} \noindent where $d'_N$ is obtained by differentiating Eq.~\eqref{eq:dN} with respect to $z$, \begin{equation} d'_N(z) = \alpha m'(z) 10^{\frac{1}{5} \left[ m(z)-25 \right] } = \alpha m'(z) d_N(z), \label{eq:dNp} \end{equation} \noindent with \begin{equation} \alpha = \frac{\ln 10}{5}. \label{eq:alpha} \end{equation} So, in this step, we obtain $d_N$ and $d'_N$ from $m$ and $m'$. In the next step, we will also need $\Delta d_N$, $\Delta d'_N$ and Cov[$d_N,d'_N$]. These are computed from $\Delta m$, $\Delta m'$ and Cov[$m,m'$] by the propagation of uncertainty: \begin{eqnarray} \text{Var}(d_N) &=& \left( \frac{\partial d_N}{\partial m} \right)^2 \text{Var}(m) + \left( \frac{\partial d_N}{\partial m'} \right)^2 \text{Var}(m') \nonumber\\ && + 2 \frac{\partial d_N}{\partial m} \frac{\partial d_N}{\partial m'} \text{Cov}[m,m'] \nonumber\\ &=& \alpha^2 d_N^2 \text{Var}(m), \end{eqnarray} \begin{eqnarray} \text{Var}(d'_N) &=& \left( \frac{\partial d'_N}{\partial m} \right)^2 \text{Var}(m) + \left( \frac{\partial d'_N}{\partial m'} \right)^2 \text{Var}(m') \nonumber\\ && + 2 \frac{\partial d'_N}{\partial m} \frac{\partial d'_N}{\partial m'} \text{Cov}[m,m'] \nonumber\\ &=& \alpha^4 d_N^2 m'^2 \text{Var}(m) + \alpha^2 d_N^2 \text{Var}(m') \nonumber\\ && + 2 \alpha^3 d_N^2 m' \text{Cov}[m,m'], \end{eqnarray} \begin{eqnarray} \text{Cov}[d_N,d'_N] &=& \frac{\partial d_N}{\partial m} \frac{\partial d'_N}{\partial m} \text{Var}(m) + \frac{\partial d_N}{\partial m'} \frac{\partial d'_N}{\partial m'} \text{Var}(m') \nonumber\\ && + \left( \frac{\partial d_N}{\partial m} \frac{\partial d'_N}{\partial m'} + \frac{\partial d_N}{\partial m'} \frac{\partial d'_N}{\partial m} \right) \text{Cov}[m,m'] \nonumber\\ &=& \alpha^3 d_N^2 m' \text{Var}(m) + \alpha^2 d_N^2 \text{Cov}[m,m'], \label{eq:Cov_dN_dNp} \end{eqnarray} \noindent respectively. The corresponding standard deviations for $d_N$ and $d'_N$ are $\Delta d_N=\sqrt{\text{Var}(d_N)}$ and $\Delta d'_N=\sqrt{\text{Var}(d'_N)}$ respectively. \subsubsection{Third step} To get the Hubble parameter, we differentiate Eq.~\eqref{eq:H_to_dL}, which yields \begin{eqnarray} d'_L(z) &=& c \left[ \frac{1+z}{H(z)} + \int_0^z \frac{d\tilde{z}}{ H(\tilde{z}) } \right] \nonumber\\ && = \frac{c(1+z)}{H(z)} + \frac{d_L(z)}{1+z}. \label{eq:derivative_d_L_eqn} \end{eqnarray} \noindent From this equation, the Hubble parameter is given as \begin{eqnarray} H(z) &=& \frac{c(1+z)^2}{(1+z)d'_L(z)-d_L(z)} \nonumber\\ &=& \frac{c(1+z)^2}{ \beta \left[ (1+z)d'_N(z)-d_N(z) \right] }, \label{eq:H_from_dL_dLprime} \end{eqnarray} \noindent where in the second equality we have used Eqs.~\eqref{eq:dL_wrt_beta_dN} and~\eqref{eq:dLp_from_dNp}. From this equation, we can now clearly see why we needed $d'_N$ along with $d_N$. As in the case of the luminosity distance, we can separate a parameter independent part (which is redshift dependent) from a parameter dependent part (which is redshift independent). For the parameter independent part, we define a quantity $G$ given as \begin{equation} G(z) = \frac{(1+z)^2}{ (1+z)d'_N(z)-d_N(z) }. \label{eq:G} \end{equation} \noindent For the parameter dependent part, we define a parameter $F$ given as \begin{equation} F = \frac{c}{\beta} = c \hspace{0.2 cm} 10^{\frac{M_B}{5}} \hspace{0.2 cm} \text{Mpc}^{-1}, \label{eq:F} \end{equation} \noindent where in the second equality we have used the definition of $\beta$ from Eq.~\eqref{eq:beta}. Using these two definitions, the Hubble parameter can be rewritten as \begin{equation} H(z) = F G(z).
\label{eq:H_wrt_F_G} \end{equation} \noindent We see that $H$ is linear in $G$. In this step, we obtain $G$ from $d_N$ and $d'_N$, and finally $H$ from $G$. We also need the corresponding standard deviations. First, we get $\Delta G$ from $\Delta d_N$, $\Delta d'_N$ and Cov[$d_N,d'_N$] by the propagation of uncertainty: \begin{eqnarray} \text{Var}(G) &=& \left( \frac{\partial G}{\partial d_N} \right)^2 \text{Var}(d_N)+\left( \frac{\partial G}{\partial d'_N} \right)^2 \text{Var}(d'_N) \nonumber\\ && + 2 \frac{\partial G}{\partial d_N} \frac{\partial G}{\partial d'_N} \text{Cov} [d_N,d'_N] \nonumber\\ &=& \frac{G^4}{(1+z)^4} \text{Var}(d_N) + \frac{G^4}{(1+z)^2} \text{Var}(d'_N) \nonumber\\ && - 2 \frac{G^4}{(1+z)^3} \text{Cov} [d_N,d'_N]. \label{eq:variance_G} \end{eqnarray} \noindent The corresponding standard deviation is $\Delta G = \sqrt{\text{Var}(G)}$. In Figure~\ref{fig:GPR_G}, we plot the reconstructed function $G(z)$ and its standard deviation, obtained using GPR and the propagation of uncertainty. The red line and the shaded orange region correspond to the mean $G(z)$ and the associated 1$\sigma$ confidence region respectively. The corresponding mean values and standard deviations of $G$ at the target CC redshift points are shown with blue colored bars in circle marker format. We also plot the reconstructed $G$ and $\Delta G$ at the BAO redshift points with green colored bars in star marker format; these will be required later when using the BAO data. Once we have $\Delta G$, we get $\Delta H$ by the propagation of uncertainty: \begin{equation} \Delta H(z) = |F| \Delta G(z). \label{eq:Delta_H} \end{equation} \subsubsection{Fourth step} We have obtained $G(z)$ and $\Delta G(z)$ at the CC redshift points from the observed SN data using GPR. Now we compare these values with the actual observed CC data to obtain constraints on $M_B$.
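The second- and third-step propagation, from the GPR outputs $(m, m', \Delta m, \Delta m', \mathrm{Cov}[m,m'])$ to $G$ and $\Delta G$ via Eqs.~\eqref{eq:dN}, \eqref{eq:dNp} and \eqref{eq:variance_G}, can be sketched as follows; the function name and the sample values are hypothetical.

```python
import numpy as np

ALPHA = np.log(10.0) / 5.0   # Eq. (alpha)

def propagate(z, m, mp, var_m, var_mp, cov_m_mp):
    """Propagate reconstructed (m, m') and their (co)variances to G and
    Delta G, following Eqs. (dN), (dNp), (Cov_dN_dNp) and (variance_G)."""
    dN = 10.0 ** ((m - 25.0) / 5.0)                          # Eq. (dN)
    dNp = ALPHA * mp * dN                                    # Eq. (dNp)
    var_dN = ALPHA**2 * dN**2 * var_m
    var_dNp = (ALPHA**4 * dN**2 * mp**2 * var_m
               + ALPHA**2 * dN**2 * var_mp
               + 2.0 * ALPHA**3 * dN**2 * mp * cov_m_mp)
    cov_dN_dNp = ALPHA**3 * dN**2 * mp * var_m + ALPHA**2 * dN**2 * cov_m_mp
    G = (1.0 + z)**2 / ((1.0 + z) * dNp - dN)                # Eq. (G)
    var_G = (G**4 / (1.0 + z)**4 * var_dN
             + G**4 / (1.0 + z)**2 * var_dNp
             - 2.0 * G**4 / (1.0 + z)**3 * cov_dN_dNp)       # Eq. (variance_G)
    return G, np.sqrt(var_G)

# hypothetical sample point with Cov[m, m'] = 0
G, dG = propagate(np.array([0.5]), np.array([22.0]), np.array([2.0]),
                  np.array([0.01]), np.array([0.01]), np.array([0.0]))
```

Note that no part of this chain involves $M_B$, consistent with the separation into $F$ and $G$.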
For this purpose, we define a chi-square given as \begin{equation} \chi^2_{\text{C}}(F) = \sum_{z_{\text{C}}} \frac{ \left[ F G(z_{\text{C}})-H_{\text{C}}(z_{\text{C}}) \right]^2}{ F^2 \Delta G^2(z_{\text{C}}) + \Delta H_{\text{C}}^2(z_{\text{C}}) }, \label{eq:chisqr_SN_CC} \end{equation} \noindent where the subscript 'C' corresponds to the actual CC observations. The denominator in the above equation is the total variance of the Hubble parameter. Since this total variance is itself parameter dependent, the better way to constrain the parameter is a maximum likelihood analysis rather than chi-square minimization. The corresponding log-likelihood is given as \begin{eqnarray} & \log{L}_{\text{C}}(F) = - \frac{\chi^2_{\text{C}}(F)}{2} \nonumber\\ & - \frac{1}{2} \sum_{z_{\text{C}}} \log{ \left( 2 \pi \left[ F^2 \Delta G^2(z_{\text{C}}) + \Delta H_{\text{C}}^2(z_{\text{C}}) \right] \right) }. \label{eq:lnlk_SN_CC} \end{eqnarray} \noindent We maximize the likelihood by minimizing the negative log-likelihood to obtain constraints on $F$. Having $F$ and $\Delta F$, we then get $M_B$ and $\Delta M_B$ by the propagation of uncertainty: \begin{eqnarray} M_B &=& 5\log_{10}\left( \frac{F}{c} \text{Mpc} \right), \nonumber\\ \Delta M_B &=& \frac{ \Delta F }{ | \alpha F | } . \label{eq:M_from_F} \end{eqnarray} It is now clear that the parameter $F$ (or $M_B$) enters only in this final step, where we perform the maximum likelihood analysis; all the previous steps are independent of this parameter. \subsection{Inclusion of BAO and CMB data} \label{subsec-cmbbao} In this subsection, we discuss how to include the BAO data in our analysis, using a methodology similar to that of the previous subsection.
As mentioned previously, the BAO observations provide two types of data: one related to the Hubble parameter through the quantity $\tilde{D}_H(z)=D_H(z)/r_d=c/(H(z) r_d)$, and the other related to the luminosity distance (more precisely, the comoving angular diameter distance) through the quantity $\tilde{D}_M(z)=D_M(z)/r_d=d_L(z)/[(1+z)r_d]$. To include the BAO data in our analysis, we need the value of $r_d$. For this purpose, we use $r_d$ and $\Delta r_d$ from the Planck 2018 CMB results for the TT,TE,EE+lowE+lensing combination of data (with the standard abbreviations), namely $r_d=147.09$ Mpc and $\Delta r_d=0.26$ Mpc. With these values, we get the $H(z)$ and $\Delta H(z)$ corresponding to the BAO data as \begin{eqnarray} H_{\text{B}}(z) &=& \frac{c}{r_d \tilde{D}_H(z)}, \nonumber\\ \frac{ \Delta H_{\text{B}}(z) }{ H_{\text{B}}(z) } &=& \sqrt{ \left[ \frac{\Delta \tilde{D}_H(z)}{\tilde{D}_H(z)} \right]^2+\left[ \frac{\Delta r_d}{r_d} \right]^2 }, \label{eq:PL18_BAO_H} \end{eqnarray} \noindent respectively. Similarly, we get the $d_L(z)$ and $\Delta d_L(z)$ corresponding to the BAO observations as \begin{eqnarray} d_{L (\text{B})} (z) &=& (1+z)r_d \tilde{D}_M(z), \nonumber\\ \frac{ \Delta d_{L (\text{B})} (z) }{ d_{L (\text{B})} (z) } &=& \sqrt{ \left[ \frac{\Delta \tilde{D}_M(z)}{\tilde{D}_M(z)} \right]^2+\left[ \frac{\Delta r_d}{r_d} \right]^2 }, \label{eq:PL18_BAO_dL} \end{eqnarray} \noindent respectively. So we have two types of BAO data, obtained with the help of the $r_d$ value from the Planck 2018 results; in this way, we include the CMB data.
Since we have $H(z)$ data from BAO, we perform the same analysis as in the previous subsection and define the corresponding log-likelihood for the BAO $H(z)$ data as \begin{eqnarray} & \log{L}_{\text{B(H)}}(F) = - \frac{1}{2} \sum_{z_{\text{B}}} \frac{ \left[ F G(z_{\text{B}})-H_{\text{B}} (z_{\text{B}}) \right]^2}{ F^2 \Delta G^2(z_{\text{B}}) + \Delta H_{\text{B}}^2(z_{\text{B}}) } \nonumber\\ & - \frac{1}{2} \sum_{z_{\text{B}}} \log{ \left( 2 \pi \left[ F^2 \Delta G^2(z_{\text{B}}) + \Delta H_{\text{B}}^2(z_{\text{B}}) \right] \right) }, \label{eq:lnlk_SN_PL18_BAO_H} \end{eqnarray} \noindent where the subscript 'B' corresponds to the actual BAO data. Similarly, for the $d_L(z)$ data from BAO, we define a log-likelihood \begin{eqnarray} & \log{L}_{\text{B}(d_L)}(F) = - \frac{1}{2} \sum_{z_{\text{B}}} \frac{ \left[ \beta d_N(z_{\text{B}})-d_{L (\text{B})}(z_{\text{B}}) \right]^2}{ \beta^2 \Delta d_N^2(z_{\text{B}}) + \Delta d_{L (\text{B})}^2(z_{\text{B}}) } \nonumber\\ & - \frac{1}{2} \sum_{z_{\text{B}}} \log{ \left( 2 \pi \left[ \beta^2 \Delta d_N^2(z_{\text{B}}) + \Delta d_{L (\text{B})}^2(z_{\text{B}}) \right] \right) }, \label{eq:lnlk_SN_PL18_BAO_dL} \end{eqnarray} \noindent with $\beta=c/F$. We plot the reconstructed $d_N$ and $\Delta d_N$ in Figure~\ref{fig:GPR_dN}. The red line and the shaded orange region correspond to $d_N(z)$ and the associated 1$\sigma$ confidence region respectively, and the corresponding values at the target BAO redshift points are shown by the green colored bars. Adding the above two log-likelihoods, we get the total log-likelihood for the SN+BAO data: \begin{equation} \log{L}_{\text{B}}(F) = \log{L}_{\text{B}(H)}(F) + \log{L}_{\text{B}(d_L)}(F). \label{eq:lnlk_SN_PL18_BAO} \end{equation} \noindent We minimize the negative log-likelihood to obtain the constraints on $F$ from the SN+BAO+CMB data, and then the constraints on $M_B$ using Eq.~\eqref{eq:M_from_F}.
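The likelihood maximizations above (Eqs.~\eqref{eq:lnlk_SN_CC} and~\eqref{eq:lnlk_SN_PL18_BAO_H} share the same structure) can be sketched with a simple grid minimization over $F$, followed by the mapping to $M_B$ of Eq.~\eqref{eq:M_from_F}. The toy inputs below (a fiducial $H(z)$ and noiseless $G = H/F_{\rm true}$) are hypothetical and serve only as a self-consistency check, not as a reproduction of our results.

```python
import numpy as np

C_KMS = 299792.458   # speed of light [km/s]

def neg_log_like(F, G, dG, H, dH):
    """Negative of Eq. (lnlk_SN_CC); the total variance F^2 dG^2 + dH^2
    depends on F itself, hence maximum likelihood instead of chi-square."""
    var = F**2 * dG**2 + dH**2
    return 0.5 * np.sum((F * G - H)**2 / var + np.log(2.0 * np.pi * var))

def fit_MB(G, dG, H, dH):
    """Minimize the negative log-likelihood over a grid of F and map the
    best-fit F to M_B through Eq. (M_from_F)."""
    F_grid = np.linspace(30.0, 50.0, 20001)
    nll = [neg_log_like(F, G, dG, H, dH) for F in F_grid]
    F_best = F_grid[np.argmin(nll)]
    return 5.0 * np.log10(F_best / C_KMS)

# toy self-consistency check: noiseless G = H / F_true must recover F_true
z = np.linspace(0.1, 1.0, 10)
H = 70.0 * np.sqrt(0.3 * (1.0 + z)**3 + 0.7)   # a fiducial H(z) [km/s/Mpc]
F_true = 39.6
MB = fit_MB(H / F_true, 1e-3 * H / F_true, H, 0.5 * np.ones_like(H))
```

In the actual analysis, the uncertainty $\Delta F$ (and hence $\Delta M_B$) is also obtained from the likelihood, rather than from a grid point estimate.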
\subsection{CC and BAO together} The constraints on $M_B$ from all the data combined, i.e. from SN+CC+BAO+CMB, can be obtained through a maximum likelihood analysis with the total log-likelihood \begin{equation} \log{L}_{\text{tot}}(F) = \log{L}_{\text{C}}(F) + \log{L}_{\text{B}}(F), \label{eq:lnlk_SN_CC_PL18_BAO} \end{equation} \noindent where the subscript 'tot' corresponds to the SN+CC+BAO combination of data. \section{Results and discussion} \label{sec-result} For the SN+CC data, we minimize the negative log-likelihood in Eq.~\eqref{eq:lnlk_SN_CC} and obtain $F=39.66\pm1.03$ Mpc$^{-1}$. Using this in Eq.~\eqref{eq:M_from_F}, we get the constraint on $M_B$ for the SN+CC data as \begin{equation} M_B = -19.394 \pm 0.056 \hspace{0.2 cm} \text{mag} \hspace{0.2 cm} (\text{SN}+\text{CC}). \label{eq:result_MB_SN_CC} \end{equation} Similarly, for the SN+BAO data, we minimize the negative log-likelihood in Eq.~\eqref{eq:lnlk_SN_PL18_BAO} and obtain $F=39.55\pm0.43$ Mpc$^{-1}$, and consequently \begin{equation} M_B = -19.400 \pm 0.023 \hspace{0.2 cm} \text{mag} \hspace{0.2 cm} (\text{SN}+\text{BAO}). \label{eq:result_MB_SN_BAO_CMB} \end{equation} Finally, we obtain the constraint on $F$ from all these data combined, i.e. from the SN+CC+BAO data, by minimizing the negative log-likelihood in Eq.~\eqref{eq:lnlk_SN_CC_PL18_BAO}, which gives $F=39.57\pm0.39$ Mpc$^{-1}$. Consequently, the constraint on $M_B$ from the SN+CC+BAO data is \begin{equation} M_B = -19.399 \pm 0.021 \hspace{0.2 cm} \text{mag} \hspace{0.2 cm} (\text{SN}+\text{CC}+\text{BAO}). \label{eq:result_MB_SN_CC_BAO_CMB} \end{equation} In Figure~\ref{fig:probability_MB}, we plot the probability distribution of $M_B$ obtained from the maximum likelihood analyses described above. The solid-black, dotted-blue, and dashed-red lines correspond to SN+CC, SN+BAO, and SN+CC+BAO respectively.
The constraint on $M_B$ is tighter for the SN and BAO data combined than for the SN and CC data combined. This is because the errors on $H(z)$ are significantly smaller in the BAO data than in the CC data; in addition, for BAO the constraint on $M_B$ also comes from the $d_L(z)$ data, which tightens it further. Since the constraint on $M_B$ from the BAO data is significantly tighter, when we combine the CC and BAO data the constraint follows the BAO-only result, i.e. there is no significant improvement from adding the CC data. In other words, if we consider the SN and BAO data together, we do not really need to add the CC data to constrain $M_B$. The result from the SN and CC data is nevertheless important, because these data are independent of any fiducial cosmological model, whereas the BAO data depend on a fiducial model. We now compare the Hubble parameter from the CC data with the Hubble parameter reconstructed from the SN data using the obtained mean $M_B$. We consider the mean value of $M_B$ obtained from the SN+CC data, given in Eq.~\eqref{eq:result_MB_SN_CC}, to reconstruct $H(z)$. Since we obtained this $M_B$ value by maximizing the log-likelihood for the CC $H(z)$ data in Eq.~\eqref{eq:lnlk_SN_CC}, the reconstructed $H(z)$ with this $M_B$ value should be consistent with the $H(z)$ observations of the CC data. We cross-check this by plotting the reconstructed $H(z)$ together with the CC $H(z)$ in Figure~\ref{fig:GPR_H_vs_CC_H}; they are indeed consistent with each other. We draw a similar plot for the BAO $H(z)$ data, to compare them with the reconstruction. In Figure~\ref{fig:GPR_H_vs_BAO_H}, the black colored bars correspond to the mean values of $H(z)$ and the associated standard deviations obtained from the BAO data by the method of Sec.~\ref{sec-methods} through Eq.~\eqref{eq:PL18_BAO_H}.
In the same figure, we plot the reconstructed mean $H(z)$ and the corresponding 1$\sigma$ region with a red line and a shaded orange region respectively. For this reconstruction, we consider $M_B = -19.400 \hspace{0.2 cm} \text{mag}$, as given in Eq.~\eqref{eq:result_MB_SN_BAO_CMB}. Since this $M_B$ value is itself obtained from the maximum likelihood analysis of the log-likelihood in Eq.~\eqref{eq:lnlk_SN_PL18_BAO}, it is expected that the red line and the black colored bars are quite consistent with each other. However, the standard deviations are not of similar size. This is because the variance in $FG(z)$ in Eq.~\eqref{eq:lnlk_SN_PL18_BAO_H} from the SN data (obtained by GPR) is larger than the variance in $H(z)$ from the BAO data, so the total variance is dominated by the variance in $FG(z)$ in Eq.~\eqref{eq:lnlk_SN_PL18_BAO_H}. This means that the BAO $H(z)$ data contribute little to the standard deviation of $M_B$, which further indicates that the constraint comes mainly from the BAO $d_L(z)$ data. To confirm this, we compute the constraints on $M_B$ separately from the BAO $H(z)$ data and the BAO $d_L(z)$ data, by maximizing the corresponding log-likelihoods (Eqs.~\eqref{eq:lnlk_SN_PL18_BAO_H} and~\eqref{eq:lnlk_SN_PL18_BAO_dL} respectively). We find \begin{equation} M_B = -19.383 \pm 0.053 \hspace{0.2 cm} \text{mag} \hspace{0.2 cm} (\text{SN}+\text{BAO} (H)), \label{eq:result_MB_SN_CC_BAO_CMB_H_only} \end{equation} \begin{equation} M_B = -19.404 \pm 0.026 \hspace{0.2 cm} \text{mag} \hspace{0.2 cm} (\text{SN}+\text{BAO} (d_L)), \label{eq:result_MB_SN_CC_BAO_CMB_dL_only} \end{equation} \noindent respectively. We can now see that the standard deviation of $M_B$ is significantly larger for SN+BAO($H$) than for SN+BAO($d_L$).
For this reason, the standard deviation for SN+BAO($d_L$) is similar to the result in Eq.~\eqref{eq:result_MB_SN_BAO_CMB}, while that of SN+BAO($H$) has no significant effect. We next check the consistency of the $d_L$ and $\Delta d_L$ obtained from the actual BAO observations with those reconstructed from the SN data by GPR using the mean $M_B$. We compare the two in Figure~\ref{fig:GPR_dL_vs_BAO_dL}, where the black colored bars correspond to $d_L(z)$ and $\Delta d_L(z)$ from the BAO data, the red line corresponds to the mean $d_L(z)$ from the SN data obtained by GPR with $M_B = -19.400$ mag, and the shaded orange region corresponds to the associated 1$\sigma$ confidence region. We can see that the values are consistent. As expected, the standard deviations are also of the same order of magnitude, because the constraint on $M_B$ comes mainly from the BAO $d_L$ data rather than the BAO $H$ data. To complete the discussion, in Figure~\ref{fig:SN_dL_vs_BAO_dL} we compare the values of $d_L(z)$ and the corresponding standard deviations obtained from the BAO data with those obtained directly from the SN data by the propagation of uncertainty with $M_B = -19.400$ mag. Note that no GPR is involved in this plot. The black colored bars with circle markers correspond to $d_L(z)$ and $\Delta d_L(z)$ from the BAO data, and the blue colored bars with star markers to $d_L(z)$ and $\Delta d_L(z)$ from the SN data with $M_B = -19.400$ mag. Although they are at different redshift points, we can conclude that the values are consistent (in both mean and standard deviation). All the results obtained from the different combinations of data, given in Eqs.~\eqref{eq:result_MB_SN_CC},~\eqref{eq:result_MB_SN_BAO_CMB}, and~\eqref{eq:result_MB_SN_CC_BAO_CMB}, indicate a mean value of $M_B$ of approximately $-19.4$ mag (see also Figure~\ref{fig:probability_MB}).
These results are similar to those obtained in previous studies such as \citep{Camarena:2019rmj,Gomez-Valent:2021hda,Cai:2021weh} using similar cosmological data. Note that these results are discrepant with those obtained from astrophysical observations such as stellar parallaxes and masers \citep{Greene:2021shv,Camarena:2021jlr,Dinda:2021ffa}, which yield values close to $M_B \approx -19.2$ mag. This discrepancy has already been discussed in the literature and is sometimes referred to as the $M_B$ tension (see \citep{Camarena:2021jlr} for details). \section{Summary} \label{sec-summary} The peak luminosity of type Ia supernovae is taken as a standard candle in the estimation of cosmic distances, which are expressed in terms of integrals of the scale factor $a$ and its derivatives. This is crucial for inferences about the present state of evolution of the universe, particularly the accelerated expansion. This work aims to check the consistency of this assumption through a reconstruction of the peak absolute magnitude $M_B$ of type Ia supernovae from observational data, in a model independent way and without assuming any parametrization of cosmological quantities. We first reconstruct the Hubble parameter at the CC redshift points as a function of $M_B$ with the help of Gaussian process regression (GPR), along with the corresponding standard deviation $\Delta H$ at the CC redshift points, also as a function of $M_B$. Note that the actual CC data are not involved in these reconstructions. Once we have the reconstructed $H$ and $\Delta H$ at the CC redshift points, we compare these values with the actual CC data to obtain a constraint on $M_B$. We define the corresponding likelihood through Eq.~\eqref{eq:lnlk_SN_CC} and obtain constraints on $M_B$ by maximizing it; the result is $M_B = -19.394 \pm 0.056$ mag.
After this, we deviate from the principal motivation of a model-independent analysis and also include the baryon acoustic oscillation (BAO) data. Unlike the SN and CC data, the inclusion of the BAO data makes our analysis model dependent. Although the mean value of $M_B$ remains quite consistent with the model-independent approach, this inclusion results in tighter constraints on $M_B$. For the SN+BAO data, we perform an analysis similar to that of the SN+CC case and obtain $M_B = -19.400 \pm 0.023$. Finally, combining all three data sets gives $M_B = -19.399 \pm 0.021$. Since the SN+BAO data give a significantly tighter constraint than the SN+CC data, the SN+CC+BAO result follows that of SN+BAO. Our final conclusion is that the mean value of $M_B$ used in the literature is quite consistent with the one obtained by model-independent reconstruction; to obtain tighter constraints in terms of the standard deviation, however, the model-dependent data do better. To match that accuracy, we require more data points in the SN and CC data sets. \bibliographystyle{apsrev4-1} \bibliography{refs}
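The GPR reconstruction step itself can be illustrated with a self-contained sketch: a zero-mean Gaussian process with a squared-exponential kernel, conditioned on noisy training points. The actual analysis additionally optimizes the kernel hyperparameters and propagates the full SN covariance, so this is only schematic; the mock $H(z)$ and all names are ours:

```python
import numpy as np

def gpr_predict(x_train, y_train, x_test, ell=0.5, sigma_f=1.0, sigma_n=0.05):
    """Posterior mean and standard deviation of a zero-mean GP with kernel
    k(x,x') = sigma_f^2 exp(-(x-x')^2 / (2 ell^2)), conditioned on training
    data with noise level sigma_n."""
    def kernel(a, b):
        return sigma_f**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

    K = kernel(x_train, x_train) + sigma_n**2 * np.eye(len(x_train))
    Ks = kernel(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = kernel(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Mock normalized expansion history, reconstructed at an intermediate redshift
z_obs = np.linspace(0.0, 2.0, 25)
E_obs = np.sqrt(0.3 * (1 + z_obs) ** 3 + 0.7)        # H(z)/H0 for a flat LCDM mock
mean, std = gpr_predict(z_obs, E_obs - 1.0, np.array([0.75]))  # regress residual
E_rec = mean[0] + 1.0
```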
\def\mincir{\raise -2.truept\hbox{\rlap{\hbox{$\sim$}}\raise5.truept \hbox{$<$}\ }} \def\mincireq{\hbox{\raise0.5ex\hbox{$<\lower1.06ex\hbox{$\kern-1.07em{\sim}$}$}}} \def\magcir{\raise-2.truept\hbox{\rlap{\hbox{$\sim$}}\raise5.truept \hbox{$>$}\ }} \title{Diffuse non-thermal emission in the disks of the Magellanic Clouds} \titlerunning{Relativistic particles in the Magellanic Clouds} \author{M. Persic \inst{1} \and Y. Rephaeli\inst{2} } \institute{ INAF/Trieste Astronomical Observatory, via G.B.\,Tiepolo 11, I-34100 Trieste, Italy \\ INFN-Trieste, via A.\,Valerio 2, I-34127 Trieste, Italy \\ \email{massimo.persic@inaf.it} \and School of Physics \& Astronomy, Tel Aviv University, Tel Aviv 69978, Israel \\ Center for Astrophysics and Space Sciences, University of California at San Diego, La Jolla, CA 92093, USA \\ \email{yoelr@wise.tau.ac.il} } \abstract {The Magellanic Clouds, two dwarf galaxy companions to the Milky Way, are among the brightest $\gamma$-ray sources detected by the {\it Fermi} Large Area Telescope (LAT).} {Comprehensive modeling of the non-thermal electromagnetic and neutrino emission in both Clouds.} {We self-consistently model the radio and $\gamma$-ray spectral energy distribution from their disks based on recently published Murchison Widefield Array and {\it Fermi}/LAT data. All relevant radiative processes involving relativistic and thermal electrons (synchrotron, Compton scattering, and bremsstrahlung) and relativistic protons (neutral pion decay following interaction with thermal protons) are considered, using exact emission formulae. } {Joint spectral analyses indicate that radio emission in the Clouds has both primary and secondary electron synchrotron and thermal bremsstrahlung origin, whereas $\gamma$\,rays originate mostly from $\pi^0$ decay with some contributions from relativistic bremsstrahlung and Compton scattering off starlight. 
The proton spectra in both galaxies are modeled as power laws in energy with similar spectral indices, $\sim$2.4, and energy densities, $\sim$1 eV cm$^{-3}$. The predicted 0.1--10\,GeV neutrino flux is too low for detection by current and upcoming experiments.} {We confirm earlier suggestions of a largely hadronic origin of the $\gamma$-ray emission in both Magellanic Clouds.} \keywords{ galaxies: cosmic rays -- galaxies: individual: Large Magellanic Cloud -- galaxies: individual: Small Magellanic Cloud -- gamma rays: galaxies -- radiation mechanisms: non-thermal } \section{Introduction} The spectral energy distributions (SEDs) of Cosmic Rays (CRs) outside their galactic accelerators are important for determining basic properties of CR populations and for assessing the impact of the particle interactions in the magnetized plasma in galactic disks and halos. Knowledge of these distributions is generally limited as it is usually based only on spectral radio observations. When, in addition, non-thermal (NT) X/$\gamma$-ray observations are available, reasonably detailed spectral modeling of the CR electron (CRe) and proton (CRp) distributions in star-forming environments can be very useful: sampling the SED over these spectral regions yields important insight into the emitting CRe and possibly also CRp, whose interactions with the ambient plasma may dominate (via $\pi^0$ decay) high energy ($\magcir$100 MeV) emission. The nearby Magellanic Clouds (MC), two irregular dwarf satellite galaxies of the Milky Way, may exemplify the level of spectral modeling currently feasible in a joint analysis of radio and $\gamma$-ray measurements. The Large MC (LMC) is located at a distance $d = 50$ kpc (Foreman et al. 2015); for the purpose of this study its structure is modeled as a cylinder with radius $r=3.5$ kpc and height $h=0.4$ kpc (Ackermann et al. 2016). Likewise, for the Small MC (SMC) these quantities are $d = 60$\,kpc, $r=1.6$ kpc, and $h=4$ kpc (Abdo et al. 2010b). 
Both MCs have been detected with the {\it Fermi} Large Area Telescope (LAT) as extended sources (Abdo et al. 2010a,b). Based on recent data (Lopez et al. 2018), there is statistically significant emission along the SMC ``bar'' and ``wing'' where there is active star formation. A spectro-spatial analysis of the LAT data suggests its integrated $\gamma$-ray emission to be mostly ($\geq$90\%) hadronic. Similar analyses of the LMC with multi-source models (Foreman et al. 2015; Ackermann et al. 2016; Tang et al. 2017) attempted to extract the spectral content of these sources in a statistically viable way (including details on the quality of the fits). The sources were found to be point-like (pulsars, supernova remnants, plerions, unidentified background AGNs) and extended. The latter included the large-scale disk denoted as E0 $\sim$ G1\footnote{ Emission components are labeled with ``E'' in Ackermann et al. (2016) and ``G'' in Tang et al. (2017). }, and smaller-scale components denoted as E1+E3 $\sim$ G2 (the region west of 30 Doradus), E2 $\sim$ G3 (the LMC northern region), and E4 $\sim$ G4 (the LMC western region) -- all the latter possibly encompassing different sources with different spectra. These spectral/spatial analyses revealed a galaxy-scale component dominating the emission, with a spectral shape suggesting a lepto-hadronic (Foreman et al. 2015) or hadronic (Ackermann et al. 2016; Tang et al. 2017) origin. \footnote{ As to the prominent 30 Dor massive-star--forming region, Ackermann et al. (2016) emphasize that it shines in GeV $\gamma$\,rays mainly because of the presence of PSR J0540-6919 and PSR J0537-6910. } In this paper we focus on the emission in the MC disks in an attempt to determine mean values of their magnetic fields and CRe and CRp energy densities. In spite of their detailed LAT data analyses, the above-mentioned studies were not based on self-consistent modeling of the broadband radio/$\gamma$ SED of the two galaxies. 
SED modeling is important because a firm proof of the (mostly) pionic nature of the $\gamma$-ray emission should be based on a quantitative estimate of the CRe spectrum. Clearly, the latter is primarily determined from measurements of synchrotron radio emission -- which was not accounted for in those earlier works. In an attempt to clarify, and possibly remove, some uncertainties inherent in previous modeling work, here we re-assess key aspects of (average) conditions in the MC large-scale disks. Employing a one-zone model for the extended disk emission, we self-consistently carry out detailed calculations of the emission by CRe and CRp. The radio and $\gamma$-ray data used in our analyses are taken from the aforementioned papers Lopez et al. (2018; SMC), Tang et al. (2017; LMC), Ackermann et al. (2016; LMC), and For et al. (2018; SMC, LMC). In Section 2 we briefly review the observations of extended NT emission from the MC disks. In Section 3 we review the IR/optical radiation fields permeating the MCs. In Section 4 we describe calculations of the MC disk SEDs and perform fits to the data. Prospects of neutrino detection are discussed in Section 5, followed by our conclusions in Section 6. \section{Observations of extended emission} The MCs have been extensively observed over a wide range of radio/microwave, (soft) X-ray, and $\gamma$-ray bands. As mentioned above, point-source emission, as well as extended small- and large-scale emission, has been detected from both galaxies. In this paper we focus on the extended large-scale disk emission of each galaxy because such emission traces the mean galactic properties of NT particles and magnetic fields. The spectral data sets used in our analysis are public (either tabulated or plotted) and are fully specified in Table\,1 (SMC) and Table\,2 (LMC). In this section we briefly review the observations most relevant to NT emission, leaving out details that are discussed in the cited papers. 
\bigskip \noindent $\bullet$ {\it Radio.} In a multifrequency radio continuum study of the MCs, For et al. (2018) presented closely-sampled 76--227\,MHz Murchison Widefield Array data, supplemented by previous radio measurements at lower (for the LMC) and higher (for both MCs) frequencies. Measured fluxes include emission from MC (and background) point sources, which were estimated to contribute 11\% and 23\% of the measured emission of, respectively, the SMC and the LMC. Based on statistical analyses of a set of four spectral fitting models (power law [1PL], curved PL, and double PL [2PL] with either free or fixed high-$\nu$ index), they found that: \smallskip \noindent {\it (a)} for the SMC the best-fitting model is a 1PL (in the 76--8550 MHz band) with $\alpha($85.5\,MHz -- 8.55\,GHz$) = 0.82 \pm 0.03$ (consistent with Haynes et al. 1991); whereas \smallskip \noindent {\it (b)} for the LMC the preferred model is a 2PL (19.7--8550 MHz) with a low-$\nu$ (19.7--408 MHz) index $\alpha_0 = 0.66 \pm 0.08$ (consistent with Klein et al. 1989) and a (fixed) high-$\nu$ index $\alpha_1 = 0.1$, suggesting that synchrotron radiation and thermal free-free (ff) emission dominate at, respectively, low and high frequencies. \bigskip \noindent $\bullet$ {\it X-rays.} The observed diffuse X-ray emission in the MCs has been determined to be of thermal origin (Wang et al. 1991: {\it Einstein Observatory}; Points et al. 2001: {\it R\"ontgen Satellit (ROSAT)} Position Sensitive Proportional Counters (PSPC); Nishiuchi et al. 1999, 2001: {\it Advanced Satellite for Cosmology and Astrophysics (ASCA)}); as such, it is not directly relevant to our SED analysis except for the estimated thermal plasma density, which is required to calculate the pionic emission (see below). \bigskip \noindent $\bullet$ {\it $\gamma$-rays.} Both MCs were detected at $>$100 MeV $\gamma$-rays as galaxy-scale extended sources whose emission is dominated by a diffuse component. 
The latter has been interpreted as mainly originating from CRp interacting with the interstellar gas. \smallskip \noindent {\it (a)} The SMC was first detected with 17 months of {\it Fermi}/LAT data (Abdo et al. 2010b) as a $\sim$3$^\circ$-size source, in which emission was not strongly correlated with prominent sites of star formation. More recently Lopez et al. (2018), based on 105 months of LAT Pass8 data, produced maps of the extended $>$2 GeV emission (no signal at $>$13 GeV) and found statistically significant emission along the galaxy-scale, quietly star-forming ``bar'' and ``wing''. Within a set of single-component spectral models -- i.e. PL, broken PL (with frozen spectral shape and free normalization, representing pulsars), and exponentially-cutoff PL (representing pionic emission) -- the latter provides the best fit to the total $\gamma$-ray spectrum, with only a marginal improvement of the fit if the broken PL is added. Lopez et al.'s analysis suggests that although pulsars may contribute $\leq$10\% at $>$100 MeV, the extended emission is mainly ($\geq$90\%) of pionic origin, with a $\gamma$-ray emissivity $\magcir$5 times smaller than the local (Galactic) one. \smallskip \noindent {\it (b)} The LMC was marginally ($>$4.5\,$\sigma$) detected with the {\it Compton Gamma Ray Observatory}'s ({\it CGRO}) Energetic Gamma Ray Telescope (EGRET) (Sreekumar et al. 1992) and then confirmed (33\,$\sigma$) with 11 months of {\it Fermi}/LAT data (Abdo et al. 2010a). More recent studies have focused on the spatial and spectral modeling of the $\gamma$-ray surface brightness distribution; these are briefly reviewed below. \\ {\it (i)} The first LAT-based spectro-spatial analysis of the $\gamma$-ray surface brightness distribution was performed by Foreman et al. (2015) using 5.5 years of data, in the photon energy range 0.2$-$20 GeV. 
They modeled the CR distribution and $\gamma$-ray production based on observed maps of the LMC interstellar medium, star formation, radiation fields, and radio emission. The $\gamma$-ray spectrum was described by means of analytical fitting functions for the $\pi^0$-decay, bremsstrahlung, and inverse-Compton yields: these processes were estimated to account for, respectively, 50\%, 44\%, and 6\% of the total 0.2$-$20 GeV emission. In particular they inferred the CRp spectral index to be $q_p = 2.4 \pm 0.2$ and the equipartition (with CRp) magnetic field to be $B_{\rm eq} = 2.8\,\mu$G. \\ {\it (ii)} A subsequent spectro-spatial analysis (Ackermann et al. 2016; A+16) of 6 years of LAT (P7REP) data in the 0.2--100 GeV band reported extended and point-like emissions. The (dominant) extended emission is in the form of a disk-scale component, denoted as E0 in their paper, and additional degree-scale emissions from several regions of enhanced star formation (including 30 Doradus): if pionic, the E0 emission component implies a population of CRp with $\sim$1/3 the local Galactic density, whereas the superposed small-scale emissions imply local enhancements of the CRp density by factors of at least 2--6. The spectrum of the E0 component (which does not include the degree-scale emissions from the star-forming regions) shows some curvature (see the top panel of their Fig.~7): with no reference to the underlying emission mechanism, A+16 fitted the data with a tabulated function derived from the local (Galactic) gas emissivity spectrum (as a result of its interactions with CRp), an exponentially-cutoff PL, a broken PL, and a log-parabola, and concluded that the best-fitting analytical model was the log-parabola -- noting, however, that the similarity between the log-parabola and tabulated-function models (see their Fig.~9) suggests a pionic nature of the large-scale disk emission. \\ {\it (iii)} More recently Tang et al. 
(2017; T+17) analyzed 8 years of LAT Pass8 data and deduced a deeper, more spectrally extended 0.08--80 GeV spectrum of the large-scale disk component (identified as G1; this too did not include emission from other regions), which largely overlaps with E0 of A+16. The G1 shape, now extending down to 60 MeV, was determined to be best described as a $\pi^0$-decay hump from a CRp spectrum harder than that in our Galaxy -- although they could not rule out CRe relativistic bremsstrahlung completely. \\[0.2cm] In this analysis we use the T+17 G1 data as our reference set for the diffuse large-scale LMC disk emission, and the earlier A+16 E0 data as an auxiliary set for a consistency check. \begin{table*} \caption[] {SMC radio and $\gamma$-ray data.} \centering \begin{tabular}{ l l l l l l l} \hline \hline \noalign{\smallskip} Frequency & Energy Flux & Reference & & Frequency & Energy Flux & Reference \\ log($\nu$/Hz)& $10^{-12}$erg/(cm$^2$s) & & &log($\nu$/Hz) & $10^{-12}$erg/(cm$^2$s) & \\ \noalign{\smallskip} \hline \noalign{\smallskip} 7.932 & $0.393 \pm 0.171$ & Mills (1955, 1959) & & 8.340 & $0.481 \pm 0.125$ & For et al. (2018) \\ 7.881 & $0.307 \pm 0.080$ & For et al. (2018) & & 8.356 & $0.489 \pm 0.127$ & For et al. (2018) \\ 7.924 & $0.245 \pm 0.064$ & For et al. (2018) & & 8.611 & $0.543 \pm 0.041$ & Loiseau et al. (1987) \\ 7.964 & $0.230 \pm 0.060$ & For et al. (2018) & & 9.146 & $0.588 \pm 0.084$ & Loiseau et al. (1987) \\ 7.996 & $0.260 \pm 0.068$ & For et al. (2018) & & 9.146 & $0.486 \pm 0.028$ & For et al. (2018) \\ 8.029 & $0.379 \pm 0.099$ & For et al. (2018) & & 9.362 & $0.713 \pm 0.138$ & Mountfort et al. (1987)\\ 8.061 & $0.311 \pm 0.081$ & For et al. (2018) & & 9.362 & $0.623 \pm 0.161$ & For et al. (2018) \\ 8.090 & $0.316 \pm 0.082$ & For et al. (2018) & & 9.389 & $0.637 \pm 0.073$ & Haynes et al. (1991) \\ 8.114 & $0.317 \pm 0.083$ & For et al. (2018) & & 9.677 & $0.902 \pm 0.190$ & Haynes et al. (1991) \\ 8.155 & $0.464 \pm 0.121$ & For et al. 
(2018) & & 9.932 & $1.282 \pm 0.342$ & Haynes et al. (1991) \\ 8.176 & $0.387 \pm 0.101$ & For et al. (2018) & & $22.860^{\scriptscriptstyle +0.125}_{\scriptscriptstyle -0.167}$ & $7.047 \pm 0.860$ & Lopez et al. (2018) \\ 8.199 & $0.417 \pm 0.109$ & For et al. (2018) & & $23.161^{\scriptscriptstyle +0.125}_{\scriptscriptstyle -0.167}$ & $7.998 \pm 0.853$ & Lopez et al. (2018) \\ 8.220 & $0.360 \pm 0.094$ & For et al. (2018) & & $23.462^{\scriptscriptstyle +0.125}_{\scriptscriptstyle -0.167}$ & $7.194 \pm 0.658$ & Lopez et al. (2018) \\ 8.241 & $0.620 \pm 0.161$ & For et al. (2018) & & $23.781^{\scriptscriptstyle +0.125}_{\scriptscriptstyle -0.167}$ & $5.284 \pm 0.563$ & Lopez et al. (2018) \\ 8.258 & $0.512 \pm 0.133$ & For et al. (2018) & & $24.055^{\scriptscriptstyle +0.125}_{\scriptscriptstyle -0.167}$ & $5.598 \pm 0.683$ & Lopez et al. (2018) \\ 8.276 & $0.473 \pm 0.123$ & For et al. (2018) & & $24.383^{\scriptscriptstyle +0.125}_{\scriptscriptstyle -0.167}$ & $3.357 \pm 0.754$ & Lopez et al. (2018) \\ 8.294 & $0.486 \pm 0.127$ & For et al. (2018) & & $24.684^{\scriptscriptstyle +0.125}_{\scriptscriptstyle -0.167}$ & $<2.972$ & Lopez et al. (2018) \\ 8.310 & $0.594 \pm 0.155$ & For et al. (2018) & & $24.985^{\scriptscriptstyle +0.125}_{\scriptscriptstyle -0.167}$ & $<0.879$ & Lopez et al. (2018) \\ 8.326 & $0.489 \pm 0.127$ & For et al. (2018) \\ \noalign{\smallskip} \hline\end{tabular} \end{table*} \begin{table*} \caption[] {LMC radio and $\gamma$-ray data.} \centering \begin{tabular}{ l l l l l l l} \hline \hline \noalign{\smallskip} Frequency & Energy Flux & Reference & & Frequency & Energy Flux & Reference \\ log($\nu$/Hz) & $10^{-12}$erg/(cm$^2$s)& & & log($\nu$/Hz) &$10^{-12}$erg/(cm$^2$s)& \\ \noalign{\smallskip} \hline \noalign{\smallskip} 7.294 & $1.038 \pm 0.208$ & Shain (1959) & & 9.389 & $9.555 \pm 0.490$ & Haynes et al. (1991) \\ 7.653 & $1.349 \pm 0.203$ & Alvarez et al. (1987) & & 9.677 & $17.243 \pm 1.425$ & Haynes et al. 
(1991) \\ 7.881 & $1.410 \pm 0.240$ & For et al. (2018) & & 9.677 & $14.250 \pm 1.900$ & For et al. (2018) \\ 7.924 & $1.492 \pm 0.254$ & For et al. (2018) & & 9.932 & $23.085 \pm 2.993$ & Haynes et al. (1991) \\ 7.932 & $3.154 \pm 0.342$ & Mills (1959) & & & $=========$ & \\ 7.964 & $1.449 \pm 0.246$ & For et al. (2018) & & & $10^{-11}$erg/(cm$^2$s) & \\ 7.986 & $2.748 \pm 0.581$ & Mills (1959) & & & --------------------- & \\ 7.996 & $1.437 \pm 0.245$ & For et al. (2018) & & $22.2946 \pm 0.1336$ & $0.93 \pm 0.31$ & Tang et al. (2017) \\ 8.029 & $1.956 \pm 0.333$ & For et al. (2018) & & $22.5635 \pm 0.1353$ & $1.64 \pm 0.33$ & Tang et al. (2017) \\ 8.061 & $1.872 \pm 0.318$ & For et al. (2018) & & $22.8325 \pm 0.1336$ & $1.91 \pm 0.19$ & Tang et al. (2017) \\ 8.090 & $2.021 \pm 0.344$ & For et al. (2018) & & $23.1004 \pm 0.1343$ & $2.08 \pm 0.13$ & Tang et al. (2017) \\ 8.114 & $2.043 \pm 0.347$ & For et al. (2018) & & $23.3691 \pm 0.1344$ & $2.17 \pm 0.15$ & Tang et al. (2017) \\ 8.155 & $2.379 \pm 0.405$ & For et al. (2018) & & $23.6376 \pm 0.1342$ & $1.57 \pm 0.15$ & Tang et al. (2017) \\ 7.653 & $1.349 \pm 0.203$ & Alvarez et al. (1987) & & $23.9061 \pm 0.1343$ & $1.44 \pm 0.17$ & Tang et al. (2017) \\ 8.176 & $2.175 \pm 0.370$ & For et al. (2018) & & $24.1747 \pm 0.1342$ & $1.16 \pm 0.21$ & Tang et al. (2017) \\ 8.199 & $2.134 \pm 0.363$ & For et al. (2018) & & $24.4432 \pm 0.1343$ & $0.52 \pm 0.24$ & Tang et al. (2017) \\ 8.199 & $2.743 \pm 0.774$ & Mills (1959) & & $24.7117 \pm 0.1342$ & $1.09 \pm 0.31$ & Tang et al. (2017) \\ 8.220 & $1.999 \pm 0.340$ & For et al. (2018) & & $24.9801 \pm 0.1342$ & $<0.73$ & Tang et al. (2017) \\ 8.241 & $2.324 \pm 0.397$ & For et al. (2018) & & $25.2486 \pm 0.1342$ & $0.63 \pm 0.36$ & Tang et al. (2017) \\ 8.258 & $2.257 \pm 0.384$ & For et al. (2018) & & $22.781 \pm 0.100$ & $3.524 \pm 0.912$ & Ackermann et al. (2016) \\ 8.276 & $2.313 \pm 0.393$ & For et al. 
(2018) & & $22.985 \pm 0.100$ & $3.357 \pm 0.349$ & Ackermann et al. (2016) \\ 8.294 & $2.186 \pm 0.372$ & For et al. (2018) & & $23.182 \pm 0.100$ & $3.041 \pm 0.309$ & Ackermann et al. (2016) \\ 8.310 & $2.520 \pm 0.429$ & For et al. (2018) & & $23.383 \pm 0.100$ & $2.958 \pm 0.256$ & Ackermann et al. (2016) \\ 8.326 & $2.378 \pm 0.404$ & For et al. (2018) & & $23.587 \pm 0.100$ & $2.685 \pm 0.232$ & Ackermann et al. (2016) \\ 8.340 & $2.261 \pm 0.385$ & For et al. (2018) & & $23.781 \pm 0.100$ & $1.811 \pm 0.254$ & Ackermann et al. (2016) \\ 8.356 & $2.315 \pm 0.394$ & For et al. (2018) & & $23.985 \pm 0.100$ & $0.859 \pm 0.240$ & Ackermann et al. (2016) \\ 8.611 & $3.774 \pm 0.122$ & Klein et al. (1989) & & $24.161 \pm 0.100$ & $1.167 \pm 0.326$ & Ackermann et al. (2016) \\ 9.146 & $5.367 \pm 0.420$ & For et al. (2018) & & $24.383 \pm 0.100$ & $0.980 \pm 0.315$ & Ackermann et al. (2016) \\ 9.146 & $7.406 \pm 0.420$ & Klein et al. (1989) & & $24.587 \pm 0.100$ & $<0.706$ & Ackermann et al. (2016) \\ 9.362 & $9.476 \pm 1.150$ & Mountfort et al. (1987) & & $25.183 \pm 0.504$ & $<1.119$ & Ackermann et al. (2016) \\ \noalign{\smallskip} \hline\end{tabular} \end{table*} \section{Radiation fields} A reasonably precise determination of the ambient radiation field is needed for predicting the level of $\gamma$-ray emission from Compton scattering of the radio-emitting electrons (and positrons). The total radiation field includes cosmic (background) and local (foreground) components. Relevant cosmic radiation fields include the Cosmic Microwave Background (CMB) and the Extragalactic Background Light (EBL). The CMB is a pure Planckian described by a temperature $T_{\rm CMB}=2.735\,(1+z)$ K and energy density $u_{\rm CMB} = 0.25\,(1+z)^4$ eV cm$^{-3}$. The EBL originates from direct and dust-reprocessed starlight integrated over the star formation history of the Universe. 
It shows two peaks, corresponding respectively to the Cosmic Infrared Background (CIB, at $\sim$100\,$\mu$m), which originates from dust-reprocessed starlight integrated over the star formation history of galaxies, and the Cosmic Optical Background (COB, at $\sim$1\,$\mu$m), which originates from direct starlight integrated over all stars that formed (e.g. Cooray 2016). The two peaks are described as diluted Planckians, characterized by a temperature $T$ and a dilution factor $C_{\rm dil}$. The latter is the ratio of the actual energy density, $u$, to the energy density of an undiluted blackbody at the same temperature $T$, i.e. $u = C_{\rm dil}\, a T^4$, where $a$ is the radiation constant. The dilution factors of the cosmic fields are $C_{\rm CMB} = 1$, $C_{\rm CIB} = 10^{-5.629}$, and $C_{\rm COB} = 10^{-13.726}$. A recent, updated EBL model, based on accurate galaxy counts in several spectral bands, is due to Franceschini \& Rodighiero (2017); locally ($z=0$) it can be numerically approximated as a combination of diluted Planckians, \begin{eqnarray} \lefteqn{ n_{\rm EBL}(\epsilon) ~=~ \sum_{j=1}^8 A_j \,\frac{8 \pi}{h^3c^3} \, \frac{\epsilon^2}{e^{\epsilon/k_B T_j}-1} \hspace{0.5cm} {\rm cm^{-3}~ erg^{-1}} } \label{eq:EBL} \end{eqnarray} with: $A_1=10^{-5.629}$, $T_1=29$\,K; $A_2=10^{-8.522}$, $T_2=96.7$\,K; $A_3=10^{-10.249}$, $T_3=223$\,K; $A_4=10^{-12.027}$, $T_4=580$\,K; $A_5=10^{-13.726}$, $T_5=2900$\,K; $A_6=10^{-15.027}$, $T_6=4350$\,K; $A_7=10^{-16.404}$, $T_7=5800$\,K; $A_8=10^{-17.027}$, $T_8=11600$\,K. Local radiation fields in the MCs arise from their intrinsic stellar populations and (given their close proximity) also from the Milky Way; we refer to the latter as the Galactic Foreground Light (GFL). Similar to the EBL in shape and origin, the GFL is dominated by two thermal humps, IR and optical. The corresponding energy densities (in eV cm$^{-3}$) are: {\it (i)} LMC: $u_{\rm IR} = 0.12$ and $u_{\rm opt} = 0.20$, estimated from Foreman et al. 
2015 ($u_{\rm IR+opt} = 0.32$) and the IR/opt SED (from NED \footnote{ The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory (JPL), Caltech, under contract with the National Aeronautic and Space Administration (NASA). } ); {\it (ii)} SMC: $u_{\rm IR} = 0.026$, $u_{\rm opt} = 0.062$, obtained by scaling down the LMC values, taking (from the SED, cf. NED) the optical and IR peaks to be factors of 0.215 and 0.144, respectively, of the corresponding LMC values. \section{SED models} Our main objective in this study is to determine the CR electron and proton spectra in the MC disks by spectral modeling of the NT emission over all relevant energy ranges accessible to observations. The particle, gas density, magnetic and radiation field distributions clearly vary significantly across the disk, which would in principle call for a spectro-spatial treatment. Indeed, a modeling approach based on a solution to the diffusion-advection equation has been applied in the study of the Galaxy and several nearby galaxies. However, even for (just) a diffusion-based treatment to be feasible and meaningful, the spatial profiles of key quantities, such as the particle acceleration sources, gas density, magnetic field, and generally also the diffusion coefficient, have to be specified. Given the generally very limited observational basis for reliably determining these profiles, a parameter-intensive spectro-spatial modeling approach is rarely warranted. An example is the very approximate diffusion-based approach adopted in modeling NT emission in the disks and halos of the star-forming galaxies NGC4631 and NGC4666 (Rephaeli \& Sadeh 2019), for which reasonably detailed radio spectra and spatial profiles are available; even so, the results of these analyses are not conclusive given the substantial uncertainty in the values of key parameters. 
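The normalizations of the radiation fields in Sect.~3 can be cross-checked numerically: each diluted Planckian has energy density $u = C_{\rm dil}\,aT^4$, and a short script (cgs constants assumed; only the dominant CIB and COB humps of Eq.~1 are evaluated) recovers $u_{\rm CMB} \simeq 0.26$ eV cm$^{-3}$ at $z=0$, consistent with the rounded value 0.25 quoted there:

```python
A_RAD = 7.5657e-15     # radiation constant a, erg cm^-3 K^-4
ERG_PER_EV = 1.602e-12

def diluted_planckian_u(C_dil, T):
    """Energy density u = C_dil * a * T^4 of a diluted blackbody, in eV cm^-3."""
    return C_dil * A_RAD * T**4 / ERG_PER_EV

u_cmb = diluted_planckian_u(1.0, 2.735)           # undiluted CMB at z = 0
u_cib = diluted_planckian_u(10**-5.629, 29.0)     # dominant CIB hump (A_1, T_1)
u_cob = diluted_planckian_u(10**-13.726, 2900.0)  # dominant COB hump (A_5, T_5)
```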
Radio emission in the MC disks is electron synchrotron in a disordered magnetic field whose mean value $B$ is taken to be spatially uniform, plus thermal-ff emission from a warm ionized plasma. A significant CRp component could yield additional radio emission by secondary $e^{\pm}$ produced in $\pi^{\pm}$ decays, and $\gamma$-ray emission from $\pi^{0}$ decay. In addition, $\gamma$-ray emission is produced by CRe scattering off photons of local and background radiation fields. The calculations of the emissivities from all these processes are well known and standard. We assume the CR spectral distributions to be time-independent and locally isotropic, with PL spectral form. The primary CRe spectral density (per cm$^3$ and per unit of the electron Lorentz factor $\gamma$) is $N_e(\gamma) = N_{e,0} \, \gamma^{-q_e}$ for $\gamma_{min} < \gamma < \gamma_{max}$, with $\gamma_{min} \gg 1$. As discussed later, this spectrum proves to be a good approximation to the actual primary steady-state spectrum in the relevant electron energy range. The secondary CRe spectrum can be analytically approximated as a smoothed 2PL, exponentially cutoff at high energies (see below). The assumed CRp spectrum (in units of cm$^{-3}$ GeV$^{-1}$), as a function of $E_p$/GeV, is $N_p(E_p) = N_{p,0} \, E_p^{-q_p}$ for $m_p < E_p < E_p^{max}$. \subsection{SMC} The emission spectrum from $\pi^{0}$ decay has more constraining power than a generic PL; thus, our modeling procedure begins with fitting a pionic emission profile to the $\gamma$-ray data with free normalization, slope, and high-energy cutoff. We then use the deduced (essentially, fully-determined) secondary-CRe spectrum together with an assumed primary-CRe spectrum to calculate the combined synchrotron emission, using an observationally estimated value of $B$. The fit to the radio data also includes, at high frequencies, a thermal bremsstrahlung component computed with previously determined values of the gas density and temperature. 
Finally, the full Compton X/$\gamma$-ray yield is determined. The $\pi^0$-decay $\gamma$-ray flux is computed using the emissivity in Eq.\,(15) of Persic \& Rephaeli (2019a), where the total (thermal) gas proton density is $n_g=n_{\rm HI}+2\,n_{\rm H_2}$, with the average neutral-hydrogen density $n_{\rm HI} = 0.54$ cm$^{-3}$ inferred from $M_{\rm HI} = 4.23 \, 10^8 M_\sun$ (this value of $M_{\rm HI}$ was deduced from direct determination of the HI column density from the 21\,cm emission line; Stanimirovi\'c et al. 1999), and the molecular-hydrogen density $n_{\rm H_2} = 0.02$ cm$^{-3}$ inferred from $M_{\rm H_2} = 3.2 \, 10^7 M_\sun$ (which, in turn, is derived from modeling {\it Spitzer}, Cosmic Background Explorer (COBE), InfraRed Astronomical Satellite (IRAS), and Infrared Space Observatory (ISO) far-infrared data; Leroy et al. 2007). The CRp spectrum derived from our fit to the {\it Fermi}/LAT data is $N_{p0} = 2.4 \, 10^{-10}$ cm$^{-3}$, $q_p = 2.40$, with $E_p^{max} = 30$ GeV. The model spectrum fully reproduces the 0.2--50 GeV {\it Fermi}/LAT spectrum (Fig.~\ref{fig:gamma}). With these values, the estimated CRp energy density $u_p = \int_{m_p}^{E_p^{max}} E_p \,N_p(E_p) \, dE_p$ is $\sim 0.5$ eV cm$^{-3}$. The closely related $\pi^\pm$-decay secondary CRe spectrum, $N_{se}(\gamma)$, has no free parameters once the CRp spectrum is determined \footnote{ Denoting by $\sigma_{pp}(E_p)$ the total cross-section for inelastic $pp$ collisions (see Eq.\,[79] of Kelner et al. 2006), the secondary CRe injection spectrum is $Q_{se}(\gamma) = (8/3) \,m_e \, (c / \kappa_{\pi^\pm}) \, n_g\, N_{p0}\, [m_p + (4 m_e / \kappa_{\pi^{\pm}})\, \gamma]^{-q_p}\, \sigma_{pp}(m_p + E_{\pi^\pm}/\kappa_{\pi^\pm})$ for $\gamma_{se}^{thr} < \gamma < \gamma_{se}^{max}$, with $\gamma_{se}^{thr} = m_{\pi^\pm}/(4m_e) = 68.5$ and $\gamma_{se}^{max} = E_{\pi^\pm}^{max}/(4m_e) \simeq (E_p^{max} - (3/2)\, m_p)/(4 m_e)$. 
The corresponding steady-state distribution is $N_{se}(\gamma) = 1/b(\gamma) \cdot \int_\gamma^{\gamma_{se}^{max}} Q_{se}(\gamma') \, d\gamma'$, where $b(\gamma)$ is the radiative energy-loss term. In a magnetized medium consisting of ionized, neutral and molecular gas, it is $b(\gamma) = \sum_{j=0}^2 b_j(\gamma)$, where the $b_j(\gamma)$ are loss terms appropriate to each gas phase (see Eqs.\,[4]-[6] of Rephaeli \& Persic 2015). $N_{se}(\gamma)$ is analytically approximated by $N_{se}^{\rm fit}(\gamma) = N_{se,0} \gamma^{-q_1} (1+\gamma / \gamma_{b1})^{q_1-q_2} {\rm exp}[{-(\gamma/\gamma_{b2})^\eta}]$, where $q_1$, $q_2$ and $\gamma_{b1}$, $\gamma_{b2}$ are the low-/high-energy spectral indices and breaks, and $\eta$ gauges the steepness of the high-end cutoff. }. We use $N_{se}^{\rm fit}(\gamma)$, with the parameter values reported in Table 4, to compute the corresponding leptonic yields. However, the uncertainty in the total (HI and H$_2$) gas density clearly affects the energy-loss rate $b(\gamma)$ and hence the spectral normalization and shape of $N_{se}(\gamma)$, and therefore also of $N_e(\gamma)$, and their yields. To compute the synchrotron emission we assume $B=3.5 \mu$G, based on estimates of the ordered (1.7\,$\mu$G) and random (3\,$\mu$G) fields in the SMC (Mao et al. 2008). Once the secondary-CRe synchrotron yield has been computed, the low-frequency residuals of the data are modeled using a primary CRe spectrum, $N_e(\gamma)$, with $N_{e0} = 3.7 \, 10^{-8}$ cm$^{-3}$, $q_e = 2.23$, $\gamma_{max}=10^4$. The latter is the 1PL approximation, for $100 \mincir \gamma \mincir \gamma_{max}$, of the actual steady-state primary-CRe spectrum. Neglecting diffusion and advection losses, this primary spectrum is $N_{pe}(\gamma) = k_i \gamma^{-(q_i-1)} /[b(\gamma) \,(q_i -1)]$ cm$^{-3}$ (units of $\gamma$)$^{-1}$, where $\dot{N}_i(\gamma) = k_i\, \gamma^{-q_i}$ is the CRe spectral injection rate. 
We find that, over the mentioned electron energy range, $q_i=2.28$ provides an acceptable match between the shapes of the curved spectrum and the 1PL spectrum -- this value is typical for $\gamma$-ray emission from Galactic CR accelerators (e.g. SNRs, the Crab nebula). The relative normalization between the two spectra can be based on the measured flux density at some radio frequency, or on imposing the same CRe energy density on the two spectra over the electron energy range of interest (see discussion in Rephaeli \& Persic 2015). In our case, the normalization of $N_{pe}(\gamma)$, and hence $k_i$, can be found by using both $N_{pe}(\gamma)$ and $N_{se}(\gamma)$ to compute the total (primary plus secondary) synchrotron yield, which in turn is fitted to the radio synchrotron data: in doing this, the acceptable match (in the relevant energy range) between the curved and PL spectra allows us to use $N_e (\gamma)$ without substantial loss of accuracy. Given this procedure, the uncertainties in the gas density ultimately affect $N_e(\gamma)$ as well. The spectra of the synchrotron-emitting CRe are shown in Fig.\,\ref{fig:MC_CRe_spectra}. The resulting primary electron component is dominant, accounting for $\sim$70\% of the total synchrotron and Compton yields (Fig.\,\ref{fig:SMC_radio}), but its normalization, $N_{e0}$, is not strongly constrained owing to its dependence on the assumed magnetic field. If measurements of NT--X-ray and $\sim$1--30 MeV $\gamma$-ray fluxes -- corresponding, respectively, to the CMB and CIB peaks -- become available, most of the uncertainty could be removed (e.g. Persic \& Rephaeli 2019a). 
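For concreteness, the analytic fit to the secondary spectrum defined in the footnote above can be evaluated directly. A sketch using the SMC parameter values from Table 4 as defaults (note the column units there: $N_{se0}$ in $10^{-11}$ cm$^{-3}$, $\gamma_{b1}$ in $10^2$, $\gamma_{b2}$ in $10^4$):

```python
import math

def n_se_fit(gamma, n0=0.14e-11, q1=0.10, q2=2.60,
             gb1=1.5e2, gb2=0.95e4, eta=2.65):
    """Analytic approximation to the steady-state secondary-CRe spectrum,
    N(g) = N0 g^-q1 (1 + g/g_b1)^(q1-q2) exp[-(g/g_b2)^eta],
    with the SMC parameter values of Table 4 as defaults."""
    return (n0 * gamma**(-q1) * (1.0 + gamma / gb1)**(q1 - q2)
            * math.exp(-((gamma / gb2)**eta)))

# The spectrum goes as ~gamma^-q1 well below gamma_b1, steepens towards
# ~gamma^-q2 above it, and cuts off super-exponentially above gamma_b2.
```

Evaluating it near the injection threshold ($\gamma \simeq 68.5$) and at the break scales gives a quick sanity check of the tabulated normalization.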
In addition to this modeling uncertainty, the considerable coupling between the CRe spectral parameters implies an appreciable range of the deduced values of $N_{e0}$ and $q_e$: this range can be roughly bracketed by a flatter slope ($q_e=2.2$) with a lower normalization ($N_{e0} = 2.9 \, 10^{-8}$ cm$^{-3}$) or, alternatively, a steeper slope ($q_e=2.3$) with a higher normalization ($N_{e0} = 6.7 \, 10^{-8}$ cm$^{-3}$) -- both with the same $\gamma_{max}$ as the reference model (Table 4). At high frequencies a relatively flat, $\propto \nu^{-0.1}$, component represents diffuse thermal-ff emission (Spitzer 1978). The latter flux may be gauged to the H$\alpha$ flux if both emissions come from the same emitting volume (HII regions), because in this case the relevant warm-plasma parameters (temperature, density, filling factor) are the same. Thus the measured (optical) H$\alpha$ flux may be used to predict the (radio) ff emission. We model the thermal-ff emission by combining Eqs.\,(3) and (4a) of Condon (1992), using $T_e = 1.1 \, 10^4$ K (Toribio San Cipriano et al. 2017) and $F(H\alpha) = 1.6 \, 10^{-8}$ erg cm$^{-2}$ s$^{-1}$ (Kennicutt et al. 1995). The (primary) synchrotron and thermal-ff normalizations are well separated in frequency, so they do not significantly affect each other. The radio model is shown alongside the data in Fig.\,\ref{fig:SMC_radio}-right. Although subdominant, secondary CRe contribute appreciably to the total CRe population of the SMC. Whereas the primary spectrum is approximately PL, the secondary spectrum is clearly curved (see Fig.\,\ref{fig:MC_CRe_spectra}). The resulting curvature of the total CRe spectrum is reflected in the total synchrotron spectrum (computed using Eq.\,(9) of Persic \& Rephaeli (2019a) for $N_e(\gamma)$ and its straightforward generalization for $N_{se}(\gamma)$). 
Therefore the total radio spectrum, which consists of synchrotron and thermal-ff emission at low and high frequencies, respectively, is not a smooth 2PL but shows some extra structure. A 3rd-order polynomial (in log units) outperforms For et al.'s (2018) ``preferred'' 1PL model ($\Delta$BIC$>0$; Table 3). This polynomial (4 free parameters; Fig.\,\ref{fig:SMC_radio}-left) is matched by the physical model described above, i.e. a low-frequency component representing synchrotron emission (3 free parameters) plus a high-frequency $\propto \nu^{-0.1}$ PL representing thermal-ff emission (1 free parameter); this model is shown in Fig.\,\ref{fig:SMC_radio}-right. \begin{table*} \caption[] {Summary of fits to the SMC radio spectrum.} \centering \begin{tabular}{ l l l l l l l l l l l l} \hline \hline \noalign{\smallskip} & ${\chi^2}^\bullet$ & $\chi^2_{\rm red}$ & BIC$^\ddagger$ & $\Delta$BIC & dof & log$(S_0)$ & $\alpha$ & $c_0$ & $c_1$ & $c_2$ & $c_3$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} PL$^\star$ & 32.3 & 1.2 & 298.0 & & 27 & 2.284 & $-0.82$ & & & & \\ P$_3^+$ & 22.3 & 0.9 & 294.7 & 3.3 & 25 & & & 1.763 & $-$0.968 & 0.09 & 0.315 \\ \noalign{\smallskip} \hline\end{tabular} \smallskip $^\star$ Power law: ${\rm log}(S_\nu) = {\rm log}(S_0) + \alpha\, {\rm log}(\nu/0.2\,{\rm GHz})$ from Eq.(4) and Table 4 of For et al. (2018). \smallskip \noindent $^+$ 3rd-order polynomial: ${\rm log}(S_\nu) = c_0 + c_1\,x + c_2\,x^2 + c_3\,x^3$, with $x = {\rm log}(\nu/{\rm GHz})$. \smallskip \noindent $^\bullet$ $\chi^2$ calculations use actual fluxes (Table 3 of For et al. 2018), not their logarithms. \smallskip \noindent $^\ddagger$ See definitions and discussion in Sect. 4.2.1 of For et al. (2018). \smallskip \end{table*} With the CRe spectra essentially determined, we can calculate the Compton and NT-bremsstrahlung yields from CRe scattering off CMB/EBL/GFL photons and thermal-plasma nuclei, using Eqs.\,(2.42) and (3.1) of Blumenthal \& Gould (1970), respectively. 
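The $\Delta$BIC entry in the table can be reproduced from the tabulated $\chi^2$ values alone: assuming Gaussian errors, BIC $= \chi^2 + k\,{\rm ln}\,n$ up to a model-independent constant, so $\Delta{\rm BIC} = \Delta\chi^2 - \Delta k\,{\rm ln}\,n$ with $n = {\rm dof} + k = 29$ flux points (this BIC convention is our assumption; For et al. 2018 give the full definition):

```python
import math

chi2_pl, k_pl, dof_pl = 32.3, 2, 27   # 1PL fit (2 parameters)
chi2_p3, k_p3, dof_p3 = 22.3, 4, 25   # 3rd-order polynomial fit (4 parameters)
n = dof_pl + k_pl                     # 29 flux points (= dof_p3 + k_p3 as well)

# BIC difference: chi^2 improvement minus the extra-parameter penalty
delta_bic = (chi2_pl - chi2_p3) - (k_p3 - k_pl) * math.log(n)
print(f"Delta BIC = {delta_bic:.1f}")  # 3.3, matching the table
```

The $\chi^2$ improvement of 10 thus outweighs the $2\,{\rm ln}\,29 \simeq 6.7$ penalty for the two extra parameters.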
The photon fields are described in section 3; as to the densities of plasma nuclei, we assume $n_i = 0.033$ cm$^{-3}$ (Mao et al. 2008) and metallicity $Z^2=1.2$ (Sasaki et al. 2002). Although the shape of the $\gamma$-ray spectrum is distinctly pionic, NT-bremsstrahlung peaks near the pionic peak ($\nu \sim 10^{23}$ Hz), where it contributes $\sim$3\% of the measured flux; Compton/starlight emission, which peaks at $\nu \sim 10^{22}$ Hz, contributes $\mincir$1\% of the flux at the lowest LAT data point. The predicted Compton/CMB spectral flux, too, is shown in Fig.\,\ref{fig:SMC_SED}. We found no need to introduce a further, broken-PL component in the $\gamma$-ray band representing pulsars, in agreement with Lopez et al. (2018). The resulting SED model, overlaid on the data, is plotted in Fig.\,\ref{fig:SMC_SED}. An upper limit on the mean magnetic field, $B_{max}$, can be estimated by assuming that the measured radio emission is fully produced by secondary CRe. By matching the measured and predicted emission levels we infer a limiting value $B_{lim} \simeq 7 \, \mu$G. This upper limit, which is twice the observationally deduced value, is unrealistically high because, most notably at low frequencies, the {\it shapes} of the measured and predicted synchrotron spectra do not match well, and because primary CRe obviously contribute to the measured radio flux. A lower limit, $B_{min}$, can be estimated from the fact that if $B$ decreases, the secondary synchrotron yield decreases too; to keep the total synchrotron yield unchanged, the primary yield must then increase -- by increasing the spectral normalization ($N_{e0}$) and possibly also varying the spectral shape ($q_e$, $\gamma_{max}$) of the primary CRe spectrum. At low energies the primary spectrum is steeper than the secondary one, so at some point the resulting synchrotron spectrum becomes progressively and systematically steeper than the $<$230\,GHz radio data; this yields $B_{min} \simeq 2\,\mu$G. 
Based on our SED analysis, the value $B=3.5\,\mu$G suggested by Mao et al. (2008) lies comfortably between the estimated bounds. \subsection{LMC} Our fitting approach differs from that adopted for the SMC. The fact that the radio spectrum is best fitted by a 2PL model, interpreted as a combination of (individually 1PL) synchrotron and thermal-ff components (For et al. 2018), indicates that the synchrotron emission is generated by a 1PL CRe spectrum. This is unlike the case of the SMC, for which the interplay of primary and secondary CRe with different spectra results in a more structured, non-PL form. With the primary CRe spectrum determined from fitting the radio data, the Compton $\gamma$-ray yield is calculated. To this a modeled pionic component is added and fit to the $\gamma$-ray data. Doing so enables determination of the secondary CRe spectrum and the resulting yields, which turn out to be very minor in comparison with those of primary CRe; thus, no iteration of the fitting process is required. The radio flux is dominated by synchrotron emission with $\alpha = 0.66$ at the lowest measured frequencies, and by thermal-ff emission with $\alpha = -0.1$ at the highest frequencies (For et al. 2018). Adopting these spectral indices for the two components, we fit the full radio database with $T_e = 1.3 \,10^4$ K and $F(H\alpha) = 2\, 10^{-7}$ erg cm$^{-2}$ s$^{-1}$ (Toribio San Cipriano et al. 2017; Kennicutt et al. 1995) for the thermal-ff component, and $B = 4.3\,\mu$G (Gaensler et al. 2005) \footnote{ This field value, from a Faraday rotation study of the LMC, is consistent with Mao et al.'s (2012) estimates, $B_{\rm eq} \sim 2\,\mu$G from equipartition with CRp assuming $q_p=2$ and $B < 7\,\mu$G from using a (deduced) lower limit on $N_{e0}$ in fitting the 1.4 GHz radio flux. }, $N_{e0} \simeq 3.5\, 10^{-7}$ cm$^{-3}$ and $\gamma_{\rm max} \simeq 2.4\, 10^4$ for the synchrotron component. 
(The cutoff is needed for consistency with the measured flux at $\nu \magcir 3\, 10^{23}$ Hz.) We note that subsequent results (see below) suggest that the secondary CRe are only $\sim$10\% of the total. So the (quasi-)1PL primaries dominate the CRe population (Fig.\,\ref{fig:MC_CRe_spectra}) and the emerging synchrotron emission. Therefore, the combined synchrotron and thermal-ff radio spectrum matches the smooth 2PL profile of For et al.'s (2018) fit. The radio model is shown in Fig.\,\ref{fig:LMC_radio}. Next, we calculate the X/$\gamma$-ray Compton and NT-bremsstrahlung yields from (primary) CRe scattering off CMB/EBL/GFL photons and thermal-plasma nuclei: to this aim we assume the photon fields described in section 3 and, respectively, a plasma characterized by an average density $n_i = 0.012$ cm$^{-3}$ (Sasaki et al. 2002; Cox et al. 2006) and metallicity $Z^2=1.2$ (Sasaki et al. 2002). Both emissions fall short of reproducing the $\gamma$-ray data. The pionic emission is computed with $n_{\rm HI}=1$ cm$^{-3}$ (from $M_{\rm HI}=3.8\, 10^8 M_\sun$: Staveley-Smith et al. 2003) and $n_{\rm H_2} = 6.6\, 10^{-2}$ cm$^{-3}$ (from $M_{\rm H_2}=5\, 10^7 M_\sun$, measured from CO emission: Fukui et al. 2008). From our fitting, the matching CRp spectrum has $q_p = 2.38$ and $E_p^{max} = 80$ GeV for the T+17 data (Fig.\,\ref{fig:LMC_SED}-{\it top}), and $q_p = 2.6$ and $E_p^{max} = 25$ GeV for the A+16 data (Fig.\,\ref{fig:LMC_SED}-{\it bottom}). The corresponding CRp energy densities are, respectively, 1 and 1.5 eV cm$^{-3}$ -- the difference being largely due to the somewhat higher flux level in the A+16 data. In spite of the observational uncertainties in the two datasets, we see that for either dataset the pionic yield largely dominates the $\gamma$-ray emission, so the pionic nature of the measured $\gamma$-ray spectrum appears well established. 
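The two-component decomposition of the LMC radio spectrum can be sketched as a sum of two power laws with the For et al. (2018) indices; the normalizations below are illustrative placeholders, not fitted values:

```python
import math

def radio_model(nu_ghz, s_sync=1.0, s_ff=0.3, a_sync=0.66, a_ff=0.1):
    """Toy two-component radio spectrum (arbitrary flux units): synchrotron
    (index 0.66) plus thermal free-free (index 0.1), normalized at 1 GHz.
    s_sync and s_ff are hypothetical normalizations for illustration."""
    return s_sync * nu_ghz**(-a_sync) + s_ff * nu_ghz**(-a_ff)

def local_index(nu_ghz, eps=1e-3):
    """Effective spectral index alpha = -d ln S / d ln nu at nu_ghz."""
    return -(math.log(radio_model(nu_ghz * (1 + eps)))
             - math.log(radio_model(nu_ghz))) / math.log(1 + eps)

# Synchrotron dominates at low frequency, free-free at high frequency,
# so the effective index flattens from ~0.66 towards ~0.1 with rising nu.
```

This flattening with frequency is the smooth 2PL behaviour referred to above.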
In addition to the dominant pionic component, a subdominant, 7\% (5\%), leptonic contribution peaks at 500 (400) MeV, similarly to the pionic one: Compton/COB and NT bremsstrahlung emissions contribute, respectively, 6.5\% (4\%) and 0.5\% (1\%) to the peak of the observed emission in the T+17 (A+16) database. The secondary CRe spectrum is derived and fitted analytically as described in section 4.1: the fitting parameters are reported in Table 4 for both sets of LAT data. As mentioned, in both cases the radiative contribution of the secondary CRe is minor: this means that the corresponding primary PL spectra are essentially the same \footnote{ For the adopted nominal densities of the gas and of the magnetic and radiation fields, the inferred primary CRe injection index in the LMC is $q_i=2.22$ for both the T+17 and A+16 fits. } and (for each data set) no fitting iteration is required. The primary and secondary CRe spectra are shown in Fig.\,\ref{fig:MC_CRe_spectra}. Primary CRe largely dominate the CRe population in the LMC, even more so than in the SMC. However, their density is effectively unconstrained because it depends on an assumed magnetic field value (even though deduced, as in our case, from Faraday-rotation measurements), rather than on a value derived by modeling the NT--X-ray emission as Compton/CMB (e.g., Persic \& Rephaeli 2019a,b) or the hard--X-ray/MeV emission (when it becomes available) as Compton/IR. Also, appreciable modeling uncertainty stems from the inter-dependence among primary-CRe parameters, which results in a range of acceptable values of the spectral index and normalization, from $q_e=2.26$ with $N_{e0} = 2.22\, 10^{-7}$ cm$^{-3}$ and $\gamma_{max} = 2.0\, 10^4$ to $q_e=2.40$ with $N_{e0} = 6.21\, 10^{-7}$ cm$^{-3}$ and $\gamma_{max} = 2.7\, 10^4$. \footnote{ Values referring to the T+17 LAT data. A similar covariance, with only slightly different values, is found when the A+16 LAT data are used. 
} \begin{table*} \caption[] {SED model parameters.} \centering \begin{tabular}{ l l l l l l l l l l l l l l l} \hline \hline \noalign{\smallskip} & $N_{e0}$ &$q_e$ & $\gamma_{max}$& $u_p$ &$q_p$&$E_p^{max}$ & $N_{se0}$ &$q_1$&$q_2$&$\gamma_{b1}$&$\gamma_{b2}$ &$\eta$& F(H$\alpha$) & $T_e$ \\ &{\tiny cm$^{-3}$} & & {\tiny $10^4$}&{\tiny eV/cm$^3$}& &{\tiny GeV}&{\tiny $10^{-11}$cm$^{-3}$}& & &{\tiny $10^2$}&{\tiny $10^4$}& &{\tiny erg/(cm$^2$s)}&{\tiny $10^4$\,K}\\ \noalign{\smallskip} \hline \noalign{\smallskip} SMC &$3.70~ 10^{-8}$& 2.23 & $1.0$ & 0.45 & 2.40& 30 &$0.14$ & 0.10& 2.60& 1.5 & $0.95$ & 2.65 & $1.6~10^{-8}$&$1.1$\\ LMC$^+$ [1] &$3.50~ 10^{-7}$& 2.32 & $2.4$ & 1.00 & 2.38& 80 &$0.50$ & 0.05& 2.67& 1.2 & $2.75$ & 3.20 & $2.0~10^{-7}$&$1.3$\\ ~~~~~~~~~~~ [2] &$3.30~10^{-7}$& 2.32& $2.6$ & 1.55 & 2.60& 25 &$1.55$ & 0.13& 2.75& 1.0 & $0.85$ & 3.20 & $2.0~10^{-7}$&$1.3$\\ \noalign{\smallskip} \hline\end{tabular} \smallskip \noindent $^+$ {\it Fermi}/LAT $\gamma$-ray data from T+17 [1] and A+16 [2]. \smallskip \end{table*} \section{Neutrino emission} Given the apparently dominant $\pi^0$-decay origin of the $\gamma$-rays produced in the interstellar medium of both MCs, it is clear that $\pi^\pm$-decay neutrinos are also produced. Our calculations (following Kelner et al. 2006) of the predicted muon- and electron-neutrino spectra of both galaxies indicate (Fig.\,\ref{fig:MC_neutrinos}) that the broadly-peaked ($\sim$0.1--10 GeV) neutrino flux is too low for detection by current and upcoming $\nu$ projects. This conclusion is based on the estimated observation time needed to detect the LMC with an experiment whose detection sensitivity is comparable to that of the Antarctica-based IceCube+DeepCore Observatory, the most sensitive current/planned $\nu$-detector at neutrino energies $10 < E_\nu/{\rm GeV} <100$ (e.g. Bartos et al. 2013). The latter's effective area (Abbasi et al. 
2012) is $A_{\rm eff}(E_\nu) = 40 \left( \frac {E_\nu}{100\,{\rm GeV}}\right)^2$ cm$^2$ (for $\nu_{\mu}$; half as large for $\nu_e$) in this energy range (Bartos et al. 2013). Only in the narrow energy range 10--50\,GeV do the IceCube+DeepCore sensitivity and the predicted LMC diffuse spectral $\nu$-flux effectively overlap (cf. the T+17 LAT dataset). In this band the latter can be approximated as $\frac{dN_\nu} {dE_\nu} \sim 10^{-10} \left( \frac{E_\nu}{100\, {\rm GeV}} \right)^{-2.5} {\rm cm}^{-2} {\rm s}^{-1} {\rm GeV}^{-1}$. The corresponding number of detected neutrinos, $N_\nu = t_{\rm obs} \int_{10\,{\rm GeV}}^{50\,{\rm GeV}} \frac{dN_\nu} {dE_\nu} A_{\rm eff}(E_\nu) dE_\nu$, is $N_\nu \sim 10 (t_{\rm obs}/{\rm yr})$. The detector background (for up-going events) is dominated by atmospheric neutrinos produced by cosmic rays in the northern hemisphere; its energy spectrum is approximately flat in the relevant energy range, at a level $\sim$$10^2 (\Delta\Omega_{\rm LMC} / 1.6\, {\rm sr})$ GeV$^{-1}$ yr$^{-1}$ (Bartos et al. 2013 and references therein). So the net 10--50\,GeV background rate is $\sim$$4 \times 10^3 (\Delta\Omega_{\rm LMC} / 1.6\, {\rm sr})$ yr$^{-1} \sim 60$ yr$^{-1}$ (assuming the LMC angular radius to be 5 deg). Based on this crude estimate, observation of diffuse GeV neutrinos from the LMC disk would imply $S/N < 0.2$. \section{Conclusion} The SMC and the LMC, star-forming Galactic satellite galaxies, are among the brightest sources in the {\it Fermi}/LAT $\gamma$-ray sky. We self-consistently modeled the radio/$\gamma$ SED of both galaxies using the latest available radio and LAT data and exact emissivity formulae for the relevant emission processes. Both SEDs were modeled with the radio data interpreted as a combination of NT electron synchrotron emission and thermal electron bremsstrahlung, and the $\gamma$-ray data as a combination of $\pi^0$-decay emission from CRp interacting with the ambient gas plus leptonic emission. 
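The neutrino detectability estimate of the previous section can be reproduced numerically; a sketch using the approximate spectral flux, the quoted effective area, and a 5-deg LMC angular radius (the year length and integration grid are implementation choices):

```python
import math

SEC_PER_YR = 3.156e7

def integrand(e_gev):
    """Spectral nu-flux (text approximation) times IceCube+DeepCore A_eff."""
    flux = 1e-10 * (e_gev / 100.0)**-2.5   # cm^-2 s^-1 GeV^-1
    a_eff = 40.0 * (e_gev / 100.0)**2      # cm^2 (nu_mu)
    return flux * a_eff                    # s^-1 GeV^-1

# trapezoidal integration of the signal rate over the 10-50 GeV band
es = [10.0 + 0.1 * i for i in range(401)]
rate = sum(0.5 * (integrand(a) + integrand(b)) * (b - a)
           for a, b in zip(es[:-1], es[1:]))
n_sig = rate * SEC_PER_YR                  # ~10 events per year

# flat atmospheric-nu background, ~1e2 (dOmega/1.6 sr) GeV^-1 yr^-1,
# integrated over the 40 GeV band for a 5-deg LMC angular radius
d_omega = 2.0 * math.pi * (1.0 - math.cos(math.radians(5.0)))  # ~0.024 sr
n_bkg = 1e2 * (d_omega / 1.6) * 40.0                           # ~60 per year
print(f"signal ~ {n_sig:.0f}/yr, background ~ {n_bkg:.0f}/yr")
```

The resulting signal-to-background ratio of $\sim$0.17 per year of observation matches the $S/N < 0.2$ quoted in the text.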
The CRp spectra appear similar in the two galaxies, with $q_p \sim 2.4$ and $u_p \sim 1$ eV cm$^{-3}$. Our spectral findings are qualitatively and quantitatively new for the SMC, and confirm and strengthen previous results for the LMC. In detail, we quantify Lopez et al.'s (2018) suggestion of a mostly pionic origin of the LAT-measured $\gamma$-ray emission of the SMC, only $\mincir$3\% of the peak flux being due to NT bremsstrahlung; as to the LMC, our analysis suggests that 93\%, 6.5\%, and 0.5\% of the LMC peak $\gamma$-ray flux (at 0.5 GeV) are accounted for by pionic, Compton/(EBL+GFL), and NT-bremsstrahlung emission -- in broad agreement with Foreman et al. (2015). As we have noted, for the SMC disk we used Lopez et al.'s (2018) LAT dataset, which is only mildly contaminated by an estimated $\sim$10\% contribution from unresolved pulsars, whereas for the LMC disk we used LAT datasets that are based on maps that do not include emission from local gas and CR inhomogeneities associated with actively star-forming regions (A+16; T+17). Thus, our pionic emission modeling results are unlikely to be appreciably affected by emission from individual sources. As stated in Section 4, our treatment is based on determining the particle steady-state spectral distributions in the disk by fits to the radio and $\gamma$-ray data, using previously deduced mean disk values of the gas density and of the magnetic and radiation fields. An assessment of the combined uncertainties in the predicted particle radiative (and neutrino) yields is based firstly on the precision level of the observationally determined values of $n_g$ and $B$. Whereas the deduced CRe normalization is (essentially) independent of $n_g$, its dependence on $B$ is significant, $N_{e0} \propto B^{-(\alpha +1)}$, i.e. roughly $B^{-1.7}$ for $\alpha \approx 0.7$. The deduced CRp normalization is $N_{p0} \propto 1/n_g$. 
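The two scalings just quoted translate directly into multiplicative biases on the fitted normalizations; a minimal sketch (the example numbers are illustrative):

```python
# Propagation of mean-field / gas-density misestimates into the fitted
# normalizations, using the scalings quoted in the text:
# N_e0 propto B^-(alpha+1),  N_p0 propto 1/n_g.
def ne0_bias(b_ratio, alpha=0.7):
    """Multiplicative bias on N_e0 if B is misestimated by factor b_ratio."""
    return b_ratio**-(alpha + 1.0)

def np0_bias(ng_ratio):
    """Multiplicative bias on N_p0 if n_g is misestimated by factor ng_ratio."""
    return 1.0 / ng_ratio

# e.g. a field overestimated by 50% biases N_e0 low by a factor ~2:
print(f"{ne0_bias(1.5):.2f}")  # 0.50
```

This is why plausible factor-of-two uncertainties in $B$ and $n_g$ dominate the normalization error budget discussed below.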
Given the substantial uncertainties in both $n_g$ and $B$, these clearly also affect the overall normalization of the particle spectra, which can be uncertain by a factor of 2--3. However, the constraining power of the fits to the measurements lies mostly in the well-known spectral shapes of the predicted synchrotron and $\pi^0$-decay processes. As explained in the previous section, the spectral fits are quite good, with a typical level of uncertainty of a few tens of percent. Even though substantial, the overall level of uncertainty is not large enough to question our qualitative conclusion on the pionic nature of the $\gamma$-ray emission. The SMC radio data have large error bars below 230 MHz, where $q_e$ would be best determined, are unsampled (but for one point) between 230 and 1400 MHz, where the effect of the spectral truncation would be observed, and are only sparsely sampled above 1400 MHz. This leaves both $q_e$ and $\gamma_{max}$ poorly determined. The LMC radio spectral data permit determination only of $q_e$, as no truncation is apparent; we estimated $\gamma_{max}$ by balancing the lower limit implied by the radio spectrum against the upper limit implied by the $\gamma$-ray spectrum. Indeed, a larger value would shift the Compton/(EBL+GFL) blue peak to higher frequencies and to a higher flux level, deteriorating the fit to the $\gamma$-ray spectrum at frequencies ${\rm log}(\nu)>24$. In our joint analysis of the broad-band SEDs of the MCs, which uses exact emissivity formulae, the leptonic radio and $\gamma$-ray emissions are coupled and rest on measures of the magnetic field that are independent of particle/field energy equipartition considerations; the pionic emission is then formally determined by modeling the residuals of the LAT data after the leptonic yield is accounted for. 
In general, uncertainties on the CRe spectrum affect the determination of the CRp spectrum, because the latter is obtained by fitting the $\gamma$-ray data once the leptonic yields have been properly accounted for. However, we consider our CRp results reasonably safe because of the clear dominance of pionic emission in the $\gamma$-ray spectrum. For the SMC, the LAT data are sufficient to define the CRp spectral parameters, $q_p = 2.4$ and $E_p^{max} = 30$ GeV. For the LMC, however, some inconsistency between the T+17 and A+16 LAT datasets implies that the derived CRp spectrum is, respectively, flatter ($q_p = 2.4$) with no clearly discernible high-energy truncation ($E_p^{max} \mincir 80$\,GeV), or steeper ($q_p = 2.6$) with a clear truncation ($E_p^{max} = 25$\,GeV) and a $\sim$50\% higher normalization. (Both $q_p$ values are consistent within errors with Foreman et al.'s (2015) earlier estimate.) The reason for the discrepancy between the two datasets is not obvious, as the T+17 and A+16 data extraction regions largely overlap with each other and with the LMC disk, and both have been cleaned of identified (point-like or extended) sources. It should be pointed out that in the T+17 dataset the quality of the four highest-frequency points does not allow a definite estimate of $E_p^{max}$, whereas the four lowest-frequency points enable a clear view of the rising portion of the pionic hump. Ongoing {\it Fermi}/LAT measurements will likely improve the quality of the spectrum and clarify this issue. For both galaxies we obtained truncation energies systematically higher for CRp than for CRe -- probably because energy losses are much less efficient for the former than for the latter in the relatively low-density MC interstellar gas. The secondary CRe spectra are at best as well determined as those of the parent CRp. Our model secondary spectra (with no free parameters) are relatively well defined in the MCs. 
The primary CRe spectra, determined by fitting the radio data after secondary CRe have been accounted for, are however less well constrained. First, the measured radio fluxes include emission from individual background and MC-disk point sources that contribute $\sim$20\% of the total emission and are spectrally similar to the overall emission (For et al. 2018), so there is some intrinsic uncertainty in the predicted extended emission's spectral shape and normalization ($N_{e0}$ is biased high). Secondly, because of the $q_e$--$N_{e0}$ degeneracy, the uncertainty on $N_{e0}$ affects the determination of the spectral slope. Thus, primary CRe are rather poorly constrained in both MCs. In spite of this, our assumption of a (truncated) 1PL representing primary CRe is sound. The PL is a fair approximation, over the relevant CRe energy range, to the actual spectrum computed accounting for realistic energy losses in the MC disk environments. The approximation is particularly good for the LMC. A possible underestimate of the gas content in the MCs due to the presence of ``dark'' (unseen) neutral gas, i.e. HI gas optically thick in the 21\,cm line and/or H$_2$ gas with no associated CO emission, could increase the uncertainty on the total (atomic plus molecular) gas density. The CO (J=1--0) transition, often used to measure the total H$_2$ mass, may be hard to detect in low-$Z$ galaxies such as the MCs (Rolleston et al. 2002; Requena-Torres et al. 2016), so the relatively bright [C\,II]$\lambda$158\,$\mu$m line can be used instead: in dwarf galaxies, some $>$70\% of the H$_2$ may not be traced by CO (1--0) but is well traced by [C\,II]$\lambda$158\,$\mu$m (Madden et al. 2020). Another important aspect to investigate is the mixing of dust into the interstellar medium (ISM) and the spatial variations of its properties (gas clumpiness affects emission levels, see above). 
Comparing the distributions of the IR dust emission and of several gas tracers (H$\alpha$, 21\,cm, CO emission), whose different emission processes highlight the distribution of gas under different conditions, is instructive. In the LMC, this analysis reveals that: {\it (i)} dust emission, sampled in the Multiband Imaging Photometer for {\it Spitzer} (MIPS) 70 and 160\,$\mu$m bands, is well mixed with the large-scale HI 21\,cm emission; {\it (ii)} H$\alpha$ from star-forming H\,II regions is confined to the massive star formation region, where also warmer ($\sim$120\,K) 24\,$\mu$m emission is found; and {\it (iii)} the {\it Spitzer} InfraRed Array Camera (IRAC) 8\,$\mu$m band, which traces polycyclic aromatic hydrocarbons, correlates well with the HI gas but is absent from the massive-star--forming H\,II regions. All in all, the dust emission revealed by the combined IRAC 8\,$\mu$m and MIPS 24, 70 and 160\,$\mu$m bands traces all three phases of the ISM gas (Meixner et al. 2006). Underestimating the gas content would affect the particle spectra in the following ways: {\it (i)} overestimate the CRp spectral normalization; {\it (ii)} underestimate the Coulomb and bremsstrahlung energy losses, $b_0(\gamma)$ and $b_1(\gamma)$ (Rephaeli \& Persic 2015), which would cause the computed steady-state secondary-CRe spectrum to be, respectively, higher and steeper; {\it (iii)} bias the derived primary-CRe spectrum lower and flatter. Uncertainties in the 3D structure of the MCs (mainly for the SMC; Abdo et al. 2010b) directly affect only the CRe and CRp spectral normalizations, which must be rescaled to match the spectral flux when the emitting volume changes -- this holds for the relevant leptonic emissivities, which depend linearly on the CRe normalization. Thus, the model SEDs are nearly unaffected by variations of the MC structure parameters. 
Another source of uncertainty in $N_{e0}$ stems from assuming a magnetic field (for both galaxies), in our case field values derived from Faraday rotation studies of extragalactic polarized sources seen through the MCs. These field strengths are measured along the line of sight to the background sources; thus, they are line-averaged fields, whereas the field in the synchrotron emissivity expression is a volume average. The two values may or may not be the same: the potential mismatch introduces another systematic uncertainty in the determination of $N_{e0}$ from fitting the radio spectrum. For example, if $B$ is biased high, then $N_{e0}$ and all leptonic yields are biased low, so the resulting $N_{p0}$ is biased high. The main breakthrough needed in MC SED studies is the measurement of diffuse NT X-ray emission from their disks. As exemplified in our studies of radio-lobe SEDs (Persic \& Rephaeli 2019a,b), the NT 1\,keV flux, interpreted as Compton/CMB emission, sets the normalization of the CRe spectrum: in the MC case, given that the excellent pionic fit to the $\gamma$-ray emission firmly defines the secondary spectrum, the 1\,keV flux would effectively measure the primary normalization. Modeling the radio spectrum as synchrotron radiation would then provide the primary spectral shape -- and, importantly, the magnetic field. A spectral determination of $B$ would bypass the need to assume particle/field equipartition, and would return a volume-averaged value that better represents the mean field than the line averages yielded by Faraday-rotation--based measurements. Deeper X-ray observations of both MCs, with better spatial/spectral resolution than currently available, will be highly beneficial in searching the diffuse disk emission for a NT component. \medskip \noindent {\it Acknowledgement.} We acknowledge insightful comments by an anonymous referee that improved the clarity and presentation of our work.
Title: Validation of TESS exoplanet candidates orbiting solar analogues in the all-sky PLATO input catalogue
Abstract: The Transiting Exoplanet Survey Satellite (TESS) is focusing on relatively bright stars and has found thousands of planet candidates. However, mainly because of the low spatial resolution of its cameras ($\approx$ 21 arcsec/pixel), TESS is expected to detect several false positives (FPs); hence, vetting needs to be done. Here, we present a follow-up program of TESS candidates orbiting solar-analogue stars that are in the all-sky PLATO input catalogue. Using Gaia photometry and astrometry we built an absolute colour-magnitude diagram and isolated solar-analogue candidates' hosts. We performed a probabilistic validation of each candidate using the VESPA software and produced a prioritized list of objects that have the highest probability of being genuine transiting planets. Following this procedure, we eliminated the majority of FPs and statistically vetted 23 candidates. For this remaining set, we performed a stellar neighbourhood analysis using Gaia Early Data Release 3 and centroid motion tests, greatly enhancing the on-target probability of 12 of them. We then used publicly available high-resolution imaging data to confirm their transit source and found five new, fully validated planets. For the remaining candidates, we propose on-off photometry to further refine the list of genuine candidates and prepare for the subsequent radial velocity follow-up.
https://export.arxiv.org/pdf/2208.12276
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} techniques: photometric -- methods: statistical -- surveys -- Hertzsprung–Russell and colour–magnitude diagrams -- stars: solar-type -- planets and satellites: detection \end{keywords} \section{Introduction} The Transiting Exoplanet Survey Satellite (\textit{TESS}, \citealt{2014SPIE.9143E..20R}) is a NASA all-sky survey telescope designed to search for transiting exoplanets orbiting nearby stars. With its array of four ultra-wide-field cameras, \textit{TESS} has been delivering, since July 2018, both short-cadence photometry and target pixel file images on pre-selected targets, and full-frame images (FFIs) with a 30- or 10-minute cadence (during the nominal and extended mission, respectively). These data are downlinked to the ground, where they are then further analysed with transit-search pipelines developed by the Science Processing Operations Center (SPOC). This applies to short-cadence images \citep{2016SPIE.9913E..3EJ} and, starting from sector 36, to some targets selected from the FFIs \citep{2020RNAAS...4..201C}, while every FFI is also analysed with the Quick-Look Pipeline (QLP, \citealt{2020RNAAS...4..204H}). The candidate planets found by the SPOC and QLP are then vetted\footnote{For details, see \url{https://heasarc.gsfc.nasa.gov/docs/tess/data-handling.html}, and \url{https://archive.stsci.edu/missions/tess/doc/EXP-TESS-ARC-ICD-TM-0014-Rev-F.pdf} (Twicken et al., 2020).} by the MIT branch of the \textit{TESS} Science Office (TSO), and those candidates that survive are later defined as \textit{TESS} Objects of Interest (TOIs, \citealt{2021arXiv210312538G}). \textit{TESS} focuses on relatively bright, nearby stars and is finding thousands of transiting planet candidates. However, because of the low spatial resolution of its cameras ($\approx$ 21 arcsec/pixel), a percentage of objects initially identified as exoplanet candidates are expected to be false positives (FPs). 
In fact, the crowding of stars within the 1 arcmin$^2$ point spread function (PSF) of \textit{TESS} might cause two (or more) stars to appear merged in the \textit{TESS} time-series. Therefore, if an exoplanet orbits around a star that is blended with another in the \textit{TESS} images, then the transit signal in the light curve is diluted. If another star -- blended in the \textit{TESS} PSF -- is present in the same pixel, it could be the origin of the transit signal by either being an eclipsing binary or hosting a planet itself. Some FPs are identifiable using \textit{TESS} data alone, but the majority of them need further observations \citep{2015JATIS...1a4003R}. To avoid wasting observational time and optimize follow-up resources, it is possible to identify the most promising candidates through a quick and efficient probabilistic validation procedure, which helps distinguish between a genuine planet and a FP for a particular transiting candidate \citep{2012ApJ...761....6M}. In this work, we present our probabilistic validation analysis of every TOI orbiting a solar-analogue target that is in the all-sky PLATO input catalogue \citep{2021A&A...653A..98M}, for which time-series or high-precision radial-velocity follow-up observations are not yet available. We consider only candidates without follow-up observations available on the \textit{Exoplanet Follow-up Observing Program for TESS} (ExoFOP-TESS) website\footnote{Available at \url{https://exofop.ipac.caltech.edu/tess/}.} to provide an original analysis and avoid duplicated work. The software we use to perform such probabilistic validation is the \textsc{vespa} code, which is computationally efficient and publicly available \citep{2012ApJ...761....6M}. By following this procedure, we are able to identify the majority of FP candidates. 
For the remaining set, we perform a stellar neighbourhood analysis using \textit{Gaia} Early Data Release 3 (\textit{Gaia} EDR3, \citealt{2021A&A...649A...1G}) and on-off photometry \citep{2009A&A...506..343D} to further refine the list of candidates and prepare for the subsequent radial velocity follow-up. As we will explain further in Section \ref{sec:definition}, throughout this work we label every candidate that passes only the \textsc{vespa} analysis as a `vetted' candidate; on the other hand, we refer to those that also pass the stellar neighbourhood analysis and meet a specific list of constraints as `statistically validated planets'. Furthermore, our work is a useful case study for applying \textsc{vespa} to PLATO data in the future, as the two missions will have a similar spatial resolution \citep{2017SPIE10564E..05L}, and it could serve as a framework for future PLATO vetting. In Section \ref{sec:methods} we briefly describe the methods we used to perform the probabilistic validation analysis and the stellar neighbourhood analysis; in Section \ref{sec:targ} we explain how we selected our sample of TOIs orbiting PLATO solar-analogue targets, while in Section \ref{sec:results} we show the results of our validation analysis, paying specific attention to the planetary size of the statistically vetted targets we found. In Section \ref{sec:discussion} we discuss our results, provide suggestions for the follow-up observations, specify the nomenclature used, and call attention to the importance of performing a stellar neighbourhood analysis. Concluding remarks are in Section \ref{sec:conclusion}. \section{Methods} \label{sec:methods} In this work, we performed the fully automated probabilistic validation procedure following \citet{2012ApJ...761....6M} and \citet{2016ApJ...822...86M} for 158 TOIs orbiting solar-analogue stars that are in the all-sky PLATO input catalogue, v1.1 (asPIC1.1, \citealt{2021A&A...653A..98M}).
The selection of these stars is described in Section \ref{sec:targ}, while in the remainder of this Section we describe the validation algorithms. \subsection{VESPA} The \textsc{vespa} code (\textit{Validation of Exoplanet Signals using a Probabilistic Algorithm}) is a publicly available software package \citep{2012ApJ...761....6M} that models light curves of eclipsed stars as simple trapezoids parameterized by a depth $\delta$, a total duration $T$, and the transit shape parameter $T/\tau$, where $\tau$ is the \textit{ingress} (or \textit{egress}) duration, and simulates physically realistic populations of astrophysical FPs. Validating an exoplanet candidate is equivalent to demonstrating that the \textit{False Positive Probability} (FPP) is small enough to be considered negligible. \textsc{vespa} calculates the FPP as follows: \begin{equation} {\rm FPP} = 1-{\rm Pr(planet|signal)}, \label{eq} \end{equation} where \begin{equation} {\rm Pr(planet|signal)}= \frac{\mathcal{L}_{\rm TP}\pi_{\rm TP}}{\mathcal{L}_{\rm TP}\pi_{\rm TP}+\mathcal{L}_{\rm FP}\pi_{\rm FP}} \label{eq2} \end{equation} defines the probability that there is a planet given the observed signal. In equation \ref{eq2}, $\mathcal{L}$ represents the Bayesian likelihood factor, which quantifies how similar the shape of the observed transit signal is to the signal shape expected under a given hypothesis (the false positive or planet scenarios). The prior $\pi$ describes how intrinsically probable the hypothesized scenario is a priori. In particular, ${\rm TP}$ indicates a ``true positive''.
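The combination of priors and likelihoods in equations \ref{eq} and \ref{eq2} can be sketched as follows. This is a minimal illustration, not the actual \textsc{vespa} implementation; the scenario keys and numerical values below are hypothetical:

```python
# Illustrative sketch of the FPP calculation: FPP = 1 - Pr(planet | signal),
# where Pr is the prior-weighted likelihood of the planet scenario over the
# sum for all scenarios (the FP term aggregates every non-planet scenario).

def fpp(priors, likelihoods, planet_key="Pl"):
    """False positive probability from per-scenario priors and likelihoods."""
    evidence = {k: priors[k] * likelihoods[k] for k in priors}
    return 1.0 - evidence[planet_key] / sum(evidence.values())

# Hypothetical per-scenario values (made up for illustration):
priors = {"Pl": 0.3, "BEB": 0.5, "HEB": 0.1, "EB": 0.1}
likelihoods = {"Pl": 0.9, "BEB": 0.05, "HEB": 0.02, "EB": 0.01}
print(f"FPP = {fpp(priors, likelihoods):.3f}")
```

A candidate would be considered probabilistically vetted when this quantity falls below the 1 per cent threshold discussed below.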
\textsc{vespa} supports the following hypotheses: \begin{itemize}[leftmargin=*] \item Eclipsing binary system in the background or foreground, blended within the photometric aperture of the target star (BEB); \item The target is a hierarchical-triple system where two of the components eclipse each other (HEB); \item The target star is an eclipsing binary (EB); \item Transiting planet (Pl)\footnote{\textsc{vespa} does not consider ``blended transiting planet'' FP scenarios \citep{2016ApJ...822...86M}.}. \end{itemize} Furthermore, \textsc{vespa} supports double-period versions of each FP scenario. This is done to avoid, for example, the case in which an EB with twice the orbital period of the detected candidate, and with similar primary and secondary eclipse depths, is confused with a transiting planet. Briefly, the \textsc{vespa} validation procedure works as follows: \begin{enumerate}[leftmargin=*] \item Simulation of a representative population for each hypothesis scenario listed above (fixing the period). Each population is made up of many different instances of that scenario; \item Calculation of the \textit{prior} ($\pi$) for each scenario, which is the product of three factors: the existence probability of the analysed scenario within the photometric aperture, the geometric probability of an orbital alignment for which an eclipse is visible, and the probability that the eclipse is able to mimic a transit \citep{2012ApJ...761....6M}; \item Calculation of the \textit{likelihood} ($\mathcal{L}$) of the observed transit signal for each scenario, where \textsc{vespa} models the shape of the eclipse and fits it to the observed light curve. This is done through \textit{Markov Chain Monte Carlo} (MCMC) simulations; \item Combination of \textit{prior} and \textit{likelihood} to calculate the FPP of the transit signal (Equation \ref{eq}).
If the FPP is $<$ 1 per cent, then the candidate can be considered probabilistically vetted\footnote{It is crucial to note that, as further explained by \cite{2016ApJ...822...86M}, a vetted candidate must have a `probability > 99\% of being on the target star' to be considered fully `validated'. Therefore, as we explain in Section \ref{sec:definition}, we will label a candidate as fully `validated' only after verifying this constraint.}. \end{enumerate} We refer the reader to \citet{2012ApJ...761....6M} and \citet{2016ApJ...822...86M} for a detailed description of the method. \subsubsection{Data and constraints} \label{data} We fed \textsc{vespa} with the following stellar and planetary parameters: \begin{itemize}[leftmargin=*] \item [-] equatorial coordinates, \textit{Gaia} photometric magnitudes, and parallax (see Section \ref{baye}) from \textit{Gaia} EDR3; \item [-] stellar effective temperature $T_{\rm eff}$, surface gravity $\log g$, and metallicity [Fe/H]\footnote{We used $T_{\rm eff}$, $\log g$ and [Fe/H] in the \textsc{vespa} calculation only if their values came from spectroscopy; otherwise, we avoided adding these input parameters.} from the \textit{Mikulski Archive for Space Telescopes} (MAST); \item [-] mean stellar density $\rho$ in units of [{\rm g cm$^{-3}$}] and maximum extinction in the V band (\textit{maxAV}) from asPIC1.1; \item [-] planet-to-star radius ratio $R_p/R_s$ from the \textit{Exoplanet Follow-up Observing Program for TESS} (ExoFOP-TESS). \end{itemize} We used the detrended and phase-folded time-series extracted from \textit{TESS} data available in the MAST portal (see Section \ref{phot}). We defined the maximum angular distance from the target star at which a potential blending star might lie (\textit{maxrad} in \textsc{vespa}) as the radius of the aperture ($r_{\rm circ}$) for circular aperture photometry.
Otherwise, we assumed the area ($A$) covered by the \textit{TESS} aperture to be circular and computed the radius as: \begin{align} \label{eqn:eqlabel} \begin{split} r_{tess} &= \sqrt{A/\pi}; \\ A &= N_{\rm px} \times s^2, \end{split} \end{align} where $N_{\rm px}$ is the number of pixels within the \textit{TESS} aperture and $s$ is the \textit{TESS} plate scale, equal to 21 arcsec/pixel. As a safety margin, it is useful to add the \textit{TESS} PSF of 40 arcsec to both radii \citep{2018AJ....156..102S}, i.e., $\textit{maxrad} = r_{\rm circ} + 40$ arcsec and $\textit{maxrad} = r_{tess} + 40$ arcsec. As described by \citet{2015ApJS..217...16R} and \citet{2016ApJ...822...86M}\footnote{\label{footnote_1}And following a tutorial of \textsc{vespa} available at \url{https://nexsci.caltech.edu/workshop/2018/VESPA_Tutorial.pdf}.}, we quantified the maximum depth of a potential secondary eclipse ($\delta_{\rm max}$, \textit{secthresh} in \textsc{vespa}). We ran a transit search in the TOI light curve and looked for the deepest signal allowed at phases outside the transit ($\delta_{\rm sec}$). We computed $\delta_{\rm max}$ as $\delta_{\rm max} = \delta_{\rm sec} + 3\sigma_{\rm sec}$, where $\sigma_{\rm sec}$ is the uncertainty associated with $\delta_{\rm sec}$. Then, we inferred the physical properties of the star (see Sec. \ref{sec:stellar}) given the photometric, spectroscopic, and observational constraints described above using the \textsc{Isochrones} package \citep{2015ascl.soft03010M}, and finally computed the FPP with \textsc{vespa}. \subsubsection{Bayesian evidence} \label{baye} As explained in \citet{2016ApJ...822...86M}, to start the validation procedure, all available constraints on the target star are used to condition a direct fit of a single- or multiple-star model to the MIST grid of stellar models (\citealt{2016ApJS..222....8D, 2016ApJ...823..102C, 2011ApJS..192....3P}). This fit is done using multi-modal nested sampling, implemented with \textsc{MultiNest}.
Consequently, \textsc{Isochrones} produces posterior samplings of the physical properties of the host star, modelled as a single- or multiple-star system. To compute the FPP, \textsc{vespa} requires the physical properties of each stellar model (single, binary, and triple) to evaluate each different scenario. When multi-modal posteriors are sampled with the \textsc{MultiNest} tool, the Bayesian evidence \citep{2019OJAp....2E..10F} is also computed. This parameter quantifies the degree to which the data support a given model \citep{KNUTH201550}; therefore, the model with the greatest Bayesian evidence is usually preferred \citep{kass1995bayes}. In our validation procedure, we found that providing as input only the observed \textit{Gaia} photometric magnitudes -- instead of adding other photometric magnitudes -- produced the strongest Bayesian evidence. For this reason, we preferred to insert only the \textit{Gaia} G, BP, and RP magnitudes into the \textsc{vespa} input file. \subsubsection{TESS photometry} \label{phot} For our analysis of the \textit{TESS} light curves, we accessed the \textit{TESS} data by downloading the SPOC Presearch Data Conditioning Simple Aperture Photometry (PDCSAP) flux light curves (\citealt{2012PASP..124.1000S,2014PASP..126..100S}) for the short-cadence candidates, which have been observed in multiple sectors and stitched together by the \textit{TESS} mission in the so-called \textit{Data Validation Time Series} files. These files can be found in the MAST portal. For TOIs from FFIs, we instead downloaded the QLP normalized light curves detrended by splines (KSPSAP). In cases where QLP multi-sector observations were available, we stitched the light curves together and then performed the phase-folding procedure (Fig. \ref{fig:qlp-fit}).
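The \textit{maxrad} and \textit{secthresh} constraints described in Section \ref{data} can be sketched as follows. This is an illustrative sketch under our assumptions; the function names and the example aperture size are hypothetical:

```python
import math

TESS_PLATE_SCALE = 21.0   # arcsec per pixel
PSF_MARGIN = 40.0         # arcsec safety margin added to both radii

def maxrad_from_pixels(n_px):
    """maxrad for a TPF aperture of n_px pixels, treated as circular:
    r = sqrt(A / pi) with A = n_px * s^2, plus the 40-arcsec PSF margin."""
    area = n_px * TESS_PLATE_SCALE**2
    return math.sqrt(area / math.pi) + PSF_MARGIN

def secthresh(delta_sec, sigma_sec):
    """Maximum allowed secondary-eclipse depth: delta_sec + 3 sigma_sec."""
    return delta_sec + 3.0 * sigma_sec

print(maxrad_from_pixels(9))      # hypothetical 3x3-pixel aperture
print(secthresh(2e-4, 5e-5))      # hypothetical depth and uncertainty
```

For circular-aperture photometry, the measured $r_{\rm circ}$ would replace the square-root term directly.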
\subsection{Stellar neighbourhood analysis} \label{stellar-neighbourhood} To further constrain the real nature of a \textit{TESS} candidate, it is necessary to accurately analyse its stellar neighbourhood and establish whether contaminating stars are present. Thanks to \textit{Gaia} photometry \citep{2021A&A...649A...1G}, we can check whether any neighbourhood star is able to generate a flux in the aperture that corresponds at least to the observed flux variation. When this is not the case, we can exclude each \textit{Gaia} source as a blended eclipsing binary and greatly enhance the probability that the detected signal is coming from the target star. Consequently, we developed a custom pipeline (further explained at the end of this Section and in Sections \ref{subsec:diluted} and \ref{k-param}) to compute which neighbourhood stars could reproduce the observed transit signal. When none of them could, we changed the value of the \textit{maxrad} constraint within the \textsc{vespa} input data and proceeded with the analysis. In this case, we considered the spatial resolution of \textit{Gaia} EDR3 -- which can resolve close pairs of stars at 1.5 arcsec separation -- as the minimum value for the \textit{maxrad} constraint. It is important to note that this is a peculiar situation, which only occurred for $\sim$ 1/10 of our targets. Therefore, it is often necessary to perform additional photometric follow-up observations to confirm the source of the signal. We can apply the so-called \textit{seeing-limited on-off photometry} technique \citep{2009A&A...506..343D}, which consists of the flux measurement -- with ground-based imagers -- of the target star and neighbour stars within, for example, 3.5 arcmin (10$\times$10 \textit{TESS} pixels, \citealt{2019AJ....158..138S}) during the predicted on- and off-transit phases.
Thanks to the much higher angular resolution obtainable with some ground-based instruments compared to \textit{TESS}, we can either confirm or rule out that the detected signal is due to a genuine exoplanet candidate orbiting a given target star. Using these high angular resolution imagers, we can first resolve most cases of stars that appear merged in the \textit{TESS} time-series; then -- depending on the photometric precision of the instrument and the transit depth -- we can either: \begin{enumerate}[leftmargin=*] \item detect the source of the signal and verify that it does not exhibit luminosity variations that are sufficiently strong to cause a false alarm (when we have high photometric precision and/or a deep transit depth); or \item focus on the photometry of the neighbourhood stars and verify that none of them can reproduce the discovery signal (when we have low photometric precision and/or a shallow transit depth). \end{enumerate} Both strategies are able to identify the source of the signal. For this technique to succeed, it is crucial to take into account the \textit{TESS} ephemeris uncertainties (epoch, period, and duration) and to plan the on-off observation windows carefully. In fact, the accumulation of the uncertainty over time shortens the window length in which we can precisely collect the off- and, especially, on-transit phases. Thanks to the multi-year \textit{TESS} observations, the ephemeris uncertainty is often less of an issue. Nonetheless, it is important to collect on-off photometry as close as possible to the last \textit{TESS} observation to avoid the accumulation of uncertainties. Even when we have many data points and the ephemeris uncertainties are small, we should shorten both sides of the on-transit window by 3$\sigma$, where $\sigma$ takes into account the ephemeris uncertainties and the ingress/egress duration.
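The growth of the ephemeris uncertainty with epoch, and the resulting trimming of the on-transit window, can be sketched as follows. This assumes standard error propagation for a linear ephemeris; combining the ephemeris uncertainty with the ingress/egress duration in quadrature is our assumption, and all names and example values are hypothetical:

```python
import math

def midtransit_sigma(sigma_t0, sigma_p, n):
    # 1-sigma uncertainty on the n-th transit mid-time for a linear
    # ephemeris T_n = T0 + n * P, assuming independent errors (days).
    return math.sqrt(sigma_t0**2 + (n * sigma_p)**2)

def trimmed_on_window(duration, sigma_t0, sigma_p, n, tau):
    # Shorten both sides of the on-transit window by 3 sigma, where sigma
    # folds together the propagated ephemeris uncertainty and the
    # ingress/egress duration tau (our assumption; all quantities in days).
    sigma = math.sqrt(midtransit_sigma(sigma_t0, sigma_p, n)**2 + tau**2)
    return max(duration - 2.0 * 3.0 * sigma, 0.0)
```

The `n * sigma_p` term is what makes observing soon after the last \textit{TESS} sector so valuable: the usable on-transit window shrinks linearly with the number of elapsed orbits.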
In addition to that, it is essential to perform this follow-up with an observation band similar to \textit{TESS}, such as the Cousins $I_{\rm c}$ or the Sloan $i^{\prime}$, to avoid obtaining a transit depth different from that expected (see Sec. \ref{sec:on-off}). Both in the on-off technique and in the photometric analysis using \textit{Gaia} EDR3, we computed the expected magnitude variation that any neighbourhood star has to generate to reproduce the transit signal. Specifically, we followed \citet{2009A&A...506..343D} and found that a transit signal originates from a neighbour star $\rm c$ if this star is able to reproduce the discovery signal ($s$): \begin{equation} (\Delta F/F)_{\rm s} = \frac{k_{\rm c} \Delta F_{\rm c}}{k_{\rm t} F_{\rm t}+ \sum k_{\rm i} F_{\rm i}}, \label{eq:signal} \end{equation} where $\rm t$ and $\rm i$ (with $\rm c \in i$) stand for \textit{target} and \textit{contaminants} respectively, while $k$ is the \textit{fraction of light of the stellar PSF which falls into the given photometric aperture}. The aim of this photometric follow-up is then to falsify equation \ref{eq:signal} for each neighbour star. When we are analysing a contaminant star, we can rewrite equation \ref{eq:signal} as follows: \begin{equation} \frac{\Delta F_{\rm c}}{F_{\rm c}} = \left ( \frac{k_{\rm t} F_{\rm t}+ \sum k_{\rm i} F_{\rm i}}{k_{\rm c} F_{\rm c}}\right )(\Delta F/F)_{\rm s}, \end{equation} which in magnitude notation becomes: \begin{equation} \Delta m_{\rm c} = -2.5 \log \left(1-(\Delta F/F)_{\rm s}\left(\frac{k_{\rm t}10^{-0.4 m_{\rm t}}+\sum k_{\rm i}10^{-0.4 m_{\rm i}}}{k_{\rm c}10^{-0.4 m_{\rm c}}}\right)\right). \label{eq:magexp} \end{equation} The argument of the logarithm must be positive. This means that each contaminating star needs to generate a flux in the aperture that corresponds at least to the observed flux variation.
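Equation \ref{eq:magexp} can be evaluated as in the following sketch, where the magnitudes, $k$ values, and function name are hypothetical. Note that the contaminant lists include star $\rm c$ itself, since $\rm c \in i$:

```python
import math

def delta_mag_required(depth_s, m_t, k_t, m_c, k_c, m_i, k_i):
    # Magnitude variation that contaminant 'c' must show to reproduce the
    # observed (diluted) depth depth_s. The lists m_i / k_i hold ALL
    # contaminants in the aperture, including star c itself (c is in i).
    flux = lambda m: 10.0 ** (-0.4 * m)
    total = k_t * flux(m_t) + sum(k * flux(m) for k, m in zip(k_i, m_i))
    arg = 1.0 - depth_s * total / (k_c * flux(m_c))
    if arg <= 0.0:
        return None  # star c cannot produce the signal: it is ruled out
    return -2.5 * math.log10(arg)

# Hypothetical example: one contaminant 4 mag fainter than the target.
dm = delta_mag_required(1e-3, m_t=10.0, k_t=0.9, m_c=14.0, k_c=0.9,
                        m_i=[14.0], k_i=[0.9])
```

A `None` return corresponds to a non-positive logarithm argument, i.e. the star cannot supply enough flux to the aperture and is excluded without any on-off observation.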
If none of them can pass this threshold, we can rule out each resolved neighbourhood star as the source of the transit signal without taking a photometric observation to evaluate $\Delta m_{\rm c}$. In this specific situation, we can already move on to high-resolution imaging and precision radial velocity observations; otherwise, we require ground-based photometric observations to perform the on-off follow-up. \subsubsection{Diluted discovery signal} \label{subsec:diluted} In our procedure, the discovery signal in equation \ref{eq:signal} must be the one coming from simple aperture photometry. This is important because we need to preserve the possible stellar contamination coming from neighbour stars in order to subsequently correct for it with our pipeline. However, the transit depth of a TOI -- provided by the TESS team -- comes from a PDCSAP\footnote{Or KSPSAP, if the specific TOI has been identified with the QLP pipeline.} flux light curve (hereafter, $\delta_{\rm PDCSAP}$), which is already corrected for the crowding contamination from known neighbour stars \citep{2021arXiv210312538G}. A similar flux correction is encoded in the TIC \textit{contamination ratio} parameter \citep{2019AJ....158..138S}, which is defined as the nominal flux from the contaminants divided by the flux from the source. The contaminants have been searched for within 10 \textit{TESS} pixels, and the contaminating flux has been calculated within a radius that depends on the target's Tmag. Using this parameter, we can therefore recover, in a simplistic way, the diluted transit depth as follows: \begin{equation} (\Delta F/F)_{\rm s} = \frac{\delta_{\rm PDCSAP}}{(1+{\rm CR})}, \end{equation} where CR is the \textit{contamination ratio}. We recovered the diluted transit depth and performed a custom correction for stellar dilution for two reasons. Firstly, there are known cases where the QLP planet radius has been inaccurate relative to uncontaminated ground-based observations.
This inaccuracy has often turned out to be linked to the QLP deblending method, which is based on the \textit{TESS} magnitude estimates from the TIC \citep{2020RNAAS...4..204H}. Moreover, the QLP deblending method effectively deblends the light curve from contamination by an additional star inside the aperture \citep{2021arXiv210312538G}, whereas we deblend the transit depth from contamination by any star whose flux falls inside the aperture. Secondly, SPOC simulates the contaminating flux in the field around the target star from the full TICv7 catalogue for sectors 1-13 and TICv8 for sectors 14 onwards \citep{2021arXiv210312538G}, and both use the \textit{Gaia} DR2 catalogue. In our pipeline, we instead used stellar parameters from TICv8.2 and included parameters from the \textit{Gaia} EDR3 catalogue. As we will specify in Section \ref{k-param}, we used the \textit{TESS} photometric band to correct for stellar contamination. In particular, we obtained the \textit{TESS} magnitude of a \textit{Gaia} star by cross-matching the TIC and \textit{Gaia} catalogues through the \textit{Gaia} ID of the star. \subsubsection{The k parameter} \label{k-param} The $k$ parameter modifies the expected magnitude difference that is required to reproduce the transit signal. Its value depends on the selected pixels and on the exact position of the given star in the \textit{TESS} aperture. In fact, the PSF of the telescope causes the light from the target to fall onto several different pixels. The photometric aperture used to extract the light curve of a short-cadence TOI can be found inside a \textit{Target Pixel File} (TPF) object, which is an ensemble of images taken for each observed cadence. By contrast, for long-cadence TOIs found with the QLP, the \textit{TESS} aperture is circular and its optimal radius is given by the QLP itself.
Thanks to the \textit{Pixel Response Function} (PRF) provided by the \textit{TESS} mission, we can determine in which pixels the light from the target falls. In detail, the PRF is a model that describes the image of a point source and how it varies depending on where it lands on the detector. Its shape comes from a combination of the optical point spread function, jitter during observations, and the intra-pixel location of where the light lands\footnote{For details, see \url{https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/active-missions/tess/_documents/TESS_Instrument_Handbook_v0.1.pdf}.}. The PRF images span 13$\times$13 physical \textit{TESS} CCD pixels and have 9$\times$9 intra-pixel samples per pixel. This provides sub-pixel precision without the need to interpolate. The available PRF files differ among \textit{TESS} sectors, cameras, and CCDs. The entire procedure for evaluating the $k$ parameter for an individual star (whether it is the target or a contaminant) can be described as follows: \begin{itemize}[leftmargin=*] \item We extract the PRF at the exact pixel location of the star, where the total flux is determined using the star's \textit{TESS} magnitude; \item We evaluate the shift between the centre of the aperture and that of the PRF (i.e., the separation between the centre of the aperture and the star's location); \item We calculate the exact contribution of the flux that falls into the aperture. When the aperture is circular, we use the implementation of the \textsc{photutils} package \citep{larry-bradley-2021-5525286}. When dealing with TPF apertures (made by multiple square pixels), we calculate four weights to consider the displacement between the centre of an aperture pixel and that of a PRF pixel.
This displacement causes an aperture pixel to overlap with up to four PRF pixels; therefore, to calculate the flux contribution from a single aperture pixel, we need to consider the weighted contribution of each of these four PRF pixels; \item We divide the contribution of flux that falls into the aperture by the total flux of the star. The result obtained is the $k$ parameter. \end{itemize} \subsubsection{Undiluted radius} \label{sec:undiluted} The analysis of the neighbour stars allows us to evaluate how much stellar dilution affects the candidate's transit depth -- and thus its radius -- and whether this value remains consistent with a planetary object. The equation of the new transit depth is as follows: \begin{equation} \frac{\Delta F_{\rm t}}{F_{\rm t}} = \left ( \frac{k_{\rm t} F_{\rm t}+ \sum k_{\rm i} F_{\rm i}}{k_{\rm t} F_{\rm t}}\right )(\Delta F/F)_{\rm s}, \end{equation} with the same notation used in eq. \ref{eq:signal}, while the candidate's new radius ($R_{\rm p}$) can therefore be estimated with the following equation: \begin{equation} R_{\rm p} \approx R_{\star}\sqrt{\frac{\Delta F_{\rm t}}{F_{\rm t}}}. \label{eq:undiluted} \end{equation} \subsection{Centroid Motion} \label{centroid} In addition to the on-off photometry technique, we performed another verification test to recognise the presence of contaminating stars. In particular, we exploited the so-called \textit{centroid motion} test, which monitors the shift in the position of the photometric centroid during a transit event and verifies whether the corresponding motion points away from the target. With this test, we further seek to determine the location of the transit source and to discard blended eclipsing binary sources. In the specific case where a TOI was identified by the SPOC, we took the result of the centroid motion test carried out by the \textit{TESS} mission -- which can be found within a \textit{TESS} \textit{Data Validation Report} file.
When this is not the case, we followed the procedure described in \citet{2020MNRAS.498.1726M} to perform the centroid test, and then we applied the suggested constraints to determine whether a candidate has passed the test. These constraints include the probability of correct source identification $\rm P_\eta$, the probability of correct source association $\rm P_D$, and the Mahalanobis distance \citep{mahalanobis1936generalized}. \section{Target selection} \label{sec:targ} To select the candidates orbiting solar-analogue stars, we built an intrinsic colour-magnitude diagram in the \textit{Gaia} bands, correcting the photometry for distance modulus, extinction, and reddening. From this diagram, it is possible to extract all stars belonging to a given spectral class. We focus on solar-analogue stars because of the scientific importance of discovering planets around `Solar twins' to carry out future atmospheric follow-up, and to obtain statistical information about exoplanet systems whose characteristics may be similar to those of our planet, the only one known to host life. \subsection{Intrinsic colour-magnitude diagram} We developed a custom pipeline that takes the entire list of TOIs and cross-matches it with the MAST, which also includes \textit{Gaia} data. Then, we cross-matched the same list with the asPIC1.1 \citep{2021A&A...653A..98M} and used its corrected \textit{Gaia} DR2 photometry to build an intrinsic colour-magnitude diagram. \subsubsection{The all-sky PLATO input catalogue} The intrinsic (i.e., reddening- and extinction-free), absolute colour-magnitude diagram is presented in Figure \ref{fig:colour-mag-mix}. In the Figure, the V magnitude (\textit{upper} panel) and the stellar distance (\textit{lower} panel) are colour-coded. This colour-magnitude diagram represents stars across the full range of spectral types isolated in the asPIC1.1 (FGK and M dwarfs and subgiant stars), where for FGK stars V $\leq$ 13 and for M dwarfs V $\leq$ 16.
This completeness is important to highlight -- especially across the G spectral class -- since it ensures that we did not introduce any significant bias into our selection. We found that in a magnitude-limited sample, bluer stars -- being intrinsically more luminous -- tend to be located at larger distances than redder stars along the main sequence, which are instead found closer to the observer. The larger distance, in turn, implies larger uncertainties in distance, reddening, and extinction. This also explains the increase in both colour and magnitude uncertainty (Fig. \ref{fig:colour-mag-mix}, \textit{lower} panel). Furthermore, the magnitude (and colour) uncertainty tends to increase towards bluer stars. Since interstellar extinction is inversely proportional to the wavelength \citep[Whitford's Law of Interstellar Extinction, ][]{1958AJ.....63..201W}, bluer stars tend to suffer stronger extinction, and hence the magnitude uncertainty increases. The colour uncertainty shows the same tendency. \subsection{Stellar sample} \label{sec:selection} At this stage, every TOI host star within the asPIC1.1 is ready for selection. The data are continuously updated; at the time of writing (2022 May 16), the number of TOIs discovered is 5637, of which 2842 are included in the asPIC1.1. We decided to use Mamajek's table \citep{2013ApJS..208....9P} for these TOIs' host stars. This table provides average colours and magnitudes (in different pass bands) for each spectral class and hence allows us to make a selection based on these average values. We used the photometric magnitudes of the second\footnote{Although \textit{Gaia} EDR3 is now available, we used \textit{Gaia} DR2 in our stellar selection because both the asPIC1.1 and Mamajek's table are based on this \textit{Gaia} release.} \textit{Gaia} data release (corrected for extinction). We chose the stellar classes from F9V to G8V, providing almost seven hundred target stars to be analysed.
The choice of this range of stellar subclasses is arbitrary, but motivated by the colour and magnitude parameters of its two extremities, which are almost equally separated from the parameters of the Sun. Furthermore, both the F9V and the G8V subclasses differ by about $\pm$ 300 K from the effective temperature of the Sun. Expanding the range by only one subclass at each end would widen the effective temperature range by $\pm$ 120 K, equivalent to a total range expansion of $\approx$ 40 per cent. After this selection, we considered only TOIs currently defined by the TESS Follow-up Observing Program (TFOP) working group as \textit{Planet Candidate} (PC) or whose disposition\footnote{We extracted the dispositions from the `TFOPWG Disposition' entry available on the ExoFOP-TESS website.} is still absent. We also excluded from our analysis each TOI for which time-series or high-precision RV follow-up observations were already available, as well as all TOIs currently under investigation on the ExoFOP-TESS website. In total, after discarding single-transit candidates, 158 TOIs survived our selection. \section{Results} \label{sec:results} Here we report the results of the validation procedure for the 158 TOIs orbiting solar-analogue stars analysed using the \textsc{vespa} code. In the following subsections, we present the statistical outputs coming from this calculation. \subsection{Stellar Parameters} \label{sec:stellar} The \textsc{vespa} code relies on the \textsc{Isochrones} package to infer the physical properties of a \textit{TESS} star. \textsc{Isochrones} uses a nested sampling scheme given photometric, spectroscopic, and other observational constraints (see Section \ref{data}). Stellar properties are crucial for estimating the FPP \citep{2017ApJ...847L..18S}. If the \textsc{Isochrones} estimates agree with literature values, the resulting FPP becomes more reliable.
We compared the stellar parameters simulated in this work with those determined for asPIC1.1 by inspecting the difference in their values: \begin{equation} \Delta x_i = x_{i,{\rm vespa}} - x_{i,{\rm PIC}}, \end{equation} while its uncertainty is: \begin{equation} \sigma_{\Delta x_i} = \sqrt{\sigma_{i,{\rm vespa}}^2+\sigma_{i,{\rm PIC}}^2}, \end{equation} where $x_i$ and $\sigma_i$ are the value and standard deviation of a given stellar parameter $i$, respectively, with $i$ = \{mass, radius, $T_{\rm eff}$, distance\}. To perform the comparison, we used stellar parameters from the \textsc{Isochrones} single-star fit when the planetary or BEB scenario was the most likely; otherwise, we used those from either the double- (EB scenario) or triple-star model (HEB scenario). We always used the stellar parameters of the primary star, for both single- and multiple-star systems. When \textsc{vespa} simulates the BEB scenario, it does not use stellar parameters from the \textsc{Isochrones} star models; instead, it performs a TRILEGAL simulation \citep{2005A&A...436..895G} to generate a population of eclipsing binary stars in the neighbourhood of the \textit{TESS} target under examination. The source of the transit signal becomes one of these neighbourhood stars; hence, the simulated stellar parameters differ from those of our selected target star. However, we aim to verify the \textsc{vespa}-simulated stellar parameters of the \textit{TESS} star we specifically selected, not those of one of its neighbours. Therefore, we made use of stellar parameters from the \textsc{Isochrones} single-star model used to evaluate the planetary scenario, regardless of whether \textsc{vespa} identified the BEB scenario as the most likely. Figure \ref{fig:stellar} shows the difference $\Delta x_i$ between the two measures versus the asPIC1.1 stellar parameter $x_{i,{\rm PIC}}$.
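The parameter comparison above can be sketched as follows; the function name and the numerical values are hypothetical:

```python
import math

def consistent_within_3sigma(x_vespa, sig_vespa, x_pic, sig_pic):
    # Difference Delta x = x_vespa - x_pic and its propagated uncertainty
    # sigma_Dx = sqrt(sig_vespa^2 + sig_pic^2); flag 3-sigma agreement.
    dx = x_vespa - x_pic
    sigma_dx = math.sqrt(sig_vespa**2 + sig_pic**2)
    return abs(dx) <= 3.0 * sigma_dx

# Hypothetical stellar-mass comparison (solar masses):
ok = consistent_within_3sigma(1.02, 0.04, 0.98, 0.03)
```

Stars failing this check for one parameter would be inspected across the remaining parameters, as done for the discarded targets below.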
We omitted stars with $|\Delta x_i| > 3\sigma_{\Delta x_i}$ from our analysis. As Fig. \ref{fig:stellar} shows, almost all \textsc{vespa}-simulated host stars have parameters in agreement with those estimated in the asPIC1.1 input catalogue; and, aside from four particular `single' star cases (see below), for every star outside the $3\sigma$ confidence interval \textsc{vespa} found the BEB (or EB) scenario to be the most likely. We also noted a small systematic offset in the stellar masses, probably due to the different empirical relations or models adopted by asPIC1.1 and \textsc{Isochrones}. From the analysis of the individual targets, we see that: \begin{itemize}[leftmargin=*] \item the candidates we statistically vetted (see Sect. \ref{valid}) orbit stars whose simulated parameters agree with the asPIC1.1 ones; \item stars with a stellar parameter $i$ with $|\Delta x_i| > 3\sigma_{\Delta x_i}$ usually also have one or more other stellar parameters exceeding this threshold, which means that nine target stars had to be discarded from our analysis; \item stars outside the $3\sigma$ confidence interval quite often had large \textit{maxAV} and/or $\Delta \rho / \rho$ values in the \textsc{vespa} input files. This was the case for three of the four `single' stars outside the $3\sigma$ confidence interval. The large \textit{maxAV} value in the \textsc{vespa} input files -- which we noticed was often underestimated in the subsequent \textsc{vespa} simulations of these stars -- could explain the differences in both stellar $T_{\rm eff}$ and $M$. On the other hand, the large $\Delta \rho / \rho$ values could explain some of the stars with large \textsc{vespa}-simulated stellar radius $R$ and mass $M$; \item the remaining unexplained `single' star was modelled by \textsc{Isochrones}, but the resulting fit led to erroneous posterior stellar parameters.
\end{itemize} \subsection{False Positive Probability} Among the entire selected sample, \textsc{vespa} was unable to evaluate some candidates due to problems related to the geometry of their orbital configuration or to difficulties in modelling their light curves (see Section \ref{sec:prob}). This happened in the most difficult cases, where the signal-to-noise ratio was very low. Therefore, the results we present here do not take them into account. Considering this removal plus the stars omitted in the previous section, we are left with 128 TOIs with usable light curves and reliable parameters. Among them, there are 23 candidates with a very high probability of being transiting planets, while almost 45 per cent of the entire sample have a probability of being an FP that exceeds 50 per cent (Fig. \ref{fig:fpp-pie}). Among the FPs, there are 26 candidates with FPP > 90\%. The remaining 48 candidates have intermediate FPP values, and their true transit nature requires further analysis to be confirmed. The histogram in Figure \ref{fig:isto} illustrates the FPP distribution of our candidates. In particular, the FPP covers nearly the full 0-100 per cent probability range, with a higher concentration at the two extremes of the distribution. Another important result concerns the FP scenario that appears to be the most recurrent one, i.e., not necessarily the one with a probability that exceeds 50 per cent, but the one with the highest probability among all the scenarios analysed. As the pie chart in Figure \ref{fig:fpp-pie2} shows, the Background (or Foreground) Eclipsing Binary (BEB) is the main cause of FPs. This result is consistent with the expectations of the \textit{TESS} mission, for which the main cause of FPs is expected to be the BEB scenario \citep{2015JATIS...1a4003R}. This is caused by the crowding of stars within the \textit{TESS} photometric aperture, a result of the large pixel size and the overall PSF area.
In fact, having many light sources in the same photometric area may dilute the brightness of the observed source and increase the FP probability due to blended eclipsing binaries. \subsection{Vetted candidates} \label{valid} To claim a statistical vetting for a transiting exoplanet candidate, we adopted the FPP < 1\% threshold, as was done by \citet{2016ApJ...822...86M}. The number of candidates orbiting solar-analogue stars that satisfy this limit is 23, corresponding to 18 per cent of TOIs within our selection (complete list in Table \ref{tab:validated}). We then subdivided the entire sample of statistically vetted candidates into five arbitrary planet-size bins: \begin{itemize}[leftmargin=*] \item Terrestrials: $R_p$ $\leqslant$ 2 $R_{\earth}$; \item Sub-Neptunes: 2$R_{\earth}$ < $R_p$ $\leqslant$ 4 $R_{\earth}$; \item Sub-Jovians: 4$R_{\earth}$ < $R_p$ $\leqslant$ 10 $R_{\earth}$; \item Jovians: 10$R_{\earth}$ < $R_p$ $\leqslant$ 25 $R_{\earth}$; \item Stellar objects: $R_p$ > 25 $R_{\earth}$, \end{itemize} to determine which kind of exoplanets we found. For each candidate, we considered two different values for the planetary radius. First, we took into account the radius estimated by the \textit{TESS} mission \citep{2019AJ....158..138S}. Then, we considered our own estimated radius, which we obtained after correcting for the stellar dilution (see Sec. \ref{sec:undiluted}). In Figure \ref{fig:radii} we show the planetary radii of our sample of vetted candidates and their distribution. We added five shaded areas to highlight the different planet-size bins. We note the presence of one terrestrial-size exoplanet and the high concentration of Sub-Neptune and Jovian-size candidates. At small radii, our estimated radii are quite similar to those coming from the \textit{TESS} mission, while ours are often larger when the planetary radii are in the Jovian-size bin.
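A minimal sketch of the dilution correction mentioned above (the function name and numbers are ours; it assumes the transit is on the target, takes the contaminating flux from a magnitude difference, and uses $R_p = R_* \sqrt{\delta}$, cf. eq. 22 of Winn 2010):

```python
import math

RSUN_IN_REARTH = 109.2  # approximate R_sun / R_earth

def undiluted_radius(depth_obs, r_star_rsun, delta_mag_contam):
    """Dilution-corrected planetary radius in Earth radii (a sketch).

    depth_obs        : observed (diluted) transit depth
    r_star_rsun      : host-star radius in solar radii
    delta_mag_contam : magnitude difference (contaminant - target)
    """
    # Contaminating flux relative to the target
    f_ratio = 10.0 ** (-0.4 * delta_mag_contam)
    # Undo the dilution of the transit depth
    depth_true = depth_obs * (1.0 + f_ratio)
    # R_p = R_* sqrt(depth)
    return r_star_rsun * RSUN_IN_REARTH * math.sqrt(depth_true)

# A 1 per cent depth on a solar-radius star with a contaminant
# 2.5 mag fainter: the corrected radius is ~5 per cent larger
rp = undiluted_radius(0.01, 1.0, 2.5)
```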
The difference between our radii and those estimated by the \textit{TESS} mission might be due to the points highlighted in Sec. \ref{sec:undiluted} and may depend only on how the stellar dilution is treated (i.e., there is no dependence on the stellar radius). The two estimates are different (i.e., their difference is greater than $1\sigma$) for $\approx$ 17 per cent of the vetted candidates. We noticed that each of these vetted candidates is in the Jovian size bin and has been identified with the QLP pipeline. This difference is worth bearing in mind: not only could the planet-size bin change, but the planetary nature of a candidate could become questionable if its radius is large enough. We chose an arbitrary upper limit of 25 $R_{\earth}$ for a sub-stellar object, as the largest confirmed transiting exoplanet discovered so far has a similar size\footnote{Information from the \textit{NASA Exoplanet Archive}.} \citep{2017AJ....153..211Z}. We confirm that each of our vetted candidates has a sub-stellar radius. \subsection{Vetted candidates confirmed to orbit their host star} \label{sec:vett-conf} Following the procedure described in Section \ref{stellar-neighbourhood}, we took advantage of \textit{Gaia} EDR3 photometry to accurately analyse the stellar neighbourhood of each vetted candidate. In this way, we were able to check whether any neighbouring star could mimic the detected transit signal and subsequently exclude each \textit{Gaia} source as a possible BEB. This procedure allowed us to narrow down the transit source of ten vetted candidates (see Section \ref{next} and Table \ref{tab:validated}) to within 1.5 arcseconds of their host star. For the other 13 candidates, we have not been able to narrow down the location of the source of their signal. However, we have identified which neighbouring stars might be contaminant sources and how deep their transit signals would have to be.
\subsubsection{Centroid motion results} As additional evidence of the transit source origin of our vetted candidates, we considered the centroid motion test (see Section \ref{centroid}). In Table \ref{tab:validated} -- in the \textit{centroid test} column -- we show the results of this examination. Aside from a controversial case\footnote{The \textit{TESS} Data Validation reported a possible stellar contamination from a neighbour star.}, every vetted candidate whose on-target probability has been greatly enhanced -- with our \textit{Gaia} photometry analysis (see Section \ref{stellar-neighbourhood}) -- has passed the centroid motion test. Figure \ref{fig:centroid} presents an example of a passed test and one of a failed test. Summing the results of this analysis with those obtained using \textit{Gaia} photometry, we greatly enhanced the on-target probability of 12 vetted candidates. \subsubsection{High-resolution imaging data} To confirm the transit source origin of our vetted candidates, we considered the high-resolution Speckle/Adaptive Optics (AO) imaging data publicly available on the ExoFOP-TESS website (either as table data or as an `Open Observing Note'). We need these follow-up data to rule out unresolved neighbour stars beyond the 1.5 arcseconds spatial resolution of \textit{Gaia} EDR3. Summing the information gained from these data (further explained in Table \ref{tab:validated}) with that obtained using \textit{Gaia} photometry and centroid motion tests, we confirm the transit source origin of six vetted candidates and have greatly enhanced the on-target probability of another six. \subsection{On-off photometry observations} \label{sec:on-off} To perform the on-off photometry follow-up of our statistically vetted candidates (i.e., those from Sec. \ref{valid}), we submitted an observational proposal to the INAF AOT44 call (October 1st, 2021 - March 31st, 2022, proposal REM-44018, P.I.
Giacomo Mantovan), to collect multi-band REM images \citep{2003Msngr.113...40C, 2004AIPC..727..765M}. Located in La Silla, Chile, the REM telescope allows us to observe mainly the \textit{TESS} candidates detected in the southern hemisphere. Thanks to these observations, we found that two candidates may be false positives. In particular, our analysis shows that both the transits of TOI 3353.01 and TOI 3353.02 could be due to a background eclipsing binary. In fact, Gaia EDR3 5212899427468921088 -- a neighbouring star of TOI 3353 -- reproduces both discovery signals (Fig. \ref{fig:onoff}). The magnitude variation has been calculated by averaging the flux measured in several images during both the on- and off-transit phases, and by correcting for systematic variations (i.e., different on- and off-phase magnitude zero points due to different sky conditions during the two phases). In addition, our procedure, a differential aperture transit photometry, removes most systematic trends, although some residual trends may still be present because of the individual, averaged on-off measurements. We analysed REM images taken using the Sloan/SDSS g$^\prime$ filter and also the Sloan/SDSS i$^\prime$ filter, which allows us to perform a follow-up in an observation band similar to that of \textit{TESS} \citep{2015JATIS...1a4003R}. The observed -- and averaged -- on-off magnitude variation is comparable to the estimated one (eq. \ref{eq:magexp}). In addition to our analysis, the centroid motion tests carried out by the \textit{TESS} mission were unclear for both candidates, further suggesting a possible contamination. Moreover, the \textit{TESS} team -- in a note present on the ExoFOP-TESS website -- warned of possible contamination of TOI 3353.02 by exactly the neighbour star we found.
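The averaged on-off magnitude variation described above can be sketched as follows (a minimal illustration; the function name is ours, and a single comparison star stands in for the full differential photometry):

```python
import numpy as np

def on_off_dmag(flux_on, flux_off, comp_on, comp_off):
    """Averaged on-off magnitude variation of a candidate contaminant.

    flux_on, flux_off : contaminant fluxes in the on- and off-transit images
    comp_on, comp_off : fluxes of a comparison star in the same images,
                        used to remove the different zero points of the
                        two phases (different sky conditions)
    """
    # Differential photometry: normalise by the comparison star, then average
    ratio_on = np.mean(np.asarray(flux_on) / np.asarray(comp_on))
    ratio_off = np.mean(np.asarray(flux_off) / np.asarray(comp_off))
    # Positive result = star is fainter during the on-transit phase
    return -2.5 * np.log10(ratio_on / ratio_off)

# A 10 per cent flux drop corresponds to ~0.11 mag
dm = on_off_dmag([0.9, 0.9], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0])
```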
Even though this result may seem reasonable and much of the data suggests stellar contamination, we have reason to believe that it is misleading: \begin{itemize}[leftmargin=*] \item the observed on-off variations present large error bars; \item we shortened both sides of the window length of TOI 3353.01's on-transit phase by only 1$\sigma$ (see Sec. \ref{stellar-neighbourhood}) because we were limited by the length of the data we had; \item when using REM images taken with the Sloan/SDSS i$^\prime$ filter, the observed on-off variation of the neighbouring star does not reproduce the discovery signal of TOI 3353.01; \item the target star is active and is also saturated in the REM photometry. These two aspects could affect the on-off photometry of the considered neighbouring star, which is only 22 arcseconds from the target; \item there are no available data on the potential activity of Gaia EDR3 5212899427468921088. If this contaminating star were intrinsically variable, on-off photometry could give a `false negative'; \item the orbital periods of TOI 3353.01 and TOI 3353.02 are not in phase with each other. We would expect a 2:1 period commensurability if both transits were due to the same background eclipsing binary. \end{itemize} Moreover, we independently reanalysed the \textit{TESS} light curves of TOI 3353.01 and TOI 3353.02, and modelled them using the \textsc{pycheops} code \citep{2021MNRAS.tmp.3057M}, to infer the host star's stellar density $\rho_{*,h}$ from the transit signals. We followed equations 27 and 30 from \citet{2010arXiv1001.2010W}, and then ran MCMC simulations to better estimate the value and uncertainty of $\rho_{*,h}$. We compared $\rho_{*,h}$ with the nominal stellar density $\rho_{*}$, i.e., the one calculated from the stellar radius $R_*$ and mass $M_*$.
Furthermore, we performed the same analysis focusing on the contaminant star, to determine if the resulting stellar density $\rho_{*,c}$ could better match the nominal value of the contaminant star. To do so, we injected the `third light' parameter (l\_3 in \textsc{pycheops}) into the \textsc{pycheops} modelling procedure, and treated the target star as the `third light' for the hypothetical transits in the contaminant. These analyses show that: \begin{itemize}[leftmargin=*] \item the resulting stellar density $\rho_{*,h}$ leads to values in agreement with the nominal stellar density $\rho_* = 0.94\pm0.18 \rho_{\sun}$ of the host star. In particular, we obtained $\rho_{*,h} = 0.69\pm0.23 \rho_{\sun}$ and $\rho_{*,h} = 0.91\pm0.36 \rho_{\sun}$, for TOI 3353.01 and TOI 3353.02, respectively; \item the resulting stellar density $\rho_{*,c}$ leads to values larger than $\rho_{\sun}$, which are not consistent with the nominal stellar density $\rho_* = 0.24 \rho_{\sun}$ of the contaminant star. Given the large magnitude difference between the host and the contaminant star, treating the target as the `third light' for the hypothetical transits in the contaminant leads the model to produce transits so deep that the occulting body has to be very large. This is contradicted by the observations, where the short duration of ingress and egress mandates a much smaller body. \end{itemize} These considerations imply that the results of our on-off analysis are not accurate enough to confirm the source of transit signals or detect the source of contamination. We can also rule out the physical scenario of a contaminating eclipsing binary capable of explaining both transiting candidates. Moreover, through the analysis of stellar density from both transit models, we have reasons to believe that TOI 3353 is a genuine multi-planetary system. 
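The density comparison above relies on the standard transit-derived stellar density, $\rho_* \approx (3\pi/GP^2)(a/R_*)^3$ (eq. 30 of Winn 2010, neglecting the planet term). A minimal sketch, with our own function name:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_SUN = 1408.0     # mean solar density, kg m^-3

def transit_stellar_density(period_days, a_over_rstar):
    """Stellar density, in solar units, from the transit observables
    P and a/R_*: rho_* ~ (3 pi / (G P^2)) (a/R_*)^3."""
    period_s = period_days * 86400.0
    rho = (3.0 * math.pi / (G * period_s**2)) * a_over_rstar**3
    return rho / RHO_SUN

# Sanity check with Earth-Sun values (P = 365.25 d, a/R_* ~ 215):
# the result is ~1 solar density
rho = transit_stellar_density(365.25, 215.0)
```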
Regardless of the latter result, we emphasise that further photometric observations are crucial to shed light on the true nature of these two candidates. In particular, we suggest performing full-transit photometric observations and focusing attention on the neighbouring star Gaia EDR3 5212899427468921088. \subsection{Problematic cases} \label{sec:prob} The \textsc{vespa} code was unable to evaluate 21 candidates orbiting solar-analogue stars within our sample. In more detail: \begin{enumerate}[leftmargin=*] \item Three candidates have orbital periods and simulated stellar properties that imply an orbit within the host star's Roche limit. \textsc{vespa} considers this situation an FP; \item Eight TOIs received MCMC modelling, but the trapezoidal model was unable to fit the transit signal; all these candidates have a very low signal-to-noise ratio; \item The trapezoid MCMC fit did not converge for one candidate; \item Nine TOIs have no publicly available light curve on the MAST portal. \end{enumerate} \section{Discussion} \label{sec:discussion} The validation process is a fundamental part of a broader effort to identify and exclude false positives, obtaining a cleaner sample of candidates and leading to the final confirmation of an exoplanet candidate. Thanks to this procedure, we can eliminate most of the false candidates and proceed with the follow-up observations up to the radial velocity measurements, which allow the full characterisation of a planetary system. Only after the radial velocity measurements is it possible to confirm an exoplanet candidate.
\subsection{Follow-up observations} \label{next} In this work, we statistically vetted 23 \textit{TESS} candidates orbiting solar-analogue stars and subsequently analysed their stellar neighbourhoods to investigate the presence of possible contaminant stars, confirming the transit source origin of six of them and greatly enhancing the on-target probability of another six. These two steps allow us to determine the best targets and the next follow-up observations required to fully confirm and characterise their planetary nature. For some of our targets, high-resolution spectra of the host stars are available in public archives; these are listed in detail in Table \ref{tab:spectra}. If the name of a host star is duplicated, more than one instrument has obtained its spectrum; all these instruments are included in Table \ref{tab:spectra}. For some of our targets, there are RV measurements performed with low-resolution spectroscopy. However, we do not report the spectra obtained with those instruments, only the `Open Observing Notes' on these RV measurements that are publicly available on the ExoFOP website. It should be noted, however, that there are no high-precision RV measurements reported for any of these stars, which would be needed to confirm or rule out an exoplanet candidate. The next follow-up steps for our targets depend on which instrument has published the spectra, and with what spectral coverage and resolution. For our targets, as specified in Table \ref{tab:spectra}, the available spectra were obtained with TRES \citep{2007RMxAC..28..129S} (resolution $\sim$ 44000), FIDEOS \citep{2014SPIE.9147E..89T} (resolution $\sim$ 43000), and CHIRON \citep{2013PASP..125.1336T} (resolution $\sim$ 80000).
Based on the capabilities of each instrument, different follow-up paths should be pursued for each of our targets, as we detail later in this Section. It should be noted, however, that RV confirmation requires resource-intensive long-term monitoring programs. In the ExoFOP catalogue, all the spectra obtained by CHIRON (which have the highest resolution in Table \ref{tab:spectra}) for our targets are flagged as not suitable for precision RV (PRV), which is necessary to directly measure the stellar reflex motion due to planets and derive planet masses. We hence focus on RV follow-up strategies that have not yet been conducted for our targets. Low-precision RV measurements are necessary to reject grazing eclipsing binaries or transiting massive white and brown dwarfs, which cannot be identified through the transit method. They are also essential because the presence of a stellar body can often be ruled out with as few as three radial velocity observations. This aim can be fulfilled by either TRES or FIDEOS, as listed in Table \ref{tab:spectra}. In fact, TRES and FIDEOS are usually used for identifying the nature of the transiting objects or ruling out false positives (a technique known as \textit{reconnaissance spectroscopy}). Any candidate that survives this test will become an exquisite target for internal structure and atmospheric characterisation through high-precision RV measurements, which can be conducted by higher resolution spectrographs -- which confirm exoplanet candidates by determining their masses -- such as CHIRON. \subsubsection{Definition of `statistically validated planets' and priority marks} \label{sec:definition} Considering all the above points and state-of-the-art planet validation papers, we recognise some TOIs investigated in this work as fully `statistically validated planets'. For such TOIs we have added a lower-case planet suffix (e.g., `b', `c'), which replaces the .01/.02 designation previously present (see Table \ref{tab:validated}).
Any other TOI investigated in this work is instead referred to as `vetted'. In particular, only TOIs meeting the following criteria should be given a planet suffix: \begin{itemize}[leftmargin=*] \item The transit signal has been confirmed to be on-target (i.e., relative to a \textit{maxrad} area that contains no known neighbouring stars bright enough to cause the event, including Adaptive Optics/Speckle neighbours); \item Host star spectroscopy should not be suggestive of a composite spectrum, a large RV offset indicative of an EB, or an RV orbit that is out of phase with the photometric ephemeris; \item The TOI should have a well-sampled transit shape (i.e., high photometric precision, a high number of observed transits, short-cadence sampling, or very deep transits) to be used for the statistical validation; \item The TOI has been probabilistically vetted with FPP < 0.01; \item The TOI has a uniquely determined orbital period. \end{itemize} Moreover, here we suggest a priority mark to establish the next step for the five statistically validated planets and the 18 exoplanet candidates vetted in this paper (see Table \ref{tab:validated}): \begin{itemize}[leftmargin=*] \item Mark = 1: candidate with greatly enhanced on-target probability, for which both low-precision RV measurements and high-resolution imaging data are already available. High-precision RV observations should be conducted; \item Mark = 2: candidate with greatly enhanced on-target probability, for which low-precision RV measurements are already available. High-resolution Adaptive Optics/Speckle imaging observations should be conducted; \item Mark = 3: candidate with greatly enhanced on-target probability, but no (or not enough) RV measurements are available. We suggest performing low-precision RV observations; \item Mark = 4: candidate whose on-target probability is low or unclear. We suggest performing on-off photometry observations.
\end{itemize} \subsection{Statistical validation reliability} We acknowledge that our statistical validation analysis strongly relies on the stellar neighbourhood analysis. In our calculation, we added a constraint (\textit{maxrad}) to account for the probability that the transit originates from the target. However, the use of \textit{Gaia} EDR3 photometry, centroid motion tests or subsequent on-off analyses is necessary to give full reliability to our results. Moreover, high-resolution imaging follow-ups are needed to rule out unresolved neighbour stars beyond the 1.5 arcseconds spatial resolution of \textit{Gaia} EDR3. In fact, if neighbouring stars are not adequately ruled out as transit sources prior to the analysis, \textsc{vespa} could classify false positive candidates as true planets \citep{2020arXiv200200691G, 2016ApJ...822...86M}. For this reason, we performed the \textit{Gaia} EDR3 photometry analysis and the centroid motion tests before running the \textsc{vespa} code, and we checked the high-resolution imaging data publicly available on the ExoFOP website. We require subsequent on-off analyses for those candidates not yet confirmed to orbit their host star. This emphasises the importance of carefully considering potential contamination from the host star's neighbourhood (see, for example, the notes in Table \ref{tab:validated} on the on-target probability of TOI 1689.01), and then of precisely following the priority marks we have suggested (Sect. \ref{next}) to fully validate and later confirm our target exoplanets. We emphasise that our candidates with Mark = 1 are fully validated planets, ready to be confirmed. In contrast, the \textit{maxrad} constraint that we have adopted enhances -- by construction -- the probability of the BEB scenario, and hence the total FPP of a candidate (as also noted by \citealt{2021MNRAS.508..195D}).
Therefore, we can only identify the planetary candidate after excluding the BEB scenario through a complete analysis of the stellar neighbourhood. This is, for example, the case of TOI 4399.01, which was first identified by \textsc{vespa} as a possible BEB and then as a likely planet after ruling out contaminant stars using \textit{Gaia} EDR3 photometry. \section{Conclusions} \label{sec:conclusion} Here we presented our ongoing follow-up program of \textit{TESS} candidates orbiting solar-analogue stars that are in the all-sky PLATO input catalogue. Our probabilistic validation analysis allows us to identify the most promising candidates, while the evaluation of their stellar neighbourhoods determines the next follow-up observations needed to confirm their exoplanet nature. The final goal of the entire procedure is to avoid wasting observational time at expensive facilities and to optimise follow-up resources. In particular, we statistically vetted 23 \textit{TESS} candidates orbiting solar-analogue stars. Five of them have been confirmed on-target and are ready for follow-up high-precision radial velocity observations (we refer to them as `statistically validated planets'), another three have a greatly enhanced on-target probability and need high-resolution imaging data, while the others need additional spectroscopic and/or photometric observations (see Table \ref{tab:validated}, column `Priority \& obs.'). It is worth noting that these are new discoveries. We will continue to search for new validated planets at least as long as the \textit{TESS} mission continues. In the very near future, we will complete the on-off photometry follow-up of our best targets, by proposing further investigation with REM and other telescopes, as well as low-precision radial velocity observations.
This will allow us to extend the sample of vetted candidates and hence the number of genuine targets to be later characterised through high-precision radial velocity observations. Similarly to \textit{TESS}, the future PLATO transit mission \citep{2014ExA....38..249R} will have a low spatial resolution (15 arcsec/pixel, \citealt{2017SPIE10564E..05L}); hence, it will also require a quick and efficient statistical validation procedure to exclude false positives from the large number of candidates that PLATO will discover. To conclude, our validation procedure will be essential and should be rather easily adaptable to the future PLATO mission.\newline The authors became aware of the confirmation of TOI 4399 b \citep{2022AJ....163..289Z} during the referee process. This independent work verifies our process by confirming one of our five validated planets. Moreover, we want to bring attention to the follow-up work of TOI-5398 on TRES by Jiayin Dong et al. (private communication), which allowed the establishment of a tentative orbit. This independent work further demonstrates that our method finds good targets for follow-up. \section*{Acknowledgements} We are extremely grateful to the anonymous referee for the thorough comments, which undoubtedly improved the quality of this manuscript. We acknowledge Dr. David Latham and Allyson Bieryla for providing helpful in-depth interpretation of TRES observations. We also acknowledge Dr. Boris Safonov for the precious discussion on TOI 1689. I would like to acknowledge the contribution of Filippo Santoliquido in helping me to improve the scientific impact of many figures. I would also like to acknowledge the contribution of Ho-Hin Leung, who helped me to improve the readability of the text. G.M. acknowledges the support of the Erasmus+ Programme of the European Union and of the doctoral grant funded by the University of Padova and by the Italian Ministry of Education, University and Research (MIUR). G.M.
is also grateful to the Centre for Exoplanet Science, University of St Andrews (StA-CES) for hospitality and computing resources. GPi, LBo, VNa, and FZM acknowledge the funding support from Italian Space Agency (ASI) regulated by `Accordo ASI-INAF n. 2013-016-R.0 del 9 luglio 2013 e integrazione del 9 luglio 2015 CHEOPS Fasi A/B/C'. We acknowledge the support of PLATO ASI-INAF agreements n.2015-019-R0-2015 and n. 2015-019-R.1-2018. T.G.W. and A.C.C. acknowledge support from STFC consolidated grant number ST/V000861/1, and UKSA grant ST/R003203/1. This research has made use of the Exoplanet Follow-up Observation Program (ExoFOP; DOI: 10.26134/ExoFOP5) website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. We acknowledge the use of TESS High Level Science Products (HLSP) produced by the Quick-Look Pipeline (QLP) at the TESS Science Office at MIT, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Funding for the TESS mission is provided by NASA's Science Mission directorate. \textit{Software}: \textsc{Astropy} \citep{2013A&A...558A..33A}, \textsc{astroquery} \citep{2019AJ....157...98G}, \textsc{emcee} \citep{2013PASP..125..306F}, \textsc{Isochrones} \citep{2015ascl.soft03010M}, \textsc{lightkurve} \citep{2018ascl.soft12013L}, \textsc{matplotlib} \citep{2018ascl.soft12013L}, \textsc{Numpy} \citep{2011CSE....13b..22V}, \textsc{pandas} \citep{jeff-reback-2021-5501881}, \textsc{pycheops} \citep{2021MNRAS.tmp.3057M}, \textsc{MultiNest} \citep{2019OJAp....2E..10F}, \textsc{Scipy} \citep{2007CSE.....9c..10O}, \textsc{vespa} \citep{2012ApJ...761....6M}. This work is dedicated to my beloved grandma. \section*{Data Availability} The authors confirm that the data supporting the findings of this study are available within the article and its supplementary materials. 
\bibliographystyle{mnras}
\bibliography{references}
%
\appendix
\section{Supplementary tables}
\begin{table*}
\caption{Statistically vetted candidates and validated planets (i.e., those with the suffix `b') orbiting solar-analogue stars.}
\label{tab:validated}
\begin{tabular}{lllllllllllll}
\hline \hline
TOI & Tmag $^a$ & Vmag & Period & R$_{p, \rm \textsc{vespa}}$ & R$_{p, \textit{TESS}}^i$ & R$_{p, \rm undil.}$ & Secth. $^b$ & Maxrad $^c$ & FPP $^d$ & Centr. & On-target$^g$ & Priority\\
 & & & (day) & ($R_{\earth}$) & ($R_{\earth}$) & ($R_{\earth}$) & & (arcsec) & & test & result & \& obs. \\
\hline
1689.01 & 6.3196 & 6.996 & 9.12381 & 2.781 & 2.109 & 2.108 & 0.00029 & 1.5 & 1.31e-05$^j$ & failed$^*$ & unclear$^k$ & -\\
2545 b & 8.9059 & 9.521 & 7.994037 & 2.526 & 2.750 & 2.75 & 0.00033 & 1.5 & 0.00111$^j$ & passed & confirmed & 1: PRV\\
2569.01 & 11.1769 & 11.775 & 13.114774 & 8.551 & 10.489 & 13.235 & 0.00234 & 103 & 0.00874 & failed & unclear & 4: on-off\\
3353.01$^*$ & 8.7843 & 9.327 & 4.665774 & 2.840 & 2.831 & 2.834 & 0.00037 & 103 & 1.25e-13$^j$ & passed$^*$ & unclear & 4: on-off\\
3353.02$^*$ & 8.7843 & 9.327 & 8.817565 & 2.468 & 2.464 & 2.477 & 0.00052 & 103 & 0.0$^j$ & passed$^*$ & unclear & 4: on-off\\
3474.01 & 11.9672 & 12.531 & 3.878932 & 16.305 & 15.071 & 17.858 & 0.00239 & 82 & 8.02e-07 & failed & unclear & 4: on-off\\
3837.01 & 11.366 & 11.673 & 11.892894 & 13.496 & 12.036 & 12.11 & 0.00127 & 1.5 & 7.41e-13 & passed & enhanced & 2: HRI$^f$\\
3892.01 & 11.9956 & 12.607 & 4.581080 & 13.232 & 13.858 & 14.358 & 0.00179 & 82 & 1.01e-09 & failed & unclear & 4: on-off\\
4029.01 & 11.1718 & 11.554 & 5.884856 & 6.701 & 6.969 & 7.135 & 0.00106 & 103 & 0.00322 & passed & enhanced & 2: HRI\\
4361.01 & 8.6661 & 9.265 & 741.42559$^h$ & 2.931 & 2.971 & 2.971 & 9e-05 & 1.5 & 0.000626 & passed & enhanced & 2: HRI\\
4399 b & 7.7582 & 8.31 & 7.712121 & 3.310 & 3.208 & 3.207 & 0.00107 & 1.5 & 8.75e-06$^j$ & passed & confirmed & 1: PRV\\
4402.01 & 9.8429 & 10.286 &
3.698994 & 1.748 & 1.786 & 1.787 & 0.00012 & 93 & 0.00675 & failed$^*$ & unclear & 4: on-off\\ 4443.02 & 7.9147 & 8.493 & 10.313947 & 2.344 & 2.161 & 2.164 & 0.00020 & 92.5 & 1.21e-05$^j$ & passed & confirmed & 3: LPRV$^e$\\ 4492.01 & 9.6311 & 10.324 & 4.433206 & 14.323 & 13.290 & 14.218 & 0.00075 & 103 & 1.74e-06 & failed & unclear & 4: on-off\\ 4602.01 & 7.7746 & 8.32 & 3.980286 & 2.302 & 2.427 & 2.429 & 0.00014 & 82 & 0.0 & - & unclear & 4: on-off\ \\ 4640.01 & 11.0771 & 11.63 & 2.685723 & 2.987 & 2.928 & 2.929 & 0.00038 & 1.5 & 0.000503 & passed & enhanced & 3: LPRV\\ 4702.01 & 12.2469 & 12.877 & 3.121702 & 15.850 & 15.823 & 15.873 & 0.00098 & 1.5 & 0 & - & enhanced & 3: LPRV\\ 4994.01 & 11.9545 & 12.652 & 21.492146 & 9.955 & 9.328 & 9.338 & 0.002817 & 1.5 & 2.64e-06 & passed & enhanced & 3: LPRV\\ 5174 b & 10.6309 & 11.583 & 12.214286 & 5.346 & 5.343 & 5.351 & 0.00103 & 1.5 & 1.97e-04 & - & confirmed & 1: PRV\\ 5210.01 & 11.4194 & 12.118 & 4.566131 & 13.341 & 12.228 & 12.827 & 0.00158 & 103 & 0 & failed & unclear & 4: on-off\\ 5238 b & 11.6370 & 12.214 & 4.872171 & 5.170 & 5.209 & 5.220 & 0.00268 & 82 & 1.43e-13 & passed & confirmed & 1: PRV\\ 5398 b & 9.5806 & 10.059 & 10.590923 & 11.758 & 11.653 & 11.657 & 0.00232 & 1.5 & 3.26e-14 & - & confirmed & 1: PRV\\ 5427.01 & 11.6590 & 12.140 & 5.237418 & 14.321 & 14.918 & 16.112 & 0.00306 & 82 & 7.44e-6 & failed & unclear & 4: on-off\\ \hline \multicolumn{13}{l}{\textbf{ Notes.}} \\ \multicolumn{13}{l}{$^a$ \textit{TESS} magnitude.}\\ \multicolumn{13}{l}{$^b$ Maximum secondary eclipse depth allowed.}\\ \multicolumn{13}{l}{$^c$ Exclusion radius within which FP scenarios are allowed.}\\ \multicolumn{13}{l}{$^d$ False Positive Probability.}\\ \multicolumn{13}{l}{$^e$ Low Precision Radial Velocity.}\\ \multicolumn{13}{l}{$^f$ High-resolution imaging.}\\ \multicolumn{13}{l}{$^g$ On-target probability. 
Full explanation in Section \ref{sec:vett-conf}.}\\ \multicolumn{13}{p{\textwidth}}{$^h$ TOI 4361.01 is a `duo-transit' candidate, i.e., a TOI with only two transits separated by about two years. Therefore, its period is not uniquely constrained but somewhat ambiguous. However, as further explained in Appendix \ref{appendix-b}, we have reasons to keep it as a vetted candidate regardless of its uncertain periodicity.}\\ \multicolumn{13}{p{\textwidth}}{$^i$ The \textit{TESS} planetary radius R$_{p, \textit{TESS}}$ has been calculated, in this work, from the transit depth available on the ExoFOP website and eq. 22 from \cite{2010arXiv1001.2010W}. We did so to maintain consistency with R$_{p, \rm \textsc{vespa}}$ \& R$_{p, \rm undil.}$, which were both calculated with the ExoFOP transit depth as a prior parameter.}\\ \multicolumn{13}{p{\textwidth}}{$^j$ TOI 3353.01 and TOI 3353.02 have high-resolution speckle imaging publicly available on the ExoFOP website (PI: Howell). Following \cite{2012ApJ...761....6M} and the \textsc{vespa} tutorial, we inserted the Gemini/Zorro contrast curves into the \textsc{vespa} input parameters. The final FPP of TOI 3353.01 slightly increases (reaching a value of 1e-6), while that of TOI 3353.02 remains unchanged. This result further confirms our statistical vetting. TOI 2545.01, TOI 4399.01, and TOI 4443.02 also have high-resolution speckle (or Adaptive Optics) imaging publicly available on the ExoFOP website (PI: Dressing, Howell, and Ciardi, respectively). The final FPP of TOI 2545.01 remains the same, while the FPPs of the other TOIs decrease by two to seven orders of magnitude. Also in this case, our vetting results are confirmed. This analysis allowed us to validate TOI 2545.01 and TOI 4399.01 and label them as TOI 2545 b and TOI 4399 b.}\\ \multicolumn{13}{p{\textwidth}}{$^k$ Companion detected at 0.08 arcseconds separation using high-resolution speckle imaging (Dr. Boris Safonov, from \url{exofop.ipac.caltech.edu}).
New imaging data have been scheduled, and new analyses of existing data are also in progress (Dr. Boris Safonov, private communication). Additional information is available in Table \ref{tab:spectra}.}\\ \multicolumn{10}{l}{$^*$ Controversial.}\\ \end{tabular} \end{table*} \begin{table*} \caption{Published spectra of the host stars of our targets. The `Total' column gives the total number of spectra obtained by the same facility over the years.} \label{tab:spectra} \begin{tabular}{llllllll} \hline \hline TOI & \textit{V} & Telescope & Instrument & Resolution & Spectral Range & Total & ExoFOP website's `Open Observing Notes'$^a$ \\ & (mag) & & & & (\AA) && \\ \hline 1689.01 & 6.996 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 3 & Large RV offset. Potential composite spectrum.$^b$ \\ 2545.01 & 9.521 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 4 & False Positive scenarios ruled out.\\ 2545.01 & 9.521 & SMARTS (1.5 m) & CHIRON & 80000 & 4500--8900 & 2 & --\\ 2545.01 & 9.521 & ESO 1m telescope & FIDEOS & 43000 & 4200--8000 & 2 & --\\ 2569.01 & 11.775 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & RV offset out-of-phase probably not significant.
\\ 3353.01 & 9.327 & SMARTS (1.5 m) & CHIRON & 80000 & 4500--8900 & 1 & --\\ 3837.01 & 11.673 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 3 & False Positive scenarios ruled out.\\ 3892.01 & 12.607 &FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & False Positive scenarios ruled out.\\ 4029.01 & 11.554 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & False Positive scenarios ruled out.\\ 4361.01 & 9.265 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 4 & False Positive scenarios ruled out.\\ 4361.01 & 9.265 & SMARTS (1.5 m) & CHIRON & 80000 & 4500--8900 & 1 & --\\ 4399.01 & 8.31 & SMARTS (1.5 m) & CHIRON & 80000 & 4500--8900 & 11 & False Positive scenarios ruled out.$^c$\\ 4402.01 & 10.286 & SMARTS (1.5 m) & CHIRON & 80000 & 4500--8900 & 1 & -- \\ 4443.01 & 8.493 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & RV offset out-of-phase. More observations needed.$^d$\\ 4492.01 & 10.324 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & False Positive scenarios ruled out.\\ 4602.01 & 8.32 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & False Positive scenarios ruled out.\\ 4640.01 & 11.63 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & RV offset out-of-phase. 
More observations needed.$^d$\\ 4702.01 & 12.877 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & More observations needed.\\ 5174.01 & 11.583 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & False Positive scenarios ruled out.\\ 5210.01 & 12.118 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & More observations needed.\\ 5238.01 & 12.214 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 2 & False Positive scenarios ruled out.\\ 5398.01 & 10.059 & FLWO (1.5 m) & TRES & 44000 & 3850--9096 & 11 & False Positive scenarios ruled out.$^d$\\ \hline \multicolumn{8}{l}{\textbf{ Notes.}} \\ \multicolumn{8}{l}{$^a$ Summary of the `Open Observing Notes' publicly available on the ExoFOP website.} \\ \multicolumn{8}{p{\textwidth}}{$^b$ `The new TRES observation is very strong and is shifted by about 3 km/s compared to the first two TRES observations more than a year ago. Moreover, there is more line broadening, hinting at a composite spectrum. This is not a good target for PRV work or atmospheric characterization. No more TRES recon spectra are needed.' (Dr. David Latham, from \url{exofop.ipac.caltech.edu}).} \\ \multicolumn{8}{l}{$^c$ Data publicly available on the ExoFOP website and further analysed by \cite{2022AJ....163..289Z}.} \\ \multicolumn{8}{p{\textwidth}}{$^d$ This conclusion comes from the `Open Observing Notes' and private communication with Dr. David Latham.} \\ \end{tabular} \end{table*} \section{Duo-transit candidate vetting} \label{appendix-b} The \textit{TESS} candidate TOI 4361.01 is one of our vetted candidates with the second highest priority mark, which means that following our procedure, we demonstrated that it is ready to be analysed with high-resolution imaging and subsequently confirmed through high-precision radial velocity observations. However, there is a large gap in the \textit{TESS} data that causes its period $P$ of $\approx$741 days to be ambiguous. 
The lack of a uniquely constrained period calls for caution and requires further analysis to confirm its vetting. Therefore, we performed the following: \begin{enumerate}[leftmargin=*] \item We took into account all possible TOI 4361.01 period aliases, and modelled the \textit{TESS} light curve using the \textsc{pycheops} code, to extrapolate the host star's stellar density $\rho$ from the transit signal (see Sec. \ref{sec:on-off}). We then ran an MCMC simulation to better estimate the value and the uncertainty of $\rho$; \item we considered all the aliases whose extrapolated $\rho$ is physical and performed, again, the \textsc{vespa} analysis considering the new period. \end{enumerate} In Figure \ref{fig:mcmc}, we illustrate the results. In particular, we show the stellar density $\rho$ derived from the possible aliases as a function of the period aliases. We added the nominal stellar density available on the ExoFOP website for comparison. The lower limit on the orbital period comes from the ephemeris window covered by the \textit{TESS} mission during sectors 8 and 35, while the upper limit comes from the light curve modelling, where we fixed the transit duration as reported on the ExoFOP website. Specifically, when a period alias is $\gtrapprox 35.5$ days, the impact parameter $b$ (calculated following Eq. 7 from \citealt{2010arXiv1001.2010W} and considering a circular orbit) becomes $> 1$ (not physically allowed). These two limits constrain the period aliases to be within [$P$/34, $P$/21]. The assumption of a circular orbit avoids the treatment of eccentricity in the evaluation of the impact parameter and the consequent degeneracy in the \textsc{pycheops} transit modelling. Although this assumption is simplistic, the results of exoplanet population studies, especially for low-mass, sub-Neptune planets, favour low eccentricity values \citep{2011arXiv1109.2497M, 2012MNRAS.425..757K, 2013MNRAS.434L..51K}.
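As a rough illustration of step (i) (this is a back-of-the-envelope check, not the \textsc{pycheops} MCMC used in the analysis): for a circular, central transit, Winn (2010) gives $a/R_*\approx P/(\pi T)$, and Kepler's third law then ties each alias period to an implied stellar density. The transit duration below is a placeholder, not the measured TOI 4361.01 value.

```python
# Sketch of the density screening (NOT the pycheops MCMC of the paper).
# For a circular, central (b = 0) transit, a/R* ~ P / (pi * T), and
# Kepler's third law gives rho* = (3*pi / (G * P^2)) * (a/R*)^3.
import math

G = 6.674e-8  # gravitational constant, cgs


def stellar_density(period_day, duration_hour):
    """Approximate stellar density [g cm^-3] implied by P and duration T."""
    P = period_day * 86400.0
    T = duration_hour * 3600.0
    a_over_rstar = P / (math.pi * T)
    return 3.0 * math.pi / (G * P**2) * a_over_rstar**3


# Sanity check: an Earth-like transit of the Sun (P = 365.25 d, T ~ 13 h)
# should recover roughly the solar mean density (~1.4 g cm^-3).
rho_sun = stellar_density(365.25, 13.0)

# Screen the duo-transit aliases P/n (the limits above give n in [21, 34]);
# the duration here is a placeholder, not the measured TOI 4361.01 value.
P0, T_placeholder = 741.42559, 4.0
rho_aliases = {n: stellar_density(P0 / n, T_placeholder) for n in range(21, 35)}
print(f"rho_sun ~ {rho_sun:.2f} g cm^-3")
```

Aliases whose implied density is consistent with the nominal (solar-analogue) value would then be retained, mirroring step (ii).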
From the results of this figure, we conclude that whilst this approach could yield a unique orbital period, we do not have the sampling in the transit ingress/egress to accurately say what it is. However, we emphasise that estimating a unique orbital period is not the central goal of this appendix or of this work. Following the result of our analysis, we performed the \textsc{vespa} analysis of every surviving TOI 4361.01 period alias. Moreover, for the sake of completeness, we also considered some period aliases ($P$/2, $P$/4, $P$/8, and $P$/16) outside the aforementioned limits. In conclusion, we have found the following: \begin{itemize}[leftmargin=*] \item every TOI 4361.01 period alias shows an FPP $<$ 1\%; \item the shorter the period, the greater the planetary probability (and the lower the FPP). \end{itemize} We hence confirm our statistical analysis and keep TOI 4361.01 as one of our best vetted candidates. \bsp % \label{lastpage}
Title: ALMA detection of parsec-scale blobs at the head of kiloparsec-scale jet in the nearby Seyfert galaxy NGC 1068
Abstract: We present Atacama Large Millimeter/submillimeter Array observations at $\approx100$ GHz with $0.05$ arcsec (3 pc) resolution of the kiloparsec-scale jet seen in the nearby Seyfert galaxy NGC 1068, and we report the presence of parsec-scale blobs at the head of the jet. The combination of the detected radio flux ($\approx0.8$ mJy), spectral index ($\approx0.5$), and the blob size ($\approx10$ pc) suggests a strong magnetic field of $B\approx240\,\mu$G. Such a strong magnetic field most likely implies magnetic field amplification by streaming cosmic rays. The estimated cosmic-ray power by the jet may exceed the limit set by the star formation activity in this galaxy. This result suggests that even modest-power jets can increase the galactic cosmic-ray content while propagating through the galactic bulge.
https://export.arxiv.org/pdf/2208.08533
command. \newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \shorttitle{Cosmic-ray production at pc-scale blobs in NGC~1068} \shortauthors{Michiyama et al.} \graphicspath{{./}{figures/}} \begin{document} \title{ALMA detection of parsec-scale blobs at the head of kiloparsec-scale jet in the nearby Seyfert galaxy NGC~1068} \correspondingauthor{Tomonari Michiyama} \email{t.michiyama.astr@gmail.com} \author[0000-0003-2475-7983]{Tomonari Michiyama} \affiliation{Department of Earth and Space Science, Graduate School of Science, Osaka University, 1-1, Machikaneyama, Toyonaka, Osaka 560-0043, Japan} \affiliation{National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo, 181-8588} \author[0000-0002-7272-1136]{Yoshiyuki Inoue} \affiliation{Department of Earth and Space Science, Graduate School of Science, Osaka University, 1-1, Machikaneyama, Toyonaka, Osaka 560-0043, Japan} \affiliation{Interdisciplinary Theoretical \& Mathematical Science Program (iTHEMS), RIKEN, 2-1 Hirosawa, Saitama, 351-0198, Japan} \affiliation{Kavli Institute for the Physics and Mathematics of the Universe (WPI), UTIAS, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583, Japan} \author{Akihiro Doi} \affiliation{The Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuou-ku, Sagamihara, Kanagawa 252-5210, Japan} \affiliation{Department of Space and Astronautical Science, SOKENDAI, 3-1-1 Yoshinodai, Chuou-ku, Sagamihara, Kanagawa 252-5210, Japan} \author[0000-0002-7576-7869]{Dmitry Khangulyan} \affiliation{Graduate School of Artificial Intelligence and Science, Rikkyo University, Nishi-Ikebukuro 3-34-1, Toshima-ku, Tokyo 171-8501, Japan} \keywords{Active galactic nuclei (16), High energy astrophysics (739), Galaxy jets (601), Cosmic ray astronomy (324)} \section{Introduction} \label{sec:intro} Cosmic rays are ultrarelativistic particles that form an 
important component of the cosmic background. The cosmic-ray energy spectrum suggests local sources that are capable of boosting the particle energy beyond 1~PeV \citep{Blasi_2013A&ARv..21...70B,Kotera2011ARA&A..49..119K}. In the Milky Way, supernova explosions provide cosmic-ray sources sufficient to supply the required acceleration power. The cosmic-ray level in some of the observed galaxies is consistent with that of the Milky Way. For example, the detected gamma-ray flux in nearby starburst galaxies, such as NGC~253 and M~82, is reasonably explained by starburst activities \citep{Persic_2008A&A...486..143P, Rephaeli_2010MNRAS.401..473R, Abdo_2010ApJ...725L..73A}. However, several nearby galaxies exhibit an excess gamma-ray flux above the calorimetric limit of their star formation activity \citep{Eichmann_2016ApJ...821...87E, Ajello_2020ApJ...894...88A}. Understanding the dominant cosmic-ray sources in galaxies other than starburst activity is an urgent topic in high-energy astrophysics. Unveiling the cosmic-ray production activities in galaxies is also important for studying galaxy evolution, as shown in recent cosmological simulations \citep{Hopkins2020MNRAS.492.3465H,Hopkins_2021MNRAS.501.4184H}. The nearby Seyfert 2 galaxy NGC~1068, located at a distance of $D_{\rm L}=13.97\pm2.1$~Mpc (\citealt{Anand2021MNRAS.501.3621A}), is one of the brightest gamma-ray emitters among nonblazar galaxies \citep{Ackermann_2012ApJ...755..164A}, and the gamma-ray flux expected from its starburst activity falls below the detected level \citep{Lenain_2010A&A...524A..72L, Eichmann_2016ApJ...821...87E}. In addition, hints of high-energy neutrino emission from the direction of NGC~1068 have been reported \citep{Aartsen_2020PhRvL.124e1103A}. Therefore, this galaxy is an ideal target for investigating alternative cosmic-ray sources other than those driven by star formation activity.
In the centimeter radio continuum, NGC~1068 has a prominent linear structure with an extent of 13$^{\prime\prime}$, confirmed by the Very Large Array (VLA) \citep{Wilson_1987ApJ...319..105W}. This structure is considered to be a kiloparsec-scale jet. The distance from the central black hole to the head of the jet is $l_{\rm jet}\approx670$~pc, assuming that the jet is inclined to the line of sight by 45$^\circ$. \citet{Garc_2014A&A...567A.125G} estimated a jet power of $P_{\rm jet}=1.8\times{10}^{43}$~erg~s$^{-1}$ based on the 1.4\,GHz map \citep{Gallimore1996ApJ...458..136G_table}, assuming the phenomenological relation between 1.4~GHz luminosity and jet power \citep{Brzan_2008ApJ...686..859B}. We apply a self-similar fluid model developed for extragalactic jet sources \citep{Kaiser_1997MNRAS.286..215K, Gallo_2005Natur.436..819G}, where the jet supplies energy at a constant rate and expands with a velocity of $v_{\rm exp}$ in a medium of constant mass density $\mu\bar{n}$ ($\mu$ is the average particle mass of 0.68$m_{\rm H}$, $m_{\rm H}$ is the hydrogen mass, and $\bar{n}=1$\,cm$^{-3}$ is the average particle density). The jet age is then given as $t_{\rm jet}=(l_{\rm jet}/2)^{5/3}{(\mu\bar{n}/P_{\rm jet})}^{1/3}$ by balancing the interior pressure and the ram pressure of the shocked interstellar medium (ISM). By differentiating both sides of this relation with respect to time, the lobe expansion velocity is obtained as $v_{\rm exp}=(3/5)(l_{\rm jet}/t_{\rm jet})$; for NGC~1068, we obtain $t_{\rm jet}\approx1\times{10}^5$~yr and $v_{\rm exp}\approx3\times10^3$~km~s$^{-1}$. We note that the density assumption is not very important in estimating the jet properties because $t_{\rm jet}$ and $v_{\rm exp}$ depend only weakly on the density (e.g., $v_\mathrm{exp}\propto\bar{n}^{-1/3}$). In this Letter, we discuss the cosmic-ray production activities at parsec-scale blobs based on the high-resolution ALMA map of the NGC~1068 jet head.
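Since only the final numbers are quoted, a quick numerical check of the self-similar estimates may be useful; the sketch below simply evaluates $t_{\rm jet}=(l_{\rm jet}/2)^{5/3}(\mu\bar{n}/P_{\rm jet})^{1/3}$ and $v_{\rm exp}=(3/5)\,l_{\rm jet}/t_{\rm jet}$ with the stated inputs.

```python
# Plug-in check of the self-similar jet estimates with the quoted inputs:
# P_jet = 1.8e43 erg/s, l_jet ~ 670 pc, mu = 0.68 m_H, n = 1 cm^-3 (cgs).
pc, m_H, yr = 3.086e18, 1.67e-24, 3.156e7

P_jet = 1.8e43            # jet power, erg/s
l_jet = 670.0 * pc        # distance to the jet head, cm
mu_n = 0.68 * m_H * 1.0   # ambient mass density, g/cm^3

t_jet = (l_jet / 2.0) ** (5.0 / 3.0) * (mu_n / P_jet) ** (1.0 / 3.0)  # s
v_exp = 0.6 * l_jet / t_jet                                           # cm/s

print(f"t_jet ~ {t_jet / yr:.1e} yr, v_exp ~ {v_exp / 1e5:.0f} km/s")
```

This reproduces the quoted $t_{\rm jet}\approx1\times10^5$~yr and $v_{\rm exp}\approx3\times10^3$~km~s$^{-1}$ to within rounding.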
\section{Observations} \label{sec:observation} Figure~\ref{fig:ALMA_B3} shows the low ($\sim0\farcs4$, 30~pc) and high ($\sim0\farcs05$, 3~pc) resolution millimeter maps of NGC~1068 by ALMA. The low-resolution map at 93.5~GHz (Figure~\ref{fig:ALMA_B3}a) displays a kiloparsec-scale radio lobe (hereafter NE-Lobe), indicating the shock formed by the interaction of the jet with the ISM at the edge of the NE-Lobe, which is consistent with the centimeter images obtained by the VLA \citep{Wilson_1987ApJ...319..105W}. The brightest radio emission comes from the head of the NE-Lobe. Figures~\ref{fig:ALMA_B3}b and \ref{fig:ALMA_B3}c display the high-resolution 92~GHz map of the entire NE-Lobe region and the enlarged view of the head of the NE-Lobe, respectively. This ALMA high-resolution map shows that the bright region at the head of the NE-Lobe is resolved into several blobs. To investigate the spectral index $\alpha$ (defined by $S_\nu\propto\nu^{-\alpha}$) of the blobs at centimeter/millimeter wavelengths, we used archival 15\,GHz VLA and 252\,GHz ALMA maps (Figure~\ref{fig:index}a) and identified four blobs (P1--P4). We measured the flux densities of the blobs using the {\tt imstat} command in the Common Astronomy Software Applications package ({\tt CASA}) \citep{McMullin_2007ASPC..376..127M}. When measuring the fluxes, we smoothed the VLA and ALMA images to a 0\farcs15 beam using the {\tt imsmooth} task in {\tt CASA} to reduce systematic errors due to beam dilution. Although there are uncertainties related to the missing flux that impede the precise determination of the spectral index, our measurements show that $\alpha\approx0.5$ is preferred to hard ($\alpha>1$) or flat ($\alpha\approx0$) indices at the parsec-scale blobs. The maximum recovery scale (MRS) of the 92.0\,GHz ALMA band 3 data was lower than that of the 14.9\,GHz VLA and 252.4~GHz ALMA band 6 data.
The missing flux owing to mismatched (u,v) coverage likely has no effect on the 92\,GHz flux density because the 92\,GHz flux is higher than a simple power-law model joining 15 GHz to 250 GHz (Figure~\ref{fig:index}b). We used the archival FITS image files (not visibilities) obtained from the NRAO VLA Archive Survey and the Japanese Virtual Observatory (JVO) for all the measurements in Figures~\ref{fig:ALMA_B3} and \ref{fig:index}. Table~\ref{tab:VLA_ALMA} lists the detailed information on the archival FITS image files. The spatial intensity profile from the central black hole to the peak position at the head is presented in Figure~\ref{fig:profile}. The blob diameter is defined as the full width at half maximum (FWHM) of the spatial profile. For the brightest blob, we obtained a diameter of $d_{\rm b}\approx10$~pc, which exceeds the beam size by a factor of three. We used the 1-D Slice tool in the viewer implemented in {\tt CASA} to make Figure~\ref{fig:profile}. The direction of the line is from the peak pixel\footnote{This position is often labeled as S1 in the literature (e.g., \citealt{Gallimore_1996ApJ...464..198G_Cindex}).} in the 92\,GHz map ($\alpha_{\rm ICRS}$, $\delta_{\rm ICRS}$)= (02h42m40.709s, -00d00m47.945s) to P2. The profile was fitted with three Gaussian functions using the {\tt curve\_fit} task in the Python SciPy module \citep{SciPy2020NatMe..17..261V}. The error of each best-fit value was estimated as the $95~\%$ confidence interval; for example, $A=0.175\pm0.006$~mJy, $x_{\rm off}=0\farcs967\pm0\farcs003$, and FWHM=$0\farcs1472\pm0\farcs003$ for the red curve. In this case, the most important parameter is the FWHM, which is approximately three times larger than the synthesized beam size of $\approx0\farcs05$. The 92~GHz flux density associated with the brightest blob is $\approx0.77$~mJy (per 0\farcs15 aperture in diameter), corresponding to a specific luminosity of $L_{\rm 92GHz}\approx1.8\times{10}^{19}{\rm\ W\ {\rm Hz}^{-1}}$.
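As a cross-check of the last conversion, the quoted specific luminosity follows directly from $L_\nu=4\pi D_{\rm L}^2 S_\nu$ with $S_\nu\approx0.77$~mJy and $D_{\rm L}=13.97$~Mpc (redshift corrections are negligible at this distance); a minimal sketch:

```python
# Flux density -> specific luminosity for the brightest blob (SI units).
import math

Mpc = 3.086e22              # m
S_nu = 0.77e-3 * 1e-26      # 0.77 mJy in W m^-2 Hz^-1 (1 Jy = 1e-26 SI)
D_L = 13.97 * Mpc           # m

L_nu = 4.0 * math.pi * D_L**2 * S_nu
print(f"L_92GHz ~ {L_nu:.1e} W/Hz")
```

This recovers the quoted $L_{\rm 92GHz}\approx1.8\times10^{19}$~W~Hz$^{-1}$.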
\section{Discussion} \label{sec:discussion} Based on the ALMA results, we investigate the possible cosmic-ray production activities by the kiloparsec-scale jet in NGC~1068. We confirm the enhancement of the magnetic field at the blobs and investigate the possible power available for cosmic-ray acceleration as described below. \subsection{Magnetic Field of the Blobs}\label{sec:B} Unless the plasma density is unfeasibly high, \(>10^3\rm\,cm^{-3}\), the detected radio emission has a synchrotron origin, i.e., it is produced by relativistic electrons interacting with the magnetic field. Assuming a power-law electron energy distribution, $n(E){\rm d}E\propto E^{-p}{\rm d}E$, an electron power-law index of $p\approx2$ can be anticipated from the obtained radio spectra (note that $p=2\alpha+1$, where $\alpha=0.5$). The most commonly used approach is the ``minimum-energy formula''. This approach requires that the total energy density of the magnetic field ($U_{\rm B}$) and of the radio-emitting electrons ($U_{\rm e}$) be close to its minimum value. According to the formula based on equations (16.43) and (16.44) of \citet{longair_2011}\footnote{This formulation corresponds to equation (1) of \citet{Beck2005AN....326..414B}, represented as $B_{\rm class}$.}, \begin{eqnarray}\label{eq:B_SI} B_\mathrm{min}&=&\left[ \frac{3\mu_0}{2}\frac{G(\alpha) L_{\rm \nu}}{V} \right]^{\frac{2}{7}}\\ \nonumber &\approx& 240\ {\rm \mu G}\\ \nonumber &&\times\left(\frac{L_{\rm 92GHz}}{1.8\times{10}^{19}\ {\rm W\ Hz}^{-1}}\right)^\frac{2}{7}\left(\frac{V}{{5.4\times10}^{2}\ {\rm pc^3}}\right)^{-\frac{2}{7}}, \end{eqnarray} where \begin{eqnarray} G(\alpha) &=& \frac{1}{a(p)(p-2)}[\nu_{\rm min}^{-(p-2)/2}-\nu_{\rm max}^{-(p-2)/2}]\nu^{(p-1)/2}\\ \nonumber &&\times \frac{(7.4126\times10^{-19})^{-(p-2)}}{2.344\times10^{-25}}(1.253\times10^{37})^{-(p-1)/2}, \end{eqnarray} $\nu_{\min}=15$~GHz, $\nu_{\max}=250$~GHz, $\nu=92$~GHz, $\mu_0=1.3\times10^{-6}$~m~kg~s$^{-2}$~A$^{-2}$, and $a(2)=0.529$.
The $B_{\rm min}\approx240\,{\rm \mu G}$ corresponds to $U_{\rm B}=B^2/(8\pi)=2.2\times10^{-9}\,{\rm erg\,cm^{-3}}$ and $U_{\rm e}=(4/3)U_{\rm B}=3.0\times10^{-9}\,{\rm erg\,cm^{-3}}$. This magnetic field strength is significantly higher than that of the ISM (a few $\mu$G) \citep{Ferri_2001RvMP...73.1031F}, suggesting a strongly amplified magnetic field. Such magnetic field amplification is observed in Galactic supernova remnants, where cosmic-ray streaming enhances the ISM magnetic field up to a few hundred $\mu$G \citep[][see also \citealt{2022Sci...376...77A}]{Bell_2013MNRAS.431..415B}. Thus, we applied $B=B_{\rm min}=240$~$\mu$G as a fiducial value, assuming magnetic field amplification by the cosmic-ray streaming instability seen in SNRs. Locally accelerated electrons lose their energy owing to synchrotron cooling over a time scale of \begin{eqnarray} t_{\rm sync}&=&\frac{3}{4}\frac{m_{\rm e}c}{\sigma_{\rm T}U_{\rm B}}\gamma_{\rm e}^{-1} \\ \nonumber &\approx&{4.5\times10^4}\ {\rm yr}\ \left(\frac{B}{240\ {\rm \mu G}}\right)^{-\frac{3}{2}}\left(\frac{\nu_{\rm sync}}{\rm 92\ GHz}\right)^{-\frac{1}{2}}. \end{eqnarray} Here, $\nu_{\rm sync}=3eB\gamma_{\rm e}^2/4\pi m_{\rm e}c$, where $m_{\rm e}$ is the electron rest mass, $\sigma_{\rm T}$ is the Thomson scattering cross-section, $U_{\rm B}=B^2/(8\pi)$ is the magnetic field energy density, and $e$ is the elementary charge. For comparison, the advection timescale through the blob is $t_{\rm adv} = d_{\rm b}/v_{\rm d}\approx1\times{10}^4$~yr (adopting the downstream speed $v_{\rm d} = v_{\rm exp}/4$ expected for strong shocks). Given that $t_{\rm adv} \lesssim t_{\rm sync}$, the synchrotron emission is produced in the slow-cooling regime, implying that the measured electron spectrum is shaped directly by the acceleration process.
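The cooling-time estimate above can be verified numerically; the sketch below evaluates $U_{\rm B}=B^2/8\pi$, the electron Lorentz factor from $\nu_{\rm sync}=3eB\gamma_{\rm e}^2/4\pi m_{\rm e}c$, and $t_{\rm sync}=(3/4)\,m_{\rm e}c/(\sigma_{\rm T}U_{\rm B}\gamma_{\rm e})$ in cgs units.

```python
# Check of U_B, U_e, and t_sync for B = 240 uG at nu_sync = 92 GHz (cgs).
import math

m_e, c = 9.109e-28, 2.998e10       # electron mass [g], light speed [cm/s]
e, sigma_T = 4.803e-10, 6.652e-25  # charge [esu], Thomson cross-section [cm^2]
yr = 3.156e7

B = 240e-6                         # G
U_B = B**2 / (8.0 * math.pi)       # erg/cm^3
U_e = (4.0 / 3.0) * U_B            # minimum-energy partition used in the text

nu_sync = 92e9                     # Hz
gamma_e = math.sqrt(4.0 * math.pi * m_e * c * nu_sync / (3.0 * e * B))
t_sync = 0.75 * m_e * c / (sigma_T * U_B * gamma_e) / yr  # yr

print(f"U_B ~ {U_B:.1e} erg/cm^3, gamma_e ~ {gamma_e:.0f}, "
      f"t_sync ~ {t_sync:.1e} yr")
```

The result matches the quoted $U_{\rm B}=2.2\times10^{-9}$~erg~cm$^{-3}$ and $t_{\rm sync}\approx4.5\times10^4$~yr.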
\subsection{Maximum Cosmic-ray Energy in the Kiloparsec-scale Jet}\label{Emax} The measured radio spectrum with $\alpha=0.5$ implies a power-law electron spectrum with $p\approx2$, which is consistent with the canonical slope predicted for diffusive shock acceleration at a strong shock \citep{Blandford_1978ApJ...221L..29B, Bell_1978MNRAS.182..147B}. The bright synchrotron emission of non-thermal electrons at the shock also implies efficient acceleration of protons. If a significant upstream current is generated by cosmic-ray particles, the required amplification of the magnetic field is possible via the nonresonant hybrid instability \citep{Bell_2013MNRAS.431..415B}. This implies that a considerable fraction of the downstream energy is transferred to relativistic protons. Because the physical conditions revealed at the forward shock are similar to those at the blast wave produced by a supernova explosion, we can readily use the estimates for the cosmic-ray maximum energy from the literature. The maximum cosmic-ray energy can be calculated using Equation 6 in \citet{Bell_2013MNRAS.431..415B} as \begin{eqnarray} E_{\rm cr,max} &=& 8~{\rm PeV} \left(\frac{n_{\rm e}}{1\ {\rm cm}^{-3}}\right)^{\frac{1}{2}}\left(\frac{v_{\rm shock}}{3\times10^3~{\rm km\ s^{-1}}}\right)^3 \nonumber\\ &&\times\left(\frac{t_{\rm age}}{{1\times10^5~\rm yr}}\right) \times\left(\frac{p_{\rm cr}/\rho v_{\rm shock}^2}{0.3}\right), \end{eqnarray} where $n_{\rm e}$ is the electron density, $v_{\rm shock}=v_{\rm exp}$ is the forward shock velocity, $t_{\rm age}$ is the age of the system, and $p_{\rm cr}/\rho v_{\rm shock}^2$ is the cosmic-ray pressure ratio at the shock. For NGC 1068, we apply $n_{\rm e}=1~{\rm cm}^{-3}$ (as a typical ISM value), $v_{\rm shock}=3\times10^3~{\rm km\ s^{-1}}$, $t_{\rm age}=t_{\rm jet}=1\times{10}^5$~yr, and $p_{\rm cr}/\rho v_{\rm shock}^2=0.3$ (assuming the same situation as in the supernova case).
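With the fiducial values listed above, every bracket in Bell's Eq. 6 is unity by construction, so the plug-in is trivial; the sketch below makes the scaling explicit and also shows the sensitivity to the shock velocity (the $v_{\rm shock}^3$ dependence). The slower-shock case is an illustrative variation, not a value from the analysis.

```python
# Bell (2013) Eq. 6 scaling for the maximum cosmic-ray energy, in PeV.
def e_cr_max_pev(n_e=1.0, v_shock=3e3, t_age=1e5, xi_cr=0.3):
    """n_e [cm^-3], v_shock [km/s], t_age [yr], xi_cr = p_cr/(rho v^2)."""
    return (8.0 * (n_e / 1.0) ** 0.5 * (v_shock / 3e3) ** 3
            * (t_age / 1e5) * (xi_cr / 0.3))

E_fid = e_cr_max_pev()               # fiducial NGC 1068 inputs
E_slow = e_cr_max_pev(v_shock=2e3)   # hypothetical ~30% slower shock
print(f"E_max(fiducial) = {E_fid:.1f} PeV, "
      f"E_max(2000 km/s) = {E_slow:.1f} PeV")
```

The cubic velocity dependence means the PeV conclusion rests mainly on the expansion-velocity estimate.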
The maximum cosmic-ray energy can also be predicted from the requirement that the cosmic-ray acceleration timescale ($t_{\rm accel}$) cannot exceed the age of the system. Under the common assumption of Bohm diffusion, the cosmic-ray acceleration timescale can be estimated as $t_{\rm accel}\approx(10\eta_{\rm B}cr_{\rm g})/(3v_{\rm shock}^2)$, where $r_{\rm g}=E_{\rm cr}/eB$ is the gyroradius and $\eta_{\rm B}$ is the Bohm factor. When we apply $t_{\rm adv}$ (the shortest time among $t_{\rm adv}$, $t_{\rm sync}$, and $t_{\rm jet}$) as the age of the system, the maximum cosmic-ray energy is given by \begin{eqnarray}\label{eq:Emax} E_{\rm cr,max} &\approx& \frac{3e}{10c} \eta_B^{-1} B v_{\rm shock}^2 t_{\rm adv} \\ &\approx& 30\,\eta_B^{-1}\ {\rm PeV} \nonumber \\ &&\times\left(\frac{B}{230\ {\rm \mu G}}\right)\left(\frac{v_{\rm shock}}{3\times10^3\ {\rm km\ s^{-1}}}\right)^2\left(\frac{t_{\rm adv}}{{1\times10^4~\rm yr}}\right). \nonumber \end{eqnarray} Both of these estimates yield cosmic-ray maximum energies of $\approx$PeV. The reverse shock at the jet head region (not the forward shock propagating through the ISM) might be another possible acceleration site \citep{Araudo2016MNRAS.460.3554A}. However, the reverse shock is unlikely to be the origin of the jet-head emission in NGC~1068, based on the following energy argument. The average magnetic field of $B_{\rm ave}\approx50$\,$\mu$G at the jet head (i.e., the regions shown in Figure~\ref{fig:ALMA_B3}c) can be obtained from the total flux of $\approx9$\,mJy and the volume of a hemisphere of radius $r_{t}\approx40$\,pc.
Assuming that this structure is produced by the reverse shock, the ratio between the energy density of the magnetic field, $B_{\rm ave}^2/(8\pi)$, and the energy density of the jet, $P_{\rm jet}/(\pi r_{\rm t}^2 c)$, is $\approx0.008$, which means that only $<1\%$ of the power of the relativistic jet would be converted into magnetic energy at the reverse shock; such a low conversion efficiency is unlikely to be the common case \citep{Blandford1977MNRAS.179..433B}. \subsection{Cosmic-ray Power of the Kiloparsec-scale Jet} The VLA and ALMA data allow us to obtain the total electron power as $P_{\rm e}\approx\pi{(d_{\rm b}/2)}^2v_{\rm d}U_{\rm e}\approx 2\times{10}^{38}$~erg~s$^{-1}$ per radio-emitting blob. Here, we use the energy density $U_\mathrm{e}$ derived in Section~\ref{sec:B}. This estimate for $P_{\rm e}$ implies that the total cosmic-ray accelerating power in NGC~1068 can be as large as $P_{\rm CR,blobs}\approx1\times{10}^{41}$~erg~s$^{-1}$, where we summed the contributions from the four detected blobs, accounted for the existence of the counter jet, and adopted $K_{\rm ep}\approx0.01$. The expected power available for cosmic-ray acceleration is $\approx1\%$ of the jet power, which is comparable to the efficiency of cosmic-ray acceleration in supernova remnants. This value of $P_{\rm CR,blobs}$ corresponds to a lower limit on the total cosmic-ray accelerating power of this jet ($P_{\rm CR,jet}$), because we neglect the cosmic-ray acceleration occurring outside the blobs resolved by ALMA, i.e., $P_{\rm CR,jet}>P_{\rm CR,blobs}$. The total cosmic-ray accelerating power of the supernovae in NGC~1068 can be estimated as $P_{\rm CR,SN}\approx2\times{10}^{41}$~erg~s$^{-1}$, where the observed supernova rate is 0.07~per year \citep{Storchi-Bergmann_2012ApJ...755...87S, Eichmann_2016ApJ...821...87E}, the energy of a supernova is ${10}^{51}$~erg, and $10~\%$ of the supernova energy is transferred to cosmic rays.
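The power bookkeeping in this subsection can be reproduced in a few lines; the factors of 4 (blobs), 2 (counter jet), and $K_{\rm ep}\approx0.01$ follow the text, while $v_{\rm d}$ and $U_{\rm e}$ come from the earlier estimates.

```python
# Per-blob electron power and total cosmic-ray power, cgs units.
import math

pc = 3.086e18
d_b = 10.0 * pc        # blob diameter, cm
v_d = 3.0e8 / 4.0      # downstream speed v_exp/4, cm/s (v_exp ~ 3e3 km/s)
U_e = 3.0e-9           # electron energy density, erg/cm^3

P_e = math.pi * (d_b / 2.0) ** 2 * v_d * U_e   # erg/s per blob
P_cr_blobs = P_e * 4 * 2 / 0.01                # 4 blobs, counter jet, K_ep
P_cr_sn = 0.07 / 3.156e7 * 1e51 * 0.1          # supernova comparison, erg/s

print(f"P_e ~ {P_e:.1e} erg/s, P_CR,blobs ~ {P_cr_blobs:.1e} erg/s, "
      f"P_CR,SN ~ {P_cr_sn:.1e} erg/s")
```

The three outputs recover the quoted $2\times10^{38}$, $\approx1\times10^{41}$, and $\approx2\times10^{41}$~erg~s$^{-1}$, respectively.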
The relation $P_{\rm CR,jet}>P_{\rm CR,blobs} \approx P_{\rm CR,SN}$ indicates that the resolved parsec-scale blobs in the termination region of the kiloparsec-scale jet can be cosmic-ray sources as powerful as the star formation in NGC~1068, and would contribute to the gamma-ray excess seen in this galaxy (if $K_{\rm ep}$ in the blobs is the same as that of supernova remnants). Considering the spectral shape, maximum energy, and total energy budget, the blobs at the kiloparsec-scale jet head are likely important cosmic-ray factories. Finally, we note that there are various radio-bright regions in NGC~1068 other than the NE-Lobe region, such as the components S1 and C labeled by \citet{Gallimore1996ApJ...458..136G_table}. These regions could be alternative cosmic-ray acceleration sites and gamma-ray production regions. The component S1 has been argued to be an efficient cosmic-ray accelerator; however, the resulting gamma-ray emission would be strongly suppressed by pair-creation processes because of the intense photon field provided by the nucleus \citep[see e.g.,][]{Inoue2020ApJ...891L..33I}. Therefore, the component S1 cannot be the major gamma-ray production site. The component C would be the other remaining candidate. However, the large radio flux of the component C alone cannot be considered strong evidence for cosmic-ray acceleration. One needs several ingredients for claiming CR acceleration: (1) operation of some acceleration mechanism; (2) presence of protons (ions) in the accelerator; (3) a sufficiently high acceleration rate to boost particle energy to the PeV regime; and (4) conditions for particle escape from the acceleration site. The forward shock of the jet in NGC~1068 satisfies all these requirements, and this fortunate combination allows us to claim its contribution to the CR budget.
Furthermore, we see hints of strong magnetic field amplification, which under the expected conditions implies a significant cosmic-ray current, i.e., an indirect (but strong) sign of cosmic-ray acceleration. It is not clear whether any of the above requirements are fulfilled in the component C. Analysis of its potential requires a reliable physical model. For P1-P4, we argue that these blobs belong to the downstream region of the jet forward shock, which allows us to estimate the key physical parameters there (e.g., the magnetic field amplification factor and the flow speed). Since, to our understanding, there is no physical model for the bright region C, we leave the analysis of its cosmic-ray acceleration potential beyond the scope of this letter. \section{Summary} Supernovae are considered to be the dominant source of cosmic-ray production in galaxies. However, recent gamma-ray observations have revealed galaxies whose cosmic-ray power exceeds the calorimetric limit of their star formation activities. Because cosmic rays and their feedback processes play a crucial role in the evolution of galaxies, the identification of the cosmic-ray factories in external galaxies is intriguing. We show that the kiloparsec-scale jet observed in one such limit-breaking galaxy, NGC~1068, is a powerful cosmic-ray production site, based on high-resolution (0\farcs05) ALMA maps of the termination shock region. The radio spectrum shows a spectral index of $\approx0.5$, corresponding to an electron spectral index of $\approx2$, which is consistent with the canonical value predicted by diffusive shock acceleration. An amplified magnetic field is necessary to explain the radio flux at the blobs, and the level of amplification is consistent with the cosmic-ray streaming instability that operates in Galactic supernova remnants. 
The maximum cosmic-ray energy may reach $\approx$PeV, and the cosmic-ray power of this kiloparsec-scale jet may be greater than that estimated from the supernova rate in NGC~1068. These results suggest that cosmic rays can be generated far from the central black hole owing to interactions between the kpc-scale jet and the ISM. Considering the kiloparsec-scale jet as a new dominant cosmic-ray accelerator in a galaxy is important for comprehending the impact of cosmic rays on the evolution of galaxies. \begin{acknowledgments} T.M. and Y.I. appreciate support from NAOJ ALMA Scientific Research Grant Number 2021-17A. T.M. is supported by JSPS KAKENHI grant No. 22K14073. DK acknowledges support by JSPS KAKENHI grants No. 18H03722, 18H05463, and 20H00153. Some of the ALMA data were retrieved from the JVO portal (\url{http://jvo.nao.ac.jp/portal/}) operated by ADC/NAOJ. Data analysis was in part carried out on the common use data analysis computer system at the Astronomy Data Center, ADC, of the National Astronomical Observatory of Japan. This letter makes use of the following ALMA data: $\#$2018.1.01135, $\#$2018.1.01506, $\#$2015.1.01144, $\#$2016.1.00232, $\#$2016.1.00023. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), NSC, ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. 
\vspace{5mm} \facilities{ALMA, VLA} \software{astropy \citep{2013A&A...558A..33A,2018AJ....156..123A}, ALMA Calibration Pipeline, CASA \citep{McMullin_2007ASPC..376..127M} } \end{acknowledgments} \begin{deluxetable*}{lcccccccc} \tablenum{1} \tablecaption{Archival FITS images\label{tab:VLA_ALMA}} \tablewidth{0pt} \tablehead{ \colhead{telescope} & \colhead{freq.} & \colhead{resolution} & \colhead{rms} & \colhead{date} & \colhead{band} & \colhead{config.} & \colhead{MRS} & \colhead{ALMA ID} \\ \colhead{} & \colhead{(GHz)} & \colhead{} & \colhead{($\mu$Jy~beam$^{-1}$)} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} } \decimalcolnumbers \startdata VLA & 14.9 & $0\farcs145\times0\farcs136$ & 105 & 1983-11-03 & U & A/A & 1\farcs8 & $-$ \\ ALMA & 92.0 & $0\farcs053\times0\farcs050$ & 8 & 2019-06-22,24 & 3 & C43-9/10 & 0\farcs9 & 2018.1.01135.S \\ ALMA & 93.5 & $0\farcs46\times0\farcs34$ & 43 & 2019-09-22,23 & 3 & C43-6 & 5\farcs5 & 2018.1.01506.S \\ ALMA & 252.4 & $0\farcs150\times0\farcs103$ & 19 & 2017-07-23$^{*1}$ & 6 & C43-6$^{*2}$ & 34 & 2016.1.00023.S \\ \enddata \tablecomments{The values of observed (sky) frequency and resolution are based on the header of the downloaded FITS files. The sensitivity was measured in the emission-free region around the NE-Lobe. The MRS for VLA observations was predicted using a table available on the VLA website. The MRS for ALMA observations was based on the observatory reports (QA2 report). *1 Observations were conducted on 2017-07-23, 2017-07-24, 2017-07-27, 2017-12-27, 2018-01-01, 2018-01-19, and 2018-01-21. *2 Observations were conducted in C40-5 and C43-5 as well.} \end{deluxetable*} \clearpage \bibliography{sample631}{} \bibliographystyle{aasjournal}
Title: Applications of Machine Learning to Predicting Core-collapse Supernova Explosion Outcomes
Abstract: Most existing criteria derived from progenitor properties of core-collapse supernovae are not very accurate in predicting explosion outcomes. We present a novel look at identifying the explosion outcome of core-collapse supernovae using a machine learning approach. Informed by a sample of 100 2D axisymmetric supernova simulations evolved with Fornax, we train and evaluate a random forest classifier as an explosion predictor. Furthermore, we examine physics-based feature sets including the compactness parameter, the Ertl condition, and a newly developed set that characterizes the silicon/oxygen interface. With over 1500 supernovae progenitors from 9$-$27 M$_{\odot}$, we additionally train an auto-encoder to extract physics-agnostic features directly from the progenitor density profiles. We find that the density profiles alone contain meaningful information regarding their explodability. Both the silicon/oxygen and auto-encoder features predict explosion outcome with $\approx$90\% accuracy. In anticipation of much larger multi-dimensional simulation sets, we identify future directions in which machine learning applications will be useful beyond explosion outcome prediction.
https://export.arxiv.org/pdf/2208.01661
\title{Applications of Machine Learning to Predicting Core-collapse Supernova Explosion Outcomes} \correspondingauthor{Benny T.-H. Tsang} \email{benny.tsang@berkeley.edu} \author[0000-0002-6543-2993]{Benny T.-H Tsang} \affiliation{Department of Astronomy and Theoretical Astrophysics Center, University of California, Berkeley, CA 94720, USA} \affiliation{Kavli Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106, USA} \author[0000-0003-1938-9282]{David Vartanyan} \affiliation{Department of Astronomy and Theoretical Astrophysics Center, University of California, Berkeley, CA 94720, USA} \author[0000-0002-3099-5024]{Adam Burrows} \affiliation{Department of Astrophysical Sciences, Princeton University, NJ 08544, USA} \keywords{ \emph{Unified Astronomy Thesaurus concepts}: Core-collapse supernovae (304); Astronomy data analysis (1858); Astrostatistics Techniques (1886); Classification (1907); Random forest (1935); Convolutional neural networks (1938)} \section{Introduction} \label{sec:int} Machine learning (ML) has become an integral part of astrophysics research in the recent decade \citep{BB10,Ivezic2014,Fluke20}. In essence, ML systems are computational tools that are efficient in assimilating complex probability distributions. These distributions are ubiquitous in both observational and theoretical astronomy. For example, characteristic separation of data samples in the image domain has facilitated reliable classification of galaxies \citep{Aniyan17,Cheng20}. Similar success has been achieved in the time domain for variable star classification \citep{Naul18,vanRoestel21}. In addition, identification of outliers from data clusters of known types, a technique known as novelty or anomaly detection, enables the discovery of previously unknown objects and new classes of objects \citep{Williamson19,Villar20,Tsang19,Ishida21,Malanchev21,Bengyat22}. 
Beyond classification and detection, one can regard multi-physics simulation products themselves as the complex distributions to be learned. Emulations of computationally costly simulations can be generated quickly by sampling new data points in the latent spaces that are trained to embody the fully-fledged simulations \citep{Caldeira19,Mustafa19,Vogl20,Horowitz21}. Moreover, ML systems are powerful tools for parameter inference, connecting observables to physical parameters that are oftentimes degenerate \citep{Ksoll20,Villanueva-Domingo21,Villanueva-Domingo22}. Parameters can even be distributions themselves, e.g., the equation of state of neutron stars \citep{Krastev22}, the tensor closure for neutrino transport \citep{Harada22}, and turbulence closures for sub-grid modeling \citep{Karpov22}. However, entirely lacking is the application of ML techniques to predicting core-collapse supernovae (CCSNe) explosion outcomes. CCSNe simulations are computationally expensive endeavors in both human and machine terms, and thus are a ripe opportunity for ML application. The explosion mechanism of CCSNe through the neutrino-heating mechanism has been studied as a computational problem for more than half a century (\citealt{1966ApJ...143..626C, 1985ApJ...295...14B}), through both detailed computational simulations and much cruder prescriptive methods (e.g., imposing a thermal bomb, driving a piston, or other rudimentary prescriptions). Only in the last decade have multi-dimensional simulations become the mainstay, with various groups performing scores of two-dimensional axisymmetric computations and tens of three-dimensional simulations. 
Though population suites of CCSNe have been evolved in 2D (\citealt{burrows2018, radice2017b,vartanyan2018a, summa2016, Ertl16,2022ApJ...924...38K, 2021Natur.589...29B}) and, more selectively, 3D (\citealt{vartanyan2018b,burrows_2019,burrows2020, nagakura2019b, glas2019,oconnor_couch2018b, summa2018, 2020ApJ...896..102K, 2021MNRAS.503.4942O,2022MNRAS.510.4689V}), developing hundreds, let alone thousands, of 3D simulations may not be feasible in the coming decade. To circumvent this limitation, and in order to explore the explosion landscape by progenitor for final explosion energies, observational signatures, and nucleosynthetic compositions, various groups have developed CCSNe population studies using simplified prescriptions in reduced dimensions. Different such approaches include analytical approximations of proto-neutron star cooling (\citealt{2012ApJ...757...69U}), PUSH (\citealt{PUSH1,2021ApJ...921..143C}), simple pistons (\citealt{swbj16}), and spherically symmetric turbulence models (STIR, \citealt{2020ApJ...890..127C, 2019ApJ...887...43M}), often calibrated to SN1987a and the Crab and comparing the derived explosion outcomes with various formulated predictions (e.g., the antesonic condition, \citealt{2012ApJ...746..106P,2018MNRAS.481.3293R}, the Ertl criterion, \citealt{Ertl16}; a semi-analytical pre-SN parametrization, \citealt{2016MNRAS.460..742M}). These methods rely on simplifying approximations for both explosion modeling and explosion prediction. In light of this, the motivation of our paper is to present a summary overview of potential ML approaches to CCSNe outcome prediction as a proof-of-concept of the eventual goal $-$ developing ML techniques, trained on the results of extant multi-dimensional simulations, to predict explosion outcome while circumventing costly detailed simulations. 
Our intent here is not to be comprehensive, but rather to present a sample of the applicable methods and to galvanize the use of these techniques more broadly in the community. We wish to highlight the versatility and potential future use of ML, and identify potential difficulties and obstacles. In Section \ref{sec:methods}, we describe our methodology including the simulated dataset (Section \ref{sec:datasets}), the various physics-based explosion conditions (Section \ref{sec:physics_features}), an unsupervised feature extraction approach used to derive physics-agnostic explosion criteria (Section \ref{sec:unsupervised_features}), a baseline random forest (RF) classifier used as an explosion outcome predictor (Section \ref{sec:classifier}), and a semi-supervised label propagation approach (Section \ref{sec:lp}). In Section \ref{sec:results}, we present the results comparing the accuracy of the various features in predicting explosion outcomes. We summarize our conclusions in Section \ref{sec:conc} and identify future directions in Section \ref{sec:future}. \section{Methods} \label{sec:methods} Our goal is to survey various machine learning approaches in tandem with a selection of explosion criteria to study their value in predicting explosion outcome ab-initio. These explosion outcome predictors are trained and tested on a suite of 100 2D axisymmetric CCSNe simulations run with the radiation-hydrodynamic code F{\sc{ornax}}. F{\sc{ornax}} \citep{skinner2019} is a multi-dimensional, multi-group code constructed to study CCSNe. It features an M1 solver \citep{2011JQSRT.112.1323V} for neutrino transport with detailed neutrino microphysics and an approximation to general relativity \citep{marek2006}. \subsection{Datasets} \label{sec:datasets} We selected a subset of 100 initial progenitor models from 9$-$26.99 $M_{\odot}$ to evolve in 2D-axisymmetry (Vartanyan et al., in prep.) 
using F{\sc{ornax}} for typically one second after core bounce to ascertain their explodability (discussed in more detail in \citealt{2022arXiv220702231W}). These models were evolved with neutrino heating as the explosion mechanism, absent rotation and magnetic fields. The models had a resolution of 1024$\times$256 in $r$, $\theta$, with outer radii extending from 30,000 km for the lower-mass stellar progenitors to 100,000 km for the most massive progenitors. These models were chosen to be as representative as possible of the Salpeter initial mass function. They were selected to span broadly the distribution in density profiles, compositional interfaces, compactness, and $\mu_4$/$M_4$ (discussed below). We categorize a model as exploding if its shock radius runs away within the simulation time. Of the 100 models, 64 exploded and 36 did not. These 100 models with known explosion outcomes based on the 2D simulations will be referred to as the \emph{labeled} dataset. Our 100 models were selected from the newest stellar progenitor models in \citet{swbj16} and \citet{sukhbold2018}. The compilation contains 12 progenitors in the mass range of 9$-$11.75\,$M_{\odot}$ in increments of 0.25\,$M_{\odot}$ (from \citealt{swbj16}), and 1500 progenitors in the range of 12$-$26.99\,$M_{\odot}$ in increments of 0.01\,$M_{\odot}$, for a grand total of 1512 stellar progenitors. The 1412 progenitor models that were not evolved with F{\sc{ornax}}, and therefore do not have known explosion outcomes, are referred to as the \emph{unlabeled} dataset. All the models studied were evolved as single-star progenitors, absent binary effects. We note that we are limited by the dataset size of this ML exercise, as well as by the complexity of the physical phase-space explored. 
\subsection{Physics-based Features} \label{sec:physics_features} Due to the limited number and high dimensionality of the progenitor models, explosion outcome predictors in the form of binary classifiers cannot be well-trained using the raw stellar profiles as inputs. Instead, parameters of much lower dimension, known as \emph{features}, are obtained to represent the distinctive characteristics of the models in a process known as \emph{feature extraction}. Multiple attempts have been made to identify such explosion conditions, often ab-initio, that can serve to predict CCSNe outcome (e.g., \citealt{2011ApJ...730...70O, 2012ApJ...746..106P, dolence_2015, Ertl16, 2016MNRAS.460..742M, summa2018, 2022MNRAS.515.1610G}). These derive in heritage from some variation on the concept of a critical condition \citep{burrowsgoshy1993}, which suggests a relation between neutrino luminosity and mass accretion rate at the shock, above which unabated shock expansion concludes in explosion. Below, we summarize three types of explosion metrics whose utility in predicting explosion outcomes we explore with our ML approaches. We focus on compactness and the Ertl condition because of their widespread use, their relative simplicity, and their ab-initio nature. We also target an additional feature $-$ the role of the silicon-oxygen compositional interface. \subsubsection{Compactness} The compactness parameter characterizes the core structure and is defined as \citep{2011ApJ...730...70O}: \begin{equation} \xi_M= \frac{M/M_{\odot}}{R(M)/1000\, \mathrm{km}}\,, \end{equation} where the subscript $M$ denotes the interior mass coordinate at which the compactness parameter is evaluated. For our purposes, we evaluate the compactness parameter $\xi_{1.75}$ at $M$ = 1.75 M$_{\odot}$, generally encompassing the Si/O interface for many of the progenitor models. The compactness is often used as an ab-initio explosion condition because it depends only on the progenitor properties. 
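As a concrete illustration, the compactness parameter defined above can be evaluated from a progenitor's mass-radius profile with a short function; the function name and the linear interpolation choice are ours, not the paper's:

```python
import numpy as np

def compactness(m, r, m_eval=1.75):
    """xi_M = (M / M_sun) / (R(M) / 1000 km), per the definition above.

    m : interior mass coordinates in M_sun (monotonically increasing)
    r : corresponding radii in km
    """
    r_at_m = np.interp(m_eval, m, r)  # radius enclosing m_eval solar masses
    return m_eval / (r_at_m / 1000.0)
```

For example, a progenitor that encloses 1.75 $M_{\odot}$ within 2500 km has $\xi_{1.75} = 0.7$.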
While higher compactness is correlated with higher luminosities, accretion rates, and remnant masses \citep{oconnor2013}, compactness does not readily lend itself as an explosion condition, and suggestions that explosion is inhibited above a certain compactness parameter are false \citep{burrows2020} (with the exception that massive models may initially drive a successful shock, but then later implode into a black hole due to the large gravitational binding energy). We plot in the left panel of Figure \ref{fig:Ertl_eta_plane} the distribution of compactness versus the zero-age main sequence (ZAMS) mass of the labeled dataset, with exploding models indicated in orange and non-exploding in blue. For most progenitors the mass at which $\xi_M$ is calculated usually encompasses the Si/O interface entropy and density jump, and this is discussed below. \subsubsection{Ertl Parameter} The Ertl condition for explosion \citep{Ertl16} is another ab-initio explosion condition. It identifies a $\mu_4$ and $\mu_4 \times M_4$ space, where $\mu_4$ is a measure of the slope of the mass density at an entropy of four (per baryon per Boltzmann's constant) and $M_4$ is the interior mass at that entropy. This approximately corresponds to the location of an entropy/density jump at the Si/O interface. The Ertl condition purports to be a statement of criticality, with $\mu_4$ and $\mu_4 \times M_4$ relating indirectly to $L_{\nu}$ and $\dot{M}$, the neutrino luminosity and the mass accretion rate. We show in the right panel of Figure \ref{fig:Ertl_eta_plane} the Ertl curve suggested to separate explosion and non-explosion, overplotted with the results of our 100 2D simulations (see also \citealt{2022arXiv220702231W}). We note the poor agreement between our simulation results and the Ertl prediction, and comment on this more in Section \ref{sec:results}. 
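A sketch of how $M_4$ and $\mu_4$ can be extracted from a progenitor profile. We assume the convention of \citet{Ertl16} of evaluating the mass gradient over a 0.3 $M_{\odot}$ shell above $M_4$; the shell width and the grid handling here are our assumptions for illustration:

```python
import numpy as np

def ertl_parameters(m, r, s, dm_shell=0.3):
    """M4: interior mass (M_sun) where entropy s first reaches 4 k_B/baryon.
    mu4: normalized mass gradient dm/dr there, in M_sun per 1000 km,
    evaluated across a shell of width dm_shell above M4."""
    i4 = int(np.argmax(s >= 4.0))            # first zone with s >= 4
    M4, r4 = m[i4], r[i4]
    j = int(np.searchsorted(m, M4 + dm_shell))   # outer edge of the shell
    mu4 = (m[j] - M4) / ((r[j] - r4) / 1000.0)
    return M4, mu4
```

The pair ($\mu_4$, $M_4 \mu_4$) is then placed against the separating curve shown in the right panel of Figure \ref{fig:Ertl_eta_plane}.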
\subsubsection{Si/O Interface Parameters} Lastly, we posit a physically-motivated explosion condition that looks at prominent density interfaces (often the Si/O interface, \citealt{2022arXiv220702231W}) whose accretion by the shock surface can revive a stalled shock into successful explosion \citep{fryer1999,swbj16,burrows2018,vartanyan2018a, ott2018_rel,burrows_2019,2021Natur.589...29B,2021ApJ...916L...5V,Boccioli22}. A sharp drop in density translates into an immediate drop in ram pressure at the shock surface upon encountering this interface, whereas the accretion-powered luminosity interior to the shock is sustained for an advective timescale (\citealt{2022arXiv220702231W}). This drop in ram pressure, while a higher luminosity is maintained, promotes explosion and may be key to explosion for massive stars. Lower-mass models of $\approx$9$-$10 $M_{\odot}$ may explode simply by virtue of their very steep density profiles. We identify the location in mass coordinate $M_{\rm SiO}$ and the magnitude of the density jump $\Delta\rho_{\rm SiO}$ across such interfaces in all 1512 models in the full progenitor dataset. For each of the models studied here, we identified the Si/O or other prominent interface by looking for the steepest drop in density in the stellar progenitor profile exterior to the iron core. The stellar density may drop by as much as 2$-$3 times over less than 0.01 M$_{\odot}$. The extraction of the interface features can be complicated by the presence of multiple, fragmented burning shells (see also \citealt{2021ApJ...916L...5V, laplace2021} for a similar conclusion regarding the density profiles of binary stars). \citet{sukhbold2018} identify multiple burning shells during late-stage stellar evolution, where the physics is poorly resolved and the results are prone to stochasticity (see also \citealt{2022arXiv220702231W}). 
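The interface search just described (steepest density drop exterior to the iron core) can be sketched as follows; the fixed core-mass cut and the zone-to-zone definition of the drop are simplifying assumptions on our part:

```python
import numpy as np

def find_interface(m, rho, m_core=1.3):
    """Return (M_SiO, density ratio) for the steepest density drop
    exterior to an assumed iron-core mass m_core (in M_sun)."""
    outside = m > m_core
    mm, x = m[outside], np.log10(rho[outside])
    i = int(np.argmin(np.diff(x)))           # most negative zone-to-zone step
    return mm[i], 10.0 ** (x[i] - x[i + 1])  # ratio > 1 across a drop
```

Applied to a profile with a factor-of-3 density jump near 1.7 $M_{\odot}$, the function recovers both the location and the magnitude of the drop.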
The merging of two such shells into a single, steeper shell produces a more prominent density drop conducive to successful core-collapse explosions. We plot in Figure \ref{fig:SiO_param_plane} the Si/O interface mass coordinate versus the magnitude of the density drop for the labeled dataset. We see a nonuniform distribution of both compactness (in Figure \ref{fig:Ertl_eta_plane}) and the Si/O interface parameters, with clustering and multiple branches (see also \citealt{sukhbold2018,2016MNRAS.460..742M,2022arXiv220702231W}), as well as multi-valued outcomes (explosion and not) in a small range of the plotted phase space. Using the Si/O interface yields a clearer delineation between explosion and non-explosion than compactness, but in both cases we see degeneracy in the outcome for the given putative explosion criteria. \subsection{Unsupervised Feature Extraction} \label{sec:unsupervised_features} The physics-based feature sets described in Section \ref{sec:physics_features} are not always very effective. Modern auto-encoder neural network architectures offer an alternative data-driven, physics-agnostic approach to extracting relevant features in an unsupervised manner. Auto-encoders consist of two main components: an encoder and a decoder. The encoder is designed to take an input vector and convert it into a feature vector of much lower dimension. The decoder, on the other hand, attempts to reconstruct the input from the feature vector. By training the encoder-decoder pair to match the input and the reconstruction, the auto-encoder learns to capture the important information in the input without human intervention. Example usage in astronomy includes variable star \citep{Naul18,Tsang19} and galaxy \citep{Portillo20} classification, anomaly detection for supernova light curves \citep{Villar20}, detection of strong lensing features in images \citep{Cheng20}, and the denoising of radio images \citep{Gheller22}. 
Auto-encoders essentially serve as an apparatus for data compression, allowing ML systems to operate more effectively on the much lighter-weight feature vectors rather than the raw inputs. Here, we explore the application of auto-encoders to extracting representative features directly from the density profiles of the stellar progenitor models. To this end, we implement a basic auto-encoder network in \textsc{PyTorch} \citep{pytorch19}. We adopt three one-dimensional convolutional layers (\textsc{Conv1d}) as the main components of the encoder. The convolutional layers are designed to preserve the spatial information of the mass distribution in the stellar density profiles. The decoder is constructed using three corresponding \textsc{ConvTranspose1d} layers. The hyperbolic tangent function ($\tanh$) is used as the nonlinear activation function after each \textsc{Conv1d} and \textsc{ConvTranspose1d} layer. After passing through the encoder's final layer, the resultant vector is commonly known as the \emph{embedding}, which is of much smaller dimension than the input sequence. The embedding vectors can be regarded as the reduced-dimension feature vectors that can be used for other downstream tasks. By construction, the $\tanh$ activation function produces embedding vectors $\mathbf{z} \in [-1, 1]^{d_{\rm z}}$, where $d_{\rm z}$ is the embedding dimension. In our case study, we focus on the explosion outcome prediction task, which is set up as binary classification. The auto-encoder architecture is presented in the schematic diagram in Figure \ref{fig:AE_schematic}. To explore the learning capacity of the auto-encoder, we vary the dimension of the embedding vector by adjusting the strides of the convolutional layers, covering embedding dimensions of $d_{\rm z} = 2$ to 32 in factors of 2. The number of total trainable parameters (weights and biases) depends solely on the kernel and filter sizes. 
Since we do not vary those network parameters, the auto-encoders we use in this work contain a constant total of 106 trainable parameters. The kernel size, stride, and the breakdown of the number of parameters in each layer are annotated in Figure \ref{fig:AE_schematic}. We keep the network size to a minimum to highlight the utility of a simple auto-encoder. We use the 1412 unlabeled models as the training set of the auto-encoder models. In other words, the auto-encoder is only tasked to learn the representation of the density profiles without regard to their explodability. We truncate the density profiles and consider only mass coordinates between $M_{\rm min}$ = 1\,$M_{\odot}$ and $M_{\rm max}$ = 2.3\,$M_{\odot}$. Interior to $M_{\rm min}$, matter collapses onto the proto-neutron star and lies interior to the stalled shock surface. On the high-mass end, it is rare for relevant interfaces in the studied ZAMS distribution to exist beyond $M_{\rm max}$ and still accrete on relevant timescales for neutrino-driven CCSNe. The density profiles of the \textsc{Kepler} models with different ZAMS masses (\citealt{swbj16,sukhbold2018}) used as F{\sc{ornax}} supernova progenitors vary in grid resolution between $M_{\rm min}$ and $M_{\rm max}$, ranging typically from 800 to 1200 zones. To standardize the dimension of the input profiles, we interpolate and re-bin the logarithm of the truncated density profiles onto a uniform linear mass grid with $N_{m} = 128$ points. The reduced dimension of 128 is adequate for capturing the sharp jumps in density in the progenitor density profiles (see the solid lines in the top panels of Figure \ref{fig:ae_reconstructions}). To isolate trends, we subtract the means from the profile segments and normalize them independently. 
Mathematically, for each progenitor, the normalized density profile takes the form: \begin{align} \hat{x}_{m} = (x_{m} - \langle x_{m} \rangle) / (\max(x_{m}) - \min(x_{m}))\,, \end{align} where $m$ is the integer index of the uniform mass grid, $x_{m} = \log_{10}(\rho_{m})$ is the logarithmic mass density of the $m$-th mass bin, and $\langle \cdot \rangle$ denotes the mean value over the mass grid. The 128-dimension density profile segments \{$\hat{x}_{m}$\} are used as inputs to the auto-encoders. The mean squared error (MSE) between the input and the reconstructed density profiles \{$\tilde{x}_{m}$\} is used as the loss function for training: $L_{\rm AE} = \sum_{m}(\hat{x}_{m} - \tilde{x}_{m})^2/N_{m}$. Weights of the convolutional layers are initialized using the \texttt{kaiming\_normal\_} initializer \citep{He15} in \textsc{PyTorch}. Biases are initialized as zeros. Optimization is done using the \textsc{Adam} optimizer \citep{KB14}. Training is conducted with a constant batch size of 100 and a learning rate of $10^{-2}$ for 500 epochs. Since our key goal is to present the utility of physics-agnostic features, we did not perform a systematic hyperparameter study to optimize the auto-encoder. Due to the limited size of the labeled dataset, we also did not attempt to train the auto-encoder simultaneously with a binary classifier for explosion outcome prediction. These will be instructive follow-up studies when more comprehensive datasets are available. \subsection{Classifier Training} \label{sec:classifier} Using the features obtained in Sections \ref{sec:physics_features} and \ref{sec:unsupervised_features}, we train explosion outcome predictors in the form of binary classifiers. We adopt the \textsc{sklearn} implementation of \textsc{RandomForestClassifier} as a common classifier baseline. During training, we adopt a 5-fold, 80/20 split to divide the labeled dataset (with 100 models) into training and testing sets. 
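A minimal \textsc{PyTorch} sketch of the preprocessing and auto-encoder described in Section \ref{sec:unsupervised_features}. The kernel sizes, channel counts, and strides below are illustrative stand-ins, not the paper's exact 106-parameter configuration (which is annotated in Figure \ref{fig:AE_schematic}); this particular choice yields $d_{\rm z} = 8$:

```python
import numpy as np
import torch
import torch.nn as nn

def preprocess(m, rho, m_min=1.0, m_max=2.3, n_bins=128):
    """Truncate, re-bin, and normalize a log-density profile (x_hat_m above)."""
    grid = np.linspace(m_min, m_max, n_bins)
    x = np.interp(grid, m, np.log10(rho))
    return (x - x.mean()) / (x.max() - x.min())

class ProfileAutoEncoder(nn.Module):
    """Three Conv1d encoder layers and three ConvTranspose1d decoder layers,
    with tanh activations; the final encoder tanh keeps z in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 2, 5, stride=2, padding=2), nn.Tanh(),   # 128 -> 64
            nn.Conv1d(2, 2, 5, stride=2, padding=2), nn.Tanh(),   # 64 -> 32
            nn.Conv1d(2, 1, 5, stride=4, padding=2), nn.Tanh(),   # 32 -> 8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(1, 2, 5, stride=4, padding=2, output_padding=3),
            nn.Tanh(),                                            # 8 -> 32
            nn.ConvTranspose1d(2, 2, 5, stride=2, padding=2, output_padding=1),
            nn.Tanh(),                                            # 32 -> 64
            nn.ConvTranspose1d(2, 1, 5, stride=2, padding=2, output_padding=1),
        )                                                         # 64 -> 128

    def forward(self, x):
        z = self.encoder(x)            # embedding, shape (B, 1, d_z)
        return self.decoder(z), z

# One training step with the MSE loss and Adam, on a synthetic profile.
m = np.linspace(0.8, 3.0, 1000)
rho = 1e7 * np.exp(-3.0 * m)
x = torch.tensor(preprocess(m, rho), dtype=torch.float32).view(1, 1, -1)
model = ProfileAutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)
loss.backward()
opt.step()
```

In practice the batch would contain the 1412 unlabeled profiles, and the loop would run for 500 epochs as stated above.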
The training/testing partitions are generated using the \textsc{StratifiedKFold} function of the \textsc{sklearn} package. A constant seed is used for the 5-fold random split, resulting in the same partitions of training/testing data across classifiers trained with different feature sets. To allow fair comparisons between classifiers trained with different feature sets, we use a relatively simple RF setup with fixed parameters: \texttt{n\_estimators = 5}, \texttt{criterion = {`gini'}}, \texttt{max\_depth = 3}, \texttt{min\_samples\_leaf = 2}, and \texttt{max\_features = {`sqrt'}}. The RF classifiers therefore all have a fixed number of five decision trees. With the feature sets we explored in Section \ref{sec:pred_results}, our RF parameters lead to about 7$-$10 nodes per tree partitioning the feature spaces, or about 35$-$50 total nodes. We use the accuracy, precision, recall, and the F1 score to assess the performance of the classifiers. \subsection{Semi-supervised Learning with Label Propagation} \label{sec:lp} During training, classifiers often require sufficiently large datasets to sample the distributions of various object classes in the feature space. Labeled datasets, in our case multi-dimensional simulations, are costly to produce in both computer and human hours. Alternately, unlabeled datasets, in our application 1D progenitor models, are usually much cheaper to obtain. Semi-supervised learning is a hybrid approach devised to use a limited sample of labeled data to assign mock labels to a larger, unlabeled dataset based on some distance metric in the feature space. The hope is that by propagating the labels to the larger unlabeled dataset and incorporating it in training, the classifier can better learn the data distribution and achieve higher overall prediction accuracy. 
Semi-supervised approaches rely on a key presumption that the distributions of different classes are continuous, i.e., data samples close together in the feature space are likely to be of the same class. However, as we have seen in Figures \ref{fig:Ertl_eta_plane} and \ref{fig:SiO_param_plane}, there are complex branches and overlaps associated with degeneracy and/or modeling stochasticity. Nevertheless, in this paper we attempt the label propagation technique to probe the potential utility of semi-supervised learning approaches in improving explosion outcome prediction. The procedure of our label propagation study is as follows. With each feature set selection and in each cross-validation split, we repeat the RF classifier training described in Section \ref{sec:classifier} after (i) randomly removing 50\% or 75\% of the known explosion outcomes in the training split (of size 80) and (ii) re-labeling them based on the distances to their neighbors whose explosion outcomes are retained. In other words, we pretend that our labeled dataset is 50/75\% smaller than it is and allow the label propagation algorithm to re-create an 80-model training set for the RF classifier. Evaluation of prediction performance is still done using the 20-model testing splits. We employ the \textsc{LabelSpreading} model in \textsc{sklearn} for this task. Fundamentally, \textsc{LabelSpreading} works by building a fully-connected graph connecting all the data samples and propagating labels based on the pairwise distances. To limit the scope of this exercise, we adopt a constant set of label propagation parameters. In particular, we use the K-nearest-neighbor kernel as the distance metric (\texttt{kernel="knn"}) with \texttt{n\_neighbors = 5}. A soft clamping factor of \texttt{alpha = 0.1} is used to allow the algorithm to change at most 10\% of the retained labels, to account for the stochasticity in the explosion simulations. 
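The cross-validated pipeline of Sections \ref{sec:classifier} and \ref{sec:lp} can be sketched with \textsc{sklearn} as follows. The synthetic two-feature data stand in for a feature pair such as ($M_{\rm SiO}$, $\Delta\rho_{\rm SiO}$) and are our own placeholder, not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                  # placeholder feature pairs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder outcomes

scores = []
for train, test in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
    # Drop 50% of the training labels (-1 marks "unlabeled" in sklearn) ...
    y_masked = y[train].copy()
    dropped = rng.choice(len(train), size=len(train) // 2, replace=False)
    y_masked[dropped] = -1
    # ... propagate pseudo-labels back onto the full training split ...
    lp = LabelSpreading(kernel="knn", n_neighbors=5, alpha=0.1)
    lp.fit(X[train], y_masked)
    y_prop = lp.transduction_
    # ... then train the fixed-parameter RF baseline and evaluate.
    clf = RandomForestClassifier(n_estimators=5, criterion="gini", max_depth=3,
                                 min_samples_leaf=2, max_features="sqrt",
                                 random_state=0)
    clf.fit(X[train], y_prop)
    scores.append(accuracy_score(y[test], clf.predict(X[test])))
```

Dropping 75\% of the labels instead corresponds to \texttt{size = 3 * len(train) // 4} in the masking step.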
Pseudo-labels are assigned to the samples with their explosion outcomes removed via the \texttt{transduction\_} attribute. \section{Results} \label{sec:results} \begin{table*} \centering \begin{tabular}{ccccc} \hline \hline {Features} & {Accuracy} & Precision & Recall & F1 Score \\ \hline $\langle x_{m} \rangle$, $\sigma_{x}$ & 0.68 $\pm$ 0.12 & 0.68 $\pm$ 0.13 & 0.69 $\pm$ 0.13 & 0.67 $\pm$ 0.12 \\ $\eta_{\rm 1.75}$ & 0.83 $\pm$ 0.08 & 0.84 $\pm$ 0.06 & 0.86 $\pm$ 0.06 & 0.83 $\pm$ 0.07 \\ $\mu_{4}$, $M_{4} \mu_{4}$ & 0.70 $\pm$ 0.09 & 0.69 $\pm$ 0.08 & 0.69 $\pm$ 0.08 & 0.69 $\pm$ 0.08 \\ $M_{\rm SiO}$, $\Delta \rho_{\rm SiO}$ & 0.89 $\pm$ 0.10 & 0.89 $\pm$ 0.09 & 0.91 $\pm$ 0.08 & 0.89 $\pm$ 0.10 \\ Auto-encoder ($d_{z} = 2$) & 0.77 $\pm$ 0.07 & 0.79 $\pm$ 0.07 & 0.75 $\pm$ 0.07 & 0.74 $\pm$ 0.07 \\ Auto-encoder ($d_{z} = 8$) & 0.84 $\pm$ 0.06 & 0.84 $\pm$ 0.07 & 0.83 $\pm$ 0.06 & 0.83 $\pm$ 0.06 \\ \hline \end{tabular} \caption{Table summarizing the performance scores of explosion outcome prediction using different feature sets. Each row corresponds to a different selection of feature parameter(s) used in the training and evaluation of the classifiers. Errors shown are the standard deviations of the respective scores.} \label{tab:class_perf_global} \end{table*} \subsection{Auto-encoder Performance} \label{sec:ae_results} Even with only 106 trainable parameters, the auto-encoders with different embedding dimensions converge efficiently to an MSE loss of $10^{-4} - 10^{-3}$ in fewer than 50 epochs. To visualize the representation performance of the auto-encoders, we compare examples of the input and the reconstructed density profiles in the top panels of Figure \ref{fig:ae_reconstructions}. With $d_{\rm z} = 2$, the reconstructed profiles miss some of the sharper interface transitions, but trace the overall trends of the profiles quite well. With $d_{\rm z} \ge 8$, the density profiles are all well captured by the auto-encoders.
In the bottom panels of Figure \ref{fig:ae_reconstructions}, we show the (reconstruction $-$ input) error from all the 100 models in the labeled dataset. The maximum error is about 0.1 for the auto-encoder with an embedding size of $d_{\rm z} = 2$, whereas for $d_{\rm z} = 8$ the typical deviations are less than about $0.05$. We emphasize that the representation performance of the auto-encoder architecture can likely be improved or fine-tuned with a more thorough study. To establish the efficacy of the physics-agnostic features, we choose $d_{\rm z} = 8$ as the fiducial auto-encoder and report the prediction performance in the next section. \begin{table*} \centering \begin{tabular}{cccccc} \hline \hline {Features} & Fully Supervised & \multicolumn{2}{c}{50\% Dropped} & \multicolumn{2}{c}{75\% Dropped} \\ & & No LP & LP & No LP & LP \\ \hline $\langle x_{m} \rangle$, $\sigma_{x}$ & 0.68 $\pm$ 0.12 & 0.65 $\pm$ 0.14 & 0.71 $\pm$ 0.14 & 0.61 $\pm$ 0.06 & 0.63 $\pm$ 0.08\\ $\eta_{\rm 1.75}$ & 0.83 $\pm$ 0.08 & 0.81 $\pm$ 0.07 & 0.78 $\pm$ 0.07 & 0.77 $\pm$ 0.10 & 0.74 $\pm$ 0.12 \\ $\mu_{4}$, $M_{4} \mu_{4}$ & 0.70 $\pm$ 0.09 & 0.68 $\pm$ 0.11 & 0.67 $\pm$ 0.08 & 0.62 $\pm$ 0.08 & 0.65 $\pm$ 0.04 \\ $M_{\rm SiO}$, $\Delta \rho_{\rm SiO}$ & 0.89 $\pm$ 0.10 & 0.84 $\pm$ 0.07 & 0.89 $\pm$ 0.09 & 0.82 $\pm$ 0.07 & 0.84 $\pm$ 0.12 \\ Auto-encoder ($d_{z} = 2$) & 0.77 $\pm$ 0.07 & 0.76 $\pm$ 0.04 & 0.72 $\pm$ 0.07 & 0.71 $\pm$ 0.09 & 0.69 $\pm$ 0.06 \\ Auto-encoder ($d_{z} = 8$) & 0.84 $\pm$ 0.06 & 0.74 $\pm$ 0.15 & 0.83 $\pm$ 0.09 & 0.71 $\pm$ 0.10 & 0.73 $\pm$ 0.06 \\ \hline \end{tabular} \caption{Table comparing the accuracy scores of different feature sets with 50\% or 75\% of the labels dropped in the RF classifier training set. The Fully Supervised column corresponds to the accuracy using the full labeled training sets (Table \ref{tab:class_perf_global}). 
The `no LP' columns list the prediction performance trained directly from the reduced-size training set, while the `LP' columns show the performance with label propagation applied to the samples with the labels dropped.} \label{tab:lp_results} \end{table*} \subsection{Explosion Outcome Prediction} \label{sec:pred_results} We choose four sets of physics-based features for the explosion outcome prediction task, as listed in the left column of Table \ref{tab:class_perf_global}. The first feature set ($\langle x_{m} \rangle$, $\sigma_{x}$) comprises the mean and standard deviation of the truncated logarithmic density profiles, representing the most basic summary statistics of the density profiles. The remaining three sets correspond to the physics-based features described in Section \ref{sec:physics_features}. We train and evaluate a series of RF classifiers using our labeled dataset. Performance scores are listed in Table \ref{tab:class_perf_global}, with the errors denoting the standard deviations over the five cross-validation splits. We find that our Si/O interface parameter set provides the best prediction of explosion outcome, with a prediction accuracy of 0.89. By comparison, $\eta_{\rm 1.75}$ yields an accuracy of 0.83 and the Ertl condition 0.70. Due to the limited size of the labeled set, there are $\approx$10\% fluctuations in the performance scores between different cross-validation splits, as well as among repetitions of the RF training (from the randomness in tree-building). However, the general trend of performance across feature sets is robust. The embedding feature vector extracted by the fiducial auto-encoder gives a prediction accuracy of 0.84, comparable to both the Si/O interface and the compactness parameter. Even with a reduced embedding dimension of $d_{\rm z} = 2$, the auto-encoder feature vector still outperforms the Ertl parameters.
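As an illustrative stand-in for extracting such embedding features, a bottleneck network trained to reconstruct its own input can be built with \textsc{sklearn}'s \texttt{MLPRegressor}; this is an assumption-laden sketch (the paper's actual auto-encoder architecture, activation, and profiles differ), with mock log-density profiles containing a step "interface".

```python
# Illustrative auto-encoding stand-in: an MLPRegressor trained with
# target = input and a single bottleneck layer of d_z = 8 units.
# The mock "density profiles" and architecture are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_bins = 64
x = np.linspace(0, 1, n_bins)
# Mock log-density profiles: smooth declines with a step ("interface").
profiles = np.array([
    -3 * x + a * (x > b) for a, b in
    zip(rng.uniform(-1, 0, 100), rng.uniform(0.2, 0.8, 100))])

ae = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                  solver="adam", max_iter=2000, random_state=0)
ae.fit(profiles, profiles)                   # auto-encoding: X -> X

# The bottleneck activations serve as the physics-agnostic features.
z = np.tanh(profiles @ ae.coefs_[0] + ae.intercepts_[0])
recon_err = np.abs(ae.predict(profiles) - profiles).max()
```

The embedding matrix `z` (here 100 models by 8 features) plays the role of the auto-encoder feature vectors fed to the RF classifier.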
This highlights that \emph{features obtained from the density profiles in an unsupervised, physics-agnostic manner can offer classification performance competitive with physics-based features}. Identifying a prominent Si/O interface can be difficult, particularly with low-mass models, perhaps explaining some of the misclassifications. We find that stars between 12$-$15 M$_{\odot}$ lack prominent Si/O interfaces and tend to be more difficult to explode. According to \cite{sukhbold2018}, stars in this range may have multiple smaller, fragmented interfaces as well. We emphasize that our conclusion depends sensitively on both the progenitor profile and the neutrino microphysics included (see, for instance, \citealt{burrows_2019}). Regardless, multiple studies have indeed found that models within this mass range are generally less likely to explode \citep{burrows_2019, vartanyan2018a, oconnor_couch2018a, summa2016, 2021Natur.589...29B, 2022arXiv220702231W}. The accuracy of the compactness parameter is surprising at first sight, given that studies have found no simple correlation between compactness and explosion outcome \citep{burrows2020, 2021Natur.589...29B}. Indeed, we see no monotonic dependence of explosion outcome on compactness in Figure \ref{fig:Ertl_eta_plane}. Rather, the RF classifier identifies the non-linear mapping between compactness and explosion outcome from the training samples. Such a non-linear dependence is suggestive of additional underlying physics not captured by the compactness parameter, perhaps in the nuances of the density profile. Regardless of the metric used, we find predictive accuracies of roughly 70\% or above, indicating that all the metrics considered contain some physical information about the explosion outcome. The Ertl parameters only marginally outperform simply using the mean and standard deviation of the density profile.
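For reference, the simplest feature set, the mean and standard deviation of a truncated logarithmic density profile, amounts to a few lines of \textsc{numpy}; the power-law profile and truncation radius below are illustrative assumptions, not the paper's actual grid.

```python
# Minimal sketch of the simplest summary features: the mean <x_m> and
# standard deviation sigma_x of a truncated log-density profile.
# The mock profile and truncation radius are illustrative assumptions.
import numpy as np

r = np.linspace(1e8, 1e10, 500)          # radius grid [cm], assumed
rho = 1e7 * (r / 1e8) ** -2.5            # mock power-law density [g/cc]

mask = r < 5e9                           # truncate the profile
log_rho = np.log10(rho[mask])

x_mean = log_rho.mean()                  # <x_m>
x_std = log_rho.std()                    # sigma_x
features = np.array([x_mean, x_std])
```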
While both the Ertl parameters and the compactness contain information about the density of the progenitor, the former may obfuscate it through analytical complication, while the latter, which performs better, is oversimplified. Categorizing the density profile through prominent interfaces seems to be the best approach thus far to predicting explosion outcome. \subsection{Utility of Label Propagation} \label{sec:label_propagation} Table \ref{tab:lp_results} summarizes the results of the label propagation study. Unsurprisingly, classifiers trained with fewer labeled samples tend to have poorer prediction performance across feature sets. Even with 75\% of the labeled training samples dropped, i.e., with only 20 training samples, the classifiers still preserve reasonable prediction accuracy scores of about 0.6$-$0.8 (the `no LP' columns). This suggests that the feature spaces we investigated can be sampled reasonably well with about 20$-$40 models, and that most of the mis-classifications reside in the overlaps of branches, which may be resolved by additional training samples. Label propagation offers only marginal improvements of a few percentage points across different feature sets. In some cases, e.g., with $\eta_{1.75}$ and auto-encoder features of embedding dimension $d_{\rm z} = 2$, label propagation can even diminish prediction performance. Such a reduction in accuracy can again be attributed to the complex discontinuities in the feature spaces and the stochasticity in modeling. With a small number of training samples, the classifiers can sometimes be misled by a single training sample to mis-classify large parts of the feature space. With a much larger dataset, the overlapping outcome branches will be better distinguished. We expect label propagation to be more effective in improving prediction accuracy with feature spaces that are smoother and less susceptible to model stochasticity.
The unsupervised approach of feature extraction holds promise in uncovering such feature spaces. \section{Conclusions}\label{sec:conc} We explored the utility of a machine learning framework in predicting the explosion outcomes of massive stars based on their 1D progenitor models. We trained and evaluated a basic random forest classifier as an explosion predictor using both physics-based and physics-agnostic features. In particular, we investigated the commonly used compactness parameter and Ertl conditions, a new feature set that quantifies the location and the extent of the density drop at the silicon/oxygen interface, and auto-encoder features generated from the progenitor density profiles in an unsupervised manner. Applied to a set of 100 2D radiation hydrodynamical F{\sc{ornax}} simulations, we found that the new silicon/oxygen interface feature set has the best predictive power, with an accuracy of $\approx$90\%, outperforming the compactness and the Ertl condition. More importantly, using the physics-agnostic auto-encoder features, we obtained a predictive accuracy of $\approx$84\%, second only to the silicon/oxygen interface features. The competitive predictive performance of the auto-encoder features revealed that the density profiles alone contain meaningful information about the explodability of the stellar progenitors. This suggests that exploration of the clusters and branches in the reduced-dimension embedding space holds promise in uncovering the underlying progenitor properties that foreshadow explosions. With more multi-dimensional explosion simulations in the near future, we expect the unsupervised approach to representing progenitor models to be profitable in the task of identifying more robust explosion physics. \section{Caveats and Future Prospects} \label{sec:future} The conclusions cited here assume neutrino heating as the dominant explosion mechanism, as is widely understood to be the case for the majority of garden-variety CCSNe.
Explosion outcome depends on a confluence of factors (details of the code, simulation dimensions, stochasticity) and physical uncertainties (microphysics, progenitor structure, convection, nuclear burning rates). Our model suite can be extended to include additional physics, for instance magnetorotational effects, to capture more of the physical parameter space yet unexplored in CCSN simulations. Importantly, the stellar progenitors used were all spherically symmetric, one-dimensional models. Only recently (\citealt{2016ApJ...833..124M,zha2019, takahashi2019,fields2020,fields2021,fields2022}) have multi-dimensional progenitor models become available for simulation (\citealt{muller:18, muller_lowmass, 2022MNRAS.510.4689V,2022MNRAS.513.1317Z}). CCSN simulations are sensitive to the ambient perturbations in the progenitor model (see, e.g., \citealt{burrows_2019}). The structure of the prominent compositional interfaces, and hence explosion outcome, morphology, and nucleosynthetic yields, will differ between 3D and 1D progenitor models. The relevant physical parameter space for explosion outcome is both very large and poorly constrained. A strong interface is difficult to resolve in many progenitors, and is sometimes absent altogether. Additional constraints, perhaps involving the helium core mass or some other characterization of the density profile (\citealt{sukhbold2018,2022arXiv220702231W}), are needed to break the degeneracy in predicting explosion outcome. We focused exclusively on the density profile when computing both the physics-based and physics-agnostic auto-encoder features, but we could expand the feature sets to also include the temperature, electron-fraction, and/or entropy profiles from the progenitor stellar models. For example, multiple profiles can be readily incorporated as different input channels in the auto-encoder architecture.
Our main goal is to demonstrate the usefulness of the unsupervised feature extraction approach. We therefore did not conduct a thorough hyperparameter study for the auto-encoder architecture. Exploring the utility of transformers \citep{Vaswani17}, another neural network architecture that is effective in representing sequential data, is also a promising future direction. Furthermore, our dataset was limited in size. We selected from approximately 1500 progenitor models and trained on 100 axisymmetric 2D simulations. Although explosion outcome does not seem to differ greatly between 2D and 3D simulations (\citealt{vartanyan2018b, 2021Natur.589...29B}), a significantly larger catalog of simulations, even in axisymmetry, would better populate the distribution of density profiles by progenitor, perhaps better resolving the clustering and outcome branches (\citealt{2016MNRAS.460..742M,sukhbold2018}) seen in the different phase spaces explored here. At the very least, we would need tenfold more simulations (thousands), even in 2D, to have a more balanced and comprehensive dataset. Yet even with the limited dataset and our simple approach to identifying both a physical criterion of interest and apposite ML techniques, we obtained promising results. Machine learning applications are not limited to a simple binary determination of explosion outcome. Regression models can enable prediction of explosion diagnostics, such as the energy and ejecta composition, for a given physical setup and progenitor model. Inverse modeling can facilitate the reconstruction of progenitor properties from observables. Data transformation and image segmentation, which have already seen some use in categorizing observations, can be used to characterize morphological features of supernova remnants, such as nickel bullets and voids/clustering in the ejecta, which, jointly with inverse modeling, can characterize the structure of the stellar progenitor and its evolutionary history.
Machine learning techniques provide an invaluable perspective from which to map the initial stellar mass function to the distribution of compact remnants, i.e., black holes and neutron stars (partitioned between failed and successful supernovae), and are exquisitely suitable for the upcoming era of all-sky surveys. The effort here presents a first step in this direction. \software{ \textsc{Kepler} \citep{swbj16}, F{\sc{ornax}} \citep{skinner2019}, Jupyter \citep{Kluyver16}, Matplotlib \citep{Hunter07}, NumPy \citep{Oliphant06}, \textsc{sklearn} \citep{scikit-learn}. } \section*{Acknowledgements} We are grateful to Daniel Kasen, Tianshu Wang, and Matthew Coleman for valuable insights and discussion. This research was funded by the Gordon and Betty Moore Foundation through Grant GBMF5076, and by NASA awards ATP-80NSSC18K0560 and ATP-80NSSC22K0725. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. We acknowledge support from the U.S. Department of Energy Office of Science and the Office of Advanced Scientific Computing Research via the Scientific Discovery through Advanced Computing (SciDAC4) program and Grant DE-SC0018297 (subaward 00009650) and support from the U.S. NSF under Grants AST-1714267 and PHY-1804048 (the latter via the Max-Planck/Princeton Center (MPPC) for Plasma Physics). A generous award of computer time was provided by the INCITE program. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. We are also grateful for our computational resources through the Texas Advanced Computing Center (TACC) at The University of Texas at Austin via Frontera Large-Scale Community Partnerships under grant SC0018297 as well as the Leadership Resource Allocation under grant number 1804048.
In addition, this overall research project was part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters was a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. This general project was also part of the ``Three-Dimensional Simulations of Core-Collapse Supernovae" PRAC allocation support by the National Science Foundation (under award \#OAC-1809073). Moreover, we acknowledge access under the local award \#TG-AST170045 to the resource Stampede2 in the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Finally, the authors employed computational resources provided by the TIGRESS high performance computer center at Princeton University, which is jointly supported by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Princeton University Office of Information Technology, and acknowledge our continuing allocation at the National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the US Department of Energy (DOE) under contract DE-AC03-76SF00098. Use was made of computational facilities purchased with funds from the National Science Foundation (CNS-1725797) and administered by the Center for Scientific Computing (CSC). The CSC is supported by the California NanoSystems Institute and the Materials Research Science and Engineering Center (MRSEC; NSF DMR 1720256) at UC Santa Barbara. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{aasjournal} \bibliography{References} \ifMNRAS \bsp % \label{lastpage} \else \fi
Title: A precursor interpretation for the Crab supernova 1054 A.D. very early light curve
Abstract: Although the Crab supernova of 1054 A.D. has been studied over the years, it is still not clear what type of event produced the explosion. The most detailed and reliable source for the observed light curve is the record in the Chinese and Japanese chronicles, which suggests a quick dimming after a very bright initial peak. We shall show in this work that the Crab event can be well explained by a type of precursor, a phenomenon emerging from large supernova samples and quite expected from stellar-evolution considerations for low-mass progenitors. This very early bright transient is followed by the explosion itself, likely a low-luminosity supernova of the ``small iron core'' type rather than an electron-capture event. The latter conclusion stems from recent simulation work, which predicts that an electron-capture supernova would have been much brighter for $\sim 3$ months, hence visible during daytime, and would not match the Chinese records.
https://export.arxiv.org/pdf/2208.04420
\title{A precursor interpretation for the Crab supernova 1054 A.D. very early light curve% } \titlerunning{Precursor interpretation of SN 1054 A.D. } % \author{J.E. Horvath$^{1}$} \institute{J.E. Horvath \\ {foton@iag.usp.br}\\ $^{1}$ Universidade de S\~ao Paulo, Department of Astronomy IAG-USP\\ R. do Mat\~ao 1226, 05508-090, Cidade Universit\'aria, S\~ao Paulo SP, Brazil\\ Tel.: +55-11-30912710\\ Fax: +55-11-30912806\\ } \date{Received: date / Accepted: date} \section{Introduction} \label{intro} The supernova of 1054 A.D., which originated the Crab nebula and its associated pulsar, is one of the most widely known and studied events in astronomy. The connection of the Crab nebula with the historical supernova observed in 1054 A.D., with records in Chinese, Japanese and Western sources, provided the best example of Baade and Zwicky's (1934) suggestion of the formation of neutron stars in {\it core-collapse} supernovae. However, this paradigm led to a series of questions, because the observed supernova presents some puzzling features not seen in any other case in history. Among the latter we can count the lack of clear evidence of the supernova remnant itself, precluding a more direct identification of the type of event, and the reconstructed early light curve (mainly from Chinese sources), which has been difficult to understand in general. On the theoretical side, since the work of Nomoto and collaborators (1982), the Crab has been seriously considered as a candidate {\it electron-capture supernova} (ECSN), in which a super-AGB star explodes after the capture of electrons in an $O-Ne-Mg$ degenerate core. The progenitors of these super-AGB stars are expected to have $8-10 \, M_{\odot}$ on the main sequence (MS), although the exact values depend on metallicity (Doherty et al. 2017). Neutron star formation is expected from many of these events as well.
Alternatively, the collapse of low-mass iron cores (CCSNe) could lead to similar explosions and leave a neutron star as observed, with the compact remnant featuring a mass {\it below} the ``fixed'' value $\sim \, 1.25 \, M_{\odot}$ attributed to the ECSN events (Horvath et al. 2022). It is now known that the lightcurves of explosive events display a wide variety, and the long-term behavior of lightcurves is much better characterized thanks to detailed observations and theoretical work over the last decades. Nevertheless, several so-called ``supernova impostors'', transients and events that release $\leq 10^{51} \, erg$ but are not actual supernovae, have been recognized and studied over the years. Eta Carinae is one of the well-known cases that could have been identified as a supernova, although it belongs to a different class of transients (Davidson and Humphreys 2012). In the particular case of SN 1054 A.D., a total energy of the Crab $\leq 7 \times 10^{49} \, erg$ has been reported as an upper limit (see Hester 2008). The optical lightcurve would have been similar to the cases of SN2005cs, SN2016ov and SN2018zd (Spiro et al. 2014, Hiramatsu et al. 2021). A sudden large drop of the brightness around $\sim 120-130 \, d$ should have occurred in the SN 1054 event according to model simulations, although there are no reports of that in the available records. The supernova should have decreased by an additional $\sim 6-7 \, mag$ after vanishing during daytime, to become completely invisible, even at night, after $653 \, d$ (Table 1). Note that these features are quite independent of the exact nature of the event, either the core collapse of a small iron core progenitor or an electron-capture event, and any model must comply with the temporal behavior. In either of these scenarios, ECSN or CCSN, the features of the initial lightcurve reconstructed for the Crab explosion are quite difficult to obtain.
A bright maximum ($M_{V} = -18$ or brighter) and a rapid decay were observed, while the ECSN and CCSN scenarios predict low explosion energies. We shall argue below that the very early lightcurve can be associated with a {\it precursor} of the type now being seen in many supernova surveys (Section 3), and that the subsequent behavior of the light curve is the one expected from a low-mass iron core CCSN, not from an ECSN (Section 4). \section{Historical records} \label{sec:1} Several Chinese records refer to the ``guest star'' of 1054 A.D. A comparative discussion by Breen and McCarthy (1995) concluded that the most likely date for its appearance is the 4th of July, with a fast (but not precisely determined) rise time $\leq \, 1 \, d$. The writings of the Chief of the Astronomical Bureau at K'ai-Feng provide accurate times for the appearance of the object, comparing its brightness to Venus, which has $m_{V} = - 4.47$, interpreted as its maximum brightness (it should be noted that, despite being able to distinguish $\sim 0.1 \, mag$, ancient Chinese astronomers did not register the brightness variation of Venus in time, which exceeds $0.5 \, mag$; Filipovi\'c et al. 2022). After the classical papers of Lundmark (1921), Mayall and Oort (1942) and Shklovsky (1968), among others, the common assumption is that, after applying the extinction correction $A_{V} = 1.6$ (Miller 1973), the peak absolute magnitude should have been $M_{V} = -18$ or brighter. However, the Japanese chronicles compared the event with the dimmer Jupiter ($m_{V} = - 2.2$, Green and Stephenson 2003), although the proximity of the Sun in the sky and some uncertainty in the dating make these reports less solid. Therefore, it is possible that the brightness of the object at peak was slightly lower than that of Venus. We shall adopt the accepted value $M_{V} = -18$ hereafter as a lower limit. The precise value is impossible to refine, but fortunately it is not crucial here.
The same Chinese chronicles (Breen and McCarthy 1995) provide another important clue to the early lightcurve, saying that the guest star was no longer visible during the day after $23 \, d$. This can be quantified by observing that Saturn, at $m_{V} = - 0.24$, is often visible during the day, which leads after extinction correction to a value $M_{V} \, \sim \, -14$. This estimate is uncertain by about $0.5$ mag, but overall it is fair to state that the transient faded by $\Delta M_{V} \sim \, 4$ in about three weeks. This temporal feature is reliable from the records of the Chinese imperial astronomers and is a prime feature to be understood. The last temporal benchmark is related to the {\it late} lightcurve, i.e. the disappearance of the guest star from the night sky. Again, it is difficult to pin down exactly the sky background brightness, but this ``disappearance'' has been quantified as $M_{V} \sim -7$ after $653 \, d$ (Breen and McCarthy 1995). It is unfortunate that no intermediate values are known, since supernovae often present interesting and revealing behavior at these times. The Western chronicles are scarce, more vaguely described (to the point of being doubtful that they describe a ``guest star'' event at all), and not due to astronomers. Some have considered the image of the Holy Roman Emperor Henry III as proof that the 1054 A.D. event was seen in Italy, even though no written account of this specific observation remains (Hoffman and Gudrun 2021). A Middle East observation performed by Ibn Butlan is also known (Brecher 1978), but it does not contain substantial information either. An argument associating a Flemish account of the day of Pope Leo IX's death with the supernova would require bringing the explosion to a much earlier date (Guidoboni et al. 1994).
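The magnitude conversions above are simple arithmetic; a back-of-the-envelope check, assuming a Crab distance of $\sim 2$ kpc (an assumption; the text quotes only the extinction $A_{V} = 1.6$), recovers both the $M_{V} \approx -18$ peak and the $\sim 4$ mag fade:

```python
# Back-of-the-envelope check of the magnitude conversions in the text,
# assuming a Crab distance of ~2 kpc and the quoted apparent magnitudes.
import math

A_V = 1.6                      # extinction correction (Miller 1973)
d_pc = 2000.0                  # assumed distance in parsecs
mu = 5 * math.log10(d_pc / 10) # distance modulus, ~11.5

def absolute_mag(m_app):
    """Extinction-corrected absolute magnitude at the assumed distance."""
    return m_app - mu - A_V

M_peak = absolute_mag(-4.47)   # Venus-like peak  -> ~ -17.6
M_day = absolute_mag(-0.24)    # Saturn-like daytime limit -> ~ -13.4
fade = M_day - M_peak          # ~ 4.2 mag in ~23 days
```

The daytime limit comes out near $-13.4$ rather than exactly $-14$; the difference is within the $\sim 0.5$ mag uncertainty quoted in the text and depends on the adopted distance.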
Because of the death of Pope Leo IX and the Great Schism of the Church, declared by the Patriarch of Constantinople on July 16th 1054, a heavy ``contamination'' of the reports is believed to have taken place (in the sense that some phenomena were reported to reinforce the holiness of the Pope but could be completely unreal), quite unrelated to the 1054 A.D. event itself, or in any case difficult to disentangle from it. Certainly there is no consensus for a much earlier date for the explosion of the Crab supernova, which, if accepted, would also create serious problems in sustaining very high luminosities over a period of $> \, 3$ months (April-July 1054); it has thus been considered untenable (Breen and McCarthy 1995). \section{The very early SN 1054 A.D. light curve as a precursor} \label{sec:2} Precursors (i.e. transient events) preceding supernovae have been reported in the literature (see, for instance, Ofek et al. 2014 for a study of a Type IIn supernova sample). The statistics are very incomplete, since large surveys are only now reaching the point where such an evaluation becomes feasible. In the case of the Crab as a confirmed low-mass explosion, a super-AGB character of the progenitor would enhance the expectations for episodic mass ejections or super-Eddington winds. Precursors can originate in a couple of different ways in this context. The first is the ejection of mass, whose kinetic energy is converted into luminosity with an efficiency factor $\epsilon$.
The simplest estimation for the ejected mass stems from energetic considerations and reads \begin{equation} M_{ej} \approx \epsilon \, {{2 L_{p} \, \delta t} \over {v^{2}}} \, = \, 10^{-2} \, M_{\odot} \times {\biggl( {\epsilon \over {0.1}} \biggr)} {\biggl( {L_{p} \over {10^{9} \, L_{\odot}}} \biggr)} {\biggl( {\delta t \over {23 \, d}} \biggr)} {\biggl( {{1000 \, km \, s^{-1}} \over {v}} \biggr)}^{2} \end{equation} where $L_{p}$ is the precursor luminosity, $\delta t$ its duration and $v$ the velocity of the ejecta. A second mechanism, studied by Shaviv (2000, 2001) and applied to novae and other problems, is a super-Eddington wind outflow accelerating the matter, yielding the simplified expression \begin{equation} M_{ej,wind} \approx W \, {{L_{p} \, \delta t} \over {c_{s} \, c}} \, = \, 10^{-2} \, M_{\odot} \times {\biggl( {W \over {10}} \biggr)} {\biggl( {L_{p} \over {10^{9} \, L_{\odot}}} \biggr)} {\biggl( {\delta t \over {23 \, d}} \biggr)} {\biggl( {{60 \, km \, s^{-1}} \over {c_{s}}} \biggr)} \end{equation} with $W \approx 5-10$ an empirical constant derived for each particular problem (Shaviv 2000, 2001), and where the sound velocity $c_{s}$ has been scaled to its expected value at the base of the wind, $\approx 60 \, km \, s^{-1}$. Once we apply these expressions to reproduce the very early light curve of the Crab, we obtain in both cases $M_{ej} \simeq 10^{-2} \, M_{\odot}$, provided the velocity $v$ is high enough or the sound velocity $c_{s}$ is not too small. This mass is very small, and although it can produce a bright transient with, more importantly, a fast rise ($t_{rise} \leq 1 \, d$) and decay ($t_{decay} \sim \, weeks$) as required by the observational constraints, its effect on the later shock breakout of the supernova itself is small.
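Plugging the fiducial parameter values into the two scalings is straightforward; the sketch below (with cgs constants added as assumptions) gives ejected masses at the $\sim 10^{-2}$-$10^{-1} \, M_{\odot}$ order-of-magnitude level, with the exact normalization depending on the adopted velocity, efficiency, and wind parameters.

```python
# Numerical evaluation of the two ejected-mass scalings, using the
# fiducial values quoted in the text; L_sun, M_sun, and c in cgs are
# assumptions added here for the arithmetic.
L_SUN = 3.828e33   # erg/s
M_SUN = 1.989e33   # g
C = 2.998e10       # speed of light, cm/s

L_p = 1e9 * L_SUN           # precursor luminosity
dt = 23 * 86400.0           # duration: 23 days in seconds

# Eq. (1): mass ejection with efficiency epsilon at velocity v.
eps, v = 0.1, 1000e5        # v = 1000 km/s in cm/s
M_ej = eps * 2 * L_p * dt / v**2 / M_SUN

# Eq. (2): super-Eddington wind with empirical constant W.
W, c_s = 10.0, 60e5         # c_s = 60 km/s in cm/s
M_ej_wind = W * L_p * dt / (c_s * C) / M_SUN
```

Raising $v$ or $c_{s}$ by a modest factor pushes both estimates toward the quoted $10^{-2} \, M_{\odot}$ scale, consistent with the proviso in the text.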
On the other hand, if the {\it whole} early lightcurve has to be sustained by circumstellar material (CSM), the mass must be much higher ($\sim 0.3 M_{\odot}$, Smith 2013), and an initial rapid decay of at least 4 magnitudes in just three weeks appears much more extreme. \begin{table}[htp] \caption{Variation in the absolute magnitude $\Delta M_{V}$ of the remnant with time (after Breen and McCarthy 1995)} \label{tab:truncatedsampling} \centering \begin{tabular}{l c c } \hline $|\Delta M_V|$ & $\Delta t$ (days) & \\ \hline $\geq 4$ & 23 & \\ $\sim 6-7$ & 653 & \\ $\sim 4$& $\gg 300$ & \\ \hline \end{tabular} \end{table} A possibly related phenomenon is the class of rapidly evolving optical transients reported in the last decade (Poznanski et al. 2010, Drout et al. 2014, Arcavi et al. 2016). These events form a growing group, generally characterized by fast rise times ($\sim$ days), absolute magnitudes comparable to Type Ia supernovae, and decay within a month. They have also been associated with the effect of a shock wave interacting with CSM or with energy injection by the birth of a magnetar, both models implying a stellar explosion as the original trigger. The super-Eddington scenario would also be a suitable model for these transients on general grounds (Shaviv 2000, Ofek et al. 2014), although a detailed discussion has not been presented for this specific case. A case study of the optical transient KSN2015K, recently reported by K2/Kepler (Rest et al. 2018), serves to exemplify the possible relation with the early lightcurve of the Crab supernova. This transient rose in $\sim 2 \, d$ and decayed on a $\sim \, 1 \, month$ timescale, with a peak luminosity a factor of $\sim \, 5$ higher than that reconstructed for the Crab. However, as stated, the peak magnitude of the Crab could have been brighter than $-18$ when it was first spotted on the morning of July 4th (Green and Stephenson 2003, Smith 2013), making KSN2015K almost a perfect fit to the lightcurve without any modification.
If a transient like this happened in the SN 1054 event, the time to decay to a magnitude that would have rendered the transient unobservable during daytime, as reported by the Chinese, comes out automatically right (Fig. 1) and is consistent with the historical observations (Murdin and Murdin 1985, Breen and McCarthy 1995). In the CSM breakout scenario, a rapid rise $< 1$ day due to the shock breakout diffusion time $t_{d} \sim 30 R/c$ could be achieved if the CSM is located at $\sim 10^{13} \, cm$. A compact progenitor of the carbon/oxygen type would have a much smaller radius. Only a small mass fraction $\leq 0.1 \, M_{\odot}$ would have to be ejected, on a timescale $t_{csm} = R/v_{csm}$ over the last years before the explosion, to explain the very early light curve (Rest et al. 2018). A super-Eddington model would rise even faster and require a somewhat smaller ejected mass. This model also seems consistent with the available evidence on the observed rise and decay of KSN2015K. The decay of the optical transient suggests that no other energy source, such as $^{56} Ni$ or continuous injection by a central magnetar (not expected for the ``normal'' magnetic field associated with the Crab pulsar; Allen and Horvath 2004, Kasen et al. 2016, Sukhbold and Woosley 2016), is involved in the transient (Rest et al. 2018). To be sure, the growing group of transients includes short events which are {\it not} consistent with shock breakout, and which can be interpreted in terms of an ejected mass powered by radioactive decays as well (Ofek et al. 2021). Without any photometry, and with only scarce temporal information, we may never know which kind of precursor occurred in SN 1054. \section{The subsequent and long-term behavior of the lightcurve} \label{sec:3} An extensive study of low-luminosity explosions (Kozyreva et al. 
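As an illustrative check of the quoted rise time, the sketch below evaluates the diffusion timescale $t_d \sim 30 R/c$ for the CSM radius $\sim 10^{13}$ cm assumed in the text.

```python
# Shock-breakout diffusion rise time t_d ~ 30 R / c for CSM standing
# at R ~ 1e13 cm, as quoted in the text.
C = 2.998e10      # speed of light, cm/s
R_CSM = 1e13      # assumed CSM radius, cm

t_d_days = 30.0 * R_CSM / C / 86400.0
print(t_d_days)   # ~0.12 d: comfortably faster than the required < 1 d rise
```
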
2021) shows that it would be very difficult to understand the very early lightcurve from the theoretical point of view by invoking just a ``bare'' explosion. It remains to be seen whether this feature stands in other calculations with varying physical content. If the calculations of Kozyreva et al. (2021) are taken as a guide, we further conclude that the ECSN lightcurve based on the e8.8 model does not fit the Crab event, because it would render SN 1054 observable during the daytime for $\sim 4$ months. Explosions based on the more compact progenitors z9.6 and s9.0 are acceptable, for a range of metallicities and explosion energies of a few $\times 10^{49} \, erg$. However, guided by the simulations, this would mean that the Crab was {\it not} an ECSN, but rather a low-mass iron core CCSN. Moreover, this conclusion is actually independent of the precursor interpretation of the very early light curve, and relies on the Chinese report of the supernova becoming invisible during the day after $23 \, d$. An ECSN would have been clearly visible during the day, because it would be about 2 magnitudes brighter (looking like Jupiter) according to the detailed models of Kozyreva et al. (2021). As a final corollary, the Crab pulsar mass is predicted to be below $1.25 M_{\odot}$ to achieve consistency with the low-mass iron core CCSN model. In addition, it should be noted that the observed analogues SN2005cs and related events seem too dim at peak magnitude to be compared to the Crab case. However, the small overall mass $\sim 5 \, M_{\odot}$ in the Crab Nebula, the chemical evidence for a progenitor of $\simeq 8 \, M_{\odot}$ and the low amount of $^{56}Ni$ present (Smith 2013) remain strong indications of a low-energy explosion, which also produced a neutron star. This is why we believe that a bright precursor interpretation, followed by the explosion itself, is a good fit to the whole picture. 
\section{Conclusions} \label{sec:4} In summary, we have interpreted the initial phase of the supernova 1054 A.D. as a bright transient precursor, followed by the explosion itself after $\sim 3 \, weeks$. This interpretation has been motivated by the expected features of a low-mass progenitor near the explosion, which also predicts a plateau-type visual magnitude lasting 3-4 months, but at a level that would not re-brighten the event (no re-brightening was reported). Bright optical transients such as KSN2015K possess all the features of the very early SN 1054 light curve and have been brought in as comparison examples, although their actual relevance to the event remains to be proved. In fact, it is quite remarkable that a lightcurve from a recent FELT event (KSN2015K) can accurately fit the reconstructed historical time behavior with just a slight (if any) brightness scale-down amounting to a small numerical factor. This cannot, of course, be taken as a proof, but it certainly suggests a kinship between optical transients and SN precursors, a central thesis of our model. Interestingly, the idea that CSM may be involved in supernovae with an early luminosity excess was also present in Goldberg and Bildsten (2016), Morozova et al. (2020) and Moriya et al. (2020). Smith (2013) has stressed the apparent incompatibility of the low-energy hypothesis of the Crab supernova with the high-luminosity early lightcurve and offered a detailed discussion of a CSM hypothesis. His view differs from our precursor-transient interpretation, in which the precursor explains the very early phase while the well-known ordinary expansion of the supernova takes over after $\sim 3 \, weeks$ and is responsible for the long-term behavior; the CSM picture becomes problematic, however, if the calculations of Kozyreva et al. (2021) stand for the ECSN cases, given the non-visibility of the SN 1054 A.D. supernova during daytime after $23 \, d$. There are a few observational tests that may resolve the issue of the type of event. 
The most likely one would be the detection of ``light echoes'' of the event, which have the potential to reveal the spectrum and possibly its time evolution (Rest et al. 2008). However, in spite of efforts over the years, the light echoes of SN 1054 A.D. have not been detected yet. Therefore, other evidence, such as the nucleosynthesis yields and the structure of the remnant, should be analyzed. We have remarked above on the difficulties of studying the latter. In fact, it is not clear that any other event of the ``SN 1054 A.D.-type'' has been observed, although this would not be so surprising, because the derived rate of rapidly evolving luminous transients is $\sim$ a few percent of all core-collapse SNe (Drout et al. 2014). The follow-up of these fast transients could reveal a weak SN after a few weeks, provided they are close enough and a strategy to identify and locate them quickly is developed (Inserra 2019). This would resemble, {\it mutatis mutandis}, the cases in which a long GRB is followed by a supernova, and would constitute a crucial observation to assess the precursor scenario discussed above. According to our picture, at around $\sim 1 \, month$ the light curve would have leveled off due to the emergence of the underlying supernova, and then decayed very slowly until $120-130 \, d$ (Fig. 1). Although the suggestions of a Type IIn-P from a small iron core explosion (Kozyreva et al. 2021) and an ECSN (Nomoto et al. 1982) would be difficult to distinguish as low-energy events at late times in historical supernovae, the theoretical models of Kozyreva et al. (2021) disfavor ECSN explosions as a model for SN 1054 because they would be far too bright (by two magnitudes) to match the Chinese temporal records. Therefore, agreement between present models and the SN 1054 reconstruction suggests a core-collapse SN, not an electron-capture one, independently of the origin of the very early bright lightcurve and based only on the disappearance from daytime visibility. 
Since the available models are scarce, and caveats apply to a handful of points, we are not claiming anything definitive, but rather pointing out an alternative interpretation of the Crab event that could tie optical transients to the exploding stars, and tentatively identifying which one produces the observed phenomenology. It may be that the confirmed diversity of supernova events and associated precursors/transients will prove crucial to understanding a millennium-old puzzle in a paradigmatic case. \section*{Acknowledgements and declarations} * The author wishes to acknowledge the financial support of the Fapesp Agency (S\~ao Paulo) through the grant 2020/08518-2 and the CNPq (Federal Government, Brazil) for the award of a Research Fellowship. The members of the GARDEL Group at USP are acknowledged for their encouragement and scientific advice on these topics. An anonymous Referee helped to improve the first version with several remarks and suggestions. \bigskip \noindent * Informed consent does not apply to this work. \bigskip \noindent * Data availability does not apply; no data beyond those publicly available in the cited works have been employed. \bigskip \noindent * The author declares no competing interests. \bigskip \noindent * All the conception, execution and writing of the paper was performed by the author. \vfill\eject
Title: The SAMI Galaxy Survey: flipping of the spin-filament alignment correlates most strongly with growth of the bulge
Abstract: We study the alignments of galaxy spin axes with respect to cosmic web filaments as a function of various properties of the galaxies and their constituent bulges and discs. We exploit the SAMI Galaxy Survey to identify 3D spin axes from spatially-resolved stellar kinematics and to decompose the galaxy into the kinematic bulge and disc components. The GAMA survey is used to reconstruct the cosmic filaments. The mass of the bulge, defined as the product of stellar mass and bulge-to-total flux ratio M_bulge=M_star x (B/T), is the primary parameter of correlation with spin-filament alignments: galaxies with lower bulge masses tend to have their spins parallel to the closest filament, while galaxies with higher bulge masses are more perpendicularly aligned. M_star and B/T separately show correlations, but they do not fully unravel spin-filament alignments. Other galaxy properties, such as visual morphology, stellar age, star formation activity, kinematic parameters and local environment, are secondary tracers. Focusing on S0 galaxies, we find preferentially perpendicular alignments, with the signal dominated by high-mass S0 galaxies. Studying bulge and disc spin-filament alignments separately reveals additional information about the formation pathways of the corresponding galaxies: bulges tend to have more perpendicular alignments, while discs show different tendencies according to their kinematic features and the mass of the associated bulge. The observed correlation between the flipping of spin-filament alignments and the growth of the bulge can be explained by mergers, which drive both alignment flips and bulge formation.
https://export.arxiv.org/pdf/2208.10767
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} galaxies: formation, evolution -- galaxies: kinematics and dynamics -- galaxies: structure, fundamental parameters -- cosmology: large-scale structure of Universe \end{keywords} \section{Introduction} \label{Introduction} How galaxies acquire their angular momentum in the cosmic web is a crucial element in understanding galaxy formation and evolution. Since galaxies are not randomly distributed in the Universe but found along ordered filaments and walls, their properties are expected to be influenced by their host halos, and by the current location and past history of these halos in the evolving cosmic web. According to the tidal torque theory, galaxy spin is generated by the torques acting on the collapsing proto-halo, which gains angular momentum from the gravitational perturbations in the tidal field \citep{Hoyle1951,Peebles1969,Doroshkevich1970,White1984,Porciani2002,Schafer2009}. Thus, a correlation between the galaxy spin and the tidal field is likely to exist. The alignment of the galaxy spin axis with respect to the orientation of the filament within which it resides represents a memory of the galaxy's formation; since this memory is progressively erased by subsequent non-linear evolution, the signal is expected to be weak at low redshift ($z<0.1$; \citealp{Codis2018}). Cosmological N-body simulations have predicted the orientation of the dark matter halo spin vector to be mass-dependent (\citealp{AragonCalvo2007,Codis2012,Trowland2013,GaneshaiahVeena2018}). The sense of this trend is that low-mass halos tend to have their spin aligned parallel with the closest filament, while the spin axis of high-mass halos tends to be orthogonal to the filament. This trend has also been seen for galaxies in large-scale cosmological hydrodynamical simulations \citep{Dubois2014,Laigle2015,Codis2018,Wang2018,Kraljic2020}. 
In the context of galaxy formation mechanisms, this suggests that low-mass galaxies are formed via gas accretion mechanisms, while high-mass galaxies are formed via mergers occurring along the filament within which they are embedded \citep{Dubois2014,Welker2014}. On the other hand, \citet{GaneshaiahVeena2019} found a preferential perpendicular alignment for galaxies at all masses, though they studied a relatively small volume. Apart from stellar mass, simulations have found that spin--filament alignments also depend on other galaxy properties, such as morphology \citep{Codis2018,GaneshaiahVeena2019}, colour and magnitude \citep{Tempel2015,Wang2018}, triaxiality \citep{Wang2018}, degree of rotation, star formation activity and HI mass \citep{Kraljic2020}. A significant effort has been devoted to detecting the galaxy spin--filament alignments in observations of low redshift galaxies. Most of these studies found preferentially perpendicular orientations for early-type galaxies \citep{Tempel2013a,Pahwa2016,Hirv2017,Chen2019,Kraljic2021}, while late-type galaxies have their spins preferentially parallel to the closest filament \citep{Tempel2013a,Tempel2013b,Hirv2017,BlueBird2020,Kraljic2021,Tudorache2022}. However, some disagreement exists regarding the late-types, with some studies finding a perpendicular orientation or no clear trend for these galaxies \citep{Jones2010,Zhang2015,Pahwa2016,Krolewski2019}. These discrepant findings could plausibly be explained by differences in the selection criteria for the galaxy samples and in the algorithms used to reconstruct the cosmic filaments, combined with the intrinsic weakness of the signal at low redshift. The advent of integral field spectroscopy (IFS) has made possible a more precise measurement of the galaxy spin axes from spatially-resolved stellar kinematic maps and thus a more statistically significant detection of the spin--filament correlation signal with respect to photometric data. 
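To see why such alignment signals are statistically subtle, note that for isotropically oriented spins the cosine of the angle $\theta$ between spin and filament is uniformly distributed, so the null expectation is $\langle|\cos\theta|\rangle = 0.5$, and parallel or perpendicular tendencies appear only as small deviations from this value. A minimal Monte Carlo illustration of the null case follows (a toy sketch, not the estimator used in the survey analysis):

```python
# Toy Monte Carlo: mean |cos(theta)| between isotropically oriented spin
# vectors and a fixed filament direction converges to 0.5 (the "no
# alignment" null); observed alignments are small deviations from this.
import math
import random

random.seed(42)

def random_unit_vector():
    """Isotropic direction: z uniform in [-1, 1], azimuth uniform in [0, 2pi)."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

# Filament along the z-axis, so |cos(theta)| is just |z| of the spin vector.
n = 100_000
mean_abs_cos = sum(abs(random_unit_vector()[2]) for _ in range(n)) / n
print(mean_abs_cos)  # close to 0.5
```
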
The mass-dependent trend for the galaxy spin--filament alignments was observed for the first time with >2$\sigma$ confidence by \citet{Welker2020}. They selected $\sim$1400 galaxies from the Sydney--AAO Multi-object Integral-field (SAMI) Galaxy Survey \citep{Croom2012,Bryant2015} with available spatially-resolved stellar kinematics and used the Galaxy And Mass Assembly (GAMA; \citealp{Driver2011}) survey to reconstruct the underlying cosmic filaments. The most recent results of \citet{Kraljic2021} for the IFS MaNGA survey \citep{Bundy2015} show that the 3D spin--filament alignment is preferentially parallel for spiral galaxies, with the correlation dominated by low-mass galaxies, and preferentially perpendicular for S0 galaxies. They also find a strong perpendicular alignment for low-mass S0 galaxies, at odds with the expected stellar mass-dependency of the signal. Most of these previous results, whether based on simulations or observations, suggest that the growth of a bulge is expected to affect the spin--filament alignment. According to the hierarchical formation scenario of structures in our Universe, galaxies build up their discs via accretion and their bulges via mergers (\citealp{Aguerri2001,Hopkins2010,Wilamn2013}). However, the formation of S0 galaxies, characterised by both bulge and disc components, is still debated, with multiple competing mechanisms depending on the environment and/or galaxy properties thought to be responsible for their origin (\citealp{Dressler1980,ElicheMoral2013,Johnston2014,FraserMcKelvie2018,Coccato2019,Barsanti2021a,Croom2021b}). Thus, an interesting question that arises is whether we can detect a correlation between the bulge properties and the spin--filament alignment trends in observations. We aim to address this hypothesis by investigating the signal according to the bulge-to-total flux ratio, as well as by exploring the separate bulge and disc spin--filament alignments. 
We take advantage of the SAMI Galaxy Survey to identify the spin axes of galaxies, bulges, and discs based on spatially-resolved stellar kinematics, and of the deep and highly-complete GAMA spectroscopic survey to reconstruct the cosmic web. These analyses will help to shed light on the formation mechanisms of galaxies, bulges, and discs. This paper is organised as follows. We present our galaxy sample, spin proxies, and galaxy properties in Section~\ref{Data and Galaxy Sample}. In Section~\ref{Methods} we describe the methods used to identify the orientation of the spin axes and to reconstruct the cosmic filaments. In Section~\ref{Results} we present our results about the alignments and their correlations with galaxy properties. In Section~\ref{Discussion} we compare our findings to previous studies, and we discuss their physical interpretations. Finally, we summarise our findings and state our conclusions in Section~\ref{Summary and conclusions}. Throughout this work, we assume $\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$ and $H_{0}=70$\,km\,s$^{-1}$\,Mpc$^{-1}$ as the cosmological parameters. \section{Data and Galaxy Sample} \label{Data and Galaxy Sample} \subsection{The SAMI Galaxy Survey} \label{SAMI galaxy survey} The Sydney--AAO Multi-object Integral-field spectrograph (SAMI) was mounted on the 3.9\,m Anglo-Australian Telescope \citep{Croom2012}. The instrument has 13 fused optical fibre bundles (hexabundles), each containing 61 fibres of 1.6$^{\prime\prime}$ diameter so that each integral field unit (IFU) has a 15$^{\prime\prime}$ diameter \citep{Bland2011,Bryant2014}. The SAMI fibres feed the two arms of the AAOmega spectrograph \citep{Sharp2006}. The SAMI Galaxy Survey uses the 580V grating in the blue arm, giving a resolving power of $R=1812$ and wavelength coverage of 3700--5700\,\AA, and the 1000R grating in the red arm, giving a resolving power of $R=4263$ over the range 6300--7400\,\AA. 
The median full-width-at-half-maximum values for each arm are FWHM$_{\rm blue}$=2.65\,\AA\ and FWHM$_{\rm red}$=1.61\,\AA\ \citep{vandeSande2017}. The SAMI Galaxy Survey is a spatially-resolved spectroscopic survey of more than 3000 galaxies with stellar mass range $\log(M_{\star}/M_{\odot})=8$--12 and redshift range $0.004<z\leq 0.115$ \citep{Bryant2015,Croom2021}. Most of the SAMI targets belong to the three equatorial fields GAMA G09, G12 and G15 of the Galaxy And Mass Assembly survey \citep{Driver2011}. Eight massive clusters were also observed to explore high galaxy density environments \citep{Owers2017}, however they are not included in this analysis. The data are reduced using the SAMI {\sc python} package \citep{Allen2014}, which uses the {\tt 2dfdr} package \citep{2015ascl.soft05015A}. A complete description of the data reduction from raw frames to datacubes can be found in \citet{Sharp2015}, \citet{Allen2015}, \citet{Green2018} and \citet{Scott2018}. The final datacubes are characterised by a grid of 0.5$^{\prime\prime}\times0.5^{\prime\prime}$ spaxels, where the blue and red spectra have pixel scales of 1.05\,\AA\ and 0.60\,\AA\ respectively. \subsection{Spin proxies} \label{Spin proxies} \subsubsection{Stellar kinematics} \label{Stellar kinematics} To identify the spin axis of a galaxy, we take advantage of spatially-resolved stellar kinematics. A complete description of the stellar kinematics for the SAMI Galaxy Survey is presented in \citet{vandeSande2017}. In a nutshell, the line-of-sight velocity distributions are obtained from the Penalised Pixel-Fitting software (pPXF; \citealp{Cappellari2004,Cappellari2017}), where the red spectrum is smoothed with a Gaussian kernel to match the spectral resolution of the blue spectrum, then the combined blue and red spectrum is re-binned on a grid of uniform velocity spacing. 
For each bin, pPXF is run in a multistep process on each galaxy spaxel to estimate the noise from the fit residual, remove emission lines and extract velocity and velocity dispersion. The best-fitting templates are derived from the MILES library of stellar spectra \citep{SanchezBlazquez2006,FalconBarroso2011}. The stellar kinematic position angles (PA) are measured from the spatially-resolved stellar velocity maps using the {\sc fit\_kinematic\_pa} routine (see Appendix C of \citealp{Krajnovic2006}). Only spaxels with stellar continuum signal-to-noise ratio S/N$>$3 and velocity uncertainty $<$30\,km\,s$^{-1}$ are used in the fitting. The galaxies' semi-major axis effective radii ($R_e$) and ellipticities within $R_e$ ($\epsilon_e$) are measured using Multi-Gaussian Expansion (MGE; \citealp{Emsellem1994,Cappellari2002}). For the SAMI Galaxy Survey, the MGE technique is applied to $r$-band SDSS images \citep{York2000}; a detailed presentation of the MGE fits can be found in \cite{DEugenio2021}. \subsubsection{Stellar kinematics of bulges and discs} \label{Stellar kinematics of bulges and discs} In this work we aim to explore the separate spin--filament alignments of bulges and discs. To disentangle the spin axes of the two components, we take advantage of the 2D kinematic bulge/disc decomposition performed by \citet{Oh2020}. They used pPXF to estimate simultaneously the spatially-resolved velocity and velocity dispersion of the bulge and the disc, using photometric weights and a new subroutine for dealing with degeneracy in the solutions. The photometric inputs are based on the 2D photometric bulge/disc decomposition presented in the next Section. \citet{Oh2020} found that the combination of these two components adequately reproduces the major kinematic features of galaxies over a wide range of morphologies. 
We estimate the separate kinematic PAs of the bulges and the discs from the corresponding spatially-resolved velocity maps using the {\sc fit\_kinematic\_pa} routine. Only spaxels with continuum S/N$>$3 for the respective component are used in the fitting. Moreover, we select only galaxies with velocity maps where at least 70\% of the spaxels within $1\,R_e$ have S/N$>$3 for the respective component. Figure~\ref{GalaxyBulgeDisc_Vel_Maps} shows some examples of velocity maps for galaxies, bulges and discs, where the green lines mark the kinematic PAs. We show galaxies where only the disc component has a measured kinematic PA (examples a \& b), where only the bulge component has a measured kinematic PA (examples c \& d), and where measurements are available for both components (examples e \& f). \subsubsection{Photometric position angles} \label{Photometric position angles} The spatially-resolved kinematic bulge/disc decomposition for the SAMI survey galaxies is based on photometric weights taken from the 2D photometric bulge/disc decomposition of \citet{Casura2022}. The latter allows us to estimate separate photometric properties of bulges and discs: the disc is defined as the exponential component, while the bulge corresponds to the S\'ersic component representing the light excess over the exponential component. The photometric decomposition uses the image analysis package {\sc ProFound} and the photometric galaxy profile fitting package {\sc ProFit} \citep{Robotham2017,Robotham2018}. \citet{Casura2022} follow a similar method to that used for the SAMI cluster galaxies in \citet{Barsanti2021}, with some differences that are outlined below. The decomposition is performed on the $g$-, $r$-, and $i$-band images from the Kilo-Degree Survey (KiDS; \citealp{deJong2017}). The {\sc ProFound} steps include image segmentation, source identification, sky subtraction, initial parameter estimation and local PSF estimation. 
Then, each galaxy is fitted using {\sc ProFit} with three models based on S\'ersic profiles \citep{Sersic1963}, and a combination of downhill gradient and full MCMC algorithms: (i)~a single-component S\'ersic model; (ii)~a double-component S\'ersic bulge + exponential disc model; and (iii)~a double-component point source + exponential disc model. The single-component S\'ersic model has seven free parameters: $x$ and $y$ positions of the profile centre, magnitude, effective radius containing half of the flux, S\'ersic index, position angle of the major axis, and axial ratio. These parameters are also left free to vary for the S\'ersic bulge model, while the exponential disc has the S\'ersic index fixed to 1. The point source model is described by $x$ and $y$ coordinates and magnitude. Both double-component models have the positions of the two components tied together. \citet{Casura2022} identify which of the three models best characterises each galaxy. This galaxy characterisation is based on Deviance Information Criterion cuts, which are calibrated against a visually inspected random sample of 1000 $r$-band objects. We make use of photometric PAs to identify the shape--filament alignments of galaxies, bulges, and discs for a comparison with the findings based on stellar kinematic PAs. The galaxy photometric PAs are derived from the $r$-band single-component S\'ersic profiles, while the bulge and disc photometric PAs are estimated from the $r$-band double-component S\'ersic bulge + exponential disc models. \subsubsection{Gas kinematics} \label{Gas kinematics} We explore whether the kinematic misalignment between the stellar and gas components has an impact on the galaxy spin--filament alignment. The gas kinematic PAs are estimated from the SAMI spatially-resolved H$\alpha$ velocity maps using the {\sc fit\_kinematic\_pa} routine. 
The H$\alpha$ velocity maps are obtained from the emission-line fitting software {\sc lzifu} \citep{Ho2016}, where the stellar continuum is subtracted using pPXF. Only spaxels with H$\alpha$ S/N$>$5 are used to estimate the gas kinematic PA \citep{Bryant2019}. \subsection{Galaxy properties} \label{Galaxy properties} Our aim is to explore the galaxy spin--filament alignments in relation to various galaxy properties to understand which show the strongest correlations and therefore may be linked by physical processes. We focus on stellar mass and bulge-to-total flux ratio. These two parameters correlate with each other; however, they provide different information about the mechanisms linked to the spin--filament alignments. Stellar mass traces the position of the galaxy in the cosmic web and is related to the overall accretion of material (e.g., \citealp{Codis2015} and references therein). The bulge-to-total flux ratio particularly traces gas-rich major mergers \citep{Welker2017}, which are found to build up the bulge component most efficiently and to produce the most striking changes in galaxy shape and angular momentum \citep{Welker2014,Lagos2018}. Stellar masses ($M_\star$) are measured from K-corrected $g$--$i$ colours and $i$-band magnitudes \citep{Bryant2015,Taylor2011}. The bulge-to-total flux ratio (B/T) is estimated from the 2D photometric bulge/disc decomposition applied to the $r$-band KiDS images (see Section~\ref{Photometric position angles}). We make use of the B/T values extrapolated to infinity, but we find the same results using integrated quantities limited to a segment radius for the photometric fitting of the galaxies. The combination of the $M_\star$ and B/T parameters allows us to investigate the spin--filament alignments as a function of the mass of the bulge, defined as $M_{\rm bulge}=M_\star\times({\rm B/T})$. We also analyse alignment trends according to morphological, star formation, kinematic and environmental properties. 
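Since stellar masses are usually quoted in dex, the bulge-mass definition is conveniently evaluated in log space. A minimal sketch (the example values are the sample medians quoted later in the text, for illustration only; the median of a product need not equal the product of the medians):

```python
import math

def log_mbulge(log_mstar, bt):
    """log10 of the bulge mass M_bulge = M_star * (B/T), in dex."""
    return log_mstar + math.log10(bt)

# e.g. log10(M_star/Msun) = 10.30 with B/T = 0.40 gives ~9.90 dex.
print(log_mbulge(10.30, 0.40))
```
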
Previous studies based on simulations and/or observations have shown that the galaxy spin--filament alignments also depend on these properties \citep{Dubois2014}. The visual morphological classification is based on \citet{Cortese2016}, where the galaxy morphology is represented by: elliptical=0, elliptical/S0=0.5, S0=1, S0/early-spiral=1.5, early-spiral=2, early/late-spiral=2.5 and late-spiral=3. Galaxies are classified according to their star formation characteristics into passive, star-forming and in-transition (H$\delta$-strong) by \citet{Owers2019}. Single stellar population properties are parametrised by the average light-weighted age and metallicity \citep{Scott2017,Scott2018}. The degree of ordered stellar rotation versus random motions in the galaxy is represented by the spin parameter evaluated within an effective radius, $\lambda_e$ \citep{Emsellem2007,Emsellem2011,Cappellari2016,vandeSande2017}. We also measure $(V/\sigma)_e$ as the flux-weighted mean within an effective radius \citep{Cappellari2007,vandeSande2017}. The kinematic morphology classification, which separates galaxies into slow rotators and fast rotators according to a Bayesian mixture model, is taken from \citet{vandeSande2021a}. The kinematic misalignment for a galaxy between the stellar and gas components is measured according to $\Delta{\rm PA}=|{\rm PA_{stars}-PA_{gas}}|$, where $\Delta{\rm PA}$ is the absolute difference between the stellar and gas kinematic position angles (e.g., \citealp{Bryant2019,Duckworth2019}). Finally, the local environment is characterised by the local galaxy density measured as the fifth-nearest neighbour surface density $\Sigma_5$ from \citet{Brough2013}. In addition, we make use of the GAMA galaxy group catalogue \citep{Robotham2011} to identify group central galaxies, group satellite galaxies, and isolated galaxies. 
\subsection{Galaxy sample and selection criteria} \label{Galaxy sample and selection criteria} Stellar kinematics are available for 3070 SAMI galaxies. We exclude 917 galaxies that belong to the eight clusters observed by SAMI, since environmental processes might affect and decrease the spin--filament alignment signal in these regions of high galaxy density \citep{Dubois2014}. We select 1711 galaxies with $9<\log(M_\star/M_{\odot})<12$ for reliable stellar kinematic velocity maps \citep{vandeSande2017}. Following \citet{Welker2020}, we choose 1516 galaxies with measured ellipticity and kinematic PAs, and we select 1311 galaxies having 1$\sigma$ uncertainties $\delta{\rm PA}<25$\degree. This cut allows us to select converged fits and to recover the various galaxy morphologies at different stellar mass ranges. From a visual inspection of the spatially-resolved velocity maps after fitting the PAs, we flag $\sim$2\% of them (mostly slow rotators) as possibly not well-resolved; however, the exclusion of these galaxies does not change our results. Our final SAMI galaxy sample comprises 1121 galaxies with measured $M_\star$ and B/T. Figure~\ref{SAMIproperties} shows the distributions of stellar mass, B/T, bulge mass and morphology. The median values are $\log(M_\star/M_{\odot}) = 10.30 \pm 0.02$, $\log(M_{\rm bulge}/M_{\odot}) = 9.79 \pm 0.03$, morphology type=2 (early-spiral) and B/T$= 0.40 \pm 0.01$. Our SAMI sample contains a wide variety of galaxies, mostly being late-types; this is typical of the SAMI Galaxy Survey and it is accentuated by excluding galaxies in clusters, which are mainly early-types. Gas kinematic PAs are estimated for the 1121 SAMI galaxies. Most of the galaxies ($\sim$80\%) have $\Delta{\rm PA}<30\degree$, indicating that the stellar and gas components are kinematically aligned. 
Photometric PAs are also measured for the 1121 SAMI galaxies, with about 66\% of them best fitted by the photometric double-component S\'ersic bulge + exponential disc model according to the galaxy characterisation of \citet{Casura2022}. Of the 1121 SAMI galaxies in our sample, stellar kinematic PAs are measured for 468 bulges and 516 discs, with 196 galaxies having both bulge and disc measurements. We exclude bulges and discs with $\delta{\rm PA}_{\rm bulge}>25$\degree and $\delta{\rm PA}_{\rm disc}>25$\degree, following the same approach used for the galaxy kinematic PAs. Visually inspecting the spatially-resolved velocity maps of the bulges and the discs after fitting the PAs, we find that 8--10\% of the bulges might not have well-resolved PAs, while only 2 discs are flagged. We include these cases, since their exclusion does not change our results. \subsection{The GAMA survey} \label{GAMA galaxy survey} In order to reconstruct the cosmic web, we take advantage of the Galaxy And Mass Assembly (GAMA; \citealp{Driver2011}) survey. GAMA is a spectroscopic and photometric survey of $\sim$300,000 galaxies down to $r < 19.8$\,mag that covers $\sim$286\,deg$^{2}$ in 5 regions called G02, G09, G12, G15 and G23. The redshift range of the GAMA sample is $0<z<0.5$, with a median value of $z\sim0.25$. Most of the spectroscopic data were obtained using the AAOmega multi-object spectrograph at the Anglo-Australian Telescope, although GAMA also incorporates previous spectroscopic surveys such as SDSS \citep{York2000}, 2dFGRS \citep{Colless2001,Colless2003}, WiggleZ \citep{Drinkwater2010} and the Millennium Galaxy Catalogue \citep{Driver2005}. The deep and highly complete spectroscopic redshift data (98.5\% in the equatorial regions; \citealp{Liske2015}), combined with the wide area, make the GAMA survey an ideal galaxy redshift sample for extracting the filaments of the cosmic web. We make use of the DR3 data release of the GAMA survey \citep{Baldry2018}. 
We select 35882 galaxies with secure redshifts and stellar masses in the SAMI redshift range ($0<z<0.13$) lying within the G09, G12 and G15 regions (where SAMI galaxies have been observed). Most galaxies have stellar masses between $10^{8}\,M_{\odot}$ and $10^{12}\,M_{\odot}$; the median stellar mass and redshift are $M_\star \sim 10^{9.7}\,M_{\odot}$ and $z \sim 0.09$. \section{Methods} \label{Methods} \subsection{Orientation of the spin axes} \label{Orientation of the spin axes} To identify the orientation of the spin axes of galaxies, bulges and discs, we follow the 3D thin-disc approximation implemented by \citet{Lee2007} and used in \citet{Kraljic2021} (see Section~2.6 of \citealp{Kraljic2021} for a summary of the formulae). This technique requires position angles and inclination angles. Our results are obtained using stellar kinematic PAs as described in Section~\ref{Stellar kinematics} for galaxies and in Section~\ref{Stellar kinematics of bulges and discs} for bulges and discs. For comparisons we also make use of photometric PAs (Section~\ref{Photometric position angles}) and gas kinematic PAs (Section~\ref{Gas kinematics}). The inclination angle is computed according to \citet{Haynes1984}, where for the intrinsic flatness parameter we use the mean value 0.171 of our SAMI sample. Using values of the intrinsic flatness parameter as a function of the galaxy morphology does not alter our results. The galaxy axial ratio is measured from the ellipticity (Section~\ref{Stellar kinematics}), while the bulge and disc axial ratios are measured from the 2D photometric bulge/disc decomposition (Section~\ref{Photometric position angles}). The inclination angle is set to 90\degree\ if the axial ratio is lower than the intrinsic flatness parameter. The sign of the inclination angle is ambiguous, since it is not possible to determine whether the spin axis is pointing in projection towards or away from us. 
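The inclination computation described above can be sketched as follows. This is a minimal illustration only, not the SAMI pipeline itself; it assumes the standard oblate-spheroid relation $\cos^2 i = (q^2 - q_0^2)/(1 - q_0^2)$ used by \citet{Haynes1984}, with $q$ the projected axial ratio and $q_0=0.171$ the mean intrinsic flatness parameter of our sample.

```python
import numpy as np

Q0 = 0.171  # mean intrinsic flatness parameter of the SAMI sample


def inclination_deg(q, q0=Q0):
    """Inclination angle (degrees) from the projected axial ratio q = b/a.

    Uses cos^2 i = (q^2 - q0^2) / (1 - q0^2); if q < q0 the inclination
    is set to 90 degrees, as described in the text.
    """
    if q < q0:
        return 90.0
    cos2_i = (q**2 - q0**2) / (1.0 - q0**2)
    return float(np.degrees(np.arccos(np.sqrt(cos2_i))))


# A perfectly round projection is seen face-on ...
i_face_on = inclination_deg(1.0)   # -> 0.0
# ... while a projection flatter than q0 is treated as edge-on.
i_edge_on = inclination_deg(0.1)   # -> 90.0
```

The sign ambiguity discussed above is not resolved by this formula: only $|\cos i|$ is constrained by the axial ratio.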
Following \citet{Kraljic2021}, we choose to consider only the positive sign for the cosine of the inclination angle. Our conclusions do not change if we take into account the two-fold ambiguity as proposed by \citet{Lee2011}, by assigning to each galaxy both signs (see also Appendix~A of \citealp{Kraljic2021}). \citet{Lee2007} and \citet{Kraljic2021} applied this modelling to disc-dominated galaxies; however, it can be extended to bulge-dominated galaxies under the assumption that the projected short axis and the spin axis are parallel \citep{Pahwa2016}. This assumption is based on the fact that the short and spin axes of most bulge-dominated galaxies are found to be aligned \citep{Franx1991}. In this work we extend the 3D modelling of the spin axis to bulge-dominated galaxies, since most have regular and ordered velocity maps similar to those of disc-dominated galaxies \citep{Emsellem2011,Krajnovic2011,Cappellari2011}. Moreover, most passive SAMI galaxies are found to be rotating oblate spheroids when modelled with orbit-superposition Schwarzschild techniques \citep{Santucci2022}. \subsection{Reconstruction of the cosmic web} \label{Reconstruction of the cosmic web} We reconstruct the filaments of the cosmic web using the Discrete Persistent Structure Extractor public code ({\sc DisPerSe}; \citealp{Sousbie2011a,Sousbie2011b}). {\sc DisPerSe} has been widely used in the literature to map the cosmic web, both using simulations (e.g., \citealp{Dubois2014,Welker2018,Codis2018}) and spectroscopic data, including the GAMA survey (e.g., \citealp{Kraljic2018, Duckworth2019, Welker2020}). Its strength for astrophysical applications resides in the fact that 3D structures of the cosmic web are identified starting from a point-like distribution, without making any assumption about its geometry or homogeneity. 
The 3D density field is built from the discrete distribution using the Delaunay Tessellation Field Estimator technique (DTFE; \citealp{Schaap2000, Cautun2011}), and it represents the input to the geometric 3D ridge extractor. {\sc DisPerSe} is a parameter-free and scale-free topologically motivated algorithm, based on discrete Morse and persistence theories. Voids, walls, and filaments are identified as individual components of the cosmic web and defined as distinct regions in the geometrical segmentation of space. The most significant structures can be selected according to their persistence ratio, which traces the significance of the topological connection between individual pairs of critical points and can be expressed in terms of the number of standard deviations $\sigma$. For the galaxy distribution, we use the 35882 GAMA galaxies selected in Section~\ref{GAMA galaxy survey}, taking advantage of the right ascension, declination and spectroscopic redshift measurements. We run {\sc DisPerSe} with a 3$\sigma$ persistence threshold, in accord with previous studies that investigate galaxy spin--filament alignments \citep{Welker2020,Kraljic2021}. A reconstruction of the 3D density fields using the Python package {\sc pyvista} \citep{Sullivan2019} and displaying the typical tetrahedra for the GAMA G09, G12 and G15 regions can be found \href{https://skfb.ly/o9MXv}{here}. The 3D filamentary structure is shown as an interactive plot at \href{https://skfb.ly/o9MXz}{this URL}. Figure~\ref{PolarPlotSAMIGAMAFilaments} shows the projected network of filaments (blue lines), the 35882 GAMA galaxies (grey points) used to reconstruct the cosmic web, and the 1121 SAMI galaxies for which we aim to study the spin--filament alignments (red points). The reconstruction of the cosmic filaments is affected by the `Fingers of God' effect (FoG; \citealp{Jackson1972}). 
This distortion effect is due to the random motions of galaxies within virialised groups and clusters, and it causes the elongation of halos in redshift space, possibly leading to the identification of spurious filaments (see Figure~1 of \citealp{Kraljic2018}). We investigate the impact of the FoG effect within the SAMI region of the GAMA survey, implementing a compression correction by making galaxies isotropically distributed around their group centres, as in \citet{Kraljic2018}. Figure~\ref{GAMA_LOS_filaments} shows the probability distribution function (PDF) for |$\cos\alpha$|, where $\alpha$ is the angle between the filament and the GAMA line-of-sight (similar to Figure 2 of \citealp{Welker2020}). For |$\cos\alpha$|\,<\,0.9, the PDF with the FoG effect is consistent with the PDF corrected for the FoG effect according to the two-sample Kolmogorov-Smirnov test (K-S test; \citealp{Lederman1984}). An excess of aligned filaments is found for |$\cos\alpha$|\,>\,0.9, even when correcting for the FoG effect. The percentage of these aligned filaments decreases by only $\sim$8 percentage points when applying the correction, from $\sim22$\% with the FoG effect to $\sim14$\% after correction, and it still shows an excess at |$\cos\alpha$|\,>\,0.9. We conclude that the correction for the FoG effect does not make significant changes to the cosmic web for the SAMI Galaxy Survey, largely because a relatively small fraction of galaxies are affected (see also next Section). This in turn is due to the SAMI volume probing only the nearby region of the GAMA survey at $0< z < 0.12$, where the number of rich and massive groups, for which the FoG effect is most important, is very limited \citep{Barsanti2018}. Finally, it is worth noting that the FoG compression does not correct for boundary effects that can also lead to spurious filaments along the line-of-sight \citep{Welker2020}, and it has been found to perform poorly when applied to groups individually \citep{Kuchner2021}. 
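As an illustration of how the filament--line-of-sight angle can be measured, the sketch below computes |$\cos\alpha$| for a filament segment in 3D Cartesian coordinates with the observer at the origin. This is a minimal sketch under our own conventions (the line of sight is taken toward the segment midpoint), not the actual GAMA/{\sc DisPerSe} pipeline.

```python
import numpy as np


def abs_cos_alpha(seg_start, seg_end):
    """|cos(alpha)| between a filament segment and the line of sight.

    Positions are 3D Cartesian coordinates with the observer at the
    origin; the line of sight is taken toward the segment midpoint.
    """
    seg_start = np.asarray(seg_start, float)
    seg_end = np.asarray(seg_end, float)
    fil = seg_end - seg_start           # filament orientation vector
    los = 0.5 * (seg_start + seg_end)   # line of sight to the midpoint
    return abs(fil @ los) / (np.linalg.norm(fil) * np.linalg.norm(los))


# A segment pointing radially away from the observer is aligned
# with the line of sight ...
radial = abs_cos_alpha([0, 0, 10], [0, 0, 11])       # -> 1.0
# ... while one lying in the plane of the sky is perpendicular.
tangential = abs_cos_alpha([-1, 0, 10], [1, 0, 10])  # -> 0.0
```

Filaments with |$\cos\alpha$|\,>\,0.9 in this sense are the candidates for FoG-induced spurious structures discussed above.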
\subsection{Cosmic web metrics} \label{Cosmic web metrics} Each SAMI galaxy is assigned to the closest filament using the smallest 3D Euclidean distance, $D_{\rm fil}$; if the projection point is beyond the start or end of the filament segment, we use the closest distance to the node. To exclude spurious filaments identified along the GAMA line-of-sight, when assigning each SAMI galaxy to the closest filament we follow Method~1 of \citet{Welker2020}: SAMI galaxies assigned to filaments with |$\cos\alpha$|\,>\,0.9 are reassigned to the closest filament with |$\cos\alpha$|\,<\,0.9, where $\alpha$ is the angle between the filament and the GAMA line-of-sight. If we follow the other two methods proposed by \citet{Welker2020}---i.e.\ Method~0 (all SAMI galaxies and filaments are taken into account) or Method~2 (we disregard SAMI galaxies assigned to filaments with |$\cos\alpha$|\,>\,0.95)---we find consistent results, with differences only in the statistical significance of the spin--filament correlation signals. This is in agreement with the conclusions regarding the three methods by \citet{Welker2020}. The galaxy spin--filament alignment is parametrised as the absolute value of the cosine of the angle between the galaxy spin axis and the closest filament in 3D Cartesian coordinates (e.g., \citealp{Tempel2013b,Kraljic2021}): \begin{equation} |\cos\gamma|=\frac{|\mathbf{L} \cdot \mathbf{r}|}{|\mathbf{L}| \cdot |\mathbf{r}|} \end{equation} where \textbf{L} is the galaxy spin axis identified as described in Section~\ref{Orientation of the spin axes} and \textbf{r} is the orientation vector of the filament. The quantity |$\cos\gamma$| varies in the range [0,1], with |$\cos\gamma$|=1 meaning the galaxy spin axis is parallel to the filament and |$\cos\gamma$|=0 meaning it is perpendicular to the filament. 
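The two metrics defined above, the distance $D_{\rm fil}$ to the closest filament segment and the alignment |$\cos\gamma$| between the spin axis and the filament orientation, amount to standard point-to-segment geometry. A minimal numpy sketch (variable names are ours, not the pipeline's):

```python
import numpy as np


def fil_metrics(pos, spin, seg_start, seg_end):
    """Distance D_fil from a galaxy at `pos` to a filament segment,
    plus the alignment |cos(gamma)| of its spin with the segment
    orientation.  All inputs are 3D Cartesian vectors.

    If the projection falls beyond the ends of the segment, the
    distance to the nearer endpoint (node) is used, as in the text.
    """
    pos, spin = np.asarray(pos, float), np.asarray(spin, float)
    a, b = np.asarray(seg_start, float), np.asarray(seg_end, float)
    r = b - a                                    # filament orientation
    t = np.clip((pos - a) @ r / (r @ r), 0.0, 1.0)
    d_fil = np.linalg.norm(pos - (a + t * r))    # smallest 3D distance
    cos_g = abs(spin @ r) / (np.linalg.norm(spin) * np.linalg.norm(r))
    return d_fil, cos_g


# Galaxy one unit off a z-axis filament, spin along the filament:
d, c = fil_metrics([1, 0, 0.5], [0, 0, 3], [0, 0, 0], [0, 0, 1])
# d -> 1.0 (distance to the spine), c -> 1.0 (parallel alignment)
```

Note that |$\cos\gamma$| is insensitive to the sign of the spin vector, so the two-fold inclination ambiguity discussed in Section~\ref{Orientation of the spin axes} does not affect it directly.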
The same parameter is used to quantify the separate bulge and disc spin--filament alignments, in which case \textbf{L} represents the respective bulge/disc spin axis. A schematic view of the cosmic web metrics and the associated vectors is given in Figure~\ref{CosmicWebMetrics} (see also Figure~4 of \citealp{Kraljic2018} and Figure~1 of \citealp{Winkel2021}). Comparing the $|\cos\gamma|$ values obtained for the cosmic web with and without the FoG effect, we find that the values deviate significantly from a 1:1 relation for only $\sim5$\% of the 1121 SAMI galaxies. The exclusion of these galaxies does not alter our conclusions and so we choose to study the $|\cos\gamma|$ values of the whole sample of 1121 SAMI galaxies without correcting the cosmic web for the FoG effect. Finally, we show the B/T distribution of the 1121 SAMI galaxies as a function of the distance to the closest filament and to the closest node in Figure~\ref{BT_DfilDistribution}. More bulge-dominated galaxies are found closer to the spine of the filament and closer to the nodes than disc-dominated galaxies, resembling the morphology-density relation of \citet{Dressler1980}. These trends are expected, since B/T correlates with stellar mass, which in turn traces $D_{\rm fil}$ and $D_{\rm node}$: filaments and nodes are, by definition, regions of higher density and stronger collapse \citep{Kraljic2018,Welker2020}. Dividing the galaxy sample into 489 low-mass galaxies with $9<\log{(M_\star/M_{\odot})}<10.2$ and 632 high-mass galaxies with $10.2<\log{(M_\star/M_{\odot})}<12$, we find that the increase of B/T towards lower values of $D_{\rm fil}$ and $D_{\rm node}$ is reproduced for both ranges in stellar mass. Overall, this suggests a scenario where galaxies migrate along the filament spine and towards the nodes, growing the bulge component via mergers. 
\section{Results} \label{Results} Our aim is to identify which galaxy properties are most strongly related to the flipping of the galaxy spin--filament alignments, in order to understand the physical processes involved. We focus on stellar mass, bulge-to-total flux ratio and their product, the mass of the bulge $M_{\rm bulge}=M_\star\times({\rm B/T})$. Indeed, $M_\star$ plays a key role in the alignments \citep{Codis2015,Welker2020}. However, $M_\star$ alone might not be able to explain the dependence on morphology \citep{Kraljic2020}. Thus, B/T might also be an independent driver, since it is identified as the main tracer of gas-rich major mergers, which most efficiently build up the bulge component and change the galaxy angular momentum \citep{Welker2014,Welker2017,Lagos2018}. We also investigate the spin--filament alignments as a function of morphological, star formation, kinematic and environmental properties. These parameters correlate with $M_\star$ and/or B/T, so they are expected to show secondary correlations. As stated in Section~\ref{Orientation of the spin axes}, the identification of the spin axis is based on stellar kinematic PAs. The descriptions of how the galaxy properties are measured, together with the primary references, are given in Section~\ref{Galaxy properties}. By exploiting the spatially-resolved kinematic bulge/disc decomposition for the SAMI galaxies, we are able to study separately the spin--filament alignments of bulges and discs. These analyses help us understand the different formation scenarios for galaxies, bulges, and discs. \subsection{\texorpdfstring{$\boldsymbol{M_{\rm bulge}=M_\star\times(B/T)}$}{} is the primary parameter} \label{Correlations with galaxy properties} We explore whether galaxy spin--filament alignments depend on different galaxy properties, with the goal of understanding their statistical significance and possible physical linkages. 
Figure~\ref{AnglevsGalaxyParameters} shows the average |$\cos\gamma$| values as a function of stellar mass (panel A), bulge-to-total flux ratio (panel B), bulge mass (panel C), degree of stellar rotation (panels D and E), average light-weighted age (panel F), kinematic misalignment between stars and gas (panel G), distance from the closest filament (panel H), and local galaxy density (panel I). The panels show that the mean |$\cos\gamma$| {\em decreases} with increasing $M_\star$, B/T, $M_{\rm bulge}$ and age, indicating a relative shift from parallel to perpendicular spin--filament alignments as these quantities increase. By contrast, the mean |$\cos\gamma$| {\em increases} with increasing $(V/\sigma)_e$, $\lambda_e$ and $D_{\rm fil}$, indicating a relative shift from perpendicular to parallel alignments as these quantities increase. No significant trends are found for |$\cos\gamma$| as a function of $\Delta{\rm PA}$ or $\Sigma_5$. We look for possible dependencies by performing Spearman rank correlation tests for individual galaxies (correlation coefficients, $\rho$, and $p$-values, $p_{\rm S}$, are listed in Table~\ref{SpearmanResults}). We adopt $p_{\rm S}<0.05$ as the criterion for rejecting the null hypothesis of no correlation. With this criterion, statistically significant correlations with |$\cos\gamma$| are detected for $M_\star$, B/T, $M_{\rm bulge}$, age and $D_{\rm fil}$. The result is marginal for $\lambda_e$, while for $\Delta{\rm PA}$ and $\Sigma_5$ there is no correlation. The strongest correlations are found for $M_{\rm bulge}$ and B/T. We explore whether one of these parameters primarily correlates with the spin--filament alignments, following a similar method to that used in \citet{Oh2022}. We fit linear relations between |$\cos\gamma$| and the galaxy properties. 
Choosing a galaxy property as the tested primary parameter, we estimate the expected |$\cos\gamma$|$_{\rm exp}$ values from the linear fit between the tested primary parameter and the observed |$\cos\gamma$|$_{\rm obs}$ values. We define the difference $\Delta\cos\gamma=|\cos\gamma|_{\rm obs}-|\cos\gamma|_{\rm exp}$, removing the dependency on the tested primary parameter from the other galaxy properties. Then, we use the Spearman test to check whether correlations are still present between $\Delta\cos\gamma$ and the other galaxy properties. Assuming $M_\star$ as the primary parameter, significant ($p_{\rm S}<0.05$) residual correlations are found for B/T and $M_{\rm bulge}$, implying that $M_\star$ alone cannot explain the dependence of galaxy spin--filament alignments on B/T and $M_{\rm bulge}$. Assuming B/T as the primary parameter, we detect significant residual correlations for $M_\star$ and $M_{\rm bulge}$. Finally, considering $M_{\rm bulge}$ as the primary parameter, we find no residual dependence on any galaxy property. These results are confirmed by the estimate of the partial correlation coefficients, which allow us to explore the true correlation between two parameters while controlling for a third quantity, avoiding the cross-correlation driven by their dependency on the third property \citep{Lawrance1976,Baker2022}. The left panel of Figure~\ref{ExplainedVariance} shows the partial correlation coefficients from the Spearman test between |$\cos\gamma$| and the studied galaxy properties while controlling for $M_{\rm bulge}$. No significant correlations remain once the correlation with $M_{\rm bulge}$ is taken into account. Similar results are obtained for the |$\cos\gamma$| values where the cosmic web is corrected for the FoG effect. An alternative approach is to apply the partial least squares regression technique (PLS; \citealp{Wold1966,Hoskuldsson1988}) to estimate the contribution of each galaxy parameter to the |$\cos\gamma$| variance. 
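The residual-correlation test described above (fit out the tested primary parameter, then Spearman-test the residuals against another property) can be sketched on synthetic data. This is an illustration only: the variables and effect sizes below are invented, not the SAMI measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic illustration: cos_gamma depends (weakly) only on m_bulge,
# while m_star correlates with cos_gamma purely through m_bulge.
n = 2000
m_bulge = rng.normal(10.0, 1.0, n)
m_star = m_bulge + rng.normal(0.0, 0.5, n)
cos_gamma = np.clip(0.5 - 0.05 * (m_bulge - 10.0)
                    + rng.normal(0.0, 0.1, n), 0.0, 1.0)


def residual_spearman(primary, other, y):
    """Fit y on the tested primary parameter, then Spearman-test the
    residuals Delta(cos gamma) = y_obs - y_exp against another property."""
    slope, intercept = np.polyfit(primary, y, 1)
    residuals = y - (slope * primary + intercept)
    return stats.spearmanr(residuals, other)


# The raw m_star correlation is significant, but it (largely) vanishes
# once the m_bulge dependence is fitted out:
rho_raw, p_raw = stats.spearmanr(cos_gamma, m_star)
rho_res, p_res = residual_spearman(m_bulge, m_star, cos_gamma)
```

This mirrors the logic by which we conclude that $M_{\rm bulge}$, rather than $M_\star$ or B/T alone, is the primary parameter.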
We use the PLSRegression Python function \citep{Pedregosa2012} and follow the approach described in \citet{Oh2022}. The right panel of Figure~\ref{ExplainedVariance} and Table~\ref{SpearmanResults} show the fraction of the variance in |$\cos\gamma$| explained by each galaxy parameter. The highest contribution ($\sim$70\%) is found for $M_{\rm bulge}$, suggesting that this parameter most significantly correlates with the galaxy spin--filament alignments. B/T accounts for $\sim$27\%, $M_\star$ for $\sim$2\%, and the remaining 1\% is explained by the other galaxy properties. These results are consistent with the findings from the analysis above, and suggest that $M_{\rm bulge}$ is the primary parameter to correlate with spin--filament alignments. \begin{table} \centering \caption{Results from Spearman rank correlation test and PLS technique for galaxy spin--filament alignments $|\cos\gamma|$ as a function of various galaxy properties. Column~1 lists the analysed galaxy property, column~2 the number of galaxies, column~3 the Spearman correlation coefficient, column~4 the $p$-value from the Spearman test with significant $p$-values ($<$0.05) highlighted in bold, and column~5 the explained variance in $|\cos\gamma|$.} \label{SpearmanResults} \begin{tabular}{@{}lcccc@{}} \toprule Galaxy property & $N_{\rm gal}$ & $\rho$ & $p_{\rm S}$& Variance (\%)\\ \midrule $\log(M_\star/M_{\odot})$ & 1121 & $-$0.07 & \textbf{0.014} & 1.73 \\ B/T & 1121 & $-$0.11 & $\mathbf{10^{-4}}$ &26.58 \\ $\log(M_{\rm bulge}/M_{\odot})$ & 1121 & $-$0.13 & $\mathbf{10^{-5}}$ &71.09\\ \midrule $(V/\sigma)_e$ & 1071 & $+$0.06 & \textbf{0.047} & 0.07 \\ $\lambda_e$ & 1071 & $+$0.06 & \textbf{0.048} &0.07 \\ Age & 1121 & $-$0.10 & \textbf{0.001} &0.27 \\ $\Delta{\rm PA}$ & 1121 & $-$0.02 & 0.481 & 0.01\\ \midrule $D_{\rm fil}$ & 1121 & $+$0.07 & \textbf{0.024} & 0.24 \\ $\log(\Sigma_5)$ & 1110 & $-$0.05 & 0.096 & 0.01 \\ \bottomrule \end{tabular} \end{table} \subsection{Trends with 
\texorpdfstring{$\boldsymbol{M_\star}$}{}, B/T and \texorpdfstring{$\boldsymbol{M_{\rm bulge}}$}{}} \label{Trends with stellar mass, B/T and mass bulge} We find significant correlations of the galaxy spin--filament alignments with $M_\star$, B/T and $M_{\rm bulge}$. We now explore this signal further, dividing the SAMI galaxies into $M_\star$, B/T and $M_{\rm bulge}$ ranges, and analysing the tendency of the alignment for each sub-sample. Following \citet{Welker2020}, we divide the 1121 SAMI galaxies into four stellar mass ranges: $9<\log{(M_\star/M_{\odot})}<9.5$, $9.5<\log{(M_\star/M_{\odot})}<10.2$, $10.2<\log{(M_\star/M_{\odot})}<10.9$ and $10.9<\log{(M_\star/M_{\odot})}<12$. Figure~\ref{TrendsStellarMass} shows the probability distribution function (PDF) for |$\cos\gamma$|, the absolute value of the cosine of the angle between the galaxy spin axis and the orientation of the closest filament. The data are grouped in three bins of |$\cos\gamma$| and the PDFs are normalised such that the mean value over the bins is unity. The error bars are estimated from the bootstrap method using 1000 sample realizations. To assess the statistical significance of each trend, we apply the K-S test against the null hypothesis that |$\cos\gamma$| has a uniform distribution. In order to account for possible observational bias, we build the null hypothesis by generating 3000 randomised samples where the galaxy spins are fixed, but the galaxy positions (and thus the identification of their closest filaments) are shuffled \citep{Tempel2013a,Tempel2013b,Kraljic2021}. The median from the 3000 random samples is used as the null hypothesis, and the reconstructed distributions are nearly uniform. We take $p$-values ($p_{\rm K-S}$) smaller than 0.05 to indicate that the distribution is significantly different from the null hypothesis. 
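The construction of the randomised null can be sketched as follows. This is a simplified illustration: we shuffle the galaxy--filament pairings directly rather than re-running the closest-filament assignment, and we draw isotropic vectors instead of using the SAMI data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)


def shuffled_null(spins, filaments, n_shuffles=3000):
    """Null |cos gamma| samples: galaxy spins are kept fixed while the
    matching to filaments is shuffled, mimicking the randomised samples
    used to build the null hypothesis."""
    spins = spins / np.linalg.norm(spins, axis=1, keepdims=True)
    fils = filaments / np.linalg.norm(filaments, axis=1, keepdims=True)
    samples = [np.abs(np.einsum('ij,ij->i',
                                spins, fils[rng.permutation(len(fils))]))
               for _ in range(n_shuffles)]
    return np.concatenate(samples)


# Isotropic random spins and filament orientations give a nearly
# uniform distribution of |cos gamma| on [0, 1]:
spins = rng.normal(size=(300, 3))
fils = rng.normal(size=(300, 3))
null = shuffled_null(spins, fils, n_shuffles=50)

obs = np.abs(np.einsum('ij,ij->i',
                       spins / np.linalg.norm(spins, axis=1, keepdims=True),
                       fils / np.linalg.norm(fils, axis=1, keepdims=True)))
stat, p_value = stats.ks_2samp(obs, null)  # observed sample vs. the null
```

A small $p_{\rm K-S}$ would indicate that the observed |$\cos\gamma$| distribution deviates from the shuffled null; for the isotropic toy data above no such deviation is built in.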
Table~\ref{PAstellResults} lists the galaxy property of interest, the selection of galaxy sub-samples, the number of galaxies for each sub-sample, the mean |$\cos\gamma$| value, and the $p$-value from the K-S test. We find that the PDF is skewed towards more perpendicular spin--filament alignments (i.e.\ lower values of |$\cos\gamma$|) for more massive galaxies and towards more parallel spin--filament alignments (i.e.\ higher values of |$\cos\gamma$|) for less massive galaxies. With these four mass bins, significant statistical results are found for galaxies with $\log{(M_\star/M_{\odot})}>10.2$. For the purpose of our subsequent analyses, we group the SAMI galaxies into two mass bins: low-mass galaxies, with $9<\log{(M_\star/M_{\odot})}<10.2$, and high-mass galaxies, with $10.2<\log{(M_\star/M_{\odot})}<12$. The top left panel of Figure~\ref{TrendsGalaxyProperties} shows the PDFs of the spin--filament alignments for these two sub-samples. We find $p_{\rm K-S}<0.05$ only for the high-mass galaxies, which show a tendency towards perpendicular alignment. Using the two-sample K-S test, we check the null hypothesis that the |$\cos\gamma$| distributions of the low-mass and high-mass galaxy sub-samples are drawn from the same parent population. This test returns $p_{\rm 2\,K-S}=0.006$, listed in Table~\ref{PAstellResults} and indicating that the two distributions are unlikely to be drawn from the same population. The top right panel of Figure~\ref{TrendsGalaxyProperties} shows the cumulative distribution functions (CDFs) for the two sub-samples and the entire population. The inset displays the difference $\Delta$(CDF)$\,=\,$CDF(sub-sample)\,$-$\,CDF(all) as a function of |$\cos\gamma$|. The PDFs and CDFs of the spin--filament alignments for the SAMI galaxies divided into B/T and $M_{\rm bulge}$ ranges are displayed in the middle and bottom panels of Figure~\ref{TrendsGalaxyProperties}, respectively. 
The selection of the ranges, the number of galaxies for each sub-sample, the average |$\cos\gamma$| values and the $p$-values from the K-S tests are reported in Table~\ref{PAstellResults}. Galaxies with the lowest values of B/T and $M_{\rm bulge}$ tend to have spins parallel to the closest filament, while galaxies with the highest B/T and $M_{\rm bulge}$ show preferentially perpendicular orientations. These deviations from a uniform PDF are statistically significant. Comparing the |$\cos\gamma$| distributions of the galaxies in the most extreme ranges, the pairs of sub-samples in B/T and $M_{\rm bulge}$ have statistically different distributions. Finally, the comparison of the CDF plots shows that the largest deviations of the sub-samples from the population as a whole are seen for $M_{\rm bulge}$. \begin{table*} \caption{Galaxy spin--filament alignments for various galaxy properties. Column~1 lists the galaxy property, column~2 the set(s) of galaxy sub-samples for that property, column~3 the number of galaxies in each sub-sample, column~4 the average |$\cos\gamma$|, column~5 the $p$-value from the K-S test, and column~6 the tendency of the alignment (where there is a significant deviation from the reconstructed uniform distribution). Columns~7 and~8 give the sub-samples for which |$\cos\gamma$| distributions are compared with a two-sample K-S test; the associated $p$-value is listed in column~9. 
Significant $p$-values (those less than 0.05) are highlighted in bold.} \makebox[\textwidth][c]{ \begin{tabular}{lccccc|ccc} \toprule Galaxy Property & Selection & $N_{\rm gal}$& <|$\cos\gamma$|> & $p_{\rm K-S}$ & Alignment & \multicolumn{1}{c}{Sample 1} & Sample 2 & $p_{\rm 2\,K-S}$\\ \midrule $\log{(M_\star/M_{\odot})}$ & [9; 9.5] & 168 & 0.521$\pm$0.022 & 0.934 & & & & \\ & [9.5; 10.2] & 321 & 0.551$\pm$0.016 & 0.155 & & & & \\ & [10.2; 10.9] & 454 & 0.425$\pm$0.014 & \textbf{0.002} & $\perp$ & & & \\ & [10.9; 12] & 178 & 0.433$\pm$0.021& \textbf{0.002}& $\perp$ & & & \\ & & & & & & & & \\ & [9; 10.2] & 489 & 0.539$\pm$0.013 & 0.102 & & [9; 10.2] &[10.2; 12] &\textbf{0.006} \\ & [10.2; 12] & 632 & 0.427$\pm$0.011 & $\mathbf{1\times10^{-4}}$& $\perp$ & & & \\ & & & & & & & & \\ & ${\rm PA}_{\rm gas}$, [8; 9] & 180 & 0.561$\pm$0.042 & 0.954 & &[8; 9] & [10.2; 12] & \textbf{0.022}\\ & ${\rm PA}_{\rm gas}$, [9; 10.2] & 489 & 0.482$\pm$0.012 & 0.646 & & & &\\ & ${\rm PA}_{\rm gas}$, [10.2; 12] & 632 & 0.441$\pm$0.010 & $\mathbf{8\times10^{-5}}$& $\perp$& & &\\ \midrule B/T & [0; 0.1] & 214 & 0.559$\pm$0.020 & \textbf{0.001} & $\parallel$ & [0; 0.1] & [0.7; 1] & $\mathbf{7\times10^{-5}}$\\ & [0.1; 0.4] & 344 & 0.517$\pm$0.016 & 0.991 & & & & \\ & [0.4; 0.7] & 301 & 0.437$\pm$0.017 & \textbf{0.017} &$\perp$ & & & \\ & [0.7; 1] & 262 & 0.375$\pm$0.018 & $\mathbf{7\times10^{-5}}$& $\perp$& & & \\ \midrule $\log{(M_{\rm bulge}/M_{\odot})}$ & [7; 8.3] & 83 & 0.591$\pm$0.030 & \textbf{0.008} & $\parallel$ & [7; 8.3] & [10.5; 12] & $\mathbf{3\times10^{-5}}$\\ & [8.3; 10] & 369 & 0.542$\pm$0.015 & 0.113 & & & & \\ & [10; 10.5] & 459 & 0.440$\pm$0.014& \textbf{0.015} & $\perp$ & & & \\ & [10.5; 12] & 210 & 0.363$\pm$0.019 & $\mathbf{1\times10^{-5}}$ & $\perp$& & & \\ \midrule $(V/\sigma)_e$ & [0; 0.3] & 207 & 0.413$\pm$0.021 & \textbf{0.037} & $\perp$ & [0; 0.3] &[1; 1.7] & 0.433\\ & [0.3; 0.7] & 459 & 0.449$\pm$0.014 & \textbf{0.020}& $\perp$ & & & \\ & [0.7; 1] & 287 & 
0.502$\pm$0.018 & 0.582& & & & \\ & [1; 1.7] & 118 & 0.468$\pm$0.027& 0.984 & & & & \\ \midrule $\lambda_e$ & [0; 0.3] & 229 & 0.408$\pm$0.019 & \textbf{0.011} & $\perp$ & [0; 0.3] &[0.6; 1] & 0.245\\ & [0.3; 0.5] & 330 & 0.450$\pm$0.016 & \textbf{0.015} & $\perp$ & & & \\ & [0.5; 0.6] & 214 & 0.473$\pm$0.020 & 0.993& & & & \\ & [0.6; 1] & 298 & 0.500$\pm$0.017& 0.989 & & & & \\ \midrule Age [Gyr] & [0; 2] & 285 & 0.534$\pm$0.016 & 0.881 & &[0; 2] & [7; 17.5]& \textbf{0.018}\\ & [2; 5] & 537 & 0.472$\pm$0.011 & 0.437 & & & & \\ & [5; 7] & 160 & 0.419$\pm$0.021& \textbf{0.006}& $\perp$ & & & \\ & [7; 17.5] & 139 & 0.405$\pm$0.023 & \textbf{0.014} & $\perp$& & &\\ \midrule Visual & Elliptical & 90 & 0.429$\pm$0.026 & 0.099 & & Elliptical+S0 & Late-type& \textbf{0.023} \\ Morphology & S0 & 358 & 0.433$\pm$0.014 & \textbf{0.004}& $\perp$ & & &\\ & Late-type & 647 & 0.513$\pm$0.011 & 0.872 & & & &\\ & & & & & & & & \\ & Late-type, $\scriptstyle{7<\log{(M_{\rm bulge}/M_{\odot})}<8.3}$ & 75 & 0.638$\pm$0.034 & \textbf{0.016} & $\parallel$ & & & \\ & Late-type, $\scriptstyle{10.5<\log{(M_{\rm bulge}/M_{\odot})}<12}$ & 94 & 0.441$\pm$0.031& \textbf{0.038} & $\perp$ & & & \\ \midrule Spectral & Passive & 394 & 0.409$\pm$0.015 & $\mathbf{1\times10^{-4}}$& $\perp$& Passive& Star-forming & \textbf{0.003} \\ Classification & Star-forming & 714 & 0.502$\pm$0.011 & 0.990& & & &\\ & & & & & & & & \\ & Star-forming, $\scriptstyle{7<\log{(M_{\rm bulge}/M_{\odot})}<8.3}$ & 72 & 0.606$\pm$0.032 & \textbf{0.038}& $\parallel$ & & & \\ & Star-forming, $\scriptstyle{10.5<\log{(M_{\rm bulge}/M_{\odot})}<12}$ & 114&0.423$\pm$0.027 & \textbf{0.003}& $\perp$& & &\\ \midrule Kinematic & Slow rotator & 68 & 0.408$\pm$0.032 & 0.240& & Slow rotators & Fast rotators & 0.730\\ Morphology & Fast rotator & 864 & 0.471$\pm$0.009 & 0.100& & & &\\ && & & & & & &\\ & Fast rotator, $\scriptstyle{7<\log{(M_{\rm bulge}/M_{\odot})}<8.3}$ & 49 & 0.693$\pm$0.041 & \textbf{0.004}& $\parallel$ & & & \\ & Fast 
rotator, $\scriptstyle{10.5<\log{(M_{\rm bulge}/M_{\odot})}<12}$ & 247& 0.406$\pm$0.018&\textbf{0.002} & $\perp$& & & \\ \midrule Local & Central & 385 & 0.441$\pm$0.013& \textbf{0.036}& $\perp$& Central & Isolated& 0.439 \\ environment & Satellite & 328 & 0.495$\pm$0.015 & 0.092& & & &\\ & Isolated & 408 & 0.484$\pm$0.013 & 0.758& & & &\\ & & & & & & & &\\ & Isolated, $\scriptstyle{7<\log{(M_{\rm bulge}/M_{\odot})}<8.3}$ & 35 & 0.643$\pm$0.050 & \textbf{0.047}& $\parallel$ & & & \\ & Isolated, $\scriptstyle{10.5<\log{(M_{\rm bulge}/M_{\odot})}<12}$ & 147 & 0.429$\pm$0.024&\textbf{0.017} & $\perp$& & & \\ \bottomrule \end{tabular} } \label{PAstellResults} \end{table*} \subsection{Secondary tracers} \label{Secondary tracers} We investigate the spin--filament alignments as a function of galaxy properties that show weaker correlations with |$\cos\gamma$| than $M_{\rm bulge}$ does. We focus on light-weighted stellar age and the degree of ordered stellar rotation. We also classify galaxies according to visual morphological type, star formation properties, kinematic morphology and membership in a galaxy group. Correlations of these properties with |$\cos\gamma$| are expected due to their dependence on $M_{\star}$ and B/T (e.g., \citealp{vandeSande2017,vandeSande2018}), and thus on $M_{\rm bulge}$. The results are listed in Table~\ref{PAstellResults}, while figures similar to Figure~\ref{TrendsGalaxyProperties}, containing the PDFs and CDFs, are provided as supplementary material to this article. We also show the $\lambda_e$ versus ellipticity plot, with points colour-coded according to $M_{\rm bulge}$, which displays the expected dependence of $\lambda_e$ on $M_{\rm bulge}$: galaxies with lower $\lambda_e$ values have higher $M_{\rm bulge}$. 
Significant results are found only for tendencies towards more perpendicular alignments for galaxies with $(V/\sigma)_e$\,<\,0.7 ($p_{\rm K-S}=0.020$), $\lambda_e$\,<\,0.5 ($p_{\rm K-S}=0.015$), age\,>\,5\,Gyr ($p_{\rm K-S}=0.014$), S0 morphology ($p_{\rm K-S}=0.004$), passive spectral type ($p_{\rm K-S}=1\times10^{-4}$) and central group membership ($p_{\rm K-S}=0.036$). Similar results are obtained using inclination-corrected $\lambda_e$ values from \citet{vandeSande2021b}. We also note that similar PDFs are found as a function of $(V/\sigma)_e$ and $\lambda_e$. We check whether late-type galaxies, star-forming galaxies, fast rotators and isolated galaxies with low bulge masses ($7<\log{(M_{\rm bulge}/M_{\odot})}<8.3$) show a preferential parallel alignment, in agreement with the observed dependence of the signal on $M_{\rm bulge}$. Similarly, we test whether galaxies in these classes with high bulge masses ($10.5<\log{(M_{\rm bulge}/M_{\odot})}<12$) tend to be aligned more perpendicularly. The PDFs for these sub-samples are shown in Figure~\ref{TrendsVisualKinematicMorphologyMassBulge}. Late-types, star-forming galaxies, fast rotators and isolated galaxies with low $M_{\rm bulge}$ have statistically significant tendencies towards parallel alignments (with $p_{\rm K-S}=0.016$, 0.038, 0.004 and 0.047 respectively), while those with high $M_{\rm bulge}$ have a more perpendicular tendency (with $p_{\rm K-S}=0.038$, 0.003, 0.002 and 0.017 respectively). This confirms that morphological, star formation, kinematic and environmental properties are secondary tracers of the spin--filament alignments with respect to $M_{\rm bulge}$. \subsection{S0 galaxies} \label{S0 galaxies} Using the analysis of the spin--filament alignments, we aim to better understand the formation of S0 galaxies. A significant preference for perpendicular alignments for S0 galaxies ($p_{\rm K-S}=0.004$) was detected in the previous Section. 
We now explore this trend as a function of $M_\star$ and the kinematic misalignment between stars and gas, in order to understand which sub-samples dominate the perpendicular signal for S0 galaxies. The choice of these galaxy properties is motivated by the fact that \citet{Kraljic2021} found that the perpendicular alignment of S0 MaNGA galaxies is dominated by low-mass galaxies (at apparent odds with the predicted mass dependency of the signal) or by misaligned galaxies. According to the visual morphology classification, there are 469 S0 galaxies. Of this sample, we select only fast rotators and galaxies best fitted by a double-component S\'ersic bulge plus exponential disc model in the $r$-band (see Section~\ref{Photometric position angles}). This allows us to exclude possible ellipticals and late-type galaxies that might contaminate the signal. The final S0 sample comprises 216 galaxies. The top and bottom panels of Figure~\ref{TrendsS0} show the PDFs of the spin--filament alignments for the 216 S0 galaxies divided into $M_\star$ and $\Delta{\rm PA}$ ranges, respectively; the results are reported in Table~\ref{PAstellResultsS0}. More statistically significant perpendicular alignments are observed for high-mass S0 galaxies and kinematically-misaligned S0 galaxies (with $\Delta{\rm PA}>30\degree$) relative to low-mass or aligned S0s. We chose $\Delta{\rm PA}=30\degree$ as the threshold between misaligned and aligned galaxies in line with previous studies (e.g., \citealp{Bryant2019,Duckworth2019}), but our results do not change if we use 20$\degree$ or 40$\degree$ as the threshold. Using the two-sample K-S test, we find that the |$\cos\gamma$| distribution of the high-mass S0 galaxies is marginally different to the |$\cos\gamma$| distribution of low-mass S0 galaxies ($p_{\rm 2\,K-S}=0.043$). On the other hand, the |$\cos\gamma$| distributions of aligned and misaligned S0 galaxies are consistent ($p_{\rm 2\,K-S}=0.185$). 
High-mass S0 galaxies are also significantly more metal-rich ($p=6\times10^{-8}$), dispersion-dominated ($p=0.008$), and tend to have a more classical \citet{deVaucouleurs1948,deVaucouleurs1956} bulge profile ($p=0.002$) compared to low-mass S0 galaxies (high-mass S0 galaxies have median $n_{\rm bulge}=4.24\pm0.17$, while low-mass S0 galaxies have median $n_{\rm bulge}=2.20\pm0.42$). The distributions as a function of [Z/H], $(V/\sigma)_{e}$ and S\'ersic index of the bulge are shown in Figure~\ref{PropertiesS0}. These findings suggest that high-mass and low-mass S0 galaxies represent different galaxy populations. Overall, we find that the perpendicular tendency for S0 galaxies is dominated by high-mass galaxies and not by low-mass S0 galaxies as in \citet{Kraljic2021}. They studied 114 low-mass and 155 high-mass S0 galaxies, while our sample comprises 38 and 178, respectively. This discrepancy might be due to the different criteria applied to select S0 galaxies, or to massive bulges in the low-mass S0 MaNGA galaxies giving rise to a strong perpendicular trend. \begin{table*} \caption{Galaxy spin--filament alignments for 216 S0 galaxies as a function of $M_\star$ and $\Delta{\rm PA}$.
The columns are the same as in Table~\ref{PAstellResults}.} \makebox[\textwidth][c]{ \begin{tabular}{lccccc|ccc} \toprule Galaxy Property & Selection & $N_{\rm gal}$& <|$\cos\gamma$|> & $p_{\rm K-S}$ & Alignment & \multicolumn{1}{c}{Sample 1} & Sample 2 & $p_{\rm 2\,K-S}$\\ \midrule & All & 216 & 0.433$\pm$0.016 & \textbf{0.007} &$\perp$ & & & \\ \midrule $\log{(M_\star/M_{\odot})}$ & [9; 10.2] & 38 & 0.621$\pm$0.040 & 0.891 & &[9; 10.2] &[10.2; 12] & \textbf{0.043}\\ & [10.2; 12] & 178 & 0.402$\pm$0.017 & $\mathbf{8\times10^{-4}}$ &$\perp$ & & & \\ \midrule $\Delta{\rm PA}$ & <30\degree & 116 & 0.443$\pm$0.021 & 0.489 & & <30\degree& >30\degree & 0.185\\ & >30\degree & 100 & 0.342$\pm$0.029 & \textbf{0.013} &$\perp$ & & & \\ \bottomrule \end{tabular} } \label{PAstellResultsS0} \end{table*} \subsection{Spin--filament alignments of bulges and discs} \label{spin--filament alignments of bulges and discs} We next investigate the separate spin--filament alignments of bulges and discs to understand the formation of these galaxy components. We take advantage of the 2D kinematic bulge/disc decomposition described in Section~\ref{Stellar kinematics of bulges and discs}, which allows us to measure the separate stellar kinematic PAs of bulges and discs. Out of the 1121 SAMI galaxies, kinematic PAs are available for 468 bulges and 516 discs, with 196 galaxies having both bulge and disc measurements (see Section~\ref{Galaxy sample and selection criteria}). Bulge PAs have an average 1$\sigma$ error of $\sim$5\degree, compared to $\sim$2\degree\ for disc PAs, highlighting the larger uncertainty in estimating PAs from the bulge velocity maps. To identify the orientation of the bulge and disc spin axes, we apply the method of Section~\ref{Orientation of the spin axes}. Figure~\ref{HistogramBulgeDisc} shows the visual morphology distributions for bulges, discs and galaxies with both components. Bulges belong mainly to early-type galaxies, while discs mainly trace late-type galaxies.
Galaxies with both components are mostly S0s. In Figure~\ref{TrendsBulgeDisc} we display the PDFs and CDFs of the spin--filament alignments for bulges and discs. We also investigate the tendencies according to $M_{\rm bulge}$. Table~\ref{PAstellResultsBulgeDisc} lists the number of galaxies, the average |$\cos\gamma$| and the $p$-values from the K-S tests. Bulges have a significant perpendicular tendency, while discs show a uniform PDF. The |$\cos\gamma$| distributions of the bulges and the discs are not statistically different ($p_{\rm 2\,K-S}=0.139$), and they are consistent with the galaxy spin--filament alignments. The alignment for bulges tends to be more perpendicular at all $M_{\rm bulge}$ values (there is only one bulge component with $\log(M_{\rm bulge}/M_{\odot})<8.3$). Discs show a significant parallel tendency for galaxies with $7<\log(M_{\rm bulge}/M_{\odot})<8.3$, while the alignment is significantly perpendicular at $10.5<\log(M_{\rm bulge}/M_{\odot})<12$. The two disc |$\cos\gamma$| distributions are statistically different ($p_{\rm 2\,K-S}=0.005$). To better understand the kinematic characteristics of the bulges and discs, we measure $V/\sigma$ for both components as the flux-weighted means within 1\,$R_e$ of the galaxies \citep{Cappellari2007,Oh2020}. Bulges tend to have lower $V/\sigma$ values with respect to discs, with median $(V/\sigma)_{\rm bulge}=0.30$ and median $(V/\sigma)_{\rm disc}=0.84$. Figure~\ref{TrendsBulgeDisc2} shows the PDFs of the spin--filament alignments as a function of $(V/\sigma)$ for bulges (left panel) and discs (right panel). Perpendicular tendencies are seen for dispersion-dominated bulges with $(V/\sigma)_{\rm bulge}<0.8$ ($p_{\rm K-S}=0.004$), while rotation-dominated bulges with $(V/\sigma)_{\rm bulge}>0.8$ show a more parallel trend ($p_{\rm K-S}=0.016$). Rotation-dominated discs tend to show a parallel alignment ($p_{\rm K-S}=0.037$), while dispersion-dominated discs tend to be perpendicular ($p_{\rm K-S}=0.024$). 
Since we find a parallel tendency for rotation-dominated bulges that we do not detect for bulges with $7<\log(M_{\rm bulge}/M_{\odot})<8.3$, we inspect in Figure~\ref{HistogramsBulge} the bulge mass distributions (left panel) and the bulge S\'ersic index distributions (right panel) according to $(V/\sigma)_{\rm bulge}$. Rotation-dominated bulges tend to have lower $M_{\rm bulge}$ values than dispersion-dominated bulges (two-sample K-S test: $p$-value $=7.75\times 10^{-5}$). Thus, the trends of the PDFs in Figure~\ref{TrendsBulgeDisc2} are in agreement with the dependency of the signal on $M_{\rm bulge}$. The PLS technique confirms $M_{\rm bulge}$ as the primary correlation parameter over $(V/\sigma)_{\rm bulge}$. Dispersion-dominated bulges show a peak around the de~Vaucouleurs profile, with median $n_{\rm bulge}=4.70\pm0.15$. On the other hand, rotation-dominated bulges have median $n_{\rm bulge}=2.72\pm0.67$. The two $n_{\rm bulge}$ distributions are also significantly different according to the two-sample K-S test ($p$-value $=$ 0.003). Finally, we analyse the 196 SAMI galaxies with available |$\cos\gamma$| for both the bulge and the disc components. About 65\% of these galaxies are visually classified as S0s. Accounting for any kinematic misalignment, we investigate the tendencies as a function of the absolute difference between the stellar kinematic PAs of the bulges and the discs, $\Delta{\rm PA}_{\rm bulge - disc}=|\rm PA_{\rm bulge}-PA_{\rm disc}|$, and the associated significance $\delta{\rm PA}_{\rm bulge - disc}=\Delta{\rm PA}_{\rm bulge - disc}/(\delta{\rm PA}_{\rm bulge}+\delta{\rm PA}_{\rm disc})$. Most (70\%) galaxies have aligned components: $\Delta{\rm PA}_{\rm bulge - disc}<30$\degree or $\delta{\rm PA}_{\rm bulge - disc}<3$, i.e. ${\rm PA}_{\rm bulge}$ and ${\rm PA}_{\rm disc}$ are less than 3$\sigma$ different.
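The alignment criteria above can be expressed compactly. A minimal sketch (hypothetical helper names; PAs in degrees, folding in the 180\degree\ ambiguity of position angles; the error values in the example reflect the average bulge and disc PA uncertainties quoted earlier) is:

```python
def pa_difference(pa1, pa2):
    """Acute difference between two position angles in degrees,
    folding in the 180-degree ambiguity of a PA."""
    d = abs(pa1 - pa2) % 180.0
    return min(d, 180.0 - d)

def is_aligned(pa_bulge, pa_disc, err_bulge, err_disc,
               angle_cut=30.0, sigma_cut=3.0):
    """Bulge and disc count as aligned if their PA difference is below
    angle_cut, or below sigma_cut times the combined 1-sigma PA errors
    (the delta-PA significance defined in the text)."""
    dpa = pa_difference(pa_bulge, pa_disc)
    significance = dpa / (err_bulge + err_disc)
    return dpa < angle_cut or significance < sigma_cut

print(pa_difference(175.0, 5.0))         # 10.0: PAs wrap at 180 degrees
print(is_aligned(40.0, 25.0, 5.0, 2.0))  # True: 15 deg < 30 deg
```

Misaligned systems are the complement: PA differences above the angle cut that are also significant given the combined PA errors.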
For these galaxies both bulges and discs show a tendency to perpendicular alignments, as shown in Figure~\ref{TrendsBulgeDiscBothComponents}, and in agreement with the perpendicular trend found for S0 galaxies in Section~\ref{S0 galaxies}. The remaining (30\%) galaxies have bulge and disc significantly misaligned (30\degree$<\Delta{\rm PA}_{\rm bulge - disc}<90$\degree and $\delta{\rm PA}_{\rm bulge - disc}>3$) and no significant tendencies. In conclusion, although these results are affected by the limitations of the bulge-disc decompositions and are based on small samples (see Section~\ref{Caveats}), they show how the study of the bulge and disc spin--filament alignments provides further clues on the tendencies for the various galaxy populations. \begin{table*} \caption{Spin--filament alignments of bulges and discs. Column 1 specifies the galaxy component, while columns 2--10 are the same as in Table~\ref{PAstellResults}.} \makebox[\textwidth][c]{ \begin{tabular}{lcccccc|ccc} \toprule Galaxy Component & Galaxy Property & Selection & $N_{\rm gal}$& <|$\cos\gamma$|> & $p_{\rm K-S}$ & Alignment & \multicolumn{1}{c}{Sample 1} & Sample 2 & $p_{\rm 2\,K-S}$\\ \midrule Bulge & & All & 468& 0.428$\pm$0.013 & \textbf{0.001} & $\perp$ & Bulge & Disc & 0.139 \\ Disc & & All & 516 & 0.459$\pm$0.013 & 0.244 & & & &\\ \midrule Bulge & $\log{(M_{\rm bulge}/M_{\odot})}$ & [7; 10] & 139 & 0.486$\pm$0.025 & \textbf{0.020}& $\perp$ & [7; 10] &[10.5; 12] & 0.403\\ & & [10; 10.5] & 139 & 0.457$\pm$0.024 & 0.506 & & & &\\ & & [10.5; 12] & 161 & 0.383$\pm$0.022 & \textbf{0.004} & $\perp$ & & &\\ & & & & & & & & &\\ & $(V/\sigma)_{\rm bulge}$ & [0; 0.3] & 254 & 0.404$\pm$0.018 & \textbf{0.004}& $\perp$ & [0; 0.3] & [0.8; 2] & \textbf{0.003}\\ & & [0.3; 0.8] & 189 & 0.420$\pm$0.021 & \textbf{0.008} & $\perp$ & & &\\ & & [0.8; 2] & 25 & 0.685$\pm$0.047 & \textbf{0.016} & $\parallel$ & & &\\ \midrule Disc & $\log{(M_{\rm bulge}/M_{\odot})}$ & [7; 8.3] & 21 & 0.685$\pm$0.056 & 
\textbf{0.030}& $\parallel$ & [7; 8.3] & [10.5; 12] & \textbf{0.005}\\ & & [8.3; 10] & 289 & 0.458$\pm$0.018 & 0.329 & & & \\ & & [10; 10.5] & 134 & 0.486$\pm$0.025 & 0.706 & & & \\ & & [10.5; 12] & 45 & 0.348$\pm$0.037 & \textbf{0.025} & $\perp$ & & &\\ & & & & & & & & &\\ & $(V/\sigma)_{\rm disc}$ & [0; 0.4] & 30 & 0.265$\pm$0.056 & \textbf{0.024}& $\perp$ & [0; 0.4] &[1.5; 3] & \textbf{0.007}\\ & & [0.4; 0.6] & 99 & 0.499$\pm$0.028 & 0.915 & & & & \\ & & [0.6; 1.5] & 208 & 0.459$\pm$0.021 & 0.584 & & & & \\ & & [1.5; 3] & 17 & 0.655$\pm$0.069 & \textbf{0.037} & $\parallel$ & & &\\ \midrule Bulge & $\Delta{\rm PA}_{\rm bulge - disc}$ or $\delta{\rm PA}_{\rm bulge - disc}$& <30$\degree$ or <3$\sigma$ & 137 &0.431$\pm$0.024 & \textbf{0.018} & $\perp$ & Bulge & Disc & 0.989\\ Disc & $\Delta{\rm PA}_{\rm bulge - disc}$ or $\delta{\rm PA}_{\rm bulge - disc}$& <30$\degree$ or <3$\sigma$ & 137 &0.410$\pm$0.026 & \textbf{0.022} & $\perp$ & & & \\ \bottomrule \end{tabular} } \label{PAstellResultsBulgeDisc} \end{table*} \subsection{Gas spin--filament alignments} \label{Gas spin--filament alignments} We explore the orientation of galaxy spin axes identified using the gas kinematic PAs in Section~\ref{Gas kinematics} and the method described in Section~\ref{Orientation of the spin axes}. This allows us to study the spin--filament alignments for 180 low-mass SAMI galaxies with $8<\log(M_\star/M_{\odot})<9$ in addition to the 1121 SAMI galaxies with $9<\log(M_\star/M_{\odot})<12$. Figure~\ref{TrendsStellarMassGasPAs} shows the PDFs of the spin--filament alignments for the 1301 SAMI galaxies divided into $M_\star$ ranges. The results from the K-S tests are listed in Table~\ref{PAstellResults} and marked with ${\rm PA}_{\rm gas}$. Galaxies with $8<\log(M_\star/M_{\odot})<9$ tend to show a parallel tendency; however, the distribution is not statistically different from uniform.
Galaxies with $9<\log(M_\star/M_{\odot})<10.2$ are also consistent with a uniform distribution, while the result is statistically significant for the most massive galaxies, which have more perpendicular alignments. The |$\cos\gamma$| distribution for $8<\log(M_\star/M_{\odot})<9$ is significantly different to the one for $10.2<\log(M_\star/M_{\odot})<12$ ($p_{\rm 2\,K-S}=0.022$). These findings are in agreement with those obtained in Section~\ref{Trends with stellar mass, B/T and mass bulge} based on stellar kinematic PAs (see the top panels of Figure~\ref{TrendsGalaxyProperties} and Table~\ref{PAstellResults}). This is expected, since $\sim80$\% of the galaxies have aligned ${\rm PA}_{\rm stars}$ and ${\rm PA}_{\rm gas}$, and we do not detect any correlation between |$\cos\gamma$| and $\Delta{\rm PA}$ in Section~\ref{Correlations with galaxy properties}. Overall, these results further tie spin--filament alignments to accretion. \section{Discussion} \label{Discussion} In this Section we discuss our results with respect to previous work, the tracers of the physical processes regulating spin--filament alignments and possible formation scenarios for galaxies, bulges and discs. Finally, we address the caveats of this study. \subsection{Comparison with previous studies} We find that the mass of the bulge, defined as $M_{\rm bulge}=M_\star\times$ (B/T), is the primary parameter of correlation with the flipping of the spin--filament alignment from parallel to perpendicular: galaxies with lower $M_{\rm bulge}$ tend to have their spin aligned in parallel with respect to the closest filament, while galaxies with higher $M_{\rm bulge}$ show a more perpendicular tendency. This result highlights that neither $M_\star$ alone nor B/T alone can fully unravel spin--filament alignments. Accordingly, \citet{Kraljic2020} found that the correlation of the signal with galaxy morphology cannot be totally explained by its dependence on stellar mass.
The analyses of the spin--filament alignments as a function of $\lambda_e$, stellar age, visual morphology, star formation characteristics, kinematic morphology and local environment show similar trends to those with $M_{\rm bulge}$, although with lower statistical significance (see Table~\ref{PAstellResults}). These galaxy properties are secondary tracers of the spin--filament alignments and their correlation is explained by their dependency on $M_{\rm bulge}$. Overall, our results are in agreement with the previous studies based on shape as a proxy for spin \citep{Tempel2013a,Tempel2013b,Pahwa2016,Hirv2017,Chen2019}, in projection for the SAMI survey \citep{Welker2020} and from 3D spins for the MaNGA survey \citep{Kraljic2021}. We also analyse shape--filament alignments for the 1121 SAMI galaxies and for 735 bulges and discs in Appendix~\ref{shape--filament alignments}, where the spin axes are identified using photometric PAs. We observe results similar to the spin--filament alignments, but with weaker statistical significance. Focusing on 216 S0 galaxies (Section~\ref{S0 galaxies}), we find a preference for perpendicular alignments, with the signal dominated by high-mass galaxies and by galaxies with kinematically misaligned stellar and gas components. \citet{Kraljic2021} found a more perpendicular tendency for low-mass S0 MaNGA galaxies, at odds with our result. The discrepancy might be explained by the different criteria used to select S0 galaxies, underlining the importance of sample selection, as well as by other galaxy properties, such as massive bulges giving rise to more perpendicular alignments. \subsection{Tracers of physical processes} Our findings suggest that the flipping of the spin--filament alignment is most strongly related to the growth of a bulge in the galaxy, traced by $M_{\rm bulge}$.
Mergers have been found to be responsible for both driving the alignment flips \citep{Codis2012,Dubois2014,Welker2014} and causing the formation of the bulge \citep{Sales2012,Wilamn2013}. $M_{\rm bulge}$ is constructed as the product of $M_{\star}$ and B/T, and it is the primary correlation parameter in a statistical sense. Since a combination of two distinct drivers is expected to show an even stronger correlation, $M_{\star}$ and B/T can be identified as distinct physical tracers of the spin--filament alignments, in agreement with theoretical studies \citep{Welker2014,Codis2015,Welker2017,Lagos2018,Kraljic2020}. B/T shows stronger correlations than $M_{\star}$ (see the right panel of Figure~\ref{ExplainedVariance}, Tables~\ref{SpearmanResults} and \ref{PAstellResults}), in agreement with the fact that B/T traces mergers, in particular gas-rich major mergers, which are the driving mechanism of the flipping. However, a residual independent trend is still detected for $M_{\star}$ (see Section~\ref{Correlations with galaxy properties}), which is small due to its dependency on B/T. From a physical perspective, $M_{\star}$ traces the galaxy position in the cosmic web and the overall accretion independently of merger activity, and it brings additional information about the direction of accretion. $M_{\rm bulge}$ has also been identified as a key parameter for tracing star formation quenching mechanisms \citep{Lang2014,Bluck2014,Bluck2022,Dimauro2022}: these works found that quenching processes are most strongly related to the bulge. Velocity dispersion is an even stronger tracer \citep{Bluck2020,Brownson2022,Bluck2022}, suggesting that AGN feedback is the main quenching mechanism, with gas feeding the black hole and feedback preventing gas accretion onto the galaxy.
We find that the spin--filament alignments correlate with the galaxy velocity dispersion estimated within 1\,$R_e$ ($\rho=-$0.08, $p_{\rm S}=0.012$), but $M_{\rm bulge}$ is still the primary parameter. This highlights a different dynamic, where the flipping of the spin--filament alignment is mainly related to the accretion of stars. The result is in agreement with the finding from simulations that galaxies with AGNs show stronger perpendicular tendencies because they tend to be more massive, pressure-supported galaxies \citep{Soussana2020}. We aim to investigate the role of AGNs in spin--filament alignments within the SAMI Galaxy Survey in an upcoming study (Barsanti et al., in preparation). We note that the tendency to perpendicular spin--filament alignments is reproduced for all the galaxy properties, while significant results for parallel alignments are only found for $M_{\rm bulge}$ and B/T (see Table~\ref{PAstellResults}). The perpendicular spin--filament alignment is expected to be the most robust tendency at low redshift \citep{Pahwa2016,Chen2019}, since most galaxies will have undergone mergers of some kind, causing a flip from the strong parallel spin--filament alignment acquired at high redshift \citep{Dubois2014,Laigle2015,Codis2018}. The stronger perpendicular tendencies, especially for central galaxies, might also be due to the fact that $\sim64$\% of our SAMI galaxies belong to galaxy groups, which are high galaxy density environments where the higher incidence of mergers is more likely to cause the flipping (e.g., \citealp{Bett2012,Welker2014}). However, our results stand even when selecting only the 408 isolated galaxies ($M_{\rm bulge}$ is the primary correlation parameter with $\rho=-$0.12, $p_{\rm S}=0.006$), highlighting that filament-related processes are the dominant players in regulating the spin alignments.
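The quoted $\rho$ and $p_{\rm S}$ values come from the Spearman rank test, which measures monotonic correlation between |$\cos\gamma$| and a galaxy property. As a minimal pure-Python sketch of the statistic itself (mock arrays for illustration only; the study relies on standard statistical packages), $\rho$ is the Pearson correlation of the ranks:

```python
def ranks(values):
    """1-based ranks; tied values share the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Mock trend: |cos(gamma)| decreasing with log(M_bulge) gives rho < 0,
# i.e. a shift towards perpendicular alignments at high bulge mass.
log_mbulge = [8.5, 9.0, 9.5, 10.0, 10.5, 11.0]
abs_cos = [0.62, 0.55, 0.50, 0.46, 0.40, 0.34]
print(spearman_rho(log_mbulge, abs_cos))  # -1.0 for this monotonic mock
```

The negative sign of $\rho$ corresponds to the observed flip: higher $M_{\rm bulge}$ goes with lower |$\cos\gamma$|, i.e. more perpendicular alignments.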
\subsection{Formation scenarios for galaxies, bulges and discs} Our findings point to a scenario where bulge-dominated galaxies, showing more perpendicular tendencies, are predominantly assembled through mergers occurring along the filament. Disc-dominated galaxies have more parallel alignments and they are mainly formed from gas accretion. S0 galaxies might belong to different galaxy populations according to $M_\star$ ($p_{\rm 2\,K-S}=0.043$): high-mass S0s show perpendicular alignments and have more de~Vaucouleurs-like bulges, suggesting that they are preferentially formed via mergers. These results are in agreement with the findings of \citet{FraserMcKelvie2018} for the MaNGA survey, who suggested processes such as mergers for the formation of high-mass S0 galaxies and a faded spiral scenario for low-mass S0 galaxies. In Section~\ref{spin--filament alignments of bulges and discs} we explore the separate spin--filament alignments of 468 bulges and 516 discs. To our knowledge, this is the first time that such a study has been conducted observationally. Bulges, especially the more massive or dispersion-dominated ones, which mainly have a de~Vaucouleurs profile, show more perpendicular alignments. Thus, they are more likely to be formed via mergers, in agreement with the expected formation channel for bulges (e.g., \citealp{Barnes1988}). Rotation-dominated bulges, typically having $0<n_{\rm bulge}<2$, show a more parallel tendency and could represent pseudo-bulges formed via secular processes, in agreement with the formation scenario proposed by \citet{Kormendy2004}. They tend to have lower bulge masses than dispersion-dominated bulges, as expected from the dependency of the signal on $M_{\rm bulge}$. This finding highlights how the alignments are influenced by sample selection, and that a larger number of bulges is needed in order to further investigate the impact of kinematics and morphology on their spin--filament alignments.
We also note that strongly rotation-dominated bulges might be misclassified discs due to the degeneracies of the 2D bulge/disc decompositions. We address this caveat in the next Section. Our results for discs are consistent with multiple formation and evolution mechanisms at low redshift. Discs in low-bulge-mass galaxies and rotation-dominated discs have mainly parallel tendencies, suggesting gas accretion as the formation channel. Discs in high-bulge-mass galaxies or with low $(V/\sigma)_{\rm disc}$ values show perpendicular alignments, indicating mergers. Several previous works, particularly in simulations, have shown that the formation and the evolution of the disc are heavily affected by tidal forces, mergers, and in situ instabilities \citep{Ostriker1989,Okamoto2005,RomanoDiaz2009,Scannapieco2009}. These processes can lead to the destruction of the disc and its re-formation at a later stage, and they can also flip the disc spin--filament alignment. The kinematic misalignment between bulges and discs for 59 galaxies might suggest different physical processes acting on the two components (e.g., \citealp{Chilingarian2009}), especially since environmental mechanisms mainly affect discs rather than bulges (e.g., \citealp{Barsanti2021}). However, we do not find significant tendencies for the spin--filament alignments of either component, which are both consistent with uniform distributions. In order to understand this misalignment it is crucial to take into account the limitations of bulge-disc decomposition and the large uncertainties of the bulge PAs (see Section~\ref{Caveats}). Overall, studying separately the spin--filament alignments of bulges and discs provides further clues about the pathways that gave rise to the corresponding galaxies. This outcome is in accord with the conclusion of \citet{Jagvaral2022}, who investigated the intrinsic alignments, i.e.
the tendency of galaxies to coherently align with the density field and produce correlations among galaxy shapes, separately for bulges and discs. Intrinsic alignments might bias weak lensing measurements and the estimate of cosmological parameters. \citet{Jagvaral2022} concluded that the stellar dynamics of the two galaxy components play a significant role in determining the intrinsic alignments. \subsection{Caveats} \label{Caveats} We address the three main caveats to the results obtained in this study: (i) the definitions of bulges and discs, (ii) the 3D modelling used to identify the spin axes and (iii) the small galaxy samples and weak statistical significance. We assume that galaxies are characterised by two components: a central bulge and a surrounding disc. However, bulge/disc decompositions cannot capture the full complexity of galaxies. In particular, the photometric profiles show limitations and degeneracies \citep{Head2015,Fischer2019,Barsanti2021a,Papaderos2022,Sonnenfeld2022}. The disc exponential component is dominated by the galaxy outskirts and is not able to completely capture the central region \citep{Mendez2019b, Breda2020}. The bulge S\'ersic component might wrongly identify parts of discs or bars. Indeed, bulges could be inner parts of discs, since even a late-type galaxy can have $(V/\sigma)<0.8$ for the central region \citep{vandeSande2018}. We explore more deeply the 25 strongly rotation-dominated bulges with $0.8<(V/\sigma)_{\rm bulge}<1$ by visually inspecting the photometric bulge-disc decomposition matched to the kinematic bulge-disc decomposition and by comparing the effective radii from the double-component S\'{e}rsic bulge plus exponential disc fit: 8/25 bulges are suspected discs. Thus, we need to be cautious in interpreting the galaxy components and their formation scenarios. To identify the spin axes we apply the 3D thin-disc approximation, in order to consistently compare results for galaxies, bulges and discs.
Our conclusions do not change if we follow a 2D modelling of the spin--filament alignments as in \citet{Welker2020}, estimating the angle between the kinematic position angle and the projected direction of the filament associated with the galaxy. The mass of the bulge is still found to be the primary parameter of correlation, although with weaker statistical significance ($\rho=-0.07$; $p_{\rm S}=0.020$) with respect to the result from the Spearman test for the 3D thin-disc approximation. The PDFs of the 2D angles as a function of $M_{\rm bulge}$ are shown in Figure~\ref{MassBulgeTrends2D}. The ranges of the PDFs are consistent with those of Figure 6 in \citet{Welker2020}. `Classical' bulges (i.e.\ completely dispersion-dominated bulges) are excluded from our analysis of the bulge spin--filament alignments, since they do not show regular velocity maps and kinematic PAs. However, in a 2D analysis we expect their projected minor axis to be perpendicularly aligned with respect to the closest cosmic filament, in agreement with the previous studies on shape for elliptical galaxies \citep{Tempel2013b,Pahwa2016}. Finally, the spin--filament alignments recovered in this study are relatively weak preferences (even when statistically significant) and based on limited galaxy samples. Bulge and disc tendencies are based on samples of $\sim$100--200 galaxies, with a minimum sub-sample of 17 discs. We note that the observational study of spin--filament alignments is still a challenging field at low redshift due to the intrinsic weakness of the signal \citep{Codis2018}. Thus, our results provide only hints of galaxy evolution mechanisms, and larger samples are required to disentangle internal effects of stellar mass and morphology, to assess the role of the environment, and to take into account effects of sample selection. Larger samples are also needed to confirm our tendencies for the bulges and discs.
Nevertheless, this work highlights that the separate study of the spin--filament alignments for the galaxy components is a powerful tool to constrain their different formation scenarios. \section{Summary and conclusions} \label{Summary and conclusions} We study the alignment of the spin axis of galaxies, bulges, and discs with respect to the orientation of the closest cosmic filament. These analyses shed light on the formation mechanisms of the galaxies and their structural components. We explore the alignments as a function of various galaxy properties, with the goal of understanding which property shows the strongest correlation. We take advantage of the SAMI Galaxy Survey to identify galaxy spin axes based on spatially-resolved stellar kinematics. The deep and highly complete GAMA spectroscopic survey is used to reconstruct the underlying cosmic filaments, implementing the {\sc DisPerSe} structure extractor. We exploit the 2D bulge/disc kinematic decomposition (where possible) to identify the separate spin axes of bulges and discs. Our sample comprises 1121 SAMI galaxies with $\log(M_\star/M_{\odot})>9$, amongst which 468 bulges and 516 discs have reliable measurements. We carry out the analyses of the spin--filament alignments in 3D. Our results are summarised as follows: \begin{enumerate} \item[(i)] Using the Spearman test, we find statistically significant correlations of the galaxy spin--filament angle |$\cos\gamma$| with $M_\star$ ($p_{\rm S}=0.014$), B/T ($p_{\rm S}=10^{-4}$) and $M_{\rm bulge}$ ($p_{\rm S}=10^{-5}$). The strongest correlation is detected for $M_{\rm bulge}$, which on its own explains $\sim$70\% of the variance in |$\cos\gamma$| and can be identified as the primary parameter correlating with spin--filament alignments.
\item[(ii)] Galaxies with lower $M_{\rm bulge}$ tend to have their spins aligned parallel to the closest filament ($p_{\rm K-S}=0.008$), while galaxies with higher $M_{\rm bulge}$ have perpendicular alignments ($p_{\rm K-S}=1\times 10^{-5}$). \item[(iii)] $M_\star$ or B/T alone cannot fully unravel spin--filament alignments. Since their product $M_{\rm bulge}=M_\star\times$ (B/T) shows the strongest correlation, $M_\star$ and B/T can be identified as two distinct physical tracers of spin--filament alignments. \item[(iv)] Other galaxy properties, such as visual morphology, kinematic features, stellar age, star formation activity and local environment, correlate with spin--filament alignment due to their dependency on $M_{\rm bulge}$ and can be identified as secondary tracers. \item[(v)] S0 galaxies have a significant perpendicular alignment ($p_{\rm K-S}=0.007$). The signal is dominated by high-mass S0 galaxies with $10.2<\log(M_\star/M_{\odot})<12$ ($p_{\rm K-S}=8\times10^{-4}$), in agreement with the expected $M_\star$-dependency, and by S0 galaxies with the stellar and gas components misaligned ($p_{\rm K-S}=0.013$). A two-sample K-S test indicates high-mass S0 galaxies have a different spin--filament alignment ($p_{\rm 2\,K-S}=0.043$) and a more de~Vaucouleurs-like bulge ($p=0.002$) relative to low-mass S0 galaxies. \item[(vi)] Analysis of the separate spin--filament alignments of bulge and disc components reveals that bulges tend to be aligned more perpendicularly to the closest filament ($p_{\rm K-S}=0.001$). This tendency is seen for galaxies with both low and high $M_{\rm bulge}$, and for dispersion-dominated bulges, which mostly have de~Vaucouleurs-like profiles. Rotation-dominated bulges tend to have parallel spin--filament alignments and mostly $n_{\rm bulge}<2$.
The discs show a tendency towards parallel alignments for low-$M_{\rm bulge}$ galaxies ($p_{\rm K-S}=0.030$) and for rotation-dominated discs ($p_{\rm K-S}=0.037$), while they have more perpendicular alignments for high-$M_{\rm bulge}$ galaxies ($p_{\rm K-S}=0.025$) and low $(V/\sigma)_{\rm disc}$ values ($p_{\rm K-S}=0.024$). Galaxies with both a bulge and a disc tend to have perpendicular alignments for both components. \item[(vii)] We obtain consistent findings using spatially-resolved gas kinematics (rather than stellar kinematics) for the identification of the galaxy spin axes. Similar results are observed for shape--filament alignments using photometric PAs for galaxies, bulges, and discs. \end{enumerate} In conclusion, we find an observational link between galaxy spin--filament alignments and the growth of the bulge. This link can be explained by mergers, which can cause both the flipping and the assembly of the bulge, as seen in galaxy formation simulations. Both B/T and $M_\star$ are needed to fully unravel spin--filament alignments and, from a physical perspective, they are independent tracers of the physical processes involved: B/T traces the number of mergers a galaxy might have experienced, which are the driving mechanisms of flipping and changing the galaxy angular momentum \citep{Welker2014,Welker2017,Lagos2018}; $M_\star$ traces the galaxy position in the cosmic web, where galaxies need to be very close to filaments to start experiencing numerous mergers along them \citep{Codis2015}. Additional clues about the processes involved in changing the spin alignment relative to the closest cosmic filament can be discovered by studying the separate spin--filament alignments of the galaxy components, such as bulges and discs. This demonstrates that integral field spectroscopy (IFS) tools, such as spatially-resolved stellar kinematic bulge/disc decomposition and gas kinematics, offer powerful information for the study of spin--filament alignments.
However, due to the relatively small number of galaxies involved in the analyses and the weak statistical significance of the results, at present we can only derive suggestive hints in the context of galaxy formation scenarios, although they provide a consistent picture in accord with simulations. Upcoming IFS galaxy surveys, such as the Hector survey \citep{Bryant2020}, will be able to draw stronger conclusions from spin--filament alignments regarding the physical mechanisms leading to the formation of galaxies, bulges, and discs, as well as to constrain the roles of local and global environments in determining galaxy spins. \section*{Acknowledgements} We thank the referee for the constructive report. SB would like to thank Luca Cortese for insightful comments. This research was supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. The SAMI Galaxy Survey is based on observations made at the Anglo-Australian Telescope. The Sydney-AAO Multi-object Integral field spectrograph (SAMI) was developed jointly by the University of Sydney and the Australian Astronomical Observatory, and funded by ARC grants FF0776384 (Bland-Hawthorn) and LE130100198. The SAMI input catalogue is based on data taken from the Sloan Digital Sky Survey, the GAMA Survey and the VST/ATLAS Survey. The SAMI Galaxy Survey is supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013, the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020, and other participating institutions. The SAMI Galaxy Survey website is http://sami-survey.org/. This study uses data provided by AAO Data Central (http://datacentral.org.au/). JJB acknowledges support of an Australian Research Council Future Fellowship (FT180100231).
FDE acknowledges funding through the H2020 ERC Consolidator Grant 683184, the ERC Advanced grant 695671 “QUENCH” and support by the Science and Technology Facilities Council (STFC). JvdS acknowledges support of an Australian Research Council Discovery Early Career Research Award (project number DE200100461) funded by the Australian Government. In this work we use the \href{http://www.python.org}{Python} programming language \citep{vanrossum1995}. We acknowledge the use of {\sc \href{https://pypi.org/project/numpy/}{numpy}} \citep{harris+2020}, {\sc \href{https://pypi.org/project/scipy/}{scipy}} \citep{jones+2001}, {\sc \href{https://pypi.org/project/matplotlib/}{matplotlib}} \citep{hunter2007}, {\sc \href{https://pypi.org/project/astropy/}{astropy}} \citep{astropyco+2013}, {\sc \href{https://pypi.org/project/pyvista/}{pyvista}} \citep{Sullivan2019}, {\sc \href{https://pingouin-stats.org/}{pingouin}} \citep{Vallat2018} and {\sc \href{http://www.star.bris.ac.uk/~mbt/topcat/}{topcat}} \citep{taylor2005}. \section*{Data availability} The SAMI reduced data underlying this article are publicly available at \href{https://docs.datacentral.org.au/sami}{SAMI Data Release 3} \citep{Croom2021}. Ancillary data comes from the \href{http://gama-survey.org}{GAMA Data Release 3} \citep{Baldry2018}. \section*{Supporting information} Supplementary figures are available online. \textbf{Figures S1, S2, S4 and S5}. PDFs and CDFs of the spin--filament alignments for the SAMI galaxies in ranges of $(V/\sigma)_{e}$ and $\lambda_e$, age, divided into group centrals, satellites and isolated galaxies, visual morphology, spectral classification and kinematic morphology. \textbf{Figure S3}. The distribution of $\lambda_e$ and ellipticity for the SAMI galaxies, colour-coded according to $M_{\rm bulge}$. \section*{Author Contribution Statement} SB devised the project and drafted the paper. SO and SC performed kinematic and photometric bulge-disc decompositions, respectively. 
FDE performed MGE fits. SB reconstructed the cosmic web and measured spins for galaxies, bulges and discs. SB, MC and CW contributed to data analyses and interpretation of the results. JJB, SMC, JSL, SNR and JvdS provided key support to all the activities of the SAMI Galaxy Survey ('builder status'). All authors discussed the results and commented on the manuscript. \bibliographystyle{mnras} \bibliography{biblioSAMI} % \appendix \section{Shape--filament alignments} \label{shape--filament alignments} We investigate the shape--filament alignments using the photometric PAs to identify the spin axes of galaxies, bulges, and discs. To find the spin orientation we apply the method in Section~\ref{Orientation of the spin axes}. The galaxy photometric PA is measured for the 1121 SAMI galaxies selected in Section~\ref{Galaxy sample and selection criteria} from the $r$-band single-component S\'{e}rsic model, as presented in Section~\ref{Photometric position angles}. Applying the Spearman test, we find significant correlations of the galaxy shape--filament alignments with $M_{\star}$ ($\rho=-0.06$; $p_{\rm S}=0.038$), B/T ($\rho=-0.07$; $p_{\rm S}=0.026$) and $M_{\rm bulge}$ ($\rho=-0.08$; $p_{\rm S}=0.005$). Figure~\ref{TrendsGalaxyPropertiesPhotometricPA} shows the shape--filament alignments as a function of these galaxy properties. The PDFs tend towards perpendicular alignments for more massive and bulge-dominated galaxies, while a tendency towards parallel alignments is mainly seen for low values of $M_\star$, B/T and $M_{\rm bulge}$. These findings are consistent with the results based on stellar kinematic PAs in Section~\ref{Trends with stellar mass, B/T and mass bulge}, although they show lower statistical significance when compared with a uniform distribution. 
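The statistical machinery used throughout this appendix (Spearman rank correlations against galaxy properties, and K-S tests of the alignment distribution against uniformity) can be reproduced with standard tools. A minimal sketch with synthetic data; all values below are placeholders, not SAMI measurements:

```python
import numpy as np
from scipy.stats import kstest, spearmanr

rng = np.random.default_rng(42)

# Synthetic |cos(angle)| between a spin axis and the closest filament:
# for random 3-D orientations this quantity is uniform on [0, 1].
n_gal = 500
cos_theta = rng.uniform(0.0, 1.0, n_gal)

# K-S test against the uniform distribution (the null hypothesis of no
# preferred alignment); a small p_KS would signal a parallel or
# perpendicular tendency.
ks = kstest(cos_theta, "uniform")

# Spearman rank correlation between the alignment and a galaxy property
# (e.g. log stellar mass); here the property is independent noise.
log_mstar = rng.uniform(9.0, 12.0, n_gal)
rho, p_s = spearmanr(cos_theta, log_mstar)

print(f"p_KS = {ks.pvalue:.3f}, Spearman rho = {rho:+.3f} (p_S = {p_s:.3f})")
```

With uncorrelated synthetic data both p-values are typically large; the signals reported in this work correspond to $p\lesssim0.05$.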
A similar result has been found by \citet{Kraljic2021}: when comparing the results for the spin--filament alignments with those based on photometric PAs for the MaNGA survey, they observed qualitatively the same trends and weaker statistical significance for the latter. The photometric PAs for the bulges and the discs are estimated for 735 SAMI galaxies from the $r$-band double-component S\'{e}rsic bulge plus exponential disc model, as described in Section~\ref{Photometric position angles}. We select only galaxies that are best fitted by this model in order to measure reliable photometric PAs for the two galaxy components. Figure~\ref{TrendsBulgeDiscPhotometricPA} shows the PDFs for the shape--filament alignments of the bulges and the discs, also as a function of $M_{\rm bulge}$. Bulges show perpendicular tendencies, with a significant result for $10.5<\log{(M_{\rm bulge}/M_{\odot})}<12$ ($p_{\rm K-S}=0.042$). Discs tend to be aligned in parallel for lowest-$M_{\rm bulge}$ and perpendicularly for highest-$M_{\rm bulge}$ ($p_{\rm 2\,K-S}=0.025$). Overall, the results are consistent with the bulge and disc spin--filament alignments shown in Section~\ref{spin--filament alignments of bulges and discs}, but with lower statistical significance. Further useful constraints on shape--filament alignments for galaxies, bulges and discs could be obtained by exploiting the whole GAMA survey, and not just the limited SAMI sample used here for comparison with the spin--filament alignments. \bsp % \label{lastpage}
Title: Sensors and Actuators for the Advanced LIGO+ Upgrade
Abstract: As part of the Advanced LIGO+ (A+) project we have developed, produced, and characterised sensors and electronics to interrogate new optical suspensions. The central element is a displacement sensor with an integrated electromagnetic actuator known as a BOSEM and its readout and drive electronics required to integrate them into LIGO's control and data system. In this paper we report on improvements to the sensors and testing procedures undertaken to meet enhanced performance requirements set out by the A+ upgrade to the detectors. The best devices reach a noise level of $4.5\times 10^{-11}{\rm m}/\sqrt{\rm Hz}$ at a measurement frequency of 1 Hz.
https://export.arxiv.org/pdf/2208.00798
\title{Sensors and Actuators for the Advanced LIGO+ Upgrade} \author{S J Cooper} \bham \author{C M Mow-Lowry} \email{Now at the Vrije Universiteit Amsterdam} \bham \author{D Hoyland} \bham \author{J Bryant} \bham \author{A Ubhi} \bham \author{J O’Dell} \ral \author{A Huddart} \ral \author{S Aston} \Livingston \author{A Vecchio} \email{av@star.sr.bham.ac.uk} \bham \date{\today} \section{Introduction} The Advanced LIGO~\cite{LIGO03} and Virgo~\cite{acernese2021calibration} gravitational-wave interferometers have opened explorations of the gravitational-wave sky, with the observation of the coalescence of stellar-mass binary black holes~\cite{gw150914, O1-BBH, gwtc-1, gwtc-2, gwtc-2.1, gwtc-3}, binary neutron stars~\cite{gw170817, gw170817-MM, gw190425} and, more recently, neutron star-black hole systems~\cite{NS-BH-O3}. The LIGO and Virgo detectors are being upgraded to what is known as the `Advanced$+$' (A$+$, hereafter) configuration. Installation of some of the hardware has already begun; the upgrade will be completed after the fourth observing run (O4), scheduled to start in 2022, in advance of the fifth observing run in 2025~\cite{ObservingScenario}. The higher sensitivity achieved through this upgrade will correspond to an increase of the Universe's observable volume by a factor $\approx 5$ with respect to the one probed by the existing instruments, resulting in an equal increase in detection rate. For example, based on current estimates of the merger rate of populations of stellar-mass binary systems~\cite{gwtc-3-rates-pop} and the expected instrument performance in the A$+$ configuration, the detectors will observe several such binary coalescences every week, \textit{e.g.}~\cite{PhysRevD.91.062005}. 
LIGO A$+$ will achieve this increase in observing range by upgrading several sub-systems of the detectors: new test-masses with reduced coating thermal noise \cite{abernathy2021exploration}, frequency-dependent squeezing \cite{PhysRevLett.124.171102}, replacement of the central beam-splitter with a larger optic, and installation of balanced homodyne detection \cite{Fritschel:14} to further reduce quantum noise. These upgrades will require several new multi-stage suspension systems for isolating and controlling the new optical elements. Each suspension requires a set of low-noise sensors and actuators, with associated electronics, to actively damp resonances and steer the laser beam. One example of a new suspension, shown in Fig.~\ref{fig:hrts}, is the `HAM Relay Triple Suspension', an evolution of existing suspension designs that is lighter and easier to assemble. In total, 200 BOSEMs -- ``Birmingham Optical Sensor and Electro-Magnetic actuator'' -- have been produced, along with 82 coil-driver units and 44 `satellite amplifiers' for reading out the photocurrent. This paper updates the design produced for Advanced LIGO \cite{Carbone_2012, Aston_2012} to fulfil existing and enhanced sensitivity requirements for A+. A noise budget with the dominant terms is provided, with reference to the exhaustive selection procedure required. More than half of the BOSEMs reach a new `Enhanced' performance requirement. The best Enhanced BOSEMs are now dominated by quantum shot noise across the whole measurement band, a major performance improvement. \section{BOSEM design} BOSEMs are compact, ultra-high-vacuum compliant, non-contact, low-noise position sensors with integrated electromagnetic actuators. They have a long history of development, starting with studies on the initial LIGO OSEMs by Fritschel \cite{fritschelOSEM}, which were upgraded to AOSEMs by Abbott \cite{abbott2009advanced} and modified by Carbone \cite{Carbone_2012} to produce the Advanced LIGO BOSEMs. 
A table of key parameters for each BOSEM can be found in Table \ref{tab:bosemParam}. Each BOSEM unit comprises a number of elements: sensing, actuation, and alignment. An exploded CAD model of the BOSEM is shown in Fig.\,\ref{fig:explodedBosem}, with key features of the sensor labelled. The optical readout is based on a shadow sensing scheme where an opaque `flag', rigidly mounted to the measurement surface \cite{BOSEMflag}, partly blocks 935\,nm light emitted from an Infra-Red LED (IRLED, Optek OP132) before it is sensed by a photodiode (PD, Osram BPX65). The choice of wavelength ensures high quantum efficiency from the silicon photodiode and negligible emission at 1064\,nm, the wavelength of the main science laser. A lens is installed after the LED to produce a narrower beam and smaller spot on the photodiode, improving both the linearity and sensitivity of the BOSEM. \begin{table}[] \caption{Key parameters for a BOSEM.} \label{tab:bosemParam} \begin{tabular}{|c|c|} \hline Parameter & Value \\ \hline \multicolumn{2}{|c|}{Coil} \\ \hline Turns & 800 \\ Winding sense & \begin{tabular}[c]{@{}c@{}}Clockwise when viewed \\ from the rear face\end{tabular} \\ Inductance & $14.7\pm 0.7$\,mH \\ Resistance & $37.6 \pm 2\,\Omega$ \\ Length & 8\,mm \\ Inner coil diameter & 17.8\,mm \\ Maximum current & 150\,mA \\ Breakdown (to coil former) & \textgreater 200\,V \\ \hline \multicolumn{2}{|c|}{Sensor} \\ \hline Mass & 158\,g \\ Linear range (typ.) & 0.7\,mm \\ Operating LED current & 35\,mA \\ Photodiode bias & 10\,V typ. (50\,V max.) 
\\ \begin{tabular}[c]{@{}c@{}}Photocurrent, open-light \end{tabular} & 45-80\,$\mu$A \\ Current transfer ratio & $0.19 \pm 0.04\%$ \\ Average sensitivity & 20.25\,kV/m \\ Electrical connector & \begin{tabular}[c]{@{}c@{}}Glenair micro-D \\ MR7590-9P-1BSN-MC225\end{tabular} \\ Mechanical Interface & \begin{tabular}[c]{@{}c@{}} 4x 8/32UNC thru-holes on \\ 40.64\,mm (1.3'') square grid\end{tabular} \\ Operating temperature & $22\pm2\,^\circ$C \\ \begin{tabular}[c]{@{}c@{}}Storage temperature\\ (ambient pressure)\end{tabular} & 10 to 120\,$^\circ$C \\ \hline \multicolumn{2}{|c|}{Sensor Noise} \\ \hline \begin{tabular}[c]{@{}c@{}} Standard \\ 1-10\,Hz \\ 10-20\,Hz\end{tabular} & \begin{tabular}[c]{@{}c@{}} \\ 3x10$^{-10}$\,m/$\sqrt{\rm Hz}$\\ 1x10$^{-10}$\,m/$\sqrt{\rm Hz}$\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}} Enhanced \\ 1-20\,Hz\end{tabular} & \begin{tabular}[c]{@{}c@{}} \\ \textless\,0.75x10$^{-10}$\,m/$\sqrt{\rm Hz}$ \end{tabular}\\ \hline \end{tabular} \end{table} To meet LIGO's stringent vacuum requirements, the BOSEMs must go through a multi-stage cleaning process. In the original production run toluene was used in the cleaning process to remove paraffin from the coil wire \cite{wireClean}. Following the detection of toluene residue in residual gas analyser (RGA) scans during the production process, the coil wire was switched (from MWS Wire 32 HML to MWS Wire 32 HML Natural) to ensure the production process was paraffin-free. The updated cleaning process can be found in \cite{BOSEMClean}. \section{Performance analysis} The noise budget of a BOSEM is shown in Fig.~\ref{fig:bosemNB}. The noise sources expected to dominate a typical system are the shot noise and the photodiode dark noise. 
The shot noise, shown in dark blue, is modelled and projected to an equivalent displacement using \begin{equation} SN = \frac{2R}{K} \sqrt{2eI} \end{equation} where $e$ is the electronic charge, $I$ is the photocurrent at the operating point (the `half-light' current), $R$ is the transimpedance gain of 121\,k$\Omega$, and $K$ is the sensitivity of the BOSEM, which varies depending on the photocurrent but averages 20.25\,kV/m. The leading factor of 2 accounts for the differential output gain of the satellite amplifier. The photodiode dark noise, shown in red, was measured by switching off the IRLED and measuring the output of the satellite amplifier \cite{Carbone_2012}. The total modelled noise trace, shown in purple, is obtained by summing the shot noise and dark noise in quadrature. The noise model is compared with the spectrum of a typical BOSEM `open light test', where there is no flag inserted in the beam, shown in light blue. Since the open light test has a factor of 2 more photocurrent than normal half-light operation, to calibrate it into metres we divide by a factor of $\sqrt{2}$ before using the volts-to-metres conversion factor. This correctly scales the shot noise for the photocurrent expected under normal operating conditions. The noise floor of our signal analyser is shown for reference in yellow. The whitened ADC noise of LIGO's Control and Data System is shown in green; at frequencies lower than 1\,Hz we expect BOSEMs installed at LIGO to be dominated by this noise source. There is a clear discrepancy between the noise model and the measurement of a typical BOSEM. Two contributions have been identified. First, the photodiode dark noise varies from unit to unit, presumably due to defects in individual photodiodes. Repeated measurements have shown that this noise never exceeds the requirements and is sub-dominant in almost all BOSEM assemblies. 
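As a sanity check, the shot-noise formula can be evaluated with the representative values from Table \ref{tab:bosemParam}; the half-light photocurrent below is an assumed mid-range value (half of a typical open-light reading), not a measured figure:

```python
import math

e = 1.602e-19        # electronic charge [C]
I_half = 30e-6       # assumed half-light photocurrent [A] (open-light: 45-80 uA)
R = 121e3            # transimpedance gain [V/A]
K = 20250.0          # average sensitivity [V/m]

# SN = (2R/K) * sqrt(2 e I): photocurrent shot noise -> volts via R
# (x2 for the differential satellite-amplifier output) -> metres via K.
i_shot = math.sqrt(2.0 * e * I_half)   # [A/sqrt(Hz)]
SN = 2.0 * R * i_shot / K              # [m/sqrt(Hz)]

print(f"shot-noise-equivalent displacement: {SN:.2e} m/sqrt(Hz)")
# -> about 3.7e-11 m/sqrt(Hz)
```

The result, a few times $10^{-11}$\,m$/\sqrt{\rm Hz}$, is consistent with the $4.5\times10^{-11}$\,m$/\sqrt{\rm Hz}$ quoted for the best shot-noise-limited devices.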
Second, the IRLEDs show significant excess intensity fluctuations, and this is the dominant source of excess noise. This excess noise was identified in the original Advanced LIGO BOSEM production run and a screening process was implemented to select IRLEDs that comply with the noise requirements \cite{AstonPhD}. For the A+ upgrade we conducted an extensive review of different IRLEDs, but no model was shown to consistently meet the noise requirements. Instead, the Advanced LIGO screening process was improved, most importantly by lengthening the initial `burn in' from 50 hours to 168 hours, as detailed in \cite{BOSEMscreen}. To capitalise on the effort involved in screening every IRLED, we identified units with especially good noise performance between 1\,Hz and 20\,Hz. If these units met a new `Enhanced' noise requirement they were separated and tagged for use in critical locations. They do not require any change to the existing signal chain. Figure \ref{fig:bosemASD} shows the noise spectra of a Standard BOSEM and an Enhanced BOSEM, along with their respective requirements. The best of the Enhanced BOSEMs are dominated by shot noise across the whole measurement band. Further improvements can come from reducing the measurement range (increasing sensitivity), as in the Differential-OSEM sensors \cite{DOSEM}, or by using alternative technology such as fringe-counting interferometers \cite{HoQI}, which have improved resolution without sacrificing operating range. \section{Conclusions} As part of the Advanced LIGO+ upgrade the University of Birmingham has provided over 200 BOSEM sensors and actuators along with their associated driving electronics. Over half of the BOSEMs meet the `Enhanced' specification. The best units have a resolution of 4.5$\times 10^{-11}\,$m$/\sqrt{\rm Hz}$ all the way down to 0.1\,Hz and these devices represent the ultimate performance of BOSEMs in their current form. 
\acknowledgments We thank Jeff Kissel for his useful comments on the manuscript. This work is supported by the UK's Science and Technology Facilities Council through Grant No. ST/S00243X/1. A.V. acknowledges the support of the Royal Society and Wolfson Foundation. The authors gratefully acknowledge the support of the United States NSF for the construction and operation of the LIGO Laboratory and Advanced LIGO. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the United States National Science Foundation (NSF), and operates under cooperative agreement PHY–1764464. Advanced LIGO was built under award PHY–0823459. \bibliographystyle{unsrt} \bibliography{refs}
Title: Pole Inflation in Supergravity
Abstract: We show how we can implement within Supergravity chaotic inflation in the presence of a pole of order one or two in the kinetic mixing of the inflaton sector. This pole arises due to the selected logarithmic Kahler potentials K, which parameterize hyperbolic manifolds with scalar curvature related to the coefficient -N of the logarithmic term. The associated superpotential W exhibits the same R charge with the inflaton-accompanying superfield and includes all the allowed terms. The role of the inflaton can be played by a gauge singlet or non-singlet superfield. Models with one logarithmic term in K for the inflaton, require N=2 and some tuning, of the order of 10^-5, between the terms of W and predict a tensor-to-scalar ratio r at the level of 0.001. The tuning can be totally eluded for more structured K's, with N values increasing with r and spectral index close or even equal to its present central observational value.
https://export.arxiv.org/pdf/2208.11757
\section{Introduction} Among the many scenarios of inflation, the one which stands out in terms of its simplicity, elegance and phenomenological success is \emph{chaotic inflation} ({\sf\small CI}). Most notably, the power-law potentials employed in models of CI have the forms \beq V_{\rm I}=\ld^2\sg^n/n\>\>\> \mbox{or}\>\>\>V_{\rm I}=\ld^2(\sg^{n/2}-M^2)^2/n\>\>\>\mbox{for}\>\>\>M\ll\mP=1,\label{vn}\eeq which are very common in physics, and so it is easy to identify the inflaton $\sg$ with a field already present in the theory. E.g., within \emph{Higgs inflation} ({\sf\small HI}) the inflaton could play, at the end of inflation, the role of a Higgs field. However, for $n=2$ and $4$ the theoretically derived values for the spectral index $\ns$ and/or the tensor-to-scalar ratio $r$ are not consistent with the observational ones \cite{plin}. A way out of these inconsistencies is to introduce some non-minimality in the gravitational or the kinetic sector of the theory. In this talk, which is based on Refs.~\cite{sor,epole}, we focus on the latter possibility. Namely, our proposal is tied to the introduction of a pole in the kinetic term of the inflaton field. For this reason we call it for short \emph{Pole} (chaotic) \emph{inflation} ({\sf\small PI}) \cite{terada}. Below, we first briefly review the basic ingredients of PI in a non-\emph{Supersymmetric} ({\sf\small SUSY}) framework (Sec.~\ref{Fhi}) and constrain the parameters of two typical models in Sec.~\ref{nmi}, taking into account the observational requirements described in \Sref{obs}. Throughout the text, the subscript $,\chi$ denotes derivation \emph{with respect to} ({\sf\small w.r.t}) the field $\chi$, charge conjugation is denoted by a star ($^*$) and we use units where the reduced Planck scale $\mP = 2.44\cdot 10^{18}~\GeV$ is set equal to unity. 
\subsection{Non-SUSY Set-up}\label{Fhi} The Lagrangian of the homogeneous inflaton field $\sg=\sg(t)$ with a kinetic mixing takes the form \beq \label{action1} {\cal L} = \sqrt{-\mathfrak{g}} \left(\frac{\fkk}{2\fk^2} \dot\sg^2- \Vhi(\sg)\right)~~\mbox{with}~~\fk=1-\sg^p,~p>0~~\mbox{and}~~\fkk>0.\eeq Also, $\mathfrak{g}$ is the determinant of the background Friedmann-Robertson-Walker metric $g^{\mu\nu}$ with signature $(+,-,-,-)$ and the dot stands for derivation w.r.t the cosmic time. Concentrating on integer $p$ values, we can derive the canonically normalized field, $\se$, as follows \beq \label{VJe} \frac{d\se}{d\phi}=J=\frac{\sqrt{\fkk}}{\fk}~~\Rightarrow~~\se=\frac{\sqrt{\fkk}}{p}B(\sg^p;1/p,0), \eeq where $B(z;m,l)$ represents the incomplete Beta function. Note that $\se$ increases above unity for $p<10$ and $0\leq\sg\lesssim1$, thereby facilitating the attainment of PI with \sub\ $\sg$ values. Inverting this function we obtain, e.g., \beq \label{sesg} \sg= \begin{cases} 1-e^{-\se/\sqrt{N_1}} &\mbox{for}~~p=1,\\ \tanh{\lf\frac{\se}{\sqrt{N_2}}\rg} &\mbox{for}~~p=2\,. \end{cases}\eeq As a consequence, \Eref{action1} can be brought into the form \beq {\cal L} = \sqrt{-\mathfrak{g}} \left( \frac12\dot\se^2-\Vhi(\sg(\se))\right). \label{action} \eeq For $\se\gg1$, $V_{\rm I}(\se)$ -- expressed in terms of $\se$ -- develops a plateau, and so it becomes suitable for driving inflation of chaotic type, called \emph{E-Model Inflation} \cite{alinde, linde21} (or $\alpha$-Starobinsky model \cite{ellis21}) and \emph{T-Model Inflation} \cite{tmodel, linde21} for $p=1$ and $2$ respectively. \subsection{Inflationary Observables -- Constraints} \label{obs} The analysis of PI can be performed using the standard slow-roll approximation as analyzed below, together with the relevant observational and theoretical requirements that should be imposed. 
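Before imposing these requirements, the field redefinitions of \Eref{sesg} can be sanity-checked numerically: integrating $d\se/d\sg=\sqrt{N_p}/(1-\sg^p)$ gives $\se=-\sqrt{N_1}\ln(1-\sg)$ for $p=1$ and $\se=\sqrt{N_2}\,{\rm artanh}(\sg)$ for $p=2$, whose inverses are exactly the expressions quoted above. A quick sketch (the values of $N$ and $\sg$ are arbitrary):

```python
import math

def sigma_hat(sigma, N, p):
    """Canonical field for f_k = 1 - sigma**p (closed forms for p = 1, 2)."""
    if p == 1:
        return -math.sqrt(N) * math.log(1.0 - sigma)   # E-model
    if p == 2:
        return math.sqrt(N) * math.atanh(sigma)        # T-model
    raise ValueError("closed form implemented for p = 1, 2 only")

N, sigma = 2.0, 0.73
# p = 1: sigma = 1 - exp(-sigma_hat/sqrt(N))
se1 = sigma_hat(sigma, N, 1)
assert abs((1.0 - math.exp(-se1 / math.sqrt(N))) - sigma) < 1e-12
# p = 2: sigma = tanh(sigma_hat/sqrt(N))
se2 = sigma_hat(sigma, N, 2)
assert abs(math.tanh(se2 / math.sqrt(N)) - sigma) < 1e-12
```

In both cases $\se$ diverges as $\sg\to1$, which is the stretching of the pole region that produces the inflationary plateau.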
\subparagraph{\bf (a)} The number of e-foldings $\Ns$ that the scale $\ks=0.05/{\rm Mpc}$ experiences during PI must be enough for the resolution of the problems of the standard Big Bang, i.e., \cite{plcp} \begin{equation} \label{Nhi} \Ns=\int_{\sef}^{\sex} d\se\frac{\Vhi}{\Ve_{\rm I,\se}}\simeq61.3+\frac{1-3w_{\rm rh}}{12(1+w_{\rm rh})}\ln\frac{\pi^2g_{\rm rh*}\Trh^4}{30\Vhi(\sgf)}+\frac14\ln{\Vhi(\sgx)^2\over g_{\rm rh*}^{1/3}\Vhi(\sgf)},\eeq where $\sex$ is the value of $\se$ when $\ks$ crosses the inflationary horizon whereas $\se_{\rm f}$ is the value of $\se$ at the end of PI, which can be found, in the slow-roll approximation, from the condition \beqs\beq\mbox{\sf\small max}\{\epsilon(\sg_{\rm f}),|\eta(\sg_{\rm f})|\}=1,\>\>\>~\mbox{where}\>\>\>\epsilon= {1\over2}\left(\frac{\Ve_{\rm I,\se}}{\Ve_{\rm I}}\right)^2\>\>\>\mbox{and}\>\>\>\eta= \frac{\Ve_{\rm I,\se\se}}{\Ve_{\rm I}}\,.\label{sr}\eeq We also assume that PI is followed in turn by an oscillatory phase with mean equation-of-state parameter $w_{\rm rh}$, radiation and matter domination. We determine it by applying the formula \cite{epole} \beq w_{\rm rh}=2\frac{\int_{\sgn}^{\sgm} d\sg J(1- \Vhi/\Vhi(\sgm))^{1/2}}{\int_{\sgn}^{\sgm} d\sg J(1- \Vhi/\Vhi(\sgm))^{-1/2}}-1,\label{wrh}\eeq\eeqs where $\sgn=\vev{\sg}$ is the \emph{vacuum expectation value} ({\small\sf v.e.v}) of $\sg$ after PI and $\sgm$ is the amplitude of the $\sg$ oscillations \cite{epole}. Motivated by implementations \cite{univ} of non-thermal leptogenesis, which may follow PI, we set $\Trh\simeq10^9~\GeV$ for the reheat temperature. Indicative values for the energy-density effective number of degrees of freedom include $g_{\rm rh*}=106.75$ or $228.75$, corresponding to the \emph{Standard Model} ({\small\sf SM}) or \emph{Minimal SUSY SM} ({\small\sf MSSM}) spectrum respectively. 
\subparagraph{\bf (b)} The amplitude $\As$ of the power spectrum of the curvature perturbations generated by $\sg$ at $\ks$ has to be consistent with data~\cite{plcp}, i.e., \begin{equation} \label{Prob} A_{\rm s}=\frac{\Ve_{\rm I}(\sex)^{3}}{12\, \pi^2\,\Ve_{\rm I,\se}(\sex)^2} \simeq2.105\cdot 10^{-9}\,. \end{equation} \subparagraph{\bf (c)} The remaining inflationary observables ($\ns$, its running $\as$ and $r$) have to be consistent with the latest \emph{Planck release 4} ({\sf\small PR4}), \emph{Baryon Acoustic Oscillations} ({\sf\small BAO}), CMB-lensing and BICEP/{\it Keck} ({\sf\small BK18}) data \cite{plin,gws}, i.e., \begin{equation} \label{nswmap} \mbox{\sf (i)}\>\>\ns=0.965\pm0.009\>\>\>~\mbox{and}\>\>\>~\mbox{\sf (ii)}\>\>r\leq0.032, \end{equation} at 95$\%$ \emph{confidence level} ({\sf c.l.}) -- pertaining to the $\Lambda$CDM$+r$ framework with $|\as|\ll0.01$. These observables are estimated through the relations \beq\label{ns} \mbox{\sf (i)}\>\>\ns=\: 1-6\eph_\star\ +\ 2\ith_\star,\>\>\>\mbox{\sf (ii)}\>\> \as =\frac23\left(4\ith^2-(\ns-1)^2\right)-2\what\xi_\star\>\>\>~ \mbox{and}\>\>\>~\mbox{\sf (iii)}\>\>r=16\eph_\star\,, \eeq with $\xi={\Ve_{\rm I,\se} \Ve_{\rm I,\se\se\se}/\Ve_{\rm I}^2}$ -- the variables with subscript $\star$ are evaluated at $\sg=\sgx$. \subparagraph{\bf (d)} The effective theory describing PI has to remain valid up to a UV cutoff scale $\Qef\simeq\mP$ to ensure the stability of our inflationary solutions, i.e., \beq \label{uv}\mbox{\sf (i)}\>\> \Vhi(\sgx)^{1/4}\leq\Qef \>\>\>~\mbox{and}\>\>\>~\mbox{\sf (ii)}\>\>\sgx\leq\Qef.\eeq \subsection{Results}\label{nmi} Using the criteria of \Sref{obs}, we can now analyze the inflationary models based on the potential in \Eref{vn} and the kinetic mixing in \Eref{action1} for $p=1$ and $2$. 
The slow-roll parameters are \beq\epsilon=\frac{n^2\fk^2}{2\fkk\sg^2}\>\>\mbox{and}\>\>\eta=\frac{n\fk}{\fkk\sg^2}\lf n-1-(n+p-1)\sg^p\rg,\label{sr0}\eeq whereas from \Eref{Nhi} we can compute \beq \label{Ns0} \Ns\simeq\begin{cases}N_1\lf \sgx+f_{1\star}\ln f_{1\star}\rg/n f_{1\star} & \mbox{for}~~p=1,\\ N_2\sgx^2/2n f_{2\star} &\mbox{for}~~p=2, \end{cases}\eeq where $f_{p\star}=f_p(\sgx)$. Since $f_{p\star}$ appears in the denominator, $\Ns$ increases drastically as $\sgx$ approaches unity, thereby assuring the achievement of efficient PI. The relevant tuning can be quantified by defining the quantity \beq \Dex=1 - \sgx.\label{dex}\eeq The naturalness of the attainment of PI increases with $\Dex$. Imposing the condition of \Eref{sr} and solving \Eref{Ns0} w.r.t $\sgx$, we arrive at \beq \label{sgx0} \sgf\ll\sgx\simeq\begin{cases} n\Ns/(n\Ns+N_1)& \mbox{for}~~p=1,\\ \sqrt{2n\Ns/(2n\Ns+N_2)}&\mbox{for}~~p=2, \end{cases}\eeq where we neglect the logarithmic contribution in the first of the relations in \Eref{Ns0}. We remark that PI is attained for $\sg<1$ -- and so \Eref{uv} is fulfilled -- thanks to the location of the pole at $\sg=1$. On the other hand, \Eref{Prob} implies \beq \label{lan0} \ld\simeq\lf{\sqrt{3nN\As}\pi/\Ns}\rg\begin{cases} 2& \mbox{for}~~p=1,\\ 1&\mbox{for}~~p=2\,. \end{cases}\eeq From \Eref{ns} we obtain the model's predictions, i.e., \beq \label{ns0} \ns\simeq1-{2/\Ns}, \>\>\> \as \simeq-{2/\Ns^2}\>\>\> \mbox{and}\>\>\> r\simeq\begin{cases}8N_1/\Ns^2& \mbox{for}~~p=1,\\ 2N_2/\Ns^2 &\mbox{for}~~p=2\,,\end{cases} \eeq which are independent of $n$ and for this reason these models are called $N$-attractors \cite{tmodel, alinde, linde21, ellis21}. However, the variation of $n$ in \Eref{vn} generates a variation in $\wrh$ in \Eref{wrh} and, via \Eref{Nhi}, in $\Ns$, which slightly distinguishes the predictions above. 
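The closed form in \Eref{Ns0} can be cross-checked against a direct numerical evaluation of $\Ns=\int J^2\,\Vhi/\Vhi'\,d\sg$. A minimal sketch for $p=1$, $n=2$, $N_1=10$, taking $\sgx=0.926$ (i.e. $\Dex\simeq0.074$, as in the example quoted below) and the end of inflation from the slow-roll condition $\epsilon(\sg_{\rm f})=1$ with $\epsilon=n^2f_k^2/2N_1\sg^2$:

```python
import math

n, N1, p = 2, 10.0, 1
sg_star = 0.926                          # Delta_* = 1 - sg_star ~ 0.074

def f(sg):                               # f_k = 1 - sigma**p
    return 1.0 - sg**p

# Closed form: N_* ~ N1 (sg_* + f_* ln f_*) / (n f_*)
f_star = f(sg_star)
Ns_closed = N1 * (sg_star + f_star * math.log(f_star)) / (n * f_star)

# Direct integral: N_* = int J^2 V/V' dsg = int (N1/f^2)(sg/n) dsg,
# from sg_f (where eps = n^2 f^2 / (2 N1 sg^2) = 1) to sg_*.
sg_f = 2.0 / (2.0 + math.sqrt(2.0 * N1))   # solves n f = sqrt(2 N1) sg for n=2, p=1
steps = 200000
h = (sg_star - sg_f) / steps
Ns_num = sum((N1 / f(sg_f + (k + 0.5) * h)**2) * (sg_f + (k + 0.5) * h) / n
             for k in range(steps)) * h   # midpoint rule

print(f"closed form: {Ns_closed:.1f}, numerical: {Ns_num:.1f}")
```

Both evaluations give $\Ns\approx49$, within about one per cent of each other and in line with the $\Ns\simeq49.4$ obtained for $n=2$, $p=1$ in the example below.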
E.g., fixing $N_1=10$ we obtain \beqs\beq \wrh\simeq\begin{cases}-0.08,\\ ~~~0.19,\end{cases}\Ns\simeq\begin{cases}49.4,\\ 54.6,\end{cases}\Dex\simeq\begin{cases}0.074,\\ 0.04,\end{cases}\ns\simeq\begin{cases}0.963\\ 0.965\end{cases}r\simeq0.02 ~~\mbox{for}~~~ n=\begin{cases}2,\\ 4\end{cases}\label{nsn1}\eeq and $p=1$ with $\as\sim 10^{-4}$. Similar $\as$ values are obtained setting $N_2=10$ and $p=2$, which yields \beq \wrh\simeq\begin{cases}-0.04,\\ ~~~0.23,\end{cases}\Ns\simeq\begin{cases}50.2,\\ 54.6,\end{cases}\Dex\simeq\begin{cases}0.024,\\ 0.01,\end{cases} \ns\simeq\begin{cases}0.962,\\ 0.963,\end{cases}r\simeq\begin{cases}0.0074,\\ 0.0064,\end{cases}\mbox{for}~~n=\begin{cases}2.\\ 4.\end{cases}\label{nsn2}\eeq\eeqs Notice that $\Dex$ is larger for $p=1$. Imposing the bound on $r$ in \Eref{nswmap}, we can find a robust upper bound on $N_p$. Namely, we find numerically \beq N_1\lesssim 19 ~~\mbox{and}~~ N_2\lesssim 55. \label{N12b}\eeq Therefore, we can conclude that the presence of $\fk$ in \Eref{action1} revitalizes CI, rendering it fully consistent with the present data in \Eref{nswmap} without introducing any complication with the validity of the effective theory. Recall \cite{corfu12} that the latter problem plagues models of CI with a large non-minimal coupling to gravity for $n>2$. 
\section{Realization of PI Within SUGRA}\label{fhim} We start our investigation by presenting the basic formulation of scalar theory within SUGRA in \Sref{sugra1}, and then -- in \Sref{sugra2} -- we outline our strategy for constructing viable models of PI. \subsection{General Set-up} \label{sugra1} The part of the SUGRA Lagrangian including the (complex) scalar fields $Z^\al$ can be written as \beqs \beq\label{Saction1} {\cal L} = \sqrt{-\mathfrak{g}} \lf K_{\al\bbet} D_\mu Z^\al D^\mu Z^{*\bbet}-V_{\rm SUGRA}\rg, \eeq where the kinetic mixing is controlled by the K\"ahler potential $K$ and the relevant metric, defined as \beq \label{kddef} K_{\al\bbet}={\Khi_{,Z^\al Z^{*\bbet}}}>0\>\>\>\mbox{with}\>\>\>K^{\bbet\al}K_{\al\bar \gamma}=\delta^\bbet_{\bar \gamma}.\eeq Also, the covariant derivatives for the scalar fields $Z^\al$ are given by \beq D_\mu Z^\al=\partial_\mu Z^\al+ig A^{\rm a}_\mu T^{\rm a}_{\al\bt} Z^\bt\eeq with $A^{\rm a}_\mu$ being the vector gauge fields, $g$ the (unified) gauge coupling constant and $T^{\rm a}$ with ${\rm a}=1,...,\mbox{\sf\small dim}\Ggut$ the generators of a gauge group $\Ggut$. Here and henceforth, the scalar components of the various superfields are denoted by the same superfield symbol. 
The SUGRA scalar potential, $V_{\rm SUGRA}$, is given in terms of $K$ and the superpotential, $W$, by \beq V_{\rm SUGRA}=\Ve_{\rm F}+ \Ve_{\rm D}\>\>\>\mbox{with}\>\>\> \Ve_{\rm F}=e^{\Khi}\left(K^{\al\bbet}{\rm F}_\al {\rm F}^*_\bbet-3{\vert W\vert^2}\right) \>\>\>\mbox{and}\>\>\>\Ve_{\rm D}= g^2 \sum_{\rm a} {\rm D}_{\rm a} {\rm D}_{\rm a}/2, \label{Vsugra} \eeq where a trivial gauge kinetic function is adopted, whereas the F- and D-terms read \beq \label{Kinv} {\rm F}_\al=W_{,Z^\al} +K_{,Z^\al}W\>\>\>\mbox{and}\>\>\>{\rm D}_{\rm a}= Z_\al\lf T_{\rm a}\rg^\al_\bt K^\bt\>\>\>\mbox{with}\>\>\> K^{\al}={\Khi_{,Z^\al}}\,.\eeq\eeqs Therefore, the models of PI in \Sref{Fhi} can be supersymmetrized if we conveniently select the functions $K$ and $W$ so that \eqs{vn}{action1} are reproduced. \subsection{Modeling PI in SUGRA}\label{sugra2} We concentrate on PI driven by $V_{\rm F}$. To achieve this, we have to assure that $V_{\rm D}=0$ during PI. This condition may be attained in the following two cases: \begin{itemize} \item If the inflaton is (the radial part of) a gauge singlet superfield $Z^2:=\Phi$. In this case, $\Phi$ obviously has zero contribution to $V_{\rm D}$. \item If the inflaton is the radial part of a conjugate pair of Higgs superfields, $Z^2:=\Phi$ and $Z^3:=\bar\Phi$, which are parameterized so that $V_{\rm D}=0$ -- see \Sref{fhi1}. \end{itemize} To achieve a kinetic term in \Eref{Saction1} similar to that in \Eref{action1} for $p=1$ and $2$, we need to establish suitable $K$'s so that \beq \vevi{K}= -N \ln\fk~~~\mbox{and}~~~\vevi{K_{\al\bbet}}=N/\fk^2\eeq with $N$ related to $N_p$ -- here and henceforth the symbol ``$\vevi{Q}$'' denotes the value of a quantity $Q$ during PI. However, from the F-term contribution to \Eref{Vsugra}, we remark that $K$ affects, besides the kinetic mixing, also $V_{\rm SUGRA}$, which, in turn, depends on $W$ too. 
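How the pole of $K$ propagates into $V_{\rm F}$, and how tuned superpotential terms can cancel it, can be seen in a toy symbolic computation along the real inflationary direction ($S=0$, with a canonical $S$ so that $K^{SS^*}=1$). The choices below -- $K=-N\ln(1-\sg)$ with $N=2$, and a slope $W_{,S}=\lambda\sg-\lambda\sg^2\propto\sg(1-\sg)$, i.e. the second coupling tuned to $-\lambda$ with $M=0$ -- are illustrative assumptions, not the full models of this talk:

```python
import sympy as sp

sg, lam = sp.symbols("sigma lambda", positive=True)
N = 2  # the pole-cancellation case; cf. the tuning quoted in the abstract

# Along the inflationary path (S = 0, Phi = Phi^* = sigma):
# V_F = e^K K^{SS*} |W_,S|^2, with K^{SS*} = 1 for a canonical S.
eK = (1 - sg)**(-N)              # e^K for K = -N ln(1 - sigma)
W_S = lam * sg - lam * sg**2     # tuned slope, proportional to sigma (1 - sigma)
V_F = eK * W_S**2

# The order-N pole of e^K is exactly cancelled by the zero of W_,S:
assert sp.cancel(V_F - lam**2 * sg**2) == 0
print(sp.simplify(V_F))          # a plain quadratic chaotic potential
```

With generic (untuned) parameters the factor $(1-\sg)^{-N}$ survives in $V_{\rm F}$, which is precisely the pole problem discussed here; the tuning between the $W$ terms removes it for $N=2$.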
Therefore, $\fk$ is generically expected to emerge also in the denominator of $V_{\rm SUGRA}$, making the establishment of an inflationary era difficult. This problem can be surpassed \cite{sor, epole} by two alternative strategies: \begin{itemize} \item Adjusting $W$ and constraining the prefactor of $K$'s, so that the pole is removed from $V_{\rm SUGRA}$ thanks to cancellations \cite{sor,epole,eno5} which introduce some tuning, though. \item Adopting a structured $K$ which yields the desired kinetic terms in \Eref{action1} but remains invisible in $V_{\rm SUGRA}$ \cite{tkref, sor, epole}. In such a case, any tuning of the $W$ parameters can be eluded. \end{itemize} In Sec.~\ref{fhi3} and \ref{fhi1} we give details on the realization of these scenarios, taking into account that $f_1$ in \Eref{action1} can be exclusively associated with a gauge singlet inflaton whereas $f_2$ can be related to a gauge non-singlet inflaton. We reserve $\al=1$ for a gauge singlet superfield, $Z^1=S$, called stabilizer or goldstino, which assists \cite{rube} us to formulate PI of chaotic type in SUGRA. Its presence in $W$ is determined as follows: \begin{itemize} \item It appears linearly in $W$ multiplying its other terms. To achieve technically such an adjustment, we require that $S$ and $W$ are equally charged under a global $R$ symmetry. \item It generates for $\vevi{S}=0$ the inflationary potential via the only term of $V_{\rm SUGRA}$ in \Eref{Vsugra} which remains alive \beq \Vhi=\vevi{V_{\rm F}}= \vevi{e^{K}K^{SS^*}|W_{,S}|^2}. \label{Vhio}\eeq \item It assures the boundedness of $\Vhi$. Indeed, if we set $\vevi{S}=0$, then $\vevi{K_{,z^\al}W}=0$ for $\al\neq1$ and $-3\vert \vevi{W}\vert^2=0$. Obviously, non-vanishing values of the latter term could render $\Ve_{\rm F}$ unbounded from below.
\item It can be stabilized at $\vevi{S}=0$ without invoking higher order terms, if we select \cite{su11} \beq \label{K2} K_2=N_S\ln\lf1+|S|^2/N_S\rg~~\Rightarrow~~\vevi{K_2^{SS^*}}=1~~\mbox{with $0<N_S<6$}.\eeq $K_2$ parameterizes the compact manifold $SU(2)/U(1)$. Note that for $\vevi{S}=0$, $S$ is canonically normalized and so we do not mention it again henceforth. \end{itemize} \section{PI With a Gauge Singlet Inflaton}\label{fhi3} The SUGRA setup for this case is presented in \Sref{fhi30}; then -- in \Sref{fhi31} -- we describe the salient features of this model and we expose our results in \Sref{num3}. \subsection{SUGRA Set-up}\label{fhi30} This setting is realized in the presence of two gauge singlet superfields $S$ and $\Phi$. We adopt the most general renormalizable $W$ consistent with the $R$ symmetry mentioned in \Sref{sugra2}, i.e., \beq W= S\lf \lda \phc+\ldb\phc^2-M^2\rg \label{whi} \eeq where $\lda, \ldb$ and $M$ are free parameters. As regards $K$, this includes, besides $K_2$ in \Eref{K2}, one of the following $K$'s, $\kas$ or $\tkas$, which yield a pole of order one in the kinetic term of $\phc$ and share the same geometry -- see \cref{epole}. Namely, \beq \kas=-N\ln\left(1-(\phc+\phc^*)/2\right)~~~\mbox{or}~~~ \tkas=-N\ln\frac{(1-\phc/2-\phc^*/2)}{(1-\phc)^{1/2}(1-\phc^*)^{1/2}},\label{tkas}\eeq with $\Re(\phc)<1$ and $N>0$. We opt for a pole of order one as the simplest choice, although models with a pole of order two were also proposed \cite{alinde}. The $K$'s above are invariant under a set of transformations, forming a group of matrices which can be related \cite{epole} to the group $U(1,1)$. Based on the $K$'s above, we can define the following three versions of PI: \begin{itemize} \item \dci, where the total $K$ is chosen as \beqs\beq \kbas=\kb+\kas.
\label{kbas}\eeq The elimination of the pole in $\Vhi$ discussed above can be applied if we set \beq N=2~~~\mbox{and}~~~\rss=-\ldb/\lda\simeq 1+\rs\,~~\mbox{with}~~~\rs\sim0~~~\mbox{and}~~M\ll1\label{kbas1} \eeq\eeqs such that the denominator including the pole in $\Vhi$ is (almost) cancelled out. \item \ca\ and \cb, which do not display any denominator in $\Vhi$, employing \beq \tkbas=\kb+\tkas \label{tkbas}\eeq with free parameters $N$, $\lda$, $\ldb$ and $M$. The discrimination of these models depends on which of the two inflaton-dependent terms in \Eref{whi} dominates -- see below. \end{itemize} \subsection{Structure of the Inflationary Potential}\label{fhi31} An inflationary potential of the type in \Eref{vn} can be derived from \Eref{Vhio}, specifying the inflationary trajectory as follows \beq \vevi{S}=0\>\>\>\mbox{and}\>\>\>\vevi{\th}:=\mbox{\sf\small arg}\vevi{\Phi}=0. \label{inftr3}\eeq Inserting the quantities above into \Eref{Vhio} and taking into account \Eref{K2} and \beq \label{eK} \vevi{e^{K}}=\begin{cases} \fr^{-N}&\mbox{for}\>\>\>K=\kbas,\\ 1& \mbox{for}\>\>\>K=\tkbas, \end{cases}\eeq we arrive at the following master equation \beq\Vhi=\ld^2 \begin{cases} {\lf \sg-\rss\sg^2-\mma^2\rg^2}/{\fr^{N}}&\mbox{for \dci},\\ {\lf \sg-\rss\sg^2-\mma^2\rg^2}&\mbox{for \ca},\\ {\lf \sg^2-\rrs\sg-\mmb^2\rg^2}&\mbox{for \cb},\end{cases}\label{Vmab}\eeq where $\sg=\Re(\Phi)$, $r_{ij}=-\ld_i/\ld_j$ with $i,j=1,2$ and $\ld$ and $M_i$ are identified as follows \beq\ld= \begin{cases} \lda~~\mbox{and}~~\mma=M/\sqrt{\lda}&\mbox{for~\dci\ and \ca},\\ \ldb~~\mbox{and}~~\mmb=M/\sqrt{\ldb}&\mbox{for~\cb}.\end{cases}\label{ldab}\eeq As advertised in \Sref{fhi30}, the pole in $\fr$ is indeed present in $\Vhi$ of \dci, but it disappears for \ca\ and \cb. The arrangement of \Eref{kbas1}, though, renders the pole harmless for \dci.
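The cancellation enforced by \Eref{kbas1} can be checked symbolically: in the exact limit $N=2$, $r_{21}=1$ and $M=0$, the pole denominator of the \dci\ branch of \Eref{Vmab} drops out, leaving a purely quadratic potential. A short sympy sketch (these limiting parameter values are taken for illustration):

```python
import sympy as sp

sg, ld = sp.symbols('sigma lambda', positive=True)
f = 1 - sg                     # frame function carrying the order-one pole
N, r21, M1 = 2, 1, 0           # delta-CI limit: N = 2, r21 -> 1, M -> 0

V = ld**2*(sg - r21*sg**2 - M1**2)**2 / f**N
assert sp.simplify(V - ld**2*sg**2) == 0   # pole cancelled: V = lambda^2 sigma^2
```

For $r_{21}=1+r_s$ with $r_s\sim0$, the cancellation is only approximate, which is precisely what renders the pole harmless rather than absent.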
The correct description of PI is feasible if we introduce the canonically normalized fields, $\se$ and $\what \th$, as follows \beq \label{K3} \vevi{K_{\Phi\Phi^*}}|\dot \Phi|^2 \simeq\frac12\lf\dot{\what\phi}^{2}+\dot{\what \th}^{2}\rg~~\Rightarrow~~\frac{d\se}{d\sg}=J={\sqrt{N/2}\over\fr}~~\mbox{and}~~ \widehat{\theta}\simeq J\sg\theta~~~\mbox{with}~~~\vevi{K_{\Phi\Phi^*}}=\frac{N}{4f_1^2}. \eeq We see that the relation between $\sg$ and $\se$ is identical with \Eref{VJe} for $p=1$, if we make the replacement $N_1=N/2$. We expect \ca\ [\cb] to yield results similar to those of the non-SUSY models of PI with $p=1$ in \Eref{action1} and $n=2$ [$n=4$] in \Eref{vn}, whereas \dci\ is totally autonomous. \renewcommand{\arraystretch}{1.4} \begin{table}[!t] \bec\begin{tabular}{|c||c||c|l|l|}\hline {\sc Fields}&{\sc Eigen-}& \multicolumn{3}{|c|}{\sc Masses Squared}\\\cline{3-5} &{\sc states}& & {$K=\kbas$}&{$K=\tkbas$}\\ \hline\hline $1$ real scalar &$\widehat \theta$ & $\widehat m^2_{\theta}$& \multicolumn{2}{|c|}{$6H_{\rm I}^2$}\\ $2$ real scalars &$\what s_1,~\what s_2$ & $\what m^2_{ s}$&\multicolumn{2}{|c|}{$6H_{\rm I}^2/N_S$}\\\hline $2$ Weyl spinors & ${(\what{\psi}_{\Phi}\pm \what{\psi}_{S})/\sqrt{2}}$& $\what m^2_{ \psi\pm}$& \multicolumn{2}{|c|}{$6n(1-\sg)^2H_{\rm I}^2/N\sg^2$}\\ \hline \end{tabular}\eec \caption{\sl Mass spectrum of our CI models along the inflationary trajectory of Eq.~(3.5) -- we take $n=1$ for \textsf{$\delta$\ftn CI} and \textsf{\ftn CI2} whereas $n=2$ for \textsf{\ftn CI4}.}\label{tab3} \end{table}\renewcommand{\arraystretch}{1} To check the stability of $V_{\rm SUGRA}$ in \Eref{Vsugra} along the trajectory in \Eref{inftr3} w.r.t. the fluctuations of the $Z^\alpha$'s, we construct the mass spectrum of the theory. Our results are summarized in \Tref{tab3}.
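The integration of $d\se/d\sg=J$ in \Eref{K3} can be verified symbolically. Taking $f_1=1-\sg$ and the boundary condition $\se(0)=0$ (our choice, for illustration), one finds $\se=-\sqrt{N/2}\,\ln(1-\sg)$, i.e. the E-model-like relation $\sg=1-e^{-\sqrt{2/N}\,\se}$:

```python
import sympy as sp

sg, N = sp.symbols('sigma N', positive=True)

J = sp.sqrt(N/2)/(1 - sg)                  # dse/dsg from Eq. (K3) with f_1 = 1 - sigma
se_of_sg = -sp.sqrt(N/2)*sp.log(1 - sg)    # candidate integral with se(0) = 0

assert sp.simplify(sp.diff(se_of_sg, sg) - J) == 0
# inverting: sigma = 1 - exp(-sqrt(2/N)*sigma_hat), so sigma -> 1 as sigma_hat -> infinity
```

The canonical field thus stretches the pole at $\sg=1$ to infinity, which is the familiar origin of the inflationary plateau.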
Taking the limit $\rs=\mma=0$ for \dci, $\rss=\mma=0$ for \ca\ and $\rrs=\mmb=0$ for \cb, we find the expressions of the masses squared $\what m^2_{\chi^\al}$ (with $\chi^\al=\th$ and $s$) arranged in \Tref{tab3}. We also display there the masses $\what m^2_{\psi^\pm}$ of the corresponding fermions -- we define $\what\psi_{\Phi}=J\psi_{\Phi}$, where $\psi_\Phi$ and $\psi_S$ are the Weyl spinors associated with $\Phi$ and $S$ respectively. We notice that the relevant expressions can take a unified form for all models -- recall that we use $N=2$ in \dci\ -- and approach, close to $\sg=\sgx\simeq1$, rather well the quite lengthy, exact ones employed in our numerical computation. From them we can appreciate the role of $N_S<6$ in retaining positive $\what m^2_{s}$. Also, we confirm that $\what m^2_{\chi^\al}\gg\Hhi^2\simeq\Vhio/3$ for $\sgf\leq\sg\leq\sgx$. \subsection{Results}\label{num3} The dynamics of the analyzed models is analytically studied in \cref{epole}. We here focus on the numerical results. After imposing \eqs{Nhi}{Prob} the free parameters of $$\mbox{\dci, ~\ca, ~\cb\ ~~are}~~~(\rs,\mma), (N, \rss, \mma) ~~\mbox{and}~~ (N,\rrs,\mmb), $$ respectively. Recall that we use $N=2$ exclusively for \dci. Fixing $\mma=0.001$ for \dci, $\mma=0.01$ and $\rss=0.001$ for \ca\ and $\mmb=0.01$ and $\rrs=0.001$ for \cb, we obtain the curves plotted and compared to the observational data in \Fref{fig1}. We observe that: \subparagraph{\bf (a)} For \dci\ the resulting $\ns$ and $r$ increase with $|\rs|$ -- see solid line in \Fref{fig1}. This increase, though, is more drastic for $\ns$, which covers the whole allowed range in \Eref{nswmap}. From the considered data we collect the results \beq \label{resm1} 0\lesssim\rs/10^{-5}\lesssim3.3, \>\>\>3.5\lesssim {r/10^{-3}}\lesssim5.3\>\>\> \mbox{and}\>\>\>9\cdot10^{-3}\lesssim\Dex\lesssim0.01. \eeq In all cases we obtain $\Ns\simeq44$, consistently with \Eref{Nhi} and the resulting $\wrh\simeq-0.237$ from \Eref{wrh}.
Fixing $\ns=0.965$, we find $\rs=-1.7\cdot 10^{-5}$ and $r=0.0044$ -- see the leftmost column of the Table in \Fref{fig1}. \subparagraph{\bf (b)} For \ca\ and \cb, $\ns$ and $r$ increase with $N$, and $\Dex$ increases relative to its value in \dci. Namely, $\ns$ approaches its central observational value in \Eref{nswmap} whereas the bound on $r$ yields an upper bound on $N$. More quantitatively, for \ca\ -- see dashed line in \Fref{fig1} -- we obtain \beqs\beq \label{resm2} 0.96\lesssim\ns\lesssim0.9654,\>\>\>0.1\lesssim N\lesssim 65,\>\>\>0.05\lesssim{\Dex}/{10^{-2}}\lesssim 16.7\>\>\>\mbox{and}\>\>\> 0.0025\lesssim {r}\lesssim0.039\eeq with $\wrh\simeq-0.05$ and $\Ns\simeq50$. On the other hand, for \cb\ -- see dot-dashed line in \Fref{fig1} -- we obtain \beq \label{resm3} 0.963\lesssim\ns\lesssim0.965,\>\>\>0.1\lesssim N\lesssim 55,\>\>\>0.23\lesssim{\Dex}/{10^{-2}}\lesssim 8.5\>\>\>\mbox{and}\>\>\> 0.0001\lesssim {r}\lesssim0.04\eeq\eeqs with $\wrh\simeq(0.25-0.39)$ and $\Ns\simeq54-56$. In both equations above the lower bound on $N$ is just artificial. For $N=10$, specific values of parameters and observables are arranged in the rightmost columns of the Table in \Fref{fig1}. \section{PI With a Gauge non-Singlet Inflaton}\label{fhi1} In the present scheme the inflaton field can be identified with the radial component of a conjugate pair of Higgs superfields. We here focus on the Higgs superfields, $\bar\Phi$ and $\Phi$, with $B-L=-1,~1$, which break the GUT symmetry $\Ggut=\Gsm\times U(1)_{B-L}$ down to the SM gauge group $\Gsm$ through their v.e.vs. We outline below the SUGRA setting in \Sref{fhi10}, its inflationary outcome in \Sref{fhi11}, and its predictions in Sec.~\ref{num1}. We here update the results of \cref{sor}, taking into account the recent data of \cref{gws}, and enrich its content by adding the model \hb.
\subsection{SUGRA Set-up}\label{fhi10} \begin{floatingtable}[r] \begin{tabular}{|l||lll|}\hline {\sc Superfields}&$S$&$\Phi$&$\bar\Phi$\\\hline\hline $U(1)_{B-L}$&$0$&$1$&$-1$\\\hline $R$ &$1$&$0$&$0$\\\hline \end{tabular} \caption {\sl Charge assignments of the superfields.}\label{ch} \end{floatingtable} In accordance with the imposed symmetries -- see \Tref{ch} -- we here adopt the following $W$ -- cf.~\cref{jean}: \beq W= S\lf \frac12\ldb\bar\Phi\Phi+\ldd (\bar\Phi\Phi)^2-\frac14M^2\rg,\label{whih} \eeq where $\ldb, \ldd$ and $M$ are free parameters. In contrast to \Eref{whi}, we here include the first allowed non-renormalizable term. As we see below, this term assists us to activate the pole-elimination method for \dhi\ and gives rise to \hb. On the other hand, the invariance of $K$ under $\Ggut$ forces us to introduce a pole of order two within the kinetic terms of the $\phcb-\phc$ system. One possible option -- for another equivalent one see \cref{sor} -- is \beq\kba=-N\ln\left(1-|\phc|^2-|\phcb|^2\right)\>\>\>\mbox{or}\>\>\> \tkba=-N\ln\frac{1-|\phc|^2-|\phcb|^2}{(1-2\phcb\phc)^{1/2}(1-2\phcb^*\phc^*)^{1/2}},\label{tkba}\eeq which parameterizes the manifold ${\cal M}_{21}=SU(2,1)/(SU(2)\times U(1))$ \cite{sor} with scalar curvature ${\cal R}_{21}=-{6}/{N}$ -- note that the present $N$ is twice that defined in the first paper of \cref{sor}. From the $W$ and $K$'s selected above, the following inflationary models emerge: \begin{itemize} \item \dhi, where we employ \beqs\beq \kbba=\kb+\kba \label{kbba} \eeq and ensure an elimination of the singular denominator appearing in $\Vhi$ setting \beq N=2~~~\mbox{and}~~~\tss=-\ldd/\ldb\simeq 1+\ts\,~~\mbox{with}~~~\ts\sim0~~~\mbox{and}~~~M\ll1.\label{kbba1} \eeq\eeqs \item \ha\ and \hb, which do not display any singularity in $\Vhi$, employing \beq \tkbba=\kb+\tkba \label{tkbba}\eeq with free parameters $N$, $\ldb$, $\ldd$ and $M$.
Their discrimination depends on which of the two inflaton-dependent terms in \Eref{whih} dominates -- see below. \end{itemize} \subsection{Structure of the Inflationary Potential}\label{fhi11} As in \Sref{fhi31}, we determine the inflationary potential, $\Vhi$, selecting a suitable parameterization of the involved superfields. In particular, we set \beq \Phi=\phi e^{i\theta}\cos\theta_\Phi\>\>\> \mbox{and}\>\>\>\bar\Phi=\phi e^{i\thb}\sin\theta_\Phi\>\>\> \mbox{with}\>\>\>0\leq\thn\leq{\pi}/{2}~~~\mbox{and}~~~S= \lf{s +i\bar s}\rg/{\sqrt{2}}.\eeq We can easily verify that a D-flat direction is \beq \vevi{\theta}=\vevi{\thb}=0,\>\vevi{\thn}={\pi/4}\>\>\>\mbox{and}\>\>\>\vevi{S}=0,\label{inftr}\eeq which can be qualified as the inflationary path. Indeed, for both $K$'s in \Eref{tkba}, the D term due to the $B-L$ symmetry during PI is \beq \vevi{{\rm D}_{BL}}= N\lf|\vevi{\phc}|^2-|\vevi{\phcb}|^2\rg/ \lf1-|\vevi{\phc}|^2-|\vevi{\phcb}|^2\rg=0.\eeq Also, regarding the exponential prefactor of $V_{\rm F}$ in \Eref{Vsugra} we obtain \beq \label{eKh} \vevi{e^{K}}=\begin{cases} \frr^{-N}&\mbox{for}\>\>\>K=\kba,\\ 1& \mbox{for}\>\>\>K=\tkba.\end{cases}\eeq Substituting it and \eqs{K2}{whih} into \Eref{Vhio}, this takes its master form \beq\Vhi=\frac{\ld^2}{16} \begin{cases} {\lf \sg^2-\tss\sg^4-\mmb^2\rg^2}/{\frr^{N}}&\mbox{for \dhi},\\ {\lf \sg^2-\tss\sg^4-\mmb^2\rg^2}&\mbox{for \ha},\\ {\lf \sg^4-\tts\sg^2-\mmc^2\rg^2}&\mbox{for \hb},\end{cases}\label{Vmabh}\eeq where $r_{ij}=-\ld_i/\ld_j$ with $i,j=1,2$ and $\ld$ and $M_i$ are identified as follows \beq\ld= \begin{cases} \ldb~~\mbox{and}~~\mmb=M/\sqrt{\ldb}&\mbox{for~\dhi\ and \ha},\\ \ldd~~\mbox{and}~~\mmc=M/\sqrt{\ldd}&\mbox{for~\hb}.\end{cases}\label{ldbd}\eeq From \Eref{Vmabh}, we infer that the pole in $\frr$ is indeed present in $\Vhi$ of \dhi\ but it disappears in $\Vhi$ of \ha\ and \hb, and so no $N$ dependence in $\Vhi$ arises.
The elimination of the pole in the regime of \Eref{kbba1}, though, leaves the realization of \dhi\ open. \renewcommand{\arraystretch}{1.4} \begin{table}[!t] \begin{center} \begin{tabular}{|c||c|c||l|l|}\hline {\sc Fields}&{\sc Eigen-}& \multicolumn{3}{|c|}{\sc Masses Squared}\\\cline{3-5} &{\sc states}& & \multicolumn{1}{|c|}{$K=\kbba$}&\multicolumn{1}{|c|}{$K=\tkbba$}\\ \hline\hline 2 real&$\widehat\theta_{+}$&$m_{\widehat\theta+}^2$& \multicolumn{2}{|c|}{$3\Hhi^2$}\\\cline{4-5} scalars&$\widehat \theta_\Phi$ &$\widehat m_{ \theta_\Phi}^2$&\multicolumn{2}{|c|}{$M^2_{BL}+6\Hhi^2(1+2/N-2/N\sg^2)$} \\\hline 1 complex&$s, {\bar{s}}$ &$ \widehat m_{ s}^2$&\multicolumn{1}{|c|}{$6\Hhi^2(1/N_S-8(1-\sg^2)/N+N\sg^2/2$}&\multicolumn{1}{|c|}{$6\Hhi^2(1/N_S-4/N$}\\ scalar&&&\multicolumn{1}{|c|}{$+2(1-2\sg^2)+8\sg^2/N)$}&\multicolumn{1}{|c|}{$+2/N\sg^2+2\sg^2/N)$}\\\hline 1 gauge boson &{$A_{BL}$}&{$M_{BL}^2$}& \multicolumn{2}{|c|}{$2Ng^2\sg^2/\frr^2$}\\\hline $4$ Weyl & $\what \psi_\pm$ & $\what m^2_{ \psi\pm}$& \multicolumn{2}{|c|}{${12\frr^2\Hhi^2/N^2\sg^2}$}\\\cline{4-5} spinors&$\ldu_{BL}, \widehat\psi_{\Phi-}$&$M_{BL}^2$&\multicolumn{2}{|c|}{$2Ng^2\sg^2/\frr^2$}\\ \hline \end{tabular}\end{center} \caption{\sl\ftn Mass spectrum of the models of HI along the inflationary trajectory of Eq.~(4.8). }\label{tab1} \end{table}\renewcommand{\arraystretch}{1.} To obtain PI we have to correctly identify the canonically normalized (hatted) fields of the $\phcb-\phc$ system, defined as follows \beqs\beq \vevi{K_{\al\bbet}}\dot Z^\al \dot Z^{*\bbet} \simeq \frac12\lf\dot{\widehat \sg}^2+\dot{\widehat \th}_+^2+\dot{\widehat \th}_-^2+\dot{\widehat \th}_\Phi^2\rg~~~\mbox{for}~~~\al=2,3. \label{kzzn}\eeq -- recall that $Z^1=S$ is already canonically normalized for $\vevi{S}=0$ as in \Eref{inftr}.
We find \beq \lf \vevi{K_{\al\bbet}}\rg= \vevi{M_{\phc\phcb}}~~\mbox{with}~~ \vevi{M_{\phc\phcb}}=\frac{\kp\sg^2}{2}\mtta{2/\sg^2-1}{1}{1}{2/\sg^2-1}, \>\> \kp=\frac{N}{\frr^{2}}.\eeq\eeqs We then diagonalize $\vevi{M_{\phc\phcb}}$ via a similarity transformation, i.e., \beq U_{\phc\phcb} \vevi{M_{\phc\phcb}} U_{\phc\phcb}^\tr =\diag\lf \kp_+,\kp_-\rg,\>\>\>\mbox{where}\>\>\>U_{\phc\phcb}= \frac{1}{\sqrt{2}}\mtt{1}{1}{-1}{1},\>\> \kp_+=\kp\>\>\>\mbox{and}\>\>\> \kp_-=\kp\frr\,. \eeq Inserting the expressions above into \Eref{kzzn} we obtain the hatted fields \beq \frac{d\se}{d\sg}=J={\sqrt{2N}\over\frr},~~\widehat{\theta}_+ \simeq\sqrt{\kp_+}\sg\theta_+,~~\widehat{\theta}_- \simeq\sqrt{{\kp_-}}\sg\theta_-~~\mbox{and}~~~\widehat \theta_\Phi \simeq \sg\sqrt{2\kp_-}\lf\theta_\Phi-{\pi}/{4}\rg,\eeq where $\th_{\pm}=\lf\bar\th\pm\th\rg/\sqrt{2}$. From the first equation above we conclude that \Eref{VJe} for $p=2$ is reproduced for $N_2=2N$. We expect that \dhi\ has behavior similar to that of \dci, found in \Sref{fhi31}, whereas \ha\ [\hb] may be interpreted as the supersymmetrization of the non-SUSY models with $p=2$ in \Eref{action1} and $n=4$ [$n=8$] in \Eref{vn}. Having defined the canonically normalized scalar fields, we can derive the mass spectrum of our models along the direction of \Eref{inftr} and verify its stability against the fluctuations of the non-inflaton fields. Approximate, though quite precise, expressions for $\sg=\sgx\sim1$ are arranged in \Tref{tab1}. We confine ourselves to the limits $\ts=\mmb=0$ for \dhi, $\tss=\mmb=0$ for \ha\ and $\tts=\mmc=0$ for \hb. As in the case of the spectrum in \Tref{tab3}, $N_S<6$ plays a crucial role in retaining positive and heavy enough $\what m^2_{s}$. Here, however, we also display the masses, $M_{BL}$, of the gauge boson $A_{BL}$ (which signals the fact that $U(1)_{B-L}$ is broken during PI) and the masses of the corresponding fermions.
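The similarity transformation above can be checked numerically: applying $U_{\phc\phcb}$ to $\vevi{M_{\phc\phcb}}$ must return the eigenvalues $\kp_+=\kp$ and $\kp_-=\kp\frr$. A numpy sketch (the values of $N$ and $\sg$ are arbitrary illustrations, with $\frr=1-\sg^2$):

```python
import numpy as np

N, sg = 3.0, 0.7                          # illustrative values, not a fit
f2r = 1 - sg**2                           # frame function of the order-two pole
kappa = N/f2r**2

M = (kappa*sg**2/2)*np.array([[2/sg**2 - 1, 1],
                              [1, 2/sg**2 - 1]])
U = np.array([[1.0, 1.0], [-1.0, 1.0]])/np.sqrt(2)

D = U @ M @ U.T
assert np.allclose(D, np.diag([kappa, kappa*f2r]))   # kappa_+ = kappa, kappa_- = kappa*f2r
```

Since $U_{\phc\phcb}$ is orthogonal, the same rotation simultaneously diagonalizes the kinetic terms of the $(\th,\bar\th)$ sector, as used in the definitions of $\widehat\th_\pm$.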
The unspecified eigenstate $\what \psi_\pm$ is defined as \beq \what \psi_\pm=(\what{\psi}_{\Phi+}\pm {\psi}_{S})/\sqrt{2}\>\>\>\mbox{where}\>\>\>\psi_{\Phi\pm}=(\psi_\Phi\pm\psi_{\bar\Phi})/\sqrt{2}\,,\eeq with the spinors $\psi_S$ and $\psi_{\Phi\pm}$ being associated with the superfields $S$ and $\bar\Phi-\Phi$. It is also evident that $A_{BL}$ becomes massive by absorbing the massless Goldstone boson associated with $\what\th_-$. The breakdown of $U(1)_{B-L}$ during PI is crucial in order to avoid the production of topological defects during the $B-L$ phase transition, which takes place after the end of PI. Indeed, along the direction of \Eref{inftr}, $\Vhi$ develops a SUSY vacuum lying along the direction \beq \label{vev}\vev{S}=0~~~\mbox{and}~~~ \vev{\sg}= \begin{cases} \lf 1-(1-4\tss\mmb^2)^{1/2}\rg^{1/2}/\sqrt{2\tss}&\mbox{for~\dhi\ and \ha},\\ \lf \tts+(\tts^2+4\mmc^2)^{1/2}\rg^{1/2}/\sqrt{2}&\mbox{for~\hb},\end{cases}\eeq i.e., $U(1)_{B-L}$ is finally spontaneously broken via the v.e.v of $\sg$. \subsection{Results}\label{num1} As in \Sref{num3}, we here focus on our numerical results -- our analytic ones for \dhi\ and \ha\ are presented in \cref{sor}. After enforcing \eqs{Nhi}{Prob} -- which yield $\ld$ together with $\sgx$ -- the free parameters of the models $$\mbox{\dhi, ~\ha, ~\hb\ ~~are}~~~(\ts,\mmb), (N, \tss, \mmb) ~~\mbox{and}~~ (N,\tts,\mmc), $$ respectively. Recall that we use $N=2$ exclusively for \dhi. Also, we determine $\mmb$ and $\mmc$ demanding that the GUT scale within the MSSM, $\Mgut\simeq2/2.433\times10^{-2}$, is identified with the value of $M_{BL}$ -- see \Tref{tab1} -- at the vacuum of \Eref{vev}, i.e., \beq \label{Mg} \vev{M_{BL}}={\sqrt{2N}g\vev{\sg}\over \vev{f_2}}=\mgut~~\Rightarrow~~\vev{\sg}\simeq\frac{\mgut}{g\sqrt{2N}} ~~\mbox{with}~~~g\simeq0.7,~~\vev{f_2}\simeq1\eeq and $\vev{\sg}$ given by \Eref{vev}. By varying the remaining parameters for each model we obtain the allowed curves in the $\ns-r$ plane -- see \Fref{fig2}.
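Equation \eref{Mg} fixes $\vev{\sg}$ once $g$ and $N$ are chosen. As a quick numerical illustration (the choice $N=12$ is ours, made for the sake of the example; $\Mgut$ and $g$ are as quoted in the text, in reduced Planck units):

```python
import numpy as np

Mgut = (2/2.433)*1e-2        # GUT scale in reduced Planck units, as in the text
g, N = 0.7, 12               # g as quoted; N chosen for illustration

vev_sg = Mgut/(g*np.sqrt(2*N))
assert 2e-3 < vev_sg < 3e-3  # <sigma> of a few times 10^-3 in Planck units
```

The smallness of $\vev{\sg}$ justifies the approximation $\vev{f_2}\simeq1$ used in \Eref{Mg}.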
A comparison with the observational data is also displayed there. We observe that: \subparagraph{\bf (a)} For \dhi\ -- see the solid line in \Fref{fig2} -- we obtain results similar to those obtained for \dci\ in \Sref{num3}. Namely, the resulting $\ns$ and $r$ increase with $|\ts|$, with $\ns$ covering the whole allowed range in \Eref{nswmap}. From the considered data we collect the results \beq \label{resh1}2\lesssim-\ts/10^{-5}\lesssim5.5, \>\>\> 2\lesssim {r/10^{-3}}\lesssim3.6\>\>\>\mbox{and}\>\>\>4\lesssim\Dex/10^{-3}\lesssim4.75. \eeq Also, we obtain $\Ns\simeq(54.8-55.7)$, consistently with \Eref{Nhi} and the resulting $\wrh\simeq0.3$ from \Eref{wrh}. Fixing $\ns=0.965$ we find $\ts=-3.6\cdot 10^{-5}$ and $r=0.0026$ -- see the leftmost column of the Table in \Fref{fig2}. \Eref{Mg} gives $\mmb=0.00587$. \subparagraph{\bf (b)} For \ha\ and \hb, $\ns$ and $r$ increase with $N$, and $\Dex$ is larger than that obtained in \dhi. Namely, $\ns$ approaches its central observational value in \Eref{nswmap} whereas the bound on $r$ yields an upper bound on $N$. More specifically, for \ha\ -- see dashed line in \Fref{fig2} -- we obtain \beqs\beq \label{resh2} 0.963\lesssim\ns\lesssim0.964,\>\>\>0.1\lesssim N\lesssim 36,\>\>\>0.09\lesssim{\Dex}/{10^{-2}}\lesssim 7.6\>\>\>\mbox{and}\>\>\> 0.0005\lesssim {r}\lesssim0.039\,,\eeq with $\wrh\simeq0.3$ and $\Ns\simeq56$. \Eref{Mg} dictates $\mmb\simeq(0.0013-0.0045)$. On the other hand, for \hb\ -- see dot-dashed line in \Fref{fig2} -- we obtain \beq \label{resh3} 0.963\lesssim\ns\lesssim0.965,\>\>\>0.1\lesssim N\lesssim40,\>\>\>0.45\lesssim{\Dex}/{10^{-2}}\lesssim 3.8\>\>\>\mbox{and}\>\>\> 0.0001\lesssim {r}\lesssim0.039\,,\eeq\eeqs with $\wrh\simeq(0.25-0.6)$ and $\Ns\simeq(54.6-60)$. \Eref{Mg} implies $\mmc\simeq(1.1-690)\cdot10^{-6}$. In both equations above the lower bound on $N$ is just artificial -- as in \eqs{resm2}{resm3}.
For $N=12$, specific values of parameters and observables are arranged in the rightmost columns of the Table in \Fref{fig2}. Although \hb\ is worse than \ha\ regarding the tuning of $\mmc$ and $\tts$, it leads to $\ns$ values precisely equal to the central observational one -- cf. \Eref{nswmap}. \section{Conclusions}\label{con} We reviewed the implementation of PI first in a non-SUSY and then in a SUSY framework. In the former regime, we confined ourselves to models displaying a kinetic mixing in the inflaton sector with a pole of order one or two and verified their agreement with observations. In the latter regime, we presented two classes of models (CI and HI) depending on whether the inflaton is included in a gauge singlet or non-singlet field. CI and HI rely on the superpotentials in \eqs{whi}{whih} respectively, which respect an $R$ symmetry and include a field accompanying the inflaton that facilitates the establishment of PI. In each class we singled out three subclasses of models, (\dci, \ca\ and \cb) and (\dhi, \ha\ and \hb). The models \dci\ and \dhi\ are based on the \Ka s in \eqs{kbas}{kbba} whereas (\ca, \cb) and (\ha, \hb) on those shown in \eqs{tkbas}{tkbba}. All those \Ka s parameterize hyperbolic internal geometries with a kinetic pole of order one for CI and two for HI. The Higgsflaton in the latter case implements the breaking of a gauge $U(1)_{B-L}$ symmetry at a scale which may assume a value compatible with the MSSM unification. All the models excellently match the observations by restricting the free parameters to reasonably ample regions of values. In particular, within \dci\ and \dhi\ any observationally acceptable $\ns$ is attainable by tuning $\rs$ and $\ts$ respectively to values of the order $10^{-5}$, whereas $r$ is kept at the level of $10^{-3}$ -- see \eqs{resm1}{resh1}.
On the other hand, \ca, \cb, \ha\ and \hb\ avoid any tuning; larger $r$'s are achievable as $N$ increases beyond $2$, while $\ns$ lies close to its central observational value -- see \eqs{resm2}{resm3} for CI and \eqs{resh2}{resh3} for HI. \paragraph*{\small \bf\scshape Acknowledgments} {\small I would like to thank A.~Marrani, S.~Ketov and E.W. Kolb for interesting discussions. This research work was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the ``First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant'' (Project Number: 2251).} \def\ijmp#1#2#3{{\emph{Int. Jour. Mod. Phys.}} {\bf #1},~#3~(#2)} \def\plb#1#2#3{{\emph{Phys. Lett. B }}{\bf #1},~#3~(#2)} \def\prl#1#2#3{{\emph{Phys. Rev. Lett.} } {\bf #1},~#3~(#2)} \def\prep#1#2#3{\emph{Phys. Rep. }{\bf #1},~#3~(#2)} \def\prd#1#2#3{{\emph{Phys. Rev. } D }{\bf #1},~#3~(#2)} \def\npb#1#2#3{{\emph{Nucl. Phys.} }{\bf B#1},~#3~(#2)} \def\astp#1#2#3{\emph{Astropart. Phys.} {\bf #1}, #3 (#2)} \def\epjc#1#2#3{{\emph{Eur. Phys. J. C}} {\bf #1},~#3~(#2)} \def\jhep#1#2#3{{\emph{JHEP} } {\bf #1}, #3 (#2)} \def\jcap#1#2#3{{\emph{JCAP} } {\bf #1}, #3 (#2)} \def\jcapn#1#2#3#4{{\emph{JCAP} }{\bf #1}, no.\ #4, #3 (#2)} \newpage
Title: The on-ground data reduction and calibration pipeline for SO/PHI-HRT
Abstract: The ESA/NASA Solar Orbiter space mission has been successfully launched in February 2020. Onboard is the Polarimetric and Helioseismic Imager (SO/PHI), which has two telescopes, a High Resolution Telescope (HRT) and the Full Disc Telescope (FDT). The instrument is designed to infer the photospheric magnetic field and line-of-sight velocity through differential imaging of the polarised light emitted by the Sun. It calculates the full Stokes vector at 6 wavelength positions at the Fe I 617.3 nm absorption line. Due to telemetry constraints, the instrument nominally processes these Stokes profiles onboard, however when telemetry is available, the raw images are downlinked and reduced on ground. Here the architecture of the on-ground pipeline for HRT is presented, which also offers additional corrections not currently available on board the instrument. The pipeline can reduce raw images to the full Stokes vector with a polarimetric sensitivity of $10^{-3}\cdot I_{c}$ or better.
https://export.arxiv.org/pdf/2208.14904
\keywords{Solar Orbiter, space observatory, solar physics, spectropolarimetry, data pipelines, data processing, image processing} \section{Introduction} \label{sec:intro} The Solar Orbiter (SO) is the first medium-class mission selected from ESA's Cosmic Vision 2015-2025 Program. It is a collaborative effort with NASA, and the spacecraft was launched on February 10th 2020\cite{Muller2020TheOverview}. Its primary goal is to study the Sun and the inner heliosphere. To achieve this, the spacecraft carries a scientific payload of 10 instruments, 6 remote sensing and 4 in-situ. The Polarimetric and Helioseismic Imager (PHI) is one of the remote sensing instruments; it retrieves the continuum intensity, the vector magnetic field, and the line-of-sight velocity, all in the Sun's photosphere\cite{solanki_polarimetric_2020}. It does this by sampling four linear combinations of the four Stokes parameters of the magnetically sensitive Fe I absorption line at $617.3$ nm at 6 wavelength positions. These samples are later transformed into the aforementioned physical quantities by inverting the radiative transfer equation for polarized light in the presence of a magnetic field. The instrument has two telescopes: the Full Disc Telescope (FDT) and the High Resolution Telescope (HRT)\cite{gandorfer_high_2018}. The FDT can capture the full solar disc, while the HRT is designed to capture photospheric features in more spatial detail. During normal operations, to conserve telemetry, the raw images from the two telescopes are reduced onboard using field-programmable gate array (FPGA) computers, producing the data products which are downlinked to Earth\cite{Lange2017On-boardInstrument, Albert2020}. However, there are occasions during the nominal and extended mission phases when telemetry rates are favourable enough that raw images can be downloaded and reduced on-ground.
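Recovering the Stokes vector from the four measured intensity combinations amounts to applying a demodulation matrix, the inverse of the instrument's modulation matrix. The numpy sketch below uses a generic ideal $4\times4$ modulation matrix purely for illustration, not SO/PHI's actual calibrated one:

```python
import numpy as np

# Generic ideal modulation matrix (illustrative assumption, not SO/PHI's).
M_mod = np.array([[1,  1,  0,  0],
                  [1, -1,  0,  0],
                  [1,  0,  1,  0],
                  [1,  0,  0,  1]], dtype=float)

D_demod = np.linalg.inv(M_mod)                 # demodulation matrix

S_true = np.array([1.0, 0.01, -0.005, 0.02])   # made-up (I, Q, U, V) signal
I_meas = M_mod @ S_true                        # four modulated intensities
S_rec  = D_demod @ I_meas                      # recovered Stokes vector

assert np.allclose(S_rec, S_true)
```

In practice the demodulation matrices come from the polarisation calibration campaign and depend on the operating PMP temperature, as described below.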
The ability to reduce data on-ground provides more opportunities for fine-tuning the data reduction process, and therefore for producing higher quality data products. Furthermore, it allows for the additional processing required when the HRT's stabilisation system is switched off. Since launch, the spacecraft was in its cruise phase until 27th November 2021, when the nominal mission phase began\cite{Zouganelis2020}. During the cruise phase, predominantly raw images from HRT were downloaded and reduced on-ground, to allow for in-flight testing and investigation of the instrument's performance. This paper outlines the current state of the on-ground pipeline for the HRT (V1.4 June 2022), written in Python3, which is used to reduce and calibrate these raw images. Finally, early in-flight data reduced with this pipeline are presented and the telescope's high polarimetric sensitivity is shown. \section{SO/PHI-HRT} \subsection{The Instrument} \label{subsec:hrt_inst} A simplified description of the key optical components of the HRT is presented, following the optical path shown in Fig. \ref{fig:hrt}\cite{solanki_polarimetric_2020, gandorfer_high_2018}. The HRT is a Ritchey-Chrétien telescope with a decentred pupil. The HRT has a Heat Rejection Entrance Window (HREW) that allows in only $4\%$ of the incoming solar light power. The HRT uses its own Polarisation Modulation Package (PMP), consisting of Liquid Crystal Variable Retarders (LCVRs) and a polariser, to modulate the light in order to obtain the polarisation characteristics of the incoming light\cite{Alvarez-Herrero2015a}. Once modulated, the light is split and a portion enters the Correlation Tracker Camera (CTC). The CTC is part of the Image Stabilisation System (ISS), which works to track a specific feature on the solar surface, calculating from the CTC images the steering signal for the M2 (tip-tilt) mirror to accurately track the desired feature\cite{Volkmer2012, Carmona2014}.
The ISS is also used to compensate for effects such as spacecraft jitter. After the beam splitter the light goes through the HRT Refocus Mechanism (HRM) and passes through the Feed Select Mechanism (FSM), which is used to switch between FDT and HRT, and towards the Filtergraph (FG). The FG contains two pre-filters and a tunable LiNbO$_{3}$ Fabry-Perot etalon, an interferometer, which together allow for a transmission window with a mean full-width-half-maximum (FWHM) of $(106\pm5)$ m\AA $\:$and free spectral range of $3.0$ \AA \cite{gandorfer_high_2018, Dominguez-Tagle2014OpticalPHI}. The resultant light then illuminates an Active Pixel Sensor (APS) which reads out images with $2048\times2048$ pixels. The re-imaging optics in the Focal Plane Assembly (FPA) provides a plate scale on the sensor of $0.5^{\prime\prime}$ per pixel which corresponds to $102$ km at $0.28$\,AU distance. The imaging cadence is controlled by the modulation accumulation scheme in the PMP. While the HRT has the operational capability of a $60$ second cadence, during the cruise phase a $96$ second cadence was used. This was done using a PMP scheme of $[4,5]$. At each wavelength position, $4$ frames are taken for each modulation state, and this is cycled through each modulation state $5$ times, resulting in $20$ total frames for each modulation state at each wavelength position. The minimum number of frames to achieve the desired signal to noise ratio of $10^{-3}$ is $16$\cite{solanki_polarimetric_2020}. \subsection{Flat-Field Acquisition}\label{subsec:flat_acq} A critical reduction process of the science data requires a flat-field. The flat-field contains the difference in gain of a given pixel with respect to the others, as well as information of other imperfections such as dust grains in the field of view (FOV). The HRT flat-fields are not acquired using off-pointing of the spacecraft\cite{kuhn_gain_1991}. 
Instead, the evolution of the solar surface is used to introduce differences between subsequent images, such that localised solar features are averaged out with enough acquisitions. Over a period of approximately $8$ hours, 1500 images are accumulated at each polarisation state and wavelength. These flat-fields are acquired during every major campaign, to ensure the science data can be properly calibrated. However, polarimetric structures remain in the flat-field that are smeared horizontally due to the solar rotation. This horizontal smearing leaves unwanted artefacts when applying the flat-field correction to the scientific data. Therefore, an additional flat-field processing procedure was implemented as part of the pipeline: unsharp masking, which is described in Sec. \ref{subsec:unsharp_mask}. \section{SO/PHI-HRT On-ground Pipeline} \subsection{Pipeline Overview}\label{subsec:pipe} The on-ground pipeline is developed in Python3 and reduces the raw data received from the SO/PHI-HRT instrument. The raw files downloaded from the instrument are classed Level 0 (L0) data. They become Level 1 (L1) data once the necessary metadata are added, the data are scaled to the correct units (to account for the compression scheme used) and reflected in the $Y$ axis to match the solar orientation convention. This on-ground pipeline converts the L1 data into L2 data. This process is described in Fig. \ref{diag:pipe}. The inputs to the pipeline are the science data, raw flat-fields, and demodulation matrices (for each operating PMP temperature) from the polarisation calibration campaign performed prior to launch. The science data and flat-fields are first dark-corrected to remove the dark current (not shown in Fig. \ref{diag:pipe} for brevity). A key capability of the pipeline is the option to unsharp mask the flat-fields (see Sec. \ref{subsec:unsharp_mask}); however, the width of the Gaussian distribution to be used must be known beforehand.
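The unsharp-masking option amounts to low-pass filtering a demodulated flat-field with a 2D Gaussian of the given width. A minimal sketch of this operation, assuming SciPy; the function name and interface are illustrative, not the pipeline's actual API:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_flat(flat_demod, sigma):
    """Low-pass filter a demodulated flat-field with a 2D Gaussian of width
    `sigma` (in pixels, e.g. ~59 px at 0.526 AU), suppressing small-scale
    polarimetric stripes while retaining large-scale gain variations."""
    return gaussian_filter(flat_demod, sigma=sigma, mode="nearest")
```

Because the smearing scale varies with image scale, the width must be chosen per dataset (solar distance and PMP temperature), as described below.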
The pipeline has the functionality to reduce multiple datasets at once, with the same flat-field, provided the image dimensions, PMP temperature and continuum position of all datasets agree. The pipeline is built to work with any cropped dataset provided the input is square. The pipeline can also reduce images when the ISS is locked to track the solar limb, using an automatic limb detection algorithm to account for limb darkening effects when normalising the Stokes parameters and applying the cross-talk correction. Furthermore, additional steps are implemented to account for the case when the ISS is not operating. The output quantities are indicated with the dotted outlines in Fig. \ref{diag:pipe}. \subsection{Unsharp Masking}\label{subsec:unsharp_mask} Due to the method of flat-field acquisition described in Sec. \ref{subsec:flat_acq}, horizontal polarisation elements exist in the flat-fields, which would contaminate the data. To remove this contamination, unsharp masking is performed on the flat-fields. This is achieved by convolving the demodulated flat-fields with a 2D Gaussian distribution. The width of the Gaussian distribution was optimised such that the horizontal stripes were removed, while larger-scale information was retained. The width of the Gaussian is a function of solar distance and PMP temperature. For example, at a distance of $0.526$ AU, a width of $59$ pixels was used, while at $0.801$ AU the appropriate width was $49$ pixels. An example of the unsharp masking process is shown in Fig. \ref{fig:unsharp}. \subsection{Flat-Field Correction}\label{subsec:flat_corr} From in-flight testing it was determined that a polarisation state dependent, and wavelength dependent, flat-field must be applied to the science data. This is done in order to remove a polarimetric ghost that was detected, which likely originates from a reflection between the inner panel of the HREW and the highly reflective etalon.
With an optical path of this nature, with many parallel optical surfaces, several measures were taken to suppress ghosts; however, it was not possible to eliminate all of them. This holds in particular for the ghosts produced by the HREW, which is not mounted on the instrument but is instead a component of the heat shield of the spacecraft, and is therefore prone to large margins in the mechanical alignment. Nevertheless, with the proper treatment of the flat-fields, we are capable of removing the contribution from the detected ghost to below the noise requirement level. Finally, to correct for the cavities within the etalon, the flat-field must be normalised over the wavelength range, so that the spectral line profile is removed. This also has the effect of removing the solar rotation to at least first order. Thus, the flat-fielded data, $I_{ff}$, are calculated as follows: \begin{equation} I_{ff}(x,y,s,\lambda) = \frac{I_{df}(x,y,s,\lambda)}{I_{flat}(x,y,s,\lambda)}, \end{equation} where $I_{df}$ is the dark-corrected data, $x$ and $y$ are the spatial dimensions in the FOV, $s$ is the polarisation state and $\lambda$ denotes the wavelength. \subsection{Demodulation}\label{subsec:demod} The raw images must be demodulated to remove the modulation applied by the PMP in the image acquisition process. This is achieved by applying the demodulation matrix, with elements $d_{11}$ to $d_{44}$, to each pixel: \begin{equation}\label{mod_demod} \begin{pmatrix} d_{11} & \dots & d_{14}\\ \vdots & \ddots & \\ d_{41} & & d_{44} \end{pmatrix} \begin{pmatrix} I_{ff1}(x,y,s,\lambda)\\ I_{ff2}(x,y,s,\lambda)\\ I_{ff3}(x,y,s,\lambda)\\ I_{ff4}(x,y,s,\lambda) \end{pmatrix} = \begin{pmatrix} S_{1}\\ S_{2}\\ S_{3}\\ S_{4} \end{pmatrix}, \end{equation} where $I_{ff1...4}$ are the flat-fielded intensities and $S_{1...4}$ are the Stokes parameters: $I,Q,U,V$.
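Per pixel, the demodulation is a $4\times4$ matrix applied to the four flat-fielded modulation images; vectorised over the FOV this is a single tensor contraction. A minimal NumPy sketch, with illustrative array names:

```python
import numpy as np

def demodulate(images, demod):
    """Apply a 4x4 demodulation matrix to a stack of 4 modulated images.

    images: array of shape (4, ny, nx), one frame per modulation state.
    demod:  array of shape (4, 4), the demodulation matrix.
    Returns the Stokes images (I, Q, U, V) with shape (4, ny, nx)."""
    return np.einsum("ij,jyx->iyx", demod, images)
```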
The demodulation matrices acquired during the on-ground testing before launch resulted in large gradients across the field of view in the Stokes parameters, an example of which is portrayed in Fig. \ref{fig:demod}. Averaging the central $1024\times1024$ region of the demodulation matrix yields a demodulation matrix that removes the large-scale FOV variations introduced by the matrix measured during the on-ground polarisation campaign. \subsection{Limb Detection and Normalisation}\label{subsec:limb_norm} For datasets where the limb is in the FOV, such as when the spacecraft is off-pointing to the poles, certain additional steps are needed. From the World Coordinate System (WCS) information in the FITS header, the pointing (North, South, East, West) is determined, such that a limb fitting algorithm can accurately detect the limb. First, a mask is created, ensuring that all pixels outside the solar disc are set to $0$ in the final data products. To prevent limb darkening from affecting the normalisation, the edge and radius of the limb are calculated. For limb images, the average of Stokes $I$ at the continuum wavelength position is used as the Stokes normalisation factor, making sure to only include pixels which are less than $80\%$ of the solar radius in distance from disc centre. For disc centre pointing, the average of the central $1024\times1024$ region is found and used as $I_{c}$. \subsection{Cross-Talk Correction}\label{subsec:ctalk} Cross-talk between the Stokes parameters arises from three main sources: spacecraft jitter, imperfect instrument calibration, and modulation from the LCVRs. The strongest cross-talk is that from Stokes $I$ to the other Stokes parameters, as the absolute value of Stokes $I$ is much greater than that of $Q,\;U,\;V$\cite{DelToroIniesta2003IntroductionSpectropolarimetry}.
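The normalisation logic above (mean within $80\%$ of the solar radius for limb pointings, the central $1024\times1024$ region otherwise) can be sketched as follows; the function name and arguments are illustrative, not the pipeline's actual interface:

```python
import numpy as np

def continuum_norm(icont, cx=None, cy=None, rsun=None):
    """Stokes normalisation factor I_c from a continuum Stokes I image.

    Limb pointing (cx, cy, rsun in pixels given): mean over pixels closer to
    disc centre than 80% of the solar radius, avoiding limb darkening.
    Disc-centre pointing: mean of the central 1024x1024 region."""
    ny, nx = icont.shape
    if rsun is not None:
        yy, xx = np.mgrid[:ny, :nx]
        mask = np.hypot(xx - cx, yy - cy) < 0.8 * rsun
        return icont[mask].mean()
    y0, x0 = ny // 2 - 512, nx // 2 - 512
    return icont[y0:y0 + 1024, x0:x0 + 1024].mean()
```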
Due to the cross-talk from the sources described earlier, an ad-hoc correction is applied to the data\cite{SanchezAlmeida1992ObservationSunspots, schlichenmaier_spectropolarimetry_2002}. A linear fit of $Q,\;U,\;V$ against $I$ is performed separately for each parameter, on the continuum wavelength image, to find the gradient and offset parameters of the cross-talk from $I$ to $Q,\;U,\;V$. When applying the cross-talk correction at each of the $6$ different wavelength positions, the parameters are weighted by the respective averaged Stokes $I$ value, relative to the continuum value. The cross-talk parameters from in-flight data are of the order of $1\%$ or lower, indicating that the ISS ensures there are no major contributions from spacecraft jitter, that the instrument calibration is accurate and that the demodulation matrices used are effective. After this step, provided the ISS is operational, the pipeline produces the L2 `Stokes' filtergrams. \subsection{Special Case: ISS Off}\label{subsec:iss} The ISS of the instrument, as explained in Sec. \ref{subsec:hrt_inst}, tracks features on the Sun and compensates for the spacecraft jitter. The latter is important for two reasons: it removes the cross-talk induced by the jitter and keeps the 24 raw frames aligned with each other during the acquisition. On some occasions the ISS has to be turned off, and three procedures have been implemented to compensate for the absence of this subsystem. \subsubsection{Modulation Alignment}\label{subsubsec:mod} The first procedure is the modulation alignment just before the demodulation of the data (see Fig. \ref{diag:pipe}). For each wavelength, we take the first polarimetric modulation as a reference; the remaining three polarimetric modulations are then aligned to the chosen reference. This is performed by computing the gradient of the images, selecting a sub-region of $512\times512$ pixels, and evaluating the cross-correlation between them with sub-pixel accuracy\cite{Guizar-Sicairos2008}.
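The frame registration just described rests on FFT cross-correlation. An integer-pixel version is sketched below as an illustration; the pipeline itself refines this to sub-pixel accuracy following Guizar-Sicairos et al. 2008 and operates on image gradients:

```python
import numpy as np

def integer_shift(ref, img):
    """Integer-pixel displacement, via FFT cross-correlation, that registers
    img onto ref: np.roll(img, (dy, dx), axis=(0, 1)) aligns it with ref.
    (Sketch only: the pipeline uses sub-pixel registration on gradients.)"""
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    ny, nx = ref.shape
    # wrap circular lags into the range [-n/2, n/2)
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return dy, dx
```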
This registration has to be performed before the demodulation in order to avoid the combination of pixels from different regions on the Sun. Figure \ref{fig:mod_align} shows the effect of the spacecraft jitter on the data and the removal of the noise pattern with the modulation alignment step. The Stokes $V$ noise level decreases from $2.4\times10^{-3}$ to $1.4\times10^{-3}$. \subsubsection{\textit{V} to \textit{Q,U} Cross-Talk Correction}\label{subsubsec:ctalk} The spacecraft jitter is responsible for increasing the cross-talk both from $I$ to $Q,\;U,\;V$ and from $V$ to $Q,\;U$. Similar to the correction of cross-talk from Stokes $I$ (Sec. \ref{subsec:ctalk}), this procedure performs a linear fit of Stokes $Q$ and $U$ against Stokes $V$ immediately after the cross-talk correction from Sec. \ref{subsec:ctalk} is applied. The difference between the two methods is that here we consider points from all the wavelengths when computing the linear fit, so the parameters are not weighted by the continuum value. While the cross-talk parameters from $I$ to $Q,\;U,\;V$ are of the order of $1\%$, the parameters from $V$ to $Q,\;U$ can reach $7\%$. \subsubsection{Wavelength Alignment}\label{subsubsec:wl} The last step before producing the L2 `Stokes' filtergrams is the alignment of the frames at different wavelengths. Similar to the procedure described in Sec. \ref{subsubsec:mod}, we use the continuum Stokes $I$ image as a reference, and after computing the gradient, we align the other frames to this reference. The only exception is the line core wavelength image, for which a line wing image is used as a reference. This alignment has to be performed before the Radiative Transfer Equation is inverted, so that the Stokes profiles are cohesive, i.e., the different wavelength samples of a given Stokes profile come from the same spatial location on the Sun.
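Both the $I\rightarrow Q,U,V$ and the $V\rightarrow Q,U$ corrections reduce to a linear fit of one Stokes image against another, followed by subtraction of the fitted contribution. A minimal sketch with illustrative names:

```python
import numpy as np

def fit_crosstalk(src, dst):
    """Fit dst = a*src + b over all pixels; (a, b) are the cross-talk
    gradient and offset (src is e.g. Stokes I on the continuum image,
    or Stokes V over all wavelengths)."""
    a, b = np.polyfit(src.ravel(), dst.ravel(), 1)
    return a, b

def remove_crosstalk(src, dst, a, b):
    """Subtract the fitted contribution of src from dst."""
    return dst - (a * src + b)
```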
\subsection{Radiative Transfer Equation Inversion}\label{subsec:rte} To infer the physical quantities from the Stokes maps, a Radiative Transfer Equation (RTE) inversion is performed. Similar to the inversion code used by the HMI vector magnetic field pipeline\cite{hoeksema_helioseismic_2014, Borrero2011}, a code assuming a Milne-Eddington (ME) atmosphere is used\cite{DelToroIniesta2003IntroductionSpectropolarimetry,LandiDeglInnocenti2005}. An ME model assumes that the physical properties of the atmosphere remain constant with geometrical height, while the source function scales linearly with optical depth. This pipeline uses the CMILOS code written in C, which utilises analytical response functions \cite{orozco_suarez_usefulness_2007}. This code is the same as that used by the FPGA devices onboard\cite{CobosCarrascosa2016}, and it works by iteratively adjusting the model atmosphere's parameters to minimise the difference between the observed and synthetic profiles until convergence is achieved. The CMILOS code has three operating modes: \begin{itemize} \item RTE with default starting conditions \item RTE with Classical Estimates as starting conditions \item Classical Estimates only \end{itemize} With Classical Estimates (CE) enabled, either in CE-only mode or together with RTE, the code estimates the line-of-sight magnetic field and velocity using the centre of gravity method\cite{Semel1967,Rees1978}. The transverse component of the magnetic field is estimated using the weak-field approximation\cite{LandiDeglInnocenti2005}. The CMILOS inversion code produces the following L2 data products: full magnetic vector, Dopplergram and continuum intensity. The azimuth is defined as the counter-clockwise rotation from the positive direction of the detector $y$-axis. However, the intrinsic $180^\circ$ ambiguity of the Zeeman effect is not removed at this stage.
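The centre-of-gravity method used by the Classical Estimates determines $B_{\rm los}$ from the separation of the centroids of the $I+V$ and $I-V$ line depressions. A minimal sketch for the Fe I 617.3 nm line follows; the constants and interface are illustrative and not CMILOS's actual implementation:

```python
import numpy as np

LAMBDA0 = 6173.34  # Fe I rest wavelength [Angstrom] (assumed for illustration)
GEFF = 2.5         # effective Lande factor of the line
KZ = 4.67e-13      # Zeeman constant [A^-1 G^-1]: dlambda = KZ * lambda0^2 * g * B

def cog_blos(wl, stokes_i, stokes_v, icont):
    """Centre-of-gravity estimate of the line-of-sight field [G] (Semel 1967):
    B_los = (lambda_+ - lambda_-) / (2 * KZ * lambda0^2 * g_eff), where
    lambda_+/- are the centroids of the I+V and I-V line depressions."""
    dep_p = icont - (stokes_i + stokes_v)
    dep_m = icont - (stokes_i - stokes_v)
    lam_p = np.sum(wl * dep_p) / np.sum(dep_p)   # centroid on a uniform grid
    lam_m = np.sum(wl * dep_m) / np.sum(dep_m)
    return (lam_p - lam_m) / (2.0 * KZ * LAMBDA0**2 * GEFF)
```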
\section{Early In-Flight Data} \subsection{February 2021} We first introduce reduced data from 23 February 2021 17:00 UTC, during the short term planning period (STP) 136. These data captured the quiet Sun at disc centre, allowing us to characterise the noise level well, given the lack of strong magnetic field signals. The distance of the spacecraft to the Sun was $0.526$ AU, and the PMP temperature was set to $50\;\degree$C. At this distance, the (two-pixel) spatial resolution is $382$ km. The Stokes filtergrams in Fig. \ref{fig:iquv} display high uniformity and low linear polarimetric signal, as expected for the quiet Sun. This demonstrates the high effectiveness of the flat-field correction and additional cross-talk removal. The photospheric magnetic field network appears clearly in the Stokes $V/I_{c}$ image. In Stokes $U/I_{c}$, the remnant edge of a polarimetric ghost is present in the lower right corner. This ghost is most likely due to a reflection off the HREW (see Sec. \ref{subsec:flat_corr}). The flat-field correction removes the ghost to a large extent, but the edge remains. From an analysis of histograms of the four quadrants, the difference of the lower right corner distribution from the others is below the $10^{-3}\cdot I_{c}$ noise level. Figure \ref{fig:rte} displays the quantities derived from these Stokes filtergrams. As expected from the uniformity of the filtergrams, the physical quantities in Fig. \ref{fig:rte} display similar uniformity, with low magnetic field strengths as the quiet Sun is devoid of active regions. The edge of a polarimetric ghost is visible in the lower right corner of the azimuth due to the absence of signal. The continuum intensity map exhibits the granular structure of the photosphere. The inclination is centred on $90$ degrees, and due to the very low linear polarisation signal, the azimuth contains mainly noise. The Gaussian fit to the Stokes $V$ histogram in Fig.
\ref{fig:noise} a) indicates that a polarimetric accuracy of $<10^{-3}\cdot I_{c}$ is achieved, illustrating the high performance of the HRT instrument. It is also important to note that, due to the tight telemetry budget, raw images from SO/PHI are compressed before download. The compression procedure, in this case to $6$ bits/pixel (down from $32$), increases the noise of the filtergrams: for example, data from the commissioning phase, which were downloaded without compression, had a Stokes noise level of $8.5\times10^{-4}$. Furthermore, using the same method as Liu et al. (2012)\cite{liu_comparison_2012}, the line-of-sight magnetogram has an estimated noise level of $6.6$ G, very similar to the noise level of the $720$ second magnetogram images from HMI, but with almost eight times the cadence: $96$ seconds. \subsection{November 2021} We present a reduced dataset of a sunspot captured by HRT, taken during the inferior conjunction in November 2021. The spacecraft was flying close to Earth, with a distance to the Sun of $0.858$ AU and a PMP temperature set to $40\;\degree$C, and was pointing to disc centre. At this distance the (two-pixel) spatial resolution is $624$ km, and almost half the solar disc is within the FOV. As shown in Fig. \ref{fig:iquv_nov}, there are clear signals in Stokes $Q/I_{c}$ and $U/I_{c}$ that capture the linear polarisation from the sunspot, which highlights the instrument's sensitivity. The $45$ degree offset in the signal pattern between Stokes $Q/I_{c}$ and $U/I_{c}$ is also clearly visible. Figure \ref{fig:rte_nov} displays the physical quantities computed from the Stokes filtergrams plotted in Fig. \ref{fig:iquv_nov}. Selecting the umbra region with a continuum upper threshold of $0.6$, the mean magnetic field strength in the umbra is $1420$ G. This is somewhat low for an umbra and may reflect straylight, or indicate that the large Zeeman splitting within the umbra is not well captured by the placement of the wavelength points in PHI.
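The noise levels quoted in this section are obtained by fitting a Gaussian to the histogram of a map dominated by noise (e.g. quiet-Sun Stokes $V/I_c$). A sketch of such an estimator, assuming SciPy; the interface is illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def histogram_noise(data, bins=200):
    """Noise estimate: the sigma of a Gaussian fitted to the histogram of a
    noise-dominated map (e.g. quiet-Sun Stokes V / I_c or a magnetogram)."""
    counts, edges = np.histogram(data.ravel(), bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = (counts.max(), centres[np.argmax(counts)], np.std(data))
    popt, _ = curve_fit(gaussian, centres, counts, p0=p0)
    return abs(popt[2])
```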
The azimuth is of particular interest, showing a strong, coherent signal. The line-of-sight velocity displays the expected redshift on the limb side of the spot, with the corresponding blueshift towards disc centre (Evershed flow)\cite{Evershed1909RadialSun-spots}. \subsection{Magnetogram Noise} Several datasets with different modulation schemes were acquired during a campaign in early November 2021 to test the cadence of the schemes. The noise from the line-of-sight magnetograms of these datasets at different cadences is presented in Fig. \ref{fig:cad_noise}. The total number of frames per image is the product of the two numbers in the accumulation scheme. A clear trend is visible: as the accumulation scheme changes from $[4,5]$ to $[16,1]$, fewer frames are accumulated for each image, and therefore, as expected, the magnetogram noise increases from $6.8$ G to $8.3$ G. The last grouping was the fastest at which the $[4,5]$ scheme could be executed by the instrument, with a cadence of $96$ seconds. It is also clear that the higher the number of cycles of the modulation states (the second value in the accumulation scheme), the lower the magnetogram noise. It must also be noted that, as for the data from February 2021, the compression acts as the main driver of the noise. \section{Conclusion}\label{sec:conc} An on-ground pipeline has been developed to reduce raw data from the HRT instrument to produce high quality data with a polarimetric accuracy of $10^{-3}\cdot I_{c}$ and to infer physical parameters from the polarised light. The $96$ second cadence line-of-sight magnetograms are shown to have a very low noise level of only $6.6$ G, similar to the noise level of the HMI $720$ second magnetograms. This was achieved by calibrating the flat-fields to remove unwanted artefacts from the acquisition process by use of unsharp masking.
As a result of the analysis presented here, the unsharp masking procedure will be implemented onboard the spacecraft, such that the data processed in flight will also be of the highest quality. The absence of the ISS has also been addressed by three additional steps. Despite the increase in data quality, as shown in Fig. \ref{fig:mod_align}, noise levels remain slightly higher than in the standard configuration because of the spacecraft jitter. This pipeline will be embedded into a software tool which will automatically process all the SO/PHI science data that arrive on ground and store them in the appropriate databases. \acknowledgments This work was carried out in the framework of the International Max Planck Research School (IMPRS) for Solar System Science at the Max Planck Institute for Solar System Research (MPS). Solar Orbiter is a space mission of international collaboration between ESA and NASA, operated by ESA. We are grateful to the ESA SOC and MOC teams for their support. The German contribution to SO/PHI is funded by the BMWi through DLR and by MPG central funds. The Spanish contribution is funded by FEDER/AEI/MCIU (RTI2018-096886-C5), a “Center of Excellence Severo Ochoa” award to IAA-CSIC (SEV-2017-0709), and a Ramón y Cajal fellowship awarded to DOS. The French contribution is funded by CNES. \bibliography{new_med.bib} \bibliographystyle{spiebib}
Title: Reassessing candidate eccentric binary black holes: Results with a model including higher-order modes
Abstract: The detection of eccentricity from a gravitational wave signal is expected to help distinguish between formation channels for a given binary. In this study, we reassess all previously-reported binary black holes with claims of possible eccentricity, as well as a few binaries with more interesting source parameters, for the first time using a model (TEOBResumSGeneral) which accounts for the full eccentricity range possible and incorporates the higher-order gravitational emission critical to modelling emission from highly eccentric orbits. We estimate the eccentricity of these five events. For the first time, we present marginal evidence of eccentricity for one of the events: GW190929. Contrary to previous work with different settings, we do not find evidence supporting eccentric orbits for the same systems. We find that the incorporation of eccentricity in our analyses dramatically shifts the posterior in multiple parameters for several events, features that could negatively impact other analyses.
https://export.arxiv.org/pdf/2208.01766
\title{\bf Reassessing candidate eccentric binary black holes: Results with a model including higher-order modes} \author{H. L. Iglesias} \affiliation{Center of Gravitational Physics, University of Texas at Austin, Austin, TX 78712, USA} \thanks{higlesia@utexas.edu} \author{J. Lange} \thanks{jacob.lange@austin.utexas.edu} \affiliation{Center of Gravitational Physics, University of Texas at Austin, Austin, TX 78712, USA} \author{I. Bartos} \thanks{imrebartos@ufl.edu} \affiliation{Department of Physics, University of Florida, PO Box 118440, Gainesville, FL 32611-8440, USA} \author{S. Bhaumik} \affiliation{Department of Physics, University of Florida, PO Box 118440, Gainesville, FL 32611-8440, USA} \author{R. Gamba} \affiliation{Theoretisch-Physikalisches Institut, Friedrich-Schiller-Universit {\"a}t Jena, 07743, Jena, Germany} \author{V. Gayathri} \affiliation{Department of Physics, University of Florida, PO Box 118440, Gainesville, FL 32611-8440, USA} \author{A. Jan} \affiliation{Center of Gravitational Physics, University of Texas at Austin, Austin, TX 78712, USA} \author{R. Nowicki} \affiliation{Center of Gravitational Physics, University of Texas at Austin, Austin, TX 78712, USA} \author{R. O'Shaughnessy} \affiliation{Rochester Institute of Technology, Rochester, NY 14623, USA} \author{D. Shoemaker} \affiliation{Center of Gravitational Physics, University of Texas at Austin, Austin, TX 78712, USA} \author{R. Venkataramanan} \affiliation{Center of Gravitational Physics, University of Texas at Austin, Austin, TX 78712, USA} \author{K. 
Wagner} \affiliation{Rochester Institute of Technology, Rochester, NY 14623, USA} \keywords{eccentric black hole mergers} \section{Introduction} \label{sec:intro} The LIGO and Virgo collaborations have identified many gravitational wave (GW) signals throughout their multiple observing runs (\cite{2018arXiv181112907T,2020arXiv201014527A,LIGOScientific:2021djp}) using the Advanced LIGO \cite{2015CQGra..32g4001L} and Advanced Virgo \cite{2015CQGra..32b4001A} observatories. All their analyses done up to this point (searches, parameter estimation (PE), population studies, etc.) use only models that assume quasi-circular orbits, although a search for eccentric signals in O1 and O2 was recently performed (\cite{2019ApJ...883..149A}). The state-of-the-art models (\cite{2020PhRvD.102d4055O,2019PhRvD.100b4059K, 2021PhRvD.103j4056P}) which were used to accurately measure the masses and spins of binary black hole (BBH) systems inherently assume quasi-circular orbits. Even though the orbital eccentricity of a binary is thought to be radiated away through gravitational wave emission relatively quickly, it is conceivable that signatures of eccentricity remain in the signal \citep{2017ApJ...840L..14S,2018PhRvL.120o1101R,2018PhRvD..98l3005R,2021ApJ...921L..43Z}. Previous investigations have shown that the detection of orbital eccentricities $\gtrsim 0.1$ in GW signals is possible \citep{2021ApJ...915...54N,2021PhRvD.104j2001A,2021PhRvD.104j4016C}. Currently, there are two easily-accessible waveform models which predict the strong-field merger signal for merging binary black holes and incorporate both binary spin and orbital eccentricity: SEOBNRE and TEOBResumSGeneral. The SEOBNRE model is an effective-one-body (EOB) \citep{Buonanno:1998gg,Buonanno:2000ef,Damour:2000we,Damour:2001tu,Damour:2015isa} model that is based on the early quasi-circular non-precessing model SEOBNRv1.
While this model can describe eccentric BBH coalescences, it is limited to eccentricities below 0.2 and spin magnitudes below 0.6 because it is based on the older v1 SEOB model (\cite{2017PhRvD..96d4028C}). The {\tt TEOBResumSGeneral} model is an EOB approximant which can generate waveforms from non-precessing binaries coalescing along generic orbits \citep{Chiaramello:2020ehz, Nagar:2020xsk, 2021PhRvD.103j4021N}, or generic-spin binaries coalescing along quasi-circular orbits \citep{Damour:2014sva, Nagar:2015xqa, Nagar:2018zoe, Nagar:2019wds, Nagar:2020pcj, Riemenschneider:2021ppj, Akcay:2020qrj, Gamba:2021ydi}. In both scenarios, the model includes tidal interactions \citep{Damour:2009wj, Bernuzzi:2014owa, Akcay:2018yyh} as well as subdominant modes up to $\ell=|m|=5$, but not $m=0$ modes. While the model can generate stable waveforms up to an eccentricity of 0.9 with maximal spin magnitudes, comparisons to numerical relativity (NR) have only been made up to eccentricities of 0.2 and spin magnitudes of 0.7 (\cite{2021PhRvD.103j4021N}). Another EOB model in the literature that incorporates orbital eccentricity is SEOBNRv4EHM, which also describes non-precessing spins and includes higher-order modes (\cite{2021PhRvD.104b4046K,2022PhRvD.105d4035R}). Recently, \cite{2021arXiv210707981O}, \cite{2021arXiv210605575G}, \cite{2022NatAs...6..344G}, and \cite{2021ApJ...921L..31R} reanalyzed public LIGO events to search for evidence of eccentricity. However, these studies are not without their limitations. First, \cite{2022NatAs...6..344G} used RIFT \citep{gwastro-PENR-RIFT,gwastro-PENR-Methods-Lange,dissertation-RIFT-Lange} to produce marginal likelihoods for each NR simulation used in the study. Due to the scarcity of eccentric NR simulations, they were unable to produce a full posterior in parameter space.
Instead they reported the discrete likelihood points for each simulation and found the highest likelihood point to be a precessing, eccentric waveform with $e=0.69^{+0.17}_{-0.22}$. Second, while Romero-Shaw et al. were able to produce full posteriors, they performed their inference via a novel post-processing technique: they first analyze with the quasi-circular model IMRPhenomD \citep{2011PhRvL.106x1101A} and, based on those results, reweight their conclusions with the aligned-spin eccentric model SEOBNRE (\cite{2019PhRvD.100l3017P,2019MNRAS.490.5210R}). While this approach seems to give reasonable results, the authors highlight its limitations in section 2.1 of \cite{2021ApJ...921L..31R}; notably, the iterative reweighting approach may be limited by systematic errors between the two models used in the reweighted analysis (\cite{2020PhRvD.102l4069J}). More broadly, a reweighting-based analysis generally cannot easily investigate whether the second model (here, eccentric) extends the posterior into new regions not thoroughly explored with the reference model. To date, only \cite{2021arXiv210707981O} and \cite{2021arXiv210605575G} have performed parameter inference including the continuously-explored effects of eccentricity and non-precessing spins. \cite{2021arXiv210707981O} analyzed the low-mass events GW151226 and GW170608 including eccentricity and non-precessing spins; \cite{2021arXiv210605575G} analyzed GW190521 with non-spinning systems and sampled in initial energy and angular momentum instead of eccentricity, which could impact the prior volume and therefore the evidence. Finally, the available analytic models assume spin-orbit alignment, limiting attempts to disentangle the effects of orbital eccentricity and precession (\cite{PhysRevLett.126.201101}). In this study, we improve on these works by reanalyzing a set of GW events to produce full posteriors using the eccentric model TEOBResumSGeneral in tandem with the PE algorithm RIFT.
For each event, we run \textbf{two} different analyses: a non-precessing, non-eccentric analysis (from now on called TEOBResumS-GIOTTO); and a non-precessing, eccentric analysis (from now on called TEOBResumS-DALI). From these analyses, we can calculate Bayes' factors for eccentricity. This paper is organized as follows. In Section \ref{sec:methods}, we introduce the TEOBResumSGeneral model and review the use of RIFT in this study. In Section \ref{sec:results}, we present the results of parameter inference as well as Bayes' factors for our five events. In Section \ref{sec:conclusions}, we summarize our results and conclude with some brief remarks about our future work. \section{Methods} \label{sec:methods} A coalescing binary black hole system in a quasi-circular orbit can be completely characterized by its intrinsic ($\lambda$) and extrinsic ($\theta$) parameters. By intrinsic parameters we refer to the binary's masses $m_i$ and spins. By extrinsic parameters we refer to the seven numbers needed to characterize its space-time location and orientation. We will express masses in solar mass units and dimensionless spins in terms of cartesian components $\chi_{i,x},\chi_{i,y}, \chi_{i,z}$, expressed relative to a frame with $\hat{\mathbf{z}}=\hat{\mathbf{L}}$ and (for simplicity) at the orbital frequency corresponding to the earliest time of computational interest (e.g., an orbital frequency of $\simeq 10 \unit{Hz}$). We will use $\lambda,\theta$ to refer to intrinsic and extrinsic parameters, respectively. \subsection{Waveform model} \label{subsec:model} In order to extract the eccentricity from detected GW events, we use the TEOBResumSGeneral waveform model \citep{Nagar:2021gss,Nagar:2018zoe,Albanesi:2022xge}, which includes eccentricity along with aligned spins as well as higher-order modes. This waveform family is based on the EOB formalism and can be used to simulate quasi-circular, eccentric or hyperbolic compact binary systems.
This model includes higher multipoles up to $\ell=5$ throughout the binary phases (inspiral, plunge, merger and ringdown) and the eccentricity parameter space (i.e., up to $e\simeq 1$; see figure 19 in \cite{Nagar:2021gss}). TEOBResumSGeneral waveforms have been validated against Numerical Relativity waveforms from the Simulating eXtreme Spacetimes collaboration with $e\leq0.2$ and mass ratio $q\leq3$. For our study we used TEOBResumSGeneral waveforms with two different configurations, corresponding to $e=0$ (TEOBResumS-GIOTTO) and $e>0$ (TEOBResumS-DALI). To generate orbital dynamics from eccentric initial conditions -- an initial eccentricity $e_0$ at initial orbital frequency $f_0$ -- the TEOBResumS-DALI code evolves an orbit starting at apastron, with $r_0=p_0/(1-e_0)$, $p^0_\phi=j_0$ the adiabatic angular momentum implied by angular momentum conservation, and $p_{r_*}=0$. In these expressions, $p_0$ is evaluated numerically from the Hamiltonian such that $\partial_{p_\phi} H( r_0(p_0),j_0(p_0), p_{r_*}=0)=f_0$. The orbital dynamics produce an associated asymptotic gravitational wave strain $h(t,\hat{n}) = \sum_{lm} {}^{-2}Y_{lm}(\hat{n}) h_{lm}(t)$, where the $h_{lm}(t)$ depend on the intrinsic parameters. In practice, we characterize the emission direction relative to the orbital angular momentum direction with two polar angles ($\iota,\phi_{c}$), so the binary's radiation relative to its fiducial inertial frame is fully specified with an initial frequency (to define conventions), the initial eccentricity at that frequency, two masses, both spins, and these two polar angles. While this current parameterization fixes the angle of periapsis (one degree of freedom available for eccentric orbits, associated with the direction of the Runge-Lenz vector in the orbital plane), this angle is expected to be observationally inaccessible in the near future \citep[see, e.g.,][]{2022arXiv220614006C}.
\subsection{RIFT} \label{subsec:rift} RIFT consists of a two-stage iterative process to estimate the source parameters $\bm\lambda,\bm\theta$ responsible for gravitational wave observations $d_k$ via comparison to predicted gravitational wave signals $h_k(\bm{\lambda}, \bm\theta)$, where $k=1\ldots N$ indexes the observing instruments. Assuming a Gaussian, stationary noise model, the log-likelihood can be expressed as \begin{align} \label{eq:loglikelihood} \ln & {\cal L}(\lambda, \theta) = \nonumber \\ & -\frac{1}{2}\sum_k \qmstateproduct{h_k(\lambda,\theta)-d_k}{h_k(\lambda,\theta)-d_k}_k \end{align} where we have omitted normalization constants. (RIFT assumes the input detector noise power spectrum is known, and does not currently marginalize over the accuracy of that estimate, nor over calibration uncertainty.) In these expressions, the inner products are the noise-weighted inner products derived from the $k$th detector's noise power spectrum $S_{n,k}(f)$, \begin{align*} \qmstateproduct{a}{b}_k \equiv \int_{-\infty}^{\infty} 2 df \frac{\tilde{a}(f)^*\tilde{b}(f)}{S_{n,k}(|f|)} \,, \end{align*} where $\tilde{a}(f)$ is the Fourier transform of $a(t)$, $\tilde{a}(f)^*$ denotes complex conjugation of $\tilde{a}(f)$, and $f$ is frequency; see, e.g., \cite{gwastro-PE-AlternativeArchitectures} for more details. We adopt a low-frequency cutoff $f_{\rm low}$ such that all inner products are modified to \begin{eqnarray} \qmstateproduct{a}{b}_k\equiv 2 \int_{|f|>f_{\rm low}} df \frac{\tilde{a}(f)^*\tilde{b}(f)}{S_{n,k}(|f|)} \,. \end{eqnarray} The two iterative stages of RIFT construct the necessary ingredients for an iterative evaluation of Bayes' theorem for this likelihood.
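For concreteness, the truncated inner product and the resulting single-detector log-likelihood can be discretized as below. This is a minimal illustrative sketch, not RIFT's internals: the one-sided array convention, the uniform frequency grid, and the function names are our assumptions.

```python
import numpy as np

def inner_product(a_f, b_f, psd, df, f, f_low=20.0):
    """Discretized <a|b> = 2 * int_{|f|>f_low} df  conj(a~(f)) b~(f) / S_n(|f|).
    a_f, b_f are one-sided (f >= 0) frequency series; folding in the
    negative-frequency half of the integral gives an overall factor of 4."""
    mask = f > f_low
    return 4.0 * df * np.real(np.sum(np.conj(a_f[mask]) * b_f[mask] / psd[mask]))

def log_likelihood(h_f, d_f, psd, df, f, f_low=20.0):
    """Gaussian stationary-noise log-likelihood for a single detector,
    ln L = -(1/2) <h - d | h - d>, up to normalization constants."""
    r = h_f - d_f
    return -0.5 * inner_product(r, r, psd, df, f, f_low=f_low)
```

Summing `log_likelihood` over detectors $k$ (each with its own PSD) reproduces the multi-detector expression above.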
In one stage of RIFT, many worker codes evaluate the \emph{marginal} likelihood $\ln {\cal L}_\alpha \equiv \ln {\cal L}_{\rm marg}(\lambda_\alpha)$ for $\lambda_\alpha$ in some grid of evaluation points, via \begin{align} {\cal L}_{\rm marg}(\lambda) \equiv \int d\theta\, p(\theta) {\cal L}(\lambda,\theta). \end{align} In the other iterative stage, an interpolation algorithm provides a current best estimate $\hat{\cal L}(\lambda)$ based on the current training data $\{(\lambda_\alpha,{\cal L}_\alpha)\}$, and employs this estimate in Bayes' theorem to construct an approximate posterior distribution over the intrinsic parameters $\lambda$: \begin{align} \hat{p}_{\rm post}(\lambda) \simeq \frac{\hat{{\cal L}}(\lambda) p(\lambda)}{\int d\lambda \hat{{\cal L}}(\lambda) p(\lambda)}. \end{align} Again using Monte Carlo integration, RIFT produces independent fair draws from this estimated posterior distribution, thus providing a new grid for the evaluation stage. After several iterations, $\ln \hat{\cal L}$ converges to the true log-likelihood in the neighborhood where the posterior has its support, and the intrinsic samples are drawn from the true posterior distribution. \section{Results} \label{sec:results} In this section, we present the results of our eccentric reanalysis of the following events: GW150914, GW190521, GW190620, GW190706, and GW190929. GW190521 and GW190620 were analyzed due to past evidence or hints of eccentricity \citep{2022NatAs...6..344G, 2021ApJ...921L..31R}; GW150914, GW190706, and GW190929 were picked due to specific characteristics (higher-order modes, unequal masses, high mass, positive spins, etc.). All the data and power spectral densities (PSDs) are the same files used in the LVK's GWTC-2.1 paper \citep{2021arXiv210801045T} and are available on GWOSC \citep{ligo-O1O2-opendata}.
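The two-stage RIFT scheme described in the previous subsection (grid evaluation of ${\cal L}_{\rm marg}$, then interpolation plus fair draws from the approximate posterior) can be illustrated with a deliberately simple one-dimensional toy. Everything here is invented for illustration: the Gaussian stand-in likelihood, the quadratic fit playing the role of the interpolator, and the rejection sampler producing fair draws. It is not the RIFT code.

```python
import numpy as np

rng = np.random.default_rng(0)

def ln_likelihood(lam, theta):
    # Toy stand-in for ln L(lambda, theta): Gaussian in both parameters.
    return -0.5 * ((lam - 1.0) ** 2 / 0.04 + theta ** 2 / 0.25)

def ln_marginal(lam, n_mc=4000):
    # L_marg(lambda) = int dtheta p(theta) L(lambda, theta), estimated by
    # Monte Carlo with theta drawn from its prior (here N(0, 1)).
    theta = rng.normal(0.0, 1.0, n_mc)
    ln_l = ln_likelihood(lam, theta)
    m = ln_l.max()
    return m + np.log(np.mean(np.exp(ln_l - m)))

# Stage 1 ("workers"): marginal likelihood on a grid of lambda values.
grid = np.linspace(0.0, 2.0, 41)
ln_l_marg = np.array([ln_marginal(l) for l in grid])

# Stage 2 ("fit"): interpolate ln L_marg (a quadratic suffices for this toy),
# then produce fair draws from the approximate posterior by rejection sampling.
coeff = np.polyfit(grid, ln_l_marg, 2)

def ln_l_hat(lam):
    return np.polyval(coeff, lam)

lam_prop = rng.uniform(0.0, 2.0, 20000)          # flat prior on lambda
accept = np.log(rng.uniform(size=lam_prop.size)) < ln_l_hat(lam_prop) - ln_l_hat(1.0)
posterior_draws = lam_prop[accept]
```

In RIFT proper, the fit is a Gaussian process or similar interpolator over the multi-dimensional intrinsic space, and the accepted draws seed the next iteration's evaluation grid.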
Following the GWTC-2.1 settings, we chose 4 seconds of data around each event, except for GW190521, where we chose 8 seconds of data\footnote{We chose to use 8 seconds of data instead of following the GWTC-2.1 settings since the original analyses of this event, including \cite{2022NatAs...6..344G}, were done with 8 seconds.}. GW190521, GW190706, and GW190929 use data from all three HLV detectors, with the HL data using the C01 calibration with noise subtracted out around 60 Hz \citep{2021arXiv210801045T,2015PhRvD..91h4034L,2019PhRvD.100j4004C}. For these events, the V data uses the online calibration, except for GW190929, which uses the 1A calibration. Finally, GW150914 uses HL data with the C02 calibration, and GW190620 uses LV data with the C01 calibration, with noise subtracted out around 60 Hz for L and the online calibration for V. The PSDs were generated using BayesWave \citep{2015PhRvD..91h4034L,2019PhRvD.100j4004C} on the same segment of data used in each analysis. For all events except GW190521, we used a low-frequency cutoff of 20 Hz, motivated by the lack of observationally-accessible information at lower frequencies for typical sources. For GW190521, we used a low-frequency cutoff of 11 Hz. The high-frequency cutoff was 896, 224, 448, 896, and 896 Hz for GW150914, GW190521, GW190620, GW190706, and GW190929, respectively. We analyzed each event with the non-precessing, eccentric TEOBResumS-DALI model, including all modes with $\ell\le4$ except the $m=0$ modes. For comparison, we also analyzed each event with the non-precessing, non-eccentric TEOBResumS-GIOTTO model, including the same set of modes as the eccentric analysis. We adopted conventional mass and distance priors \citep[uniform in detector-frame mass and in the cube of the luminosity distance; see, e.g.,][]{gw-astro-PE-lalinference-v1}. For the non-precessing spins, we used a uniform prior on the spin components $\chi_{i,z}\in[-0.99,0.99]$.
For eccentricity, we adopted a uniform prior over a relevant range for each event, up to the largest range of $e\in[0.0,0.9]$, due to the instability of the model past $e=0.9$. \subsection{Masses} \label{subsec:masses} Since we carried out analyses with both TEOBResumS-GIOTTO and TEOBResumS-DALI, we were able to make a direct comparison of the effect of the inclusion of eccentricity on each binary's component masses. For each event, the inferred mass parameters differ depending on whether an analysis allowed for or excluded eccentricity, with no clear pattern in the magnitude or direction of any shift in inferred properties. For example, the right panel of Figure \ref{fig:GW190521} shows the inferred 90\% credible intervals for the joint $M_{\rm source},q$ distribution, derived from inference with both TEOBResumS-GIOTTO and TEOBResumS-DALI. In this case, the $M_{\rm source}$ distributions are quite consistent in shape and support, while the $q$ distributions show modest differences in shape (but not credible interval), usually biasing slightly towards more unequal masses. Though these mass adjustments are astrophysically small, they do introduce technical challenges for reweighting-based algorithms that assess the impact of eccentricity using reference non-precessing analyses. For example, for GW150914 the $M_{\rm source}$ distribution is shifted toward higher masses with the inclusion of eccentricity. To quantify the relative change in parameter $x$ between the two analyses, we define $\epsilon_x = |\mu_1 - \mu_2|/\sqrt{\sigma_1^2+\sigma_2^2}$, where $\mu_k,\sigma_k$ are the posterior mean and standard deviation of $x$ for models $1$ (here, eccentric) and $2$ (here, non-eccentric), respectively.
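The $\epsilon_x$ statistic defined above is straightforward to evaluate from two sets of posterior samples; a minimal helper (the function name is ours):

```python
import numpy as np

def relative_shift(samples_1, samples_2):
    """epsilon_x = |mu_1 - mu_2| / sqrt(sigma_1^2 + sigma_2^2), where mu_k and
    sigma_k are the posterior mean and standard deviation of parameter x under
    model k (e.g. eccentric vs. non-eccentric posterior samples)."""
    mu1, mu2 = np.mean(samples_1), np.mean(samples_2)
    sig1, sig2 = np.std(samples_1), np.std(samples_2)
    return abs(mu1 - mu2) / np.hypot(sig1, sig2)
```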
Among the events included in this study, we found the largest relative shifts for the total mass and mass ratio to be $\epsilon_M=0.909$ for GW190706 (see the right panel of Figure \ref{fig:GW190706}) and $\epsilon_q=0.606$ for GW190929 (see the right panel of Figure \ref{fig:GW190929}), respectively. Even small shifts in the posterior have significant implications for alternative techniques to assess the impact of eccentricity, such as the reweighting strategy adopted by \citet{2021ApJ...921L..31R}. For example, for GW150914, GW190620, and GW190929, the mass distributions are notably offset between the two analyses, suggesting that a reweighting-based strategy could only with the greatest difficulty identify the noteworthy changes identified here. A strikingly common feature of our inferred mass distributions in $M$ (as well as in the joint $M,q$ distribution) is their often notable multimodality. In the most extreme example, GW150914 has a bimodal mass distribution. \subsection{Spins} \label{subsec:spins} As in Section \ref{subsec:masses}, we can also directly compare the analyses to examine how the inclusion of eccentricity affects the recovery of the spin parameters. In our figures, we use the spin parameter $S_{\rm hu}$ \citep{2018PhRvD..97h4002H}, defined as: \begin{equation} M^2S_{\rm hu}=\left(\left(1+\frac{1}{2q}\right)\Vec{S}_1+\left(1+\frac{1}{2}q\right)\Vec{S}_2\right)\cdot\hat{L}. \end{equation} The $S_{\rm hu}$ parameter is similar to the widely used $\chi_{\rm eff}$; however, $S_{\rm hu}$ describes the leading-order effect of orbital hangup on full NR waveforms. The inferred spin and mass-spin distributions are frequently quite different when using a model including eccentricity, with the most notable shift being for GW190706, with $\epsilon_{S_{\rm hu}}=1.08$; indeed, there is a noticeable shift in $S_{\rm hu}$ for all the events except GW190521.
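As a concrete reading of the definition above, $S_{\rm hu}$ can be computed from the component masses and dimensionful spin vectors as follows. This is an illustrative sketch: the mass-ratio convention $q=m_1/m_2$ and the function name are our assumptions, and the convention of \cite{2018PhRvD..97h4002H} should be checked before reuse.

```python
import numpy as np

def s_hu(m1, m2, spin1, spin2, l_hat=(0.0, 0.0, 1.0)):
    """Spin 'hangup' parameter:
    M^2 S_hu = [(1 + 1/(2q)) S1 + (1 + q/2) S2] . L_hat,
    with S_i = chi_i * m_i^2 the dimensionful spin vectors.
    We assume q = m1/m2 here (an assumption, not taken from the text)."""
    q = m1 / m2
    m_tot = m1 + m2
    s_vec = (1.0 + 0.5 / q) * np.asarray(spin1) + (1.0 + 0.5 * q) * np.asarray(spin2)
    return float(np.dot(s_vec, np.asarray(l_hat))) / m_tot ** 2
```

For equal masses and equal aligned spins the two prefactors are both $3/2$, so $S_{\rm hu}$ reduces to $0.75\,\chi$ in that case.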
Interestingly, the inclusion of eccentricity shifts the spin parameter posterior to more negative or more positive values depending on the event. Unsurprisingly, few finely-tuned accidents occur, so binaries with notable shifts in inferred masses usually also have notable shifts in inferred spins, and vice versa. Even more so than the mass distributions alone, the inferred joint mass-spin distributions show striking multimodality for several of our analyses, including GW190620 and GW150914. \subsection{Eccentricity} \label{subsec:eccentricity} Despite the apparently strong impact of eccentricity on other parameters, our inferred eccentricity distributions are frequently quite modest. For example, the inferred distribution of eccentricity for GW190620 strongly supports a non-eccentric origin. Even for GW190521, where the inferred posterior eccentricity peaks near $e\simeq 0.2$, the posterior distribution contains significant support for $e\simeq 0$. In fact, a Bayes factor analysis, performed both by direct integration and by the Savage-Dickey ratio, suggests these two events are most likely non-eccentric, with Bayes' factors of 0.414 and 0.551, respectively. Our conclusions about these events are reasonably consistent with prior studies using \emph{non-precessing} binaries, which generally find at best modest evidence for eccentricity when precession is not allowed. While on the surface our numerical Bayes factors for these events are in notable tension with the results of Romero-Shaw et al., we highlight the substantial systematic and methodological differences associated with our approach, which uses a different eccentric model with a wider prior range and a different starting frequency. While a direct apples-to-apples comparison between these results would need a conversion between the two models' definitions of eccentricity, a closer comparison would also require similar run settings.
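The Savage-Dickey estimate mentioned above exploits the fact that the non-eccentric model is nested in the eccentric one at $e=0$: the Bayes factor is the ratio of prior to posterior density at that point. A simple histogram-based sketch (the bin width and function name are ours, and real analyses use more careful density estimates):

```python
import numpy as np

def savage_dickey_bf(ecc_samples, e_max=0.9, bin_width=0.02):
    """Savage-Dickey ratio for nested models:
    BF(eccentric : non-eccentric) = p_prior(e=0) / p_posterior(e=0),
    for a uniform prior e ~ U[0, e_max]. The posterior density at e = 0 is
    estimated from the lowest histogram bin of the posterior samples."""
    ecc = np.asarray(ecc_samples)
    prior_density = 1.0 / e_max
    posterior_density = np.mean(ecc < bin_width) / bin_width
    return prior_density / posterior_density
```

A posterior identical to the prior yields $\mathrm{BF}\approx1$, while posterior support piled up at $e\simeq0$ drives the Bayes factor below unity, as for GW190620 and GW190521 above.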
Some other events have posterior distributions more suggestive of substantial eccentricity, such as GW190929, which peaks at $e\simeq 0.3$. The resulting Bayes factor for this event, 3.35, marginally favors eccentricity. \begin{table} \caption{\label{tab:ecc} Bayes' factors for each event, comparing the evidence for the eccentric vs.\ non-eccentric hypotheses, standardized to a prior with $e_{\rm max}=0.9$. } \begin{tabular}{ c c } \toprule \textrm{Event} & \textrm{Eccentric vs.\ non-eccentric}\\ \colrule GW150914 & 0.651 \\ GW190521 & 0.551 \\ GW190620 & 0.414 \\ GW190706 & 0.208 \\ GW190929 & 3.35 \\ \botrule \end{tabular} \end{table} \section{Conclusions} \label{sec:conclusions} In this work, we presented a state-of-the-art model-based assessment of the presence and impact of eccentricity in several promising BBH events, based on the non-precessing TEOBResumSGeneral model, including higher-order modes. We compared two analyses with the same model that include and exclude eccentricity. For the first time, we report modest evidence for binary orbital eccentricity in GW190929. However, we do not find evidence in favor of eccentric dynamics for the remaining events, instead favoring a quasi-circular result. Our conclusions are derived using a uniform prior for eccentricity evaluated at an $18 \unit{Hz}$ starting frequency ($10.5 \unit{Hz}$ for GW190521); however, other authors adopting a lower (higher) reference frequency should deduce a larger (smaller) inferred eccentricity, owing to GW-induced orbital circularization. Our results are in numerical tension with previously-presented results for several of these events; however, these numerical differences are most likely due to differences in prior range and starting frequencies, and could also reflect systematics, as we adopted different waveforms and algorithms.
For example, \cite{2022NatAs...6..344G} find higher likelihoods for eccentric, precessing systems compared to quasi-circular, precessing systems for GW190521, using direct comparison to precessing numerical relativity simulations; by contrast, our analysis assumes spin-orbit alignment and finds the full parameter posterior. Likewise, \cite{2021arXiv210605575G} argue that GW190521 is highly eccentric based on an analysis with TEOBResumSGeneral, excluding higher-order modes, starting from an initially unbound configuration at extremely low initial frequencies; by contrast, our analysis uses only the bound, later-time evolution (and thus a higher starting frequency and correspondingly lower eccentricity) and the higher-order-mode form of that model. Similarly, \cite{2021ApJ...921L..31R} found evidence of eccentricity for GW190620 and GW190521 based on analyses that used a re-weighting technique with the eccentric model SEOBNRE, a model that excludes higher-order modes, using a different configuration with different priors and initial frequencies; by contrast, our analysis again uses only the bound, later-time evolution and the higher-order-mode form of a different model, as well as a larger uniform-in-$e$ range. To make a fair comparison, one would need to adopt the same settings as one of the above analyses. Our inferences with this eccentric model often exhibit complex structure. While for many of the recovered parameters we find largely consistent posteriors between the non-eccentric and eccentric runs, there are a number of distributions for the different events that either have a multi-peak structure or a noticeable shift in the posterior. While this did sometimes yield evidence of eccentricity (for GW190929), this was mostly not the case. For example, the most dramatic multi-peak distribution is that of GW190620. Even though the primary peak of this distribution is at $e\simeq 0.3$, this did not translate to a Bayes factor favoring an eccentric scenario.
As with previous analyses, a caveat to this work is the absence of precession. While not fully understood, it is expected that eccentricity could mimic precession. Further work needs to be done to ensure the evidence presented here is not mistaken for evidence of precession. \acknowledgements The authors thank Juan Calder\'on Bustillo and Aaron Zimmerman for useful feedback and conversations. We are thankful to the waveform developers of TEOBResumSGeneral, specifically Alessandro Nagar and Sebastiano Bernuzzi, for their help with implementing their model. The git hash of TEOBResumSGeneral used was bc4fcd0f0d97f9f0351e11c2e880ab0a6422ac20 for the non-eccentric analyses and 39e6d7723dacb23220ff5372e29756e5f94cb004 for the eccentric analyses. I.B., V.G., and S.B. acknowledge the support of the National Science Foundation under grants PHY-1911796 and PHY-2110060, and the Alfred P. Sloan Foundation. HLI, JL, AJ, RN, DS, and RV thank NSF PHY-2114581 and XSEDE TG-PHY120016. ROS is supported by NSF PHY-2012057, PHY-1912632, and AST-1909534. This material is based upon work supported by NSF's LIGO Laboratory, which is a major facility fully funded by the National Science Foundation. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max Planck Society (MPS), and the State of Niedersachsen/Germany, which supported the construction of Advanced LIGO and the construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council.
Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de la Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, and Spain. The construction and operation of KAGRA are funded by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and the Japan Society for the Promotion of Science (JSPS) in Japan; the National Research Foundation (NRF) and the Ministry of Science and ICT (MSIT) in Korea; and Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan. We are grateful for computational resources provided by the Leonard E. Parker Center for Gravitation, Cosmology and Astrophysics at the University of Wisconsin-Milwaukee. We acknowledge the use of the IUCAA LDG cluster Sarathi for the computational/numerical work. We are also grateful for the computational resources provided by the California Institute of Technology in Pasadena, California, as well as the LIGO Livingston Observatory. After completing our study, we became aware of \editremark{Knee et al in prep}, who reassessed the results of \cite{2021ApJ...921L..31R} using parameter inference with TEOBResumSGeneral without higher-order modes, finding similar conclusions to the original study; see their Fig. 10. \bibliography{Refs}
Title: Locations and Morphologies of Jellyfish Galaxies in A2744 and A370
Abstract: We present a study of the orbits, environments and morphologies of 13 ram-pressure stripped galaxies in the massive, intermediate redshift (z$\sim0.3-0.4$) galaxy clusters A2744 and A370, using MUSE integral-field spectroscopy and HST imaging from the Frontier Fields Program. We compare different measures of the locations and morphologies of the stripped sample with a sample of 6 poststarburst galaxies identified within the same clusters, as well as the general cluster population. We calculate the phase space locations of all cluster galaxies and carry out a substructure analysis, finding that the ram-pressure stripped galaxies in A370 are not associated with any substructures, but are likely isolated infalling galaxies. In contrast, the ram-pressure stripped galaxies in A2744 are strictly located within a high-velocity substructure, moving through a region of dense X-ray emitting gas. We conclude that their ram-pressure interactions are likely to be the direct result of the merger between two components of the cluster. Finally, we study the morphologies of the stripped and poststarburst galaxies, using numerical measures to quantify the level of visual disturbances. We explore any morphological deviations of these galaxies from the cluster population, particularly the weaker cases which have been confirmed via the presence of ionised gas tails to be undergoing ram-pressure stripping, but are not strongly visually disturbed in the broad-band data. We find that the stripped sample galaxies are generally divergent from the general cluster sample, with poststarburst galaxies being intermediary in morphology between stripped galaxies and red passive cluster members.
https://export.arxiv.org/pdf/2208.10524
\title{Locations and Morphologies of Jellyfish Galaxies in A2744 and A370} \email{callum.bellhouse@inaf.it} \author[0000-0002-6179-8007]{Callum Bellhouse} \affiliation{INAF-Padova Astronomical Observatory,\\ Vicolo dell’Osservatorio 5, I-35122 Padova, Italy} \author[0000-0001-8751-8360]{Bianca Poggianti} \affiliation{INAF-Padova Astronomical Observatory,\\ Vicolo dell’Osservatorio 5, I-35122 Padova, Italy} \author[0000-0002-1688-482X]{Alessia Moretti} \affiliation{INAF-Padova Astronomical Observatory,\\ Vicolo dell’Osservatorio 5, I-35122 Padova, Italy} \author[0000-0003-0980-1499]{Benedetta Vulcani} \affiliation{INAF-Padova Astronomical Observatory,\\ Vicolo dell’Osservatorio 5, I-35122 Padova, Italy} \author[0000-0002-4382-8081]{Ariel Werle} \affiliation{INAF-Padova Astronomical Observatory,\\ Vicolo dell’Osservatorio 5, I-35122 Padova, Italy} \author[0000-0002-7296-9780]{Marco Gullieuszik} \affiliation{INAF-Padova Astronomical Observatory,\\ Vicolo dell’Osservatorio 5, I-35122 Padova, Italy} \author[0000-0002-3585-866X]{Mario Radovich} \affiliation{INAF-Padova Astronomical Observatory,\\ Vicolo dell’Osservatorio 5, I-35122 Padova, Italy} \author[0000-0003-2150-1130]{Yara Jaff\'e} \affiliation{Instituto de F\'isica y Astronom\'ia, Universidad de Valpara\'iso, Avda. 
Gran Breta\~na 1111 Valpara\'iso, Chile} \author[0000-0002-7042-1965]{Jacopo Fritz} \affiliation{Instituto de Radioastronomia y Astrofisica, UNAM, Campus Morelia, AP 3-72, CP 58089, Mexico} \author[0000-0003-1581-0092]{Alessandro Ignesti} \affiliation{INAF-Padova Astronomical Observatory,\\ Vicolo dell’Osservatorio 5, I-35122 Padova, Italy} \author[0000-0002-8372-3428]{Cecilia Bacchini} \affiliation{INAF-Padova Astronomical Observatory,\\ Vicolo dell’Osservatorio 5, I-35122 Padova, Italy} \author[0000-0002-8238-9210]{Neven Tomi\v{c}i\'c} \affiliation{INAF-Padova Astronomical Observatory,\\ Vicolo dell’Osservatorio 5, I-35122 Padova, Italy} \author[0000-0001-5492-1049]{Johan Richard} \affiliation{Univ Lyon, Univ Lyon1, Ens de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, F-69230, Saint-Genis-Laval, France} \author[0000-0001-9976-1237]{Genevi\`eve Soucail} \affiliation{Institut de Recherche en Astrophysique et Plan\'etologie (IRAP), Universit\'e de Toulouse, CNRS, UPS, CNES, 14 Av. Edouard Belin, 31400 Toulouse, France} \section{Introduction} The study of galaxy interactions within clusters is critical to understanding the growth and development of galaxies. Many different processes can affect or disrupt a galaxy's gas content, which can have profound effects on its subsequent evolution. Since the early work of \citet{Butcher1978a,Butcher1978b}, it has been understood that the color distribution of a galaxy population evolves with redshift, with greater fractions of blue galaxies at higher redshifts in comparison to the local universe. Many works since then have proposed different mechanisms, both internal to a galaxy and resulting from interactions with its environment, that can act to transform galaxies and may contribute to the quenching of the global population of galaxies throughout cosmic time. The environmental processes that act upon cluster galaxies can be divided into gravitational and hydrodynamical effects.
The former include \textit{tidal interactions} \citep{Spitzer1951,Toomre1977,Tinsley1979,1983ApJ...264...24M,Mihos1994,Springel2000}, caused by direct gravitational interaction between galaxies, and \textit{harassment} \citep{1996Natur.379..613M,1998ApJ...495..139M}, resulting from the cumulative effect of many high-speed close encounters between cluster members. In both gravitational processes, the stellar \textit{and} gas components of the galaxies are affected. On the other side of the coin lie the hydrodynamical interactions, which primarily impact the gas component with little to no impact on the pre-existing stellar population. Such effects include the removal of the outer gas reservoir of the affected galaxy via \textit{starvation/strangulation} \citep{Larson1980,Balogh2000}, and in more extreme cases, the stripping of the internal gas component from the galaxy in the process known as \textit{ram-pressure stripping} (hereafter RPS). First discussed in \citet{1972ApJ...176....1G}, RPS is one of the most efficient mechanisms \citep{Boselli2006} that can abruptly quench star formation and greatly disrupt a galaxy's gas content. RPS can occur when a galaxy moves sufficiently quickly through the dense intracluster medium (hereafter ICM) of galaxy clusters. The ram-pressure effect scales with environmental density and galaxy velocity, preferentially affecting galaxies on steep, radially infalling orbits \citep{Jaffe2018}. A galaxy's ability to retain its gas is dependent on its stellar and gas mass surface densities as well as the mass of its dark matter halo, with more massive galaxies able to weather the effect and retain some gas for longer than their less massive counterparts. Ram pressure acts upon the gas component of a galaxy with only subtle, indirect influence on the existing stellar component \citep{Smith2012}.
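The scaling just described is often summarized by the \citet{1972ApJ...176....1G} criterion, $\rho_{\rm ICM} v^2 \gtrsim 2\pi G \Sigma_\star \Sigma_{\rm gas}$: stripping occurs where the ram pressure exceeds the gravitational restoring pressure of the disc. A back-of-the-envelope sketch (the function name and the test numbers are illustrative, not values from the text):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gas_is_stripped(rho_icm, v, sigma_star, sigma_gas):
    """Gunn & Gott (1972) criterion in SI units: ram pressure rho_ICM * v^2
    versus the restoring pressure 2 pi G Sigma_star Sigma_gas.
    Returns True where the local gas is expected to be stripped."""
    ram_pressure = rho_icm * v ** 2
    restoring_pressure = 2.0 * math.pi * G * sigma_star * sigma_gas
    return ram_pressure > restoring_pressure
```

With a dense ICM and a fast infall velocity the ram-pressure term dominates, while in cluster outskirts at low velocity the restoring term wins, reproducing the preference for steep, radially infalling orbits noted above.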
Observational effects can include the formation of tails of stripped gas trailing behind the galaxy \citep{Fumagalli2014}, compression of the leading edge of the disc by the ICM \citep{2001ApJ...561..708V}, and the onset of star formation in the tails with the condensation of star-forming clumps \citep{Kenney2014}. In addition, it has recently been shown \citep{Bellhouse2021} that unwinding of the galaxy's spiral arms can occur during the early stages of RPS, due to removal of material from the outside edges of the disc. Processes that induce abrupt quenching in cluster galaxies give rise to the population of so-called post-starburst (hereafter PSB) galaxies. PSB galaxies exhibit low to negligible star formation, characterised by a lack of nebular emission lines, whilst exhibiting strong Balmer absorption lines indicative of substantial star formation in the past $\sim$ 1 Gyr. Previous studies have pointed towards RPS being responsible, at least in part, for the formation of PSB galaxies in clusters. Evidence includes the correlation of their locations with substructures in the ICM \citep{Poggianti2004}, the similarity in the spectral features of recently quenched regions within RPS galaxies compared with PSB galaxies \citep{Gullieuszik2017, Werle2022}, and evidence of very recent stripping in certain PSB galaxies \citep{Werle2022}. Although RPS is known to be one of the most efficient mechanisms influencing the evolution of cluster populations, its relative contribution within the context of galaxy evolution in the universe is only recently starting to be quantified \citep{Vulcani2022}. Many examples of RPS galaxies have been presented to date, including in large surveys of visually selected RPS candidates such as \citet{Poggianti2016} and \citet{Roberts2020} at low redshifts, \citet{McPartland2016} from $0.3<z<0.7$ and as part of wider surveys such as the Grism Lens-Amplified Survey from Space \citep[GLASS,][]{Treu2015b,Vulcani2016, Vulcani2017}.
This has given us an overview of the stripping processes throughout the recent history of the universe and beyond, and newer studies have continued to probe outside the local universe \citep{boselli2019b,Kalita2019,Stroe2020,Durret2021,Ebeling2014}. Extending the known sample to higher redshifts opens up the opportunity to better understand the impact of stripping on the evolution of clusters, and offers insight into the influence of stripping throughout the history of the clusters we observe today. A powerful tool in diagnosing and understanding the processes which occur within and around a galaxy is Integral Field Unit (IFU) spectroscopy, which enables the exploration of both the spatial and spectral information of the observed objects. This allows properties such as star formation rates and star formation histories to be traced across a galaxy, and ionized gas to be mapped in location and velocity throughout the galaxy and its tails, making it an extremely useful tool for characterizing RPS interactions. IFU observations have proven extremely useful in studies of RPS \citep{Merluzzi2013,Fumagalli2014,2016MNRAS.455.2028F,Poggianti2017}, probing the kinematics and ongoing processes within galaxies during infall and leading to new discoveries about the resulting effects of ram-pressure interactions on galaxies \citep{Poggianti2017b,Bellhouse2021}. In this analysis we utilize observations gathered by the MUSE (Multi Unit Spectroscopic Explorer, \citealt{2010SPIE.7735E..08B}) IFU at the European Southern Observatory (ESO) Very Large Telescope. In this work, we focus on two clusters, Abell 2744 and Abell 370, located around $0.3<z<0.4$, which are both post-mergers, and which are the first two clusters to have been observed within the MUSE Guaranteed Time Observations program. The aim of this study is to investigate the distribution of identified RPS galaxies in A2744 and A370, both in phase space and within the context of the cluster substructures.
We also aim to compare different morphological parameters to test whether these confirmed RPS galaxies could be detectable using automated methods, which could be applied to other Frontier Fields clusters in a future study. This paper is part of a series of works which aim to characterise the process of stripping within Frontier Fields clusters, including \citet{Moretti2022} and \citet{Werle2022}. The paper is structured as follows. Section \ref{sec:data} describes the data used in this analysis as well as the sample selection process. In Section \ref{sec:phasespace+ds} we outline the phase space and substructure detection techniques used to analyse the distributions of galaxies in the clusters. In Section \ref{sec:xray+lensing}, we compare the locations of the RPS and PSB galaxies with X-ray and mass maps of the clusters. Section \ref{sec:morphology} describes an analysis of the galaxy morphologies, comparing the RPS and PSB galaxies with the sample of red and blue cluster galaxies, using an array of different morphology measures. Finally, in Section \ref{sec:discussion} we summarise and interpret the results of the work. \section{Observations and Data}\label{sec:data} \subsection{Clusters}\label{sec:clusters} Abell 2744 (z=0.3064, $\sigma=1497 \mathrm{km\,s}^{-1},$ \citealt{Owers2011}, hereafter A2744) is a merging cluster with a virial mass of $7.4\times10^{15}\mathrm{M}_\odot$, mostly comprised of two distinct components with v$_{\rm pec} = -1308 \pm 161 \mathrm{km\,s}^{-1}$, $\sigma =1277\pm189\mathrm{km\,s}^{-1}$ and v$_{\rm pec} =2644\pm72\mathrm{km\,s}^{-1}$, $\sigma = 695 \pm 76 \mathrm{km\,s}^{-1}$ \citep{Mahler2018}. A2744 is in a particularly dynamical state due to its merging history, with a significant blue galaxy excess of $2.2\pm0.3$ times that of nearby clusters in the same core regions \citep{Owers2011}.
The cluster's merging history is of particular interest, as it provides a valuable insight into the link between cluster mergers and galaxy star formation activity. An increased fraction of starbursting blue galaxies, driven by interactions with the disturbed ICM, is a strong candidate contributor to the scatter in the Butcher-Oemler effect \citep{Kauffmann1995,Miller2006}. The complex merging history of A2744 is explored in detail using X-ray and optical spectroscopy in \citet{Owers2011}, who identify two major substructures within the cluster, the Northern Core (NC) and the Southern Minor Remnant Core (SMRC), as well as a region labelled the Central Tidal Debris (CTD), which is close in projection to the SMRC but exhibits a velocity close to that of the NC. They propose a scenario of a post-core-passage major merger, in addition to an interloping minor merger, with the CTD being a region stripped from the NC by the interaction. The locations of each of these regions are shown in Figure~\ref{fig:A2744_footprint} in Appendix~\ref{sec:appendix}, for context. Abell 370 (z=0.375, $\sigma=1789\,\mathrm{km\,s}^{-1}$, \citealt{Richard2021}, hereafter A370) is a historically significant cluster, both within the context of galaxy evolution studies \citep{1984ApJ...285..426B,Dressler1997,Dressler1999} and for the study of gravitational lensing, as it contains one of the first observed giant-arc lens systems \citep{Lynds1986,Soucail1987,Soucail1988}. The cluster has a total virial mass of $\mathrm{M}_{\rm vir}$ = $3.3 \times 10^{15} \mathrm{M}_\odot$ from weak lensing measurements \citep{Umetsu2011a,Umetsu2011b}. The cluster exhibits a bimodal distribution of galaxies consistent with a major merger \citep{Richard2010} of two progenitor clusters with masses $\mathrm{M}_{\rm vir}$ = $1.7 \times 10^{15} \mathrm{M}_\odot$ and $\mathrm{M}_{\rm vir}$ = $1.6 \times 10^{15} \mathrm{M}_\odot$ \citep{Molnar2020}.
The centres of the X-ray and dark matter components in both peaks are fairly close together in comparison to similar merging clusters, which suggests that the merger axis is predominantly along the line of sight \citep{Richard2010}. The northern and southern brightest cluster galaxies (BCGs) are located at z=0.3780 and z=0.3733 respectively \citep{Lagattuta2019}, corresponding to a line-of-sight velocity separation of 1024$\,\mathrm{km\,s^{-1}}$ \citep{Molnar2020}. Unlike in A2744, the distribution of velocities of the cluster members does not exhibit a distinctly bimodal shape, suggesting that the merging clusters may have already experienced a previous passage leading to mixing of the populations \citep{Lagattuta2019}. \subsection{Data} In this study, we utilize data from the MUSE Lensing Cluster GTO program \citep{Bacon2017,Richard2021}. The observations cover the central regions of the clusters with single pointings or mosaics, with effective exposure times from 2 hours up to 15 hours under very good seeing conditions ($\sim0''.6$). The full set of clusters observed with MUSE, described in \citet{Richard2021}, is compiled from the MAssive Clusters Survey \citep[MACS,][]{Ebeling2001}, Frontier Fields \citep[FF,][]{Lotz2017}, GLASS \citep[][]{Treu2015b} and the Cluster Lensing and Supernova survey with Hubble \citep[CLASH,][]{Postman2012} programs. In the case of A2744, the MUSE data consist of a $2\times2$ mosaic of GTO observations, with a field of view (FoV) of $\sim2'\times2'$ ($2' = 0.27\mathrm{R}_{200}$) centered on RA= $0\mathrm{h}~14'~20.952''$ and DEC = $-30^\circ~23'~53.88''$, covering the region that includes the southern and central structures but excludes the northern core and interloper (the MUSE FoV of A2744 is shown overlaid on the HST F606W image in Figure~\ref{fig:A2744_footprint} in Appendix~\ref{sec:appendix}). 
For A370, the data consist of a $2\times2$ mosaic of observations centered on RA= $2\mathrm{h}~39'~53.111''$, DEC=-$1^\circ~34'~55.77''$, which expands on the single pointing of the GTO program, extending the coverage to a $\sim2'\times2'$ ($2' = 0.24\mathrm{R}_{200}$) region of the cluster \citep{Lagattuta2019} (the MUSE FoV of A370 is shown overlaid on the HST F606W image in Figure~\ref{fig:A370_footprint} in Appendix~\ref{sec:appendix}). The complete data analysis of the cluster sample and the redshift catalogs are given in \citet{Richard2021}. The HST data comprise WFPC2, ACS/WFC and WFC3-IR images which cover the MUSE observations, sourced from High-Level Science Product (HLSP) images from the CLASH and FF repositories. For both clusters, we use only the HST data which overlap the MUSE observations; in particular, we make use of the F435W, F606W and F814W observations from the FF data. Full details of the observations are given in the FF survey paper \citep{Lotz2017}. For comparison with the cluster mass distribution, we also make use of the Clusters As TelescopeS \citep[CATS,][]{Jullo2009,Richard2014,Jauzac2015a,Jauzac2015b,Limousin2016,Lagattuta2017,Mahler2018} mass surface density model. This model is produced using the \textsc{lenstool} code, which uses the positions, magnitudes, shapes, multiplicities and redshifts of lensed objects to derive the mass distribution of the cluster. The overall mass distribution is calculated as a superposition of the smooth, global cluster potential and smaller individual substructures associated with bright cluster member galaxies. The full methodology of the technique is presented in \citet{Jullo2009}. We also utilize X-ray images based on Chandra data, described in \citet{Mantz2010} \citep[see also][]{vonderLinden2014,Vulcani2017}, in order to compare the cluster gas distribution to the locations of the observed RPS and PSB galaxies. 
The X-ray images use the 0.7-2.0~keV energy band observations, which is the preferred range for tracing gas mass, as the emissivity in this range is largely insensitive to the gas temperature \citep{Mantz2010}. \subsection{Cluster Membership and Galaxy Colours}\label{sec:membership} In order to have an overview of the cluster population against which we can later compare RPS and PSB galaxies, we identify the sample of cluster galaxies within the observed central region by their velocities, and highlight color-magnitude selected red and blue galaxies in order to contextualize our samples within the different cluster populations. We first extracted the sample of confirmed cluster members by calculating the peculiar velocity of each galaxy using the \citet{Richard2021} spectroscopic redshifts, and selecting galaxies within a specified threshold of the cluster velocity, which was set at $\pm 3\times(1+\mathrm{z})\sigma$, where $\sigma$ denotes the cluster velocity dispersion. For both A2744 and A370, we subdivided the velocity-cut sample into red and blue galaxies using their distribution in F606W-F814W vs F814W color-magnitude space, shown in Figure~\ref{fig:CM_both}. For each cluster individually, we employed Gaussian mixture models (GMM) to extract two groupings of objects in the color-magnitude space, which closely corresponded to the red sequence and the blue cloud. The Gaussian mixture model yields the probability with which each object belongs to either group. We fitted the red sequence by selecting galaxies with a probability $>0.9$ of belonging to the upper group of the color-magnitude space, and fitting a linear regression line to this sample, marked by the red dashed line in Figure~\ref{fig:CM_both}. We then assigned to the red sample all galaxies above the fitted red sequence line, as well as any galaxies below the line with a $>0.9$ probability of belonging to the upper group extracted from the GMM. 
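The two-component colour split can be illustrated with a minimal EM fit; the following sketch (our own naming and initialisation) fits the one-dimensional colour distribution only, whereas the classification above uses the full colour-magnitude plane:

```python
import numpy as np

def red_probability(colour, n_iter=200):
    """Minimal two-component 1D Gaussian mixture fitted with EM.

    Returns, for each galaxy, the probability of belonging to the
    redder (higher-mean) component, analogous to the >0.9 cut used
    to seed the red-sequence fit. Illustrative 1D sketch only.
    """
    x = np.asarray(colour, dtype=float)
    mu = np.percentile(x, [25.0, 75.0])   # crude initial means
    sig = np.full(2, x.std())
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        pdf = w / (sig * np.sqrt(2 * np.pi)) * np.exp(
            -0.5 * ((x[:, None] - mu) / sig) ** 2)
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means and dispersions
        n = resp.sum(axis=0)
        w = n / x.size
        mu = (resp * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return resp[:, np.argmax(mu)]
```

Galaxies with a returned probability above 0.9 would seed the linear red-sequence fit, as described above.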
All remaining galaxies in the color-magnitude space were then assigned to the blue sample. We further subdivided the blue galaxies into those with F606W-F814W$<0.6$, marked as blue in the figure, and those with intermediate colors, marked in light blue. For A2744, we spectroscopically confirm 158 cluster members. Of the 148 of these which have good magnitude measurements in F606W and F814W, 114 galaxies are on the red sequence. For A370, we confirm 248 cluster members, 186 with good magnitude measurements in F606W and F814W, of which 116 are on the red sequence. In the full sample across both clusters, the faintest red sequence galaxy we detect has an F814W magnitude of 25.4, and the faintest blue galaxy we detect has an F814W magnitude of 28.2. For the rest of the analysis, we applied a magnitude limit of 25.5 to both clusters in both F814W and F606W filters, which is close to our detection limit for red galaxies, but does not exclude any of our RPS or PSB sample. The magnitude limit is shown by the grey shaded region in both panels of Figure \ref{fig:CM_both}. \subsection{Visual identification and sample selection}\label{sec:ident} We use the samples of galaxies identified in \citet{Moretti2022}, who selected RPS and PSB galaxies based on visual inspection of the optical HST data and MUSE spectra simultaneously. Potential RPS galaxies were selected based on the presence of unilateral tails/debris exhibiting emission lines in the MUSE data. Some of the selected galaxies also exhibited tails in the HST data, which were confirmed to be associated with the galaxy from the MUSE redshifts. The PSB galaxies were selected based on their spectral features, targeting objects lacking emission lines associated with ongoing star-formation but exhibiting strong Balmer lines in absorption. 
This classification can be biased against objects that have some ionized gas due to processes other than star formation; however, our spatially resolved data allow us to identify these cases. This is the case for A2744\_07, where we find centrally concentrated emission associated with an AGN \citep[see][for details]{Werle2022}. The full details of the RPS and PSB sample selection processes are outlined in \citet{Moretti2022}. A focused analysis of the PSB galaxies in these clusters, alongside others, is presented in \citet{Werle2022}. From \citet{Moretti2022}, there are 6 RPS galaxies as well as 4 PSB galaxies within the MUSE field of view of A2744. We note, however, two galaxies of interest, which show features of being both RPS and PSB \citep{Werle2022}. One of the RPS sample galaxies, A2744\_01, has a clear tail in $\mathrm{H}\alpha$ but no emission lines within the disk, suggesting that it is in an intermediate phase between the two types. In addition, one of the PSB galaxies, A2744\_07, has traces of extraplanar emission, suggestive of a tail from a past stripping event. We primarily classify A2744\_01 and A2744\_07 as RPS and PSB respectively, but highlight these galaxies to distinguish them in the rest of the analysis. The locations of the selected galaxies in A2744 are shown marked on the cluster in Figure~\ref{fig:A2744_footprint} in Appendix~\ref{sec:appendix}. In the case of A370, 7 galaxies were visually identified with signs of RPS, alongside 2 PSB galaxies. Two of the galaxies are also noted in other MACS and Frontier Fields works: A370\_01 is highlighted as an extreme case of RPS in \citet{Ebeling2019}, and A370\_08 is noted in \citet{Lagattuta2019} as ID 8006, along with an associated clump of stripped material, ID CL49. The locations of the selected galaxies in A370 are shown marked on the cluster in Figure~\ref{fig:A370_footprint} in Appendix~\ref{sec:appendix}. 
\section{Phase space and substructure analysis}\label{sec:phasespace+ds} In order to explore the cluster environments and to understand the nature of the stripping process for each of the galaxies in our sample, we build a picture of the structure of each cluster and the orbits of the galaxies within using phase space maps and Dressler-Shectman \citep[][hereafter DS]{1988AJ.....95..985D} tests. In general, these diagnostics allow us to better understand how galaxies are interacting with the host cluster, such as the types of orbits they are on and whether they are associated with a group or substructure. Since both clusters are undergoing merging activity, it is particularly important to understand how these galaxies are situated within their environments. \subsection{Phase Space Analysis}\label{sec:phasespace} The nature of a galaxy's orbit gives us a useful measure of the likelihood with which it will experience RPS, which is far more common in galaxies passing close to the cluster center at high velocity. In order to investigate the orbits of the galaxies in our sample, we produced plots of their locations in cluster-projected position velocity phase space \citep{HernandezFernandez2014}. These diagnostics reveal the nature of the galaxies' orbits, allowing us to determine whether they lie on more circular orbits or steep, plunging radial orbits, conducive to RPS \citep{Jaffe2015}. Typically, galaxies at low projected clustercentric radii with high line-of-sight velocities are likely to be within this regime of orbits \citep{Jaffe2018}, infalling steeply into their host cluster and experiencing RPS. For the center of A370, we follow the definition given by \citet{Lah2009}, who use the mid-point between the northern and southern BCGs. We also use the R$_{200}$ radius of 2.57 Mpc from \citet{Lah2009}. 
For the center of A2744, we use the coordinates of the BCG closest to the X-ray peak, as used in \citet{Owers2011}, and an R$_{200}$ radius of 2.00 Mpc measured by \citet{Boschin2006}. For each cluster we plot, for all cluster galaxies, the projected radial distance from the center relative to R$_{200}$ and the line-of-sight velocity deviation from the cluster average, relative to the cluster global velocity dispersion. The resulting phase-space diagrams are shown in Figure~\ref{fig:PS_both}. The galaxies are divided into red and blue (and marked with correspondingly colored points) based on the red sequence classification described in section \ref{sec:membership}. Red and blue filled contours show the kernel density estimates (hereafter KDE) of their respective galaxy samples, to better visualize their distribution within phase space. We mark the galaxies in our RPS sample with solid black stars, and the PSB galaxies with open black squares. In both panels, we denote a 100\% azimuthal completeness radius with a vertical dashed gray line; a circular aperture larger than this radius begins to extend beyond the boundaries of the square MUSE field of view, limiting the number count of visible galaxies. We also mark the average velocities, measured by \citet{Owers2011}, of three important regions described in section \ref{sec:clusters}, the NC, the CTD and the SMRC, as well as the X-ray velocity from the same paper for the region therein described as MISC2. The phase space diagram for A2744 shows the extremely disturbed, non-virialized nature of the cluster. Two distinct clumps are visible, with a velocity separation of approximately $3\sigma$. The clump at $-1\sigma$ contains all galaxies in the RPS sample, as well as a much higher fraction of blue galaxies. 
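The phase-space coordinates plotted in Figure~\ref{fig:PS_both} reduce to two normalised quantities per galaxy; a minimal sketch, with hypothetical function and argument names, is:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def phase_space_coords(z_gal, r_proj_mpc, z_cl, sigma_cl, r200_mpc):
    """Normalised projected phase-space coordinates.

    x: projected clustercentric distance in units of R_200.
    y: line-of-sight peculiar velocity, corrected for the
       cosmological redshift of the cluster, in units of the
       cluster velocity dispersion.
    """
    v_pec = C_KMS * (np.asarray(z_gal) - z_cl) / (1.0 + z_cl)
    return np.asarray(r_proj_mpc) / r200_mpc, v_pec / sigma_cl
```

For example, with the A2744 values used above (z=0.3064, $\sigma=1497\,\mathrm{km\,s^{-1}}$, R$_{200}$=2.00 Mpc), a member at the cluster redshift and 1 Mpc projected distance lands at (0.5, 0).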
The difference in the blue fraction between the two components could be the result of increased star formation \citep{Vulcani2018} in the CTD galaxies due to mild interaction with the ICM \citep{Stroe2017, Stroe2020}, or a higher quenched fraction in the SMRC as a result of its previous merging history. In addition to this, some of our RPS galaxies are located within the MISC2 region identified in \citet{Owers2011}, corresponding to the X-ray surface brightness peak, shown in Figure~\ref{fig:A2744_footprint} in Appendix~\ref{sec:appendix}. In \citet{Owers2011}, a value of $0.3189^{+0.0092}_{-0.0110}$ was measured for the redshift of the X-ray emission, yielding a peculiar velocity of 2854.78$\mathrm{km\,s^{-1}}$ (marked v$_{\rm gas}$ in the left panel of Figure~\ref{fig:PS_both}). This velocity is very similar to the velocity of galaxies in the SMRC (marked v$_{\rm SMRC}$ in the same figure), and greatly offset from our sample of RPS and PSB galaxies. The measured velocities of the RPS sample, as well as the velocities of the merging components of the cluster and the X-ray emitting gas, suggest that the RPS galaxies and most of the PSB galaxies are associated with the NC/CTD, but are passing through a region of gas which is moving with the SMRC. The extreme difference in velocity between the galaxies in this region and the cospatial gas is likely to be inducing the ram-pressure effect observed in our sample of RPS galaxies. This also corroborates the hypothesis put forward by \citet{Owers2011} that the CTD region consists of tidal debris removed from the main cluster (now the NC) during the core-passage phase of its merger with the SMRC. In the case of A370, the phase space diagram shows a comparatively more relaxed distribution of galaxies, despite A370 also being a merging cluster. The observed portion of the cluster has a skewed but single-peaked redshift distribution, as seen in the right-hand panel of Figure~\ref{fig:PS_both}. 
The velocities of the northern and southern BCGs are indicated as black dashed lines in the figure. Previous studies \citep{Lagattuta2019, Molnar2020} have noted that there are no prominent subgroups associated with the velocities of the BCGs. Instead, the velocities of the cluster galaxies appear to follow a more uniform Gaussian distribution. The blue galaxies in A370 are generally located at higher clustercentric radii (median clustercentric distance: $0.25\, \mathrm{Mpc}$, $0.10\,\mathrm{R}_{200}$) compared with the red galaxies (median clustercentric distance: $0.19\,\mathrm{Mpc}$, $0.07\,\mathrm{R}_{200}$), and the velocity dispersion of the blue galaxies ($\sigma_{\rm blue}=2251\,\mathrm{km\,s^{-1}}$) is higher than that of the red galaxies ($\sigma_{\rm red}=1320\,\mathrm{km\,s^{-1}}$). The velocity distribution of blue galaxies also appears to be slightly skewed towards negative values according to the histogram, although not as prominently as in A2744. Two-sample K-S tests show that, at the 10\% significance level, we cannot distinguish the velocity distributions of the two populations ($p=0.14$), but their distributions in clustercentric radius are distinct ($p=0.05$). The RPS and PSB galaxies in our sample are generally located at high positive and negative line-of-sight velocities and are not restricted to any particular region, but are distributed throughout the cluster. The high LOS velocities of the galaxies are conducive to ram-pressure stripping due to the velocity difference between the galaxies and the ICM, whilst the scatter in locations suggests that infall is the root cause, in contrast to a large-scale motion or cluster interaction, which would affect galaxies in a specific region as we observe in A2744. We therefore find two different stories regarding the cause of RPS in these two clusters. In the case of A2744, the RPS galaxies are likely to be experiencing an intense interaction with the ICM in the CTD region, directly resulting from the merging activity of the cluster. 
The galaxies, likely part of the CTD stripped from the NC, are colliding with a dense region of the ICM associated with the SMRC, which has a significantly different velocity. Similar scenarios of merger-induced RPS have previously been discussed in other clusters \citep{Ebeling2019, Stroe2020}. In contrast, within the observed region of A370, the RPS galaxies appear to be isolated infallers, experiencing ram-pressure stripping as they accelerate into the cluster potential well. If, as discussed in \citet{Lagattuta2019}, the cluster has undergone an initial passage, it is possible that the disturbance of the ICM is enhancing the ram pressure, but we do not observe a consistent, large-scale motion as in A2744. \subsection{Cluster Substructure Analysis} We carry out DS tests \citep{1988AJ.....95..985D,Knebe2000} to investigate where our galaxy samples are located within the context of the clusters' substructures. The DS test compares the velocity distribution of each galaxy and its 10 nearest neighbors to the velocity distribution of the cluster, in order to identify regions of consistent velocity that are significantly offset from the cluster in general, indicative of a group or substructure. For each galaxy and its 10 nearest neighbors, we measure the group velocity dispersion $\sigma_{\rm g}$ using the gapper method \citep{Beers1990}, taking the differences between the sorted velocities of the galaxies and weighting them by an approximately Gaussian envelope. 
The deviation of each galaxy and its nearby companions is then defined as: \begin{equation} \delta^{2} = \frac{11}{\sigma_{\rm cl}^{2}}\left[\left(\Bar{v}_{\rm g, pec}\right)^{2}+\left(\sigma_{\rm g} - \sigma_{\rm cl}\right)^{2}\right], \end{equation} where $\sigma_{\rm cl}$ is the cluster velocity dispersion taken from the literature, $\sigma_{\rm g}$ is the gapper-method dispersion of the group and $\Bar{v}_{\rm g,pec}$ is the group mean peculiar velocity, $\Bar{v}_{\rm g,pec} = \Bar{v}_{\rm g} - \Bar{v}_{\rm cl}$. The value of $\delta$ gives a measure of the local deviation in velocity from the cluster as a whole. Groups of galaxies with similar velocities that are significantly different from the cluster average will have higher deviations, highlighting regions that kinematically stand out from the cluster as a whole. In the case of A2744, \citet{Owers2011} consider the system to be a post-core-passage merger; due to the recent core passage, the main structures are still not fully disrupted and are clearly separated in velocity (as shown in the left panel of Figure~\ref{fig:PS_both}). In this work, we focus on the smaller FoV of the MUSE observations, which excludes the NC but covers the SMRC and CTD structures (see Figure~\ref{fig:A2744_footprint} in Appendix~\ref{sec:appendix}). To better separate the SMRC and CTD structures, we perform the DS analysis considering the SMRC as the main structure, which naturally highlights the deviation of galaxies potentially belonging to the CTD (see left panel of Figure~\ref{fig:PS_both}). We note, additionally, that performing the DS test using the average velocity and velocity dispersion of all the cluster galaxies measured by \citet{Owers2011} also yields signs of deviation in both groups, confirming this result (not shown). The DS test results are shown in Figure~\ref{fig:DS_test_both} for both clusters. Galaxies in the clusters are shown as circles, with the size corresponding to their $\delta$ value. 
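The deviation statistic above can be computed directly from the member velocities; the following is an illustrative sketch (function names and calling conventions are our own), with $N_{\rm loc}=11$ for a galaxy plus its 10 nearest neighbours:

```python
import numpy as np

def gapper_sigma(v):
    """Gapper estimator of the velocity dispersion (Beers et al. 1990):
    a weighted sum of the gaps between sorted velocities, with the
    approximately Gaussian weights i * (n - i)."""
    v = np.sort(np.asarray(v, dtype=float))
    n = v.size
    i = np.arange(1, n)
    gaps = np.diff(v)
    return np.sqrt(np.pi) / (n * (n - 1)) * np.sum(i * (n - i) * gaps)

def ds_delta(v_group, v_cl, sigma_cl):
    """DS deviation for one galaxy and its neighbours, following
    delta^2 = (N_loc / sigma_cl^2) * [vbar_pec^2 + (sigma_g - sigma_cl)^2]."""
    v_group = np.asarray(v_group, dtype=float)
    n_loc = v_group.size                 # 11 in this analysis
    vbar_pec = v_group.mean() - v_cl     # group mean peculiar velocity
    sigma_g = gapper_sigma(v_group)
    return np.sqrt(n_loc / sigma_cl**2 *
                   (vbar_pec**2 + (sigma_g - sigma_cl)**2))
```

A tight clump of galaxies at the cluster mean velocity still yields $\delta=\sqrt{N_{\rm loc}}$, since its dispersion deviates from $\sigma_{\rm cl}$; groups offset in mean velocity deviate further.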
The size scale is enhanced $10\times$ in the case of A370 for clarity. Close groups of large circles are indicative of regions of substructure, where several galaxies are moving with velocities significantly deviating from the cluster average. The limiting threshold on the $\delta$ value, indicating a significant deviation from the cluster velocity, is calculated to be $3\times\sigma_{\delta}$, where $\sigma_{\delta}$ is the standard deviation of all velocity deviation values in the given cluster \citep{Girardi1996, OlaveRojas2018}. Galaxies with a $\delta$ above this limit are shown with filled points, while those below the limit are shown unfilled. The RPS galaxies are marked with stars and the PSB galaxies are marked with squares. A2744\_01 is marked as both RPS and PSB for the reasons outlined in section \ref{sec:ident}. The DS test for A2744 indicates that many galaxies are moving with velocities which deviate significantly from the SMRC. This is expected, considering the cluster is undergoing a significant merging event, and the majority of galaxies in the field of view are associated with different merging components. Our DS test for this region of the cluster concurs with the results of \citet{Owers2011}: the most significant substructure in the field of view is the CTD, visible across the north-west half of the figure as a grouping of large circles. All of the ram-pressure stripped galaxies and PSB galaxies in our sample are part of some velocity substructure, which is also expected if the merging of the different components is driving the onset of stripping. 
Combining these results with the phase space analysis, we find that several pieces of evidence consolidate the hypothesis that these galaxies are experiencing a magnified interaction with the ICM resulting from the merger event: \begin{enumerate} \item The presence of many RPS and PSB galaxies in the velocity component at $-1\sigma$, along with the difference in the blue galaxy fraction compared with the component at $2\sigma$. \item The location of these galaxies within or close to the region postulated by \citet{Owers2011} as the CTD (shown by the grey circle in Figure~\ref{fig:DS_test_both} and the green circle in Figure~\ref{fig:A2744_footprint} in Appendix~\ref{sec:appendix}), resulting from the major merger of the clusters. \item The velocity of these galaxies being similar to the average velocity of galaxies in the CTD and the NC (marked v$_{\rm NC}$ in the left-hand panel of Figure~\ref{fig:PS_both}), as measured by \citet{Owers2011}. \end{enumerate} The DS test for A370 highlights a more relaxed distribution of galaxies within the observed region of the cluster in comparison to A2744. The majority of galaxies within the MUSE field of view are not indicated to reside within any separate substructures, and only a few significant substructures are detected, to the north-east, south-west, and, to a lesser extent, the west of the figure. All but two of the RPS galaxies, and all of the PSB galaxies, lie outside the detected substructures, which may suggest that the majority of our sample entered the cluster as isolated galaxies. 
\section{Comparison with X-ray and Gravitational Lensing analysis}\label{sec:xray+lensing} \subsection{X-ray and mass surface density maps} In order to investigate the distribution of RPS and PSB galaxies within the environment of the cluster, we plotted the locations of the galaxies overlaid on the X-ray and the lensing modelled mass surface density maps, shown in Figures \ref{fig:A2744_overview} and \ref{fig:A370_overview} for A2744 and A370 respectively. For both clusters, we show the CATS version 4 \citep{Lagattuta2017, Lagattuta2019, Mahler2018} mass surface density map in cyan and the Chandra X-ray image \citep{Mantz2010,vonderLinden2014,Vulcani2017} in magenta. For display purposes, the X-ray images were smoothed with a $3\times3$ median filter and convolved with a $5\times5$ Gaussian kernel. For each of the RPS and PSB galaxies, we plot the cleaned HST F606W map in white, as well as the MUSE H$\alpha$ map shown in yellow. The H$\alpha$ map was produced using our custom-made emission-line fitting software \textsc{highelf} (Radovich et al. \textit{in preparation}) which is based on the \href{https://lmfit-py.readthedocs.io/}{LMFIT python library (https://lmfit-py.readthedocs.io/)} and fits a user-defined set of emission lines using one or two Gaussian components (see \citealt{Moretti2022}, Section~3 therein for full details of the measurement). The white x marks in each figure show the centers of the clusters used in this study. The cluster maps highlight the differences between the observed regions of the clusters. A2744 has a prominently disturbed gas component, shown by the X-ray map, with the majority of the X-ray emitting gas located to the upper right of the observed region. The mass component, shown by the lensing modelled mass surface density map, is distinct from the X-ray component and the majority of the mass is located to the lower left of the X-ray emission. 
This large offset between the galaxies and the gas is attributed by \citet{Owers2011} to the collision between the major components of the cluster, which has decoupled the gas component from the collisionless galaxy and dark matter components. In comparison, the X-ray and mass surface density maps for A370 are coincident, with the peak of the X-ray emission lying between the two peaks of the mass map (see also \citealt{Lagattuta2017}, Figure~10). In the case of A2744, as discussed in section \ref{sec:phasespace}, the majority of the PSB and RPS galaxies in our sample are located within the X-ray region, to the upper right of the majority of the mass distribution. This X-ray emitting gas has a high velocity offset from the RPS and PSB galaxies. Since this velocity offset is the result of the merger between this subcluster and the NC, the merging activity appears to be driving the RPS interactions, rather than infall alone. In particular, the RPS is being enhanced by the collision between the galaxies associated with one merging component and the gas associated with another. One galaxy of interest in this cluster is A2744\_01, which was classified both as an RPS and a PSB galaxy, due to its long tail of stripped material but lack of star formation or nebular emission lines within its disk. The direction of the galaxy's tail suggests that it has recently passed through the region characterized by very strong ICM X-ray emission, indicating that the galaxy may have just exited a phase of very strong RPS. For A370, the RPS and PSB galaxies are more uniformly distributed around the cluster, and not confined to a particular region. Together with the more uniform distribution of the X-ray component and its alignment with the mass surface density distribution, this suggests that the galaxies are undergoing stripping due to infall rather than collision with a merging component's ICM as in A2744. 
\subsection{Comparing ICM X-ray emission} Several works have explored the correlation between stripping efficiency and the presence of X-ray gradients and shocks in the ICM \citep{Owers2012,Vijayaraghavan2013}. \citet{Vulcani2017} observed that in unrelaxed clusters, H$\alpha$ emitter properties exhibit slight trends with the local ICM X-ray emission, suggesting that some form of interaction with these features may be responsible for the stripping of gas and/or the triggering of star formation. In addition, \citet{Stroe2020} find strong evidence for the triggering of SF by shocks produced by merging activity in the post-core-passage merging cluster CIZA J2242.8+5301, nicknamed the Sausage cluster. The alignment of disturbed features in the Sausage cluster galaxies with the merger axis of the cluster strongly indicates that the galaxies have been disturbed by interactions with the travelling shock fronts in the ICM. In order to investigate whether we see a correlation between stripping activity and ICM X-ray emission, we compared the local ICM X-ray flux at the locations of the galaxies in our sample with that at the locations of the other cluster members. To do this, we calculated the median X-ray flux within an annulus between 15 and 30 kpc from the location of each galaxy, thereby avoiding any X-ray emission from the galaxies themselves. The X-ray fluxes are shown in Figure~\ref{fig:xray_hist} for A2744 (\textit{top}) and A370 (\textit{bottom}), with the RPS galaxies marked in black and white hatching and PSB galaxies marked in grey. The general population of blue cluster galaxies is shown in blue for comparison. We tentatively observe in both clusters that PSB galaxies are generally found within regions of higher ICM X-ray flux than blue cluster galaxies. 
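The local ICM flux measurement amounts to a median over an annular pixel mask; a minimal sketch in pixel units (function and argument names are ours) is:

```python
import numpy as np

def median_annulus(img, x0, y0, r_in, r_out):
    """Median pixel value in the annulus r_in <= r < r_out (in pixels)
    around (x0, y0). The inner radius excludes emission from the
    galaxy itself, as in the 15-30 kpc annuli used in the text."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    return np.median(img[(r >= r_in) & (r < r_out)])
```

The 15 and 30 kpc limits would be converted to pixels with the cluster angular-diameter distance and the instrument pixel scale.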
In the case of A370, the population of RPS galaxies is also found in regions of higher ICM X-ray emission compared with blue cluster galaxies, whilst in A2744 the distribution of ICM X-ray emission around RPS galaxies is comparable to, or lower than, that of the blue cluster galaxies. The differences between the samples are small, however. Two-sample K-S tests revealed no distinction at the 5\% significance level between the distributions of the RPS and PSB galaxies and those of the blue cluster galaxies in both A2744 ($p_{\rm RPS}=0.58$, $p_{\rm PSB}=0.27$) and A370 ($p_{\rm RPS}=0.28$, $p_{\rm PSB}=0.71$) individually. On the other hand, when the two clusters are combined, as shown in Figure~\ref{fig:xray_hist_combined}, the distribution of PSBs becomes distinct from that of the blue cluster galaxies ($p_{\rm PSB}=0.04$). In general, A2744 appears to be too disturbed to draw a reliable conclusion on its own. Compared with A370, the distribution of blue galaxies in A2744 appears to be pushed toward higher X-ray fluxes, which may be due to the dearth of blue galaxies in the SMRC (which corresponds to a region of generally lower X-ray emission) resulting from its past minor merger. Whilst we emphasise the caveat that the 3D distribution of the disturbed cluster ICM and the 3D locations of the galaxies are not known, our result for the PSB galaxies may be consistent with the progression of galaxies through the RPS stage, as they pass through denser regions of the ICM, into the PSB phase. Galaxies encountering denser regions of the ICM are expected to be subjected to more intense RPS, up to the point that the gas becomes fully stripped and the galaxy becomes a PSB galaxy. In this case, galaxies located in denser regions of the ICM may have already been stripped to the point of becoming PSB. 
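The two-sample K-S comparisons quoted here can be reproduced with \texttt{scipy.stats.ks\_2samp}; the arrays below are synthetic stand-ins, not our measured fluxes:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# illustrative stand-ins for the annulus-median ICM fluxes of the
# blue cluster galaxies and the (smaller) PSB sample
flux_blue = rng.lognormal(mean=0.0, sigma=0.5, size=100)
flux_psb = rng.lognormal(mean=0.6, sigma=0.5, size=15)

stat, p = ks_2samp(flux_psb, flux_blue)
# p is the probability of a K-S statistic at least this large if both
# samples were drawn from the same underlying distribution
```

For such small samples \texttt{scipy} computes the exact null distribution, which is appropriate given the handful of RPS and PSB galaxies per cluster.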
Our findings for the PSB galaxies may also be consistent with \citet{Vulcani2017}, who measured the offset between the peaks of the H$\alpha$ and F475W emission, projected along the cluster radial direction, for cluster galaxies in the GLASS survey, and found that a correlation with the ICM X-ray emission emerged for galaxies in unrelaxed clusters. \section{Morphology analysis}\label{sec:morphology} We utilize several quantitative measures of morphology to understand whether our RPS and PSB galaxies occupy a specific region of morphology space. The selection of RPS and PSB galaxies, based on the presence of H$\alpha$ tails measured from the MUSE data, allows us to better explore their distribution in terms of visual morphology parameters, since the selection is not strictly biased towards galaxies with notable visual disturbances. Many galaxies in our sample appear fairly undisturbed in broad-band imaging, but have clear tails in the MUSE data. This allows us to test the sensitivity of these parameters in order to determine whether subtle cases of RPS can still be differentiated from the general cluster population using this analysis. We also investigate whether this analysis is improved by the inclusion of multiple broad-band filters, to incorporate color measurements, or if one broad-band filter is sufficient. This will determine the minimum requirements for potential sample selections based solely on these techniques. For each galaxy, we make a cutout from the F606W HST image and use the python package \texttt{statmorph} \citep{Rodriguez-Gomez2019} to extract different morphological parameters. We ran the \texttt{statmorph} function \texttt{source\_morphology} using the segmentation maps produced by the GTO pipeline, with a gain of 2.5 and a point spread function (PSF) generated using \texttt{SExtractor}. Weight maps and masks were not used in this case. 
The cutouts, segmentation maps and resulting morphology parameters are shown for a few examples in Figure~\ref{fig:segmentation} in Appendix~\ref{sec:appendix}. We focus on the morphological quantities \textit{concentration} and \textit{asymmetry} \citep{conselice2003a, Conselice2014}, as well as \textit{gini} \citep{Glasser1962, Abraham2003,Lotz2004} and \textit{M$_{20}$} \citep{Lotz2004}. We give brief summaries of the morphological parameters used here, and refer the reader to \citet{Roberts2020} for an effective summary and to the original papers cited here for the full details. The \textit{concentration} measure \citep{conselice2003a, Conselice2014} is derived from the ratio of the radii that contain 80 and 20 percent of the total luminosity of a galaxy, giving an indication of the steepness of the light profile of the source. The \textit{asymmetry} \citep{conselice2003a, Conselice2014} parameter is defined from the difference between the flux map of a galaxy and its $180^\circ$ rotated counterpart. This parameter is particularly suitable for detecting the asymmetric offsets and disturbances expected during RPS interactions. We note also the \textit{shape asymmetry} \citep{Pawlik2016} parameter, utilized in \citet{Roberts2020}, which uses the binary detection map instead of the total flux map. This parameter is more sensitive to low surface-brightness features, making it ideal for detecting the disturbances visible in RPS; however, the observations we use in this study have particularly crowded fields, impacting the shape of the detection maps. We found that the standard asymmetry, whilst less sensitive to low surface-brightness features, was a more robust measure in a crowded environment. The \textit{gini} parameter \citep{Glasser1962} is traditionally used within the context of economics and parametrises the distribution of wealth across a population. 
The parameter has also been utilized in astronomy to quantify the distribution of flux in an image, with lower values indicating a more homogeneous distribution and higher values describing a more concentrated source of flux. Finally, the \textit{M$_{20}$} statistic is the ratio of the second order moment of the brightest $20\%$ of pixels in a galaxy's image and the second order moment of the entire image. This parameter is sensitive to bright features offset from the galaxy center, which makes it suitable for detecting large disturbances in a galaxy's morphology. We combine the parameters \textit{concentration} and \textit{asymmetry} in the top-left panel of Figure~\ref{fig:morphology} for both A2744 and A370 together. In the figure, galaxies classified as red (as described in section~\ref{sec:membership}) are marked as red points, galaxies with F606W-F814W$<0.6$ as blue points, and intermediate colour galaxies as light blue points. The non-red cluster population is divided into two groups in this way to better highlight the locations of the bluest galaxies, in order to understand whether they have discernibly distinct morphologies according to this analysis. RPS galaxies are shown as solid black stars and PSB galaxies as open black squares. We also separate our RPS galaxy sample based on the lengths of the H$\alpha$ tails measured by \cite{Moretti2022}. Galaxies with tails longer than 20~kpc are marked with solid black stars, and galaxies with tails shorter than this are marked with open black stars. For the populations of red and blue galaxies, we show the KDE as correspondingly colored contours. For the PSB galaxies as well as the long- and short-tailed RPS galaxies, we plot the median and standard deviation as errorbars, marked with the relevant icon as appropriate. This is done in order to highlight the general distributions of each population in parameter space. 
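As a concrete reference for the four statistics described above, they can be computed from a flux image along the following lines. This is a simplified sketch, not the \texttt{statmorph} implementation we actually use: the asymmetry centre is not minimised, no background correction is applied, and the \textit{M$_{20}$} centre is approximated by the flux-weighted centroid.

```python
import numpy as np

def concentration(img, cy, cx):
    """C = 5*log10(r80/r20) from a circular growth curve about (cy, cx)."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - cy, xx - cx).ravel()
    order = np.argsort(r)
    cum = np.cumsum(img.ravel()[order]) / img.sum()   # cumulative flux fraction
    r_sorted = r[order]
    r20 = r_sorted[np.searchsorted(cum, 0.2)]
    r80 = r_sorted[np.searchsorted(cum, 0.8)]
    return 5.0 * np.log10(r80 / r20)

def asymmetry(img):
    """A = sum|I - I_180| / sum|I|, rotating about the image centre."""
    return np.abs(img - np.rot90(img, 2)).sum() / np.abs(img).sum()

def gini(img):
    """Gini coefficient of the sorted pixel fluxes (Lotz et al. 2004 form)."""
    f = np.sort(np.abs(img).ravel())
    n = f.size
    i = np.arange(1, n + 1)
    return ((2 * i - n - 1) * f).sum() / (f.mean() * n * (n - 1))

def m20(img):
    """log10 of the second-order moment of the brightest 20% of the flux
    over the total moment, about the flux-weighted centroid."""
    yy, xx = np.indices(img.shape)
    f = img.ravel()
    cy = (yy.ravel() * f).sum() / f.sum()
    cx = (xx.ravel() * f).sum() / f.sum()
    mi = f * ((yy.ravel() - cy) ** 2 + (xx.ravel() - cx) ** 2)
    order = np.argsort(f)[::-1]                       # brightest pixels first
    top = order[np.cumsum(f[order]) <= 0.2 * f.sum()]
    return np.log10(mi[top].sum() / mi.sum())
```

Qualitatively, a steep (centrally concentrated) profile raises \textit{concentration} and \textit{gini}, while flux displaced from the centre raises \textit{asymmetry} and \textit{M$_{20}$}.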
We include on the figure the line of A=0.35 from \citet{conselice2003b}, which was given as a threshold to select merging galaxies. Whilst we do not classify merging galaxies in this study, it is useful to understand how our sample would be interpreted using this morphological criterion. We note that this threshold is subject to variation, as the measured asymmetry can vary with image resolution and depth \citep{Lotz2004,Sazonova2021,Thorp2021} and the \texttt{statmorph} asymmetries are lower than those used in \citet{conselice2003b} \citep{Rodriguez-Gomez2019}, but we include the threshold for reference as an indicator of ``significant'' asymmetry. The parameters \textit{gini} and \textit{M$_{20}$} are combined in the top right panel of Figure~\ref{fig:morphology} for both A2744 and A370, with each sample marked and coloured as described above. We include in the figure the lines used in \citet{Lotz2008} to separate out different types of galaxies in \textit{gini}, \textit{M$_{20}$} space: \begin{align*} \mathrm{Mergers{:}}&~\mathrm{G} > -0.14 \mathrm{M}_{20} + 0.33;\\ \mathrm{E/S0/Sa{:}}&~\mathrm{G} \leq -0.14 \mathrm{M}_{20} + 0.33 ~\mathrm{and}~ \mathrm{G}>0.14\mathrm{M}_{20} + 0.80;\\ \mathrm{Sb - Ir{:}}&~\mathrm{G} \leq -0.14 \mathrm{M}_{20} + 0.33 ~\mathrm{and}~ \mathrm{G}\leq0.14\mathrm{M}_{20} + 0.80. \end{align*} As with the \textit{asymmetry} threshold, whilst we do not actively classify these types of galaxies in our sample, we include these lines to compare our sample with the classifications given by these criteria. We find that the RPS galaxies are spread across a wide range of \textit{concentrations}, similarly to the rest of the cluster galaxies. The bluest galaxies, with F606W-F814W$<0.6$, appear to be less concentrated on average compared with the red galaxies. 
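For reference, the classification implied by these boundary lines can be encoded directly (a small illustrative helper, not part of our pipeline):

```python
def lotz_class(g, m20):
    """Classify a galaxy in gini-M20 space using the Lotz et al. (2008) lines."""
    if g > -0.14 * m20 + 0.33:      # above the merger line
        return "merger"
    if g > 0.14 * m20 + 0.80:       # below merger line but above E/S0/Sa line
        return "E/S0/Sa"
    return "Sb-Ir"                  # below both lines
```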
The \textit{asymmetry} provides a more prominent separation between our disturbed sample and the cluster galaxies, with the PSB galaxies located slightly higher than the average, and the RPS galaxies exhibiting much higher values compared with both the PSB galaxies and the rest of the cluster galaxies. The galaxies with long H$\alpha$ tails in the MUSE data have, on average, higher asymmetries in the broad-band data compared with those with shorter H$\alpha$ tails, suggesting that the visual disturbance in broad-band imaging is correlated with the underlying ionized gas disturbance. Comparing the sample with the $\mathrm{A}>0.35$ line, we find that the majority of our long-tailed RPS galaxies would be classified as mergers by this criterion, whilst most of the short-tailed RPS galaxies and PSB galaxies lie below the threshold. This suggests that whilst the $\mathrm{A}>0.35$ criterion is useful for selecting disturbed morphologies, it cannot be solely relied upon to distinguish the underlying cause of the disturbance without making considerations about the environment of the galaxies. The \textit{gini} and \textit{M$_{20}$} parameters on their own do not strictly separate the RPS or PSB galaxies from the rest of the cluster sample. When combined, however, we see that the bulk of the red cluster galaxies are located in a small region in the middle right of the figure, whilst our sample of PSB galaxies are generally located around the outskirts of this region and the RPS galaxies occupy a much wider range of values, generally being found away from this concentrated region of cluster galaxies. The blue galaxies are fairly scattered in this figure; however, there is a high concentration of galaxies with F606W-F814W$<0.6$ in the Sb-Ir region. The galaxies with short and long H$\alpha$ tails are not particularly differentiated by their \textit{gini} and \textit{M$_{20}$} values. 
We find that the mean values of the long- and short-tailed RPS galaxies lie within the merger region given by the \citet{Lotz2008} criteria. As with the asymmetry criterion, this implies that such criteria cannot distinguish between merging and RPS galaxies, and that samples of merging galaxies selected using these criteria may also include galaxies disturbed by ram-pressure. In both the \textit{concentration}--\textit{asymmetry} and \textit{gini}--\textit{M$_{20}$} figures, the galaxies A2744\_01 and A2744\_07 are located at opposite regions of the morphology space. Both of these galaxies exhibit RPS and PSB features (with A2744\_01 being primarily RPS and A2744\_07 primarily PSB) and are likely to be in an intermediate phase between the two types. Their extreme locations in morphology space therefore suggest that the visual indicators of the disturbance are at their highest toward the end of the RPS phase, and are quick to vanish after the stripping ceases. \subsection{Centroid Variance Method} Each of the above methods provides a useful way to quantify various aspects of the morphologies; however, the disturbed morphologies of the ram-pressure stripped sample do not generally occupy specific regions of any given parameter space, but overlap with the general cluster population. Here we test methods to detect the low surface brightness tails and disturbances characteristic of ram-pressure stripping interactions, with the aim of developing a criterion which is tailored to be more sensitive to such objects. To this end, we experiment with measuring the variation between the centroids of flux isocontours to quantify inconsistencies in the light distribution of the galaxy. RPS is often characterized by offset low surface brightness emission, which will cause the centroids of isocontours taken at different flux levels to vary more than they would in an undisturbed galaxy. The technique is summarized as follows. 
We first mask the galaxy with the segmentation map, and take the range of flux values between the minimum and maximum values in the masked image. We ignore the lowest 20\% of this range, as the faintest contours tend to be clipped by the segmentation mask and do not reflect the shape of the galaxy's light distribution. We draw a set of 8 contours, between 20\%-99\% of the flux range, and for comparison an additional set of 8 contours focused only on the outer regions of the galaxy between 20\%-50\% of the flux range. For each flux value we draw an isocontour of the emission, and obtain the position of the non-flux-weighted centroid of that contour using the \texttt{center\_of\_mass} function from the python package \textsc{scipy}. For axisymmetric emission, such as an undisturbed circular or elliptical Gaussian, the centroids of every contour would be expected to lie in exactly the same location. Any disturbances or asymmetries in the light distribution will manifest as variations in the centroid locations. We then calculate the variance and covariance of the set of coordinates over all of the centroids, normalised by the galaxy's half-light radius, to quantify the movement of the centroids across the different flux thresholds, using the equations: \begin{align*} \mathrm{centroid\ variance} &= \frac{\sigma^{2}(\mathrm{X}) + \sigma^{2}(\mathrm{Y})}{\mathrm{r}_\mathrm{e}}\\ \mathrm{centroid\ covariance} &= \frac{|\mathrm{cov}(\mathrm{X},\mathrm{Y})|}{\mathrm{r}_\mathrm{e}} \end{align*} where X and Y are the arrays of x and y coordinates of the flux isocontour centroids and $\mathrm{r}_\mathrm{e}$ is the galaxy half-light radius. A higher variance indicates that the distribution of flux is non-uniform or asymmetric, whilst a high covariance indicates that the disturbance is along a particular direction. We found that the variance of the centroid offers a promising indicator of disturbance. 
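The procedure can be sketched as follows, approximating each isocontour's centroid by the unweighted centroid of the above-threshold pixel mask (a simplification of the contour-based measurement described above; the function name and defaults here are ours):

```python
import numpy as np
from scipy import ndimage

def centroid_variance(img, r_e, lo=0.2, hi=0.99, n_levels=8):
    """Variance and covariance of non-flux-weighted centroids across
    flux thresholds, normalised by the half-light radius r_e."""
    fmin, fmax = img.min(), img.max()
    levels = fmin + (fmax - fmin) * np.linspace(lo, hi, n_levels)
    xs, ys = [], []
    for lev in levels:
        mask = img >= lev                          # pixels above this flux level
        if mask.any():
            cy, cx = ndimage.center_of_mass(mask)  # unweighted (binary) centroid
            ys.append(cy)
            xs.append(cx)
    xs, ys = np.asarray(xs), np.asarray(ys)
    var = (xs.var() + ys.var()) / r_e
    covar = abs(np.cov(xs, ys)[0, 1]) / r_e
    return var, covar
```

For an axisymmetric source every mask centroid coincides, giving zero variance; any skew in the faint emission drags the low-threshold centroids away from the peak and inflates it.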
The covariance performs similarly well, with slightly less separation between disturbed and undisturbed morphologies. An example is shown in Figure~\ref{fig:contours} for A370\_08, which shows the contours between 20\% and 99\% of the range of flux values, and the centroids of each contour as x markers colored accordingly. The skewed light distribution resulting from the disturbance pushes the centroids of the lower surface brightness emission towards the lower left of the figure relative to the central peak, resulting in an increased variance. A scatter plot of the centroid variance in the outer region vs the full range is shown in the lower left panel of Figure~\ref{fig:morphology}. The figure shows that the majority of galaxies in the RPS sample have significantly higher centroid variances than the rest of the cluster sample, using either range of flux values to define the contour levels. The sample of PSB galaxies does not appear to be distinct from the rest of the cluster galaxies, as is the case for the other morphology parameters. The red cluster galaxies are fairly concentrated with low centroid variances, with the blue galaxies and galaxies with F606W-F814W$<0.6$ having slightly higher centroid variances in the outer regions in comparison. The distinction between the stripped galaxies and the rest of the cluster sample suggests that the variance in the non-flux-weighted centroid of emission across different flux thresholds is a promising indicator of disturbed morphologies and could feasibly be used to detect galaxies of interest. It is possible that disturbances due to non-RPS processes, e.g. tidal interactions, could affect the centroids in a similar way; however, on inspection of the non-RPS objects which are close to our RPS sample in the figure, we do not see evidence of gravitational disturbances, and a dedicated sample would be required to test the classification of gravitationally interacting galaxies using this method. 
The plot of the covariances can be found in Figure~\ref{fig:c_c} in Appendix~\ref{sec:appendix}, for reference. To test the success of the centroid variance parameters at resolving the RPS galaxies from the cluster populations, we carried out two-sample K-S tests for each morphology parameter. We found that at the 5\% significance level, the \textit{concentration} ($p=0.12$) and \textit{gini} ($p=0.16$) parameters cannot distinguish the RPS galaxies from the distribution of undisturbed cluster galaxies, whereas for the \textit{asymmetry} ($p=2.84\times10^{-5}$), \textit{M$_{20}$} ($p=0.04$), outer centroid variance ($p=4.13\times10^{-6}$) and full centroid variance ($p=7.11\times10^{-7}$) parameters, the RPS galaxies are distinct from the undisturbed cluster population. To test the selection of RPS galaxies using this diagnostic, we draw a line approximately separating the distinct clump of galaxies in the top right from the rest of the sample, described by the equation: \begin{align*} \log_{10}&(\mathrm{outer\ CV}) > -2\times\log_{10}\left(\mathrm{full\ CV}\right)-2.2 \end{align*} where outer CV and full CV refer to the outer centroid variance and full centroid variance respectively. This criterion yields a sample of 10 galaxies, of which 4 are long-tailed RPS galaxies, 3 are short-tailed RPS galaxies, and 3 are non-RPS cluster galaxies. We calculated the precision and recall of this selection criterion, defined as: \begin{align*} \mathrm{precision}&=\frac{\mathrm{true\ positives}}{\mathrm{true\ positives} + \mathrm{false\ positives}}\\ \mathrm{recall}&=\frac{\mathrm{true\ positives}}{\mathrm{true\ positives} + \mathrm{false\ negatives}} \end{align*} finding a precision of 0.70 and a recall of 0.58. \subsection{Principal Component Analysis}\label{sec:PCA} We applied principal component analysis (hereafter PCA) in order to visualise the distribution of these galaxies and investigate whether similar galaxies are grouped together in the combined morphology space. 
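The selection line and its scoring can be expressed compactly (an illustrative sketch; the variable names are ours):

```python
import numpy as np

def selects_rps(outer_cv, full_cv):
    """Apply the centroid-variance selection line from the text."""
    return np.log10(outer_cv) > -2.0 * np.log10(full_cv) - 2.2

def precision_recall(selected, is_rps):
    """Precision and recall of a boolean selection against true RPS labels."""
    tp = sum(s and t for s, t in zip(selected, is_rps))          # true positives
    fp = sum(s and not t for s, t in zip(selected, is_rps))      # false positives
    fn = sum(not s and t for s, t in zip(selected, is_rps))      # false negatives
    return tp / (tp + fp), tp / (tp + fn)
```

With the counts quoted above (10 selected, 7 of them RPS, out of 12 RPS galaxies in total) this gives precision $7/10 = 0.70$ and recall $7/12 \approx 0.58$.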
PCA reduces multidimensional parameter space by finding the ``principal components'' of covariant parameters, in order to explore relations between different parameters, or simplify the visualization of a large number of parameters which may all scale according to some common underlying property. In this case, each of the morphology parameters gives us complementary, but overlapping information about the shape of a galaxy. By normalizing each of the different quantities to negate the effects of scale, and transforming them into a reduced parameter space, we can retain the maximum amount of information with a simplified set of quantities. PCA transforms a dataset into a set of orthogonal eigenvectors, or principal components, which are linear combinations of the input parameters. These are arranged such that the maximum amount of variance from the original data is contained in the first few eigenvectors of the transformed dataset. To select the number of relevant principal components, we use the rule proposed by \citet{Kaiser1960}, whereby components are rejected if they contain less than the expected variance of uncorrelated variables. In this case, with 9 variables at play, each would contain 11.1\% of the total sample variance if no correlation were present. We find that 3 principal components are above this threshold, which altogether contain 67\% of the total sample variance (PC1: 37\%, PC2: 16\%, PC3: 13\%). Notably, the first principal component contains more than double the sample variance of each of the other components. The variances of the top 5 principal components yielded by the PCA are shown graphically in Figure~\ref{fig:pca_variance} in Appendix~\ref{sec:appendix}. The principal components resulting from the PCA are described in Table~\ref{tab:PCA}. The input parameters (each of the morphology quantities) are given in the first column of the table. 
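The standardisation, decomposition, and Kaiser-rule truncation described above can be sketched in \texttt{numpy} (a hypothetical re-implementation for illustration, not the pipeline actually used):

```python
import numpy as np

def pca_kaiser(X):
    """Standardise columns, eigendecompose the correlation matrix, and keep
    components whose variance fraction exceeds 1/n_features (Kaiser rule)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)       # zero mean, unit variance
    evals, evecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(evals)[::-1]                # sort by descending variance
    evals, evecs = evals[order], evecs[:, order]
    frac = evals / evals.sum()                     # variance fraction per component
    keep = frac > 1.0 / X.shape[1]                 # Kaiser threshold
    return Z @ evecs[:, keep], frac[keep]
```

For 9 input variables the threshold is $1/9 \approx 11.1\%$ of the total variance, matching the criterion quoted above.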
To obtain each of the principal components PC1, PC2 and PC3, the morphology parameters for a given galaxy are scaled by their corresponding weights (given in columns PC1, PC2 and PC3 of table~\ref{tab:PCA} respectively) and combined in summation (i.e.: $\mathrm{PC1}=-0.276\times\textit{concentration} + 0.16\times\textit{asymmetry} - 0.004\times\textit{gini} + ...$). The first principal component, PC1, which contains significantly more of the sample variance than the other components, is most influenced by the outer centroid variance and full centroid variance parameters, with weights of 0.474 and 0.457 respectively. PC2 is driven mostly by asymmetry and gini, whilst PC3 is influenced mostly by the full centroid covariance and the concentration. All of the components, however, are non-negligibly influenced by several other parameters in addition to these. The sample of galaxies is displayed reprojected into the resulting principal component space in Figure~\ref{fig:pca}. These 3 components on their own do not describe physical properties, but help to visualize any groupings of galaxies in higher dimensional parameter space in a simplified plot. \begin{table} \centering \begin{tabular}{l|r r r r} &\multicolumn{3}{c}{Weights $\omega_i$}\\ Input Variable $x_i$ & PC1 & PC2 & PC3\\\hline concentration & -0.276 & -0.073 & 0.481\\ asymmetry & 0.160 & 0.654 & -0.126\\ gini & -0.004 & 0.589 & 0.387\\ M$_{20}$ & 0.282 & 0.370 & -0.302\\ $\mathrm{Log}_{10}$(centroid var.) (outer) & 0.474 & -0.119 & -0.039\\ $\mathrm{Log}_{10}$(centroid covar.) (outer) & 0.436 & -0.113 & -0.011\\ $\mathrm{Log}_{10}$(centroid var.) (full) & 0.457 & -0.096 & 0.324\\ $\mathrm{Log}_{10}$(centroid covar.) (full) & 0.315 & -0.038 & 0.593\\ F606W-F814W color & -0.309 & 0.211 & 0.234\\ \end{tabular} \caption{Table of weights $\omega_i$ assigned to each of the input variables $x_i$ to yield the principal components in the linear combination $\omega_1x_1 + \omega_2x_2 + \omega_3x_3 + ... 
$} \label{tab:PCA} \end{table} The three components are shown in Figure~\ref{fig:pca}, with PC1 compared with PC2 in the upper panel, and with PC3 in the lower panel. It is clear from the figure that PC1 provides the strongest separation between the RPS sample and the cluster galaxies, with the vertical line at PC1 = 4.3 drawn on the figure to mark our suggested threshold. This criterion of $\mathrm{PC1} > 4.3$ selects 4/5 long-tailed RPS galaxies, 4/7 short tailed RPS galaxies, as well as 4 blue galaxies and 1 red galaxy. We calculated a precision of 0.62 and a recall of 0.67 for the threshold of 4.3, and additionally show the two diagnostics calculated over a range of threshold values shown in Figure~\ref{fig:prec_rec} in Appendix~\ref{sec:appendix}. In comparison, the precision of this selection is slightly lower than when selecting galaxies using only the centroid variance measurements, but the recall is higher, i.e., the PCA selection retrieved more of the known RPS sample. We further inspected the non-RPS sample objects and found that two of the blue objects in this region of the diagram are clumps which are close in location and velocity to A2744\_01 and A370\_08, strongly suggesting that they are clumps of material associated with those galaxies, which have been flagged as distinct objects by the source extraction but indeed are considered part of the jellyfish tail by our MUSE analysis \citep{Moretti2022}. The latter of those two objects, the clump to the south of A370\_08, is likely to be the object identified as CL49 in \citet{Lagattuta2017,Lagattuta2019} which the authors also concluded was a clump of material detached from the main galaxy. Two sources, the red object and one of the blue objects, were found to be galaxies overlapping in projection. 
The last blue object was inspected and found to have associated emission lines in the MUSE image; however, the object is very small and faint, and whilst it appears to be disturbed in the HST image, the nature of the disturbance cannot be verified. The distribution of galaxies across PC1 and PC2 is also shown in Figure~\ref{fig:pca_grid}, with the space divided into bins and an example galaxy shown for each bin to indicate typical morphologies corresponding to that combination of parameters. In general, the lower left corner of the figure appears to contain the majority of undisturbed spiral galaxies as well as elliptical and spheroidal galaxies. Moving upwards toward the top left corner of the figure, many of the objects appear to be in increasingly crowded environments or have close companions, which may be impacting their morphology parameters. Moving towards the bottom right, the galaxies appear to have increasingly disturbed morphologies, which is where most of our RPS sample is located. \section{Discussion and Summary}\label{sec:discussion} We have analysed a sample of 12 RPS galaxies and 6 PSB galaxies within two clusters at intermediate redshift, A2744 and A370. We have compared several characteristics of the RPS and PSB galaxies with the general cluster population, specifically, their orbital information, their distribution within cluster substructures and environment, and their morphologies. We found that the general cluster population in the observed field of A2744 follows a bimodal distribution, with the two components having similar velocities to the regions described as the SMRC and the NC in \citet{Owers2011}. All of the RPS galaxies, and all but one of the PSB galaxies, are located in the blueshifted structure, along with a significantly higher fraction of blue galaxies in comparison with the redshifted component. 
Together, this is indicative that the collision of the galaxies in the CTD with the X-ray gas associated with the SMRC is responsible for the stripping being experienced by the RPS galaxies. The difference in blue fractions between the substructures may be due to an excess of blue galaxies in the blueshifted component caused by increased star formation due to weak stripping, or a dearth of blue galaxies in the redshifted component resulting from quenching during a previous merger event. We find that in A370, the RPS galaxies and PSB galaxies are more evenly distributed in phase-space and the majority are not residing in any substructures. Whilst A370 is also a merging cluster, we do not see any evidence that the observed RPS galaxies are the result of the merger, rather that they are more likely to be isolated infalling galaxies. We analysed the ICM X-ray emission at the locations of the cluster galaxies, and found that in both clusters the PSB galaxies reside in regions of higher ICM X-ray flux compared with the blue cluster members and RPS galaxies. In A370 the RPS galaxies are also located at higher ICM X-ray fluxes than the blue cluster galaxies, but not as high as the PSB galaxies. The location of the PSB galaxies in regions of higher ICM X-ray flux, which scales with gas density, is consistent with the population of PSB galaxies being produced, at least in part, by ram-pressure interactions. Finally, we implemented several measures to quantify the morphologies of the galaxies and compared the results for the different samples. We utilised \textit{concentration}, \textit{asymmetry}, \textit{gini} and \textit{M20}, and also tested whether the variance and covariance of the emission centroid of different flux isocontours could be a useful measure of the disturbance caused by RPS. We found that the most effective standalone measure of the morphology was the centroid variance, which shows promise as a potential measure to detect disturbed morphologies. 
By combining the different parameters using principal component analysis, the scatter was further reduced and the separation of both weakly and strongly disturbed galaxies from the general cluster population was made clearer than when using any of the individual morphological quantities alone. This could have practical applications for filtering through large broad-band imaging datasets for potentially disturbed galaxies prior to manual inspection to determine the cause of the disturbance, reducing the workload on human classifiers. These kinds of automated techniques will open up the possibility of detecting samples of RPS candidates from huge catalogs of survey data, including, as we have found, cases where the disturbance is minimal in the broad-band images. By expanding the known RPS sample and obtaining snapshots of galaxies from first infall to fully quenched, we will be able to further explore the process of ram-pressure stripping and its impact on galaxies in clusters. \section*{Acknowledgements} We wish to thank D. J. Lagattuta for valued comments and suggestions. We are grateful to the anonymous referee who helped us clarify and strengthen this analysis. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 833824, GASP project). We acknowledge funding from the INAF main-stream funding programme (PI B. Vulcani) and the Italian PRIN-Miur 2017 (PI A. Cimatti). YJ gratefully acknowledges support from the ANID BASAL project FB210003. \software{Astropy \citep{astropy:2013, astropy:2018}, statmorph \citep{Rodriguez-Gomez2019}, \textsc{sinopsis} \citep{Fritz2017}, \textsc{highelf}} \bibliographystyle{aasjournal} \bibliography{hiz} \appendix \section{Additional Figures}\label{sec:appendix} \label{lastpage}
Title: The planetary theory of solar activity variability: a review
Abstract: Commenting the 11-year sunspot cycle, Wolf (1859, MNRAS 19, 85-86) conjectured that "the variations of spot frequency depend on the influences of Venus, Earth, Jupiter, and Saturn". The high synchronization of our planetary system is already nicely revealed by the fact that the ratios of the planetary orbital radii are closely related to each other through a scaling-mirror symmetry equation (Bank and Scafetta, Front. Astron. Space Sci. 8, 758184, 2022). Reviewing the many planetary harmonics and the orbital invariant inequalities that characterize the planetary motions of the solar system from the monthly to the millennial time scales, we show that they are not randomly distributed but clearly tend to cluster around some specific values that also match those of the main solar activity cycles. In some cases, planetary models have even been able to predict the time-phase of the solar oscillations including the Schwabe 11-year sunspot cycle. We also stress that solar models based on the hypothesis that solar activity is regulated by its internal dynamics alone have never been able to reproduce the variety of the observed cycles. Although planetary tidal forces are weak, we review a number of mechanisms that could explain how the solar structure and the solar dynamo could get tuned to the planetary motions. In particular, we discuss how the effects of the weak tidal forces could be significantly amplified in the solar core by an induced increase in the H-burning. Mechanisms modulating the electromagnetic and gravitational large-scale structure of the planetary system are also discussed.
https://export.arxiv.org/pdf/2208.09293
\onecolumn \title{The planetary theory of solar activity variability: a review} \author{Nicola Scafetta$^{1}$ and Antonio Bianchini$^{2}$} \lyxaddress{$^{1}$Department of Earth Sciences, Environment and Georesources, University of Naples Federico II, Complesso Universitario di Monte S. Angelo, via Cintia 21, 80126 Naples, Italy.} \lyxaddress{$^{2}$INAF, Astronomical Observatory of Padua, Padua, Italy.} \lyxaddress{Email: nicola.scafetta@unina.it; antonio.bianchini@unipd.it} \twocolumn \section{Introduction} Since antiquity, the movements of the planets of the solar system have attracted the attention of astronomers and philosophers such as Pythagoras and Kepler because the orbital periods appeared to be related to each other by simple harmonic proportions, resonances, and/or commensurabilities \citep{Haar,Stephenson1974}. Such a philosophical concept is known as the ``\emph{Music of the Spheres}'' or the ``\emph{Harmony of the Worlds}'' \citep{Godwin,Scafetta2014a}. This property is rather common for many orbital systems \citep{Agol,Aschwanden,Moons,Scafetta2014a}. 
\citet{Bank} improved the Geddes and King-Hele equations describing the mirror symmetries among the orbital radii of the planets \citep{Geddes} and discovered that their ratios obey the following scaling-mirror symmetry relation \begin{align} \frac{1}{64}\left(\frac{a_{Er}}{a_{Sz}}\right)^{\frac{2}{3}} & \approx\frac{1}{32}\left(\frac{a_{Pl}}{a_{Vu}}\right)^{\frac{2}{3}}\approx\frac{1}{16}\left(\frac{a_{Ne}}{a_{Me}}\right)^{\frac{2}{3}}\approx\frac{1}{8}\left(\frac{a_{Ur}}{a_{Ve}}\right)^{\frac{2}{3}}\nonumber \\ \approx\frac{1}{4}\left(\frac{a_{Sa}}{a_{Ea}}\right)^{\frac{2}{3}} & \approx\frac{1}{2}\left(\frac{a_{Ju}}{a_{Ma}}\right)^{\frac{2}{3}}\approx1\left(\frac{a_{7:3}}{a_{3:1}}\right)^{\frac{2}{3}}\approx\frac{9}{8}\label{eq:1.0} \end{align} where $a_{planet}$ are the semi-major axes of the orbits of the respective bodies: Eris (Er), Pluto (Pl), Neptune (Ne), Uranus (Ur), Saturn (Sa), Jupiter (Ju), Mars (Ma), Earth (Ea), Venus (Ve), Mercury (Me), the Vulcanoid asteroid belt (Vu), and the scattered zone surrounding the Sun (Sz). See Figure \ref{fig0}. The ratio 9/8 is, musically speaking, a whole tone known as the Pythagorean \emph{epogdoon}. The deviations of Eq. \ref{eq:1.0} from the actual orbital planetary ratios are within 1\%. Another intriguing aspect regarding the synchronization of the solar system is the fact that many planetary harmonics are found spectrally coherent with the solar activity cycles \citep[e.g.: ][and many others]{Scafetta2012a,Scafetta2020}. The precise physical origin of solar cycles is still poorly known and dynamo models are debated, but recent literature has strengthened the hypothesis of a correlation with planetary harmonics. Actually, a few years after the discovery of the 11-year sunspot cycle, \citet{Wolf} himself conjectured that ``\emph{the variations of spot-frequency depend on the influences of Venus, Earth, Jupiter, and Saturn}''. 
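The four well-constrained terms of this relation are easy to check numerically with approximate semi-major axes (the AU values below are our assumed round inputs, not taken from the paper; the Vulcanoid and scattered-zone terms are omitted because their radii are not sharply defined):

```python
# Approximate semi-major axes in AU (assumed round values)
a = {"Me": 0.387, "Ve": 0.723, "Ea": 1.000, "Ma": 1.524,
     "Ju": 5.203, "Sa": 9.583, "Ur": 19.19, "Ne": 30.07}

# (outer planet, inner planet, scaling factor 2^n) pairs from the relation
pairs = [("Ne", "Me", 16), ("Ur", "Ve", 8), ("Sa", "Ea", 4), ("Ju", "Ma", 2)]
for outer, inner, k in pairs:
    val = (a[outer] / a[inner]) ** (2.0 / 3.0) / k
    print(f"({outer}/{inner})^(2/3) / {k} = {val:.3f}   target 9/8 = 1.125")
```

With these rounded inputs each term lands within roughly 1.5\% of the Pythagorean 9/8.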
\citet{Dicke(1978)} noted that the sunspot cycle shows no statistical indication of being randomly generated but rather of being synchronized by a chronometer hidden deep in the Sun. Solar activity is characterized by several cycles like the Schwabe 11-year sunspot cycle \citep{Schwabe}, the Hale solar magnetic 22-year cycle \citep{Hale}, the Gleissberg cycle ($\sim$85 years), the Jose cycle ($\sim$178 years), the Suess-de Vries cycle ($\sim$208 years), the Eddy cycle ($\sim$1000 years), and the Bray-Hallstatt cycle ($\sim$2300 years) \citep{Abreu,McCracken2001,McCracken2013,Scafetta2016}. Shorter cycles are easily detected in total solar irradiance (TSI) and sunspot records, while the longer ones are detected in long-term geophysical records like the cosmogenic radionuclide ones (\textsuperscript{14}C and \textsuperscript{10}Be) and in climate records \citep{Neff,Steinhilber}. Planetary cycles have also been found in aurora records \citep{Scafetta2012c,ScafettaWillson2013a}. Due to the evident high synchronization of planetary motions, it is worthwhile investigating the possibility that orbital frequencies could tune solar variability as well. However, although Jupiter appears to play the main role in organizing the solar system \citep{Bank}, its orbital period ($\sim$11.86 years) is too long to fit the Schwabe 11-year solar cycle. Thus, any possible planetary mechanism able to create this solar modulation must involve a combination of more planets. We will see that the only frequencies that could be involved in the process are the orbital periods, the synodic periods, and their beats and harmonics. In the following sections, we review the planetary theory of solar variability and show how it is today supported by a substantial body of empirical and theoretical evidence at multiple timescales. 
We show that appropriate planetary harmonic models correlate with the 11-year solar cycle, the secular and millennial cycles, as well as with several other major oscillations observed in solar activity, and even with the occasional occurrences of solar flares. The physics behind these results is not yet fully understood, but a number of working hypotheses will be briefly discussed herein. \section{The solar dynamo and its open issues} The hypothesis we wish to investigate is whether solar activity could be synchronized by harmonic planetary forcings. In principle, this could be possible because the solar structure itself is an oscillator. The solar cyclical magnetic activity can be explained as the result of a dynamo operating in the convective envelope or at the interface with the inner radiative region, where rotational energy is converted into magnetic energy. Under certain conditions, in particular if the internal noise is sufficiently weak relative to the external forcing, an oscillating system can synchronize with a weak external periodic force, as first noted by Huygens in the 17\textsuperscript{th} century \citep{Pikovsky}. A comprehensive review of solar dynamo models is provided by \citet{Charbonneau(2020)}. In the most common $\alpha$-$\Omega$ models, the magnetic field is generated by the combined effect of differential rotation and cyclonic convection. The mechanism starts with an initially poloidal magnetic field that is azimuthally stretched by the differential rotation of the convective envelope, especially at the bottom of the convective region (tachocline) where the angular velocity gradient is steepest. The continuous winding of the poloidal field lines ($\Omega$ mechanism) produces a toroidal magnetic field that accumulates in the boundary overshooting region.
When the toroidal magnetic field and its magnetic pressure become strong enough, the toroidal flux ropes become buoyantly unstable and start rising through the convective envelope, where they undergo helical twisting by the Coriolis forces ($\alpha$ mechanism) \citep{Parker1955}. When the twisted field lines emerge at the photosphere, they appear as bipolar magnetic regions (BMRs), which roughly coincide with the large sunspot pairs and are characterized by a dipole moment that is systematically tilted with respect to the E--W direction of the toroidal field. The turbulent decay of BMRs finally releases a N--S oriented fraction of the dipole moment that allows the formation of a global dipole field, characterized by a polarity reversal as required by the observations (Babcock--Leighton mechanism). However, magneto-hydrodynamic simulations suggest that purely interface dynamos cannot be easily calibrated to solar observations, while flux-transport dynamos (based on the meridional circulation) better simulate the 11-year solar cycle when the model parameters are calibrated to minimize the difference between observed and simulated time--latitude BMR patterns \citep{Charbonneau(2020),Dikpati}. \citet{Cole} showed that by changing the parameters of MHD $\alpha$-$\Omega$ dynamo models it is possible to obtain transitions from periodic to chaotic states via multiple periodic solutions. \citet{Macario-Rojas} obtained a reference Schwabe cycle of 10.87 years, a value also found empirically by \citet{Scafetta2012a} by analyzing the sunspot record. This oscillation will be discussed later in the Jupiter-Saturn model of Sections 4.2 and 6.
Full MHD dynamo models are not yet available, and several crucial questions remain open, such as the stochastic and nonlinear nature of the dynamo, the formation of flux ropes and sunspots, the regeneration of the poloidal field, the modulation of the amplitude and period of the solar cycles, how less massive, fully convective stars with no tachocline may still show the same relationship between rotation and magnetic activity, the role of meridional circulation, the origin of Maunder-type grand minima, the presence of very low-frequency Rieger-type periodicities probably connected with magneto-Rossby waves in the solar dynamo layer below the convection zone, and other issues \citep{Zaqarashvili2010,Zaqarashvili,Gurgenashvili}. \section{The solar wobbling and its harmonic organization} The complex dynamics of the planetary system can be described by a general harmonic model. Any general function of the orbits of the planets -- such as their barycentric distance, speed, angular momentum, etc. -- must share a common set of frequencies with those of the solar motion \citep[e.g.:][]{Jose,Bucha,Cionco2018,Scafetta2010}. By contrast, the amplitudes and phases associated with each constituent harmonic depend on the specific chosen function. Figure \ref{fig1} (A and B) shows the positions and the velocities of the wobbling Sun with respect to the barycenter of the planetary system from 8002 BC to 9001 AD (100-day steps) calculated using JPL\textquoteright s HORIZONS Ephemeris system \citep{Scafetta2010,Scafetta2014a}. We can analyze the main orbital frequencies of the planetary system by performing, for example, the harmonic analysis of the solar velocity alone. Its periodograms, obtained with Fourier analysis (red) and the maximum entropy method (blue) \citep{Press}, are shown in Figure \ref{fig1}C.
Several spectral peaks can be recognized: the $\sim$1.092 year period of the Earth-Jupiter conjunctions; the $\sim$9.93 and $\sim$19.86 year periods of the Jupiter-Saturn spring (half synodic) and synodic cycles, respectively; the $\sim$11.86, $\sim$29.5, $\sim$84 and $\sim$165 year orbital periods of Jupiter, Saturn, Uranus and Neptune, respectively; the $\sim$60 year cycle of the Trigon of Great Conjunctions between Jupiter and Saturn; the periods corresponding to the synodic cycles between Jupiter and Neptune ($\sim$12.8 year), Jupiter and Uranus ($\sim$13.8 year), Saturn and Neptune ($\sim$35.8 year), Saturn and Uranus ($\sim$45.3 year) and Uranus and Neptune ($\sim$171.4 year), as well as their spring periods. The synodic period is defined as \begin{equation} P_{12}=\frac{1}{f_{12}}=\left|\frac{1}{P_{1}}-\frac{1}{P_{2}}\right|^{-1},\label{eq:2.0} \end{equation} where $P_{1}$ and $P_{2}$ are the orbital periods of two planets; the spring period is half of $P_{12}$. Additional spectral peaks at $\sim$200-220, $\sim$571, $\sim$928 and $\sim$4200 years are also observed. The observed orbital periods are listed in Table \ref{tab1}. \begin{table}[!t] \centering{}%
\begin{tabular}{ccc} \hline Planet & days & years\tabularnewline \hline Mercury & 87.969 & 0.241\tabularnewline Venus & 224.701 & 0.615\tabularnewline Earth & 365.256 & 1\tabularnewline Mars & 686.980 & 1.881\tabularnewline Jupiter & 4332.589 & 11.862\tabularnewline Saturn & 10759.22 & 29.457\tabularnewline Uranus & 30685.4 & 84.011\tabularnewline Neptune & 60189.0 & 164.79\tabularnewline \hline \end{tabular}\caption{Sidereal orbital periods of the planets of the solar system.
From the Planetary Fact Sheet - Metric \protect\href{https://nssdc.gsfc.nasa.gov/planetary/factsheet/}{https://nssdc.gsfc.nasa.gov/planetary/factsheet/}.} \label{tab1} \end{table} Some of the prominent frequencies in the power spectra appear clustered around well-known solar cycles such as in the ranges 42-48 years, 54-70 years, 82-100 years (Gleissberg cycle), 155-185 years (Jose cycle), and 190-240 years (Suess-de Vries cycle) \citep[e.g.:][]{Ogurtsov,ScafettaWillson2013a}. The sub-annual planetary harmonics and their spectral coherence with satellite total solar irradiance records will be discussed in Section 5. The important result is that the several spectral peaks observed in the solar motion are not randomly distributed but are approximately reproduced by the following simple empirical harmonic formula \begin{equation} p_{i}=\frac{178.38}{i}\quad yr,\qquad i=1,2,3,\ldots,\label{eq:1.1} \end{equation} where 178.38 years corresponds to the period that \citet{Jose} found both in the solar orbital motion and in the sunspot records \citep[cf.:][]{Jakubcova,Charvatova2013}. A comparison between the observed frequencies and those predicted by the harmonic model of Eq. \ref{eq:1.1} is shown in Figure \ref{fig1}D, where a close correspondence is observed. Eq. \ref{eq:1.1} suggests that the solar planetary system is highly self-organized and synchronized. \section{The Schwabe 11-year solar cycle} \citet{Wolf} himself proposed that the $\sim$11-year sunspot cycle could be produced by the combined orbital motions of Venus, Earth, Jupiter and Saturn. In the following, we discuss two possible and complementary solar-planetary models based on the orbital periods of these four planets. \subsection{The Venus-Earth-Jupiter model} The first model, originally proposed by \citet{Bendandi} and recently recalled by \citet{Battistini}, relates the 11-year solar cycle to the relative orbital configurations of Venus, Earth and Jupiter.
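The synodic relation of Eq. \ref{eq:2.0} and the harmonic series of Eq. \ref{eq:1.1} are straightforward to verify numerically from the sidereal periods of Table \ref{tab1}; a minimal sketch:

```python
# Sanity checks on the synodic relation of Eq. (2.0) and the harmonic
# series p_i = 178.38/i yr of Eq. (1.1), using the sidereal orbital
# periods of Table 1 (in years).
P = {"Me": 0.241, "Ve": 0.615, "Ea": 1.0, "Ma": 1.881,
     "Ju": 11.862, "Sa": 29.457, "Ur": 84.011, "Ne": 164.79}

def synodic(p1, p2):
    """Synodic period of two bodies with orbital periods p1 and p2 (Eq. 2.0)."""
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

print(synodic(P["Ju"], P["Sa"]))  # ~19.86 yr, the Jupiter-Saturn synodic cycle
print(synodic(P["Ea"], P["Ju"]))  # ~1.092 yr, the Earth-Jupiter synodic cycle
print(synodic(P["Ur"], P["Ne"]))  # ~171.4 yr, the Uranus-Neptune synodic cycle

# Nearest term of the 178.38/i series for a few of the observed peaks:
for peak in (19.86, 11.86, 9.93):
    i = round(178.38 / peak)
    print(f"{peak} yr  ~  178.38/{i} = {178.38 / i:.2f} yr")
```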
Later, \citet{Bollinger}, \citet{Hung} and others \citep[e.g.:][]{Scafetta2012c,Tattersall,Wilson,Stefani2016,Stefani2019,Stefani2021} developed more evolved models. This model is justified by the consideration that Venus, Earth and Jupiter are the three major tidal planets \citep{Scafetta2012b}. Their alignments repeat every: \begin{equation} \frac{1}{f_{VEJ}}=P_{VEJ}=\left(\frac{3}{P_{V}}-\frac{5}{P_{E}}+\frac{2}{P_{J}}\right)^{-1}=22.14\:yr\label{eq:2.1} \end{equation} where $P_{V}=224.701$ days, $P_{E}=365.256$ days and $P_{J}=4332.589$ days are the sidereal orbital periods of Venus, Earth and Jupiter, respectively. The calculated 22.14-year period is very close to the $\sim$22-year Hale solar magnetic cycle. Since the Earth--Venus--Sun--Jupiter and Sun--Venus--Earth--Jupiter configurations present equivalent tidal potentials, the tidal cycle would have a recurrence of $11.07$ years. This period is very close to the average solar cycle length observed since 1750 \citep{Hung,Scafetta2012a,Stefani2016}. \citet{Vos} found evidence for a stable Schwabe cycle with a dominant 11.04-year period over a 1000-year interval which is very close to the above 11.07 periodicity, as suggested by \citet{Stefani(2020)}. However, the Jupiter-Saturn model also reproduces a similar Schwabe cycle (see Sections 4.2 and 6). Eq. \ref{eq:2.1} is an example of ``orbital invariant inequality'' \citep{Scafettaetal2016,Scafetta2020}. Section 7 explains their mathematical property of being simultaneously and coherently seen by any region of a differentially rotating system like the Sun. This property should favor the synchronization of the internal solar dynamics with external forces varying with those specific frequencies. Eq. 
\ref{eq:2.1} can be rewritten in a vectorial formalism as \begin{equation} (3,-5,2)=3(1,-1,0)-2(0,1,-1).\label{eq:2.2} \end{equation} Each vector can be interpreted as a frequency, where the order of its components corresponds to the assumed order of the planets, in this case: (Venus, Earth, Jupiter). Thus, $(3,-5,2)\equiv3/P_{V}-5/P_{E}+2/P_{J},$ $3(1,-1,0)\equiv3(1/P_{V}-1/P_{E})$ and $-2(0,1,-1)\equiv-2(1/P_{E}-1/P_{J})$. We observe that $(1,-1,0)$ indicates the frequency of the synodic cycle between Venus and Earth and $(0,1,-1)$ indicates the frequency of the synodic cycle between Earth and Jupiter (Eq. \ref{eq:2.0}). Thus, the vector $(3,-5,2)$ indicates the frequency of the beat between the third harmonic of the synodic cycle of Venus and Earth and the second harmonic of the synodic cycle of Earth and Jupiter. Eq. \ref{eq:2.2} also means that the Schwabe sunspot cycle can be simulated by the function: \begin{equation} f(t)=\cos\left(2\pi\cdot2\cdot3\frac{t-t_{VE}}{P_{VE}}\right)+\cos\left(2\pi\cdot2\cdot2\frac{t-t_{EJ}}{P_{EJ}}\right),\label{eq:2.3} \end{equation} where $t_{VE}=2002.8327$ is the epoch of a Venus-Earth conjunction, whose period is $P_{VE}=1.59867$ years, and $t_{EJ}=2003.0887$ is the epoch of an Earth-Jupiter conjunction, whose period is $P_{EJ}=1.09207$ years. The 11.07-year beat is obtained by doubling the synodic frequencies given in Eq. \ref{eq:2.2}. Figure \ref{fig2}A shows that the three-planet model of Eq. \ref{eq:2.3} (red) generates a beat pattern of 11.07 years reasonably in phase with the sunspot cycle (blue). More precisely, the maxima of the solar cycles tend to occur when the perturbing forcing produced by the beat is stronger, that is, when the spring tides of the planets can interfere constructively somewhere in the solar structure. \citet{Hung} and \citet{Scafetta2012a} developed the three-planet model further by introducing a three-planet alignment index.
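The 22.14-year invariant inequality of Eq. \ref{eq:2.1} and the 11.07-year beat of Eq. \ref{eq:2.3} can be checked directly; a minimal sketch using the periods quoted in the text:

```python
# Eq. (2.1): the Venus-Earth-Jupiter invariant inequality, computed from
# the sidereal periods (in days) quoted in the text.
PV, PE, PJ = 224.701, 365.256, 4332.589
P_VEJ = 1.0 / (3.0/PV - 5.0/PE + 2.0/PJ) / 365.25   # convert days -> years
print(P_VEJ)   # ~22.14 yr; half of it, ~11.07 yr, matches the Schwabe cycle

# Eq. (2.3): the beat period of the two cosine terms, |6/P_VE - 4/P_EJ|^-1,
# using the synodic periods quoted in the text (years).
P_VE, P_EJ = 1.59867, 1.09207
print(1.0 / abs(6.0/P_VE - 4.0/P_EJ))               # ~11.07 yr
```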
In the case of two planets, the alignment index $I_{ij}$ between planet $i$ and planet $j$ is defined as: \begin{equation} I_{ij}=|\cos(\Theta_{ij})|,\label{eq:2.4} \end{equation} where $\Theta_{ij}$ is the angle between the positions of the two planets relative to the solar center. Eq. \ref{eq:2.4} indicates that when the two planets are aligned ($\Theta_{ij}=0$ or $\Theta_{ij}=\pi$), the alignment index has the largest value because these two positions imply a spring-tide configuration. By contrast, when $\Theta_{ij}=\pi/2$ -- corresponding to a neap-tide configuration -- the index has the lowest value because at right angles the tides of the two planets tend to cancel each other. In the case of the Venus-Earth-Jupiter system, there are three corresponding alignment indices: \begin{eqnarray} I_{V} & = & |\cos(\Theta_{VE})|+|\cos(\Theta_{VJ})|\label{eq:2.5}\\ I_{E} & = & |\cos(\Theta_{EV})|+|\cos(\Theta_{EJ})|\label{eq:2.6}\\ I_{J} & = & |\cos(\Theta_{JV})|+|\cos(\Theta_{JE})|.\label{eq:2.7} \end{eqnarray} Then, the combined alignment index $I_{VEJ}$ for the three planets can be defined as: \begin{equation} I_{VEJ}=\min(I_{V},I_{E},I_{J}),\label{eq:2.8} \end{equation} which ranges between 0 and 2. Figure \ref{fig2}B shows (in red) that the number of the most aligned days of Venus, Earth and Jupiter -- estimated by Eq. \ref{eq:2.8} -- presents an 11.07-year cycle. These cycles are well correlated, both in phase and frequency, with the $\sim$11-year sunspot cycle. \citet{Scafetta2012a} also showed that an 11.08-year recurrence exists in the amplitude and direction (latitude and longitude components) of the solar jerk-shock vector, which is the time-derivative of the acceleration vector. For additional details see \citet{Hung}, \citet{Scafetta2012a}, \citet{Salvador}, \citet{Wilson} and \citet{Tattersall}.
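A minimal sketch of the alignment index of Eqs. \ref{eq:2.4}-\ref{eq:2.8}; the input longitudes below are hypothetical illustrative values, since a real calculation requires planetary ephemerides:

```python
from math import cos, radians

def pair_index(theta_deg):
    """Two-planet alignment index of Eq. (2.4): |cos(angle)|."""
    return abs(cos(radians(theta_deg)))

def I_VEJ(lon_V, lon_E, lon_J):
    """Combined Venus-Earth-Jupiter alignment index of Eq. (2.8).

    Arguments are heliocentric longitudes in degrees (planar
    approximation; hypothetical inputs, not ephemeris values).
    """
    I_V = pair_index(lon_V - lon_E) + pair_index(lon_V - lon_J)
    I_E = pair_index(lon_E - lon_V) + pair_index(lon_E - lon_J)
    I_J = pair_index(lon_J - lon_V) + pair_index(lon_J - lon_E)
    return min(I_V, I_E, I_J)

print(I_VEJ(0.0, 0.0, 180.0))   # full alignment (spring tide): index = 2
print(I_VEJ(0.0, 90.0, 180.0))  # Earth at right angles: index = 0
```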
A limitation of the Venus-Earth-Jupiter model is that it cannot explain the secular variability of the sunspot cycle, which alternates prolonged low and high activity periods such as, for example, the Maunder grand solar minimum between 1645 and 1715, when very few sunspots were observed \citep[cf.][]{Smythe}. However, this problem could be solved by the Jupiter-Saturn model \citep{Scafetta2012a} discussed below and, in general, by also taking into account the other planets \citep{Scafetta2020,Stefani2021}, as discussed in Sections 6 and 7. The 11.07-year cycle has also been extensively studied by \citet{Stefani2016,Stefani2018,Stefani2019,Stefani2020b,Stefani2021}, who claim it to be the fundamental periodicity synchronizing the solar dynamo. \subsection{The Jupiter-Saturn model} The second model assumes that the Schwabe sunspot cycle is generated by the combined effects of the planetary motions of Jupiter and Saturn. The two planets generate two main tidal oscillations: one associated with the orbit of Jupiter (11.86-year period) -- which is characterized by a relatively large eccentricity ($e=0.049$) -- and the spring tidal oscillation generated by Jupiter and Saturn (9.93-year period) \citep{Brown,Scafetta2012c}. In this case, the Schwabe sunspot cycle could emerge from the synchronization of the two tides with periods of 9.93 and 11.86 years, whose average is about 11 years. The Jupiter-Saturn model is supported by several lines of evidence. For example, \citet{Scafetta2012a,Scafetta2012b} showed that the sunspot cycle length -- i.e. the time between two consecutive sunspot minima -- is bimodally distributed, always characterized by two peaks at periods smaller and larger than 11 years. This suggests that there are two dynamical attractors at periods of about 10 and 12 years, forcing the sunspot cycle length to fall either between 10 and 11 years or between 11 and 12 years. Sunspot cycles with a length very close to 11 years are actually absent.
In addition, Figures \ref{fig2}C and D show the periodograms of the monthly sunspot record since 1749. The spectral analysis of this long record reveals a broad major peak at about 10.87 years, a value also obtained by some solar dynamo models \citep{Macario-Rojas}, surrounded by two minor peaks at 9.93 and 11.86 years that correspond exactly to the two main tides of the Jupiter-Saturn system. In Section 6 we will show that the combination of these three harmonics produces a multidecadal, secular and millennial variability that is rather well correlated with the long time-scale solar variability. \section{Solar cycles shorter than the Schwabe 11-year solar cycle} On short time scales, \citet{Bigg} found an influence of Mercury on sunspots. Indeed, in addition to Jupiter, Mercury can also induce relatively large tidal cycles on the Sun because its orbit has a large eccentricity ($e=0.206$) \citep{Scafetta2012a}. Rapid oscillations in solar activity are best studied using the satellite total solar irradiance (TSI) records. Since 1978, TSI data and their composites have been obtained by three main independent science teams: ACRIMSAT/ACRIM3 \citep{Willson2003}, SOHO/VIRGO \citep{Frohlich} and SORCE/TIM \citep{Kopp2005a,Kopp2005b}. Figure \ref{fig3} compares the ACRIM3, VIRGO and TIM TSI records from 2000 to 2014; the average irradiance is about $1361~W/m^{2}$. \subsection{The 22-40 days time-scale} Figure \ref{fig3}B shows the power spectra in the 22-40 days range of the three TSI records (Figure \ref{fig3}A) from 2003.15 to 2011.00 \citep{ScafettaWillson2013c}. A strong spectral peak is observed at $\sim27.3$ days (0.075 years) \citep{Willson1999}, which corresponds to the synodic period between the Carrington solar rotation period of $\sim25.38$ days and the Earth's orbital period of $\sim365.25$ days.
The Carrington period refers to the rotation of the Sun at $26{^\circ}$ of latitude, where most sunspots form and the solar magnetic activity emerges \citep{Bartels}. The observed 27.3-day period differs from the Carrington 25.38-day period because the Sun is seen from the orbiting Earth. Thus, the 27.3-day period derives from Eq. \ref{eq:2.0} using $P_{1}=25.38$ days and $P_{2}=365.25$ days. Figure \ref{fig3}B reveals additional spectral peaks at $\sim24.8$ days ($\sim0.068$ years), $\sim34$-$35$ days ($\sim0.093$-$0.096$ years), and $\sim36$-$38$ days ($\sim0.099$-$0.104$ years). They fall within the range of the solar differential rotation, which varies from 24.7-25.4 days near the equator \citep{Kotov} to about 38 days near the poles \citep{Beck2000}. However, the same periods also appear to be associated with the motion of the planets. In fact, the $\sim24.8$-day cycle corresponds to the synodic period between the sidereal orbital period of Jupiter ($\sim4332.6$ days) and the sidereal equatorial rotation period of the Sun ($\sim24.7$ days) calculated using Eq. \ref{eq:2.0}. Additional synodic cycles between the rotating solar equator and the orbital motion of the terrestrial planets are calculated at $\sim26.5$ days, relative to the Earth, $\sim27.75$ days, relative to Venus, and $\sim34.3$ days, relative to Mercury (see Table \ref{tab3.1}). We also notice that the major TSI spectral peak at 34.7 days is very close to the $\sim34.3$-day Mercury-Sun synodic period, although it would require a slightly different solar rotation period of 24.89 days.
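The rotation-orbit synodic periods of Table \ref{tab3.1} follow directly from Eq. \ref{eq:2.0}; a minimal check:

```python
# Synodic periods between the sidereal equatorial solar rotation (24.7 days)
# and the planetary orbital periods, via Eq. (2.0); cf. Table 3.1.
def synodic(p1, p2):
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

for name, P_orb in [("Jupiter", 4332.589), ("Earth", 365.256),
                    ("Venus", 224.701), ("Mercury", 87.969)]:
    # ~24.8, ~26.5, ~27.8 and ~34.3 days, respectively
    print(name, round(synodic(24.7, P_orb), 2))

# Carrington rotation (25.38 days) seen from the orbiting Earth:
print(round(synodic(25.38, 365.25), 2))   # ~27.3 days
```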
\begin{table}[!t] \centering{}% \begin{tabular}{ccccc} \hline Cycle & Type & P (day) & P (year) & color\tabularnewline \hline Sun & equ-rot & 24.7 & 0.0676 & black\tabularnewline Sun -- Ju & equ-rot & 24.8 & 0.0679 & red\tabularnewline Sun -- Ea & equ-rot & 26.5 & 0.0726 & red\tabularnewline Sun -- Ea & Car-rot & 27.3 & 0.0747 & blue\tabularnewline Sun -- Ve & equ-rot & 27.8 & 0.0761 & red\tabularnewline Sun -- Me & equ-rot & 34.3 & 0.0940 & red\tabularnewline 2/5 Me & resonance & 35.2 & 0.0964 & green\tabularnewline \hline \end{tabular}\caption{Solar equatorial (equ-) and Carrington (Car-) rotation cycles relative to the fixed stars and to the four major tidally active planets calculated using Eq. \ref{eq:2.0} where $P_{1}=24.7$ days is the sidereal equatorial solar rotation and $P_{2}$ the orbital period of a planet. Last column: color of the arrows in Figure \ref{fig3}B. \citep[cf.][]{ScafettaWillson2013c}.} \label{tab3.1} \end{table} \subsection{The 0.1-1.1 year time-scale} Tables \ref{tab3.2} and \ref{tab3.3} collect the orbital periods, the synodic cycles and their harmonics among the terrestrial planets (Mercury, Venus, Earth and Mars). The tables also show the synodic cycles between the terrestrial and the Jovian planets (Jupiter, Saturn, Uranus, and Neptune). The calculated periods are numerous and clustered. If solar activity is modulated by planetary motions, these frequency clusters should be observed also in the TSI records. 
\begin{table*}[!t] \centering{}%
\begin{tabular}{cccccc} \hline Cycle & Type & P (day) & P (year) & min (year) & max (year)\tabularnewline \hline Me & $\nicefrac{1}{2}$ orbital & $44\pm0$ & $0.1205\pm0.000$ & $0.1205$ & $0.1205$\tabularnewline Me -- Ju & spring & $45\pm9$ & $0.123\pm0.024$ & $0.090$ & $0.156$\tabularnewline Me -- Ea & spring & $58\pm10$ & $0.159\pm0.027$ & $0.117$ & $0.189$\tabularnewline Me -- Ve & spring & $72\pm8$ & $0.198\pm0.021$ & $0.156$ & $0.219$\tabularnewline Me & orbital & $88\pm0$ & $0.241\pm0.000$ & $0.241$ & $0.241$\tabularnewline Me -- Ju & synodic & $90\pm1$ & $0.246\pm0.002$ & $0.243$ & $0.250$\tabularnewline Ea & $\nicefrac{1}{4}$ orbital & $91\pm3$ & $0.25\pm0.000$ & $0.250$ & $0.250$\tabularnewline Ve & $\nicefrac{1}{2}$ orbital & $112.5\pm0$ & $0.3075\pm0.000$ & $0.3075$ & $0.3075$\tabularnewline Me -- Ea & synodic & $116\pm9$ & $0.317\pm0.024$ & $0.290$ & $0.354$\tabularnewline Ve -- Ju & spring & $118\pm1$ & $0.324\pm0.003$ & $0.319$ & $0.328$\tabularnewline Ea & $\nicefrac{1}{3}$ orbital & $121\pm7$ & $0.333\pm0.000$ & $0.333$ & $0.333$\tabularnewline Me -- Ve & synodic & $145\pm12$ & $0.396\pm0.033$ & $0.342$ & $0.433$\tabularnewline Ea & $\nicefrac{1}{2}$ orbital & $182\pm0$ & $0.500\pm0.000$ & $0.5$ & $0.5$\tabularnewline Ea -- Ju & spring & $199\pm3$ & $0.546\pm0.010$ & $0.531$ & $0.562$\tabularnewline Ve & orbital & $225\pm0$ & $0.615\pm0.000$ & $0.615$ & $0.615$\tabularnewline Ve -- Ju & synodic & $237\pm1$ & $0.649\pm0.004$ & $0.642$ & $0.654$\tabularnewline Ve -- Ea & spring & $292\pm3$ & $0.799\pm0.008$ & $0.786$ & $0.810$\tabularnewline Ea & orbital & $365.25\pm0$ & $1.000\pm0.000$ & $1.000$ & $1.000$\tabularnewline Ea -- Ju & synodic & $399\pm3$ & $1.092\pm0.009$ & $1.082$ & $1.104$\tabularnewline Ea -- Ve & synodic & $584\pm6$ & $1.599\pm0.016$ & $1.572$ & $1.620$\tabularnewline \hline \end{tabular}\caption{Major theoretical planetary harmonics with period $P<1.6$ years. The synodic period is given by Eq.
\ref{eq:2.0}; the spring period is half of it. \citep[cf.][]{ScafettaWillson2013c}.} \label{tab3.2} \end{table*} \begin{table*}[!t] \centering{}% \begin{tabular}{ccccc} \hline Cycle & Type & P (year) & Type & P (year)\tabularnewline \hline Me -- Ne & spring & $0.1206$ & synodic & $0.2413$\tabularnewline Me -- Ur & spring & $0.1208$ & synodic & $0.2416$\tabularnewline Me -- Sa & spring & $0.1215$ & synodic & $0.2429$\tabularnewline Me -- Ma & spring & $0.1382$ & synodic & $0.2763$\tabularnewline Ve -- Ne & spring & $0.3088$ & synodic & $0.6175$\tabularnewline Ve -- Ur & spring & $0.3099$ & synodic & $0.6197$\tabularnewline Ve -- Sa & spring & $0.3142$ & synodic & $0.6283$\tabularnewline Ve -- Ma & spring & $0.4571$ & synodic & $0.9142$\tabularnewline Ea -- Ne & spring & $0.5031$ & synodic & $1.006$\tabularnewline Ea -- Ur & spring & $0.5060$ & synodic & $1.0121$\tabularnewline Ea -- Sa & spring & $0.5176$ & synodic & $1.0352$\tabularnewline Ea -- Ma & spring & $1.0676$ & synodic & $2.1352$\tabularnewline Ma & $\nicefrac{1}{2}$ orbital & $0.9405$ & orbital & $1.8809$\tabularnewline Ma -- Ne & spring & $0.9514$ & synodic & $1.9028$\tabularnewline Ma -- Ur & spring & $0.9621$ & synodic & $1.9241$\tabularnewline Ma -- Sa & spring & $1.0047$ & synodic & $2.0094$\tabularnewline Ma -- Ju & spring & $1.1178$ & synodic & $2.2355$\tabularnewline Ju & $\nicefrac{1}{2}$ orbital & $5.9289$ & orbital & $11.858$\tabularnewline Ju -- Ne & spring & $6.3917$ & synodic & $12.783$\tabularnewline Ju -- Ur & spring & $6.9067$ & synodic & $13.813$\tabularnewline Ju -- Sa & spring & $9.9310$ & synodic & $19.862$\tabularnewline Sa & $\nicefrac{1}{2}$ orbital & $14.712$ & orbital & $29.424$\tabularnewline Sa -- Ne & spring & $17.935$ & synodic & $35.870$\tabularnewline Sa -- Ur & spring & $22.680$ & synodic & $45.360$\tabularnewline Ur & $\nicefrac{1}{2}$ orbital & $41.874$ & orbital & $83.748$\tabularnewline Ur -- Ne & spring & $85.723$ & synodic & $171.45$\tabularnewline Ne & 
$\nicefrac{1}{2}$ orbital & $81.862$ & orbital & $163.72$\tabularnewline Me -- (Ju -- Sa) & spring & $0.122$ & synodic & $0.244$\tabularnewline Me -- (Ea -- Ju) & spring & $0.155$ & synodic & $0.309$\tabularnewline Ve -- (Ju -- Sa) & spring & $0.317$ & synodic & $0.635$\tabularnewline Ea -- (Ju -- Sa) & spring & $0.527$ & synodic & $1.053$\tabularnewline Ve -- (Ea -- Ju) & spring & $0.704$ & synodic & $1.408$\tabularnewline \hline \end{tabular}\caption{Additional expected harmonics associated with planetary orbits. The last five rows report the synodic and spring periods of Mercury, Venus and Earth relative to the Jupiter-Saturn and Earth-Jupiter synodic periods calculated as $P_{1(23)}=1/|1/P_{1}-|1/P_{2}-1/P_{3}||$. \citep[cf.][]{ScafettaWillson2013b,ScafettaWillson2013c}.} \label{tab3.3} \end{table*} Figure \ref{fig3}C shows two alternative power spectra of the ACRIM and PMOD TSI records superposed to the distribution (yellow) of the planetary frequencies reported in Tables \ref{tab3.1}, \ref{tab3.2} and \ref{tab3.3}. The main power spectral peaks are observed at: $\sim0.070$, $\sim0.097$, $\sim0.20$, $\sim0.25$, 0.30-0.34, $\sim0.39$, $\sim0.55$, 0.60-0.65, 0.7-0.9, and 1.0-1.2 years. Figure \ref{fig3}C shows that all the main spectral peaks observed in the TSI records appear compatible with the clusters of the calculated orbital harmonics. For example: the Mercury-Venus spring-tidal cycle (0.20 years); the Mercury orbital cycle (0.24 years); the Venus-Jupiter spring-tidal cycle (0.32 years); the Venus-Mercury synodic cycle (0.40 years); the Venus-Jupiter synodic cycle (0.65 years); and the Venus-Earth spring tidal cycle (0.80 years). A 0.5-year cycle is also observed, which could be due to the Earth crossing the solar equatorial plane twice a year and to a latitudinal dependency of the solar luminosity. 
These results are also confirmed by the power spectra of the planetary tidal function on the Sun (see Figure \ref{fig8}C) and of the speed of the Sun relative to the solar system barycenter (Figures \ref{fig3}G-J). The 1.0-1.2 year band observed in the TSI records correlates well with the 1.092-year Earth-Jupiter synodic cycle. Actually, the TSI records present maxima in the proximity of the Earth-Jupiter conjunction epochs \citep{ScafettaWillson2013b}. Figure \ref{fig3}D shows the ACRIM and PMOD TSI records (red curves) plotted against the Earth-Jupiter conjunction cycles with the period of 1.092 years (black curve) from 1998 to 2004. TSI peaks are observed around the times of the conjunctions. The largest peak occurs at the beginning of 2002 when the conjunction occurred at a minimum of the angular separation between Earth and Jupiter (0° 13' 19\textquotedbl ). Figure \ref{fig3}E shows the PMOD (blue) and ACRIM (black) records band-pass filtered to highlight the 1.0-1.2 year modulation. The two curves (blue and black) are compared to the 1.092-year harmonic function (red): \begin{equation} f(t)=g(t)\cos\left[2\pi~\frac{(t-2002)}{1.09208}\right],\label{eq:4.1} \end{equation} where the amplitude $g(t)$ was modulated according to the observed Schwabe solar cycle. The time-phase of the oscillation is chosen at $t_{EJ}=2002$ because one of the Earth-Jupiter conjunctions occurred on the 1\textsuperscript{st} of January, 2002. The average Earth-Jupiter synodic period is 1.09208 years. The TSI 1.0-1.2 year oscillation is significantly attenuated during solar minima (1995-1997 and 2007-2009) and increases during solar maxima. In particular, the figure shows the maximum of solar cycle 23 and part of the maxima of cycles 22 and 24 and confirms that the TSI modulation is well correlated with the 1.092-year Earth-Jupiter conjunction cycle. Figure \ref{fig3}F extends the model prediction back to 1978. 
Here the TSI records are empirically compared against the following equations: \\ for ACRIM, \begin{equation} f(t)=S_{A}(t)+0.2(S_{A}(t)-1360.58)\cos\left[2\pi~\frac{(t-2002)}{1.09208}\right];\label{eq:4.2} \end{equation} for PMOD, \begin{equation} f(t)=S_{P}(t)+0.2(S_{P}(t)-1365.3)\cos\left[2\pi~\frac{(t-2002)}{1.09208}\right].\label{eq:4.3} \end{equation} The blue curves are the 2-year moving averages, $S_{A}(t)$ and $S_{P}(t)$, of the ACRIM and PMOD TSI composite records, respectively. The data-model comparison confirms that the 1.092-year Earth-Jupiter conjunction cycle has been present since 1978. In fact, TSI peaks are also found in coincidence with a number of Earth-Jupiter conjunction epochs such as those of 1979, 1981, 1984, 1990, 1991, 1992, 1993, 1994, 1995, 1998, 2011 and 2012. The 1979 and 1990 peaks are less evident in the PMOD TSI record, likely because of the significant modifications of the published Nimbus7/ERB TSI record in 1979 and 1989-1990 proposed by the PMOD science team \citep{Frohlich,Scafetta2009,Scafetta2011}. The result suggests that the side of the Sun facing Jupiter could be slightly brighter, in particular during solar maxima. Thus, when the Earth crosses the Sun-Jupiter line, it could receive an enhanced amount of radiation. This is consistent with the strong hotspots observed on other stars hosting close-in giant planets \citep{Shkolnik2003,Shkolnik2005}. Moreover, \citet{KotovH} analyzed 45 years of observations and showed that the solar photosphere, as seen from the Earth, is pulsating with two fast and relatively stable periods $P_{0}=9,600.606(12)$ s and $P_{1}=9,597.924(13)$ s. Their beating occurs with a period of 397.7(2.6) days, which coincides well with the synodic period between Earth and Jupiter (398.9~days). The hypothesis was advanced that the gravity field of Jupiter could be involved in the process.
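The 397.7-day beating of the two photospheric pulsation periods reported by \citet{KotovH} follows from the same beat relation as Eq. \ref{eq:2.0}; a quick check using the values quoted in the text:

```python
# Beat of the two photospheric pulsation periods P0 and P1 (values from
# the text), expressed in days.
P0, P1 = 9600.606, 9597.924                      # seconds
beat_days = 1.0 / abs(1.0/P1 - 1.0/P0) / 86400.0
print(beat_days)   # ~397.7 days, close to the 398.9-day Earth-Jupiter synodic period
```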
\subsection{The solar cycles in the 2-9 year range} The power spectrum in Figure \ref{fig2}D shows peaks at 5-6 and 8.0-8.5 years. The former appear to be harmonics of the Schwabe 11-year solar cycle discussed in Section 4. The latter peaks are harder to identify. In any case, some planetary harmonics involving Mercury, Venus, Earth, Jupiter and Saturn could explain them. For example, the Mercury-Venus orbital combination repeats almost every 11.08 years, which is similar to the 11.07-year invariant inequality between Venus, Earth and Jupiter discussed in Section 4.1. In fact, $P_{M}=0.241$ years and $P_{V}=0.615$ years; therefore, their closest geometrical recurrences occur after 23 orbits of Mercury ($23P_{M}=5.542$ years) and 9 orbits of Venus ($9P_{V}=5.535$ years). Moreover, we have $46P_{M}=11.086$ years and $18P_{V}=11.07$ years. Thus, the orbital configuration of Mercury and Venus repeats every 5.54 years as well as every 11.08 years and might help explain the 5-6 year spectral peak observed in Figure \ref{fig2}D. Moreover, 8 orbits of the Earth ($8P_{E}=8$ years) and 13 orbits of Venus ($13P_{V}=7.995$ years) nearly coincide, and this combination might have contributed to produce the spectral peak at about 8 years. There is also the possibility that the harmonics at about 5.5 and 8-9 years could emerge from the orbital combinations of Venus, Earth, Jupiter and Saturn. In fact, we have the following orbital invariant inequalities \begin{equation} \left(\frac{2}{P_{V}}-\frac{3}{P_{E}}-\frac{2}{P_{J}}+\frac{3}{P_{S}}\right)^{-1}=5.43\:yr\label{eq:4.4} \end{equation} and \begin{equation} 2\left(-\frac{1}{P_{V}}+\frac{2}{P_{E}}-\frac{2}{P_{J}}+\frac{1}{P_{S}}\right)^{-1}=8.34\:yr,\label{eq:4.5} \end{equation} where the orbital periods of the four planets are given in Table \ref{tab1}. Eq. \ref{eq:4.4} combines the spring cycle between Venus and Jupiter with the third harmonic of the synodic cycle between Earth and Saturn. Eq.
\ref{eq:4.5} is the first inferior harmonic (because of the factor 2) of a combination of the synodic cycle between Venus and Saturn and the spring cycle between Earth and Jupiter. Eqs. \ref{eq:4.4} and \ref{eq:4.5} express orbital invariant inequalities, whose general physical properties are discussed in Section 7. The above results, together with those discussed in Section 4, once again suggest that the major features of solar variability at the decadal scale from 2 to 22 years could have been mostly determined by the combined effect of Venus, Earth, Jupiter and Saturn, as first speculated by \citet{Wolf}. \section{The multi-decadal and millennial solar cycles predicted by the Jupiter-Saturn model} As discussed in Section 4.1, the Jupiter-Saturn model accounts well for two of the three main periods that characterize the sunspot number record since 1749: $P_{S1}=9.93$, $P_{S2}=10.87$ and $P_{S3}=11.86$ years (Figure \ref{fig2}C) \citep{Scafetta2012a}. The two side frequencies match the spring tidal period of Jupiter and Saturn (9.93 years) and the tidal sidereal period of Jupiter (11.86 years). The central peak at $P_{S2}=10.87$ years can be associated with a possible natural dynamo frequency that is also predicted by a flux-transport dynamo model \citep{Macario-Rojas}. However, the same periodicity could also be interpreted as twice the invariant inequality period of Eq. \ref{eq:4.4}, which gives 10.86 years. According to the latter interpretation, the central sunspot frequency peak might derive from a dynamo synchronized by a combination of the orbital motions of Venus, Earth, Jupiter and Saturn. The three harmonics of the Schwabe frequency band beat at $P_{S13}=60.95$ years, $P_{S12}=114.78$ years and $P_{S23}=129.95$ years. 
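The periods quoted in Eqs. \ref{eq:4.4} and \ref{eq:4.5} can be verified with a few lines of arithmetic. A minimal sketch (standard orbital periods, assumed close to those of Table \ref{tab1}):

```python
# Orbital periods of Venus, Earth, Jupiter and Saturn in years
# (standard values; assumed close to the paper's Table 1)
P_V, P_E, P_J, P_S = 0.615198, 1.000017, 11.862242, 29.4571

# Eq. 4.4: Venus-Jupiter spring cycle combined with the third harmonic
# of the Earth-Saturn synodic cycle
P_44 = 1.0 / (2/P_V - 3/P_E - 2/P_J + 3/P_S)   # ~5.43 yr

# Eq. 4.5: first inferior harmonic (factor 2) of a combination of the
# Venus-Saturn synodic cycle and the Earth-Jupiter spring cycle
P_45 = 2.0 / (-1/P_V + 2/P_E - 2/P_J + 1/P_S)  # ~8.34 yr
```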
Using the same vectorial formalism introduced in Section 3.1 to indicate combinations of synodical cycles, a millennial cycle, $P_{S123}$, is generated by the beat between $P_{S12}\equiv(1,-1,0)$ and $P_{S23}\equiv(0,1,-1)$ according to the equation $(1,-1,0)-(0,1,-1)=(1,-2,1)$ that corresponds to the period \begin{equation} P_{S123}=\left(\frac{1}{P_{S1}}-\frac{2}{P_{S2}}+\frac{1}{P_{S3}}\right)^{-1}\approx983~yr,\label{eq:4.6} \end{equation} where we adopted the multi-digit values $P_{S1}=9.929656$ years, $P_{S2}=10.87$ years and $P_{S3}=11.862242$ years (Table \ref{tab1}). However, the millennial beat is very sensitive to the choice of $P_{S2}$. To test whether this three-frequency model actually fits solar data, \citet{Scafetta2012a} constructed its constituent harmonic functions by setting their relative amplitudes proportional to the power of the spectral peaks of the sunspot periodogram. The three amplitudes, normalized with respect to $A_{S2}$, are: $A_{S1}=0.83$, $A_{S2}=1$, $A_{S3}=0.55$. The time-phases of the two side harmonics are anchored to physical events: $t_{S1}=2000.475$, which is the synodic conjunction epoch of Jupiter and Saturn (23/June/2000) relative to the Sun, when the spring tide is strongest; and $t_{S3}=1999.381$, which is the perihelion date of Jupiter (20/May/1999), when its tide is strongest. The time-phase of the central harmonic was set to $t_{S2}=2002.364$ and was estimated by fitting the sunspot number record with the three-harmonic model while keeping the other parameters fixed. The time-phases of the beat functions are calculated using the equation \begin{equation} t_{12}=\frac{P_{2}t_{1}-P_{1}t_{2}}{P_{2}-P_{1}}~.\label{eq:4.7} \end{equation} The resulting values are $t_{S12}=2095.311$, $t_{S13}=2067.044$ and $t_{S23}=2035.043$. The time-phase of the beat between $P_{S12}$ and $P_{S23}$ was calculated as $t_{S123}=2059.686$. 
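A short numerical check of Eqs. \ref{eq:4.6} and \ref{eq:4.7}. Note that the beat phase of Eq. \ref{eq:4.7} is defined modulo the beat period, so the quoted $t_{S12}$ differs from the raw result of the formula by exactly one beat period:

```python
# Multi-digit periods used in the text (years)
P_S1, P_S2, P_S3 = 9.929656, 10.87, 11.862242

def beat_period(P1, P2):
    """Beat period between two close harmonics."""
    return 1.0 / abs(1.0 / P1 - 1.0 / P2)

def beat_phase(P1, t1, P2, t2):
    """Eq. 4.7: time-phase of the beat (defined modulo the beat period)."""
    return (P2 * t1 - P1 * t2) / (P2 - P1)

P_S12 = beat_period(P_S1, P_S2)  # ~114.78 yr
P_S13 = beat_period(P_S1, P_S3)  # ~60.95 yr
P_S23 = beat_period(P_S2, P_S3)  # ~129.95 yr

# Eq. 4.6: millennial beat of the three harmonics
P_S123 = 1.0 / (1.0/P_S1 - 2.0/P_S2 + 1.0/P_S3)  # ~983 yr

# Raw beat phase of the two side harmonics; the quoted t_S12 = 2095.311
# is this value shifted forward by one beat period
t_S12_raw = beat_phase(P_S1, 2000.475, P_S2, 2002.364)
```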
Herein we neglect the fact that, because the orbits are elliptical, the epochs of the Jupiter-Saturn conjunctions vary by a few months around their average, which could shift the time-phases of the beat functions by up to a few years. The proposed three-frequency harmonic model is then given by the function \begin{equation} \sum_{i=1}^{3}h_{i}(t)=\sum_{i=1}^{3}A_{Si}~\cos\left(2\pi~\frac{t-t_{Si}}{P_{Si}}\right).\label{eq:4.8} \end{equation} The components and the beat functions generated by the model are given by the equations \begin{equation} h_{1}(t)=0.83~\cos\left(2\pi~\frac{t-2000.475}{9.929656}\right),\label{eq:4.9} \end{equation} \begin{equation} h_{2}(t)=1.0~\cos\left(2\pi~\frac{t-2002.364}{10.87}\right),\label{eq:4.10} \end{equation} \begin{equation} h_{3}(t)=0.55~\cos\left(2\pi~\frac{t-1999.381}{11.862242}\right).\label{eq:4.11} \end{equation} Thus, the final model becomes \begin{equation} h_{123}(t)=h_{1}(t)+h_{2}(t)+h_{3}(t).\label{eq:4.12} \end{equation} To emphasize its beats we can also write \begin{equation} f_{123}(t)=\begin{cases} h_{123}(t) & \text{if }h_{123}(t)\geq0\\ 0 & \text{if }h_{123}(t)<0 \end{cases}\label{eq:4.13} \end{equation} The resulting envelope functions of the beats are \begin{equation} b_{12}(t)=0.60~\cos\left(2\pi~\frac{t-1980.528}{114.783}\right)\label{eq:4.14} \end{equation} \begin{equation} b_{13}(t)=0.40~\cos\left(2\pi~\frac{t-2067.044}{60.9484}\right)\label{eq:4.15} \end{equation} \begin{equation} b_{23}(t)=0.45~\cos\left(2\pi~\frac{t-2035.043}{129.951}\right)\label{eq:4.16} \end{equation} Figure \ref{fig4} shows the three-frequency solar model of Eq. \ref{eq:4.13} (red). Figure \ref{fig4}A compares it against two reconstructions of the solar activity based on $^{10}$Be and $^{14}$C cosmogenic isotopes (blue and black, respectively) \citep{Bard,Steinhilber}. The millennial beat cycle is represented by the green curve. 
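Eqs. \ref{eq:4.8}-\ref{eq:4.13} are straightforward to implement; the sketch below encodes the three harmonics with the amplitudes, time-phases and periods listed above:

```python
import math

# Amplitude, time-phase (yr) and period (yr) of the three harmonics (Eqs. 4.9-4.11)
HARMONICS = [
    (0.83, 2000.475, 9.929656),   # Jupiter-Saturn spring tide
    (1.00, 2002.364, 10.87),      # central (dynamo) harmonic
    (0.55, 1999.381, 11.862242),  # sidereal period of Jupiter
]

def h123(t):
    """Eq. 4.12: sum of the three harmonics."""
    return sum(A * math.cos(2.0 * math.pi * (t - t0) / P) for A, t0, P in HARMONICS)

def f123(t):
    """Eq. 4.13: clipped version that emphasizes the beats."""
    return max(h123(t), 0.0)
```

By construction $f_{123}$ is non-negative, the amplitude of $h_{123}$ is bounded by the sum of the three amplitudes (2.38), and the harmonics interfere constructively near the 2000 grand maximum.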
The model correctly hindcasts all solar multi-decadal grand minima observed during the last 1000 years, known as the Oort, Wolf, Spörer, Maunder and Dalton grand solar minima. They approximately occurred when the three harmonics interfered destructively. Instead, the multi-decadal grand maxima occurred when the three harmonics interfered constructively, generating a larger perturbation on the Sun. Figure \ref{fig4}B compares Eq. \ref{eq:4.13} against the Northern Hemisphere proxy temperature reconstruction of \citet{Ljungqvist} (black). We notice the good time-matching between the oscillations of the model and the temperature record for both the millennial and the 115-year modulations, which is better highlighted by the smoothed filtered curves at the bottom of the figure. The Roman Warm Period (RWP), Dark Age Cold Period (DACP), Medieval Warm Period (MWP), Little Ice Age (LIA) and the Current Warm Period (CWP) are well hindcast by the three-frequency Jupiter-Saturn model. Figure \ref{fig4}C shows the millennial oscillation (blue) predicted by Eq. \ref{eq:4.13} given by \begin{equation} g_{m}(t)=\cos\left(2\pi~\frac{t-2059.686}{983.401}\right).\label{eq.4.17} \end{equation} The curve is well correlated with the quasi-millennial solar oscillation -- known as the Eddy oscillation -- throughout the Holocene as revealed by the $^{14}$C cosmogenic isotope record (red) and other geological records \citep{Kerr,Scafetta2012a,Scafetta2014b,Steinhilber}. \citet{Scafetta2012a} discussed other properties of the three-frequency solar model. For example, five 59-63 year cycles appear in the period 1850-2150, which are also well correlated with the global surface temperature maxima around 1880, 1940 and 2000. The model also predicts a grand solar minimum around the 2030s constrained between two grand solar maxima around 2000 and 2060. 
The modeled solar minimum around 1970, the maximum around 2000 and the following solar activity decrease, which is predicted to last until the 2030s, are compatible with the multidecadal trends of the ACRIM TSI record \citep{Willson2003}, but not with those shown by the PMOD one \citep{Frohlich}, which uses modified TSI data \citep{Scafetta2019b} and shows a continuous TSI decrease since 1980. The plots of the ACRIM and PMOD TSI data are shown in Figure \ref{fig3}F and have been extensively discussed by \citet{Scafetta2019b}. Finally, the model also reproduces a rather long Schwabe solar cycle of about 15 years between 1680 and 1700. This long cycle was actually observed both in the $\delta^{18}O$ isotopic concentrations found in Japanese tree rings (a proxy for temperature changes) and in $^{14}$C records (a proxy for solar activity) \citep{Yamaguchia}. \citet{Scafetta2014b} also suggested that the input of the planetary forcing could be nonlinearly processed by the internal solar dynamo mechanisms. As a consequence, the output function might be characterized by additional multi-decadal and secular harmonics. The main two frequency clusters are predicted at 57, 61, 65 years and at 103, 115, 130, and 150 years. These harmonics actually appear in the power spectra of solar activity \citep{Ogurtsov}. In particular, \citet{Cauquoin} found the four secular periods (103, 115, 130, 150 years) in the $^{10}$Be record of 325--336 kyr ago. These authors claimed that their analyzed records do not show any evidence of a planetary influence, but they did not realize that the oscillations they found could derive from the beating among the harmonics of Jupiter and Saturn with the 11-year solar cycle, as demonstrated in \citet{Scafetta2014b}. 
We notice that the multi-secular and millennial hindcasts of the solar activity records made by the three-frequency Jupiter-Saturn model shown in Figure \ref{fig4} are impressive because the frequencies and phases of the model were deduced from the orbits of Jupiter and Saturn, while its amplitudes were calibrated on the sunspot record from 1750 to 2010. The prolonged periods of high and low solar activity derive from the constructive and destructive interference of the three harmonics. \section{Orbital invariant inequality model: the Jovian planets and the long solar and climatic cycles} The orbital invariant inequality model was first proposed by \citet{Scafettaetal2016} and subsequently developed by \citet{Scafetta2020} using only the orbital periods of the four Jovian planets (Table \ref{tab1}). It successfully reconstructs the main solar multi-decadal to millennial oscillations such as those observed at 55-65 years, 80-100 years (Gleissberg cycle), 155-185 years (Jose cycle), 190-240 years (Suess-de Vries cycle), 800-1200 years (Eddy cycle) and 2100-2500 years (Bray-Hallstatt cycle) \citep{Abreu,McCracken2001,McCracken2013,Scafetta2016}. The model predictions agree well with the solar and climate long-term oscillations discussed, for example, in \citet{Neff} and \citet{McCracken2013}. Let us now describe the invariant inequality model in some detail. Given two harmonics with periods $P_{1}$ and $P_{2}$ and two integers $n_{1}$ and $n_{2}$, there is a resonance if $P_{1}/P_{2}=n_{1}/n_{2}$. In real planetary motions, this identity is almost never satisfied exactly. Consequently, it is possible to define a new frequency $f$ and period $P$ using the following equation \begin{equation} f=\frac{1}{P}=\left|\frac{n_{1}}{P_{1}}-\frac{n_{2}}{P_{2}}\right|,\label{eq:5.1} \end{equation} which is called an ``inequality''. Clearly, $f$ and $P$ represent the beat frequency and the beat period between $n_{1}/P_{1}$ and $n_{2}/P_{2}$. 
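For the simplest case $n_{1}=n_{2}=1$, Eq. \ref{eq:5.1} reduces to the synodic period between two planets. The heliocentric synodic periods of the Jovian pairs can be sketched as follows (standard sidereal periods, which may differ in the last digits from those adopted in the paper):

```python
# Sidereal periods of the Jovian planets in years (standard values)
PERIODS = {"Jup": 11.862242, "Sat": 29.4571, "Ura": 84.016846, "Nep": 164.79132}

def synodic(p1, p2):
    """Eq. 5.1 with n1 = n2 = 1: average interval between heliocentric conjunctions."""
    return 1.0 / abs(1.0 / PERIODS[p1] - 1.0 / PERIODS[p2])

synodic_periods = {pair: synodic(*pair) for pair in
                   [("Jup", "Sat"), ("Jup", "Ura"), ("Jup", "Nep"),
                    ("Sat", "Ura"), ("Sat", "Nep"), ("Ura", "Nep")]}
```

The results reproduce the six synodic periods listed in the table below to within a few hundredths of a year.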
The simplest case is when $n_{1}=n_{2}=1$, which corresponds to the synodal period between two planets defined in Eq. \ref{eq:2.0}, which is reported below for convenience: \begin{equation} P_{12}=\frac{1}{f_{12}}=\left|\frac{1}{P_{1}}-\frac{1}{P_{2}}\right|^{-1}.\label{eq:5.2} \end{equation} Eq. \ref{eq:5.2} indicates the average time interval between two consecutive planetary conjunctions relative to the Sun. The conjunction periods among the four Jovian planets are reported in Table \ref{tab5.1}. \begin{table*}[!t] \centering{}% \begin{tabular}{cccccc} \hline & Inv. Ineq. & Period (year) & Julian Date & Date & Long.\tabularnewline \hline Jup-Sat & (1,-1,0,0) & 19.8593 & 2451718.4 & 2000.4761 & 52° 01'\tabularnewline Jup-Ura & (1,0,-1,0) & 13.8125 & 2450535.8 & 1997.2383 & 305° 22'\tabularnewline Jup-Nep & (1,0,0,-1) & 12.7823 & 2450442.1 & 1996.9818 & 297° 21'\tabularnewline Sat-Ura & (0,1,-1,0) & 45.3636 & 2447322.1 & 1988.4397 & 269° 05'\tabularnewline Sat-Nep & (0,1,0,-1) & 35.8697 & 2447725.6 & 1989.5444 & 281° 14'\tabularnewline Ura-Nep & (0,0,1,-1) & 171.393 & 2449098.1 & 1993.3021 & 289° 22'\tabularnewline \hline \end{tabular}\caption{Heliocentric synodic invariant inequalities and periods with the timing of the planetary conjunctions closest to 2000 AD. \citep[cf.][]{Scafetta2020}.} \label{tab5.1} \end{table*} Eq. \ref{eq:5.1} can be further generalized for a system of $n$ orbiting bodies with periods $P_{i}$ ($i=1,2,\ldots,n$). This defines a generic inequality, represented by the vector $(a_{1},a_{2},\ldots,a_{n})$, as \begin{equation} f=\frac{1}{P}=\left|\sum_{i=1}^{n}\frac{a_{i}}{P_{i}}\right|,\label{eq:5.3} \end{equation} where $a_{i}$ are positive or negative integers. Among all the possible orbital inequalities given by Eq. 
\ref{eq:5.3}, there exists a small subset of them that is characterized by the condition: \begin{equation} \sum_{i=1}^{n}a_{i}=0.\label{eq:5.4} \end{equation} This special subset of frequencies is made of the synodal planetary periods (Eq. \ref{eq:5.2}) and all the beats among them. It is easy to verify that the condition imposed by Eq. \ref{eq:5.4} has a very important physical meaning: it defines a set of harmonics that are invariant with respect to any rotating system such as the Sun and the heliosphere. Given a reference system at the center of the Sun and rotating with period $P_{o}$, the orbital periods, or frequencies, seen relative to it are given by \begin{equation} f_{i}'=\frac{1}{P_{i}'}=\frac{1}{P_{i}}-\frac{1}{P_{o}}.\label{eq:5.5} \end{equation} With respect to this rotating frame of reference, the orbital inequalities among more planets are given by: \begin{equation} f'=\frac{1}{P'}=\left|\sum_{i=1}^{n}\frac{a_{i}}{P_{i}'}\right|=\left|\sum_{i=1}^{n}\frac{a_{i}}{P_{i}}-\frac{\sum_{i=1}^{n}a_{i}}{P_{o}}\right|.\label{eq:5.6} \end{equation} If the condition of Eq. \ref{eq:5.4} is imposed, we have that $f'=f$ and $P'=P$. Therefore, this specific set of orbital inequalities remains invariant regardless of the rotating frame of reference from which they are observed. For example, the conjunction of two planets relative to the Sun is an event that is observed in the same way in all rotating systems centered in the Sun. Since the Sun is characterized by a differential rotation that depends on its latitude, this means that all solar regions simultaneously feel the same planetary beats, which can strongly favor the emergence of synchronized phenomena in the Sun. Due to this physical property, the orbital inequalities that fulfill the condition given by Eq. \ref{eq:5.4} were labeled as ``invariant'' inequalities. 
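The invariance property of Eqs. \ref{eq:5.4}-\ref{eq:5.6} is easy to verify numerically: when the integer coefficients sum to zero, the inequality frequency is unchanged in any rotating frame, while otherwise it depends on the frame. A minimal sketch:

```python
# Sidereal periods of Jupiter, Saturn, Uranus and Neptune in years (standard values)
P = [11.862242, 29.4571, 84.016846, 164.79132]

def inequality_freq(coeffs, P_o=None):
    """Eq. 5.6: inequality frequency, optionally in a frame rotating with period P_o."""
    if P_o is None:
        return abs(sum(a / p for a, p in zip(coeffs, P)))
    return abs(sum(a * (1.0 / p - 1.0 / P_o) for a, p in zip(coeffs, P)))

P_CARRINGTON = 25.38 / 365.25  # e.g. the solar Carrington rotation, in years

invariant = [1, -3, 1, 1]      # coefficients sum to zero -> frame-independent
non_invariant = [1, -1, 0, 1]  # coefficients sum to one  -> frame-dependent
```

Evaluating both vectors in the inertial frame and in the Carrington frame shows that only the zero-sum combination keeps the same frequency.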
Table \ref{tab5.2} reports the orbital invariant inequalities generated by the large planets (Jupiter, Saturn, Uranus, and Neptune) up to some specific order. They are listed using the vectorial formalism: \begin{equation} f=\frac{1}{P}=(a_{1},a_{2},a_{3},a_{4}),\label{eq:5.7} \end{equation} where $a_{1}$ (for Jupiter), $a_{2}$ (for Saturn), $a_{3}$ (for Uranus) and $a_{4}$ (for Neptune) are positive or negative integers and their sum is zero (Eq. \ref{eq:5.4}). Two order indices, $M$ and $K$, can also be used. $M$ is the maximum value among $|a_{i}|$ and $K$ is defined as \begin{equation} K=\frac{1}{2}(|a_{1}|+|a_{2}|+|a_{3}|+|a_{4}|).\label{eq:5.8} \end{equation} Since for the invariant inequalities the condition of Eq. \ref{eq:5.4} must hold, $K$ indicates the number of synodal frequencies between Jovian planet pairs producing a specific orbital invariant. For example, $K=1$ means that the invariant inequality is made of only one synodal frequency between two planets, $K=2$ indicates that the invariant inequality is made of two synodal frequencies, etc. For example, the invariant inequality cycle $(1,-3,1,1)$ has $K$ = 3 and it is the beat obtained by combining the synodal cycles of Jupiter-Saturn, Saturn-Uranus and Saturn-Neptune because it can be decomposed into three synodal cycles like $(1,-3,1,1)=(1,-1,0,0)-(0,1,-1,0)-(0,1,0,-1)$. In the same way, it is possible to decompose any other orbital invariant inequality. Hence, all the beats among the synodal cycles are invariant inequalities and can all be obtained using the periods and time phases listed in Table \ref{tab5.1}. Table \ref{tab5.2} lists all the invariant inequalities of the four Jovian planets up to $M$ = 5. They can be collected into clusters or groups that recall the observed solar oscillations. 
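The order indices $M$ and $K$ (Eq. \ref{eq:5.8}) and the decomposition into synodic cycles can be sketched as follows (vectors ordered as Jupiter, Saturn, Uranus, Neptune):

```python
def indices(v):
    """Order indices of an invariant inequality vector: M and K (Eq. 5.8)."""
    assert sum(v) == 0, "not an invariant inequality"
    M = max(abs(a) for a in v)
    K = sum(abs(a) for a in v) // 2
    return M, K

def vec_sub(u, v):
    return [a - b for a, b in zip(u, v)]

# (1,-3,1,1) = (1,-1,0,0) - (0,1,-1,0) - (0,1,0,-1):
# Jup-Sat minus Sat-Ura minus Sat-Nep synodic cycles
bray_hallstatt = vec_sub(vec_sub([1, -1, 0, 0], [0, 1, -1, 0]), [0, 1, 0, -1])
```

The decomposition reproduces the worked example in the text, and the indices match the $(M,K)$ pairs listed in Table \ref{tab5.2}.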
The same frequencies are also shown in Figures \ref{fig5}A and B revealing a harmonic series characterized by clusters with a base frequency of 0.00558 1/year that corresponds to the period of 179.2 years, which is known as the Jose cycle \citeyearpar{Jose} \citep{Fairbridge,Landscheidt(1999)}. \begin{table*}[!t] \centering{}% \begin{tabular}{ccccc|cccc} \hline \textbf{(Jup, Sat, Ura, Nep)} & & \textbf{(M, K)} & \textbf{T (year)} & \textbf{cluster} & \textbf{(Ven, Ear, Jup, Sat)} & & \textbf{(M, K)} & \textbf{T (year)}\tabularnewline \hline (1, -3, 5, -3) & & (5, 6) & 42.1 & & ( 3, -5, 5, -3 ) & & ( 5, 8 ) & 5.10\tabularnewline (0, 0, 4, -4) & & (4, 4) & 42.8 & & ( -1, 2, -3, 2 ) & & ( 3, 4 ) & 5.28\tabularnewline (2, -5, 1, 2) & & (5, 5) & 43.7 & & ( 2, -3, -2, 3 ) & & ( 3, 5 ) & 5.43\tabularnewline (-1, 3, 3,-5) & & (5, 6) & 43.7 & \multirow{2}{*}{$\sim45$ year} & ( -3, 5, 2, -4 ) & & ( 5, 7 ) & 6.40\tabularnewline (1, -2, 0, 1) & & (2, 2) & 44.5 & & ( 0, 0, 3, -3 ) & & ( 3, 3 ) & 6.62\tabularnewline (0, 1, -1, 0) & & (1, 1) & 45.4 & & ( 3, -5, 4, -2 ) & & ( 5, 7 ) & 6.86\tabularnewline (-1, 4, -2, -1) & & (4, 4) & 46.3 & & ( -1, 2, -4, 3 ) & & ( 4, 5 ) & 7.19\tabularnewline (1, -1, -5, 5) & & (5, 6) & 47.2 & & ( 2, -3, -3, 4 ) & & ( 4, 6 ) & 7.47\tabularnewline \cline{5-5} (1, -3, 4, -2) & & (4, 5) & 55.8 & & ( -3, 5, 1, -3 ) & & ( 5, 6 ) & 9.44\tabularnewline (0, 0, 3, -3) & & (3, 3) & 57.1 & & ( 0, 0, 2, -2 ) & & ( 2, 2 ) & 9.93\tabularnewline (2, -5, 0, 3) & & (5, 5) & 58.6 & & ( 3, -5, 3, -1 ) & & ( 5, 6 ) & 10.47\tabularnewline (-1, 3, 2, -4) & & (4, 5) & 58.6 & $\sim60$ year & ( -1, 2, -5, 4 ) & & ( 5, 6 ) & 11.27\tabularnewline (1, -2, -1, 2) & & (2, 3) & 60.1 & & ( 2, -3, -4, 5 ) & & ( 5, 7 ) & 11.97\tabularnewline (0, 1, -2, 1) & & (2, 2) & 61.7 & & ( -3, 5, 0, -2 ) & & ( 5, 5 ) & 18.00\tabularnewline (-1, 4, -3, 0) & & (4, 4) & 63.4 & & ( 0, 0, 1, -1 ) & & ( 1, 1 ) & 19.86\tabularnewline \cline{5-5} (1, -3, 3, -1) & & (3, 4) & 82.6 & & ( 3, -5, 2, 0 ) 
& & ( 5, 5 ) & 22.14\tabularnewline (0, 0, 2, -2) & & (2, 2) & 85.7 & & ( -3, 5, -1, -1 ) & & ( 5, 5 ) & 192.8\tabularnewline (2, -5, -1, 4) & & (5, 6) & 89.0 & & & & & \tabularnewline (-1, 3, 1, -3) & & (3, 4) & 89.0 & Gleissberg & & & & \tabularnewline \cline{6-9} \cline{7-9} \cline{8-9} \cline{9-9} (1, -2, -2, 3) & & (3, 4) & 92.5 & & \textbf{(Mer, Ven, Ear, Jup)} & & \textbf{(M, K)} & \textbf{T (year)}\tabularnewline \cline{6-9} \cline{7-9} \cline{8-9} \cline{9-9} (0, 1, -3, 2) & & (3, 3) & 96.4 & & ( -2, 3, 4, -5 ) & & (5, 7) & 6.63\tabularnewline (-1, 4, -4, 1) & & (4, 5) & 100.6 & & ( 2, -4, -2, 4 ) & & (4, 6) & 7.18\tabularnewline \cline{5-5} (1, -3, 2, 0) & & (3, 3) & 159.6 & & ( 1, -2, -1, 2 ) & & (2, 3) & 14.35\tabularnewline (0, 0, 1, -1) & & (1, 1) & 171.4 & \multirow{2}{*}{Jose} & ( 3, -5, 2, 0 ) & & ( 5, 5 ) & 22.14\tabularnewline (2, -5, -2, 5) & & (5, 7) & 185.1 & & ( 1, -5, 4, 0 ) & & (5, 5) & 40.82\tabularnewline (-1, 3, 0, -2) & & (3, 3) & 185.1 & & & & & \tabularnewline \cline{5-5} (1, -2, -3, 4) & & (4, 5) & 201.1 & & & & & \tabularnewline (0, 1, -4, 3) & & (4, 4) & 220.2 & Suess-de Vries & & & & \tabularnewline (-1, 4, -5, 2) & & (5, 6) & 243.4 & & & & & \tabularnewline \cline{5-5} (0, -1, 5, -4) & & (5, 5) & 772.7 & \multirow{2}{*}{Eddy} & & & & \tabularnewline (-1, 2, 4, -5) & & (5, 6) & 1159 & & & & & \tabularnewline \cline{5-5} (1, -3, 1, 1) & & (3, 3) & 2318 & Bray-Hallstatt & & & & \tabularnewline \hline \end{tabular}\caption{(Left) List of invariant inequalities for periods $T\protect\geq40$ years and $M\protect\leq5$ for Jupiter, Saturn, Uranus, Neptune. (Right) The same for Venus, Earth, Jupiter, and Saturn, and for Mercury, Venus, Earth and Jupiter. 
\citep[cf.][]{Scafetta2020}.} \label{tab5.2} \end{table*} The physical importance of the harmonics listed in Table \ref{tab5.2} is shown in Figure \ref{fig5}C, which compares a solar activity reconstruction from a \textsuperscript{14}C record and a climatic reconstruction from a $\delta^{18}O$ record covering the period from 9500 to 6000 years ago \citep{Neff}: the two records are strongly correlated. Figure \ref{fig5}D shows that the two records present numerous common frequencies that correspond to the cycles of Eddy (800--1200 years), Suess-de Vries (190--240 years), Jose (155--185 years), Gleissberg (80--100 years), the 55--65 year cluster, another cluster at 40-50 years, and some other features. Figure \ref{fig5}D also compares the common spectral peaks of the two records against the clusters of the invariant orbital inequalities (red bars) reported in Figure \ref{fig5}B and listed in Table \ref{tab5.2}. The figure shows that the orbital invariant inequality model predicts well all the principal frequencies observed in solar and climatic data throughout the Holocene. The efficiency of the model in hindcasting both the frequencies and the phases of the observed solar cycles can also be shown more explicitly. For example, the model perfectly predicts the great Bray-Hallstatt cycle (2100-2500 years), which was studied in detail by \citet{McCracken2013} and \citet{Scafettaetal2016}. The first step in applying the model is to determine the constituent harmonics of the invariant inequality $(1,-3,1,1)$. 
This cycle is a combination of the orbital periods of Jupiter, Saturn, Uranus and Neptune that gives \begin{equation} P_{JSUN}=\frac{1}{f_{JSUN}}=\left(\frac{1}{P_{J}}-\frac{3}{P_{S}}+\frac{1}{P_{U}}+\frac{1}{P_{N}}\right)^{-1}=2317.56\:yr.\label{eq:5.9} \end{equation} The constituent harmonics are the synodic cycles of Jupiter-Saturn, Saturn-Uranus and Saturn-Neptune as described by the following relation \begin{equation} (1,-3,1,1)=(1,-1,0,0)-(0,1,-1,0)-(0,1,0,-1).\label{eq:5.10} \end{equation} Thus, the invariant inequality $(1,-3,1,1)$ is the longest beat modulation generated by the superposition of these three synodic cycles and it can be expressed as the periodic function \begin{equation} f(t)=\sin\left(2\pi\frac{t-t_{JS}}{P_{JS}}\right)+\sin\left(2\pi\frac{t-t_{SU}}{P_{SU}}\right)+\sin\left(2\pi\frac{t-t_{SN}}{P_{SN}}\right)\label{eq:5.11} \end{equation} where $P_{ij}$ are the synodic periods and $t_{ij}$ are the corresponding time-phases listed in Table \ref{tab5.1}. Eq. \ref{eq:5.11} is plotted in Figure \ref{fig5}E and shows the long beat modulation superposed on the Bray-Hallstatt period of 2318 years found in the $\Delta^{14}C$ (\textperthousand) record (black) throughout the Holocene \citep[IntCal04.14c]{Reimer}. This beat cycle is captured, for example, by the function: \begin{equation} f_{B}(t)=-\sin\left(2\pi\frac{t-t_{JS}}{P_{JS}}-2\pi\frac{t-t_{SU}}{P_{SU}}-2\pi\frac{t-t_{SN}}{P_{SN}}\right),\label{eq:5.12} \end{equation} whose period is 2318 years and whose timing is fixed by the three conjunction epochs and the respective synodic periods. In fact, the argument of the above sinusoidal function is the sum of three terms that correspond to those of Equation \ref{eq:5.10}. Equation \ref{eq:5.12} is plotted in Figure \ref{fig5}E as the blue curve. 
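A numerical sketch of Eqs. \ref{eq:5.9}-\ref{eq:5.12}, using the synodic periods and conjunction epochs of Table \ref{tab5.1}:

```python
import math

# Synodic periods (yr) and conjunction epochs from Table 5.1
P_JS, t_JS = 19.8593, 2000.4761   # Jupiter-Saturn
P_SU, t_SU = 45.3636, 1988.4397   # Saturn-Uranus
P_SN, t_SN = 35.8697, 1989.5444   # Saturn-Neptune

# Beat period of (1,-3,1,1) = (1,-1,0,0) - (0,1,-1,0) - (0,1,0,-1), cf. Eq. 5.9
P_BRAY = 1.0 / (1.0 / P_JS - 1.0 / P_SU - 1.0 / P_SN)  # ~2318 yr

def f_envelope(t):
    """Eq. 5.12: slow envelope locked to the Bray-Hallstatt cycle."""
    phase = (t - t_JS) / P_JS - (t - t_SU) / P_SU - (t - t_SN) / P_SN
    return -math.sin(2.0 * math.pi * phase)
```

The envelope repeats with period $P_{BRAY}$ by construction, since its phase advances by exactly one cycle over that interval.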
Three important invariant inequalities -- $(1,-3,2,0)$, $(0,0,1,-1)$ and $(-1,3,0,-2)$ -- are found within the Jose 155--185 year period band: \begin{equation} P_{JSU}=\frac{1}{f_{JSU}}=\left(\frac{1}{P_{J}}-\frac{3}{P_{S}}+\frac{2}{P_{U}}\right)^{-1}=159.59\:yr,\label{eq:5:13} \end{equation} \begin{equation} P_{UN}=\frac{1}{f_{UN}}=\left(\frac{1}{P_{U}}-\frac{1}{P_{N}}\right)^{-1}=171.39\:yr,\label{eq:5:14} \end{equation} \begin{equation} P_{JSN}=\frac{1}{f_{JSN}}=\left(-\frac{1}{P_{J}}+\frac{3}{P_{S}}-\frac{2}{P_{N}}\right)^{-1}=185.08\:yr.\label{eq:5:15} \end{equation} The long beat between Eq. \ref{eq:5:14} and Eq. \ref{eq:5:15} -- that is $(0,0,1,-1)-(-1,3,0,-2)=(1,-3,1,1)$ -- is the great Bray--Hallstatt cycle. The fast beat between Eq. \ref{eq:5:14} and Eq. \ref{eq:5:15} -- $(0,0,1,-1)+(-1,3,0,-2)=(-1,3,1,-3)$ -- is the Gleissberg 89-year cycle, which also corresponds to half of the Jose period of $\sim$178 years that regulates the harmonic structure of the wobbling of the solar motion. Another interesting invariant inequality is $(1,-2,-1,2)=(1,0,-1,0)-2(0,1,0,-1)$, which is a beat between the synodic period of Jupiter and Uranus $(1,0,-1,0)$ and the first harmonic of the synodic period of Saturn and Neptune. The period is: \begin{equation} P_{JSN}=\frac{1}{f_{JSN}}=\left(\frac{1}{P_{J}}-\frac{2}{P_{S}}-\frac{1}{P_{U}}+\frac{2}{P_{N}}\right)^{-1}=60.1\:yr.\label{eq:5:16} \end{equation} The beat oscillation is given by the equation: \begin{equation} f(t)=\cos\left(2\pi\frac{t-t_{JU}}{P_{JU}}\right)+\cos\left(2\pi\cdot2\frac{t-t_{SN}}{P_{SN}}\right),\label{eq:5.17} \end{equation} which shows a 60.1-year beat oscillation. 
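The periods of Eqs. \ref{eq:5:13}-\ref{eq:5:16} and the Gleissberg fast beat can be verified numerically (standard sidereal periods; the last digits differ slightly from the values quoted in the text):

```python
# Sidereal periods of Jupiter, Saturn, Uranus and Neptune in years (standard values)
P_J, P_S, P_U, P_N = 11.862242, 29.4571, 84.016846, 164.79132

P_JSU = 1.0 / (1/P_J - 3/P_S + 2/P_U)    # (1,-3,2,0)  ~159.6 yr, Eq. 5.13
P_UN  = 1.0 / (1/P_U - 1/P_N)            # (0,0,1,-1)  ~171.4 yr, Eq. 5.14
P_JSN = 1.0 / (-1/P_J + 3/P_S - 2/P_N)   # (-1,3,0,-2) ~185.1 yr, Eq. 5.15

# Fast beat of the band (sum of frequencies): the Gleissberg ~89-yr cycle
P_GLEISSBERG = 1.0 / (1.0 / P_UN + 1.0 / P_JSN)

# The (1,-2,-1,2) inequality of Eq. 5.16
P_60 = 1.0 / (1/P_J - 2/P_S - 1/P_U + 2/P_N)  # ~60.1 yr
```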
The pattern is found in both solar and climate records and could be physically relevant because the maxima of the 60-year beat occur during specific periods -- the 1880s, 1940s, and 2000s -- that were characterized by maxima in climatic records of global surface temperatures and in several other climate index records \citep{Agnihotri,Scafetta2013,Scafetta2014c,Wyatt}. The 60-year oscillation was even found in the records of the historical meteorite falls in China from AD 619 to 1943 \citep{ChangYu1981,Scafetta2019,Yu1983}. An astronomical 60-year oscillation can be obtained in several ways. In particular, \citet{Scafetta2010} and \citeyearpar{Scafetta2012c} showed that it is also generated by three consecutive conjunctions of Jupiter and Saturn, since their synodic cycle is 19.86 years and every three alignments the conjunctions occur nearly in the same constellation. The three consecutive conjunctions differ from each other because of the ellipticity of the orbits. The 60-year pattern has been known since antiquity as the Trigon of the Great Conjunctions \citep{Kepler}, which also slowly rotates, generating a quasi-millennial cycle known as the Great Inequality of Jupiter and Saturn \citep{Etz,Lovett,Scafetta2012c,Wilson}. Both the 60-year and the quasi-millennial oscillations also characterize the evolution of the instantaneous eccentricity function of Jupiter \citep{Scafetta2019}. The quasi-millennial oscillation (the Eddy cycle) could be related to the two orbital invariant inequalities $(0,-1,5,-4)\equiv772.7$ years and $(-1,2,4,-5)\equiv1159$ years. Their beat frequency is $(0,-1,5,-4)-(-1,2,4,-5)=(1,-3,1,1)\equiv2318$ years, which corresponds to the Bray--Hallstatt cycle. 
Their mean frequency, instead, is $0.5(0,-1,5,-4)+0.5(-1,2,4,-5)=0.5(-1,1,9,-9)\equiv927$ years, which is reminiscent of the Great Inequality cycle of Jupiter and Saturn, suggesting that this great cycle could also be generated by the beat between the synodic period of Jupiter and Saturn, $(1,-1,0,0)$, and the ninth harmonic of the synodic period of Uranus and Neptune, $9(0,0,1,-1)$. The invariant inequality model can be extended to all the planets of the solar system (see Tables \ref{tab3.2}, \ref{tab3.3} and \ref{tab5.2}). The ordering of the frequencies according to their physical relevance depends on the specific physical function involved (e.g. tidal forcing, angular momentum transfer, space weather modulation, etc.) and will be addressed in future work. \section{The Suess-de Vries cycle (190-240 years)} The Suess-de Vries cycle is an important secular solar oscillation commonly found in radiocarbon records \citep{de Vries,Suess1965}. Several recent studies have highlighted its importance \citep{Abreu,Beer,L=0000FCdecke,McCracken2013,Neff,Stefani2020b,Stefani2021,Wagner2001,Weiss(2016)}. Its period varies between 200 and 215 years, but the literature also suggests a range between 190 and 240 years. \citet{Stefani2021} argued that the Suess-de Vries cycle, together with the Hale and the Gleissberg-type cycles, could emerge from the synchronization between the 11.07-year periodic tidal forcing of the Venus--Earth--Jupiter system and the 19.86-year periodic motion of the Sun around the barycenter of the solar system due to Jupiter and Saturn. This model yields a Suess-de Vries-type cycle of 193 years. Actually, the 193-year period is the orbital invariant inequality $(-3,5,-1,-1)=(0,0,1,-1)-(3,-5,2,0)$, where $(0,0,1,-1)$ is the synodic cycle of Jupiter and Saturn (19.86 years) and $(3,-5,2,0)$ is the 22.14-year orbital inequality cycle of Venus, Earth and Jupiter (Eq. \ref{eq:2.2}). 
We also notice that $(0,0,1,-1)+(3,-5,2,0)=(3,-5,3,-1)$ corresponds to the period of 10.47 years, which is a periodicity that has been observed in astronomical and climate records \citep{Scafetta2014b,Scafettaetal2020}. The orbital invariant inequality model discussed in Section 7 provides an alternative and/or complementary origin of the Suess-de Vries cycle. In fact, the orbital invariant inequalities among Jupiter, Saturn, Uranus and Neptune form a cluster of planetary beats with periods between 200 and 240 years. Thus, the Suess-de Vries cycle might also emerge as a beat between the orbital invariant inequalities with periods around 60 years and those belonging to the Gleissberg frequency band with periods around 85 years (see Table \ref{tab5.2}). In fact, their beat cycle would approximately be \begin{equation} \frac{1}{1/60-1/85}=204\:yr. \end{equation} It might also be speculated that the Suess-de Vries cycle originates from a beat between the Trigon of the Great Conjunctions of Jupiter and Saturn ($3\times19.862=59.6$ years, an oscillation that mainly emerges from the synodic cycle between Jupiter and Saturn combined with the eccentricity of the orbit of Jupiter) and the orbital period of Uranus (84 years). In this case, we would have $1/(1/59.6-1/84)=205$ years. The last two estimates coincide with the 205-year Suess-de Vries cycle found in radiocarbon records by \citet{Wagner2001} and are just slightly smaller than the 208-year cycle found in other similar recent studies \citep{Abreu,Beer,McCracken2013,Weiss(2016)}. We notice that the natural planetary cycles that could theoretically influence solar activity are either the orbital invariant inequality cycles (which involve the synodic cycles among the planets, assumed to move on circular orbits) or the orbital cycles of the planets themselves (since the orbits are eccentric rather than circular), together with their harmonics. 
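The Suess-de Vries estimates above, together with the 193-year invariant inequality and its 10.47-year counterpart, reduce to simple beat computations:

```python
# Beat of the ~60-yr cluster with the ~85-yr (Gleissberg) cluster
P_SDV_1 = 1.0 / (1.0/60.0 - 1.0/85.0)          # ~204 yr

# Beat of the Trigon of the Great Conjunctions with the orbital period of Uranus
P_TRIGON = 3 * 19.862                          # ~59.6 yr
P_SDV_2 = 1.0 / (1.0/P_TRIGON - 1.0/84.0)      # ~205 yr

# The (-3,5,-1,-1) invariant inequality: beat of the Jupiter-Saturn synodic
# cycle (19.86 yr) with the 22.14-yr Venus-Earth-Jupiter inequality
P_193 = 1.0 / (1.0/19.8593 - 1.0/22.14)        # ~193 yr

# Their sum combination (3,-5,3,-1): the ~10.47-yr periodicity
P_1047 = 1.0 / (1.0/19.8593 + 1.0/22.14)
```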
\section{Evidence for planetary periods in climatic records} A number of solar cycles match the periods found in climatic records (see Figures \ref{fig4}, \ref{fig5} and \ref{fig6}) and often appear closely correlated for millennia \citep[e.g.:][and many others]{Neff,Scafetta2004,Scafetta2006,Scafetta2009,Scafetta2021,Steinhilber2012}. Evidence for an astronomical origin of the sub-Milankovitch climate oscillations has been discussed in several studies \citep[e.g.:][]{Scafetta2010,Scafetta2014b,Scafetta2016,Scafetta2018,Scafetta2021}. Let us now summarize the main findings relative to the global surface temperature record from 1850 to 2010. Figures \ref{fig6}A and B compare the time-frequency analyses of the speed of the Sun relative to the center of mass of the solar system (Figure \ref{fig1}) and of the HadCRUT3 global surface temperature record \citep{Scafetta2014b}. It can be seen that the global surface temperature oscillations mimic several astronomical cycles at the decadal and multidecadal scales, as first noted in \citet{Scafetta2010} and later confirmed by advanced spectral coherence analyses \citep{Scafetta2016,Scafetta2018}. The main periods found in the speed of the Sun (Figure \ref{fig6}A) are at about 5.93, 6.62, 7.42, 9.93, 11.86, 13.8, 20 and 60 years. Most of them are related to the orbits of Jupiter and Saturn. The main periods found in the temperature record (Figure \ref{fig6}B) are at about 5.93, 6.62, 7.42, 9.1, 10.4, 13.8, 20 and 60 years. Most of these periods appear to coincide with orbital invariant inequalities (Table \ref{tab5.2}), except the 9.1- and 10.4-year cycles. In particular, the 9.1-year period is an important climate cycle that is missing among the main planetary frequencies shown in Figure \ref{fig6}A. 
\citet{Scafetta2010} argued that this oscillation is likely linked to a combination of the 8.85-year lunar apsidal line rotation period, the 9-year first harmonic of the Saros eclipse cycle and the 9.3-year first harmonic of the soli-lunar nodal cycle \citep[supplement]{Cionco,Scafetta2012d}. These three lunar cycles induce oceanic tides with an average period of about 9.1 years \citep{Wood,Keeling} that could affect the climate system by modulating the atmospheric and oceanic circulation. The 10.4-year temperature cycle is variable and appears to be the signature of the 11-year solar cycle, which varies between the Jupiter-Saturn spring tidal cycle (9.93 years) and the orbital period of Jupiter (11.86 years). Note that in Figure \ref{fig6}B the frequency of this temperature signal increased in time from 1900 to 2000. This agrees with the solar cycle being slightly longer (and weaker) at the beginning of the 20th century and shorter (and stronger) at its end (see Figure \ref{fig2}). We also notice that the 10.46-year period corresponds to the orbital invariant inequality $(3,-5,3,-1)$ among Venus, Earth, Jupiter and Saturn. The above findings were crucial for the construction of a semi-empirical climate model based on the several astronomically identified cycles \citep{Scafetta2010,Scafetta2013}. The model included the 9.1-year solar-lunar cycle, the astronomical-solar cycles at 10.5, 20 and 60 years and, in addition, two longer cycles with periods of 115 years (using Eq. \ref{eq:4.14}) and a millennial cycle here characterized by an asymmetric 981-year cycle with a minimum around 1700 (the Maunder Minimum) and two maxima in 1080 and 2060 (using Eq. \ref{eq.4.17}). The model was completed by adding the volcanic and anthropogenic components deduced from the ensemble average prediction of the CMIP5 global circulation models, assuming an equilibrium climate sensitivity (ECS) of about 1.5°C, which is half of the model-average value of about 3°C. 
This operation was necessary because the identified natural oscillations already account for at least 50\% of the warming observed from 1970 to 2000. Recently, \citet{Scafetta2021} upgraded the model by adding some higher frequency cycles. Figure \ref{fig6}C shows the HadCRUT4.6 global surface temperature record \citep{Morice} against the ensemble average simulations produced by the CMIP6 global circulation models (GCMs) using historical forcings (1850-2014) extended with three different shared socioeconomic pathway (SSP) scenarios (2015-2100) \citep{Eyring}. Figure \ref{fig6}D shows the same temperature record against the proposed semi-empirical astronomical harmonic model under the same forcing conditions. The comparison between panels C and D shows that the semi-empirical harmonic model performs significantly better than the classical GCMs in hindcasting the 1850-2020 temperature record. It also predicts moderate warming for the future decades, as explained in detail by \citet{Scafetta2013,Scafetta2021}. \section{Possible physical mechanisms} Many authors suggest that solar cycles revealed in sunspot and cosmogenic records could derive from a deterministic non-linear chaotic dynamo \citep{Weiss(2016),Charbonneau(2020),Charbonneau(2022)}. However, the assumption that solar activity is only regulated by dynamical and stochastic processes inside the Sun has never been validated mainly because these models have a poor hindcasting capability. We have seen how the several main planetary harmonics and orbital invariant inequalities tend to cluster towards specific frequencies that characterize the observed solar activity cycles. This suggests that the strong synchronization among the planetary orbits could be further extended to the physical processes that are responsible for the observed solar variability. The physical mechanisms that could explain how the planets may directly or indirectly influence the Sun are currently unclear. 
It can be conjectured that the solar dynamo might have been synchronized to some planetary periods under the action of harmonic forcings acting on it for several hundred million or even billion years. In fact, as pointed out by Huygens in the 17\textsuperscript{th} century, synchronization can occur even if the harmonic forcing is very weak, provided that it lasts long enough \citep{Pikovsky}. Two basic types of mechanisms can be distinguished according to how and where in the Sun the planetary forcing acts: mechanisms that interact with the outer regions of the Sun, and mechanisms that act in its interior. \begin{enumerate} \item Planetary tides can perturb the surface magnetic activity of the Sun, the solar corona, and thus the solar wind. The solar wind, driven by the rotating twisted magnetic field lines \citep{Parker,Tattersall}, can reconnect with the magnetic fields of the planets when they get closer during conjunctions. This would modulate the solar magnetic wind density distribution and the screening efficiency of the whole heliosphere against the incoming cosmic rays. The effect would be a modulation of the cosmogenic records, which then also acts on the cloud cover. It is also possible that the planets can focus and modulate by gravitational lensing the flux of interstellar and interplanetary matter -- perhaps even of dark matter -- towards the Sun and the Earth, stimulating solar activity \citep{Bertolucci,Scafetta2020,Zioutas} and, again, contributing to cloud formation on Earth, which alters the climate. \item Gravitational planetary tides and torques could reach the interior of the Sun and synchronize the solar dynamo by forcing its tachocline \citep{Abreu,Stefani2016,Stefani2019,Stefani2021} or even modulate the nuclear activity in the core \citep{Scafetta2012b,Wolff}. \end{enumerate} \citet{ScafettaWillson2013b} argued that these two basic mechanisms could well complement each other. 
In principle, it might also be possible that the physical solar dynamo is characterized by a number of natural frequencies that could resonate with the external periodic forcings, yielding some type of synchronization. Let us briefly analyze several cases. \subsection{Mechanisms associated with planetary alignments} The frequencies associated with planetary alignments and, in particular, those of the Jovian planets, were found to reproduce the main observed cycles in solar and climatic data. \citet{Scafetta2020} showed examples of gravitational field configurations produced by a toy-model made of four equal masses orbiting around a 10 times more massive central body. The Sun could feel planetary conjunctions because at least twenty-five out of the thirty-eight largest solar flares were observed to start when one or more planets among Mercury, Venus, Earth, and Jupiter were either nearly above the position of the flare (within $10{^\circ}$ longitude) or on the opposite side of the Sun \citep{Hung}. For example, \citet{Morner(2015)} showed that, on January 7, 2014, a giant solar flare of class X1.2 was emitted from the giant sunspot active region AR1944 \citep{NASA2014a}, and that the flare pointed directly toward the Earth when Venus, Earth and Jupiter were exactly aligned in a triple conjunction and the planetary tidal index calculated by \citet{Scafetta2012b} peaked at the same time. \citet{Hung} estimated that the probability for this to happen at random was 0.039\%, and concluded that \textit{``the force or momentum balance (between the solar atmospheric pressure, the gravity field, and magnetic field) on plasma in the looping magnetic field lines in solar corona could be disturbed by tides, resulting in magnetic field reconnection, solar flares, and solar storms.''} Comparable results and confirmations that solar flares could be linked to planetary alignments were recently discussed in \citet{Bertolucci} and \citet{Petrakou}. 
\subsection{Mechanisms associated with the solar wobbling} The movement of the planets and, in particular, of the Jovian ones, is reflected in the solar wobbling. \citet{Charvatova2000} and \citet{Charvatova2013} showed that the solar wobbling around the center of mass of the solar system forms two kinds of complex trajectories: an ordered type, where the orbits appear more symmetric and circular, and a disordered type, where the orbits appear more eccentric and randomly distributed. These authors found that the alternation between these two states presents periodicities related, for example, to the Jose ($\sim$178 years) and Bray--Hallstatt ($\sim$2300 years) cycles. Figure \ref{fig7}A compares the Bray--Hallstatt cycle found in the $\Delta^{14}C$ (\textperthousand) record (black) throughout the Holocene \citep[IntCal04.14c]{Reimer} with two orbital records representing the periods of the pericycle and apocycle orbital arcs of the solar trajectories, as extensively discussed by \citet{Scafettaetal2016}. Figure \ref{fig7}B shows the solar wobbling for about 6000 years, where the alternation of ordered and disordered orbital patterns typically occurs according to the Bray--Hallstatt cycle of 2318 years \citep{Scafettaetal2016}. In particular, the astronomical records show that the Jose cycle is modulated by the Bray--Hallstatt cycle. Figures \ref{fig7}C and D show examples of how planetary configurations can reproduce the Bray--Hallstatt cycle: see details in \citet{Scafettaetal2016}. The fast oscillations correspond to the orbital invariant inequalities with periods of 159, 171.4 and 185 years, while the long beat oscillation corresponds to the orbital invariant inequality with a period of 2318 years, which perfectly fits the Bray--Hallstatt cycle as estimated in \citet{McCracken2013} (see Table \ref{tab5.2}). 
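As a numerical cross-check (a minimal Python sketch using the periods quoted above; the variable names are ours), the beats between adjacent members of the 159, 171.4 and 185-year triplet indeed fall on millennial time scales close to the 2318-year Bray--Hallstatt period:

```python
# Beats among the fast orbital-invariant-inequality periods quoted in the text
# (159, 171.4 and 185 yr): adjacent pairs beat on millennial time scales
# bracketing the ~2318-yr Bray-Hallstatt cycle.

def beat_period(p1: float, p2: float) -> float:
    """Beat period of two cycles with periods p1 and p2 (same units)."""
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

fast = [159.0, 171.4, 185.0]
beats = [beat_period(a, b) for a, b in zip(fast, fast[1:])]
print([round(b) for b in beats])  # [2198, 2332], bracketing ~2318 yr
```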
It is possible that the pulsing dynamics of the heliosphere can periodically modulate the solar wind termination shock layer and, therefore, the incoming interstellar dust and cosmic ray fluxes. \subsection{Mechanisms associated with planetary tides and tidal torques} Discussing the tidal interactions between early-type binaries, \citet{Goldreich1989} demonstrated that the tidal action and torques can produce important effects in the thin overshooting region between the radiative and the convective zone, which is very close to the tachocline. This would translate both into tidal torques and into the onset of g-waves moving throughout the radiative region. A similar mechanism should also take place in late-type stars like the Sun \citep{Goodman}. \citet{Abreu} found an excellent agreement between the long-term solar cycles and the periodicities in the planetary tidal torques. These authors assumed that the solar interior is characterized by a non-spherical tachocline. Under such a condition, the planetary gravitational forces exert a torque on the tachocline itself that would then vary with the distribution of the planets around the Sun. These authors showed that the torque function is characterized by some specific planetary frequencies that match those observed in cosmogenic radionuclide proxies of solar activity. The authors highlighted spectral coherence at the following periods: 88, 104, 150, 208 and 506 years. The first four periods were discussed above using alternative planetary functions; the last period could be a harmonic of the millennial solar cycle, also discussed above and found in the same solar record \citep{Scafetta2012a,Scafetta2014b}. \citet{Abreu} observed that the tachocline approximately coincides with the layer at the bottom of the convection zone where the storage and amplification of the magnetic flux tubes occur. These are the flux tubes that eventually erupt at the solar photosphere to form active regions. 
The tachocline layer is in a critical state because, lying between the radiative zone, characterized by stable stratification ($\delta<0$), and the convective zone, characterized by unstable stratification ($\delta>0$), it is very sensitive to small perturbations. The proposed hypothesis is that the planetary tides could influence the magnetic storage capacity of the tachocline region by modifying its entropy stratification and the superadiabaticity parameter $\delta$, thereby altering the maximum field strength of the magnetic flux tubes that regulate the solar dynamo. However, \citet{Abreu} also acknowledged that their hypothesis could not explain how the tiny tidal modification of the entropy stratification could produce an observable effect, although they conjectured the presence of a resonance mediated by gravity waves. \begin{table*}[!t] \begin{centering} \begin{tabular}{cccccccc} \hline & mass & semi-major & perihelion & aphelion & mean tidal & diff. tidal & Sun rot.\tabularnewline & (kg) & axis (m) & (m) & (m) & elong. (m) & elong. 
(m) & (days)\tabularnewline \hline Me & 3.30E23 & 5.79E10 & 4.60E10 & 6.98E10 & 3.0E-4 (7.5E-4) & 4.3E-4 (1.1E-3) & 37.92\tabularnewline \hline Ve & 4.87E24 & 1.08E11 & 1.08E11 & 1.09E11 & 6.8E-4 (1.7E-3) & 2.6E-5 (6.6E-5) & 30.04\tabularnewline \hline Ea & 5.97E24 & 1.50E11 & 1.47E11 & 1.52E11 & 3.2E-4 (7.9E-4) & 3.2E-5 (7.9E-5) & 28.57\tabularnewline \hline Ma & 6.42E23 & 2.28E11 & 2.07E11 & 2.49E11 & 9.6E-6 (2.4E-5) & 5.5E-6 (1.4E-5) & 27.56\tabularnewline \hline Ju & 1.90E27 & 7.79E11 & 7.41E11 & 8.17E11 & 7.1E-4 (1.8E-3) & 2.1E-4 (5.2E-4) & 26.66\tabularnewline \hline Sa & 5.69E26 & 1.43E12 & 1.35E12 & 1.51E12 & 3.4E-5 (8.5E-5) & 1.2E-5 (2.9E-5) & 26.57\tabularnewline \hline Ur & 8.68E25 & 2.88E12 & 2.75E12 & 3.00E12 & 6.4E-7 (1.6E-6) & 1.7E-7 (4.3E-7) & 26.52\tabularnewline \hline Ne & 1.02E26 & 4.50E12 & 4.45E12 & 4.55E12 & 2.0E-7 (5.0E-7) & 1.3E-8 (3.3E-8) & 26.51\tabularnewline \hline \end{tabular} \par\end{centering} \caption{Mean tidal elongations at the solar surface produced by all planets. \textquotedblleft Diff. tidal elongation\textquotedblright{} is the difference between the tides at perihelion and aphelion. The last column lists the mean solar rotation period ($\approx26.5$ days) as seen by each planet. Tidal elongations are calculated for Love numbers 3/2 and 15/4, the latter being inside parentheses \citep[cf.][]{Scafetta2012b}.} \label{tab6.1} \end{table*} The planetary tidal influence on the solar dynamo has been rather controversial because the tidal accelerations at the tachocline layer are about 1000 times smaller than the accelerations of the convective cells \citep{Jager}. \citet{Scafetta2012b} calculated that the gravitational tidal amplitudes produced by all the planets on the solar chromosphere are of the order of one millimeter or smaller (see Table \ref{tab6.1}). 
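The mean tidal elongations in Table \ref{tab6.1} can be reproduced with the standard equilibrium-tide estimate $h\approx k\,(m_{p}/M_{\varodot})\,R_{\varodot}^{4}/d^{3}$, where $k$ is the Love number and $d$ the planet's semi-major axis. The following is a minimal Python sketch (the constants and function name are ours; $k=3/2$ is the smaller Love number used in the table):

```python
# Equilibrium tidal elongation raised on the Sun by a planet of mass m_planet
# at distance d: h ~ k * (m_planet / M_sun) * R_sun**4 / d**3.
# Checked against the "mean tidal elong." column of Table 6.1 (Love number 3/2).

M_SUN = 1.989e30   # kg, solar mass
R_SUN = 6.957e8    # m, solar radius

def tidal_elongation(m_planet: float, d: float, love: float = 1.5) -> float:
    """Height (m) of the equilibrium tidal bulge at the solar surface."""
    return love * (m_planet / M_SUN) * R_SUN**4 / d**3

print(f"{tidal_elongation(1.90e27, 7.79e11):.1e}")  # Jupiter: 7.1e-04 m
print(f"{tidal_elongation(4.87e24, 1.08e11):.1e}")  # Venus:   6.8e-04 m
```

Both values agree with the table entries at the quoted precision.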
More recently, \citet{Charbonneau(2022)} critiqued \citet{Stefani2019,Stefani2021} by observing that the planetary tidal forcings of Jupiter and Venus could exert only a ``homeopathic'' influence on the solar tachocline, concluding that they should be unable to synchronize the dynamo. \citet{Charbonneau(2022)} also observed that even angular momentum transport by convective overshoot into the tachocline would be inefficient, and concluded that synchronization could only be readily achieved in the presence of high forcing amplitudes, stressing the critical need for a powerful amplification mechanism. While it is certainly true that the precise underlying mechanism is not completely understood, the rough energetic estimate that a 1 mm tidal height corresponds to a 1 m/s velocity at the tachocline level might still entail sufficient capacity for synchronization, by changing the (sensitive) field storage capacity \citep{Abreu}, by synchronizing the part of the $\alpha$ effect that is connected with the Tayler instability, or by the onset of magneto-Rossby waves at the tachocline \citep{Dikpati2017,Zaqarashvili(2018)}. In all cases, it could be possible that only a few high-frequency planetary forcings (e.g. the 11.07-year Venus-Earth-Jupiter tidal model) are able to efficiently synchronize the solar dynamo \citep{Stefani2016,Stefani2018,Stefani2019}. At the same time, additional and longer solar cycles could emerge when some feature of the dynamo is also modulated by the angular momentum exchange associated with the solar wobbling \citep{Stefani2021}. Finally, \citet{Albert} proposed that stochastic resonance could explain the multi-secular variability of the Schwabe cycle by letting the dynamo switch between two distinct operating modes as the solution moves back and forth from the attraction basin of one to the other. 
Alternatively, the problem of the tidal ``homeopathic'' influence on the tachocline could be solved by observing that tides could play some more observable role in the large solar corona where the solar wind originates, or in the wind itself at larger distances from the Sun where the tides are stronger, or even in the solar core where they could actually trigger a powerful response from nuclear fusion processes. Let us now discuss the latter hypothesis. \subsection{A possible solar amplification of the planetary tidal forcing} A possible amplification mechanism of the effects of the tidal forcing was introduced by \citet{Wolff} and \citet{Scafetta2012b}. \citet{Wolff} proposed that tidal forcing could act inside the solar core inducing waves in the plasma by mixing the material and carrying fresh fuel to the deeper and hotter regions. This mechanism would make solar-type stars with a planetary system slightly brighter because their fuel would burn more quickly. \citet{Scafetta2012b} further developed this approach and introduced a physical mechanism inspired by the mass-luminosity relation of main-sequence stars. The basic idea is that the luminosity of the core of the Sun can be written as \begin{equation} L(t)\approx L_{\varodot}+A\cdot\dot{\Omega}_{tidal}(t),\label{eq:6.2-1} \end{equation} where $L_{\varodot}$ is the baseline luminosity of the star without planets and $\Delta L_{tidal}(t)=A\cdot\dot{\Omega}_{tidal}(t)$ is the small luminosity increase induced by planetary tides inside the Sun. $\dot{\Omega}_{tidal}(t)$ is the rate of the gravitational tidal energy which is continuously dissipated in the core and $A$ is the amplification factor related to the triggered luminosity production via H-burning. To calculate the magnitude of the amplification factor $A$ we start by considering the Hertzsprung-Russell \emph{mass-luminosity relation}, which establishes that, if the mass of a star increases, its luminosity $L$ increases as well. 
In the case of a G-type main-sequence star, with luminosity $L$ and mass $M=M_{\varodot}+\Delta M$, the mass-luminosity relation approximately gives \begin{equation} \frac{L}{L_{\varodot}}\approx\left(\frac{M}{M_{\varodot}}\right)^{4}\approx1+\frac{4\Delta M}{M_{\varodot}},\label{eq:6.1} \end{equation} where $L_{\varodot}$ is the solar luminosity and $M_{\varodot}$ is the mass of the Sun \citep{Duric}. By relating the luminosity of a star to its mass, the Hertzsprung-Russell relation suggests a link between the luminosity and the gravitational power continuously dissipated inside the star. The total solar luminosity is \begin{equation} L_{\varodot}=4\pi(1AU)^{2}\times TSI=3.827\cdot10^{26}~W~,\label{eq:6.3} \end{equation} where 1 AU = $1.496\cdot10^{11}$ m is the average Sun-Earth distance, and TSI is the total solar irradiance, 1360.94 $W/m^{2}$ at 1 AU. Every second, the core of the Sun transforms a certain amount of mass into luminosity according to the Einstein equation $E=mc^{2}$. If $dL(r)$ is the luminosity produced inside the shell between $r$ and $r+dr$ \citep{Bahcall01,Bahcall}, the mass transformed into light every second in the shell is \begin{equation} \frac{d\dot{m}(r)}{dr}=\frac{1}{c^{2}}~\frac{dL(r)}{dr},\label{eq:6.4} \end{equation} where $c=2.998\cdot10^{8}~m/s$ is the speed of light and $r$ is the distance from the center of the Sun. The transformed material can be associated with a corresponding loss of gravitational energy of the star per time unit, $\dot{U}_{\varodot}$, which can be calculated using Eq. 
\ref{eq:6.4} as \begin{align} \dot{U}_{\varodot} & =\frac{1}{2}~G\int_{0}^{R_{S}}m_{\varodot}(r)~\frac{d\dot{m}(r)}{dr}~\frac{1}{r}~dr\label{eq:6.5}\\ = & \frac{1}{2}~\frac{G}{c^{2}}\int_{0}^{R_{S}}m_{\varodot}(r)~\frac{dL(r)}{dr}~\frac{1}{r}~dr=3.6\cdot10^{20}~W\nonumber \end{align} where the initial factor $1/2$ is due to the virial theorem, $m_{\varodot}(r)$ is the solar mass within the radius $r\leq R_{S}$ and $L(r)$ is the luminosity profile function derived by the standard solar model \citep{Bahcall01,Bahcall}. The gravitational forces will do the work necessary to compensate for such a loss of energy to restore the conditions for the H-burning. In fact, the solar luminosity would decrease if the Sun's gravity did not fill the vacuum created by the H-burning, which reduces the number of particles by four ($4H\rightarrow1He$). At the same time, the nucleus of He slowly sinks releasing additional potential energy. All this corresponds to a gravitational work in the core per time unit, $\dot{\Omega}_{\varodot}$, that is associated with light production. The basic analogy made by \citet{Scafetta2012b} is that $\dot{\Omega}_{\odot}$ should be of the same order of magnitude as the rate of the gravitational energy loss due to H-fusion ($\dot{\Omega}_{\varodot}\approx\dot{U}_{\varodot}$). Moreover, the energy produced by the dissipation of the tidal forces in the core should be indistinguishable from the energy produced by the other gravitational forces in the Sun. Thus, it is as if tides added some gravitational power that becomes $\dot{\Omega}_{\odot}+\dot{\Omega}_{tidal}$. 
For small perturbations, since light production is directly related both to the solar mass and to the gravitational power dissipated inside the core, \citet{Scafetta2012b} assumed the equivalence \begin{equation} \frac{\Delta M}{M_{\varodot}}\equiv\frac{\dot{\Omega}_{tidal}}{\dot{\Omega}_{\varodot}}, \end{equation} where $\dot{\Omega}_{tidal}$ is the tidal perturbing power dissipated inside the Sun and $\dot{\Omega}_{\odot}\equiv\dot{U}_{\varodot}$ is the rate of the gravitational energy lost by the Sun through H-burning. Thus, from Eqs. \ref{eq:6.2-1} and \ref{eq:6.1} we get \begin{equation} L(t)\approx L_{\varodot}+\frac{4L_{\varodot}}{\dot{\Omega}_{\varodot}}\dot{\Omega}_{tidal}(t)=L_{\varodot}+A\cdot\dot{\Omega}_{tidal}(t),\label{eq:6.2} \end{equation} where the amplification factor is \begin{equation} A=4\frac{L_{\varodot}}{\dot{\Omega}_{\varodot}}\approx4\frac{L_{\varodot}}{\dot{U}_{\varodot}}\approx4.25\cdot10^{6}.\label{eq:6.6} \end{equation} Eq. \ref{eq:6.6} means that any small amount of gravitational power dissipated in the core (like that induced by planetary tidal forcing) could be amplified by a factor of the order of one million by nuclear fusion. This would be equivalent to having gravitational tidal amplitudes amplified from 1 mm to 1 km at the tachocline. This amplification could solve the problem of the ``homeopathic'' gravitational tidal energy contribution highlighted by \citet{Charbonneau(2022)}. By applying such a large amplification factor to the estimated gravitational power $\dot{\Omega}_{tidal}$ dissipated inside the solar core, \citet{Scafetta2012b} calculated the tidally-induced TSI produced by each of the planets (Figure \ref{fig8}A and B), as well as that of all the planets together (Figure \ref{fig8}C). The sequence of the relative tidal relevance of the planets is Jupiter, Venus, Earth, Mercury, Saturn, Mars, Uranus and Neptune. 
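The numerical values in Eqs. \ref{eq:6.3} and \ref{eq:6.6} can be checked directly. The following is a minimal Python sketch (variable names are ours; the value $\dot{U}_{\varodot}=3.6\cdot10^{20}$ W is taken from the standard-solar-model integral of Eq. \ref{eq:6.5}):

```python
# Order-of-magnitude check of the solar luminosity (Eq. 6.3) and of the
# tidal amplification factor A = 4 * L_sun / Udot_sun (Eq. 6.6).
from math import pi

AU = 1.496e11        # m, mean Sun-Earth distance
TSI = 1360.94        # W/m^2, total solar irradiance at 1 AU
Udot_sun = 3.6e20    # W, rate of gravitational energy loss (Eq. 6.5)

L_sun = 4 * pi * AU**2 * TSI   # total solar luminosity, ~3.827e26 W
A = 4 * L_sun / Udot_sun       # amplification factor, ~4.25e6

print(f"L_sun = {L_sun:.3e} W, A = {A:.2e}")
```

Both numbers reproduce the values quoted in the text.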
The mean enhancement of their overall tidally-induced TSI is of the order of 0.3-0.8 $W/m^{2}$, depending on the specific Love number of the tides (see Figure \ref{fig8}C). However, on shorter time scales the tides could produce TSI fluctuations up to 0.6-1.6 $W/m^{2}$ in the absence of damping mechanisms. In particular, on a decadal time scale, the TSI fluctuations due to Jupiter and Saturn could reach amplitudes of 0.08-0.20 $W/m^{2}$ (see the black curve in Figure \ref{fig8}C). If the luminosity flux reaching the tachocline from the radiative zone is modulated by the contribution of tidally-induced luminosity oscillations with a TSI amplitude of the order of 0.01-0.10 $W/m^{2}$, the perturbation could be sufficiently energetic to tune the solar dynamo with the planetary frequencies. The dynamo would then further amplify the luminosity signal received at the tachocline up to $\sim1$ $W/m^{2}$ amplitudes, as observed in TSI cycles \citep{Willson2003}. Figure \ref{fig8}D compares the periodograms of the sunspot number record and of the planetary luminosity signal shown in Figure \ref{fig8}C. The two side frequency peaks at about 10 years (J/S-spring tide) and 11.86 years (J-tide) perfectly coincide in the two spectral analyses. The central frequency peak at about 10.87 years, shown only by the sunspot numbers, could be directly generated by the solar dynamo excited by the two tidal frequencies \citep{Scafetta2012a} or by other mechanisms connected with the dynamo, as discussed above. An obvious objection to the above approach is that the Kelvin-Helmholtz time-scale \citep{Mitalas,Stix} predicts that the light journey from the core to the convective zone requires $10^{4}$ to $10^{8}$ years. Therefore, the luminosity fluctuations produced inside the core would hardly be detectable because they would be smeared out before reaching the convective zone. 
At most, there could exist only a slightly enhanced solar luminosity related to the overall tidally-induced TSI mean enhancement of the order of 0.3-0.8 $W/m^{2}$, as shown in Figure \ref{fig8}C. However, several different mechanisms may be at work. In fact, the harmonic tidal forcing acts simultaneously throughout the core and the radiative zone, producing everywhere a synchronized energy oscillation that can be amplified in the core as discussed above. This would give rise to modulated seismic waves (g- and p-mode oscillations) that can propagate from the inner core up to the tachocline region in a few hours, because the sound speed inside the Sun is a few hundred kilometers per second \citep{Hartlep,Ahuir,Barker}. These waves might also couple with the g-waves produced in the tachocline \citep{Goodman}, producing a perturbation in the tachocline region sufficiently strong to synchronize the solar dynamo with the planetary tidal frequencies. \section{Conclusion} Much empirical evidence suggests that planetary systems can self-organize in synchronized structures, although some of the physical mechanisms involved are still debated. We have shown that the high synchronization of our own planetary system is nicely revealed by the fact that the ratios of the orbital radii of adjacent planets, when raised to the 2/3rd power, express the simple ratios found in harmonic musical consonances, while those of the mirrored ones follow the simple, elegant, and highly precise scaling-mirror symmetry of Eq. \ref{eq:1.0} \citep{Bank}. The solar system behaves as a set of synchronized coupled oscillators because it is characterized by a set of frequencies that are linked to each other by the harmonic Eq. \ref{eq:1.1} and are easily detected in the solar wobbling. It is then reasonable to hypothesize that solar activity could also be tuned to planetary frequencies. 
We corroborated this hypothesis by reviewing the many planetary harmonics and orbital invariant inequalities that characterize the planetary motions and by observing that their frequencies often correspond to those of solar variability. It may be objected that, since the identified planetary frequencies are so numerous, it could be easy to occasionally find that some of them roughly correspond to those of the solar cycles. However, the planetary frequencies of the solar system, from the monthly to the millennial time scales, are not randomly distributed but tend to cluster around some specific values that match quite well those of the main solar activity cycles. Thus, it is rather unlikely that the results shown in Figures \ref{fig2}-\ref{fig6} are just coincidental. In some cases, our proposed planetary models have even been able to predict the time-phase of the solar oscillations, like that of the Schwabe 11-year sunspot cycle throughout the last three centuries, as well as those of the secular and millennial modulations throughout the Holocene. The two main planetary models that could explain the Schwabe 11-year cycle and its secular and millennial variation involve the planets Venus, Earth, Jupiter and Saturn, as initially suggested by \citet{Wolf}. We further suggest that the Venus-Earth-Jupiter model and the Jupiter-Saturn model could work in a complementary way. The alternative hypothesis that solar activity is regulated by an unforced internal dynamics alone (i.e. by an externally unperturbed solar dynamo) has never been able to reproduce the variety of the observed oscillations. In fact, standard MHD dynamo models are not self-consistent and do not directly explain the well-known 11-year solar cycle, nor are they able to predict its timing without assuming a number of calibrated parameters \citep{Jiang,Tobias2002}. There have been several objections to a planetary theory of solar variability. 
For example, \citet{Smythe} claimed that planetary cycles and conjunctions could not predict the timing of grand solar minima, like the Maunder Minimum of the 17\textsuperscript{th} century. However, \citet{Scafetta2012a} developed a solar-planetary model able to predict all the grand solar maxima and minima of the last millennium (Figure \ref{fig4}). Other authors reasonably claimed that planetary gravitational tides are too weak to modulate solar activity \citep{Charbonneau,Jager,Charbonneau(2022)}; yet several lines of empirical evidence support the importance of their role \citep{Abreu,Scafetta2012b,Stefani2016,Stefani2019,Wolff}. \citet{Stefani2016,Stefani2021} proposed that the Sun could at least be synchronized by the tides of Venus, Earth and Jupiter, producing an 11.07-year cycle that reasonably fits the Schwabe cycle. Longer cycles could be produced by a dynamo excited by angular momentum transfer from Jupiter and Saturn. Instead, \citet{Scafetta2012b} proposed that, in the solar core, the effects of the weak tidal forces could be amplified one million times or more due to an induced increase in the H-burning, thus providing a sufficiently strong forcing to synchronize and modulate the solar dynamo with planetary harmonics at multiple time scales. Objections to the latter hypothesis, based on the slow light propagation inside the radiative zone according to the Kelvin--Helmholtz timescale \citep{Mitalas,Stix}, could probably be resolved. In fact, tidal forces are believed to favor the onset of g-waves moving back and forth throughout the radiative region of the Sun \citep{Ahuir,Barker}. Thus, the g-waves themselves could be amplified and modulated in the core by the tidally induced H-burning enhancement \citep{Scafetta2012b}. Then, both tidal torques and g-waves could cyclically affect the tachocline region at the bottom of the convective zone and synchronize the solar dynamo. 
Alternatively, planetary alignments can also modify the large-scale electromagnetic and gravitational structure of the planetary system, altering the space weather in the solar system. For example, in coincidence with planetary alignments, an increase of solar flares has been observed \citep{Hung,Bertolucci,Petrakou}. The solar wobbling, which reflects the motion of the barycenter of the planets, changing from more regular to more chaotic trajectories, correlates well with some long climate cycles like the Bray-Hallstatt cycle (2100-2500 years) \citep{Charvatova2000,Charvatova2013,Scafettaetal2016}. Finally, \citet{Scafettaetal2020} showed that the infalling meteorite flux on the Earth presents a 60-year oscillation coherent with the variation of the eccentricity of Jupiter\textquoteright s orbit induced by Saturn. The falling flux of meteorites and interplanetary dust would then contribute to modulating cloud formation. In conclusion, much empirical evidence suggests that planetary oscillations should be able to modulate solar activity and even the Earth\textquoteright s climate, although several physical issues remain open. These results stress the importance of identifying the relevant planetary harmonics, the solar activity cycles and the climate oscillations as phenomena that, in many cases, are interconnected. This approach could be useful to predict both solar and climate variability using harmonic constituent models, as is currently done for oceanic tides. We think that the theory of a planetary modulation of solar activity should be further developed because no clear alternative theory exists to date that is capable of explaining the observed planetary-solar interconnected periodicities. \subsection*{Author Contributions} NS wrote the initial draft and prepared the figures. NS and AB contributed to the discussion and the revision of the submitted version. 
\subsection*{Conflict of Interest} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. \onecolumn
Title: JWST NIRCam+NIRSpec: Interstellar medium and stellar populations of young galaxies with rising star formation and evolving gas reservoirs
Abstract: We present an interstellar medium and stellar population analysis of three spectroscopically confirmed $z>7$ galaxies in the ERO JWST/NIRCam and JWST/NIRSpec data of the SMACS J0723.3-7327 cluster. We use the Bayesian spectral energy distribution (SED) fitting code Prospector with a flexible star-formation history (SFH), a variable dust attenuation law, and a self-consistent model of nebular emission (continuum and emission lines). Importantly, we self-consistently fit both the emission line fluxes from JWST/NIRSpec and the broad-band photometry from JWST/NIRCam, taking into account slit-loss effects. We find that these three $z=7.6-8.5$ galaxies ($M_{\star}\approx10^{8}~M_{\odot}$) are young with rising SFHs and mass-weighted ages of 3-7 Myr, though we find indications for underlying older stellar populations. The inferred gas-phase metallicities broadly agree with the direct metallicity estimates from the auroral lines. The galaxy with the lowest gas-phase metallicity ($\mathrm{Z}_{\rm gas}=0.06~\mathrm{Z}_{\odot}$) has a steeply rising SFH, is very compact ($<0.2~\mathrm{kpc}$) and has a high star-formation rate surface density ($\Sigma_{\rm SFR}\approx38~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}~\mathrm{kpc}^{-2}$), consistent with rapid gas accretion. The two other objects with higher gas-phase metallicity show more complex multi-component morphologies on kpc scales, indicating that their recent increase in star-formation rate is driven by mergers or internal, gravitational instabilities. We discuss effects of assuming different SFH priors or only fitting the photometric data. Our analysis highlights the strength and importance of combining JWST imaging and spectroscopy for fully assessing the nature of galaxies at the earliest epochs.
https://export.arxiv.org/pdf/2208.03281
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} early Universe -- galaxies: formation -- galaxies: evolution -- galaxies: high-redshift -- galaxies: star formation \end{keywords} \section{Introduction} The \emph{James Webb Space Telescope} (\JWST) promises to reveal a new view of galaxy formation in the early Universe. For the first time, astronomers can now combine photometry (NIRCam) and spectroscopy (NIRSpec) in the rest-frame UV/optical to observe directly the growth of stellar populations and galactic structure in the first few hundred million years of cosmic history. The discoveries enabled by \JWST will allow the formation history of stars, gas, metals, and dust to be traced in detail during the critical Epoch of Reionization \citep[EoR; for a review, see][]{robertson21}. Together, NIRCam and NIRSpec provide information on both broad-band stellar light and emission lines powered by Lyman Continuum (LyC) photons emitted by newly formed stars. This powerful combination allows us to precisely infer the properties of early galaxies without redshift uncertainties. The focus of this paper is to use jointly the initial \JWST Early Release Observations (EROs) NIRCam and NIRSpec data in the SMACS0723 cluster \citep{pontoppidan22} to infer the properties of galaxies with spectroscopic redshifts at $z\sim7.6-8.5$ and thereby gain a foothold on the physics of galaxy formation in the early Universe. Detailed stellar population analyses of early galaxies advanced significantly during the pre-\JWST era thanks to combining \emph{Hubble Space Telescope} (\HST) and \emph{Spitzer Space Telescope} (\Spitzer) $3.6-4.5\mu$m data. 
Several studies used these datasets to constrain the stellar populations of $z>8$ galaxies, finding populations of young galaxies (mass-weighted stellar ages of a few tens of Myr; e.g., \citealt{stefanon21}), but also more evolved, older objects (with ages of a few hundred Myr; e.g., \citealt{hashimoto18, laporte21}). Significant challenges with these observations are the low signal-to-noise ratio, source blending as a result of the low spatial resolution of \Spitzer, and strong emission lines (i.e., H$\beta$ and [OIII]) that can mimic a strong Balmer/4000\AA\ break \citep{finkelstein13, labbe13, smit15, de-barros19, endsley21_ew}. Because of this, \citet{tacchella22_highz} and \citet{whitler22} have investigated how priors might influence the inference of the stellar ages and star-formation histories (SFHs) more generally, finding that the current \HST+\Spitzer data are not very constraining in age-dating the highest-$z$ galaxies. Along these lines, the ultra-violet (UV) colours have been used to study the earliest phases of chemical enrichment \citep[e.g.,][]{wilkins11, finkelstein12a, bouwens14, bhatawdekar21}. However, the amount of attenuation, the attenuation law itself and the stellar and gas-phase metallicities remain largely unconstrained at $z>7$. \JWST will improve the situation tremendously by delivering larger wavelength coverage and higher spectral resolution, both with spectra and medium/narrow band imaging. This will help break the degeneracy between strong emission lines and a strong Balmer/4000\AA\ break, which is needed to age-date these galaxies accurately \citep{roberts-borsani21_jwst, tacchella22_highz}, enabling us not only to learn more about the physical properties of these high-$z$ galaxies during and before the EoR, but also to time cosmic dawn by delivering accurate SFHs. Several \JWST programmes will deliver such datasets (i.e. 
spectra and medium band imaging), including the \textit{JWST Advanced Deep Extragalactic Survey} (JADES), which is a Guaranteed Time Observations (GTO) programme using about 800 hours of prime time and 800 hours of parallel time to study the formation and evolution of galaxies, combining NIRSpec, NIRCam, and MIRI data in a coordinated observing programme \citep{rieke19}. In this work, we make use of the \JWST ERO of the SMACS J0723.3-7327 cluster field \citep{pontoppidan22}. The ERO multi-mode observations of SMACS J0723.3-7327 include NIRCam and MIRI imaging, NIRSpec Multi-Object Spectroscopy, and NIRISS Wide-Field Slitless Spectroscopy of the cluster and surrounding field. The NIRSpec MSA configuration was designed based on a photometric catalogue constructed from the NIRCam imaging, following a workflow similar to that of future science programmes such as JADES. We focus on the three galaxies that have a spectroscopic redshift $z>7$, namely Galaxy ID 04590, 06355 and 10612 (Table~\ref{tab:galaxies}). These three objects have been discussed in several recent papers \citep[e.g.,][]{arellano-cordova22, carnall22, curti22, katz22, rhoads22, schaerer22, trump22, trussler22}. The different studies rely on different measurement techniques, but also on different reduction and calibration pipelines, making comparisons difficult. \citet{carnall22} performed SED fitting of the NIRCam imaging and the existing \HST imaging for these three objects, finding low stellar masses ($\log(M_{\star}/M_{\odot})=7.4-8.6$) and correspondingly young mean stellar ages of only a few Myr. \citet{curti22} used the [OIII]4363 auroral line to measure metallicities with the direct $T_{\rm e}$ method, finding Galaxy ID 04590 to be extremely metal poor ($12+\log(\mathrm{O}/\mathrm{H})\approx7$), Galaxy ID 10612 to be about 1/10 solar and Galaxy ID 06355 to be about one-third solar. 
The latter two objects are marginally consistent with the Fundamental Metallicity Relation \citep[FMR;][]{mannucci10, onodera16, curti20}, while Galaxy ID 04590 deviates significantly from it, consistent with being far from the smooth equilibrium between gas flows, star formation and metal enrichment in place at later epochs. This possibly indicates that at $z>7$ we are entering another epoch of galaxy formation, in which galactic systems are out of equilibrium and evolve more stochastically. Consistent with this, other studies find high ionization, low metallicity and high pressure in the interstellar medium \citep[ISM;][]{katz22, schaerer22, rhoads22, trump22, trussler22}. In this paper, we contribute to these interesting results by performing a full spectro-photometric analysis, simultaneously fitting the NIRCam photometry and the NIRSpec spectroscopy and taking into account slit-loss effects. We confront the measured gas-phase metallicity with the inferred SFHs and morphology of these three $z=7.6-8.5$ galaxies, highlighting the strength and importance of combining \JWST imaging and spectroscopy for fully assessing the nature of galaxies at the earliest epochs. Specifically, Section~\ref{sec:data} presents the NIRCam and NIRSpec data processing and describes the details of the stellar population inference from the spectro-photometric modelling. We present our key results in Section~\ref{sec:results} and conclude in Section~\ref{sec:conclusions}. We assume the latest {\it Planck} flat $\Lambda$CDM cosmology with $\mathrm{H}_{0}=67.36~\mathrm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_m=0.3153$, and $\Omega_{\Lambda}=0.6847$ \citep{planck-collaboration20} and $\mathrm{Z}_{\odot}=0.0142$. All quantities are corrected for magnification if not otherwise indicated. When quoting values of derived quantities, we quote the median and $16-84\%$ quantile-based errors if not otherwise stated. 
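The quoting convention above can be made concrete with a short sketch; the posterior samples here are synthetic stand-ins, not output of any fit in this paper:

```python
import numpy as np

# Sketch: quoting a derived quantity as the median with 16-84%
# quantile-based errors, as done throughout the text. The chain
# below is a hypothetical Gaussian posterior for log(M*/Msun),
# not a real fit output.
rng = np.random.default_rng(0)
log_mstar = rng.normal(loc=8.1, scale=0.3, size=4000)

lo, med, hi = np.percentile(log_mstar, [16, 50, 84])
print(f"log M* = {med:.1f} -{med - lo:.1f} +{hi - med:.1f}")
```

For a Gaussian posterior the 16th and 84th percentiles recover the familiar $\pm1\sigma$ interval; for skewed posteriors (such as the stellar ages in Table~\ref{tab:galaxies}) the two errors differ.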
\section{Data and Method} \label{sec:data} All \JWST observations used in this work are taken as part of the SMACS0723 ERO (Programme \#2736; \citealt{pontoppidan22}). We focus in this analysis on three galaxies that are covered by both NIRCam and NIRSpec and lie at $z>7$ (Tab.~\ref{tab:galaxies}). \subsection{NIRCam imaging and photometry} \label{subsec:nircam} We use the deep NIRCam imaging data of SMACS0723 in the F090W, F150W, F200W, F277W, F356W, and F444W filters, which cover an observed wavelength range of $\lambda_{\rm obs}=0.8-5\upmu\mathrm{m}$. The images reach (and surpass) $AB\approx29$ mag. We reduce the raw level-1 data products with the public \JWST pipeline (v1.6.1; \url{https://github.com/spacetelescope/jwst}), using the latest available calibration files (CRDS\_CTX=jwst\_0927.pmap). An additional, local background subtraction is performed on the final mosaicked images and applied to the individual exposures. We measure object fluxes using the Bayesian model-fitting program \texttt{forcepho} (Johnson et al., in prep.). This program simultaneously fits multiple PSF-convolved S\'{e}rsic profiles, each having 11 parameters, to all of the background-subtracted stage 2 \texttt{cal} exposures for every NIRCam filter. The joint posterior distribution for the parameters of all of the profiles is sampled via Markov Chain Monte Carlo (MCMC), allowing estimation of covariances between the parameters of a single object (such as S\'{e}rsic index and half-light radius) as well as covariance between objects (due to blending), and identification of any non-Gaussian posterior distributions. The simultaneous fit to all bands and the MCMC sampling allow us to obtain self-consistent estimates of object fluxes in each band. 
We represent Galaxy ID 06355 and 10612 with several sub-components to account for clumpy structure obvious in the higher resolution F150W imaging, and we simultaneously model all sources within 1\arcsec\ to account for the contribution of extended structures. To determine total source fluxes we then combine the fluxes of the sub-components while taking into account covariances among them, which we accomplish by summing the sub-component fluxes for each posterior sample and then computing the mean and standard deviation of the summed fluxes. In Figures \ref{fig:data4590}, \ref{fig:data6355}, and \ref{fig:data10612} we show examples of the data, model, and stacked residuals for one draw from the MCMC chain. We have varied the number of sub-components used for the sources and find similar total fluxes. Due to uncertainties in photometric calibration and other factors affecting these very early data, we adopt an error floor of 10\% on the photometry. The \texttt{forcepho} fits to the images also provide size estimates. For the single component of Galaxy ID 04590 this is taken directly from the half-light radius of the fitted S\'{e}rsic profile. For the other two Galaxy IDs that have multiple components, we estimate a combined effective half-light radius by generating a model image with the PSF removed using the F150W fluxes, computing the barycenter, and determining the radius containing half the combined model flux. This yields half-light radii of $0.041''\pm0.004''$ ($0.19\pm 0.02$ kpc), $0.103''\pm 0.010''$ ($0.52\pm 0.05$ kpc) and $0.080''\pm 0.008''$ ($0.40\pm 0.04$ kpc) respectively for Galaxy ID 04590, 06355, and 10612. The size for Galaxy ID 10612 is driven by the separation between the two dominant components, which are each $\lesssim 0.03''$ in half-light radius. We note that we have not corrected these sizes for lensing distortions (Section~\ref{subsec:magnification}). 
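The combination of sub-component fluxes described above can be sketched as follows; the two-component chain below is a synthetic stand-in for real \texttt{forcepho} posteriors, with an artificial anti-correlation mimicking blending:

```python
import numpy as np

# Sketch: total source flux from per-component posterior samples.
# Summing within each posterior sample preserves the covariance
# between blended sub-components. The arrays are made-up stand-ins
# for real forcepho MCMC chains (one row per posterior sample).
rng = np.random.default_rng(1)
n_samples = 5000
comp_a = rng.normal(10.0, 1.0, n_samples)
# blending induces anti-correlation: flux lost by one component
# is picked up by the other
comp_b = 5.0 - 0.5 * (comp_a - 10.0) + rng.normal(0.0, 0.5, n_samples)

total = comp_a + comp_b                    # sum per posterior sample
flux, flux_err = total.mean(), total.std()

# A naive quadrature sum of the individual errors overestimates the
# uncertainty here, because the anti-correlation partially cancels.
naive_err = np.hypot(comp_a.std(), comp_b.std())
print(flux, flux_err, naive_err)
```

The same per-sample logic extends to the combined half-light radius, where each posterior draw yields one model image and one derived radius.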
We expect this effect to be small for Galaxy ID 06355 and 10612, because they are only weakly magnified. This effect could however be more substantial for Galaxy ID 04590, indicating that our size is possibly an upper limit. \subsection{NIRSpec spectroscopy} \label{subsec:nirspec} NIRSpec observations for SMACS0723 were carried out using two disperser/filter combinations, but given the redshift of our targets, in this work we use only data from the G395/F290LP combination, covering the wavelength range 2.9--5.2~$\upmu$m with a spectral resolution $R\sim1000$. The observations consist of two pointings, with a total integration of 8,840 seconds. In this paper, we use the fully reduced spectra from \citet{curti22}\footnote{The 1-d spectra are publicly available at \url{https://doi.org/10.5281/zenodo.6940561}}, which are based on level-2 data from the MAST archive and were processed using the GTO pipeline \citetext{NIRSpec/GTO collaboration, in prep.; \citealp{ferruit22}}. The data and data reduction procedure are already described in \citet{curti22}, so here we report only a summary. The GTO pipeline uses a custom flagging and masking algorithm for bad pixels and cosmic rays, which effectively removes all artefacts from the final spectra. The extraction is performed with a custom aperture, using a boxcar extraction. After inspecting the exposures of the individual nods, it was found that one of the shutters on which Galaxy ID 04590 was nodded did not open in Obs~7 \citep{rawle18}; the affected nod positions were discarded before stacking the data. Stacking was performed with the GTO pipeline, taking into account the variance and quality arrays of both pointings. We refined the flux calibration using an empirical response function, derived from a spectro-photometric star observed during commissioning \citep[Programme \#1128,][]{gordon22}. Because our three targets are only marginally resolved, we apply the path-loss correction derived for a point-like source. 
The path-loss correction appropriate for an extended uniform source gives a spectrum that has a higher overall flux, but approximately the same shape as our default choice\footnote{This is because the wavelength range covered by our spectra is sufficiently narrow that wavelength-dependent path losses do not affect significantly our results.}. As we discuss below (see Section~\ref{subsec:prospector}), we introduce a nuisance parameter in the spectro-photometric modelling to account for any mismatch between the photometry and spectroscopy (related to slit loss and calibration issues as well as physical effects). Therefore, it does not matter in this analysis whether we use the point-like or extended source approximation. The resulting spectra are shown in the lower-left corner of Figures~\ref{fig:data4590}--\ref{fig:data10612}. Emission-line fluxes were measured using \textsc{ppxf} \citep{cappellari17}, using the configuration described in \citet{curti22}. Briefly, we fit the stellar continuum using a linear combination of simple stellar-population spectra from the C3K library \citep{conroy19}, using the MIST isochrones \citep{choi16}. The continuum is scaled using a 10\textsuperscript{th}-order Legendre polynomial. Emission lines are modelled as Gaussians. All the model spectra (both continuum and emission-line) are constrained to have the same velocity and velocity dispersion. Note that, because the continuum is only marginally detected, replacing the SSP template spectra with a constant does not change the emission-line fluxes \citep{curti22}. The resulting fluxes have already been published \citep[][their Table~1]{curti22} and are not repeated here. We remark that fitting each line individually does not change our conclusions, as reported in \citet{katz22}. The spectroscopic redshifts in this work are based on the emission-line velocities measured by \textsc{ppxf}. In the fitting, we mask the following emission lines: [OIII]4364, H$\delta$, H$\zeta$, and [NeIII]3870. 
We mask [OIII]4364 because we cannot reproduce the very high [OIII]4364 emission-line fluxes (as measured here) with our current \texttt{cloudy}-based modelling for the nebular emission (see next section), which spans too narrow a range in ionization parameter. We mask H$\delta$ because this line is abnormally weak, i.e. about 25\% weaker than expected given the Balmer decrement observed in H$\gamma$. As discussed in \citet{curti22}, this might be because of possible background subtraction issues. Finally, we mask both H$\zeta$ and [NeIII]3870 because they are blended and the current decomposition method seems to be unable to provide reliable fluxes for each of the lines (see also Section~\ref{subsec:runs}). \begin{table*} \centering \caption{Properties of the three galaxies of this work. The columns are the ID, redshift, magnification ($\mu$), right ascension (RA), declination (DEC), stellar mass, SFR averaged over 10 Myr, stellar age (half-mass time $t_{50}$), dust attenuation in the V-band (A$_{\rm V}$), gas-phase metallicity ($\mathrm{Z}_{\rm gas}$), stellar metallicity ($\mathrm{Z}_{\star}$), and effective size ($R_{\rm e}$). 
All quantities here are corrected for magnification.} \label{tab:galaxies} \begin{tabular}{ccccccccccccc} \hline ID & Redshift & $\mu$ & RA & DEC & $\log(M_{\star})$ & SFR$_{10}$ & $t_{50}$ & A$_{\rm V}$ & $\log(\mathrm{Z}_{\rm gas})$ & $\log(\mathrm{Z}_{\star})$ & $R_{\rm e}$ \\ & & & [deg] & [deg] & [$M_{\odot}$] & [$\mathrm{M}_{\odot}/\mathrm{yr}$] & [Myr] & [mag] & [$\mathrm{Z}_{\odot}$] & [$\mathrm{Z}_{\odot}$] & [kpc] \\ \hline 04590 & 8.4953 & $3.74\pm0.07$ & 110.85933 & -73.44916 & $7.9^{+0.5}_{-0.3}$ & $4_{-1}^{+2}$ & $5_{-3}^{+102}$ & $0.66_{-0.20}^{+0.24}$ & $-1.2_{-0.1}^{+0.1}$ & $-1.1_{-0.2}^{+0.3}$ & $0.19\pm 0.02$ \\ 06355 & 7.6643 & $1.23\pm0.01$ & 110.84452 & -73.43508 & $8.7^{+0.3}_{-0.2}$ & $38_{-7}^{+18}$ & $3_{-1}^{+29}$ & $0.47_{-0.14}^{+0.17}$ & $-0.5_{-0.1}^{+0.1}$ & $-0.9_{-0.2}^{+0.4}$ & $0.52\pm 0.05$ \\ 10612 & 7.6592 & $1.34\pm0.01$ & 110.83395 & -73.43454 & $8.1^{+0.7}_{-0.2}$ & $7_{-1}^{+2}$ & $7_{-4}^{+96}$ & $0.14_{-0.06}^{+0.11}$ & $-0.6_{-0.1}^{+0.1}$ & $-0.7_{-0.3}^{+0.3}$ & $0.40\pm 0.04$ \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Effect on the stellar populations (stellar mass, SFR, and stellar age) when including/excluding emission lines (EL versus photometry-only) and changing the SFH prior (bursty versus continuous). 
All quantities here are corrected for magnification.} \label{tab:model_variation} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{} & \multicolumn{3}{c|}{EL \& bursty} & \multicolumn{3}{c|}{EL \& continuous} & \multicolumn{3}{c|}{phot-only \& bursty} & \multicolumn{3}{c|}{phot-only \& continuous}\\ \cline{2-4} \cline{5-7} \cline{8-10} \cline{11-13} \multicolumn{1}{|c|}{ID} & $\log(M_{\star})$ & SFR$_{10}$ & $t_{50}$ & $\log(M_{\star})$ & SFR$_{10}$ & $t_{50}$ & $\log(M_{\star})$ & SFR$_{10}$ & $t_{50}$ & $\log(M_{\star})$ & SFR$_{10}$ & $t_{50}$ \\ \hline 04590 & $7.9^{+0.5}_{-0.3}$ & $4_{-1}^{+2}$ & $5_{-3}^{+102}$ & $8.4_{-0.4}^{+0.3}$ & $5_{-2}^{+3}$ & $108_{-97}^{+82}$ & $8.9_{-1.0}^{+0.3}$ & $2_{-2}^{+4}$ & $104_{-100}^{+134}$ & $9.0_{-0.4}^{+0.2}$ & $3_{-2}^{+4}$ & $173_{-106}^{+69}$ \\ 06355 & $8.7^{+0.3}_{-0.2}$ & $38_{-7}^{+18}$ & $3_{-1}^{+29}$ & $9.3_{-0.3}^{+0.3}$ & $33_{-9}^{+20}$ & $129_{-102}^{+139}$ & $8.9_{-0.2}^{+0.3}$ & $53_{-16}^{+24}$ & $4_{-1}^{+56}$ & $9.2_{-0.3}^{+0.4}$ & $40_{-11}^{+31}$ & $101_{-83}^{+126}$ \\ 10612 & $8.1^{+0.7}_{-0.2}$ & $7_{-1}^{+2}$ & $7_{-4}^{+96}$ & $8.6_{-0.4}^{+0.3}$ & $7_{-1}^{+2}$ & $97_{-76}^{+128}$ & $8.2_{-0.2}^{+0.4}$ & $9_{-2}^{+3}$ & $6_{-3}^{+92}$ & $8.5_{-0.4}^{+0.4}$ & $8_{-2}^{+3}$ & $90_{-84}^{+141}$ \\ \hline \end{tabular} \end{table*} \subsection{Modelling of the spectro-photometric data} \label{subsec:prospector} We use the SED fitting code \texttt{Prospector} \citep{johnson21} to simultaneously fit the photometric and spectroscopic data. We adopt a similar, 13-parameter physical model and priors as described in \citet{tacchella22_highz}. 
Specifically, we fit for the stellar mass, stellar and gas-phase metallicities, dust attenuation (using a two-component dust model, which includes a diffuse component for the entire galaxy, extra attenuation around the youngest stars ($<10~\mathrm{Myr}$), and a flexible slope; 3 free parameters) and ionization parameter for the nebular emission. We adopt the flexible SFH prescription \citep{leja19_nonparm} with 6 time bins and the bursty-continuity prior \citep{tacchella22_highz}. This model includes 5 free parameters, which control the ratios of the SFR between the six adjacent time bins; the first two bins are spaced at $0-5~\mathrm{Myr}$ and $5-10~\mathrm{Myr}$ of lookback time, and the remaining four bins are log-spaced to $z=20$. This implies that we are not able to infer ages below 2.5 Myr, which we believe is a realistic lower limit for galaxies. We also fit for a nuisance parameter that scales all the emission line fluxes by a constant factor to account for flux calibration offsets or, to first order, slit losses or physical effects (see below for details). For all the fits, we assume the MESA Isochrones and Stellar Tracks (MIST) stellar models \citep{choi17} and a \citet{chabrier03} initial mass function (IMF). \texttt{Prospector} has the ability to model (and fit) nebular emission line fluxes using the nebular line and continuum emission predictions provided by the Flexible Stellar Population Synthesis (FSPS) code, as described in \citet{byler17}. These predictions are based on interpolated \texttt{cloudy} \citep[version 13.03;][]{ferland13} photo-ionization models that were constructed on a grid of ionization parameter ($U$), gas-phase metallicity, and Single Stellar Population (SSP) age, using stellar spectra of the same age and metallicity as the ionizing source. The solar nebular abundances are taken from \citet{anders89}. 
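The six-bin lookback-time grid described above can be sketched as follows; the lookback time to $z=20$ used here is a made-up placeholder standing in for a proper cosmological calculation at the source redshift:

```python
import numpy as np

# Sketch: edges of the six SFH lookback-time bins. The first two
# bins are fixed at 0-5 and 5-10 Myr; the remaining four are
# log-spaced out to z=20. t_z20 is a hypothetical lookback time
# (in Myr) from the observed redshift to z=20, not a real value.
t_z20 = 450.0

inner = [0.0, 5.0, 10.0]
outer = np.logspace(np.log10(10.0), np.log10(t_z20), 5)[1:]  # 4 edges
edges = np.concatenate([inner, outer])  # 7 edges -> 6 bins
print(np.round(edges, 1))
```

The fit then constrains the five ratios of SFR between these six adjacent bins, rather than the SFR values themselves.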
The \texttt{cloudy} modelling used a constant electron density and the highest ionization parameter in the grid is $\log(U)=-1$, making it difficult to model extreme [OIII] ratios \citep[e.g.][]{ferland13, katz22}. Critically, the nebular emission is scaled by the number of ionizing photons predicted at each SSP age for the given SFH. Crucial assumptions in this model for the emission lines are that all emission lines are powered by star formation, the LyC escape fraction is zero, and there is no dust absorption of LyC. Under these assumptions the predicted nebular emission line fluxes are consistent with the ionizing flux of the modelled stellar population. As discussed in \citet[][see also \citealt{smith21_rt}]{tacchella22_Halpha}, these assumptions do not necessarily hold at these early epochs: LyC absorption by dust can be important (absorbing up to 50\% of the LyC photons during a starburst phase), about $5-10\%$ of the LyC photons are absorbed by helium, and a small fraction of the LyC can also escape the galaxy. However, we add additional flexibility in two ways. First, we allow the gas-phase and stellar metallicity to take on different values, which in detail means that the shape of the model ionizing spectrum may be different from that used to predict the nebular line ratios, though the normalisation (and hence Balmer line flux) is kept consistent. Second, we rescale all the model emission-line fluxes by multiplying them by $\text{f}_\mathrm{scale}$, a fittable nuisance parameter (we do not rescale the nebular continuum). This constant factor can account for flux calibration offsets with NIRSpec or, to first order, slit losses, differential magnification effects\footnote{Our sources are extended, consisting of several components. Different emission lines have probably different spatial extents \citep[][their Figure~6]{perez-montero07}. 
Since the magnification might not be constant across the source, emission lines from more compact regions could be more magnified, leading to a bias in the line ratios (``magnification bias''). This effect is probably negligible for Galaxy ID 06355 and ID 10612, which both have a very low magnification (Table~\ref{tab:galaxies}).} or physical effects like LyC escape or dust absorption. \subsection{Different runs} \label{subsec:runs} In addition to the fiducial \texttt{Prospector} setup described above, we run three further configurations in order to investigate how our results depend on the modelling assumptions as well as on the data included in the fit. Specifically, our fiducial approach simultaneously fits the NIRCam photometry and NIRSpec emission lines with the bursty-continuity prior (``EL \& bursty'' fits). Since the inferred stellar population parameters can significantly depend on the assumed SFH prior \citep{carnall19_sfh, leja19_nonparm, suess22_prior, tacchella22_quench, tacchella22_highz, whitler22}, we vary the SFH prior. Specifically, the bursty-continuity SFH prior assumes that $\Delta\log(\mathrm{SFR})$ between adjacent time bins is described by a Student's t-distribution with $\sigma=1.0$ and $\nu=2$. We also run fits with the standard continuity prior (``EL \& continuous'' fits), which instead assumes $\sigma=0.3$ and therefore weights against sharp transitions between the different time bins of the SFH \citep{leja19_nonparm}. In addition, to study the effect of including emission lines in the fitting, we fit those two SFH prescriptions to only the photometry, i.e. ignoring the emission line constraints (``phot-only \& bursty'' and ``phot-only \& continuous''). The key results of our fiducial fits are tabulated in Table~\ref{tab:galaxies}, while Table~\ref{tab:model_variation} shows the effects on the inferred stellar masses, SFRs and stellar ages when varying the SFH prior and including/excluding emission lines. 
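The practical difference between the two continuity priors can be quantified directly. For $\nu=2$ the Student's t CDF has a closed form, so the prior probability of a large jump between adjacent bins can be computed without sampling; a minimal sketch:

```python
import math

# Sketch: prior probability that |Delta log SFR| between adjacent
# time bins exceeds 1 dex under a Student's t prior with nu = 2,
# for which the CDF has the closed form
#   F(x) = 1/2 + x / (2 * sqrt(x**2 + 2)).
def p_burst(sigma, threshold=1.0):
    x = threshold / sigma
    sf = 0.5 - x / (2.0 * math.sqrt(x * x + 2.0))  # upper tail
    return 2.0 * sf  # two-sided

p_bursty = p_burst(sigma=1.0)      # bursty-continuity prior
p_continuous = p_burst(sigma=0.3)  # standard continuity prior
print(round(p_bursty, 3), round(p_continuous, 3))  # 0.423 0.079
```

Under the bursty prior, jumps of more than 1 dex between adjacent bins carry roughly five times more prior probability than under the standard continuity prior, which is why the latter disfavours sharp SFH transitions.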
We present and discuss these results in Section~\ref{sec:results}. We focus on the impact of including emission lines in the fitting in Section~\ref{subsec:EL} and discuss the SFH prior in Section~\ref{subsec:sfh_prior}. Figure~\ref{fig:sed} shows the goodness of our fiducial fits (i.e. fitting both photometry and emission lines with the bursty SFH prior): the top panels compare the inferred model and observed SEDs, while the bottom panels compare the individual emission lines. In the top panels, the observed NIRCam photometry obtained with \texttt{forcepho} is indicated with blue circles, while the inferred model photometry is shown as orange squares. The absolute values of $\chi$ for the individual bands are typically $<1$ and the total $\chi^2_{\rm tot}$ values are 3.1, 0.5 and 1.5 for Galaxy ID 04590, 06355 and 10612, respectively. The emission lines are overall well reproduced (bottom panels of Figure~\ref{fig:sed}) with $\chi^2_{\rm tot}=1.3$, 1.6 and 1.1, respectively. As mentioned in Section~\ref{subsec:nirspec}, we have masked [OIII]4364, H$\delta$, H$\zeta$, and [NeIII]3870 (plotted as grey points). Our inferred model indeed underpredicts the emission line [OIII]4364 by $50-250\%$, while H$\delta$ is underpredicted by $10-30\%$. Furthermore, the deblended lines H$\zeta$ and [NeIII]3870 are under- and over-predicted, respectively. Importantly, the bottom panels of Figure~\ref{fig:sed} include the nuisance parameter that rescales all the emission fluxes. We find that the emission lines predicted from the model need to be suppressed by $40-70\%$ in order to be consistent with the observed fluxes. Specifically, the rescale factors $f_{\rm scale}$ are $0.56_{-0.09}^{+0.11}$, $0.26_{-0.03}^{+0.06}$, and $0.51_{-0.06}^{+0.14}$ for Galaxy ID 04590, 06355 and 10612, respectively. Not including this nuisance parameter would lead to unphysically high stellar metallicities (i.e. $>\mathrm{Z}_{\odot}$) in order to suppress the ionizing photon production. 
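The role of $f_{\rm scale}$ can be illustrated with a weighted least-squares toy example. For a Gaussian likelihood the best-fit constant rescaling has a closed form; note that \texttt{Prospector} actually samples this nuisance parameter along with the rest of the posterior, and the fluxes below are made up:

```python
import numpy as np

# Sketch: best-fit constant rescaling of model emission-line fluxes.
# For a Gaussian likelihood, the chi^2-minimising scale factor is
#   f = sum(w * m * d) / sum(w * m^2),  with w = 1 / sigma^2.
# Fluxes are made up; in the actual fit f_scale is sampled by MCMC.
model = np.array([10.0, 4.0, 2.0, 1.0])                  # model lines
data = 0.5 * model + np.array([0.1, -0.05, 0.02, 0.0])   # observed
sigma = np.array([0.2, 0.1, 0.05, 0.05])                 # uncertainties

w = 1.0 / sigma**2
f_scale = np.sum(w * model * data) / np.sum(w * model**2)
print(round(f_scale, 2))  # close to 0.5: lines suppressed by ~50%
```

A recovered $f_{\rm scale}\approx0.5$ in this toy mirrors the $0.26-0.56$ values quoted above: the model lines must be suppressed to match the observed fluxes, absorbing slit losses, calibration offsets, or physical effects such as LyC escape.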
Our inferred rescale factors are physically plausible, given that slit loss can be significant. In particular, we measure the largest correction (i.e. smallest $f_{\rm scale}$) for Galaxy ID 06355, which is also the most extended source of our sample (Figures~\ref{fig:data4590}-\ref{fig:data10612}). We mention this as a possibility because we unfortunately do not have knowledge of the absolute astrometry and therefore the exact shutter position on the sky relative to the NIRCam imaging. In addition to slit loss, physical causes such as LyC absorption by dust or LyC escape can affect the scaling. Overall, this highlights that understanding slit-loss corrections is fundamentally important for shedding light on both the stellar populations and radiative transfer effects. Finally, we compare the goodness of the resulting fits with the four different setups introduced above. The current uncertainties and this small sample do not allow us to infer whether any of the models is preferred (i.e. the Bayes factor is inconclusive). However, we find that the bursty SFH prior consistently produces lower $\chi^2$ values than the standard continuity prior, both when including and when excluding the emission lines in the fitting: for Galaxy ID 04590, 06355 and 10612 the standard continuity prior gives $\chi^2_{\rm tot}=3.6$, 1.7 and 1.7 with emission-line constraints and $\chi^2_{\rm tot}=7.4$, 1.5 and 1.2 without. Including or excluding emission lines has only a small effect on the resulting $\chi^2$ values. \subsection{Magnification factors} \label{subsec:magnification} We use the magnification factors derived by \citet{curti22} for each galaxy (Table~\ref{tab:galaxies}). Briefly, \citet{curti22} adopted the publicly available lens model of \citet{mahler22} to derive the magnification corrections, which combines ancillary \HST with new \JWST/NIRCam data to better constrain the cluster mass distribution. 
For each object, \citet{curti22} derived the magnification maps for each target redshift and then obtained the fiducial magnification factors by averaging their values within a 1\arcsec-wide box around the central coordinates of each galaxy. Other lens models have been published in the literature \citep[e.g.,][]{pascale22, caminha22}. The magnification factors of the two weakly magnified objects (Galaxy ID 06355 and 10612) are robust, but we note that for the most strongly magnified object, Galaxy ID 04590, the different values obtained from the available models can impact the stellar mass and SFR determination for this source by up to $30-50\%$. \section{Results} \label{sec:results} After showing in the previous section that our fits are able to reproduce the observational data, we present and discuss here the key results for our fiducial \texttt{Prospector} run regarding the early stellar mass build-up (Section~\ref{subsec:sfh}) and enrichment (Section~\ref{subsec:enrichment}). We then discuss why self-consistent emission line modelling is important (Section~\ref{subsec:EL}) and what the effects are of changing the SFH prior (Section~\ref{subsec:sfh_prior}). Finally, we close by highlighting several caveats and future improvements (Section~\ref{subsec:caveats}). \subsection{Stellar mass build-up} \label{subsec:sfh} Figure~\ref{fig:SFH} plots the inferred SFHs and also gives the stellar masses and mass-weighted ages. The SFHs are rising for all three galaxies, which have similar stellar masses of about $10^8~M_{\odot}$, with Galaxy ID 04590 being the least massive ($M_{\star}\approx10^{7.9}~M_{\odot}$) and Galaxy ID 06355 being the most massive galaxy ($M_{\star}\approx10^{8.7}~M_{\odot}$). Galaxy ID 04590 has increased its SFR from $<0.1~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ 100 Myr ago to $>5~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ in the past 5 Myr, i.e. this galaxy is undergoing a recent burst. This is consistent with its young, mass-weighted age of $t_{50}=5_{-3}^{+102}$ Myr. 
We note that older ages are still possible and difficult to rule out because of the strong outshining effect due to the ongoing starburst. This young age is consistent with the morphological appearance, which indicates that this source is compact. Galaxy ID 06355 is going through an even larger burst of star formation. In this case, older ages can largely be ruled out\footnote{This galaxy has the reddest F444W-F356W colour, indicating strong emission lines.} and we inferred the youngest age of this sample, $3_{-1}^{+29}$ Myr. It is interesting to note that this is also the most extended and clumpy object (Figure~\ref{fig:data6355}), consistent with the idea that mergers or an internal gravitational instability triggered this intense starburst. Galaxy ID 10612 is slightly older, with an age of $t_{50}=7_{-4}^{+96}$ Myr. The SFH shows a hint of elevated star formation $30-60$ Myr ago. Together with the insight that this source is made up of two components (Figure~\ref{fig:data10612}), this is consistent with a picture in which we are witnessing a merger in progress. For comparison, \citet{carnall22} fitted the \HST and \JWST photometry with the SED fitting code \texttt{Bagpipes} \citep{carnall18}, inferring stellar masses (corrected for the different assumptions related to magnification) of $\log(M_{\star}/M_{\odot})=7.9_{-0.2}^{+0.3}$, $9.0_{-0.1}^{+0.1}$, and $8.2_{-0.1}^{+0.1}$, and mean stellar ages of $2_{-1}^{+10}$ Myr, $1.2_{-0.2}^{+0.3}$ Myr and $1.2_{-0.2}^{+0.3}$ Myr for Galaxy ID 04590, 06355 and 10612, respectively. The stellar masses are consistent within the uncertainties. The ages of \citet{carnall22} are at the lower end of our quantiles, but still consistent with our estimates. However, \citet{carnall22} seem to rule out any underlying older component for Galaxy ID 06355 and 10612, which contrasts with our estimates. It is not straightforward to pinpoint the cause for this difference, but it most probably has to do with the different SFH prescriptions. 
\citet{carnall22} assume a simple constant SFH, while we assume a more flexible SFH. In addition, different extraction methods of the photometry could also lead to some differences. Finally, \citet{carnall22} only fit the photometry, while we fit both photometry and emission lines. As shown in Section~\ref{subsec:EL}, we would, however, expect this to lead to the opposite trend: excluding emission lines typically leads to older ages (Table~\ref{tab:model_variation}). \subsection{Dust \& enrichment} \label{subsec:enrichment} Figure~\ref{fig:metallicity_comparison} investigates our inferred gas-phase and stellar metallicities. In particular, the top panel compares our gas-phase metallicities ($\mathrm{Z}_{\rm gas}$) from the \texttt{Prospector} SED fitting with the ones from \citet{curti22}, who used the direct $T_{\rm e}$ method enabled by the detection of [OIII]4363. Encouragingly, we find very good agreement between the two approaches, confirming the rather low gas-phase metallicity of Galaxy ID 04590 with $\mathrm{Z}_{\rm gas}=0.06~\mathrm{Z}_{\odot}$. We find that the gas-phase and stellar metallicities are marginally consistent, though we emphasise that the inferred stellar metallicities are uncertain. Galaxy ID 04590 has $\mathrm{Z}_{\rm gas}\lesssim\mathrm{Z}_{\star}$, while the other two have $\mathrm{Z}_{\rm gas}\gtrsim\mathrm{Z}_{\star}$. Figure~\ref{fig:metallicity_sfh} correlates the metallicities with the inferred stellar population and morphological parameters, including sSFR$_{10}$ (averaged over the past 10 Myr), dust attenuation $\mathrm{A}_{\rm V}$, SFR surface density $\Sigma_{\rm SFR}$, and sSFR surface density $\Sigma_{\rm sSFR}$. We do not find any convincing trend for $\mathrm{Z}_{\rm gas}$ with either sSFR$_{10}$ or $t_{50}$. Interestingly, the two galaxies (Galaxy ID 04590 and ID 06355) with an elevated $A_{\rm V}$ and $\Sigma_{\rm SFR}$ have very different gas-phase metallicities. 
A possible cause for this can be found in the right panel of Figure~\ref{fig:metallicity_sfh}: these two galaxies have very different sSFR surface densities, with $\Sigma_{\rm sSFR}=510_{-177}^{+261}~\mathrm{Gyr}^{-1}~\mathrm{kpc}^{-2}$ (Galaxy ID 04590) and $89_{-23}^{+57}~\mathrm{Gyr}^{-1}~\mathrm{kpc}^{-2}$ (Galaxy ID 06355), owing to the fact that Galaxy ID 04590 is a factor of $2-3$ more compact and lower in stellar mass than Galaxy ID 06355 (Figures~\ref{fig:data4590} and \ref{fig:data6355}). Galaxy ID 04590 is vigorously forming stars in a very compact configuration. This is consistent with a scenario in which Galaxy ID 04590 is undergoing rapid accretion of pristine gas, which leads to a steeply rising SFH and a high (s)SFR surface density. Despite having a low gas-phase metallicity, the high $A_{\rm V}$ can be explained by its compactness (i.e. high gas surface density\footnote{The dust-star geometry plays an important role in setting the attenuation \citep[e.g.,][]{zuckerman21}.}). Consistent with this rapid inflow of pristine gas is the tentative evidence of $\mathrm{Z}_{\rm gas}\lesssim\mathrm{Z}_{\star}$. The two other galaxies (Galaxy ID 06355 and 10612) with higher gas-phase metallicity show more complex multi-component morphologies on kpc scales, indicating that their recent increase in SFR is driven by mergers or internal gravitational instabilities. This is consistent with the FMR analysis of \citet{curti22}, where both Galaxy ID 06355 and 10612 are roughly consistent with the FMR at lower redshifts, while Galaxy ID 04590 deviates significantly, implying that it is far from the smooth equilibrium between gas flows, star formation and metal enrichment in place at later epochs. \subsection{The constraining power of emission lines} \label{subsec:EL} From a Bayesian modelling perspective, we want to include as much information as possible in order to constrain the posterior distribution as tightly as possible. 
We therefore want to fit both the broad-band photometry and the emission lines. The emission lines themselves include crucial information about the gas properties (e.g., metallicity, temperature, and density) and ionizing sources (stars and other sources), including the most recent SFR. Figure~\ref{fig:effect_EL} shows the posterior distribution of the fits including both photometry and emission lines (red; fiducial) and only photometry (blue) for Galaxy ID 06355. The results are summarised for all objects in Table~\ref{tab:model_variation}. An obvious difference is that the gas-phase metallicity is basically unconstrained when fitting only the photometry. In addition, since the relative line ratios are sensitive to the dust attenuation (in particular the Balmer series), the dust attenuation is more tightly constrained when including emission lines, which also affects the stellar age constraint via the rest-UV (although some degeneracy remains due to the flexible dust attenuation law). Relatedly, another large difference can be seen for the inferred SFHs: without emission line constraints, the inferred stellar ages are older since the most recent SFH is less well constrained. This then also leads to larger stellar masses (up to 1 dex in the case of Galaxy ID 04590), indicating that most inferred properties of the galaxies are affected by whether or not emission lines are included in the fits. \subsection{Effects of the SFH prior} \label{subsec:sfh_prior} Since all these galaxies seem to be undergoing a recent burst in star formation (Figure~\ref{fig:SFH}), young stellar populations dominate the SED and outshine older stellar populations. Therefore, a significant amount of stellar mass could be hidden and is difficult to rule out, which implies that the prior on the SFH is expected to play an important role. 
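To make concrete why the prior matters, both priors used here belong to the ``continuity'' family, which places a heavy-tailed (Student-t) prior on the $\log$ SFR ratios of adjacent time bins; widening its scale yields a ``bursty'' variant. The sketch below is illustrative only: the hyper-parameter values and bin structure are our assumptions, not the exact \texttt{Prospector} configuration used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_sfh(n_bins, scale, df=2, n_draws=2000):
    """Draw log10(SFR) trajectories whose bin-to-bin ratios follow a
    Student-t prior (the 'continuity' construction). A larger `scale`
    gives a 'bursty' prior that tolerates large jumps between bins.
    Hyper-parameter values here are illustrative."""
    ratios = scale * rng.standard_t(df, size=(n_draws, n_bins - 1))
    start = np.zeros((n_draws, 1))  # normalise the first bin to log SFR = 0
    return np.cumsum(np.concatenate([start, ratios], axis=1), axis=1)

continuity = sample_log_sfh(6, scale=0.3)   # standard continuity prior
bursty = sample_log_sfh(6, scale=1.0)       # bursty prior: wider ratio scale
```

Under the bursty prior the log SFR of the final bin scatters several times more widely than under the continuity prior, so steeply rising, burst-dominated histories carry substantially more prior weight.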
As mentioned in Section~\ref{subsec:runs}, we run \texttt{Prospector} with two SFH priors: our fiducial ``bursty'' prior, which allows for large variations between adjacent time bins, and the standard ``continuity'' prior, which down-weights extreme bursts. Figure~\ref{fig:effect_sfh_prior} shows the effect of the SFH prior on key inferred quantities, including the SFH, stellar mass, sSFR, dust attenuation, age, stellar metallicity and gas-phase metallicity. The results from the bursty and standard continuity prior are shown in red and blue, respectively. The same observational data, i.e. both photometry and spectroscopy, are fitted (``EL \& bursty'' and ``EL \& continuous'' runs in Table~\ref{tab:model_variation}). As expected, the SFH prior affects the inferred SFH: the continuous SFH prior leads to more stellar mass being formed early and a less steep rise at recent times. This leads to overall older ages, larger stellar masses and lower sSFRs than what is inferred with the bursty prior. Specifically, as tabulated in Table~\ref{tab:model_variation}, the stellar ages increase from $\lesssim10$~Myr to $\gtrsim100$ Myr, highlighting that it is difficult to rule out early star formation. Importantly, the $t_{50}$ posterior distributions of the two runs overlap, i.e. they are consistent with each other. The inferred stellar masses are also strongly affected: they increase by up to 0.6 dex. The stellar age constraints come from the rest-UV and the 4000\AA-break \citep[e.g.,][]{conroy13_rev}. Since we are using a flexible dust attenuation law (Section~\ref{subsec:prospector}), the rest-UV has only limited constraining power. The 4000\AA-break for our three galaxies at $z\sim7-9$ is probed by the broad-band photometry, but -- as shown in Figure~\ref{fig:sed} -- there is a degeneracy with the strength of the emission lines. But why are we not able to do better when including emission lines in the fitting? 
The problem is that our nuisance parameter, which rescales the emission line fluxes, is degenerate with the strength of the 4000\AA-break; i.e. our emission line measurements mainly constrain the gas-phase metallicity and the dust attenuation, which at second order leads to a tighter constraint on the stellar ages via the rest-UV. Further improvements (and a resolution of the degeneracy with slit losses) can be achieved by precisely determining the slit position (and performing spatially resolved SED modelling) and by directly probing emission lines with the photometry, which can be achieved with narrow and medium bands \citep[e.g.,][]{roberts-borsani21_jwst, tacchella22_highz}. Medium bands on and off the lines would allow a more direct measurement of the 4000\AA-break. \subsection{Caveats and outlook} \label{subsec:caveats} We list here the most important caveats in our analysis, which we hope to improve upon in the future. From the Bayesian SED modelling perspective, our goal is to include as much information as possible that can be described by our physical model. This means we want to fit both the NIRCam photometry and the NIRSpec emission line constraints. One fundamental challenge with this is that the photometry and spectroscopy could probe different regions of the galaxy. Specifically, the photometry attempts to include the total emission of the galaxy, while the spectroscopy only captures the light that falls within the shutter of the NIRSpec MSA. Slit losses affect most spectroscopic observations, but because not every object can be aligned perfectly with the NIRSpec MSA, slit losses may be a larger issue for these observations. Besides this technical challenge\footnote{Another technical challenge is that the level-3 data products of the current STScI pipeline lead to unphysical ratios between the Balmer lines (e.g. H$\upgamma$/H$\upbeta$; see, e.g., \citealt{schaerer22, curti22, trump22}) -- one key reason why we used the reprocessed data from \citet{curti22}. 
This challenge will hopefully be resolved in the near future.}, other observational and physical effects can also lead to a mismatch between the photometry and spectroscopy. On the observational side, differential magnification across the source can boost the flux of certain galactic sub-regions (typically the more compact ones), leading to a bias between the photometry and spectroscopy (as well as between certain emission lines). On the physical side, we assume in this work that the emission lines (and the nebular continuum) are powered by ionizing photons of the stellar populations, without taking into account LyC absorption by dust or LyC escape, which could be important, in particular for objects in the starburst phase \citep[e.g.,][]{tacchella22_Halpha}. In this work, we addressed these technical, observational and astrophysical issues by fitting for a nuisance parameter, $f_{\rm scale}$, which scales all the emission lines by a constant factor, mainly intended to account for slit loss. We find $f_{\rm scale}$ to be $0.56_{-0.09}^{+0.11}$, $0.26_{-0.03}^{+0.06}$, and $0.51_{-0.06}^{+0.14}$ for Galaxy ID 04590, 06355 and 10612, respectively. If this rescaling is ignored, we find extremely high values for the stellar metallicities ($\mathrm{Z}_{\star}>\mathrm{Z}_{\odot}$). This highlights that this factor is important when performing the fitting. Unfortunately, we cannot currently assess which of the above-mentioned factors (i.e. technical vs. observational vs. astrophysical) is the main cause driving $f_{\rm scale}$. In the future, improvements in the astrometry will allow us to assess the exact location of the shutter on the galaxies and thereby estimate the importance of slit losses (in combination with spatially resolved SED modelling). Furthermore, working with unlensed galaxies will resolve the issue of the differential magnification bias. Related to this is the assumption in our approach that stellar populations are driving the emission lines. 
Firstly, active galactic nuclei (AGN) and other non-stellar sources could power the emission lines as well as contribute to the rest-UV emission. Within \texttt{Prospector}, only the rest-frame mid-infrared emission of AGN can be modelled (i.e. dusty torus emission). There are on-going efforts to expand upon this. Secondly, a recent analysis by \citet{katz22} shows that the inclusion of high-mass X-ray binaries or a high cosmic ray background in addition to a young, low-metallicity stellar population can provide the additional heating necessary to explain the observed high [OIII]4364/[OIII]5007 ratio while remaining consistent with other observed line ratios. However, the nebular emission line modelling in \texttt{Prospector} currently does not extend to ionization parameters $\log U$ greater than $-1.0$ or to high enough electron temperatures (see Section~\ref{subsec:prospector}), making it difficult to model extreme [OIII] line ratios. In this work we address this issue by masking the [OIII]4364 emission line, though we plan to expand the parameter space of the nebular emission models in \texttt{Prospector} in the future. For completeness, we also mention the caveats related to stars. In particular, in this work we assume a solar abundance pattern scaled by the metallicity. However, there is an indication that galaxies at higher redshifts are more $\alpha$-enhanced (for $z\approx2$ results see \citealt{strom22}; but see \citealt{arellano-cordova22} for a recent analysis of these three galaxies, finding no evolution in this $\alpha$-element ratio). This is also theoretically expected given that some of the high-$z$ galaxies are old enough to have seen enrichment from intermediate-mass stars, but are still young enough that Type Ia supernovae have not had time to contribute significantly to their enrichment \citep[e.g.,][]{kriek16}. We further adopt the MIST stellar models \citep{choi17}, which include rotation, but not binarity. 
Investigating the effects of binary-based stellar models \citep[e.g.,][]{eldridge17} or of varying the IMF is beyond the scope of this work. Finally, this work investigates three galaxies, the only three $z>7$ galaxies that have both NIRCam and NIRSpec observations. Obviously, we cannot draw strong conclusions regarding the whole galaxy population from three objects. Although we think that galaxies have mostly increasing SFHs at these early times \citep[e.g.,][]{tacchella18, endsley21_ew}, the substantial bursts found in these galaxies might be a selection effect and not representative of all galaxies at $z\sim7-9$. Therefore, we stress that the main purpose of this work is to deliver interesting insights into the properties of these three galaxies by combining NIRSpec and NIRCam data, to discuss technical but important details of doing this, and to show how priors play an important role in driving some of the results. \section{Conclusions} \label{sec:conclusions} We present a careful ISM and stellar population analysis of the ERO NIRCam and NIRSpec data of three $z=7.6-8.5$ galaxies in the SMACS cluster field. These three galaxies have diverse morphologies (Figures~\ref{fig:data4590}-\ref{fig:data10612}), from being compact (Galaxy ID 04590) to being made up of several components (at least two components in Galaxy ID 10612 and four components in Galaxy ID 06355). We perform the photometry with the new photometry tool \texttt{forcepho}, a Bayesian model-fitting program that simultaneously fits multiple PSF-convolved S\'{e}rsic profiles to all filters. Within the \texttt{Prospector} framework, we fit a 13-parameter model to the NIRCam photometry and the NIRSpec emission lines. This physical SED model includes a flexible SFH, a multi-component dust model including a variable dust attenuation law, different gas-phase and stellar metallicities, a free ionization parameter for the nebular emission and a nuisance parameter that scales the emission line fluxes. 
The latter parameter is important to account for possible slit-loss effects and physical effects such as LyC dust absorption and escape. We find that this factor is important: model-based emission line fluxes need to be rescaled by factors of $0.3-0.6$. Overall, we find that we are able to reproduce the photometry and emission line measurements (Figure~\ref{fig:sed}). An exception is the emission line [OIII]4364, which is brighter than predicted by our modelling; this could be attributed to non-stellar sources powering this line, or to a conservative \texttt{cloudy} grid (too-low ionization parameter values). We infer for all three galaxies rising SFHs and stellar masses of $M_{\star}\approx10^{8}~M_{\odot}$ (Figure~\ref{fig:SFH} and Table~\ref{tab:galaxies}). These galaxies are all young, with mass-weighted ages of $t_{50}=3-7$ Myr. However, we find indications of underlying older stellar populations, implying that the SFHs extend over at least several tens of Myr. We emphasise that the SFHs, stellar masses and stellar ages depend on the adopted SFH prior (Figure~\ref{fig:effect_sfh_prior} and Table~\ref{tab:model_variation}): assuming a bursty SFH prior leads to younger, lower-mass galaxies, which is consistent with previous studies \citep{tacchella22_highz, whitler22}. Emission lines mainly help to constrain the gas-phase metallicity and ionization parameter, and have only a second-order effect on the inferred SFHs and ages (Figure~\ref{fig:effect_EL} and Table~\ref{tab:model_variation}). This is because of the aforementioned rescaling of the emission lines, i.e. we are fitting for the relative emission line strengths and not their absolute values. However, we still find that emission lines help with constraining the SFHs because they pin down the dust attenuation parameters, which in turn constrain the rest-UV emission -- another important age indicator. 
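The role of this rescaling can be sketched as a single multiplicative nuisance factor in a Gaussian emission-line likelihood. This is an illustration of the idea, not the exact \texttt{Prospector} implementation; the function name and call signature are ours.

```python
import numpy as np

def line_loglike(f_obs, f_err, f_model, f_scale):
    """Gaussian log-likelihood for a set of emission lines in which all
    model fluxes are multiplied by one nuisance factor f_scale, meant
    to absorb slit losses and LyC absorption/escape."""
    resid = f_obs - f_scale * f_model
    return -0.5 * np.sum((resid / f_err) ** 2
                         + np.log(2.0 * np.pi * f_err ** 2))
```

Because $f_{\rm scale}$ is free, only the relative line strengths inform the fit; an overall calibration offset between spectroscopy and photometry is absorbed rather than being forced into unphysical stellar metallicities.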
Focusing on metallicity, our SED fitting approach (masking the [OIII]4364 line) delivers gas-phase metallicities consistent with those from the direct method based on the [OIII]4364 line from \citet[][Figure~\ref{fig:metallicity_comparison}]{curti22}. We find no convincing trend between the gas-phase metallicity and the total sSFR (or stellar age) of the galaxies (Figure~\ref{fig:metallicity_sfh}). However, Galaxy ID 04590, with the lowest gas-phase metallicity, shows the highest star-formation intensity as measured by the sSFR surface density ($\Sigma_{\rm sSFR}\approx500~\mathrm{Gyr}^{-1}~\mathrm{kpc}^{-2}$), i.e. this galaxy has a very high SFR surface density ($\Sigma_{\rm SFR}\approx38~\mathrm{M}_{\odot}~\mathrm{yr}^{-1}~\mathrm{kpc}^{-2}$) for its stellar mass ($M_{\star}\approx10^8~\mathrm{M}_{\odot}$), indicating that it is vigorously forming stars in a very compact configuration. This is consistent with a scenario in which Galaxy ID 04590 is undergoing rapid accretion of pristine gas, which leads to a steeply rising SFH and a high (s)SFR surface density. Despite having a low gas-phase metallicity, the high $A_{\rm V}$ can be explained by its compactness (i.e. high gas surface density). Consistent with this rapid inflow of pristine gas is the tentative evidence of $\mathrm{Z}_{\rm gas}\lesssim\mathrm{Z}_{\star}$. The two other galaxies (Galaxy ID 06355 and 10612) with higher gas-phase metallicity show more complex multi-component morphologies on kpc scales, indicating that their recent increase in SFR is driven by mergers or internal gravitational instabilities. In summary, our work highlights the great potential of combining photometric with spectroscopic \JWST data to study early galaxy formation, as in the upcoming JADES survey \citep{rieke19}. \section*{Acknowledgements} We are grateful to Pierre Ferruit, Peter Jakobsen, and Nora L\"{u}tzgendorf for sharing their expertise on NIRSpec and the processing of its unique data. 
We thank Matt Auger for discussing magnification effects. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for \JWST. These observations are associated with programme \#2736. The authors acknowledge the SMACS ERO team led by Klaus Pontoppidan for developing their observing program with a zero-exclusive-access period. This work is supported by JWST/NIRCam contract to the University of Arizona, NAS5-02015, by the Science and Technology Facilities Council (STFC), by the European Research Council (ERC) Advanced Grant 695671 ``QUENCH'', by the ERC Advanced Grant INTERSTELLAR H2020/740120, and by the ERC Advanced Grant 789056 ``FirstGalaxies''. \section*{Data Availability} Derived data (including the reduced NIRCam images, photometry and \texttt{Prospector} posterior distributions) supporting the findings of this study are available from the corresponding author ST on request. Fully reduced NIRSpec spectra are publicly available at \url{https://doi.org/10.5281/zenodo.6940561}. 
\bibliographystyle{mnras} \appendix \section*{Affiliations} \noindent {\it $^{1}$Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, UK\\ $^{2}$Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge, CB3 0HE, UK\\ $^{3}$Center for Astrophysics $\vert$ Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA\\ $^{4}$Department of Astronomy and Astrophysics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA\\ $^{5}$Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa, Italy\\ $^{6}$AURA for the European Space Agency, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA\\ $^{7}$Department of Physics and Astronomy, University College London, Gower Street, London, WC1E 6BT, UK\\ $^{8}$Department for Astrophysical and Planetary Science, University of Colorado, Boulder, CO 80309, USA\\ $^{9}$Department of Astronomy and Astrophysics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 USA\\ $^{10}$Kavli Institute for Particle Astrophysics and Cosmology and Department of Physics, Stanford University, Stanford, CA 94305, USA\\ $^{11}$NSF's National Optical-Infrared Astronomy Research Laboratory, 950 North Cherry Avenue, Tucson, AZ 85719, USA\\ $^{12}$Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721, USA\\ $^{13}$Centro de Astrobiolog\'{i}a, (CAB, CSIC–INTA), Departamento de Astrof\'{i}sica, Cra. de Ajalvir Km. 
4, 28850 – Torrej\'{o}n de Ardoz, Madrid, Spain\\ $^{14}$European Space Agency, ESA/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, NL\\ $^{15}$Cosmic Dawn Center, Niels Bohr Institute, University of Copenhagen, Radmandsgade 62, 2200 Copenhagen N, Denmark\\ $^{16}$Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, School of Natural Sciences, The University of Manchester, Manchester, M13 9PL, UK\\ $^{17}$Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK\\ $^{18}$University of Massachusetts Amherst, 710 North Pleasant Street, Amherst, MA 01003-9305, USA\\ $^{19}$Observatoire de Gen\`{e}ve, Universit\'{e} de Gen\`{e}ve, Chemin Pegasi 51, 1290 Versoix, Switzerland\\ $^{20}$NRC Herzberg, 5071 West Saanich Rd, Victoria, BC V9E 2E7, Canada } \bsp % \label{lastpage}
Title: Hybrid Very Long Baseline Interferometry Imaging and Modeling with Themis
Abstract: Generating images from very long baseline interferometric observations poses a difficult, and generally not unique, inversion problem. This problem is simplified by the introduction of constraints, some generic (e.g., positivity of the intensity) and others motivated by physical considerations (e.g., smoothness, instrument resolution). It is further complicated by the need to simultaneously address instrumental systematic uncertainties and sparse coverage in the u-v plane. We report a new Bayesian image reconstruction technique in the parameter estimation framework Themis that has been developed for the Event Horizon Telescope. This has two key features: first, the full Bayesian treatment of the image reconstruction makes it possible to generate a full posterior for the images, permitting a rigorous and quantitative investigation into the statistical significance of image features. Second, it is possible to seamlessly incorporate directly modeled features simultaneously with image reconstruction. We demonstrate this second capability by incorporating a narrow, slashed ring in reconstructions of simulated M87 data in an attempt to detect and characterize the photon ring. We show that it is possible to obtain high-fidelity photon ring sizes, enabling mass measurements with accuracies of 2%-5% that are essentially insensitive to astrophysical uncertainties, and creating opportunities for precision tests of general relativity.
https://export.arxiv.org/pdf/2208.09003
\title{Hybrid Very Long Baseline Interferometry Imaging and Modeling with \themis} \correspondingauthor{Avery E. Broderick} \email{abroderick@perimeterinstitute.ca} \author[0000-0002-3351-760X]{Avery E. Broderick} \affiliation{ Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada} \affiliation{ Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada} \affiliation{ Waterloo Centre for Astrophysics, University of Waterloo, Waterloo, ON N2L 3G1 Canada} \author[0000-0002-5278-9221]{Dominic W. Pesce} \affiliation{Center for Astrophysics $|$ Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA} \affiliation{ Black Hole Initiative at Harvard University, 20 Garden Street, Cambridge, MA 02138, USA} \author[0000-0003-3826-5648]{Paul Tiede} \affiliation{ Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada} \affiliation{ Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada} \affiliation{ Waterloo Centre for Astrophysics, University of Waterloo, Waterloo, ON N2L 3G1 Canada} \author[0000-0001-9270-8812]{Hung-Yi Pu} \affiliation{ Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada} \author[0000-0003-2492-1966]{Roman Gold} \affiliation{CP3-Origins, University of Southern Denmark, Campusvej 55, DK-5230 Odense M, Denmark} \affiliation{ Institut f\"ur Theoretische Physik, Goethe-Universit\"at Frankfurt, Max-von-Laue-Stra{\ss}e 1, D-60438 Frankfurt am Main, Germany} \affiliation{ Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada} \keywords{Black hole physics --- Astronomy data modeling --- Computational astronomy --- Submillimeter astronomy --- Long baseline interferometry --- General relativity} \section{Introduction} \label{sec:intro} Generating 
images from radio wavelength very long baseline interferometric (VLBI) experiments presents a substantial computational and mathematical challenge. Generally, the quantity that is directly observed is related to the spatial Fourier transform of the image, i.e., the complex visibilities, $V(u,v)$, defined in the spatial frequency plane $(u,v)$. In principle, if $V$ were measured at all spatial frequencies (i.e., all points in the $u$-$v$ plane), and if it were well calibrated (i.e., the full complex $V$ were known), then image reconstruction would reduce to an inverse Fourier transform. In practice neither of the above conditions is met. First, even for experiments with the most densely sampled $u$-$v$ coverage possible, its finite extent precludes a unique image reconstruction. That is, the image becomes unconstrained on angular scales smaller than ${\sim}\lambda/u_{\rm max}$ -- this is simply the interferometric manifestation of the diffraction limit. However, more typical in VLBI is sparse $u$-$v$ coverage, consisting of densely sampled tracks and significant gaps in which no visibilities are measured. As a result, the lack of uniqueness can become significant, extending even to intermediate- and large-scale features. Second, and particularly within the context of millimeter-wavelength VLBI \citep[see, e.g.,][]{M87_PaperIII}, the full complex visibilities are typically not available. Atmospheric phase delays induce potentially large station-specific phase errors. Imperfect antenna and receiver performance results in station gain amplitude errors. Both of these may be ameliorated by careful treatment of correlations among baselines, e.g., self-calibration, or via the use of closure quantities, e.g., closure phase and/or closure amplitudes, or some combination thereof. Again, these complicate and potentially alter the uniqueness of possible image solutions. 
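The relation between an image and its measured visibilities can be made explicit with a direct Fourier sum over pixels. This is a sketch of the defining equation only; production codes use gridded FFTs or non-uniform FFTs for speed, and the function below is our own illustrative construction.

```python
import numpy as np

def sample_visibility(image, dx_rad, u, v):
    """Complex visibility of a square image at one spatial frequency
    (u, v), measured in wavelengths:
        V(u, v) = sum_jk I_jk * exp[-2 pi i (u x_j + v y_k)],
    where (x_j, y_k) are sky coordinates in radians and dx_rad is the
    pixel scale, with the phase centre at the middle pixel."""
    n = image.shape[0]
    x = (np.arange(n) - n // 2) * dx_rad          # sky coordinates
    xx, yy = np.meshgrid(x, x)
    return np.sum(image * np.exp(-2j * np.pi * (u * xx + v * yy)))
```

Note that $V(0,0)$ recovers the total flux, and a point source at the phase centre yields the same $|V|$ on every baseline -- which is why sparse $u$-$v$ sampling by itself cannot uniquely pin down image structure.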
Both of these are relevant for observations made by the Event Horizon Telescope (EHT), a global millimeter-VLBI array \citep[][hereafter Papers~I, II, and III, respectively]{M87_PaperI,M87_PaperII,M87_PaperIII}. Nevertheless, a number of efficient and high-fidelity methods have been developed to produce image reconstructions from interferometric data. These include the venerable CLEAN algorithm \citep{Hogbom_1974,Schwarz_1978,Clark_1980,Schwab_1984}, which works directly with the inverse Fourier transform of the calibrated visibility data (the so-called ``dirty image''). CLEAN attempts to model the image structure as a sum of point sources, whose locations and intensities are determined by iterative subtraction of appropriately scaled and shifted versions of the point spread function (the so-called ``dirty beam'') from the dirty image. This procedure is typically repeated with interleaving ``self-calibration'' steps that attempt to infer the station-specific gain and phase errors consistent with the modeled image. The resulting collection of point sources is subsequently smoothed to the ostensible diffraction scale, i.e., the radio ``beam,'' for display purposes, though the goodness-of-fit is computed directly from the point sources. A number of imaging methods have also been developed that attempt to forward model the image directly, traditionally using ``maximum entropy'' methods \citep[e.g.,][]{Frieden_1972,Gull_1978,Narayan_1986}, and more recently with ``regularized maximum likelihood'' (RML) methods \citep[e.g.,][]{Chael_2016,Chael_2018,Akiyama_2017a,Akiyama_2017b}. These algorithms have two main advantages over the traditional ``inverse modeling'' approach used by CLEAN: (1) they permit the imposition of general-purpose regularizers on the image structure, such as positivity, smoothness, or sparseness; and (2) they can fit directly to closure quantities rather than requiring complex visibilities. 
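The core of the CLEAN procedure described above can be sketched in a few lines. This is a textbook Hogbom loop under simplifying assumptions (a peak-normalised dirty beam twice the linear size of the image); real implementations add windowing, interleaved self-calibration, and a final restoring-beam convolution.

```python
import numpy as np

def hogbom_clean(dirty_image, dirty_beam, gain=0.1, niter=100, thresh=0.0):
    """Hogbom CLEAN: repeatedly locate the residual peak and subtract a
    scaled, shifted copy of the dirty beam, recording the subtracted
    flux as point-source 'CLEAN components'. Assumes `dirty_beam` is
    peak-normalised and twice the linear size of `dirty_image`, so a
    shifted cut-out always covers the image."""
    residual = dirty_image.astype(float).copy()
    components = np.zeros_like(residual)
    n = residual.shape[0]
    c = dirty_beam.shape[0] // 2                     # beam centre index
    for _ in range(niter):
        iy, ix = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[iy, ix]
        if peak <= thresh:
            break
        components[iy, ix] += gain * peak
        # beam cut-out shifted so that its centre lands on (iy, ix)
        residual -= gain * peak * dirty_beam[c - iy:c - iy + n,
                                             c - ix:c - ix + n]
    return components, residual
```

For display, the components are convolved with an idealised restoring beam at the diffraction scale, while goodness-of-fit is computed from the point-source components themselves.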
As with CLEAN methods, RML modeling may be iterated with self-calibration steps, during which the station-based gain terms may also be subject to prior constraints. Both CLEAN and RML methods have been used with considerable success to reconstruct images of M87 from the 2017 April EHT observations \citep[hereafter, Paper~IV]{M87_PaperIV}. In principle, the forward-modeling approach used by RML imaging methods makes them amenable to posterior exploration within a Bayesian framework, generating not just a single optimal fit but the full range of images consistent with the data. Access to the posterior is highly desirable because it enables a quantification of the non-uniqueness of the image reconstruction; i.e., it permits an estimation of the uncertainty in the image in both a formal and practical sense. Posterior inference can also be used to naturally assess choices of resolution and field of view (\fov) of the image, systematically addressing two key parameters often left to the experience of the end user. Previous efforts to perform Bayesian posterior exploration within the context of radio interferometric imaging have often focused on algorithms that function for connected-element arrays with excellent \uv-coverage and large FOVs \citep[e.g.][]{Sutter_2014}, motivating the development and use of sparsity priors \citep[e.g.][]{Cai_2018a,Cai_2018b} or resorting to single-point statistics that bypass the computationally taxing sampling \citep[e.g.][]{Greiner_2016,Junklewitz_2016,Naghibzadeh_2018}. Nearly all of these efforts construct likelihoods that assume perfectly calibrated complex visibility data subject only to Gaussian thermal noise, and those that do attempt to fit for calibration terms either do not pursue image reconstruction \citep[e.g.,][who fit parameterized source models]{Lochner_2015} or impose simplifying approximations about the structure of the posterior \citep[e.g.,][who use variational inference]{Arras_2019}. 
In this paper we present an imaging framework implemented within \themis, a flexible and highly parallel model comparison and parameter estimation code developed for the EHT \citep{themis}. \themis has already been used to compare geometric and general relativistic magnetohydrodynamic (GRMHD) models to EHT data obtained from the 2017 April observing campaign \citep[hereafter, Papers~V and VI, respectively]{M87_PaperV,M87_PaperVI}. It also provides a powerful set of existing sampling, modeling, and calibration tools, including samplers that can find and explore multi-modal likelihoods, and the ability to trivially generate new models by combining existing models \citep[see][for details]{themis}. In this way image reconstruction is treated in exactly the same standardized form as any other form of model fitting. By leveraging the ability of \themis to explore image posteriors, we gain access to direct estimates of the pixel uncertainties and their spatial correlations throughout the image. Perhaps even more importantly, the implementation of a \themis-based image model permits the self-consistent introduction of additional parameterized model components to the image. Such components can include those on scales smaller than the image pixel size or larger than the image \fov, enabling the search for and characterization of features that may be infeasible to image but which may nevertheless be constrained by the data in the Fourier domain. Here we present a non-parametric image model and example image reconstructions with and without additional geometric model components. In particular, we demonstrate the ability to faithfully reconstruct bright ring-like features within images from GRMHD simulations appropriate for M87, and thus provide both a validation of the general underlying physical picture and a precision measure of the central supermassive black hole mass in M87 that is less sensitive to astrophysical uncertainties than prior methods. 
We summarize key relevant features of \themis in Section \ref{sec:themis}, present the image model in Section \ref{sec:imaging}, provide the introduction and validation of additional geometric features in Sections \ref{sec:hybrid} and \ref{sec:photon_rings}, respectively, and collect conclusions in Section \ref{sec:conclusions}. \section{Summary of relevant \themis properties} \label{sec:themis} \themis is a Bayesian model comparison and feature extraction framework developed for the EHT \citep{themis}. Its modular design enables rapid development in most facets of the model analysis procedure. Two aspects are particularly relevant here: the ability to rapidly add new image-based models and to implement new samplers. All work reported here makes use of the parallel-tempered, differential-evolution Markov chain Monte Carlo (MCMC) sampler. This is an affine-invariant, ensemble MCMC scheme, and thus adapts its proposal distribution to the structure of the posterior automatically. Multiple tempering levels are used to efficiently locate and explore multi-modal likelihood distributions. This sampler has been demonstrated with prescribed/known likelihoods containing thousands of independent peaks \citep[see Section 6.1.2 of][]{themis}. However, it is well known that this class of samplers can struggle when the number of parameters exceeds ${\sim}100$ \citep{Huijser+2015}. While this sampler's performance is sufficient for current EHT applications, additional sampler development is warranted prior to applications with a larger separation between the ostensible image resolution and \fov. Currently, the outputs of this sampler are MCMC chains that sample the full posterior probability distribution of the parameters of a given model. \themis already has a number of simple geometric models implemented. These include asymmetric Gaussians and slashed rings, i.e., annuli with a linear brightness gradient across the ring \citep[see Section 4.2 of][]{themis}.
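The differential-evolution proposal underlying such ensemble samplers is simple to state: a walker is perturbed along the difference of two other walkers. A minimal sketch (illustrative only; the \themis sampler adds parallel tempering, randomized walker selection, and acceptance logic):

```python
import numpy as np

def de_proposal(ensemble, i, a, b, gamma=None):
    """Differential-evolution proposal for walker i of an ensemble
    (shape: n_walkers x n_dim): x' = x_i + gamma * (x_a - x_b), with
    a, b distinct walkers != i. gamma defaults to the standard
    2.38 / sqrt(2 d) scaling."""
    d = ensemble.shape[1]
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)
    return ensemble[i] + gamma * (ensemble[a] - ensemble[b])
```

Because the proposal is built from differences of ensemble members, it transforms consistently under any affine change of variables, which is the sense in which the scheme tunes its proposal distribution to the posterior without user intervention.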
Methods for dynamically generating new image-based models from existing models extend these with minimal effort. A key example is the ability to sum images to generate new models; note that this sum is performed after visibility construction, and thus does not impose resolution restrictions from one model on others. Additional image-based models may be implemented simply, requiring only knowledge of the interface while remaining otherwise agnostic regarding the remainder of \themis. \themis also incorporates the ability to mitigate key systematic uncertainties in EHT observations. Chief among these are the uncertain station gains, associated with atmospheric propagation and absorption. Phase delays are naturally addressed by the use of closure phases. An efficient and accurate method for reconstructing station gain amplitudes, even in large numbers, has been implemented \citep[see Section 5 of][]{themis}. A simple gain model is effectively marginalized over after the construction of likelihoods, permitting \themis to reconstruct many gains without creating additional demands on the sampler. Here we present analyses of various simulated data sets produced with $u$-$v$ coverages identical to, and systematic error budgets similar to, those of the 2017 observations of M87. The analyses are performed using the visibility amplitudes and closure phases, with the standard signal-to-noise (\SNR) restrictions and adopting the minimally covariant closure phase set \citep{themis,Blackburn+2019}. In all cases we simultaneously reconstruct the individual station gain amplitudes while fitting image models as described in \citet{themis} and demonstrated in \citetalias{M87_PaperVI}. \section{Imaging with \themis} \label{sec:imaging} The line between ``imaging'' and ``modeling'' is conceptually blurred, relying on a qualitative distinction between models that prescribe a specific class of features and those that do not.
We approach imaging with \themis as a model-fitting endeavor through the use of an effectively non-parametric image model. In this case, we use the term ``non-parametric'' loosely; the model is formally parametric (see \autoref{sec:RasterModel} for specifics), but the form of this parameterization is chosen to be flexible and to avoid steering the model toward any particular image structure. We begin with a description of the underlying image model, discuss how resolution and \fov are set in a systematic fashion, and then present validation examples. \subsection{Bicubic raster model} \label{sec:RasterModel} The image model\footnote{This model is implemented within \themis using the \texttt{model\_image\_splined\_raster} class.} is generated in two steps: (1) amplitudes are selected for each of a number of ``control points,'' and (2) those amplitudes are convolved with a bicubic smoothing kernel. First, $\Npx$ control points are placed at the vertices of a uniform square grid contained within a pre-defined \fov, as shown in \autoref{fig:rastercartoon}. These control points act as image pixels, the amplitudes of which are the primary model parameters; we enforce positivity at these control points by modeling the amplitudes logarithmically. Given an \fov and a pixel number per side $N_{x,y}$ (such that $\Npx=N_x\times N_y$), we set the control point positions $(l_i,m_j)$ and intensities $I_{ij}$ to be \begin{equation} l_i = s_x i,~~ m_j = s_y j,~~ I_{ij} = e^{p_{ij}}, \label{eqn:ControlPoints} \end{equation} where $s_{x,y}=\fov/(N_{x,y}-1)$. We restrict the images considered in this paper to having $N_x = N_y = \sqrt{\Npx}$, such that the images are square and $s_x = s_y$. Next, the grid of control points is convolved with a bicubic interpolation kernel to generate intensities that smoothly vary in all directions.
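The control-point construction of \autoref{eqn:ControlPoints} is straightforward to transcribe; the following sketch (a hypothetical helper, not the \texttt{model\_image\_splined\_raster} implementation) builds the grid positions and exponentiated intensities from the log-amplitude parameters $p_{ij}$:

```python
import numpy as np

def raster_control_points(p, fov):
    """Control-point positions and intensities for an N x N raster:
    l_i = s*i, m_j = s*j with s = fov/(N-1), and I_ij = exp(p_ij),
    so that intensities are strictly positive by construction."""
    n = p.shape[0]
    s = fov / (n - 1)            # control-point spacing
    l = s * np.arange(n)         # l_i coordinates
    m = s * np.arange(n)         # m_j coordinates
    return l, m, np.exp(p)
```

For example, the fiducial $N_x=6$, $\fov=60~\mu$as raster selected below corresponds to a $12~\mu$as control-point spacing.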
From the $I_{ij}$ values specified in \autoref{eqn:ControlPoints}, we generate the actual image intensity at every $(l,m)$ using \begin{equation} I(l,m) = \sum_{i,j} w(l-l_i) w(m-m_j) I_{ij} , \label{eqn:ImageDomainConvolution} \end{equation} where \begin{equation} w(x) = \begin{cases} 0 & -2\ge x\\ b\left[ -x^3 - 5x^2 - 8x - 4 \right] & -1\ge x>-2\\ - (b+2) x^3 - (b+3) x^2 + 1 & 0\ge x>-1\\ (b+2) x^3 - (b+3) x^2 + 1 & 1 \ge x > 0\\ b\left[ x^3 - 5x^2 + 8x - 4 \right] & 2 \ge x > 1\\ 0 & x\ge2 \end{cases} \label{eq:bciweight} \end{equation} is the 1D bicubic interpolation kernel and $b$ is a control parameter that affects the monotonicity of the interpolation. For all of the images presented in this paper, we have set $b=-0.5$. We note that this bicubic interpolation can in principle result in negative values for $I$ at some $(l,m)$, despite the strictly positive nature of $I_{ij}$. These departures from positivity are controlled by the value of $b$, and in practice we have found that setting $b=-0.5$ ensures that any negative excursions are typically very small. In practice, we perform the convolution described by \autoref{eqn:ImageDomainConvolution} in the visibility domain. That is, we set \begin{equation} V(u,v) = \sum_{i,j} W(2\pi s_x u)W(2\pi s_y v) e^{2\pi i (l_i u + m_j v)} I_{ij} \,, \label{eq:V} \end{equation} in which \begin{multline} W(k) = - \frac{4}{k^3} \sin(k) \left[2b\cos(k)+(4b+3)\right]\\ + \frac{12}{k^4} \left\{ b \left[ 1-\cos(2k) \right] + 2\left[1-\cos(k)\right] \right\}, \label{eq:bcifour} \end{multline} is the Fourier transform of $w(x)$ (see \autoref{app:CubicInterpolation}). For modest values of $N_x$, such as those presented here and relevant for the EHT, $V(u,v)$ is most rapidly computed via the direct sum in \autoref{eq:V}; fast Fourier transforms become advantageous only for $N_x\gtrsim10^2$. In the absence of absolute phase information, as occurs here, the image is defined only up to an arbitrary translation. 
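The interpolation properties of the kernel in \autoref{eq:bciweight} are easy to verify numerically: $w(0)=1$, $w(\pm1)=w(\pm2)=0$, and the integer-shifted copies sum to unity for any $b$, so the raster reproduces spatially constant images exactly. A sketch (illustrative; it exploits the kernel's symmetry in $|x|$):

```python
import numpy as np

def w(x, b=-0.5):
    """1D bicubic interpolation kernel, written in terms of |x| by
    symmetry of the piecewise definition; b controls monotonicity."""
    ax = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(ax)
    inner = ax < 1                      # |x| < 1 branch
    outer = (ax >= 1) & (ax < 2)        # 1 <= |x| < 2 branch
    out[inner] = (b + 2) * ax[inner]**3 - (b + 3) * ax[inner]**2 + 1
    out[outer] = b * (ax[outer]**3 - 5 * ax[outer]**2 + 8 * ax[outer] - 4)
    return out
```

These properties guarantee that the image passes exactly through the control-point intensities $I_{ij}$ while varying smoothly in between.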
However, the sparse nature of the available control points coupled with the assumed lack of flux outside of the prescribed \fov typically forces the image toward a single location, alleviating this concern. Thus, we make no attempt to fix the location of the image, e.g., by setting the center of light or imposing a prior on the spatial distribution of flux in the image. \subsection{Pixel size and FOV selection} \label{sec:resolution} \begin{deluxetable*}{lccccc} \tablecaption{Dependence of the BIC (left entry) and AIC (right entry) on $N_x$ and \fov for a simulated data set designed to approximate the 2017 April 11 EHT observations of M87}\label{tab:Npxfov} \tablehead{ \colhead{} & \multicolumn{5}{c}{FOV ($\mu$as)} \vspace{-0.15in}\\ \colhead{} & \multicolumn{5}{c}{\rule{6.5in}{0.4pt}} \vspace{-0.100in}\\ \colhead{$N_x$} & \colhead{40} & \colhead{60} & \colhead{80} & \colhead{100} & \colhead{120} } \startdata 4 & 222 / 244 & $(2.02$ / $ 2.04)\times10^{3}$ & $ (3.24$ / $ 3.24)\times10^{4}$ & $ (5.38$ / $ 5.38)\times10^{5}$ & $ (2.12$ / $ 2.12)\times10^{6}$ \\ 6 & 31.0 / 31.0 & {\bf 0 / 0} & 106 / 106 & 862 / 862 & $ (5.81$ / $ 5.81)\times10^{3}$ \\ 8 & 217 / 207 & 178 / 168 & 175 / 165 & 169 / 159 & 246 / 236 \\ 10 & 446 / 471 & 408 / 433 & 402 / 427 & 394 / 419 & 390 / 415 \\ 12 & 731 / 902 & 689 / 859 & 689 / 860 & 670 / 840 & 672 / 843 \\ \enddata \tablecomments{Both information criteria are quoted relative to their minima, which occur for $N_x=6$, $\fov=60~\mu$as (in bold).} \end{deluxetable*} Within the context of VLBI, the smallest and largest recoverable image structures are approximately determined by the longest and shortest baselines in the array, respectively. 
Information about the source structure on spatial scales that are larger than those probed by the shortest baselines, on scales that are smaller than those probed by the longest baselines, or indeed on any scales that are not directly measured by the array, must therefore be guided by the image model. For our purposes, the relevant model hyperparameters are the control point separation (i.e., the pixel size) and the \fov, neither of which has a unique \textit{a priori} specification. Other imaging methods have to contend with this same issue, but it is common practice to simply employ some rule of thumb when selecting these hyperparameters. In our image modeling procedure, we would ideally pursue a strategy for selecting unique and data-driven values for both $N_x$ and \fov via Bayesian evidence maximization. In practice, we approximate the Bayes factors using the Bayesian information criterion (BIC), which explicitly penalizes additional parameters: \begin{equation} \BIC = -2\mathcal{L}_{\rm max} + k \log( N_{\rm data} ) . \end{equation} Here, $\mathcal{L}_{\rm max}$ is the logarithm of the maximum likelihood, $k$ is the number of model parameters, and $N_{\rm data}$ is the number of data points. We also make use of the Akaike information criterion (AIC), defined by \begin{equation} \AIC = -2\mathcal{L}_{\rm max} + 2k + \frac{2k(k+1)}{N_{\rm data}-k-1}. \end{equation} Both the BIC and the AIC are listed in \autoref{tab:Npxfov} for a simulated data set constructed to approximate the 2017 April EHT observations of M87. Both information criteria converge on the same optimal values for \fov and $N_x$, yielding a natural choice for both hyperparameters. We note that the selected \fov and $N_x$ values remain subject to our assumption of a raster geometry. For example, we do not explore asymmetric or disconnected grids here. 
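Both criteria are inexpensive to evaluate once the maximum log-likelihood is known; a direct transcription of the two definitions (hypothetical helper functions, with $\mathcal{L}_{\rm max}$ the maximum log-likelihood):

```python
import math

def bic(lnL_max, k, n):
    """Bayesian information criterion: -2 lnL_max + k ln(n_data)."""
    return -2.0 * lnL_max + k * math.log(n)

def aicc(lnL_max, k, n):
    """Akaike information criterion with the small-sample correction
    term 2k(k+1)/(n_data - k - 1)."""
    return -2.0 * lnL_max + 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)
```

At fixed fit quality, both criteria penalize larger parameter counts, which is what drives the selection of the smallest adequate raster in \autoref{tab:Npxfov}.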
The inferred resolution ($10~\mu$as) and extent of the emission region are very similar to those found by other forward-modeling approaches to image reconstruction applied to the same simulated data set (\citetalias{M87_PaperIV}). This implies a modest degree of superresolution, by roughly a factor of two, relative to the ostensible resolution of the EHT, approximately $20~\mu$as. \subsection{Image reconstructions from synthetic data} \label{sec:ImageReconstructions} In \autoref{fig:imagerecons} we present image reconstructions from a variety of simulated data sets with realistic $u$-$v$ coverage and noise for the 2017 April EHT observations of M87. To facilitate comparison with other EHT imaging methods, we use the same simulated data sets for which reconstructions were presented in Figure~10 of \citetalias{M87_PaperIV}, which incorporate realistic sources of systematic error, including large gain amplitude variations at a single station (the Large Millimeter Telescope). These images were selected both to characterize the ability of the imaging procedures to reconstruct and probe ringlike structures (e.g., Ring, Crescent, GRMHD) and to assess their ability to distinguish ringlike from non-ringlike structures (e.g., Disk and Double). Prior to fitting, the data were pre-processed in the manner described in \citetalias{M87_PaperVI}; that is, we scan-averaged the visibility data and added a 1\% systematic error contribution (consistent with the magnitude of the non-closing errors measured in \citetalias{M87_PaperIII}) in quadrature to the thermal errors prior to constructing closure quantities. We jointly fit to the closure phase and debiased visibility amplitude data products from both the high and low frequency bands, and we excluded all data points having \SNR below 2.
To accommodate potential structures that are resolved on all but the intrasite baselines \citepalias[e.g., those confined to Hawaii;][]{M87_PaperIV,M87_PaperVI}, all fits included a large-scale Gaussian component (see \autoref{sec:LargeScaleGaussian}). In all cases we initialize the image model with a broad Gaussian emission structure, though subsequent exploration renders the details of the initial parameter values moot. All tests were run multiple times, and their MCMC chains have reached the well-mixed regime. Good fits are found for all reconstructions, with the maximum likelihood fit achieving a $\chi^2$ per degree of freedom of $387/360$, $385/360$, $472/360$, $370/360$, and $330/360$ for the Ring, Crescent, Disk, Double, and GRMHD tests, respectively. An example set of residual plots is shown in \autoref{fig:M87residuals} for the GRMHD test, and we see no obvious structure or poorly fit sections of data. To facilitate comparisons with other methods used for EHT image reconstruction, for each test we computed the equivalent blurring kernel defined and presented in Table~4 of \citetalias{M87_PaperIV}. We find similar values to those reported there: $14.8~\mu$as, $14.6~\mu$as, $28.0~\mu$as, and $12.5~\mu$as, for the Ring, Crescent, Disk, and Double tests, respectively. Despite the rectilinear nature of the control point gridding, a surprising variety of images is possible, as can be seen in \autoref{fig:imagerecons} (e.g., the reproduction of nearly circular Gaussian components in the Double test). On the smallest angular scales accessible to the model -- i.e., the $10~\mu$as inter-pixel spacing -- we see imperfections in the maximum a posteriori reconstruction (e.g., knots on the Crescent, asymmetry in the components of the Double) associated with the limited \uv-coverage. 
Smoothing the models with a $15~\mu$as FWHM Gaussian kernel mutes these small-scale structures and highlights the larger scales, for which the agreement between model and truth is more visually apparent (see the bottom two rows of \autoref{fig:imagerecons}). However, we emphasize that even the apparent discrepancies in the unsmoothed model are statistically consistent with the truth, within the confidence levels claimed by the posterior. That is, the locations and magnitudes of the various small ``artifacts'' are no more than expected given the data-driven uncertainties in each pixel's amplitude, and the myriad realizations of all possible small-scale discrepancies are fully captured by the posterior distribution (see \autoref{sec:ImagePosteriors}). All of the image reconstructions shown in \autoref{fig:imagerecons} recover the scale and morphology of the truth images well. This includes the sizes of and separations between features, a brightness gradient when present (and its absence when not), and the overall image orientations. Of particular relevance for detecting black hole shadows with the EHT is the ability to detect ring-like structures with high fidelity, that is, to reproduce rings when present in the underlying truth image but convincingly rule them out when they are not. The overall flux normalization is a function of the station gain amplitude reconstruction, and thus subject to the associated priors. This limits the flux reconstructions to a precision of roughly 20\% and is responsible for differences in the total brightness of some image reconstructions, e.g., the Disk test.
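For reference, the display smoothing amounts to convolution with a normalized circular Gaussian whose standard deviation is $\mathrm{FWHM}/(2\sqrt{2\ln 2})$. A toy FFT-based sketch (periodic boundaries, pixel units; not the exact procedure used for the figures):

```python
import numpy as np

def smooth_fwhm(img, fwhm_px):
    """Convolve an image with a unit-sum circular Gaussian of the given
    FWHM (in pixels), via FFTs with periodic boundaries."""
    sigma = fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    ny, nx = img.shape
    # Gaussian kernel centered on the (0, 0) corner, FFT convention.
    y = np.fft.fftfreq(ny) * ny
    x = np.fft.fftfreq(nx) * nx
    r2 = y[:, None]**2 + x[None, :]**2
    kern = np.exp(-0.5 * r2 / sigma**2)
    kern /= kern.sum()                      # conserve total flux
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern)))
```

Because the kernel is normalized, the total flux is preserved while peak brightnesses are reduced, which is why smoothed and unsmoothed rows of the figure can be compared directly.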
\subsection{Image posteriors} \label{sec:ImagePosteriors} In practice, image parameter exploration as described in the previous sections is far more computationally expensive than the methods that have been used to date to generate images from EHT observations, which typically perform a constrained optimization of some comparison metric between image and data \citepalias[see][]{M87_PaperIV}. This increased expense is a consequence of the general parameter estimation paradigm, which considers the best fit to be less important than the full set of compatible fits. Thus, a key feature of image reconstruction within our modeling framework is the generation of a posterior distribution for all image parameters, which can be usefully conceptualized as an ``image posterior.'' The image posterior can be used to quantitatively address a variety of questions about a given image reconstruction. Note that, despite the absence of absolute phase information, we do find that the image centroids are well fixed by the FOV combined with the modest number of pixels employed. As such, we neglect the potential contributions to the statistics we present here from translations of the reconstructed image. A common question in radio interferometric imaging pertains to the reliability (or ``believability'') of specific image features. In \autoref{fig:fitfamily} we show a selection of 15 reconstructed images taken from the MCMC chain for the GRMHD test image (fifth column of \autoref{fig:imagerecons}). Ten of these images (shown in the middle two rows of the figure) are sampled directly from the posterior distribution, such that the magnitude and frequency of any variations seen across the images mirror those present in the image posterior (e.g., a feature seen in two of the 10 images indicates that $\sim$20\% of the image posterior space is consistent with that feature). 
Additionally, the bottom row of \autoref{fig:fitfamily} shows the five most extreme\footnote{The degree of extremity here is quantified for any single image using the L2 norm of that image relative to the average image from the sample.} outlier images contained within $10^4$ elements randomly drawn from the MCMC chain; these images highlight image morphologies that live on the fringe of what the data will permit. Single-point statistics computed from the image posterior can be used to produce ``typical'' images and to provide pixel-specific error budgets. The top row of \autoref{fig:fitfamily} shows examples of such statistics in the form of the mean and standard deviation of the image posterior. We find that the standard deviation map is highly inhomogeneous, varying substantially across the image domain; such behavior is both expected and frequently observed in radio interferometric imaging \citepalias[see, e.g., Fig.~17 of][]{M87_PaperIV}, although it is often summarized as a single global rms value assumed to be shared by all pixels. Capturing this inhomogeneity is potentially critical for accurate image assessment. Perhaps most striking is the clear ringing present within the standard deviation map, associated with the control point structure. The origin of this ringing is elucidated by the pairwise marginal posterior distributions for pixels across the image, shown in \autoref{fig:crosspx} for a single row and column. The nodes in the standard deviation map are associated with a set of sinusoidal oscillations with a wavelength of ${\sim}20~\mu$as, corresponding to the maximum spatial frequencies accessible to the 2017 EHT \uv-coverage. For M87 in particular, limited north-south \uv-coverage (see Figure~12 in \citetalias{M87_PaperIII}) increases the uncertainty in the mode amplitude in that direction. In contrast, we find little to no correlation between pixels in the east-west direction, where the 2017 EHT \uv-coverage was more complete for M87.
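The single-point statistics and the L2-norm outlier ranking described above reduce to simple operations on the stack of posterior image samples; a sketch with hypothetical helper functions:

```python
import numpy as np

def posterior_image_stats(samples):
    """Pixel-wise mean and standard-deviation maps from an array of
    posterior image samples with shape (n_samples, ny, nx)."""
    return samples.mean(axis=0), samples.std(axis=0)

def rank_outliers(samples, n=5):
    """Indices of the n most extreme samples, ranked by the L2 norm of
    each image relative to the sample-mean image."""
    mean = samples.mean(axis=0)
    dist = np.sqrt(((samples - mean) ** 2).sum(axis=(1, 2)))
    return np.argsort(dist)[::-1][:n]
```

Higher-order ($n$-point) statistics follow in the same way from the sample stack, e.g., pairwise pixel covariances of the kind shown in \autoref{fig:crosspx}.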
That is, the joint pixel flux distributions indicate both the presence of the expected resolution limit and the magnitude of the uncertainties this limit induces in the reconstructed image (roughly 10\% in this case). With access to $n$-point statistics provided by the image posterior, arbitrarily sophisticated image interrogation procedures could be developed, such as metrics to assess how likely it is that the image contains a ring-like feature. The ability to generate statistically rigorous posteriors is also critical for applications such as presented in \autoref{sec:hybrid}, where rigidly parameterized modeling is performed simultaneously with image reconstructions. \section{Hybrid imaging with \themis} \label{sec:hybrid} Visibility-domain modeling of VLBI data can be a powerful tool for robust parameter estimation, but it assumes that the underlying source structure is well-described by the model under consideration. This assumption is rarely satisfied for simple models when the data have high \SNR, at which point it becomes necessary to include additional and flexible model complexity to account for deviations between the desired model structure and the true source structure. For a primary EHT science target, M87, the additional complexities have been associated with some combination of stochastic fluctuations in the emission region (as caused by, e.g., turbulence) and uncalibrated systematic uncertainties in the data. The flexibility provided by imaging has played a critical role in constraining the nature of these deviations and in guiding the introduction of ad hoc model features \citepalias[e.g., the ``nuisance Gaussian components'' used alongside the geometric crescent models in][]{M87_PaperVI} to address them. 
The implementation of imaging within a modeling framework in the previous section enables a novel path forward toward realizing the best elements of both approaches: (a) modeling known features and thus accessing substantial improvements in their parameter estimation, and (b) imaging the remainder of the source structure, yielding an otherwise agnostic view of the stochastic aspects of the image. We refer to this simultaneous modeling and imaging approach as ``hybrid imaging.''\footnote{Note that the ``hybrid imaging'' presented in this paper is distinct from the ``hybrid mapping'' of, e.g., \citet{Baldwin_1978} and \citet{Pearson_1984}, which refers to an iterative procedure for reconstructing images without phase information.} There are two key elements to successful hybrid imaging. First, because a primary goal is the accurate reconstruction of the parameters of the model components, the ability to explore the posterior distribution for the full image/model combination is required. Where simultaneous exploration is not possible -- e.g., when modeling separately from or iteratively with finding best-fit images -- the sampler will only have restricted access to the parameter space and the reconstructed posteriors will not necessarily be reliable. Thus, the ability to construct full image posteriors as described in the previous section is critical. Second, some ordering parameter in which the model and image components can be distinguished is highly desirable if not strictly necessary. This ordering may be accomplished by enforcing a separation in angular scale, variability, polarization properties, wavelength, etc. Where features in the image can be reconstructed independently by either the model or the image component, a degeneracy is induced between the two. There are many problems for which these components are well separated in a natural way, and in this section we describe two such cases that are of particular relevance to EHT applications.
\subsection{Large-scale flux} \label{sec:LargeScaleGaussian} The primary EHT targets exhibit structure over a wide range of angular scales, including on those much larger than the ${\sim}200$\,\uas \fov of the EHT images. This large-scale image structure impacts only the shortest (intrasite, $<10$~km) baselines while having little effect on intermediate and long baselines \citepalias[see, e.g.,][]{M87_PaperIV,M87_PaperVI}. Nevertheless, these short baselines provide important constraints on station gains and removing them would thus be undesirable. To address the intrasite baseline flux excess, it has been found sufficient to include a large-scale ($\text{FWHM} \gtrsim 1$\,mas) Gaussian model component in the image generation and model comparison exercises. A Gaussian is chosen simply to have a structure-agnostic model component with a specifiable size scale. This component is naturally separated in spatial scale from the much more compact ($\text{FWHM} < 100$\,\uas) image features produced in the reconstructions (see \autoref{fig:scale_separation}). We include such a Gaussian in all of the examples presented in this and the following section for this purpose. \subsection{Photon rings} Of more physical and algorithmic interest to the EHT is the ability to probe scales much smaller than the nominal array resolution. Imaging algorithms currently in use by the EHT regularly achieve a modest degree of ``superresolution'' \citep[up to factors of $\sim$2--5;][]{Honma_2014, Chael_2016, Akiyama_2017b, Kuramochi_2018} -- i.e., the ability to recover image features that are smaller than some rule of thumb such as the Rayleigh criterion would nominally permit -- by imposing assumptions such as sparsity and smoothness in the reconstructed image. 
Even more substantial resolution improvements are attainable by enforcing correspondingly stronger assumptions about the image structure, such as through the use of rigidly parameterized models, provided that the true image structure adheres to the imposed assumptions. For instance, if the true image consisted of two point sources, then the optimal angular resolution would be achieved by fitting a double point source model to the data; our resulting ability to discern the source separation in this case could potentially be orders of magnitude better than the nominal array resolution. A natural example of a parameterizable sub-beam source structure within the context of EHT observations is the ``photon ring.'' This actually consists of a concentric series of narrow, bright rings that originate from photons orbiting (``winding'') multiple times about the black hole. Typically the largest of these rings, which generally contains the highest total flux, arises from photons that traverse behind the black hole only once prior to propagating toward Earth; we will call this feature the $n=1$ photon ring. Higher-order rings are predicted by general relativity, though their flux decreases exponentially with winding number; the $n=\infty$ ring is the boundary of the black hole shadow \citep{Hilbert1917,Johnson_2019}. The relationships between the widths, fluxes, and shapes of different rings provide additional tools for probing the underlying spacetime, though we leave a full discussion to future work; here, we focus on only a single bright, narrow ring. The photon ring is not expected to be uniformly bright in general for many reasons: photons seen at different azimuthal locations on the ring have traversed different regions, and have propagated in potentially very different directions through a typically highly structured environment exhibiting relativistic motions and suffering large gravitational redshifts. 
In the case of M87, for which the inclination is ostensibly less than $20^\circ$, this photon ring is very nearly circular, deviating from circularity by less than 2\% \citep{Johnson_2019}. We thus model the photon ring as a circular annulus with a linear brightness gradient \citep[see Section 4.2.5 of][]{themis}. The key parameters are the total flux in the ring, outer radius of the ring, fractional width of the ring, and the magnitude and direction of a linear brightness gradient. We enforce that the ring be narrow by restricting its width to be less than 5\% of the outer radius, corresponding to a width of roughly 1~$\mu$as, or about 10\% of the pixel resolution. This width prior also serves to enforce the component order, ensuring that the ring component and the image component of the model are not degenerate with one another. The narrowness of the ring manifests in the Fourier domain as an essentially fixed peak-to-peak flux ratio; i.e., it sets the decay envelope for the Bessel function. This envelope will in general not match that for the bicubic kernel (see \autoref{fig:scale_separation}), which is how the two components become decoupled. Note that the flux in the ring is allowed to vanish and hence we do not force such a ring into the image. \section{Detecting and Constraining Photon Rings with the EHT} \label{sec:photon_rings} In this section we describe a number of imaging experiments designed to assess and demonstrate the ability of existing and future EHT observations to detect and constrain photon rings. We focus on several example simulations from the GRMHD library presented in \citetalias{M87_PaperV}, all of which contain a bright, narrow ringlike feature. These simulations differ substantially in their other properties, however, including the brightness and structure of the extended emission, presence of additional ringlike and spiral features, asymmetric components, and dynamical structures. 
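The Fourier-domain behavior invoked above follows from the visibility of an infinitesimally thin, uniform ring of radius $r$, $V(\rho) = F J_0(2\pi r \rho)$, whose slowly decaying envelope ($\propto \rho^{-1/2}$) is readily distinguished from that of the bicubic raster kernel. A sketch evaluating this by direct azimuthal integration (illustrative; the slashed-ring model additionally carries a finite width and a brightness gradient):

```python
import numpy as np

def ring_vis(F, r, rho, nphi=4096):
    """Visibility of a thin uniform ring of total flux F and radius r
    (radians) at radial baseline length rho (wavelengths). Equals
    F * J0(2 pi r rho), here computed via the integral representation
    J0(z) = (1/2pi) \int cos(z sin(phi)) dphi."""
    phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
    return F * np.mean(np.cos(2.0 * np.pi * r * rho * np.sin(phi)))
```

The fixed peak-to-peak ratio of the resulting Bessel oscillations is the Fourier-domain signature that decouples the narrow ring from the smoother image component.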
For GRMHD simulations appropriate for M87, we are able to faithfully reconstruct the properties of the bright ringlike feature and recover its radius. \subsection{Ring Reconstruction from GRMHD Simulations} Five example GRMHD movie snapshots that are broadly consistent with the EHT 2017 observations of M87 were selected, listed in \autoref{tab:grmhdsims}. All of these come from simulations that were deemed acceptable in \citetalias{M87_PaperV}; each was resized to best match the EHT data and rescaled to a total flux of 0.6~Jy. Data were prepared with the $u$-$v$ coverage from the 2017 EHT observations of M87 from \citetalias{M87_PaperIII}. The fitting procedure is identical to that described in \autoref{sec:ImageReconstructions}, but now with the inclusion of a ring model. Figure~\ref{fig:Jppcomp} shows the reconstructions in comparison to the ground truth images, both before and after smoothing. In each instance high-quality fits exist, with $\chi^2/{\rm DoF}$ of $130/165$, $447/453$, $276/253$, $618/592$, and $539/596$ for the models as shown from left to right\footnote{The large variation in the number of degrees of freedom is due to the variety in the number of measurements on the different days of the EHT2017 observation campaign, with April 10 containing the fewest measurements.}. The slashed ring model component accurately reproduces the bright ringlike features in each image. The remaining image component reconstructs the low-brightness extended emission. There is little traction on the ring thickness, for which the posterior distribution remains dominated by the imposed uniform prior restricting the fractional width to below 5\%. The weakness of this constraint is consistent with the fundamental resolution limits imposed by the $u$-$v$ coverage. In contrast, the total flux and ring diameter are well constrained. The combined ring component and image component together do an excellent job of reproducing the image morphology. 
In all cases the ring component is modestly subdominant, contributing between 40\% and 60\% of the total flux in the image. \begin{deluxetable*}{cccccccccc} \caption{Properties of GRMHD simulations in \autoref{fig:Jppcomp} (from left to right) and associated reconstructions}\label{tab:grmhdsims} \tablehead{ \colhead{\hspace{0.125in}$a$}\hspace{0.125in} & \colhead{\hspace{0.125in}$i$}\hspace{0.125in} & \colhead{\hspace{0.125in}$R_{\rm hi}$}\hspace{0.125in} & \colhead{\hspace{0.125in}Flow}\hspace{0.125in} & \colhead{\hspace{0.125in}$r_{\rm ph}$}\hspace{0.125in} & \colhead{\hspace{0.125in}$r_{\rm peak}$}\hspace{0.125in} & \colhead{\hspace{0.125in}$r_{\rm ring}$}\hspace{0.125in} & \colhead{\hspace{0.125in}$F_{\rm total}$}\hspace{0.125in} & \colhead{\hspace{0.125in}$F_{\rm ring}/F_{\rm total}$}\hspace{0.125in} & \colhead{\hspace{0.125in}$w/r_{\rm ring}$}\hspace{0.125in} \vspace{-0.100in}\\ & & & \colhead{Type} & \colhead{($\mu$as)} & \colhead{($\mu$as)} & \colhead{($\mu$as)} & \colhead{(Jy)} & & } \startdata $0.5$ & $158^\circ$ & $10$ & MAD & 21.2 & 21.4 & $21.8_{-0.4}^{+0.3}$ & $0.50\pm0.03$ & $0.57_{-0.03}^{+0.04}$ & $0.02\pm0.01$ \\ $0.94$ & $158^\circ$ & $40$ & MAD & 18.7 & 19.3 & $19.3\pm1.0$ & $0.42\pm0.02$ & $0.44\pm0.05$ & $0.02_{-0.01}^{+0.02}$ \\ $-0.94$ & $17^\circ$ & $20$ & SANE & 19.6 & 20.5 & $20.0\pm0.9$ & $0.37\pm0.02$ & $0.42\pm0.04$ & $0.03\pm0.02$ \\ $0.94$ & $163^\circ$ & $160$ & SANE & 16.9 & 17.0 & $17.9_{-0.4}^{+0.3}$ & $0.47\pm0.02$ & $0.52_{-0.05}^{+0.04}$ & $0.03\pm0.02$ \\ $0.94$ & $163^\circ$ & $160$ & SANE & 16.9 & 16.6 & $16.6_{-0.6}^{+0.5}$ & $0.47_{-0.01}^{+0.02}$ & $0.40\pm0.03$ & $0.03\pm0.02$ \\ \enddata \tablecomments{Listed are the simulation parameters (spin, inclination, $R_{\rm hi}$, flow type) and the associated average asymptotic photon ring radius ($r_{\rm ph}$), radius of the peak of the azimuthally averaged flux ($r_{\rm peak}$), and reconstructed ring radius (median and 68\% confidence interval), total compact flux ($F_{\rm 
total}$), fraction of the flux contained in the ring ($F_{\rm ring}/F_{\rm total}$) and the fractional width of the ring ($w/r_{\rm ring}$).} \end{deluxetable*} The quantitative success of the ring reconstruction is presented in \autoref{tab:grmhdsims} and \autoref{fig:M87profiles}, where the azimuthally averaged intensity profiles from the ground truth images are compared against the posteriors on the ring radii. In each case, the posterior covers, in part or in its entirety, the bright excess associated with the $n=1$ photon ring. In the case of the magnetically arrested disk (MAD) $a=0.94$ model the posterior is double peaked, associated with the two apparent ringlike structures in the underlying truth image. The fourth simulation (standard and normal evolution, or SANE, $a=0.94$) has multiple ringlike features appearing at significantly different radii. Of these, only one is associated with the gravitational $n=1$ photon ring feature; the others arise from orbiting, turbulent features. This results in multiple peaks in the azimuthally averaged brightness profile, limiting the formal applicability of the single-ring model. Despite this, the ring radius posterior has a significant overlap with the brightness peak associated with the $n=1$ photon ring. However, these additional ringlike features are highly dynamic and expected to evolve significantly on timescales of weeks in M87. To determine how the ring reconstruction depends on the dynamical state of the emitting gas, we repeated the reconstruction exercise with a frame from the same GRMHD simulation taken at a sufficiently different time to be effectively uncorrelated. This time, the resulting radius posterior matches the location of the bright peak with a similar fidelity to the other GRMHD simulation experiments. More importantly, the two ring radius reconstructions are inconsistent with each other at the 2.7$\sigma$ level. 
This suggests that repeated measurements separated by many dynamical times present a natural strategy for distinguishing transient turbulent structures that arise due to astrophysical processes from the anticipated bright lensing features caused by gravity. \subsection{Gravitational Inferences from Ring Reconstructions} Encoded within the size and shape of the asymptotic photon ring is the structure of the underlying spacetime \citep{JP2010}. However, the ability to infer the gravitational properties of the black hole, e.g., the mass or spin, depends on the relationship between the location of the bright ringlike feature, presumably the $n=1$ photon ring, and the asymptotic photon ring. In all of the GRMHD simulations listed in \autoref{tab:grmhdsims} the peak of the bright ring is biased away from the asymptotic photon ring. Three factors are relevant to the interpretation and magnitude of this bias. First, note that the underlying resolution of the images from which the simulated data were constructed is $1~\mu$as. This dominates the widths of the flux profiles in \autoref{fig:M87profiles}. Thus, even in principle, the precision with which the bright ring radius can be constrained is $\gtrsim0.1~\mu$as, limited fundamentally by the number of pixels covered by the ring. In all instances in \autoref{tab:grmhdsims} the disparity is larger than this ostensible precision limit, though less than the underlying pixel resolution. The role of GRMHD simulation resolution will be explored elsewhere. Second, a dominant contribution to the ring flux is from the $n=1$ photon ring, which is produced by photons that have executed half orbits from emission to detection \citep[see, e.g.,][]{Johnson_2019}. These are biased away from the asymptotic photon ring ($n\rightarrow\infty$). 
Typical values for this bias from GRMHD simulations are $\approx 0.5~\mu$as (0.13~$GM/c^2D$) \citep{Johnson_2019} and are consistent with the biases observed in the first three GRMHD simulations in \autoref{tab:grmhdsims}. The direction and precise value of this bias does depend on the distribution of emission near the black hole. However, a strong upper limit is imposed by the condition that polar photon trajectories deflect around the black hole sufficiently to return to the equatorial plane (see \autoref{fig:deflection} and \autoref{app:pri}), implying that the bias is less than $4.2~\mu$as (1.1~$GM/c^2D$) generally, and typically considerably less than $1~\mu$as (0.26~$GM/c^2D$) for emission regions similar to those inferred for M87 from the geometric crescent fits described in \citetalias{M87_PaperVI} and found in GRMHD simulations of M87. Third, dynamical features will present transient flux excesses inside and outside the asymptotic ring that bias the radial location of the flux peak. This bias is clearly exemplified in the last two entries of \autoref{tab:grmhdsims}, where the radius of the flux peak within a single simulation varies by $0.4~\mu$as over long times ($>20$~days). However, despite the potential sources of bias in the location of the flux peak, the posteriors of the reconstructed radii for the ring model component are consistent with the expected asymptotic photon ring radius. In the first three entries of \autoref{tab:grmhdsims}, for which a ringlike structure is present, the ring radius estimates are consistent with the asymptotic photon ring size at the 1$\sigma$ level. For the final GRMHD simulation, the reconstructed ring radius estimates are consistent at the 2$\sigma$ level despite the presence of confounding features. An exhaustive study of the accuracy of the ring reconstructions for different underlying emission region morphologies will be presented in future publications. 
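The angular scales quoted above can be checked with a short unit conversion. The sketch below assumes the EHT 2017 estimates for M87 ($M\approx6.5\times10^9\,M_\odot$, $D\approx16.8$~Mpc); these numbers are inputs for illustration, not results of this paper:

```python
import math

# Consistency check of the quoted angular scales, assuming the EHT 2017
# estimates for M87 (M ~ 6.5e9 solar masses, D ~ 16.8 Mpc).
G = 6.674e-11                                   # m^3 kg^-1 s^-2
c = 2.998e8                                     # m s^-1
M_SUN = 1.989e30                                # kg
MPC = 3.0857e22                                 # m
RAD_TO_UAS = math.degrees(1.0) * 3600.0 * 1e6   # radians -> microarcseconds

M = 6.5e9 * M_SUN
D = 16.8 * MPC
theta_g = G * M / (c ** 2 * D) * RAD_TO_UAS     # GM/c^2D in microarcseconds

print(f"GM/c^2D          = {theta_g:.2f} uas")  # ~3.8 uas
print(f"upper bias bound = {1.10 * theta_g:.1f} uas")  # 1.1 GM/c^2D ~ 4.2 uas
print(f"typical n=1 bias = {0.13 * theta_g:.2f} uas")  # 0.13 GM/c^2D ~ 0.5 uas
```

With these values one gravitational radius subtends $\approx3.8~\mu$as, so the general bound of 1.1~$GM/c^2D$ and the typical bias of 0.13~$GM/c^2D$ reproduce the quoted $4.2~\mu$as and $0.5~\mu$as.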
\section{Conclusions} \label{sec:conclusions} Fully Bayesian image reconstruction from VLBI observations is now practical. This method has been implemented in a fashion appropriate for EHT applications within the \themis analysis framework by treating the image reconstruction process identically to existing model-fitting tasks. Employing \themis for this exploits two key advantages: first, it permits the immediate and efficient application of high-performance computing resources to the reconstruction of radio images with realistic observational parameters and complications; the \themis image reconstruction tools can efficiently make use of thousands of cores simultaneously. Second, it ensures that additional development within \themis is immediately available, including the implementation of additional samplers and mitigation of new sources of systematic error. These image reconstruction tools have been successfully applied to a number of simulated EHT data sets. Thus, Bayesian image reconstruction is now a practical tool for radio image analysis, though additional work on samplers may be required to fully realize its potential. Performing image reconstruction in a Bayesian fashion presents two unique opportunities that arise from treating the image reconstruction problem as a parameter estimation exercise in a statistically rigorous manner. First, a consequence of this approach is the construction of a full image posterior, i.e., not a single best-fit image but rather a statistically meaningful collection of all of the images that are consistent with the data. In parameter estimation applications, this posterior is the primary product. In this case, it permits a quantitative assessment of the sources and magnitudes of errors. For example, two-point functions on the image posterior elucidate the origin and size of the error induced by known holes in the $u$-$v$ coverage of the 2017 April EHT observations of M87. 
More complex $n$-point functions may be developed in the future to address specific questions about image uniqueness and fidelity. Second, treating image reconstruction and model parameter estimation identically enables seamless integration of the two methods of data analysis, a procedure we term hybrid imaging. Importantly, the reconstructed posteriors on model parameter estimates are meaningful in the normal way. We also take this opportunity to demonstrate the ability to make a data-informed choice of the \fov and resolution, removing a frequent uncertainty in the image reconstruction process and source of subjective operator input. We demonstrate hybrid imaging with two explicit applications. The first is the inclusion of large-scale Gaussian components to model structure on scales much larger than the ostensible \fov, a procedure that has been utilized in previous EHT analyses \citepalias{M87_PaperIV,M87_PaperVI}. The second and more exciting application is the inclusion of narrow ring model components to extract and characterize the $n=1$ photon rings from EHT observations of M87. Hybrid imaging of GRMHD simulations with realistic simulated EHT observations of M87 is able to faithfully reconstruct the properties of the prominent photon ring feature in the underlying truth images. It produces ring size posteriors that typically cover the true value within accuracies of 2\%--5\%. Where turbulent features are mistaken for photon rings, the ring size estimates are variable on timescales of weeks in M87. Thus, the benefits of non-parametric reconstructions combined with the strong restrictions on potential additional image features are already poised to significantly improve EHT constraints on the properties of the supermassive black hole in M87. 
Photon ring characterization in Sagittarius A*, the supermassive black hole in the Galactic center, brings the additional challenges of addressing the interstellar scattering screen and short-timescale source variability. Both of these will be addressed in a future publication. One considerable benefit of doing so is that high-precision mass and distance estimates exist for Sgr A* \citep{GRAVITY2019}, enabling precision tests of general relativity using the photon ring. There are many potential future applications of hybrid imaging. All of these exploit its ability to constrain the freedom of image components via model specification, where such constraints are required due to experimental considerations (e.g., baseline coverage) or desirable because of high sensitivity. Such applications within the context of the EHT include the connection of small-scale features with large-scale model structures, e.g., the modeling of the millimeter-wavelength milliarcsecond jet in M87 while imaging the horizon-scale millimeter-wavelength image structure. As the EHT adds stations and increases in sensitivity, hybrid imaging may be able to constrain the shape of the photon ring, producing an independent measure of black hole spin, and to detect higher-order rings, verifying a key predicted consequence of gravitational lensing. At the same time, hybrid imaging effectively separates the otherwise unresolved ring component from the larger-scale diffuse emission, and thus will enable dynamical studies on timescales as short as a week in M87. Finally, hybrid imaging permits an extension of the class of sources for which shadows may be detected and characterized beyond the two primary targets of the EHT, for which the shadow is formally resolvable. By assuming that a crescent-like structure exists and contributes substantially to the emission in the bright radio cores, hybrid imaging extends the ability to estimate black hole masses to thousands of systems. 
\mbox{}\\ \indent We thank Mareki Honma, Michael Johnson, Maciek Wielgus, and Katherine Bouman for useful comments. This work was made possible by the facilities of the Shared Hierarchical Academic Research Computing Network (SHARCNET: www.sharcnet.ca) and Compute/Calcul Canada (www.computecanada.ca). Computations were made on the supercomputer Mammouth Parall\`ele 2 from University of Sherbrooke, managed by Calcul Qu\'ebec and Compute Canada. The operation of this supercomputer is funded by the Canada Foundation for Innovation (CFI), the minist\`ere de l'\'Economie, de la science et de l'innovation du Qu\'ebec (MESI) and the Fonds de recherche du Qu\'ebec - Nature et technologies (FRQ-NT). This work was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. A.E.B.\ thanks the Delaney Family for their generous financial support via the Delaney Family John A. Wheeler Chair at Perimeter Institute. A.E.B.\ and P.T.\ receive additional financial support from the Natural Sciences and Engineering Research Council of Canada through a Discovery Grant. R.G.\ receives additional support from the ERC synergy grant ``BlackHoleCam: Imaging the Event Horizon of Black Holes'' (Grant No. 610058). D.W.P.\ was supported in part by the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation to Harvard University. \appendix \section{Fourier domain cubic interpolation} \label{app:CubicInterpolation} Here we derive the Fourier space representation of the bicubic convolution kernel given in Equation (\ref{eq:bcifour}). The weight function in Equation (\ref{eq:bciweight}) is $C^1$: it and its first derivative are continuous. 
However, it has discontinuities in its second and third derivatives, and it is identically zero for $|x|\ge2$. Thus it may be represented as triple and quadruple integrals over a small number of Dirac $\delta$ functions. The second derivative of the cubic interpolation kernel is \begin{equation} w''(x) = \begin{cases} 0 & -2\ge x\\ b \left[ -6x - 10 \right] & -1\ge x>-2\\ -6(b+2) x - 2(b+3) & 0\ge x>-1\\ 6(b+2) x - 2(b+3) & 1\ge x>0\\ b \left[ 6x - 10 \right] & 2\ge x>1\\ 0 & x\ge 2 \end{cases} \end{equation} Note that this is not continuous, with discontinuities of $\mp(8b+6)$ and $\mp2b$ at $x=\pm1$ and $x=\pm2$, respectively. The third derivative has two components, the first describing the smooth pieces of $w''(x)$ and the second consisting of $\delta$ functions associated with the discontinuities. That is, \begin{equation} w'''(x) = g(x) + F_3(x) \end{equation} where \begin{equation} g(x) = \begin{cases} 0 & -2\ge x\\ -6b & -1\ge x>-2\\ -6(b+2) & 0\ge x>-1\\ 6(b+2) & 1\ge x>0\\ 6b & 2\ge x>1\\ 0 & x\ge 2, \end{cases} \end{equation} and \begin{equation} \begin{aligned} F_3(x) = & 2b\delta(x+2) +(8b+6)\delta(x+1) \\ & -(8b+6)\delta(x-1) -2b\delta(x-2). \end{aligned} \end{equation} Again, there are discontinuities in the non-singular part of $w'''$, i.e., $g(x)$, though now at $x=0$, $x=\pm1$, and $x=\pm2$; $g(x)$ is constant otherwise. Thus, the fourth derivative consists solely of $\delta$ functions, $w^{(iv)}(x)=F_4(x)$, where \begin{equation} \begin{aligned} F_4(x) = & -6b\delta(x+2) -12\delta(x+1)\\ & +12(b+2)\delta(x) -12\delta(x-1) -6b\delta(x-2). \end{aligned} \end{equation} Then, noting that all constants of integration vanish (because $w(x)=0$ for $|x|>2$), we can write \begin{equation} w(x) = \iiint F_3(x) + \iiiint F_4(x). 
\end{equation} The frequency-space representations of $F_3$ and $F_4$ are: \begin{equation} \begin{aligned} F_3(k) &= \int dx e^{-ikx} F_3(x)\\ &= 4i \sin(k) \left[ 2b\cos(k) + (4b+3) \right]\\ F_4(k) &= \int dx e^{-ikx} F_4(x)\\ &= 12b\left[1-\cos(2k)\right] + 24\left[1-\cos(k)\right], \end{aligned} \end{equation} where $k=2\pi\Delta u$ and $\Delta$ is the spacing between control points. Therefore, applying the integrals, \begin{equation} \begin{aligned} W(k) &= \frac{F_3(k)}{(ik)^3} + \frac{F_4(k)}{(ik)^4}\\ &= - \frac{4}{k^3} \sin(k) \left[2b\cos(k)+(4b+3)\right]\\ & \quad + \frac{12}{k^4} \left\{ b \left[ 1-\cos(2k) \right] + 2\left[1-\cos(k)\right] \right\}. \end{aligned} \end{equation} \section{Photon ring images} \label{app:pri} In \autoref{fig:deflection} we show the mapping between points on the equatorial and image planes for observers on the polar axis. Here we collect information about how this mapping is generated practically and what it implies generally. The mapping is generated by direct integration of the null geodesic equations for photons leaving a distant screen. We simplify the problem by considering an observer on the polar axis, for which axisymmetry reduces the mapping to a one-dimensional function $\Delta r_n(R)$, which maps the Boyer-Lindquist radius $R$ in the equatorial plane (at the $(n+1)$th photon crossing) to the radial location in the image plane. While this does not generalize to arbitrary inclinations, the insensitivity of the asymptotic photon ring radius to inclinations below $30^\circ$ suggests that this is sufficient to estimate the magnitudes of the shifts of the low-order photon ring radii from the asymptotic value \citep{JP2010}. This mapping is monotonic, a fact that has two important consequences. First, the entirety of the equatorial plane is remapped at every order, and thus the photon rings present a sequence of copies of the direct emission. 
Second, and more relevant here, the peak of the intensity within the $n$th photon ring is fully determined by the location of the peak of the emission, $R_{\rm pk}$, and the map. This arises immediately from the conservation of $I_\nu/\nu^3$ along the photon trajectories. Thus, for optically thin emission within the equatorial plane, the total intensity at a radial position $r$ in the image plane is \begin{equation} \frac{I_\nu}{\nu^3}(r) = \sum_{m=0}^{n} \frac{I_{g\nu}}{(g\nu)^3}(R_m), \end{equation} where $g$ is the standard Doppler factor, $\Delta r_m(R_m) \equiv r - r_{\rm ph}$ defines $R_m$, and $n$ is the highest-order image present at $r$. This is peaked when \begin{equation} \begin{aligned} \frac{d(I^{\rm obs}_\nu/\nu^3)}{d\Delta r} &= \sum_{m=0}^{n}\frac{d[I^{\rm em}_{g\nu}/(g\nu)^3]}{d R_m} \frac{d R_m}{d\Delta r}\\ &\approx \frac{d[I^{\rm em}_{g\nu}/(g\nu)^3]}{d R_n} \frac{d R_n}{d\Delta r} = 0, \end{aligned} \end{equation} where we have used the fact that the ring widths decrease exponentially with $n$, and thus for smoothly varying emission regions $dR_n/d\Delta r$ increases exponentially with $n$ \citep[see, e.g.,][]{Johnson_2019}. Therefore, if $R_{\rm pk}$ is the location of the peak of $I^{\rm em}_{g\nu}/(g\nu)^3$, i.e., the peak of the emission after inclusion of the Doppler shift and Doppler beaming, then the peak of the $n$th-order photon ring emission is $\Delta r_{n,\rm pk} \approx \Delta r_n(R_{\rm pk})$. \mbox{}\\ \bibliographystyle{aasjournal_aeb} \bibliography{references}
Title: Uniting The Sun's Hale Magnetic Cycle and `Extended Solar Cycle' Paradigms
Abstract: Through meticulous daily observation of the Sun's large-scale magnetic field the Wilcox Solar Observatory (WSO) has catalogued two magnetic (Hale) cycles of solar activity. Those two ($\sim$22-year long) Hale cycles have yielded four ($\sim$11-year long) sunspot cycles (numbers 21 through 24). Recent research has highlighted the persistence of the "Extended Solar Cycle" (ESC) and its connection to the fundamental Hale Cycle - albeit through a host of proxies resulting from image analysis of the solar photosphere, chromosphere and corona. This short manuscript presents the correspondence of the ESC, the surface toroidal magnetic field evolution, and the evolution of the Hale Cycle. As Sunspot Cycle 25 begins, interest in observationally mapping the Hale and Extended cycles could not be higher given the potential predictive capability that synoptic scale observations can provide.
https://export.arxiv.org/pdf/2208.09026
\onecolumn \firstpage{1} \title[Uniting Hale and Extended Cycles]{Uniting The Sun's Hale Magnetic Cycle and `Extended Solar Cycle' Paradigms} \author[\firstAuthorLast ]{\Authors} \section*{Introduction} For over four centuries solar observers have pondered the physical origins of the canonical marker of solar activity - the sunspot. It took more than 200 years after the sketching and cataloging of sunspots commenced before it was discovered that the number of sunspots waxes and wanes over an approximately 11-year period \citep{Schw49}. A half century later, mapping the latitudinal variation of the spotted Sun yielded the ``butterfly diagram,'' a pattern progressing from latitudes around 30$^\circ$ (north and south) to the equator over the $\sim$11-year period \citep{Maun04}. In the golden age of solar astronomy that followed, it was first suggested \citep{1908ApJ....28..315H} and then demonstrated \citep{Hale19} that sunspots were sites of intense magnetism protruding through the Sun's photosphere and that the polarities of the butterfly's wings alternated in sign with a period of about 22 years \citep{1925ApJ....62..270H}. This alternating magnetic polarity cycle is synonymously identified with its discoverer as the eponymous (22-year) ``Hale Cycle,'' or the (22-year) ``Hale Magnetic Polarity Cycle.'' Understanding how the magnetic spots, their butterfly patterning, and the polarity flipping are tied together to drive solar activity has formed the keystone problem of observational \citep{Bab61} and theoretical \citep{Lei69} solar physics and astrophysics in the intervening century \citep[e.g.,][]{2010LRSP....7....1H}. 
For over four decades another term describing solar activity has sporadically appeared in the literature - the ``Extended Solar Cycle'' (ESC). The extended solar cycle \citep[e.g.,][]{1987SoPh..110....1W} was used to describe a spatio-temporal extension of the sunspot butterfly pattern to higher solar latitudes (to around 55{$^{\circ}$}) and further back in time (by almost a decade). A culmination of many years of painstaking observation, the ESC is exhibited in prominences and filaments \citep[e.g.,][]{1933MmArc..51....5B,1975SoPh...44..225H}, `ephemeral' (small-scale transient) active regions \citep[e.g.,][]{1973SoPh...32..389H}, global-scale features of the Sun's corona \citep[e.g.,][]{1988sscd.conf..414A} and the zonal flow patterns \citep[e.g.,][]{1980ApJ...239L..33H,1987Natur.328..696S} of the `torsional oscillation.' In effect, this assortment of observational phenomena created a set of spatio-temporally overlapping chevron-like activity patterns. The concept of the ESC was `re-discovered' by McIntosh et al. in their study of extreme ultraviolet brightpoints and their associated magnetic scale \citep[hereafter M2014]{McIntosh2014}. They identified a pattern of coronal and photospheric features that was greatly extended in time and latitude relative to the sunspot butterfly. They deduced that the activity bands observed were the (toroidal) magnetic bands of the Hale Cycle, but no concurrent photospheric magnetic measurement was available to affirm their deduction. The core inference of their study was that the spatio-temporal overlap and interaction of the extended activity bands observed contributed directly to the shape (the butterfly) and modulation (the amplitude) of the sunspot cycle. 
Figure~\ref{fig:f0} shows the evolution of the total sunspot number, the latitudinal distribution of sunspots and the data-inspired construct introduced by M2014 that inferred the magnetic activity band arrangement and progression of the Hale Cycle and how those bands contribute to the modulation of sunspot cycles. This `band-o-gram,' introduced in section~3 (and Fig.~8) of M2014, was intended as a qualitative, and not quantitative, illustration of the position, timing and magnetic field strength of the bands---with the emphasis on their phasing. The activity bands in the band-o-gram start their (assumed) linear progression towards the equator from 55\degree latitude at each hemispheric maximum, meeting and disappearing at the equator at the terminator. At the terminator the polar reversal process commences at 55\degree latitude, progressing poleward at an (assumed) linear rate---reaching the pole at the appropriate hemispheric maximum. So, for a list of hemispheric maxima and terminators, a band-o-gram can be constructed. The width of the bands is prescribed by a Gaussian distribution of width 10 degrees in latitude, commensurate with those observed in the coronal brightpoints originally studied by M2014. \section*{Data \& Method} The Wilcox Solar Observatory (WSO) began collecting daily low spatial resolution observations of the Sun's global (or mean) magnetic field in May 1975 \citep{1977SoPh...54..353S}, and a very well-known data product of WSO is the evolution of the Sun's polar cap magnetic fields \citep{1978SoPh...58..225S}. These low-resolution synoptic observations are ideal for identifying large-scale, long-lived patterns---reducing the effects of small-scale, rapidly changing fields of emerging magnetic regions. 
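The band-o-gram construction described above can be sketched as a simple toy model. The function below is our own illustration, not M2014's code, and treating the quoted 10 degrees as the Gaussian width parameter is our assumption:

```python
import numpy as np

def band_o_gram(t, lat, t_hemi_max, t_terminator, start_lat=55.0, width=10.0):
    """Toy activity band for one hemisphere of the band-o-gram.

    The band is born at `start_lat` degrees latitude at the hemispheric
    maximum `t_hemi_max`, drifts linearly to the equator, and vanishes at
    the terminator `t_terminator`; its latitudinal profile is a Gaussian
    of scale `width` degrees.
    """
    frac = (t - t_hemi_max) / (t_terminator - t_hemi_max)  # 0 at max, 1 at terminator
    center = start_lat * (1.0 - np.clip(frac, 0.0, 1.0))   # linear equatorward drift
    active = (frac >= 0.0) & (frac <= 1.0)
    return np.where(active, np.exp(-0.5 * ((lat - center) / width) ** 2), 0.0)
```

Summing such bands over a list of hemispheric maxima and terminators, with alternating magnetic sign from one cycle to the next, yields the qualitative chevron pattern of the band-o-gram.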
Following \cite{1979SoPh..61..233D}, the daily WSO magnetograms are obtained by scanning boustrophedonically along 11 east-west rows (i.e., the observation of alternate rows in opposite directions---if one row is taken from left to right then the next row is from right to left). The $180''$ magnetograph aperture moves $90''$ between points in the east-west direction and $180''$ north or south between rows, taking a 15~s integration of the Fe~I 5247\AA{} line at 195 points on the solar disk---resulting in a total of about 2 hours per daily map. Because of the large aperture size of the magnetograph, the regions from 70\degree{} to the poles lie entirely within the last aperture and are not resolved. Following the method of \cite{1974SoPh...39..275H} and \cite{1979SoPh..61..233D}, the daily WSO magnetograms can be decomposed into the poloidal and toroidal components which, according to dynamo models, are regenerated from one another, alternating and repeating in an approximately 22-year cycle \citep[e.g.,][]{2010LRSP....7....3C}. The method used to perform this decomposition is detailed by \cite{1994SoPh..153..131S}, where the daily WSO magnetograms are first separated into their positive and negative magnetic field polarities which are then tracked as they cross the solar disk. They are then fitted to estimate the average east-west inclination angle of the magnetic field---or the toroidal component of the photospheric magnetic field \citep[see Fig.~1 of][for an illustration of the geometry]{2010ASPC..428..109L}. In this paper we use the \cite{1994SoPh..153..131S} derivative data product of the WSO toroidal magnetic field component in the photosphere and the WSO polar magnetic field estimate using the five central aperture pointings (central meridian $\pm$ two) in the first and last rows of observations documented by \cite{1978SoPh...58..225S}. 
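The WSO synoptic data are natively sampled in sine latitude and are later converted to latitude for comparison with the band-o-gram. A minimal sketch of such a regridding for a one-dimensional profile, using linear interpolation (our own illustration, not the WSO pipeline):

```python
import numpy as np

def sinlat_to_lat(profile, n_lat=181):
    """Regrid a profile sampled uniformly in sine(latitude) onto a uniform
    latitude grid in degrees (south pole to north pole), by linear
    interpolation."""
    sinlat = np.linspace(-1.0, 1.0, profile.size)   # native grid: uniform in sin(lat)
    lat = np.linspace(-90.0, 90.0, n_lat)           # target grid: uniform in latitude
    return lat, np.interp(np.sin(np.deg2rad(lat)), sinlat, profile)
```

Note that this stretches the polar ends of the map: samples that are closely spaced in sine latitude near $\pm1$ cover a wide range of latitudes near the poles.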
\section*{Results} An initial study of the slowly evolving behavior \citep{1994SoPh..153..131S} noted the potential relationship with the ESC. Figure~\ref{fig:f1} contrasts four and a half decades of WSO observations with the evolution of the sunspot number over the same timeframe. Panel B shows the latitude-time variation of the WSO toroidal magnetic field component in addition to the field strength of the northern and southern polar regions. Several features of Figure~\ref{fig:f1} are immediately visible, but perhaps the most striking are the strong overlap in time of the toroidal magnetic systems, the short transitions from one polarity to the next---evidenced through the narrow white (very near 0~G) zones---the lack of field migration across the Sun's equator, and the close association of these last two features at the Sun's equator four times in the record (in 1978, 1988, 1998 and 2011). The patterns, including a strong resemblance to the ESC, are described in more detail by \cite{1994SoPh..153..131S} and \cite{2010ASPC..428..109L}. The last of these features, synchronized zero-crossing transitions at the lowest latitudes in each hemisphere, is concurrent with events that mark the end of the Hale Cycle progressions, or termination events as they have become known, that were initially described by M2014 and explored again (in more detail) recently \citep[][hereafter M2019]{McIntosh2019}. The termination events are illustrated with dashed vertical lines in Figure~\ref{fig:f1}. These events signify the final cancellation of the magnetic systems that were responsible for the last sunspot cycle at the equator and, near-simultaneously, a period of very rapid growth of the next sunspot cycle at mid-solar latitudes. Interestingly, M2019 also noted that these termination events at the equator were co-temporal with the start of the polar magnetic field reversal process. 
This process is perhaps best visualized through the observed progression of the highest-latitude filaments (the polar crown filaments) to the pole, the so-called ``rush to the poles'' \citep[e.g.,][]{Bab61, 1989SoPh..119..323S}. The time at which this poleward march completes corresponds to when the measured polar magnetic field crosses zero. In order to visually compare the WSO observations [Figure~\ref{fig:f1}B] with the ESC band-o-gram [Figure~\ref{fig:f0}C] (extended to cover the baseline of the WSO observations), we convert the WSO data from sine latitude to latitude; the result can be seen in Figure~\ref{fig:f2}. Additionally, Table~1 of \cite{2022ApJ...927L...2L} places bounds on the correspondence of the toroidal field zero-crossings (near the solar equator) with the Hale Cycle terminator events determined by other means. Given the 5-degree resolution of the WSO magnetograph scanning rows around the equator, the zero-crossing times of the toroidal magnetic field correspond well with the Hale Cycle terminator times presented in Table~1 of M2019: 1978.00 [N5:1976.67, S5:1977.17]; 1988.50 [N5:1988.75, S5:1986.83]; 1997.75 [N5:1997.75, S5:1999.17]; 2011.20 [N5:2008.83, S5:2011.50]. \subsection*{High-Res/Low-Res \&\ The 2021 Hale Cycle Termination} The alternating toroidal field patterns clearly visible in the WSO observations are also borne out by considerably higher spatial resolution observations from space with SOHO/MDI and SDO/HMI, shown in Fig.~\ref{fig:fX} and in Fig.~2 of \cite{2022ApJ...927L...2L}, which, unlike our previous plots, are current to the time of publication. In tandem, the three magnetograph observations illustrate the clear pattern of the ESC that is consistent with previous studies. Further, as we have discussed immediately above, we observe that another zero crossing of the toroidal magnetic field at the equator, characteristic of a Hale Cycle terminator event, occurred very recently. 
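As an aside for reproducibility, locating such zero-crossing (terminator) times in a toroidal-field time series reduces to finding sign changes. The following is a minimal sketch in Python; the field used here is a synthetic stand-in, not actual WSO data, and `zero_crossings` is our illustrative helper rather than the published pipeline:

```python
import numpy as np

def zero_crossings(t, b):
    """Times at which the series b(t) changes sign, located by linear
    interpolation between the two samples straddling each crossing."""
    t, b = np.asarray(t, float), np.asarray(b, float)
    i = np.where(np.sign(b[:-1]) * np.sign(b[1:]) < 0)[0]
    return t[i] + (t[i + 1] - t[i]) * b[i] / (b[i] - b[i + 1])

# Synthetic low-latitude toroidal field flipping polarity every ~11 years
t = np.linspace(1976.0, 2012.0, 2000)
b = np.sin(2 * np.pi * (t - 1978.0) / 22.0)
print(np.round(zero_crossings(t, b), 2))  # → [1978. 1989. 2000. 2011.]
```

Applied to the lowest-latitude scan rows of each hemisphere of the real data, the same routine would be the natural starting point for recovering the timings tabulated above.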
In a forthcoming publication we will explore this event in detail (McIntosh et al., in preparation). \section*{Discussion} A general criticism of the M2014 band-o-gram is that it was based on catalogued proxies of the photospheric magnetic field through chromospheric and coronal features; the tracked features formed by the overlapping activity bands were not necessarily representative of the photospheric or interior magnetic field itself. It is clear from the WSO observations that, while comparison of the observed progression with the band-o-gram is still qualitative, there is an overwhelming correspondence between the features observed in the WSO observations and those of the highly idealized band-o-gram. We note that a similar treatment of higher spatial resolution photospheric observations from the Mt Wilson Solar Observatory over a shorter timeframe yields a similar correspondence \citep{2005ApJ...620L.123U}. Further, it is known that the heliosphere exhibits a `sector' structure. The sector, or Hale sector, structure reflects the polarity of the heliospheric magnetic field relative to the solar direction, in a state of being either ``away'' from or ``towards'' the Sun, and expresses the largest spatial scales of solar magnetism and connectivity \citep[e.g.,][]{Hudson2014}. Since the earliest articles about sector structure \citep[e.g.,][]{1969JGR....74.5611R}, it has been noted to have a strong annual modulation around solar minimum. At that time the heliospheric current sheet (HCS) is so flat that for six months of the year (early December to early June) the Earth is at southern heliographic latitudes and the dominant polarity corresponds to the Sun's southern hemisphere. For the other six months of the year in these epochs the Earth almost exclusively samples the dominant polarity of the north, holding at a level of $\sim$85\%{} \citep[e.g.,][]{1975SoPh...41..461S}. 
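The annual modulation of the sector polarity described above is the kind of signal a wavelet analysis can isolate. A minimal, self-contained illustration on a synthetic sector series follows; the `morlet_power` helper and the toy signal are ours (a Torrence \& Compo style frequency-domain Morlet transform), not the analysis code behind the published figures:

```python
import numpy as np

def morlet_power(x, dt, periods, w0=6.0):
    """Time-averaged, scale-rectified Morlet wavelet power of x
    (sampled every dt) at the requested Fourier periods,
    computed via the FFT."""
    n = len(x)
    freqs = np.fft.fftfreq(n, d=dt)
    xf = np.fft.fft(x - x.mean())
    out = np.empty(len(periods))
    for i, per in enumerate(periods):
        s = per * (w0 + np.sqrt(2.0 + w0**2)) / (4.0 * np.pi)  # scale for this period
        psi = np.pi**-0.25 * np.exp(-0.5 * (s * 2.0 * np.pi * freqs - w0)**2)
        psi *= (freqs > 0) * np.sqrt(2.0 * np.pi * s / dt)     # analytic wavelet, normalized
        out[i] = np.mean(np.abs(np.fft.ifft(xf * psi))**2) / s
    return out

# Toy sector series: a 27-day (Carrington rotation) signal plus an annual term
days = np.arange(3650.0)
x = np.sin(2 * np.pi * days / 27.0) + np.sin(2 * np.pi * days / 365.25)
p = morlet_power(x, 1.0, [13.5, 27.0, 182.6, 365.25])
print(p[1] > p[0], p[3] > p[2])  # power peaks at 27 d and 1 yr → True True
```

The same transform, applied to the measured HCS tilt or daily sector polarity, distinguishes epochs dominated by the rotational (CR) timescale from those dominated by the annual one.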
The top panel of Fig.~\ref{fig:fb} shows the tilt of the HCS as computed by the WSO from 1976 to the present. The slowly evolving solar minimum behavior of the HCS is shown graphically in the lower panel of the figure, an adaptation of Fig.~1 of \citet{2004GeoRL..3112808E}. The wavelet transform is used to illustrate the prominent periodicities in the sector structure: one at approximately one Carrington rotation (CR) timescale, and the other at approximately one year. There are two clear results shown in Figure~\ref{fig:fb}: (1) the strongest signal at one year indeed corresponds to the times of extreme HCS flatness, but the strongest signal reverts to CR timescales at Hale Cycle terminators, when the tilt rises sharply as new and stronger new-cycle active regions emerge at mid latitudes; (2) the {\em onset\/} of the annual periodicity signal is at approximately 0.4~cycles (first dotted vertical line) for even-numbered cycles and at 0.6~cycles (second dotted vertical line) for odd-numbered cycles. We reserve discussion of this 22-year difference between odd- and even-numbered cycles for a manuscript in preparation that looks at a longer epoch than that covered by the WSO record we focus on here. Nevertheless, this highly ordered large-scale sector structure is one more piece of evidence consistent with the data-inspired ESC schematic based on the timing of the Hale Cycle terminators. \section*{Conclusion} The meticulous daily synoptic-scale observations of the WSO have captured two complete 22-year Hale cycles. These observations have permitted a mapping of the Sun's photospheric toroidal magnetic field component over that timeframe. Key features of the WSO observations compare directly to the data-inspired schematic of the ESC that was conceived to illustrate how the activity bands of the ESC can interact to shape the latitudinal progression of sunspot cycles and their amplitude. 
The WSO observations should unambiguously unify the Hale magnetic cycle and the ESC as being, physically, one and the same. These low spatial resolution ground-based observations are corroborated by higher resolution space-based magnetographic observations from SOHO and SDO, where all three identify the zero-crossing events we associate with Hale Cycle terminators. As \cite{2010ASPC..428..109L} and M2014 inferred, there is predictive capability in these synoptic analyses through the ESC, providing strong indicators of the current progression and potential evolution of upcoming solar activity at the decadal scale, beyond those accessible through the analysis of sunspots. This result demonstrates the intrinsic power of synoptic observations at a time when it is becoming increasingly difficult to sustain such efforts. \section*{Acknowledgements} SMC is supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under Cooperative Agreement No. 1852977. RJL acknowledges support from NASA's Living With a Star Program. SMC and RJL acknowledge the grant of an Indo-US Virtual Networked Center (IUSSTF-JC-011-2016) to support the joint research on ESCs. Stanford University operates WSO with funding provided by the National Science Foundation under Grant \#1836370. NASA funding for WSO ended in 2018. Historically, WSO has been supported by NASA Heliophysics, the NSF, and the Office of Naval Research. Dr. J. Todd Hoeksema serves as WSO Director and we acknowledge his steadfast stewardship of the data and calibration. Sunspot data are from the NOAA Space Weather Prediction Center and the World Data Center SILSO, Royal Observatory of Belgium, Brussels. \section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. 
\section*{Author Contribution Statement} All authors conceived the experiment; P.S., L.S. and S.M. analyzed the results. S.M. created Figures~\ref{fig:f0}--\ref{fig:f2}; R.L. created Figure~\ref{fig:fb}. All authors reviewed the manuscript. \bibliographystyle{frontiersinSCNS_ENG_HUMS}
Title: Spectropolarimetry of life: airborne measurements from a hot air balloon
Abstract: Does life exist outside our Solar System? A first step towards searching for life outside our Solar System is detecting life on Earth by using remote sensing applications. One powerful and unambiguous biosignature is the circular polarization resulting from the homochirality of biotic molecules and systems. We aim to investigate the possibility of identifying and characterizing life on Earth by using airborne spectropolarimetric observations from a hot air balloon during our field campaign in Switzerland, May 2022. In this work we present the optical-setup and the data obtained from aerial circular spectropolarimetric measurements of farmland, forests, lakes and urban sites. We make use of the well-calibrated FlyPol instrument that measures the fractionally induced circular polarization ($V/I$) of (reflected) light with a sensitivity of $<10^{-4}$. The instrument operates in the visible spectrum, ranging from 400 to 900 nm. We demonstrate the possibility to distinguish biotic from abiotic features using circular polarization spectra and additional broadband linear polarization information. We review the performance of our optical-setup and discuss potential improvements. This sets the requirements on how to perform future airborne spectropolarimetric measurements of the Earth's surface features from several elevations.
https://export.arxiv.org/pdf/2208.02317
\keywords{Spectropolarimetry, Polarization, Biosignatures, Field campaign, Earth observation, Remote-sensing} \section{INTRODUCTION} \label{sec:intro} The remote-sensing of the Earth gives us indispensable information for our preparations to search for extraterrestrial life. Well-known examples of biosignatures that can be identified in reflected sunlight are atmospheric constituents such as O$_2$ and spectral signatures of vegetation, such as the green bump and the red edge (see Seager et al. (2005)\cite{Seager05} for a detailed description of the red edge). In addition, spectropolarimetry has proven to be a robust remote-sensing tool. Complementary to the reflectance, it carries additional and unique information, such as the surface roughness or particle size of the scatterers, which helps us to search for and characterize biosignatures. Spectropolarimetry also provides unique biosignatures itself, such as the linear polarization resulting from the O$_2$-A band \cite{Stam99,Fauchez17} and from vegetation\cite{Vanderbilt85,Vanderbilt91}, as well as circular polarization resulting from the homochirality of biotic systems\cite{Patty19,Gimenez19}. Homochirality is what we refer to as the single handedness of chiral molecules. Terrestrial chiral molecules that are utilized by life, e.g. amino acids, sugars and the biological polymers they construct, almost always exist in either left-handed or right-handed forms, and are therefore homochiral. This is in contrast to abiotic chemistry, which produces equal amounts of these mirror forms, resulting in a racemic mixture. Circular polarization originates from the differential absorption by homochiral molecules and macromolecular structures. As biochemical homochirality is essential for life and thought to be a universal property of life, the circular polarization life produces constitutes an unambiguous biosignature \cite{Patty18}. 
Therefore, circular polarization promises to be a powerful tool for the remote-sensing of biotic matter on Earth \cite{Wolstencroft04,Patty17,Patty19,Patty21} and beyond \cite{Sparks05,Patty18}. In general, direct light emitted by a solar-type star is virtually unpolarized when integrated across its stellar disk \cite{Kemp87}. Since linear polarization is produced by the interaction of light with surfaces and particles, whenever sunlight interacts with the Earth's atmosphere or surface, it usually becomes (partly) linearly polarized. As such, we find an abundance of polarized light when looking at the Earth's atmosphere or surface. Several of these polarizing scattering mechanisms, like Rayleigh scattering and reflections at air-water interfaces, are well understood\cite{Cronin11}. Despite all this knowledge, the interpretation of remotely sensed linear polarization data of the real world is challenging, as it involves e.g. depolarization effects, varying atmospheric aerosol concentrations and a diversity of (cloud) particle phases, shapes and orientations. Circularly polarized light is much more scarce in nature. It can be produced through multiple scattering processes: single scattering processes usually generate linearly polarized light, after which a second scattering event with atmospheric aerosols can produce circularly polarized light. We refer to Gasso et al. (2022)\cite{Gasso22} for an elaborate summary of circular polarization due to atmospheric aerosols. In addition to multiple scattering processes, circularly polarized light can be produced by the homochirality of biotic systems\cite{Patty19,Patty21}. The development of theoretical scattering models including both linear and circular polarization is essential to understand all the information coming from remotely sensed spectropolarimetric data. 
There exist various extensive spectropolarimetric models of Earth-like (exo)planets featuring realistic atmosphere profiles\cite{stam08}, realistic cloud parameters\cite{Groot20}, wind-ruffled oceans with sea foam and shadows of the waves\cite{Trees22}, and multiple surface reflection scattering matrices based on bidirectional reflectance functions and characteristic (wavelength-dependent) surface albedos\cite{stam08}. For the latter, surface albedos for many different natural and man-made materials are provided by libraries such as the ASTER\cite{Baldridge09} and the ECOSTRESS spectral library\cite{Meerdink19}. Surface albedos can be used for surface identification. For example, the spectral shapes of vegetation features can easily be distinguished from bare soils\cite{Liang02}, see Figure \ref{fig: albedo}. Albedos may vary over the year due to the change of seasons, changes in vegetation characteristics or their moistness. The vegetation albedo spectra share the following characteristics: (i) absorption bands of chlorophyll around 435-485 nm and around 645-685 nm, (ii) a high albedo at wavelengths longer than 700 nm, which is also referred to as the red edge, and (iii) the absorption of light by intracellular liquid water, causing slight dips around 0.97, 1.15, 1.45, and 1.92 $\mu$m \cite{Hedge15}. The term surface albedo is commonly used in astronomy and climate research; for simplicity, we will refer to the surface albedo as the (surface) reflectance. Even though the models have advanced extensively over the years, there is still room for improvement. The surface models derived from scattering matrices based on the surface reflectance serve their purpose very well when considering a planetary disk-integrated signal. Unfortunately, they do not include circular polarization signals induced by vegetation features. 
In general, and especially in nature, circular polarization signals are very faint compared to those of linear polarization: the difference can easily be three orders of magnitude. However, circular polarization being a powerful tool for the remote-sensing of biotic matter, we do want to investigate the possibility of adding it to existing surface models. Remote-sensing of (circularly) polarized biosignatures has received a lot of interest in recent years \cite{Patty19,Patty21,Snik19,Sparks20}. Data acquired through remote-sensing offer crucial information for the design and development of the next generation of space-based spectropolarimeters. We follow up on the results of Patty et al. (2021)\cite{Patty21}, who used a helicopter to perform airborne measurements. In this work, we use a stable and cheaper airborne platform: a hot air balloon. One of the biggest advantages of a hot air balloon is that our proximity to the instrument during the flight allows us to make on-the-fly adjustments to the optical-setup. We measure the light that is reflected by the surface from the moving balloon basket; hence, we are limited to a single illumination angle, viewing angle and elevation per surface scene. Therefore, our main focus lies with measuring the circular polarization spectra of landscapes and their use in surface identification. In addition to circular polarization, we continuously record four individual linear polarization states using an ordinary polarization camera. We investigate the contribution and practicality of using a polarization camera for future airborne field campaigns. This paper has the following structure. In Section \ref{sec: methods}, we describe the basics of polarimetry and the concept of the Normalized Difference Vegetation Index (NDVI) used to distinguish between different surface types. 
In Section \ref{sec: instrument}, we describe our instrumental set-up that was designed to measure the fractional circular polarized reflection of the Earth from a balloon. In Section \ref{sec: results}, we summarize the main results from the airborne measurements. At the end, we finalize this paper with our conclusions and discussion in Section \ref{sec: conclusion}. \section{Methods} \label{sec: methods} \subsection{Polarization} Polarization is generally described with a Stokes vector $\textbf{S} = (I,Q,U,V)$, where $I$ is the total intensity, $Q$ and $U$ are the linearly polarized intensities and $V$ the circularly polarized intensity. The complete polarization states are described in terms of the normalized Stokes parameters $Q/I$, $U/I$ and $V/I$ (each ranging from -1 to 1), where $Q/I$ denotes the difference between linearly polarized intensities normal and parallel to the plane of scattering, $U/I$ denotes the difference between +45$^\circ$ and -45$^\circ$ to the plane of scattering, and $V/I$ denotes the difference between right-handed and left-handed circularly polarized light. The linear Stokes parameters $Q$ and $U$ can be combined into the (dimensionless) Degree of Linear Polarization, DoLP. The DoLP of a surface indicates the fraction of the reflected light that is linearly polarized. It can provide us with essential information about land surface characteristics. The Angle of Linear Polarization, AoLP, contains additional information related to surface or material properties. The DoLP and AoLP are expressed as: \begin{equation} \label{eqn: dolp&aolp} \text{DoLP} = \frac{\sqrt{Q^2+U^2}}{I}\vspace{5pt}; \hspace{10pt} \text{AoLP} = \frac{1}{2} \tan^{-1}\left(\frac{U}{Q}\right). \end{equation} \subsection{The NDVI} Using satellite imaging techniques, we can measure the reflected light spectra of different types of Earth surfaces. However, calculating an accurate widespread surface reflectance can be complex. 
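For concreteness, Equation~(\ref{eqn: dolp&aolp}) translates directly into a few lines of code. This is a minimal sketch (not instrument software), using `arctan2` to resolve the quadrant ambiguity of $\tan^{-1}(U/Q)$:

```python
import numpy as np

def dolp_aolp(I, Q, U):
    """Eq. (1): DoLP = sqrt(Q^2 + U^2)/I and AoLP = 0.5*arctan(U/Q),
    with atan2 used to resolve the quadrant of the arctangent."""
    return np.hypot(Q, U) / I, 0.5 * np.arctan2(U, Q)

# Fully linearly polarized light oriented at +45 deg: Q = 0, U = I
d, a = dolp_aolp(1.0, 0.0, 1.0)
print(d, round(np.degrees(a), 6))  # → 1.0 45.0
```

The same two expressions apply pixel-by-pixel to the Stokes images discussed later, since `numpy` broadcasts over arrays.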
Instead, we choose to calculate the Normalized Difference Vegetation Index (NDVI)\cite{Rouse74} for individual measurements to distinguish between different surface types. We calculate the NDVI according to Patty et al. (2021)\cite{Patty21}: \begin{equation} \text{NDVI} = \frac{I_{\text{NIR}}-I_{\text{R}}}{I_{\text{NIR}}+I_{\text{R}}}, \end{equation} where $I_{\text{R}}$ is the mean of the reflectance from $650 \mathrm{nm}$ to $710 \mathrm{nm}$ and $I_{\text{NIR}}$ is the mean from $750 \mathrm{nm}$ to $780 \mathrm{nm}$. The NDVI ranges from -1.0 to +1.0, where negative values are likely to identify water. NDVI values close to +1 are very likely to indicate green vegetation, which absorbs solar radiation for photosynthesis in the spectral region from 400 to 700~nm and reflects radiation in the NIR region of the solar spectrum (the Red Edge). For green vegetation, $I_{\text{NIR}}$ is thus much larger than $I_{\text{R}}$ and the NDVI is close to +1.0. An NDVI close to zero indicates an absence of green vegetation, for example, an urban area with roofs, roads, and/or concrete. \section{Instrumental set-up} \label{sec: instrument} Circular spectropolarimetric measurements were performed with the FlyPol\cite{Patty21} instrument. FlyPol is a spectropolarimeter, based on the TreePol design\cite{Patty17,Patty19}, that uses fast temporal polarization modulation to obtain the fractionally induced circular polarization ($V/I$) of the observed light as a function of wavelength from 400 to 900~nm. The instrument's sensitivity ($< 10^{-4}$) and accuracy ($< 10^{-3}$) are high enough to measure the circular polarization induced by vegetation. FlyPol improves its stability in the field by actively controlling the temperature of its optics and electronics. Its angular field of view is approximately 1.2$^\circ$\cite{Patty21}. 
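The NDVI defined in Section \ref{sec: methods} likewise reduces to a few lines of code. A minimal sketch follows; the band means follow the definition above, while the step spectrum is a synthetic vegetation-like stand-in, not measured data:

```python
import numpy as np

def ndvi(wavelength_nm, reflectance):
    """NDVI from a reflectance spectrum: mean reflectance over
    750-780 nm (NIR) versus 650-710 nm (red), per the definition above."""
    wl = np.asarray(wavelength_nm, float)
    r = np.asarray(reflectance, float)
    i_r = r[(wl >= 650) & (wl <= 710)].mean()
    i_nir = r[(wl >= 750) & (wl <= 780)].mean()
    return (i_nir - i_r) / (i_nir + i_r)

# Toy vegetation-like spectrum: low in the red, high beyond the red edge
wl = np.arange(400, 901, 10)
refl = np.where(wl < 720, 0.05, 0.45)
print(round(ndvi(wl, refl), 2))  # → 0.8
```

A spectrum with higher red than NIR reflectance (e.g. water) gives a negative index by the same formula.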
All airborne measurements presented in this paper were obtained during one flight in an Ultramagic hot air balloon carrying a so-called T-partition basket with a capacity of 11 passengers including the pilot. The volume of this hot air balloon is relatively large, 8500 cubic meters, which allowed us to take FlyPol and four scientists to perform the measurements. The size of the entire basket was $1.2\times1.75\times3.05$ $(\text{h}\times\text{w}\times\text{l})$ meters. A single partition was large enough ($\sim 0.80\times2.00$ meters) to fit the tripod to which FlyPol was connected. All electronics could be stored beneath the tripod, leaving enough space in the partition for two persons. FlyPol was oriented using a calibrated parallax-free telescope pointer\cite{Patty21}. A GoPro HERO7 reference camera and a broadband Thorlabs Polarization-Sensitive Kiralux$^{\tiny{\text{\textregistered}}}$ camera were aligned with the pointer and mounted on top of FlyPol's instrument casing, see Figure \ref{fig: pointing-cameras}. The polarization camera features a 5.0 MP monochrome CMOS sensor with a wire grid polarizer array. The array consists of a repeating pattern of four linear polarizers with transmission axes oriented at 0$^\circ$, 45$^\circ$, 90$^\circ$ and 135$^\circ$, respectively. A ``super-pixel'' calibration algorithm\cite{Gimenez19,Lane22} corrects the images for dark noise, flat fielding, and the optical imperfections of the Polarization Filter Array. In this way, we acquired pointing and additional information regarding the observed areas, which is insightful when analyzing the data. Two individual GPS trackers were used to record the flight trajectory, see Figure \ref{fig: pointing-cameras}. The balloon flew over the Broye district in the canton of Fribourg, Switzerland. This region lies on an elevated, relatively flat plain that is called the Swiss Plateau. The canton is predominantly rural, featuring farm sites, valleys and small forests. 
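To illustrate how the four micropolarizer states combine into Stokes parameters, here is a minimal super-pixel demosaicing sketch. It is not the cited calibration algorithm\cite{Gimenez19,Lane22}, which additionally handles dark noise, flat fielding and the Polarization Filter Array imperfections; moreover, the $2\times2$ mosaic layout assumed below is ours and must be checked against the actual sensor:

```python
import numpy as np

def superpixel_stokes(img):
    """Linear Stokes images from a 0/45/90/135-degree micropolarizer mosaic.

    Assumed 2x2 super-pixel layout (an assumption; the true ordering
    depends on the sensor): [[0, 45], [135, 90]] degrees.
    """
    i000, i045 = img[0::2, 0::2], img[0::2, 1::2]
    i135, i090 = img[1::2, 0::2], img[1::2, 1::2]
    I = 0.5 * (i000 + i045 + i090 + i135)  # each polarizer passes half of I
    Q = i000 - i090
    U = i045 - i135
    return I, Q, U

# Uniform scene, fully polarized at 0 deg (Malus: 45/135-deg pixels see half)
img = np.zeros((4, 4))
img[0::2, 0::2] = 1.0   # 0-deg pixels
img[0::2, 1::2] = 0.5   # 45-deg pixels
img[1::2, 0::2] = 0.5   # 135-deg pixels
I, Q, U = superpixel_stokes(img)
print(I[0, 0], Q[0, 0] / I[0, 0], U[0, 0] / I[0, 0])  # → 1.0 1.0 0.0
```

The resulting $I$, $Q$ and $U$ images (at half the native resolution) feed directly into the DoLP and AoLP expressions of Section \ref{sec: methods}.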
With an altitude of 2389~m, the Vanil Noir is the highest mountain in the canton. This is about 200~m lower than the highest altitude that we reached during our flight. From here on, we refer to the elevation of the balloon, which is the difference between the measured GPS altitude in the air and the local ground-level altitude. The highest elevation was just below 2000~m. The wind determines both the flight speed and direction. At altitudes $< 1500$~m we were heading south with a ground speed of $\sim$10 km/h, while the winds at altitudes $> 1500$~m drove us north with a ground speed of $\sim$27 km/h. The pilot could turn the orientation of the balloon, and thus the basket, in the timeframe between the end of the take-off and the start of the landing. In total, 31 measurements were obtained under clear-sky conditions on May 30, 2022, from 19:45 to 20:45 CEST, just before sunset (21:20 CEST). We chose to fly during the evening hours, as hot air balloons are only able to fly within the time-span of 3 hours before sunset until 3 hours after sunrise. In this work, we refer to a single, continuous measurement as a `scene'. During the flight, the solar zenith and azimuth angles were approximately 86$^\circ$-90$^\circ$ and 300$^\circ$-310$^\circ$, respectively. Throughout the flight, FlyPol's integration times were varied per scene to obtain the highest average photon count possible while preventing saturation, see Figure \ref{fig: itimevscounts}. For the first and last measurements, the average photon count ($< 10000$ counts) is low compared to the other measurements. While approaching sunset, the integration time was increased as the photon counts strongly decreased with time. At the same time, we adjusted the integration times of the polarization camera, as the auto-exposure mode caused saturation of the first set of images. The polarization camera was set to record one exposure every 5 seconds. Unfortunately, adjusting the integration times took at least a minute, leaving holes in the dataset. 
The reference camera took an individual automatic shutter image every 30 seconds\footnote{30 seconds is the bare minimum for automatic shutter imaging with a GoPro HERO7.}. \section{Results} \label{sec: results} In Figure \ref{fig: observation_194857_G0013896}, we show the results of a scene observed during take-off, at an elevation of about 20~m. The reference photo on the left contains three transparent black/white dots that mark the central pointing of FlyPol for three sequential measurements with a 30-second time interval. The line connecting the dots indicates the likely intermediate pointing trajectory. The panels on the right show the qualitative reflectance and circular polarization $V/I$ for the trajectory per wavelength, over time. Both plots show a very clear separation between the reflected light signals of the soil and the grass after $\sim400$ measurements. The soil shows a spectrally flat reflectance and a negligible circular polarization. The grass shows a strong increase in the reflectance in the Red Edge, around 710~nm. The inset in the left image shows the circular polarization spectrum averaged over time for the soil and grass features on the right. The grass spectrum reveals a negative band with a minimum at 675 nm and a magnitude of $V/I_{\text{min}} = 2.0\times 10^{-3}$. In Figure \ref{fig: observation_200113_G0013921} we show the results for a scene pointing at a rural area from an elevation of $\sim$~650~m. The yellow and pink lines in the white line trajectory match the location of the qualitative reflectance data, highlighted in yellow and pink, for the trajectory per wavelength, over time. We were able to identify the two different surface types using the reflectance and the NDVI. Just as for the reflectance in Figure \ref{fig: observation_194857_G0013896}, there is an increase in the reflectance around 675~nm due to the Red Edge. The measured $V/I$ is smaller than for the previous scene. 
Using the NDVI values and the reference photos for the individual scenes, we were able to differentiate between five surface types: grass, soil, trees, urban, and water. The spectra presented in Figure \ref{fig: landscapesVI} are time-averaged $V/I$ spectra. The scenes that feature vegetation (grass, trees) are easily distinguishable from the others, due to their high ($> 0.75$) NDVI values. As explained earlier, this is due to the relatively strong reflection of near-infrared light and the absorption of red light by the chlorophyll molecules. The circular polarization spectra of trees show a positive polarization band of $V/I=1.0\times10^{-3}$ around 650 nm. The grass shows a negative band with a minimum of $V/I=2.0\times10^{-3}$ around 660 nm. Beyond $\sim$~675~nm, $V/I$ decreases. As expected, scenes that feature soil, urban areas and water do not show significant circular polarization signals. Patty et al. (2021)\cite{Patty21} flew over the lake Lac des Taill\`{e}res and measured a circular polarization signal of $V/I = 1.1 \times 10^{-3}$, which indicates a possible presence of photosynthetic organisms like algae. We pointed FlyPol several times at the Murtensee to try and detect a visible red edge as well. All our circular polarization spectra have a shape similar to the one presented in Figure~\ref{fig: observation_201251_G0013943}. We did not observe a circular polarization signal originating from surface water, which suggests an absence of biotic organisms. Figures~\ref{fig: observation_195217_healthyness_vegetation} and~\ref{fig: observation_grass_trees_underdiscussion} illustrate the difficulty of identifying multiple distinct surface features within one scene. Figure \ref{fig: observation_195217_healthyness_vegetation} contains 15~subsequent three-second duration circular polarization spectra originating from one observation scene while flying over farmland. 
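Extracting band positions and magnitudes like those quoted above from a time-averaged spectrum can be sketched as follows; a toy Gaussian band stands in for measured data, and the 640-700 nm search window is our illustrative choice:

```python
import numpy as np

def band_extremum(wl_nm, v_over_i, lo=640.0, hi=700.0):
    """Wavelength and value of the strongest |V/I| feature inside the
    chlorophyll band [lo, hi] nm of a (time-averaged) spectrum."""
    wl = np.asarray(wl_nm, float)
    v = np.asarray(v_over_i, float)
    band = (wl >= lo) & (wl <= hi)
    k = np.argmax(np.abs(v[band]))
    return wl[band][k], v[band][k]

# Toy grass-like spectrum: a Gaussian negative band centred at 675 nm
wl = np.linspace(400.0, 900.0, 501)
v = -2.0e-3 * np.exp(-0.5 * ((wl - 675.0) / 12.0) ** 2)
w_min, v_min = band_extremum(wl, v)
print(w_min, v_min)  # → 675.0 -0.002
```

On noisy data one would first average the per-exposure spectra (as done for Figure \ref{fig: landscapesVI}) before locating the extremum.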
The magnitude varies from $V/I_{\text{min}}= -9.0 \times 10^{-3}$ to $V/I_{\text{max}} = 2.5 \times 10^{-3}$ due to the large variety of observed grass and soil. This large variety makes it difficult to assign an accurate source to all the individual lines. We end up with a signal of $V/I = 2.0 \times 10^{-3}$, see the pink line in Figure \ref{fig: observation_195217_healthyness_vegetation}, when averaging over all 15 spectra. The three spectra (large, no and small red edge) presented in Figure \ref{fig: observation_grass_trees_underdiscussion} also originate from one observation scene. The `small red edge' spectrum shows one positive band of $V/I = 1.0 \times 10^{-3}$, whereas the `large red edge' spectrum shows a similar positive band and a large negative band of $V/I = -4.2 \times 10^{-3}$. By looking at the trajectory, we identify the small and the large red edge as grass and trees, respectively. The circular polarization spectrum averaged over the entire scene would show no clear signature due to the relatively large soil (`no red edge') coverage across the scene. From top to bottom, Figure \ref{fig: DoLPAoLP} shows four polarization camera images of the Murtensee and its coast, a distant mountain top, a central point of the Murtensee and an urban landscape. The DoLP reveals structural features of the landscapes, which allows us to distinguish between a mountain top covered in ice and foggy clouds located just above the horizon. For the urban landscape (the middle bottom plot), we clearly distinguish the (white) buildings located in the lower half of the image from the grass fields located in the top half of the image. The AoLP images of the `Murtensee' and `Glint Murtensee' scenes reveal a patchy structure on the water surface that we identify as a glint phenomenon. 
\section{Conclusions and Discussion} \label{sec: conclusion} We described an airborne optical-setup that we used to simultaneously measure circular polarization spectra and broadband DoLP and AoLP from various scenes that include biotic and abiotic features. During the entire flight, reference photos were taken with a regular imaging camera. These encapsulated the area that we were pointing towards, allowing for concise identification of the sources inducing the observed circular polarization spectra. To our knowledge, this was the first time that a hot air balloon was used as an observing platform for spectropolarimetric measurements of the Earth's surface. We established the maximum solar angle and integration time for which we can obtain circular polarization spectra. This information is of great value for the upcoming flights. At elevations of $\sim$~20~m and $\sim$~650~m (Figures \ref{fig: observation_194857_G0013896} and \ref{fig: observation_200113_G0013921}), we could distinguish between circular polarization spectra of grass and soil. The spectral characteristics of the grass landscapes in this paper are qualitatively similar to those presented by Patty et al. (2021)\cite{Patty21}. We measured a circular polarization of grass of $V/I$ = $2\times10^{-3}$ ($\sim$~20~m elevation) and $V/I$ = $-5\times10^{-4}$ ($\sim$~650~m elevation). Unlike Patty et al. (2021)\cite{Patty21}, we did not observe circular polarization signals from surface water (Figure \ref{fig: observation_201251_G0013943}). A possible explanation is that there is a smaller biomass of photosynthetic organisms in the water, as this study was conducted in the early spring, while Patty et al. measured during late summer. In addition, the lake was observed in the second half of the balloon flight, close to sunset. The lack of signal could thus also be due to the combination of a large viewing and solar angle. 
We captured how $V/I$ varies for a subsequent observation of farmland featuring various types of grass and possibly dry and wet soil types (Figure \ref{fig: observation_195217_healthyness_vegetation}). The circular polarization spectra for 15 subsequent three-second measurements of the farmland reveal a circular polarization ranging between $V/I$ = $-9.0 \times10^{-3}$ and $2.5 \times 10^{-3}$. We have two explanations for this variation: (i) the arrangement of e.g. crops in soil causes the variation as we alternate between observing crops and soil, or (ii) the variation is a consequence of a diversity in health of the observed vegetation. The latter explanation is the more likely one, as we do not see clear crop(-like) fields on the reference photo. Although quantitative circular polarization measurements on the health of the vegetation are lacking, we do know that leaves show a reduced circular polarization when decaying \cite{Patty17}. Further quantitative and qualitative analysis is required to verify our hypothesis. The circular polarization spectra from grass feature solely a single band, often being negative. Unlike the grass spectra, those from tree canopies vary in both shape and sign. The forest spectra, as presented in Patty et al. (2021)\cite{Patty21}, have both a positive and a negative band that exhibit a larger variation in shape and magnitude than the grass spectra. The difference between the two is illustrated in Figure \ref{fig: observation_grass_trees_underdiscussion}. The spectrum showing the large negative peak is identified as grass, whereas the single positive peak around 710 nm appears to be induced by tree canopies. During this field campaign, we noticed a high diversity in circular polarization spectra for forests. It remains unclear what mechanisms could cause these variations. Measurements performed on individual (tree) leaves are a first step towards understanding the signals resulting from a canopy. 
As far as we know, Patty et al. (2022)\cite{Patty22} is the only study that investigated the full-Stokes spectropolarimetry of single leaves from different species and their dependency on the incidence angle of the light source. In their experiment, they varied the phase angle from 10$^\circ$ to 75$^\circ$, where the phase angle corresponds to the angle of the incident light. In general, all their fractional linear ($Q/I$) polarization spectra show a peak around 680 nm due to absorption by chlorophyll, and all their fractional circular ($V/I$) spectra feature a negative band around 670 nm and a positive band around 700 nm. The $Q/I$ spectra reveal a strong dependence on phase angle, whereas the $V/I$ spectra are relatively insensitive to changes in the phase angle. We are currently investigating the effect of changing the phase angle and viewing angle on multiple, leaf-to-leaf reflections, see Mulder et al. (in prep). With this knowledge, we will be a step closer to formulating a realistic circular polarization surface model that can be used to accurately simulate various tree canopies and therefore also aid in the interpretation of airborne full-Stokes spectropolarimetric measurements. In addition, these realistic circular polarization surface models would be valuable for realistic Earth-like (exo)planet models\cite{stam08,Groot20}. We used the obtained broadband DoLP and AoLP information for landscape identification purposes. It would be interesting to capture linear polarization spectra together with our circular polarization spectra of the surface scenes. This can be achieved by either adding a second spectropolarimeter or a spectropolarimeter that is capable of acquiring full-Stokes polarization from a single data frame \cite{Snik19,Sparks19,Keller20,Mulder21}. Peltoniemi et al. (2015)\cite{Peltoniemi15} point out that linear polarization would predominantly provide scalar reflectance information. 
However, we might be able to obtain more information than surface reflectance, especially when covering multiple phase angles. Due to the relatively fast ascent and descent, the effect of the balloon elevation on the circularly polarized spectra from abiotic and biotic surfaces remains inconclusive. Further research (including numerical simulations) will help us to understand the influence of illumination and viewing angles on the circular polarization spectra for various landscapes, including high elevation biomes (e.g. tundra, moss, lichens), barren land, water bodies (shallow water), ice, snow, and red algae on snow from different elevations. Hot air balloons are only able to fly within roughly 3 hours after sunrise or 3 hours before sunset. We will plan our next balloon flight in the early morning. This allows us to prepare our instrument during take-off, e.g. taking flat fields, while setting up the polarization and reference camera. Doing so, we will start the scientific measurements just after sunrise, thus preventing large integration times and large scattering angles. In upcoming research, we will include measurements of grass and various tree canopies from one single observation point over the course of an entire day. With this quantitative and qualitative data, we will formulate a realistic circular polarization vegetation model. \acknowledgments We thank the team of Balloons du Leman (the entire ground team, Julie, pilot Laura and Gael) personally for their enthusiasm, flexibility and help in the preparations for and during the balloon flight. We thank Remko Stuik for the airborne platform brainstorm session. This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40\_182901 and 51NF40\_205606. This work was supported by the second Planetary and Exoplanetary Science Programme (PEPSci-II) of the Netherlands Organisation for Scientific Research (NWO). 
\bibliography{main} % \bibliographystyle{spiebib} %
Title: High-order Discontinuous Galerkin hydrodynamics with sub-cell shock capturing on GPUs
Abstract: Hydrodynamical numerical methods that converge with high-order hold particular promise for astrophysical studies, as they can in principle reach prescribed accuracy goals with higher computational efficiency than standard second- or third-order approaches. Here we consider the performance and accuracy benefits of Discontinuous Galerkin (DG) methods, which offer a particularly straightforward approach to reach extremely high order. Also, their computational stencil maps well to modern GPU devices, further raising the attractiveness of this approach. However, a traditional weakness of this method lies in the treatment of physical discontinuities such as shocks. We address this by invoking an artificial viscosity field to supply required dissipation where needed, and which can be augmented, if desired, with physical viscosity and thermal conductivity, yielding a high-order treatment of the Navier-Stokes equations for compressible fluids. We show that our approach results in sub-cell shock capturing ability, unlike traditional limiting schemes that tend to defeat the benefits of going to high order in DG in problems featuring many shocks. We demonstrate exponential convergence of our solver as a function of order when applied to smooth flows, such as the Kelvin-Helmholtz reference problem of arXiv:1509.03630. We also demonstrate excellent scalability of our GPU implementation up to hundreds of GPUs distributed on different compute nodes. In a first application to driven, sub-sonic turbulence, we highlight the accuracy advantages of high-order DG compared to traditional second-order accurate methods, and we stress the importance of physical viscosity for obtaining accurate velocity power spectra.
https://export.arxiv.org/pdf/2208.11131
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} methods: numerical – hydrodynamics – turbulence – shock waves \end{keywords} \section{Introduction} Computational fluid dynamics has become a central technique in modern astrophysical research \citep[for reviews, see, e.g.,][]{Trac2003, Vogelsberger2020, Andersson2021}. It is used in numerical simulations to advance the understanding of countless systems, ranging from planet formation \citep[e.g.][]{Nelson2000} over the evolution of stars \citep[e.g.][]{Edelmann2019}, and the interplay of gas, black holes and stars in galaxy formation \citep[e.g.][]{Weinberger2017}, up to extremely large scales involving clusters of galaxies \citep[e.g.][]{Dolag2009} or the filaments in the cosmic web \citep[e.g.][]{Mandelker2019}. This wide breadth of scientific applications is also mirrored in a bewildering diversity of numerical discretization schemes. Even though the underlying equations for thin, non-viscous gases -- the Euler equations -- are the same in a broad class of astrophysical studies, the commonly applied numerical methods come in many different flavors, and are sometimes based on radically different principles. At a basic level, one often distinguishes between Lagrangian and Eulerian discretization schemes. The former partition the gas into elements of (nearly) constant mass, as done for example in the popular smoothed particle hydrodynamics (SPH) approach \citep[e.g.][]{Monaghan1992} and its many derivatives. In contrast, the latter discretize the volume using a stationary (often Cartesian) mesh \citep[e.g.][]{Stone1992}, such that the fluid is represented as a field. Hybrid approaches, which for example use an unstructured moving mesh \citep{Springel2010}, are also possible. For mesh-based codes, finite-volume and finite-element methods are particularly popular. In the finite-volume approach, one records the averaged state in a cell, which is updated in time by the numerical scheme. 
This approach combines particularly nicely with the conservative character of the Euler equations, because the updates of the conserved quantities in each cell can be expressed as pair-wise fluxes through cell boundaries, yielding not only a manifestly conservative approach but also a physically intuitive formulation of the numerical method. In finite-element approaches one instead expands the fluid state in terms of basis functions. In spectral methods, the support of the basis functions can be the full simulation domain, for example if Fourier series are used to represent the system. Discontinuous Galerkin (DG) approaches \citep[first introduced for non-linear problems by][]{Cockburn1989}, which are the topic of this paper, are a particular kind of finite-element approach in which a series expansion for the solution is carried out separately within each computational cell (which can have a fairly general shape). Inside a cell, it is thus simply a truncated spectral method. The solutions for each of the cells are coupled with each other, however, at the surfaces of the cells. Interestingly, high-order accuracy of global solutions can be obtained simply through the high order of the spectral method applied inside a cell, while it does not require continuity of the solutions at the cell interfaces. This makes it particularly straightforward to extend DG schemes to essentially arbitrarily high order, because this does not make the coupling at cell interfaces any more complicated. This is quite different from high-order finite volume schemes, where the reconstruction step requires progressively deeper stencils at high order \citep{Janett2019}. Another advantage of the DG approach is that it allows in principle cells of different convergence order to be directly next to each other \citep{Schaal2015}. 
This makes a spatially varying mesh resolution, or a spatially varying expansion order, more straightforward to implement than in high-order extensions of finite volume methods, where typically the high-order convergence property is compromised at resolution changes unless preserved with special treatments. Despite these advantages, DG methods have only recently begun to be considered in astrophysics. First implementations and applications include \citet{Mocz2014, Schaal2015, Kidder2017, guillet_high_order_2019}, as well as more recently \citet{Lombart2021, Markert2022, Deppe2022}. We here focus on exploring a new implementation of DG that we developed from the ground up for use with graphical processing units (GPUs). The recent advent of exascale supercomputers has been enabled through the use of graphical processing units (GPUs) or various other types of accelerator units. The common feature of these accelerators is the capability to execute a large number of floating point operations at the expense of lower memory bandwidth and total memory per computing unit (few MBs compared to few GBs on an ordinary compute node) compared to the CPU. Another peculiarity of accelerators is that they have hundreds of computing units (roughly equivalent to CPU cores) which execute operations in a single instruction, multiple data (SIMD) mode. Since many of the newest and largest supercomputers use such accelerators, it becomes imperative to either modify existing simulation codes for their efficient use, or to write new codes optimized for this hardware from scratch. While there are already many successes in the literature for both approaches \citep[e.g.][]{Schneider2015, Ocvirk2016, Wibking2022}, most current simulation work in the astrophysical literature is still being carried out with CPU codes. Certainly one reason is that large existing code bases are not easily migrated to GPUs. 
Another is that not all numerical solvers easily map to GPUs, making it hard or potentially impossible to port certain simulation applications to GPUs. However, there are also numerous central numerical problems where GPU computing should be applicable and yield sizable speed-ups. One is the study of hydrodynamics with uniform grid resolutions, as needed for turbulence. In this work, we thus focus on developing a new implementation of DG that is designed to run on GPUs. We base our implementation of DG on \citet{Schaal2015} and \citet{guillet_high_order_2019}, with one critical difference. We do not apply the limiting schemes described in these studies as they defeat the benefits of high-order approaches when strong shocks are present. Rather, we will revert to the idea of deliberately introducing a small amount of artificial viscosity to capture shocks, i.e.~to add required numerical viscosity just where it is needed, and ideally with the smallest amount necessary to suppress unphysical oscillatory solutions. As we will show, with this approach the high-order method can still be applied well to problems involving shocks, without having to sacrifice all high-order information to a slope limiter. This paper is structured as follows. In Section~\ref{SecEuler}, we detail the mathematical basis of the Discontinuous Galerkin discretization of hydrodynamics as used by us. In Section~\ref{SecViscosity}, we generalize the treatment to include source terms which involve derivatives of the fluid states, such as needed for the Navier-Stokes equations, or for our artificial viscosity treatment for that matter. We then turn to a discussion of shock capturing and oscillation control in Section~\ref{SecImplementation}. The following Section~\ref{SecBasicTests} is devoted to elementary tests, such as shock tubes and convergence tests for smooth problems. 
In Section~\ref{SecKH} we then show results for ``resolved'' Kelvin-Helmholtz instabilities, and in Section~\ref{SecTurbulence}, we give results for driven isothermal turbulence and discuss to what extent DG methods improve the numerical accuracy and efficiency of such simulations. Implementation and parallelization issues of our code, in particular with respect to using GPUs, are described in Section~\ref{SecCode}, while in Section~\ref{SecPerformance}, we discuss the performance and scalability of our new GPU-based hydrodynamical code. Finally, we give a summary and our conclusions in Section~\ref{SecSummary}. \section{Discontinuous Galerkin discretization of the Euler equations} \label{SecEuler} The Euler equations are a system of hyperbolic partial differential equations. They encapsulate the conservation laws for mass, momentum and total energy of a fluid, and can be expressed as \begin{equation} \label{eq:euler_equation} \frac{\partial \boldsymbol{u}}{\partial t}+\sum_{\alpha=1}^{d} \frac{\partial \boldsymbol{f}_{\alpha}(\boldsymbol{u})}{\partial x_{\alpha}}=0 , \end{equation} where the sum runs over the $d$ dimensions of the considered problem. The state vector $\boldsymbol{u}$ holds the conserved variables: density, momentum density, and total energy density: \begin{equation} \label{eq:euler} \boldsymbol{u}=\left[\begin{array}{c} \rho \\ \rho \boldsymbol{v} \\ e \\ \end{array}\right], \quad e=\rho u+\frac{1}{2} \rho \boldsymbol{v}^{2}. \end{equation} To make our system complete we need an equation of state which connects the hydrodynamic pressure $p$ with the specific internal energy $u$. If $\gamma$ is the adiabatic index, i.e.~the ratio of the specific heat of the gas at constant pressure $C_p$ to its specific heat at constant volume $C_v$, the ideal gas equation of state is \begin{equation} p = \rho u \left( \gamma - 1 \right). \end{equation} We also need to specify the second term of Eq.~(\ref{eq:euler_equation}). 
The fluxes $\boldsymbol{f_\alpha} (\boldsymbol{u} )$ in three dimensions are: \begin{equation} \label{eq:flux_matrix} \boldsymbol{f}_{1}=\left(\hspace*{-0.2cm} \begin{array}{c} \rho v_{x} \\ \rho v_{x}v_{x}+p \\ \rho v_{x} v_{y} \\ \rho v_{x} v_{z} \\ (e+p) v_{x} \end{array}\hspace*{-0.2cm}\right), \; \boldsymbol{f}_{2}=\left(\hspace*{-0.2cm} \begin{array}{c} \rho v_{y} \\ \rho v_{x} v_{y} \\ \rho v_{y}v_{y} +p \\ \rho v_{y} v_{z} \\ (e+p) v_{y} \end{array}\hspace*{-0.2cm}\right), \; \boldsymbol{f}_{3}=\left(\hspace*{-0.2cm} \begin{array}{c} \rho v_{z} \\ \rho v_{x} v_{z} \\ \rho v_{y} v_{z} \\ \rho v_{z}v_{z}+p \\ (e+p) v_{z} \end{array}\hspace*{-0.2cm}\right), \end{equation} where $e$ is the total energy density defined above. By summarizing the flux vectors into $\boldsymbol{F} = (\boldsymbol{f}_1, \boldsymbol{f}_2, \boldsymbol{f}_3)$, we can also write the Euler equations in the compact form \begin{equation} \label{eq:euler_compact} \frac{\partial \boldsymbol{u}}{\partial t} + \boldsymbol{\nabla} \cdot \boldsymbol{F} = 0, \end{equation} which highlights their conservative character. Numerically solving this set of non-linear, hyperbolic partial differential equations is at the heart of computational fluid dynamics. Here we shall consider the specific choice of a high-order Discontinuous Galerkin (DG) method. \subsection{Representation of conserved variables in DG} In the Discontinuous Galerkin approach, the state vector $\boldsymbol{u}^K(\boldsymbol{x}, t )$ in each cell $K$ is expressed as a linear combination of time-independent, differentiable basis functions $\phi_l^K(\boldsymbol{x})$, \begin{equation} \label{eq:state_vector_expansion} \boldsymbol{u}^{K}(\boldsymbol{x}, t) = \sum_{l=1}^{N} \boldsymbol{w}_{l}^{K}(t)\, \phi_{l}^{K}(\boldsymbol{x}), \end{equation} where the $\boldsymbol{w}_l^K(t)$ are $N$ time dependent weights. 
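To make these definitions concrete, the conserved-to-primitive conversion and the flux vectors can be sketched in a few lines of NumPy (our own illustrative helper, not the paper's GPU code; the monatomic $\gamma = 5/3$ is an assumption, and the energy flux is taken as $(e+p)\,v_\alpha$ with $e$ the total energy density defined above):

```python
# Hypothetical NumPy sketch of the conserved state u = (rho, rho*v, e)
# and the flux vectors f_alpha, closed with the ideal-gas equation of state.
import numpy as np

GAMMA = 5.0 / 3.0  # adiabatic index for a monatomic gas (an assumption here)

def primitive(u):
    """Recover (rho, v, p) from u = (rho, rho*vx, rho*vy, rho*vz, e)."""
    rho = u[0]
    v = u[1:4] / rho
    e_int = u[4] - 0.5 * rho * np.dot(v, v)   # internal energy density rho*u
    p = (GAMMA - 1.0) * e_int                 # p = rho * u * (gamma - 1)
    return rho, v, p

def flux(u):
    """Return the flux vectors f_1, f_2, f_3 as columns of a (5, 3) array."""
    rho, v, p = primitive(u)
    F = np.outer(u, v)        # rho*v_a, rho*v_i*v_a, e*v_a for each direction a
    F[1:4] += p * np.eye(3)   # pressure on the diagonal of the momentum block
    F[4] += p * v             # energy row becomes (e + p) * v_a
    return F
```

For instance, the state $(\rho, \rho v_x, 0, 0, e) = (1, 1, 0, 0, 2)$ yields $p = 1$, a mass flux $\rho v_x = 1$, a momentum flux $\rho v_x^2 + p = 2$ and an energy flux $(e+p)v_x = 3$.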
Since the expansion is carried out for each component of our state vector separately, the weights $\boldsymbol{w}_l^K$ are really vector-valued quantities with 5 different values in 3D for each basis $l$. Each of these components is a single scalar function with support in the cell $K$. The union of cells forms a non-overlapping tessellation of the simulated domain, and the global numerical solution is fully specified by the set of all weights. Importantly, no requirement is made that the piece-wise smooth solutions within cells are continuous across cell boundaries. We shall use a set of orthonormal basis functions that is equal in all cells (apart from a translation to the cell's location), and we specialize our treatment in this paper to Cartesian cells of constant size. The DG approach can however be readily generalized to other mesh geometries, and to meshes with variable cell sizes. Also, we will here use a constant number $N$ of basis functions that is equal for all cells, and determined only by the global order $p$ of the employed scheme. In principle, however, DG schemes allow this to be varied from cell to cell (so-called $p$-refinement). \subsection{Time evolution} \label{sec:time_stepping} To derive the equations governing the time evolution of the DG weights $w_l^K$, we start with the original Euler equation from Eq.~(\ref{eq:euler_compact}), multiply it with one of the basis functions and integrate over the corresponding cell $K$: \begin{equation} \int_K \phi_{l}^{K} \frac{\partial \boldsymbol{u}}{\partial t} {\rm d}\boldsymbol{x} + \int_K \phi_{l}^{K} \,\boldsymbol{\nabla} \cdot \boldsymbol{F} \, {\rm d}\boldsymbol{x} = 0. 
\end{equation} Integration by parts of the second term and applying the divergence theorem leads to the so-called weak formulation of the conservation law: \begin{equation} \label{eq:weakform} \int_K \phi_{l}^{K} \frac{\partial \boldsymbol{u}}{\partial t} {\rm d}\boldsymbol{x} + \int_{\partial K} \phi_{l}^{K} \, \boldsymbol{F} \, {\rm d}\boldsymbol{n} - \int_K \boldsymbol{\nabla} \phi_{l}^{K} \, \boldsymbol{F} \, {\rm d}\boldsymbol{x} = 0. \end{equation} If we now insert the basis function expansion of $\boldsymbol{u}$ and make use of the orthonormal property of our set of basis functions, \begin{equation} \int_K \phi_{l}^{K}(\boldsymbol{x}) \phi_{m}^{K}(\boldsymbol{x}) {\rm d}\boldsymbol{x} = \delta_{l, m} |K|, \end{equation} where $|K|$ stands for the volume of the cell (or area in 2D), we obtain a differential equation for the time evolution of the weights: \begin{equation} \label{eq:weight_evolution} |K| \frac{{\rm d} \boldsymbol{w}^K_{l}}{{\rm d} t} = \int_K \boldsymbol{\nabla} \phi_{l}^{K} \, \boldsymbol{F} \, {\rm d}\boldsymbol{x} - \int_{\partial K} \phi_{l}^{K} \, \boldsymbol{F}^\star(\boldsymbol{u}^+, \boldsymbol{u}^-) \, {\rm d}\boldsymbol{n}. \end{equation} Here we also considered that the flux function at the surface of cells is not uniquely defined if the states that meet at cell interfaces are discontinuous. We address this by replacing $\boldsymbol{F}(\boldsymbol{u})$ on cell surfaces with a flux function $\boldsymbol{F}^{\star}(\boldsymbol{u}^+, \boldsymbol{u}^-)$ that depends on both states at the interface, where $\boldsymbol{u}^+$ is the outwards facing state relative to $\boldsymbol{n}$ (from the neighbouring cell), and $\boldsymbol{u}^-$ is the state just inside the cell. We will typically use a Riemann solver for determining $\boldsymbol{F}^{\star}$, making this akin to Godunov's approach in finite volume methods. In fact, the same type of exact or approximate Riemann solvers can be used here as well. 
We use for ordinary gas dynamics a simplified version of the HLLC Riemann solver as implemented in the {\small AREPO} code \citep{Springel2010, Weinberger2020}. We have also included an exact Riemann solver in case an isothermal equation of state is specified. What remains to be done to make an evaluation of Eq.~(\ref{eq:weight_evolution}) practical is to approximate both the volume and surface integrals numerically, and to choose a specific realization for the basis functions. We shall briefly discuss both aspects below. Another ingredient is the definition of the weights for the initial conditions. Thanks to the completeness of the basis, they can be computed by projecting the state vector $\boldsymbol{u}(\boldsymbol{x})$ of the initial conditions onto the basis functions $\phi_l^{K}$ of each cell: \begin{equation} \label{eq:weights_equation} \boldsymbol{w}_{l}^{K} = \frac{1}{|K|} \int_{K} \boldsymbol{u}\, \phi_{l}^{K} \mathrm{~d} V. \end{equation} If a finite number $N$ of basis functions is used to approximate the numerical solution, the total approximation error is then \begin{equation} \label{eq:l1_norm} L1 = \frac{1}{|K|} \int_{K} \left|\, \boldsymbol{u}(\boldsymbol{x}) - \sum_{l=1}^{N} \boldsymbol{w}_{l}^{K}\, \phi_{l}^{K}(\boldsymbol{x})\, \right| \mathrm{~d} V. \end{equation} We shall use this L1 norm to examine the accuracy of our code when analytic solutions are known. \subsection{Legendre basis functions} Following \citet{Schaal2015}, we select Legendre polynomials $P_l(\xi)$ to construct our set of basis functions. They are defined on a canonical interval $[-1,1]$ and can be scaled such that they form an orthogonal basis with normalization chosen as: \begin{equation} \int_{-1}^{1} P_l(\xi) P_m(\xi) {\rm d}\xi = 2\, \delta_{l,m} . \end{equation} Note that the 0-th order Legendre polynomial is just a constant term, while the 1-st order features a simple, purely linear dependence. In general, $P_l(\xi)$ is a polynomial of degree $l$. 
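The scaling that achieves this normalization is $\tilde{P}_l = \sqrt{2l+1}\,P_l$ (our reading of the convention above); the resulting Gram matrix can be checked numerically with a small NumPy sketch (illustrative only, not the paper's code):

```python
# Verify int_{-1}^{1} Ptilde_l(xi) Ptilde_m(xi) dxi = 2 delta_lm for the
# scaled Legendre polynomials Ptilde_l = sqrt(2l+1) P_l, using
# Gauss-Legendre quadrature (exact here, since all integrands are polynomials).
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

nmax = 4
xg, wg = leggauss(nmax + 1)  # exact for polynomial degree up to 2*nmax + 1
basis = [Legendre([0.0] * l + [np.sqrt(2.0 * l + 1.0)]) for l in range(nmax + 1)]
gram = np.array([[np.sum(wg * bl(xg) * bm(xg)) for bm in basis] for bl in basis])
# gram should equal 2 * identity
```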
Within each cell, we define local coordinates $\boldsymbol{\xi} \in \left[ -1, 1 \right]^d$. The translation from global coordinates $\boldsymbol{x}$ to local cell coordinates $\boldsymbol{\xi}$ is: \begin{equation} \label{eq:coordinate_translation} \boldsymbol{\xi}^K=\frac{2}{h}\left(\boldsymbol{x}-\boldsymbol{x}_{c}^{K}\right), \end{equation} with $h$ being the cell size in one dimension, and $\boldsymbol{x}_c^K$ is the cell centre in world coordinates. Multi-dimensional basis functions are simply defined as Cartesian products of Legendre polynomials, for example in three dimensions as follows: \begin{equation} \label{eq:coord_trans} \phi_{l}^{K} (\boldsymbol{x}) = P_l^{\rm 3D}[ \boldsymbol{\xi}^K(\boldsymbol{x}) ], \end{equation} with \begin{equation} P_l^{\rm 3D}[ \boldsymbol{\xi}^K ] \equiv P_{l_x}(\xi^K_x)\cdot P_{l_y}(\xi^K_y) \cdot P_{l_z}(\xi^K_z), \end{equation} where the generalized index $l$ enumerates different combinations of Legendre polynomials $l_x(l)$, $l_y(l)$, and $l_z(l)$ in the different directions. In practice, we truncate the expansion at a predefined order $n$, and discard all tensor products in which the degree of the resulting polynomial exceeds $n$. This means that we end up in 3D with \begin{equation} N^{\rm 3D}(n)=\frac{1}{6}(n+1)(n+2)(n+3) \label{eqn:N3D} \end{equation} basis functions, each a product of three Legendre polynomials of orders $l_{x,y,z}\in \{0, \ldots, n\}$. In 2D, we have \begin{equation} N^{\rm 2D}(n)= \frac{1}{2}(n+1)(n+2), \label{eqn:N2D} \end{equation} and in 1D the number is $N^{\rm 1D}(n)= n +1$. The expected spatial convergence order due to the leading truncation error is in each case $p=n+1$. From now on we will refer to $p$ as the order of our DG scheme, with $n=p-1$ being the highest degree among the involved Legendre polynomials. 
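The counts in Eqs.~(\ref{eqn:N3D}) and (\ref{eqn:N2D}) can be confirmed by brute-force enumeration of the retained tensor products; a quick Python sketch (ours, purely illustrative):

```python
# Enumerate all tensor products P_{lx} P_{ly} P_{lz} whose total degree
# lx + ly + lz does not exceed the truncation degree n, and compare the
# count against the closed-form expressions quoted in the text.
from itertools import product

def count_basis(n, d):
    """Number of retained tensor-product basis functions in d dimensions."""
    return sum(1 for ls in product(range(n + 1), repeat=d) if sum(ls) <= n)

def n3d(n):  # closed form (n+1)(n+2)(n+3)/6
    return (n + 1) * (n + 2) * (n + 3) // 6

def n2d(n):  # closed form (n+1)(n+2)/2
    return (n + 1) * (n + 2) // 2
```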
In Figure~\ref{fig:ndof_fitting_example}, we show an example of approximating a smooth function with Legendre polynomials of different order and with a different number of cells, but keeping the number of degrees of freedom constant. In this case, the approximation error tends to be reduced by going to higher order, even when this implies using fewer cells. \subsection{Gaussian quadrature} An integration of a general function $f(x)$ over the interval $[-1,1]$ can be approximated by Gaussian quadrature rules, as \begin{equation} \int_{-1}^{1} f(x)\, {\rm d} x \simeq \sum_{j=1}^{n_g} g_j\, f(x_j) \end{equation} for a set of evaluation points $x_j$ and suitably chosen quadrature weights $g_j$. We use ordinary Gaussian quadrature with internal points only. The corresponding integration rule with $n_g$ evaluation points is exact for polynomials up to degree $2 n_g -1$. If we use Legendre polynomials up to order $n$, we therefore should use at least $n_g \ge (n + 1)/2$ integration points. Note, however, that the nonlinear dependence of the flux function on the state vector $\boldsymbol{u}$ means that we actually encounter rational functions as integrands and not just simple polynomials. As a result, we unfortunately need a more conservative number of integration points for sufficient accuracy and stability in practice. A good heuristic is to take the number of basis functions used for the one-dimensional case as a guide, so that one effectively employs at least one function evaluation per basis function. This means we pick $n_g = n+1$ in what follows. Multi-dimensional integrations, as needed for the surface and volume integrals in our Cartesian setup, can be carried out through tensor products of Gaussian integrations. 
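The exactness property quoted above is easy to demonstrate numerically; the following sketch (ours, not the paper's code) integrates monomials with $n_g$ Gauss-Legendre nodes and shows that the rule is exact up to degree $2 n_g - 1$ but generally not beyond:

```python
# Gauss-Legendre quadrature with n_g nodes: exact for polynomials of degree
# up to 2*n_g - 1 on [-1, 1]; one even degree higher already shows an error.
import numpy as np
from numpy.polynomial.legendre import leggauss

ng = 4
xg, wg = leggauss(ng)

deg = 2 * ng - 2                       # even degree inside the guarantee
approx = np.sum(wg * xg**deg)
truth = 2.0 / (deg + 1)                # int_{-1}^{1} x^k dx = 2/(k+1), k even

beyond = np.sum(wg * xg**(2 * ng))     # degree 2*n_g exceeds the guarantee
truth_beyond = 2.0 / (2 * ng + 1)
```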
We denote the corresponding function evaluation points as $\boldsymbol{\xi}^{\rm vol}_{\boldsymbol{j}} = (x_{j_1}, x_{j_2}, x_{j_3})$ and Gaussian weights as $g^{\rm vol}_{\boldsymbol{j}} = g_{j_1} \cdot g_{j_2} \cdot g_{j_3}$ for the combination $\boldsymbol{j} = (j_1, j_2, j_3)$ of Gaussian quadrature points needed for integrations over the cell volume in 3D. For surface integrations over our cubical cells, we correspondingly define $\boldsymbol{\xi}^{\rm sur}_{\boldsymbol{k}, x+} = (+1, x_{k_1}, x_{k_2})$, and $\boldsymbol{\xi}^{\rm sur}_{\boldsymbol{k}, x-} = (-1, x_{k_1}, x_{k_2})$ for evaluation points on the right and left surface in the $x$-direction of one of our cubical cells, with $\boldsymbol{k} = (k_1, k_2)$ and likewise for the $y$- and $z$-directions. The corresponding Gaussian quadrature weights are given by $g^{\rm sur}_{\boldsymbol{k}} = g_{k_1} \cdot g_{k_2}$. Putting everything together, we arrive at a full set of discretized evolutionary equations for the weights. For definiteness, we specify this here for the three-dimensional case: \begin{align} \label{eq:time_stepping_final_analytical_equation} \frac{{\rm d} \boldsymbol{w}^K_{l}}{{\rm d} t} = \frac{1}{4} \sum_{\alpha=1}^{3} \sum_{\substack{\boldsymbol{j} \in \\ [1,n_g]^3}} \left\{\boldsymbol{f}_{\alpha}[\boldsymbol{u}^{K}(\boldsymbol{\xi}^{\rm vol}_{\boldsymbol{j}})] \cdot \frac{\partial P_l^{\rm 3D}(\boldsymbol{\xi}^{\rm vol}_{\boldsymbol{j}})}{\partial \xi_{\alpha}}\right\} g^{\rm vol}_{\boldsymbol{{j}}} \nonumber \\ -\frac{1}{8} \sum_{\alpha=1}^{3} \sum_{\substack{\boldsymbol{k}\in \\ [1,n_g]^2}} \Bigg\{ P_l^{\rm 3D}(\boldsymbol{\xi}^{\rm sur}_{\boldsymbol{k},\alpha+})\, \boldsymbol{f}^{\star}_\alpha\left[\boldsymbol{u}^{K,\alpha+}(\boldsymbol{\xi}^{\rm sur}_{\boldsymbol{k},\alpha -}), \boldsymbol{u}^{K}(\boldsymbol{\xi}^{\rm sur}_{\boldsymbol{k},\alpha +})\right] \nonumber \\ - P_l^{\rm 3D}(\boldsymbol{\xi}^{\rm sur}_{\boldsymbol{k},\alpha-})\, \boldsymbol{f}^{\star}_\alpha\left[ 
\boldsymbol{u}^{K}(\boldsymbol{\xi}^{\rm sur}_{\boldsymbol{k},\alpha -}), \boldsymbol{u}^{K,\alpha-}(\boldsymbol{\xi}^{\rm sur}_{\boldsymbol{k},\alpha+})\right] \Bigg\} g^{\rm sur}_{\boldsymbol{{k}}} . \end{align} Here the notation $\boldsymbol{u}^{K,\alpha+}$ and $\boldsymbol{u}^{K,\alpha-}$ refer to the state vectors evaluated for the right and left neighbouring cells of cell $K$ in the direction of axis $\alpha$, respectively. The state vector evaluations themselves are given by \begin{equation} \boldsymbol{u}^{K}(\boldsymbol{\xi}) = \sum_{l=1}^{N} \boldsymbol{w}_{l}^{K}\, P_l^{\rm 3D}(\boldsymbol{\xi}). \end{equation} Note that the prefactor $1/|K|$ in front of the surface integral terms in Eq.~(\ref{eq:time_stepping_final_analytical_equation}) turns into $1/8$ as a result of the change of integration variables mediated by Eq.~(\ref{eq:coord_trans}). The volume integral acquires a factor of $2/h$ from the coordinate transformation, thus the final prefactor becomes $1/4$. The numerical computation of the time derivative of the weights based on a current set of weights is in principle straightforward using Eq.~(\ref{eq:time_stepping_final_analytical_equation}), but evidently becomes more elaborate at high order, involving numerous sums per cell. In passing we note that instead of just counting the number of cells per dimension, both the storage effort and the numerical work needed are better measured in terms of the number of degrees of freedom per dimension. A fixed number of degrees of freedom (and thus storage space) can be achieved with different combinations of cell size and expansion order. The hope in using high-order methods is that they deliver better accuracy for a fixed number of degrees of freedom, or arguably even more importantly, better accuracy at fixed computational expense. 
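To make the structure of this weight update concrete, the following self-contained sketch (our own toy example, not the paper's GPU implementation) applies the same machinery to 1D scalar advection, $\partial_t u + a\,\partial_x u = 0$: scaled Legendre basis, Gaussian quadrature with $n_g = n+1$ nodes, an upwind interface flux in place of the Riemann solver, and a standard three-stage Runge-Kutta step. The right-hand side is the 1D analogue of the volume-minus-surface form above:

```python
# Toy 1D Discontinuous Galerkin solver for u_t + a u_x = 0 (periodic domain).
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

n, ncells, a = 2, 32, 1.0                     # Legendre degree (p = n+1), cells, speed
h = 1.0 / ncells
# scaled basis with int_{-1}^{1} Pt_l Pt_m dxi = 2 delta_lm (assumed scaling)
basis = [Legendre([0.0] * l + [np.sqrt(2.0 * l + 1.0)]) for l in range(n + 1)]
xg, wg = leggauss(n + 1)                      # n_g = n + 1 quadrature nodes
B = np.array([b(xg) for b in basis])          # basis at the nodes, (n+1, n_g)
dB = np.array([b.deriv()(xg) for b in basis]) # d/dxi of the basis at the nodes
BR = np.array([b(1.0) for b in basis])        # basis at the right cell edge
BL = np.array([b(-1.0) for b in basis])       # basis at the left cell edge

centers = (np.arange(ncells) + 0.5) * h
u0 = lambda x: np.sin(2.0 * np.pi * x)
# project the initial condition onto each cell (1D analogue of the projection)
w = 0.5 * (u0(centers[:, None] + 0.5 * h * xg[None, :]) * wg) @ B.T
w_init = w.copy()

def rhs(w):
    """h dw_l/dt = int Pt_l' f dxi - [Pt_l f*] evaluated at the two cell edges."""
    uR = w @ BR                               # state at each cell's right edge
    fstar = a * np.roll(uR, 1)                # upwind flux at each LEFT edge (a > 0)
    uq = w @ B                                # state at the quadrature nodes
    vol = (a * uq * wg) @ dB.T                # volume term
    sur = np.outer(np.roll(fstar, -1), BR) - np.outer(fstar, BL)
    return (vol - sur) / h

dt = 0.1 * h / (a * (2 * n + 1))              # conservative CFL choice
for _ in range(int(round(1.0 / dt))):         # advect for one full period
    k1 = w + dt * rhs(w)                      # three-stage SSP Runge-Kutta step
    k2 = 0.75 * w + 0.25 * (k1 + dt * rhs(k1))
    w = w / 3.0 + 2.0 / 3.0 * (k2 + dt * rhs(k2))

err = np.abs(w[:, 0] - w_init[:, 0]).mean()   # error in the cell averages
```

Advecting $u_0(x)=\sin(2\pi x)$ over one period on 32 cells at order $p=3$ leaves only a small error in the cell-average weights, illustrating the high-order accuracy discussed in the text.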
\subsection{Time integration} With \begin{equation} \boldsymbol{\dot{w}} \equiv \frac{{\rm d} \boldsymbol{w}^K_{l}}{{\rm d} t} \end{equation} in hand, standard ODE integration methods such as the broad class of Runge-Kutta integrations can be used to advance the solution forward in time. We follow standard procedure and employ strong stability preserving (SSP) Runge-Kutta integration rules as defined in \citet[Appendix D]{Schaal2015}. Note that when higher spatial order is used, we correspondingly use a higher order time integration method, such that the time integration errors do not start dominating over spatial discretization errors. The highest-order time integration method we use is a five-stage, fourth-order SSP RK method. \section{Treatment of viscous source terms} \label{SecViscosity} As we will discuss later on, our approach for capturing physical discontinuities (i.e.~shocks and contact discontinuities) in gas flows deviates from the classical slope-limiting approach and instead relies on a localized enabling of artificial viscosity. Furthermore, we will generalize our method to also account for physical dissipative terms, so that we arrive at a treatment of the full compressible Navier-Stokes equations. To introduce these methods, we start with a generalized set of Euler equations in 3D that are augmented with a diffusion term in all fluid variables, \begin{equation} \label{eq:navier-stokes-basic} \frac{\partial \boldsymbol{u}}{\partial t}+\nabla \cdot \boldsymbol{F}=\nabla \cdot(\epsilon \nabla \boldsymbol{u}), \end{equation} where $\boldsymbol{u}$ and $\boldsymbol{F}$ are the state vector (\ref{eq:state_vector_expansion}) and the flux matrix (\ref{eq:flux_matrix}), respectively.
The crucial difference between the normal Euler equations~(\ref{eq:euler_equation}) and this dissipative form is the introduction of a second derivative on the right-hand side, which modifies the character of the problem from being purely hyperbolic to a mixed hyperbolic-parabolic type, while retaining manifest conservation of mass, momentum and energy. This second derivative can, however, not be readily accommodated in our weight update equation obtained thus far. Recall that the reason we applied integration by parts and Gauss' theorem going from Eq.~(\ref{eq:euler_compact}) to Eq.~(\ref{eq:weakform}) was to eliminate the spatial derivative of the fluxes. If we apply the same approach to $\nabla \cdot(\epsilon \nabla \boldsymbol{u})$ we are still left with one $\nabla$-operator acting on the fluid state. \subsection{The uplifting approach} In a seminal paper, \citet{bassi_rebay_1997JCoPh.131..267B} suggested a particular treatment of this second derivative inspired by how one typically reduces second (or higher) order ordinary differential equations (ODEs) to first order ODEs. \citet{bassi_rebay_1997JCoPh.131..267B} reduce the order of Eq.~(\ref{eq:navier-stokes-basic}) by introducing the gradient of the state vector, $\boldsymbol{S} \equiv \nabla \boldsymbol{u}$, as an auxiliary set of unknowns. This yields a system of two partial differential equations: \begin{align} \label{eq:uiss} &\boldsymbol{S} - \nabla \boldsymbol{u} = 0, \\ \label{eq:navier-stokes-reduced-order} &\frac{\partial \boldsymbol{u}}{\partial t}+\nabla \cdot (\boldsymbol{F} - \epsilon\boldsymbol{S})=0. \end{align} Interestingly, if we consider a basis function expansion for $\boldsymbol{S}$ for each cell in the same way as done for the state vector, then the weak formulation of the first equation can be solved with the DG formalism using as input only the series expansion of the current state $\boldsymbol{u}$. This entails again an integration by parts that yields volume and surface integrations for each cell.
To compute the latter, one needs to adopt a surface state $\boldsymbol{u}^\star$ for potentially discontinuous jumps between $\boldsymbol{u}^+$ and $\boldsymbol{u}^-$ across the cell boundaries. \citet{bassi_rebay_1997JCoPh.131..267B} suggest using the arithmetic mean $\boldsymbol{u}^\star = [\boldsymbol{u}^- + \boldsymbol{u}^+]/2$ for this, so that obtaining the series expansion coefficients for $\boldsymbol{S}$ is straightforward. One can then proceed to solve Eq.~(\ref{eq:navier-stokes-reduced-order}) with a procedure that is largely identical to that for the Euler equations, except that the ordinary flux $\boldsymbol{F}$ is modified by subtracting the viscous flux $\boldsymbol{F}_{\rm visc} = \epsilon\boldsymbol{S}$. At cell interfaces one furthermore needs to define the viscous flux uniquely somehow, because $\boldsymbol{S}$ can in general still be discontinuous at cell interfaces. Here \citet{bassi_rebay_1997JCoPh.131..267B} suggest using the arithmetic mean again. A clear disadvantage of this procedure, which we initially implemented in our code, is that it significantly increases the computational cost and code complexity, because the computation of $\boldsymbol{S}$ involves the same set of volume and surface integrals that are characteristic of the DG approach, except that it actually has to be done {\em three times} as often as for $\boldsymbol{u}$ in 3D, once for each spatial dimension. A correspondingly large temporary storage to hold the expansion coefficients for $\boldsymbol{S}$ -- again three times the number needed to describe $\boldsymbol{u}$ itself -- is required as well. But more importantly, we have found that this method is prone to robustness problems, in particular if the initial conditions already contain large discontinuities across cells. In this case, the estimated derivatives inside a cell can get significantly distorted by the jumps seen on the outer sides of a cell.
In fact, if a large jump is present on one side of a cell, the derivative inferred on the other side of the cell will in general be modified as well, and can assume unphysically large values. Also, the derivative can become discontinuous there even if $\boldsymbol{u}$ happens to be continuously differentiable at this interface. In hindsight, this is perhaps not too surprising. For a continuous solution, there is arguably little if anything to be gained by solving Eq.~(\ref{eq:uiss}) with the DG algorithm if a polynomial basis is in use, because this must then return a solution identical to simply taking the derivatives of the basis functions (which are analytically known) and retaining the coefficients of the expansion. On the other hand, if there are discontinuities in $\boldsymbol{u}$ at the boundaries, the solution for $\boldsymbol{S}$ sensitively depends on the (to a certain degree arbitrary) choice made for resolving the jumps in the computation of the surface integrals for $\boldsymbol{S}$. In particular, there is no guarantee that using the arithmetic mean does not induce large oscillations or unphysical values for $\boldsymbol{S}$ in the interior of cells in certain cases. For all these reasons we have ultimately abandoned the \citet{bassi_rebay_1997JCoPh.131..267B} method, because it does not yield a robust solution for the diffusive part of the equations in all situations, and does not converge rapidly at high order either. Instead, we conjecture that the key to high order convergence of the diffusive part of the PDE system is the availability of a consistently defined continuous solution across cell boundaries. \subsection{Surface derivatives} For internal evaluations of the viscous flux (which in general may depend on $\boldsymbol{u}$ and $\nabla\boldsymbol{u}$) within a cell, we use the current basis function expansion of the solution in the cell and simply obtain the derivative by analytically differentiating the basis functions.
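In one dimension, this analytic differentiation of the basis expansion can be illustrated as follows (a minimal sketch using NumPy's standard Legendre utilities rather than our scaled basis; the coefficients $w_l$ are arbitrary example values):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Expansion u(xi) = sum_l w_l P_l(xi) on the reference interval [-1, 1];
# its derivative is obtained exactly by differentiating the basis.
w = np.array([1.0, 0.5, 0.25])           # example weight coefficients w_l
xi = np.linspace(-1.0, 1.0, 5)           # interior evaluation points

u = L.legval(xi, w)                      # u evaluated at the points
du_dxi = L.legval(xi, L.legder(w))       # exact derivative of the expansion

# Mapping to a physical cell of width h adds the Jacobian d(xi)/dx = 2/h:
h = 0.1
du_dx = du_dxi * 2.0 / h
```

Here `legder` returns the Legendre coefficients of the derivative series, so no numerical differencing is involved; for the example coefficients the derivative is $0.5 + 0.75\,\xi$.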
We argue that this is the most natural choice, as the same interior solution $\boldsymbol{u}$ is used for computing the ordinary hydrodynamical flux. The problem, however, lies with the surface terms of the viscous flux, as here neither the value of the state vector nor the gradient are uniquely defined, and unlike for the hyperbolic part of the equation, there is no suitable `Riemann solver' to define a robust flux for the diffusion part of the equation. Simply taking arithmetic averages of the two values that meet at the interface for the purpose of evaluating the surface viscous flux is neither accurate nor robust in practice. We address this problem by constructing a new {\em continuous} solution across a cell interface by considering the current solutions in the two adjacent cells of the interface, and projecting them onto a new joint polynomial expansion in a rectangular domain that covers part (or all) of the two adjacent cells. This interpolated solution minimizes the $L_2$ difference to the original (in general discontinuous) solutions in the two cells, but it is continuous and differentiable at the cell interface by construction. The quantities $\boldsymbol{u}$ and $\nabla\boldsymbol{u}$ needed for the evaluation of the viscous surface flux are then computed by evaluating the new basis function expansion at the interface itself. A sketch of the adopted procedure is shown in Figure~\ref{fig:sketch_surface_projection}. The two solutions in the two adjacent cells are given by \begin{equation} \boldsymbol{u}^{K^-}(\boldsymbol{x}) = \sum_{l=1}^{N} \boldsymbol{w}_{l}^{K^-}\, \phi_l^{K^-}(\boldsymbol{x}) \end{equation} and \begin{equation} \boldsymbol{u}^{K^+}(\boldsymbol{x}) = \sum_{l=1}^{N} \boldsymbol{w}_{l}^{K^+}\, \phi_l^{K^+}(\boldsymbol{x}). \end{equation} We now seek an interpolated solution in terms of a set of new basis functions $\psi_l^{K^\star}$ defined on the domain $K^\star$, i.e.
\begin{equation} \boldsymbol{\tilde{u}}^{K^\star}(\boldsymbol{x}) = \sum_{l=1}^{N^{\star}} \boldsymbol{q}_{l}^{K^\star}\, \psi_l^{K^\star}(\boldsymbol{x}). \end{equation} In order to avoid a degradation of accuracy if the solution is smooth, and to provide sufficient accuracy for the gradient, we adopt order $n+1$ for the polynomial basis of $\boldsymbol{\tilde{u}}^{K^\star}$. As for ordinary cells, the generalized index $l$ enumerates different combinations $[l_x(l), l_y(l), l_z(l)]$ of Legendre polynomials and their Cartesian products in the multidimensional case. If, for example, the two cells are oriented along the $x$-axis, we define \begin{equation} \label{eq:coord_trans2} \psi_{l}^{K^\star} (\boldsymbol{x}) = P_{l_x}(\xi_x^{K^\star})\cdot P_{l_y}(\xi^K_y) \cdot P_{l_z}(\xi^K_z), \end{equation} where now the mapping of the $x$-extension of the domain $K^\star$ onto the standard interval $[-1,1]$ is correspondingly modified as \begin{equation} \label{eq:coordinate_translation} \xi_x^{K^\star}=\frac{1}{f\,h}\left({x}-\frac{x_{c}^{K^-} + {x}_{c}^{K^+}}{2}\right), \end{equation} where $f$ is the fraction of overlap of each of the two cells (see Fig.~\ref{fig:sketch_surface_projection}). The coefficients $\boldsymbol{q}_{l}^{K^\star}$ can then be readily obtained by carrying out the projection integrals \begin{eqnarray} \boldsymbol{q}_{l}^{K^\star} & = & \frac{1}{|K^\star|}\int_{K^\star} \boldsymbol{u}(\boldsymbol{x})\; \psi_{l}^{K^\star}(\boldsymbol{x})\,{\rm d}V \\ & \hspace*{-1.7cm}= & \hspace*{-1cm}\frac{1}{|K^\star|}\sum_{m=1}^{N} \left[ \boldsymbol{w}_m^{K^-} \int_{K^-} \phi_m^{K^-} \psi_{l}^{K^\star} \, {\rm d}V + \boldsymbol{w}_m^{K^+} \int_{K^+} \phi_m^{K^+} \psi_{l}^{K^\star} \, {\rm d}V \right] . \nonumber \end{eqnarray} The projection is a linear operation, and the overlap integrals of the Legendre basis functions can be precomputed ahead of time. In fact, many evaluate to zero due to the orthogonality of our Legendre basis.
In particular, this is the case for the transverse basis functions if their order is not equal, so that the projection effectively becomes a sparse matrix operation that expresses the new expansion coefficients in the normal direction as a sum of one or several old expansion coefficients in the normal direction. This can be seen more explicitly by defining Legendre overlap integrals as \begin{eqnarray} A_{m,l}^- &=& \int_{-1}^{0} P_m(2 f x + 1) P_l(x) \,{\rm d}x,\\ A_{m,l}^+ &=& \int_{0}^{1} P_m(2 f x - 1) P_l(x) \,{\rm d}x. \end{eqnarray} The new coefficients can then be computed as \begin{equation} \boldsymbol{q}_{(l_x,l_y,l_z)}^{K^\star} = \frac{1}{2f}\sum_{m_x =0}^{l_x} \left[ A_{m_x,l_x}^- \boldsymbol{w}_{(m_x, l_y, l_z)}^{K^-} + A_{m_x,l_x}^+ \boldsymbol{w}_{(m_x, l_y, l_z)}^{K^+} \right]. \label{eqn:basisprojection} \end{equation} Note that for transverse dimensions, only the original Legendre polynomials contribute, hence the new coefficients are simply linear combinations of coefficients that differ only in the order of the Legendre polynomial in the $x$-direction. Also note that for the transverse dimensions, the highest Legendre orders $l_y$ and $l_z$ that are non-zero are the same as for the original coefficients, i.e.~the fact that we extend the order to $n+1$ becomes relevant only for the direction connecting the two cells. Another point to note is that the basis function projection can be carried out independently for the left and right side of an interface (corresponding to the first and second part of the sum in eqn.~\ref{eqn:basisprojection}), each yielding a partial result that can be used in turn to evaluate partial results for $\boldsymbol{\tilde{u}}$ and $\nabla\boldsymbol{\tilde{u}}$ at the interface. Adding up these partial results then yields the final interface state and interface gradient.
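The overlap integrals can be tabulated once and for all by 1D Gaussian quadrature. The following sketch does this for standard (unnormalized) Legendre polynomials, mapping the two halves of $K^\star$ onto the overlapped portions of $K^-$ and $K^+$ via $\xi^{K^-} = 2fx+1$ and $\xi^{K^+} = 2fx-1$ (illustrative code under these conventions, not our implementation):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def overlap_integrals(order, f):
    """Tabulate A^-_{m,l} and A^+_{m,l} up to the given order for an
    overlap fraction f, using Gauss-Legendre quadrature that is exact
    for the polynomial integrands involved."""
    x, g = leggauss(order + 2)
    xm = 0.5 * (x - 1.0)                  # quadrature nodes mapped to [-1, 0]
    xp = 0.5 * (x + 1.0)                  # quadrature nodes mapped to [0, 1]
    A_minus = np.zeros((order + 1, order + 1))
    A_plus = np.zeros((order + 1, order + 1))
    for m in range(order + 1):
        em = np.zeros(m + 1); em[m] = 1.0     # coefficient vector of P_m
        for l in range(order + 1):
            el = np.zeros(l + 1); el[l] = 1.0  # coefficient vector of P_l
            # A^-_{m,l} = int_{-1}^{0} P_m(2 f x + 1) P_l(x) dx
            A_minus[m, l] = 0.5 * np.sum(g * legval(2*f*xm + 1, em) * legval(xm, el))
            # A^+_{m,l} = int_{0}^{1} P_m(2 f x - 1) P_l(x) dx
            A_plus[m, l] = 0.5 * np.sum(g * legval(2*f*xp - 1, em) * legval(xp, el))
    return A_minus, A_plus
```

The factor $0.5$ is the Jacobian of mapping each half-interval onto the reference interval of the quadrature rule; since the tables depend only on the expansion order and $f$, they can be precomputed at start-up.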
This means that the scheme does not require sending the coefficients $\boldsymbol{w}^{K^\pm}$ to other processors in case $K^-$ and $K^+$ happen to be stored on different CPUs or GPUs; only ``left'' and ``right'' states for $\boldsymbol{\tilde{u}}$ and $\nabla \boldsymbol{\tilde{u}}$ need to be exchanged (which are the partial results that are then summed instead of taking their average), implying the same communication costs as, for example, methods that rely on taking arithmetic averages of the values obtained separately for the $K^-$ and $K^+$ sides. Finally, we choose $f=3/4$ for the size of the overlap region for $n\le 2$, but $f=1$ for higher order $n> 2$. For the choice of $f=3/4$, the estimate for the first derivative of the interpolated solution ends up being \begin{equation} \nabla \boldsymbol{\tilde{u}} = \frac{\boldsymbol{u}^{+} - \boldsymbol{u}^{-}}{h} \boldsymbol{n}, \end{equation} for piece-wise constant states, where $h$ is the cell spacing, $\boldsymbol{n}$ is the normal vector of the interface, and $\boldsymbol{u}^\pm$ are the average states in the two cells. This intuitively makes sense for low order. In particular, this will pick up a reasonable gradient even if one starts with piece-wise constant initial conditions, and even if $n=0$ (corresponding to DG order $p=1$) is used. We also obtain the expected convergence orders for diffusion problems (see below) with this choice when $n \le 2$ is used. On the other hand, we have found that for $n > 2$ it is necessary to include the full information available in the two adjacent cells by adopting $f=1$ in order to obtain the expected high-order convergence rates for diffusion problems. \subsection{The Navier-Stokes equations} \label{SecNavierStokesEquations} While we will use the above form of the dissipative terms for our treatment of artificial viscosity (see below), we also consider the full Navier-Stokes equations.
They are given by \begin{equation} \label{eq:navier-stokes-full} \frac{\partial \boldsymbol{u}}{\partial t}+\nabla \cdot \boldsymbol{F}=\nabla \cdot \boldsymbol{F}_{\rm NS}, \end{equation} where now the Navier-Stokes flux vector $\boldsymbol{F}_{\rm NS}$ is a non-linear function both of the state vector $\boldsymbol{u}$ and its gradient $\nabla\boldsymbol{u}$. We pick the canonical form \begin{equation} \boldsymbol{F}_{\rm NS} = \left( \begin{array}{c} 0 \\ \boldsymbol{\Pi} \\ \boldsymbol{v}\cdot\boldsymbol{\Pi} + \chi (\gamma - 1) \rho \nabla u\\ \end{array} \right) , \end{equation} with a viscous tensor \begin{equation} \boldsymbol{\Pi} = \nu \rho \left( \nabla{\boldsymbol{v}} + \nabla{\boldsymbol{v}}^{\rm T} - \frac{2}{3}(\nabla\cdot \boldsymbol{v})\,\boldsymbol{1}\right) \end{equation} that dissipates shear motions with shear viscosity $\nu$. We also include optional heat conduction with thermal diffusivity $\chi$, where $u$ in the energy flux denotes the specific internal energy. Note that the derivatives of the primitive variables can be easily obtained from the derivatives of the conservative variables when needed, for example $\nabla \boldsymbol{v} = [\nabla(\rho \boldsymbol{v}) - \boldsymbol{v} \nabla\rho]/\rho$, and one can thus express the velocity gradient $\nabla \boldsymbol{v}$ in terms of $\nabla\boldsymbol{u}$ and $\boldsymbol{u}$. \subsection{Passive tracer} \label{SecPassiveScalar} Finally, for later application to the Kelvin-Helmholtz problem, we follow \citet{Lecoanet2016} and add a passive, conserved tracer variable to the fluid equations. The density of the tracer is $c\rho$, with $c$ being its dimensionless relative concentration. It can be added as a further row to the state vector $\boldsymbol{u}$. Since the tracer is conserved and simply advected with the local velocity, the corresponding entry in the flux vector is $c\rho\boldsymbol{v}$. Further, we can also allow for a diffusion of the tracer with diffusivity $\eta$, by adding $\eta \rho \nabla c$ in the corresponding row of the Navier-Stokes flux vector.
The governing equation for the passive tracer dye is hence \begin{equation} \frac{\partial (c \rho)}{\partial t} +\nabla \cdot (c \rho\boldsymbol{v}) = \nabla \cdot (\eta \rho \nabla c). \label{eqn:passivetracer} \end{equation} \section{Shock capturing and oscillation control} \label{SecImplementation} \subsection{Artificial viscosity} High-order numerical methods are prone to oscillatory behaviour around sharp jumps of density or pressure. Such physical discontinuities arise naturally at shocks in supersonic fluid motion, and they are a ubiquitous phenomenon in astrophysical gas dynamics. In fact, the Euler equations have the interesting property that perfectly smooth initial conditions can evolve with time into states that feature real discontinuities. The physical dissipation that must happen in these jumps is implicitly dictated by the conservation laws, but discrete numerical methods may not always produce the required level of dissipation, such that post-shock oscillations are produced that are reminiscent of the Gibbs phenomenon in Fourier series expansions around jump discontinuities. Our DG code produces these kinds of oscillations with increasing prominence at higher and higher order when discontinuities are present. And once the oscillations appear, they do not necessarily get damped quickly because of the very low numerical dissipation of high-order DG. Shocks, in particular, seed new oscillations with time, because inside cells the smooth {\em inviscid} Euler equations are evolved -- in which there is no dissipation at all. Thus the entropy production required by shocks is simply not possible. Note that the oscillations are not only physically wrong, they can even cause negative density or pressure fluctuations in some cells, crashing the code. One approach to prevent this is the use of so-called slope limiters. In particular, the family of minmod slope limiters has been used with great success in second-order finite-volume methods.
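For reference, the minmod limiter reduces a cell's slope to the smallest-magnitude candidate when the backward-, central- and forward-difference slopes agree in sign, and sets it to zero otherwise. A generic sketch of this standard finite-volume practice (not part of our DG scheme; function names are ours):

```python
def minmod(a, b, c):
    """Classic minmod: smallest-magnitude argument if all share a sign, else 0."""
    if a > 0 and b > 0 and c > 0:
        return min(a, b, c)
    if a < 0 and b < 0 and c < 0:
        return max(a, b, c)
    return 0.0

def limited_slope(u_left, u_mid, u_right, h):
    """Limit a cell's reconstruction slope against one-sided differences."""
    return minmod((u_mid - u_left) / h,          # backward difference
                  (u_right - u_left) / (2.0 * h),  # central difference
                  (u_right - u_mid) / h)          # forward difference
```

At a local extremum the three candidate slopes disagree in sign, so the limited slope vanishes and the reconstruction falls back to a piece-wise constant state, which is exactly the diffusive behaviour discussed in the text.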
While their use in DG methods is possible, applying them in high-order settings by discarding the high-order expansion coefficients whenever the slope limiter kicks in \citep[see][]{Schaal2015, guillet_high_order_2019} defeats much of the point of going to high order in the first place. Constructing less aggressive high-order limiters that avoid this is a topic that has seen much effort in the literature, but arguably only limited success so far. In fact, the problem of coping with shocks in high-order DG is fundamentally an issue that still awaits a compelling and reasonably simple solution. Recent advanced treatments had to resort to replacing troubled cells with finite-volume solutions computed on small grid patches that are then blended with the DG solution \citep[e.g.][]{Zanotti2014, Markert:2021aa}. We here argue that this problem may actually be best addressed by resurrecting the old idea of artificial viscosity \citep{persson_peraire_2006}. In other inviscid hydrodynamical methods, in particular in the Lagrangian technique of smoothed particle hydrodynamics, it is evident and long accepted that artificial viscosity must be added to capture shocks. Because the conservation laws ultimately dictate the amount of entropy that needs to be created in shocks, the exact procedure for adding artificial viscosity is not overly critical. What is critical, however, is that there is a channel for dissipation and entropy production. It is also clear that shocks in DG can be captured in a sub-cell fashion only if the required dissipation is provided somehow, either through artificial viscosity that is ideally present only at the place of the shock front itself where it is really needed, or by literally capturing the shock by subjecting the ``troubled cell'' to a special procedure in which it is, for example, remapped to a grid of finite-volume cells.
\citet{persson_peraire_2006} suggested to use a discontinuity (or rather oscillation) sensor to detect the need for artificial viscosity in a given cell. For this, they proposed to measure the relative contribution of the highest-order Legendre basis functions in representing the state of the conserved fields in a cell. A solution of a smooth problem is expected to be dominated by the lower-order weight coefficients, and statistically the low-order weights should be much larger than their high-order counterparts. In contrast, for highly oscillatory solutions in a cell (which often are created as pathological side-effects of discontinuities), the high-order coefficients are more strongly expressed. We adopt the same discontinuity sensor as \citet{persson_peraire_2006}. For every cell $K$, we can calculate the conserved variables $\boldsymbol{u}(\boldsymbol{x})$ using either the full basis in the normal way, \begin{equation*} \boldsymbol{u}(\boldsymbol{x})=\sum_{l=1}^{N(p)} \boldsymbol{w}^K_{l} \phi_{l}, \end{equation*} or by omitting the highest-order basis functions that are not present at the next lower expansion order, as \begin{equation*} \boldsymbol{\hat{u}}(\boldsymbol{x})=\sum_{l=1}^{N(p-1)} \boldsymbol{w}^K_{l} \phi_{l}. \end{equation*} The discontinuity/oscillation sensor $S^{K}$ in cell $K$ can now be defined as \begin{equation} \label{eq:discontinuity_sensor} S^{K}=\frac{\int_K (u-\hat{u})^2 \,{\rm d}V }{\int_K u^2 \,{\rm d}V }, \end{equation} where we restrict ourselves to one component of the state vector, the density field. Note that due to the orthogonality of our basis functions, this can be readily evaluated as \begin{equation} \label{eq:discontinuity_sensor_discrete} S^{K}=\frac{\sum_{l=N(p-1)+1}^{N(p)} [w^K_l]^2}{\sum_{l=1}^{N(p)} [w^K_l]^2 } \end{equation} in terms of sums over the squared expansion coefficients.
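In code, the sensor is thus just a ratio of partial sums over the squared density weights of a cell; a minimal sketch (hypothetical helper, with the number of lower-order weights $N(p-1)$ passed explicitly):

```python
import numpy as np

def discontinuity_sensor(w_density, n_lower):
    """Persson & Peraire-style sensor: fraction of the squared L2 norm
    carried by the basis functions beyond the next-lower expansion order.

    w_density : all N(p) density weights of one cell
    n_lower   : N(p-1), the number of weights at the next-lower order
    """
    w2 = np.asarray(w_density)**2
    return w2[n_lower:].sum() / w2.sum()

# A smooth cell: weights decay quickly with order -> small S^K
S_smooth = discontinuity_sensor([1.0, 0.1, 0.01, 0.001], n_lower=3)
# An oscillatory cell: the highest-order weight is large -> S^K of order unity
S_trouble = discontinuity_sensor([1.0, 0.1, 0.01, 0.9], n_lower=3)
```

By construction the returned value lies in $[0,1]$, and the contrast between the two example cells is what makes the quantity usable as a trigger.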
While we have $0 \le S^{K} \le 1$, we expect $S^{K}$ to assume relatively small values even if significant oscillatory behaviour is already present in $K$, simply because the natural magnitude of the expansion coefficients declines rapidly with order. \citet{persson_peraire_2006} argue that the coefficients should scale as $1/p^2$ in analogy with the scaling of Fourier coefficients in 1D, so that typical values of $S^{K}$ for oscillatory solutions may scale as $1/p^4$. Our tests indicate a somewhat weaker scaling, however: for oscillatory solutions developing from identical initial conditions, the sensor values in troubled cells scale approximately as $S^K \sim 1/p^2$ as a function of order. In the approach of \citet{persson_peraire_2006}, artificial viscosity is invoked in cells once their $S^K$ value exceeds a threshold value, above which it is ramped up smoothly as a function of $S^K$ to a predefined maximum value. While this approach shows some success in controlling shocks in DG, it is problematic that strong oscillations need to be present in the first place {\em before} the artificial viscosity is injected to damp them. In a sense, some damage must already have happened before the fix is applied. For capturing shocks we therefore argue that it makes more sense to resort to a physical shock sensor which detects rapid, non-adiabatic compressions in which dissipation should occur. To this end, we propose to adapt ideas widely used in the SPH literature \citep{Morris1997, Cullen2010}, namely to consider a time-dependent artificial viscosity field that is integrated in time using suitable source and sink functions.
Adopting a dimensionless viscosity strength $\alpha(\boldsymbol{x}, t)$, we propose the evolutionary equation \begin{equation} \frac{\partial \alpha}{\partial t} = \dot\alpha_{\rm shock} + \dot\alpha_{\rm wiggles} -\frac{\alpha}{\tau} \label{eqn:timedependentviscosity} \end{equation} for steering the spatially and temporally variable viscosity. For the moment we use a simple shock sensor $\dot\alpha_{\rm shock} = f_v \max(0, -\boldsymbol{\nabla}\cdot \boldsymbol{v})$ based on detecting compression, where $f_v\sim 1.0$ can be modified to influence how rapidly the viscosity should increase upon strong compression. In the absence of sources, the viscosity decays exponentially on a timescale \begin{equation} \tau = f_\tau \frac{h}{p\,c_s}, \end{equation} where $h/p$ is the expected effective spatial resolution at order $p$, $c_s$ is the local sound speed, and $f_\tau \sim 0.5$ is a user-controlled parameter for setting how rapidly the viscosity decays again after a shock transition. Finally, the term $\dot\alpha_{\rm wiggles}$ in Equation~(\ref{eqn:timedependentviscosity}) is a further source term added to address the occurrence of oscillatory behaviour away from shocks. In fact, such behaviour is typically seeded directly ahead of strong shocks, for example when the high-order polynomials in a cell with a shock trigger oscillations in the DG cell directly ahead of the shock through coupling at the interface. Another typical situation where oscillations can occur are sharp, moving contact discontinuities. Here the shock sensor would not be effective in supplying the needed viscosity as there is no shock in the first place. We address this problem by considering the {\em rate of change} of the oscillation sensor $S^K$ as a source for viscosity, in the form \begin{equation} \dot\alpha_{\rm wiggles} = f_w \max\left(0, \frac{{\rm d} \log S^K}{{\rm d}t} \right), \end{equation} for $S^K > S_{\rm onset}$, otherwise $\dot\alpha_{\rm wiggles} = 0$.
When ${\rm d} \log S^K/{{\rm d}t}$ is positive and large, oscillatory behaviour is about to grow and the cell is on its way to becoming a troubled cell, indicating that this is best prevented with local viscosity. In this way, oscillatory solutions can be controlled much more effectively than by waiting until they have already reached a substantial size. It is nevertheless prudent to restrict the action of this viscosity trigger to cells that have $S^K$ above a minimum value $S_\textrm{onset}$, otherwise the code would try to suppress even tiny wiggles, which would invariably lead to very viscous behaviour. In practice, we set $S_{\rm onset} = 10^{-3}/p^2$, and we compute ${\rm d} \log S^K/{{\rm d}t}$ based on the time derivatives of the weights of the previous timestep. We add $\alpha$ as a further field component to our state vector $\boldsymbol{u}$, meaning that it is spatially variable and is expanded in our set of basis functions. For the moment, the temporal variation of the expansion coefficients is determined only by the source functions, without some kind of flux redistributing $\alpha$ spatially, although this could in principle be added if desired. Only the shock sensor source function is actually variable within a cell, whereas our oscillation sensor affects the viscosity throughout a cell. Finally, the actual viscosity applied in the viscous flux of Eqn.~(\ref{eq:navier-stokes-basic}) is parameterized as \begin{equation} \epsilon = \alpha c_s \frac{h}{p}, \end{equation} and we impose a maximum allowed value of $\alpha_{\rm max} = 1$, primarily as a means to prevent overstepping the von Neumann timestep constraint for explicit integration of the diffusion equation, which would cause immediate numerical instability.
Since our timestep obeys the Courant condition, this fortunately does not imply a significant restriction for effectively applying the artificial viscosity scheme, but it imposes an upper bound that can be used safely without making the time integration unstable. We have found that the above parameterization works quite reliably, injecting viscosity only at discontinuities and when spurious oscillations need to be suppressed, while at the same time not smoothing out solutions excessively. Figure~\ref{fig:shockzoom_viscosity} shows an example for a Mach number $\mathcal{M} = 3$ shock that is incident from the left on gas with unit density and unit pressure, and adiabatic index $\gamma=5/3$. The simulation has been computed at order $p=10$, and at the displayed time, the shock position should be at $x=0.5$, for a mesh resolution of $h=1.0/21$. We show our DG result as a thick blue line, and also give the viscosity field $\alpha(x)$ as a red line. Clearly, the shock is captured at a fraction of the cell size, with negligible ringing in the pre- and post-shock regions. This is achieved thanks to the artificial viscosity, which peaks close to the shock centre, augmented by additional weaker viscosity in the cell ahead of the shock, which would otherwise show significant oscillations as well. This becomes clear when looking at the solution without artificial viscosity, which is included as a grey line in the background. The blue circles in Fig.~\ref{fig:shockzoom_viscosity} mark the places where the solution has reached 20 and 80 percent of the height of the shock's density jump. We can operationally define the difference in the corresponding $x$-coordinates as the width $\Delta x_\textrm{shock}$ with which the shock is numerically resolved. In Figure~\ref{fig:shockwidth_viscosity} we show measurements of the shock width for the same set-up, except for varying the employed order $p$.
We see that the shock width declines with higher order, accurately following the desired relationship $\Delta x_\textrm{shock} \propto 1/p$, except for the lowest order $p=1$, which deviates towards a broader width compared to the general trend. The importance of this result for the DG approach can hardly be overstated, given that it has been a nagging problem for decades to reliably capture shocks at sub-cell resolution in DG without having to throw away much of the higher-resolution information. The result of Figure~\ref{fig:shockwidth_viscosity} essentially implies that shocks are resolved with the same width for a fixed number of degrees of freedom, independent of the employed order $p$. Whereas using higher order at a fixed number of degrees of freedom thus does not provide much of an advantage for making shocks thinner compared to using more cells, it at least does not degrade the solution. But smooth parts of a solution can then still benefit from the use of higher order. \subsection{Positivity limiter} \label{sec:positivity_limiter} With the artificial viscosity approach described above we intend to introduce the necessary numerical viscosity where needed, such that slope limiting becomes obsolete. However, to further increase the robustness of our code, it is desirable that it also runs stably if the specified artificial viscosity is too weak or absent altogether, or if its strength is perhaps locally not sufficient for some reason in a particularly challenging flow situation. To prevent a breakdown of the time evolution in this case, we consider an optional positivity limiter following \citet{Zhang2010} and \citet{Schaal2015}. This can be viewed as a kind of last line of defense against the occurrence of oscillations in a solution that ventures into the regime of unphysical values, such as negative density or pressure.
The latter can happen even for arbitrarily small timesteps, especially when higher order methods are used, where such robustness problems tend to be more acute. Finite-element and finite-volume hydrodynamical codes typically employ procedures such as slope limiters to cope with these situations; that is, they locally reduce the order of the scheme (effectively making it more diffusive) by discarding high-order information. A similar approach is followed by the positivity limiter described here, which is based on \cite{Schaal2015}, with an important difference in how we select the evaluation points. We stress, however, that the positivity limiter is not designed to prevent oscillations, only to reduce them to a point that still allows the calculation to proceed. For a given cell, we first determine the average density $\overline{\rho}$ in the cell, which is simply given by the 0-th order expansion coefficient for the density field of the given cell, and we likewise determine the average pressure $\overline{p}$ of the cell. If either $\overline{\rho}$ or $\overline{p}$ is negative, a code crash is unavoidable. Otherwise, we define a lowest permissible density $\rho_\textrm{bottom} = 10^{-6} \bar{\rho}$. Next, we consider the full set of quadrature evaluation points $\{\boldsymbol{x}_i\}$ relevant for the cell, which is the union of the points used for internal volume integrations and the points used for surface integrals on the outer boundaries of the cell. We then determine the minimum density $\rho_{\rm min}$ occurring for the field expansions among these points. In case $\rho_\textrm{min} < \rho_\textrm{bottom}$, which includes the possibility that $\rho_{\rm min}$ is negative, we calculate a reduction factor $f = \left( \bar{\rho} - \rho_\textrm{bottom} \right) / \left( \bar{\rho} - \rho_\textrm{min} \right)$ and replace all higher order weights of the cell with \begin{equation} \boldsymbol{w}'^K_{l} = f\, \boldsymbol{w}^K_{l} \;\;\mbox{for}\;\;l > 1.
\end{equation} This limits the minimum density appearing in any of the discrete calculations to $\rho_\textrm{bottom}$. By applying the correction factor $f$ to all fields and not just the density, we avoid potentially amplifying relative fluctuations in the velocity and pressure fields. We proceed similarly for limiting pressure oscillations, except that here no simple reduction factor can be computed to ensure that $p_\textrm{min}$ stays above $p_\textrm{bottom}$, due to the non-linear dependence of the pressure on the energy, momentum and density fields. Instead, we simply adopt $f=0.5$ and repeatedly apply the pressure limiter until $p_{\rm min} \ge p_\textrm{bottom}$. \section{Basic tests} \label{SecBasicTests} In this section we consider a set of basic test problems that establish the accuracy of our new code both for smooth problems, as well as for problems containing strong discontinuities such as shocks or contact discontinuities. We begin with a smooth hydrodynamic problem that is suitable for verifying code accuracy for the inviscid Euler equations. We then turn to testing the diffusion solver of the code, as an indirect means to test the ability of our approach to stably and accurately solve the viscous diffusion inherent in the Navier-Stokes equations. We then consider shocks and the supersonic advection of a discontinuous top-hat profile to verify the stability of our high-order approach when dealing with such flow features. Applications to Kelvin-Helmholtz instabilities and driven turbulence are treated in separate sections. \subsection{Isentropic vortex} The isentropic vortex problem of \citet{Yee1999, Yee2000} is a time-independent smooth vortex flow, making it a particularly useful test for the accuracy of higher-order methods, because they should reach their theoretically optimal spatial convergence order if everything is working well \citep[e.g.][]{Schaal2015, Pakmor2016}.
We follow here the original setup used in \citet{Yee1999} by employing a domain with extension $[-5, 5]^2$ in 2D and an initial state given by: \begin{eqnarray} \label{eq:yee_vortex_vx} v_x(\boldsymbol{r}) & = & -\frac{\beta y}{2\pi} \exp\left(\frac{1-r^2}{2}\right) \\ v_y(\boldsymbol{r}) & = & \frac{\beta x}{2\pi} \exp\left(\frac{1-r^2}{2}\right)\\ u(\boldsymbol{r}) & = & 1 - \frac{\beta^2 }{8\gamma \pi^2} \exp\left({1-r^2}\right) \\ \rho(\boldsymbol{r}) & = & [ (\gamma-1) u(\boldsymbol{r}) ]^{\frac{1}{\gamma-1}} \end{eqnarray} where we choose $\gamma = 1.4$, and $\beta =5 $. We evolve the vortex with different DG expansion orders $p$ and different mesh resolutions $N_{\rm grid}^2$ until time $t=10$, and then measure the resulting L1 approximation error of the numerical result for the density field relative to the analytic solution (which is identical to the initial conditions). In order to make the actual measurement of $L1$ independent of discretization effects, we use Gaussian quadrature of order $p+2$ for evaluating the volume integral appearing in Eq.~(\ref{eq:l1_norm}). Likewise, we use this elevated order when projecting the initial conditions onto the discrete realization of DG weights of our mesh. In Figure~\ref{fig:isentropic_vortex_convergence} we show measurements of the L1 error as a function of grid resolution $N_{\rm grid}$, for different expansion orders from $p=1$ to $p=8$. The left panel shows that the errors decrease as power laws with spatial resolution for fixed $p$, closely following the expected convergence order $L1\propto N_{\rm grid}^{-p}$ in all cases (except for $p=1$, which exhibits slightly worse behavior -- but this order is never used in practice because of its dismal convergence properties). Interestingly, the data also show that for a given grid resolution, the L1 error goes down {\em exponentially} with the order of the scheme.
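The initial state above translates directly into code. The following Python sketch (with hypothetical function names, not our actual implementation) evaluates the primitive variables and lets one verify that the vortex is indeed stationary, since the pressure gradient balances the centripetal acceleration:

```python
import numpy as np

GAMMA, BETA = 1.4, 5.0   # adiabatic index and vortex strength as in the text

def yee_vortex(x, y):
    """Primitive variables (rho, vx, vy, P) of the isentropic vortex."""
    r2 = x * x + y * y
    f = np.exp(0.5 * (1.0 - r2))
    vx = -BETA * y / (2 * np.pi) * f
    vy = BETA * x / (2 * np.pi) * f
    u = 1.0 - BETA**2 / (8 * GAMMA * np.pi**2) * np.exp(1.0 - r2)   # specific internal energy
    rho = ((GAMMA - 1.0) * u) ** (1.0 / (GAMMA - 1.0))
    P = (GAMMA - 1.0) * rho * u          # ideal gas: P = (gamma - 1) rho u
    return rho, vx, vy, P
```

A finite-difference check of ${\rm d}P/{\rm d}r = \rho v_\phi^2/r$ at a sample radius confirms the force balance of this steady flow.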
The right panel of Fig.~\ref{fig:isentropic_vortex_convergence} shows the L1 error in a log-linear plot as a function of order $p$, so that exponential convergence manifests as a straight decline. This particularly rapid decline of the error with $p$ for smooth problems makes it intuitively clear that it can be advantageous to go to higher order if the problem at hand is free of true physical discontinuities. \subsection{Diffusion of a Gaussian pulse} To test our procedures for simulating the diffusion part of our equations, in particular our treatment for estimating surface gradients at interfaces of cells, we first consider the diffusion of a Gaussian pulse, with otherwise stationary gas properties. For simplicity, we consider gas at rest and with uniform density and pressure, and we follow the evolution of a small Gaussian concentration of a passive tracer dye under the action of a constant diffusivity. For definiteness, we consider a tracer concentration $c(\vec{x}, t)$ given by \begin{equation} c(\vec{x}, t) = c_b + \sum_{\vec j} \frac{c_g}{2\pi \sigma^2} \exp\left(-\frac{(\vec{x} - \vec{j})^2}{2\sigma^2}\right), \;\;\;\mbox{with}\; \sigma^2 = 2\eta t, \label{eqn:diffusion_spike} \end{equation} placed in a unit domain $[-0.5, 0.5]^2 $ in 2D with periodic boundary conditions. Here the sum over $\vec{j}$ effectively accounts for a Cartesian grid of Gaussian pulses spaced one box size apart in all dimensions, to properly take care of the periodic boundary conditions. If we adopt a fixed diffusivity $\eta$ and initialize $c(\vec{x}, t)$ at some time $t_0$, then the analytic solution of equation~(\ref{eqn:passivetracer}) tells us that eqn.~(\ref{eqn:diffusion_spike}) will also describe the dye concentration at all subsequent times $t > t_0$.
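This analytic reference solution, including its periodic images, is straightforward to evaluate. The sketch below (a minimal Python version with our own parameter names, using the concrete values adopted for this test) can be checked against the diffusion equation $\partial c/\partial t = \eta \nabla^2 c$, which every Gaussian image satisfies individually:

```python
import numpy as np

ETA, C_B, C_G = 1.0 / 128.0, 0.1, 1.0   # diffusivity and dye amplitudes used in this test

def dye(x, y, t, n_img=2):
    """Periodic analytic solution: central Gaussian pulse plus image pulses
    on the neighbouring box copies (|j| <= n_img suffices here)."""
    s2 = 2.0 * ETA * t                   # sigma^2 = 2 eta t
    c = C_B
    for jx in range(-n_img, n_img + 1):
        for jy in range(-n_img, n_img + 1):
            r2 = (x - jx)**2 + (y - jy)**2
            c += C_G / (2 * np.pi * s2) * np.exp(-r2 / (2.0 * s2))
    return c
```

Finite-difference derivatives at an arbitrary point confirm that the expression solves the diffusion equation.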
For definiteness, we choose $\eta = 1/128$, $c_b = 1/10$, $c_g=1$, and $t_0=1$, and examine the numerically obtained results at time $t = t_0 + 3 = 4$ by computing their L1 error norm with respect to the analytic solution. In the top panel of Fig.~\ref{fig:gaussian_diffusion}, we show the convergence of this diffusion process as a function of the number of grid cells used, for the first five DG expansion orders. Reassuringly, the L1 error norm decays as a power law with the cell size, in each case with the expected theoretical optimum $L1 \propto N_{\rm cells}^{-p}$. This shows that our treatment of the surface derivatives is not only stable and robust, but is also able to deliver high-order convergence. The bottom panel of Figure~\ref{fig:gaussian_diffusion} shows that this also manifests itself in an exponential convergence as a function of DG expansion order when the mesh resolution is kept fixed. For this result, we adopted $N_{\rm cells}=8$ and went all the way to 10-th order. While these results do not directly prove that our implementation is able to solve the full Navier-Stokes equations at high order, they represent an encouraging prerequisite. Also, we note that both the version without viscous source terms (i.e.~the Euler equations) and the viscous term treated in isolation converge at high order. We will later on compare to a literature result for the Kelvin-Helmholtz instability in a fully viscous simulation to back this up further and to test a situation where the full Navier-Stokes equations are used. \subsection{Double blast wave} To test the ability of our DG approach to cope with strong shocks, particularly at high order, we look at the classic double blast wave problem of \citet{Woodward1984}. The initial conditions are defined in the one-dimensional domain $[0,1]$ for a gas of unit density and adiabatic index $\gamma = 7/5$, which is initially at rest.
By prescribing two regions of very high pressure, $P=1000$ for $x < 0.1 $, and $P=100$ for $x > 0.9$, in an otherwise low-pressure $P=0.1$ background, the time evolution is characterized by the launching of very strong shock and rarefaction waves that collide and interact in complicated ways. The original problem is defined for reflective boundary conditions. As these are not yet implemented in the current version of our code, we extend the domain to $[0,2]$ and mirror the ICs in the region $1 < x < 2$, so that the problem can alternatively be computed with periodic boundary conditions, although this means effectively computing the problem twice. Because of the difficulty of this test for shock-capturing approaches, it has often been studied in previous work to examine code accuracy and robustness \citep[e.g.][]{Stone2008, Springel2010}. In order to highlight differences due to different DG orders, we have run deliberately low-resolution realizations of the problem, using 100 cells of equal size within the region $[0,1]$ (i.e.~really 200 cells in total to also represent the mirrored region). We have then evolved the initial conditions with order $p=2$, $p=4$, or $p=8$. Furthermore, we examine a run done with four times as many cells, carried out at order $p=2$. This latter simulation has the same number of degrees of freedom as the $p=8$ simulation, and thus should have a similar effective spatial resolution. For comparison purposes, we use a simulation carried out with 10000 cells at order $p=2$, which can be taken as a close approximation to the converged solution. All simulations were run with our artificial viscosity implementation using our default settings for the method (which do not depend on the order $p$). In Figure~\ref{fig:double_blast_wave}, we show the density profile at the time $t = 0.038$, as done in many previous works, based on our 100 cell runs.
Clearly, the shock fronts and contact discontinuities of the problem are quite heavily smoothed out for the $p=2$ run with 100 cells, due to the low resolution of this setup. However, the quality of the result can be progressively improved by going to higher order while keeping the number of cells fixed, as seen by the results for $p=4$ and $p=8$. This is in itself important: it shows that even problems dominated by very strong physical discontinuities are better treated by our code when higher order is used. The additional information this brings is not eliminated by slope-limiting in our approach, thanks to the sub-cell shock capturing allowed by our artificial viscosity technique. Finally, in Figure~\ref{fig:doubleblastwave_k1_nc_200_vs_800} we compare the $p=8$, 100 cell result to the $p=2$ result using 400 cells. We find essentially the same quality of the results, which is another important finding. This demonstrates that, to first order, only the number of degrees of freedom per dimension is important for determining the ability of our DG code to resolve shocks. Putting degrees of freedom into higher expansion order instead of into a larger number of cells is thus not problematic for representing shocks. At the same time, it also does not bring a clear advantage for such flow structures. This is because shocks are ultimately always broadened to at least the spatial resolution limit. Real discontinuities therefore only converge with 1st order in spatial resolution, and high-order DG schemes do not provide a magic solution for this limitation, as their effective resolution is set by the degrees of freedom. Still, as our results show, DG can be straightforwardly applied to problems with strong shocks using our artificial viscosity formulation.
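For completeness, the mirrored periodic initial conditions described above can be generated as in the following sketch (illustrative only; the array layout and function name are our own invention, not the actual code):

```python
import numpy as np

def double_blast_ics(n_cells=200):
    """Mirrored double blast wave setup on [0, 2] with periodic boundaries:
    the original problem on [0, 1] is reflected into (1, 2]."""
    x = (np.arange(n_cells) + 0.5) * 2.0 / n_cells   # cell centres on [0, 2]
    xm = np.where(x <= 1.0, x, 2.0 - x)              # mirror coordinate in [0, 1]
    rho = np.ones(n_cells)                            # unit density, gas at rest
    v = np.zeros(n_cells)
    P = np.full(n_cells, 0.1)                         # low-pressure background
    P[xm < 0.1] = 1000.0                              # left high-pressure region
    P[xm > 0.9] = 100.0                               # right high-pressure region
    return x, rho, v, P
```

By construction the pressure field is symmetric about $x=1$, so the periodic run reproduces the reflective-wall problem twice.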
When there is a mixture of smooth regions and shocks in a flow, the smooth parts can still benefit from the higher order accuracy, while the shocks are rendered with approximately the same accuracy as with a second-order method using the same number of degrees of freedom. \subsection{Advection of a top-hat pulse} Next we consider the problem of super-sonically advecting a strong contact discontinuity in the form of an overdense square that is in pressure equilibrium with the background. This tests the ability of our code to cope with a physical discontinuity that is not self-steepening, unlike a shock, i.e.~once the contact discontinuity is (excessively) broadened by numerical viscosity, it will invariably retain the acquired thickness. In fact, the advection errors inherently present in any Eulerian mesh-based numerical method will continue to slowly broaden a moving contact discontinuity with time, in contrast to Lagrangian methods, which can in principle cope with this situation free of any error. A problem that starts with a perfectly sharp initial discontinuity furthermore tests the ability of our DG approach to cope with a situation where strong oscillatory behaviour is sourced in the higher order terms, an effect that is especially strong if the motion is supersonic and the system's state is characterized by large discontinuities. Here any naive implementation that does not include any type of limiter or artificial viscosity terms will invariably crash due to the occurrence of unphysical values for density and/or pressure. The square advection problem is thus also a sensitive stability test for our high-order Discontinuous Galerkin method. In our test we follow the setup of \citet{Schaal2015}, but see also \citet{Hopkins2015} for a discussion of results obtained with particle-based Lagrangian codes. In 2D, we consider a domain $[0,1]^2$ with pressure $P=2.5$ and $\gamma=7/5$.
The density inside the central square of side-length $0.5$ is set to $\rho=4$, and outside of it to $\rho=1$. A velocity of $v_x=100$ is added to all the gas, and in the $y$-direction we add $v_y=50$. We simulate until $t=1.0$, at which point the pulse has been advected 100 times through the periodic box in the $x$-direction and half that in the $y$-direction, and it should have returned exactly to where it started. We have also run the same test problem generalized to 3D, with an additional velocity of $v_z = -25$ in the $z$-direction, and also in 1D, where only the motion in the $x$-direction is present. In general, the multi-dimensional tests behave very similarly to the one-dimensional tests, with the size of the overall error being determined by the largest velocity. For simplicity, we therefore restrict ourselves in the following to reporting explicit results for the 1D case only. In Figure~\ref{fig:square_advection_shape}, we show the density profile of the pulse at $t=1.0$ when 64 cells are used, for different DG orders $p$. A second-order accurate method, which is equivalent to or slightly better than common second-order accurate finite volume methods \citep[see also][]{Schaal2015}, has already broadened the profile substantially at this time, turning it into something close to a Gaussian. Already order $p=3$ does substantially better, however, while $p=10$ is able to retain the profile very sharply, albeit with a small amount of ringing right at the discontinuities. Similarly to our results for shocks, we thus find that our code is able to make good use of higher order terms if they are available in the expansion basis. Applying simple limiting schemes such as minmod in the high-order case is in contrast prone to lose much of the benefit of high order when strong discontinuities are present in the simulation, simply because these schemes tend to discard subcell information beyond linear slopes.
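Because the advection velocities are integer multiples of the box size per unit time, the exact solution at $t=1$ coincides with the initial state. A small Python sketch of the 1D version (hypothetical helper names, not our actual code) makes this explicit:

```python
import numpy as np

def top_hat(x):
    """1D top-hat density: rho = 4 inside the central interval of length 0.5."""
    return np.where(np.abs(x - 0.5) < 0.25, 4.0, 1.0)

def advected(x, t, v=100.0, L=1.0):
    """Exact solution: the profile translated by v*t on the periodic domain."""
    return top_hat((x - v * t) % L)
```

Any broadening of the numerical profile at $t=1$ relative to `top_hat(x)` is therefore a pure advection error of the scheme.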
To look more quantitatively at the errors, we show in Figure~\ref{fig:square_advection_L1} the L1 error norm of the density field as a function of time, for all orders from $p=1$ to $p=10$. We see that the lowest order does very poorly on this problem, due to its large advection errors. In fact, $p=1$ loses the profile completely after about 10 box crossings, yielding a uniform average density throughout the whole box. When one uses higher order, both the absolute error at any given time and the rate of residual growth of the error with time drop progressively. The latter can be described by a power law $L1 \propto t^n$, with a slope $n$ that we measure to be just 0.028 for $p=10$, while it is still 0.335 for a second-order, $p=2$ method. The longer a simulation runs, the larger the accuracy advantage of a high-order method over lower-order methods thus becomes. \section{Kelvin-Helmholtz instabilities} \label{SecKH} Simulations of the Kelvin-Helmholtz~(KH) instability have become a particularly popular test of hydrodynamical codes \citep[e.g.][]{Price2008, Springel2010, Junk2010, Valcke2010, Cha2010, Berlok2019, Tricco2019, Borrow2022}, arguably initiated by an influential comparison of SPH and Eulerian codes by \citet{Agertz2007}, where significant discrepancies in the growth of the perturbations between different numerical methods had been identified. One principal complication, however, is that for initial conditions with an arbitrarily sharp discontinuity the non-linear outcome is fundamentally ill-posed at the discretized level \citep[e.g.][]{Robertson2010, McNally2012}, because for an ideal gas the shortest wavelengths have the fastest growth rates, so that one can easily end up with KH billows that are seeded by numerical noise at the resolution limit, rendering a comparison of the non-linear evolution between different methods unreliable.
Furthermore, in the inviscid case, the non-linear outcome is fundamentally dependent on the numerical resolution so a converged solution does not even exist. \citet{Lecoanet2016} have therefore argued that using smooth initial conditions across the whole domain combined with an explicit physical viscosity is a better choice, because this allows in principle converged numerical solutions to be reached. We follow their approach here, and in particular compare to the reference solution determined by \citet{Lecoanet2016} using the spectral code {\small DEDALUS} \citep{Burns2020} at high resolution. Specifically, following \citet{Lecoanet2016} we adopt as initial conditions a shear flow with a smoothed density and velocity transition: \begin{eqnarray} \begin{aligned} \rho(x,y) &=1+\frac{\Delta \rho}{\rho_{0}} \frac{1}{2}\left[\tanh \left(\frac{y-y_{1}}{a}\right)-\tanh \left(\frac{y-y_{2}}{a}\right)\right], \\ v_{x}(x,y) &= u_{\text {flow }} \left[\tanh \left(\frac{y-y_{1}}{a}\right)-\tanh \left(\frac{y-y_{2}}{a}\right)-1\right], \end{aligned} \end{eqnarray} with $u_\textrm{flow} = 1$, $a = 0.05$, $y_1 = 0.5$ and $y_2 = 1.5$ in a periodic domain with side length $L=2$. This is perturbed with a small velocity component in the $y$-direction to seed a KH billow on large scales: \begin{equation} v_{y}(x,y) =A \sin (2 \pi x) \left[\exp \left(-\frac{\left(y-y_{1}\right)^{2}}{\sigma^{2}}\right)+\exp \left(-\frac{\left(y-y_{2}\right)^{2}}{\sigma^{2}}\right)\right] , \end{equation} where $A = 0.01$ and $\sigma = 0.2$ is chosen. The pressure is initialized everywhere to a constant value, $P(x,y) =P_{0}$, with $P_0 = 10$. With these choices, the flow stays in the subsonic regime with a Mach number $\mathcal{M} \sim 0.25$. The free parameter $\Delta \rho / \rho_0$ describes the presence or absence of a density ``jump'' across the two fluid phases that stream past each other. 
By adding a passive tracer field \begin{equation} \begin{aligned} c(x,y) &=\frac{1}{2}\left[\tanh \left(\frac{y-y_{2}}{a}\right)-\tanh \left(\frac{y-y_{1}}{a}\right)+2\right] \end{aligned} \end{equation} to the initial conditions, we can also easily study the KH instability for the case of a vanishing density jump. In fact, we shall focus on the case $\Delta \rho / \rho_0=0$ here, as it is free of the particularly subtle inner vortex instability in the late non-linear evolution of the KH problem \citep{Lecoanet2016}, which further complicates the comparison of different codes. The only part where our simulation setup differs from \citet{Lecoanet2016} is the size of the simulation domain. While they computed a rectangular periodic domain with sides $L_x = 1$ and $L_y = 2$, we simulate a square box with $L_x = L_y = 2$, effectively replicating the problem in the $x$-direction twice, because our current implementation so far only supports square-shaped meshes. This increases the computational cost by a factor of two compared to the setup of \citet{Lecoanet2016}. To realize the above initial conditions, we evaluate them within each cell of our chosen mesh at multiple quadrature points in order to project them onto our DG basis. We perform this initial projection using a Gauss integration that is 2 orders higher than that employed in the run itself. This ensures that integration errors from the projection of the initial conditions onto our DG basis are subdominant compared to the errors incurred by the time evolution, and are thus unimportant. We choose identical values for the shear viscosity $\nu$, thermal diffusivity $\chi$, and dye diffusivity $\eta$. Below, we mostly focus on discussing results for a Reynolds number ${\rm Re} = 10^5$, for which we set $\nu = \chi = \eta = 2 u_{\rm flow} / {\rm Re} = 2 \times 10^{-5}$.
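The smooth initial conditions above, including the passive tracer, translate directly into code. This Python sketch (the constant names are ours) also lets one verify the outer/inner stream velocities of $\mp u_{\rm flow}$ and the subsonic Mach number quoted above:

```python
import numpy as np

U0, A0, Y1, Y2 = 1.0, 0.05, 0.5, 1.5    # u_flow, a, y1, y2 from the text
AMP, SIG, P0 = 0.01, 0.2, 10.0          # A, sigma, background pressure

def kh_ics(x, y, drho=0.0):
    """Smoothed shear-flow ICs of Lecoanet et al.; drho = density jump / rho_0."""
    t1 = np.tanh((y - Y1) / A0)
    t2 = np.tanh((y - Y2) / A0)
    rho = 1.0 + 0.5 * drho * (t1 - t2)
    vx = U0 * (t1 - t2 - 1.0)
    vy = AMP * np.sin(2 * np.pi * x) * (np.exp(-(y - Y1)**2 / SIG**2)
                                        + np.exp(-(y - Y2)**2 / SIG**2))
    c = 0.5 * (t2 - t1 + 2.0)           # passive tracer field
    return rho, vx, vy, P0 * np.ones_like(rho), c
```

The central stream ($y_1 < y < y_2$) moves with $+u_{\rm flow}$ and carries tracer value $c=0$, while the outer regions move with $-u_{\rm flow}$ and have $c=1$.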
We have also carried out simulations with a higher Reynolds number ${\rm Re} = 10^6$, obtaining qualitatively similar results, although these simulations require higher resolution for convergence and thus tend to be more expensive. \subsection{Visual comparison} A visual illustration of the time evolution of the dye concentration for a simulation with ${\rm Re} = 10^5$ and $\Delta \rho / \rho_0 = 0$ is shown in Fig.~\ref{fig:kh_different_times}. In this calculation, 64 DG cells were used to cover the $x$-range $[0,1]$, which is the relevant number to compare to the resolution information in \citet{Lecoanet2016}. Expansion order $p=6$ has been used in this particular run. It is nicely seen that the KH billow seeded in the initial conditions is amplified in linear evolution until a time $t \sim 1-2$, then it rolls up multiple times in a highly non-linear evolution, before finally strong mixing sets in that progressively washes out the dye concentration throughout the vortex. Upon visual inspection, this time evolution compares very closely to that reported by \citet{Lecoanet2016}. In Figure~\ref{fig:kh_time_vs_order_lecoanet_comparison} we make this comparison more explicit by showing results obtained for different order $p$ at a number of times `face-to-face' with their reference simulation. In each of the panels, the left half of the picture contains the {\small DEDALUS} result at resolution $3096\times 6144$, while the right half gives our results at $64\times 128$ resolution, but with different orders $p$. We have deliberately chosen this modest resolution for this comparison in order to allow some differences to be seen after all. They show up clearly only at second-order in the top row, while at $p=4$ they are only discernible at times $t=4$ and $t=6$ as faint discontinuities at the middle of the images, where the result from {\small DEDALUS} meets that from our DG code. Already by $p=5$, visual inspection is insufficient to identify clear differences. 
We note that for higher DG grid resolutions, identifying such differences rapidly becomes extremely difficult already at lower orders. \subsection{Dye entropy} \label{SecKHdye} A more quantitative comparison of our simulations to those of \citet{Lecoanet2016} can be carried out by considering the evolution of the passive scalar ``dye'' in some detail. The technical implementation of this passive tracer is described in Section~\ref{SecPassiveScalar}. A dye entropy per unit mass can be defined as $s = -c \ln c$, and its volume integral is the dye entropy \begin{equation} S = \int \rho s \,{\rm d}V, \end{equation} which can only monotonically increase with time. The dye entropy can be viewed as a useful quantitative measure for the degree of mixing that occurs as a result of the non-linear KH instability. To guarantee an accurate measurement of the dye entropy, we perform the integral above at two times higher spatial order than employed in the actual simulation run. We also note that when computing the dye entropy we use our entire simulation domain (although the left and right halves give identical values), and we then normalize to half of the volume to make our values directly comparable to those of \citet{Lecoanet2016}. In Figure~\ref{fig:khsmooth_jump_re5}, we show measurements of the dye entropy evolution for several of our runs, compared to the converged results obtained consistently by \citet{Lecoanet2016} with the {\small DEDALUS} and {\small ATHENA} codes. We obtain excellent agreement already for 64 cells and order $p=4$, corresponding to 256 degrees of freedom per dimension. Our under-resolved simulations with fewer cells and/or degrees of freedom show an excess of mixing and higher dye entropy, as expected. We note that \citet{Tricco2019} have also studied this same reference problem using SPH.
Interestingly, they find that simulations that are carried out at lower resolution than required for (approximate) convergence show an underestimate of dye mixing, marking an important qualitative difference from the mesh-based computations. The SPH simulations also require a substantially higher number of resolution elements to obtain an approximately converged result. \citet{Tricco2019} get close to achieving this for the dye concentration by using 2048 particles per dimension, but even then the dye entropy of their result falls slightly below the converged result at $t=8$. \subsection{Error norm} Finally, we consider a direct comparison of the dye concentration fields obtained in our simulations to the {\small DEDALUS} reference solution made publicly\footnote{https://doi.org/10.5281/zenodo.5803759} available by \citet{Lecoanet2016} at a grid resolution of $3096 \times 6144$ points. To perform a quantitative comparison, we consider the $L_2$-norm of the difference in the dye fields, defined as \begin{equation} L_2 = \left[\frac{1}{V} \int \left(c_{\rm DG} - c_{\rm Lecoanet}\right)^2 {\rm d}V \right]^{1/2} . \end{equation} In Figure~\ref{fig:kh_l2_over_time} we show first the time evolution of the $L_2$-norm, for DG simulations carried out with 64 cells and orders $p=2$ to $p=5$. We also include results reported by \citet{Lecoanet2016} for the {\small ATHENA} mesh code at a resolution of 1024 cells, as well as SPH results by \citet{Tricco2019} at particle resolutions of 256 and 2048, respectively. Our $p=4$ results with 64 cells are already as good as {\small ATHENA} with 2048 cells, demonstrating that far fewer degrees of freedom are sufficient when a high-order method is used for this smooth problem. In contrast, a relatively noisy method such as SPH really struggles to obtain truly accurate results.
Even at the 2048 resolution, the errors are orders of magnitude larger than for the mesh-based methods, and the sluggish convergence rate of SPH will make it extremely costly, if possible at all, to push the error down to the level that our DG code, or {\small ATHENA}, achieve with comparative ease. In Figure~\ref{fig:kh_l2_at_t4}, we examine the error as a function of the employed DG expansion order. For a fixed number $N_c=64$ of cells, we show the $L_2$ error at time $t=4$, for orders $p=2$ up to $p=10$. It is reassuring that we again find exponential convergence for this problem, with the error dropping approximately linearly with $p$ on this log-linear plot. This demonstrates that we fully retain the ability to converge at high order for our compressible Navier-Stokes solver, which is additionally augmented with thermal and dye diffusion processes. We consider this to be a very important validation of our numerical methodology and actual code implementation. Another interesting comparison is to consider simulations that have an equal number of degrees of freedom, but different cell numbers and expansion orders. In the figure (marked with crosses), we also include results for the three cases $N_c=64/p=8$, $N_c=128/p=4$, and $N_c=256/p=2$, which all have the same nominal number of degrees of freedom per dimension. Strictly speaking, the higher order ones actually have slightly fewer, given that the number $N^{\rm 2D}(p) = p(p+1)/2$ of degrees of freedom per cell is slightly less than $p^2$ for $p>1$, see Equation~(\ref{eqn:N2D}). Regardless, the run with $N_c=64$ clearly has the lowest error. This confirms once more that for a smooth problem it is typically more worthwhile to invest additional degrees of freedom into higher order rather than into additional cells, as this yields the biggest gain in accuracy.
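This degrees-of-freedom bookkeeping is simple to verify explicitly (a trivial sketch with our own function names):

```python
def dof_2d(p):
    """Number of 2D basis functions per cell at order p: p(p+1)/2."""
    return p * (p + 1) // 2

def total_dof(n_cells, p):
    """Total number of degrees of freedom for an n_cells^2 mesh at order p."""
    return n_cells * n_cells * dof_2d(p)
```

The three cases share the same nominal $N_c \times p = 512$ per dimension, yet the higher-order runs carry fewer total degrees of freedom because $p(p+1)/2 < p^2$ for $p>1$.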
\section{Driven sub-sonic turbulence} \label{SecTurbulence} The phenomenon of turbulence describes the notion of an unsteady, random flow that is characterized by the overlap of swirling motions on a variety of scales \citep[e.g.][]{Pope2000}. In three dimensions, one finds that if fluid motion is excited on a certain scale (the injection scale), it tends to decay into complex flow features on ever smaller scales, helped by fluid instabilities such as the Kelvin-Helmholtz instability. Eventually, the vortical motions become so small that they are eliminated by viscosity on the so-called dissipation scale. If the injection of kinetic energy on large scales persists and is quasi-stationary, a fully turbulent state develops which effectively exhibits a transport of energy from the injection to the dissipation scale. For incompressible, isotropic, subsonic turbulence, the statistics of velocity fluctuations in such a turbulent flow are described by the Kolmogorov velocity power spectrum, which has a power-law shape in the inertial range, and a universal shape in the dissipative regime. For astrophysics, turbulence plays a critical role in many environments, including the intracluster medium, the interstellar medium, and the buoyantly unstable regions in stars. Numerical simulations need to be able to accurately follow turbulent flows, for example in order to correctly describe the mixing of different fluid components, or the amplification of magnetic fields. However, this is often a significant challenge, as the scale separation between injection and dissipation scales in astrophysical settings can be extremely large, while for three-dimensional simulation codes it is already difficult to resolve even a moderate difference between injection and dissipation scales.
In addition, most astrophysical simulations to date rely exclusively on numerical viscosity instead of including an explicit physical viscosity, something that can in principle modify the shape of the dissipative part of the turbulent power spectrum, thereby creating turbulent velocity statistics that differ from the expected universal form because they are directly affected by aspects of the numerical method. Of course, the general accuracy of a numerical method is also important for how well turbulence can be represented. For example, \citet{BauerSpringel2012} have pointed out that the comparatively large noise in SPH makes it difficult for this technique to accurately represent subsonic turbulence. While this can in principle be overcome with sufficiently high numerical effort, it is clear that methods that have a low degree of numerical viscosity, combined with the ability to accurately account for physical viscosity, should be ideal for turbulence simulations. Our DG approach has these features, and especially in the regime of subsonic turbulence, where shocks are expected to play only a negligible role, the DG method should be particularly powerful. This motivates us to test this idea in this section by considering isothermal, subsonic, driven turbulence in periodic boxes of unit density. The subsonic state refers to the average kinetic energy of the flow relative to the sound speed, as measured through the Mach number. Instead of directly imposing an isothermal equation of state, we simulate gas with a normal ideal gas equation of state and reset the temperature every timestep such that a prescribed sound speed is retained. We have checked that this does not make a difference for any of our results, but this approach allows us to use our approximate, fast HLLC Riemann solver instead of having to employ our exact, but slower, isothermal Riemann solver. \subsection{Driving} To create the turbulence, we drive fluid motions on large scales.
To do this consistently at high order, we add a source function $\boldsymbol{s}(\boldsymbol{x}, t)$ to the right-hand side of the Euler equations, both in the momentum equation and as a work function $\boldsymbol{s} \cdot \boldsymbol{v}$ in the energy equation. These source terms have to be integrated with Gaussian quadrature over cell volumes to retain the high-order discretization. For setting up the driving field $\boldsymbol{s}(\boldsymbol{x}, t)$, we follow standard techniques as implemented in \citet{BauerSpringel2012}, which in turn are directly based on \citet{Schmidt2006, Federrath2008, Federrath2009}. The acceleration field is constructed in Fourier space by populating modes in the narrow range $2\pi/L \le k \le 2\times 2\pi/L$, with amplitudes scaled to follow $\propto k^{-5/3}$ over this range. The phases of the forcing modes are drawn from an Ornstein–Uhlenbeck process. They are periodically updated whenever a time interval $\Delta t$ has elapsed, while keeping a temporal correlation over a timescale $t_s$ with the previous phases. This effectively yields a smoothly varying, random driving field. Our specific settings for the update frequency, coherence timescale and distribution function for drawing the driving phases are as in \citet[][their table 1, left column]{BauerSpringel2012}. We also restrict ourselves to purely solenoidal driving, i.e.~we project out all compressive modes in Fourier space by a Helmholtz decomposition. Specifically, if $\boldsymbol{s}$ is the principal acceleration field constructed in the above fashion, we project it as \begin{equation} \hat s_i(\boldsymbol{k}) = \left( \delta_{ij} - \frac{k_i k_j}{\boldsymbol{k}^2}\right) s_j(\boldsymbol{k}) \end{equation} in Fourier space to end up with an acceleration field $\boldsymbol{\hat s}$ that is free of compressive modes, which would only produce a spectrum of additional sound waves in our subsonic case.
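The Helmholtz projection above can be sketched in a few lines of NumPy (an illustration only, not the implementation used in our code): after projecting out the compressive part, $\boldsymbol{k}\cdot\boldsymbol{\hat s}(\boldsymbol{k})$ vanishes for every mode.

```python
import numpy as np

def solenoidal_projection(s_hat, L=1.0):
    """Project a Fourier-space vector field s_hat[3, N, N, N] onto its
    divergence-free part: s_i -> (delta_ij - k_i k_j / k^2) s_j."""
    N = s_hat.shape[1]
    k1 = 2.0 * np.pi / L * np.fft.fftfreq(N, d=1.0 / N)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k = np.stack([kx, ky, kz])
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    k_dot_s = np.einsum("i...,i...->...", k, s_hat)
    return s_hat - k * k_dot_s / k2

# random driving modes, purely as an illustration
rng = np.random.default_rng(42)
N = 16
s_hat = rng.normal(size=(3, N, N, N)) + 1j * rng.normal(size=(3, N, N, N))
s_sol = solenoidal_projection(s_hat)
```

In practice, only the narrow band of driven modes needs to be populated before the projection is applied.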
\subsection{Results for subsonic turbulence} All our turbulence simulations are started with gas of uniform density at rest. We monitor the average kinetic energy, as well as the total cumulative injected kinetic energy and the total cumulative dissipated energy, allowing us to verify the establishment of a quasi-stationary state. An example of this is shown in Figure~\ref{fig:turbulence_energy_and_mach}, where we illustrate the build-up of the turbulent state in terms of the total energies. There is an initial ramp-up phase of the turbulence until $t\sim 5$, during which the Mach number grows nearly linearly to its final quasi-stationary time-averaged value of ${\cal M} \simeq 0.47$. The cumulative injected energy grows approximately linearly with time, whereas the dissipated energy tracks it with a time lag, because the initial evolution until $t\sim 2.5$ does not yet show any significant dissipation. The difference between the injected and dissipated energies is the current kinetic energy of the gas, and thus is effectively given by the Mach number. In Figure~\ref{fig:turbulence_velocity_amplitude}, we show a visual example of the quasi-stationary turbulent state established after some time, here simulated with $N_c=128$ cells and order $p=4$. The slice through the magnitude of the velocity field illustrates the chaotic structures characteristic of turbulence. Even though there are some steep gradients in the velocity field, the velocity varies smoothly overall, reflecting the absence of strong shock waves in this subsonic case. To statistically analyse the turbulent state, we turn to measuring power spectra of the velocity field at multiple output times, and then consider a time-averaged spectrum to reduce the influence of intermittency. To calculate the final power spectrum of a simulation, we average over 64 velocity power spectrum measurements over the time interval $5.12<t<20.48$.
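How such a shell-averaged velocity power spectrum can be measured from a gridded velocity field is sketched below (illustrative Python, not our actual analysis pipeline; the normalization is a choice on our part, made such that the spectrum sums to the mean specific kinetic energy):

```python
import numpy as np

def velocity_power_spectrum(v):
    """Shell-averaged kinetic energy spectrum E(k) of a velocity field
    v[3, N, N, N] on a periodic box; sum_k E(k) equals the mean specific
    kinetic energy <v^2>/2 (Parseval's theorem)."""
    N = v.shape[1]
    v_hat = np.fft.fftn(v, axes=(1, 2, 3)) / N**3      # normalized FFT
    power = 0.5 * np.sum(np.abs(v_hat) ** 2, axis=0)   # energy per mode
    k1 = np.fft.fftfreq(N, d=1.0 / N)                  # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    shell = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    E = np.bincount(shell.ravel(), weights=power.ravel())
    k = np.arange(E.size)
    return k[1:N // 2], E[1:N // 2]                    # drop mean and Nyquist
```

A compensated spectrum is then simply $k^{5/3} E(k)$.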
\subsubsection{Inviscid treatment of gas} The behaviour of inviscid gas is described by the Euler equations of eqn.~(\ref{eq:euler}). Because of the simplicity of this model and the desire to run simulations with as little viscosity as possible to maximize the inertial range of the turbulence, it is a popular choice for the study of turbulence. For example, the largest driven turbulence simulation to date by \citet{Federrath2016LargestSimulation} was performed using inviscid gas, as were many other studies in the field \citep[e.g.][]{2004astro.ph..7616S, BauerSpringel2012, 2016arXiv160209079B, Federrath2008, 2010A&A...512A..81F, 2010MNRAS.406.1659P}. In the top two panels of Fig.~\ref{fig:turbulence_euler_ns_order_vs_nc}, we show such simulations carried out with our DG code. In all these simulations, the energy injected at large scales follows the Kolmogorov spectrum and cascades from large to small scales. This part of the spectrum is called the inertial range, and it follows the $k^{-5/3}$ Kolmogorov spectrum closely, even though our gas is compressible and the density fluctuations at our Mach number are no longer negligible. Note that all our plots are compensated with a $k^{5/3}$ factor, such that the Kolmogorov spectrum corresponds to a horizontal line. The extent of the inertial range is primarily determined by the total number of degrees of freedom in an inviscid simulation. However, as we transition from the inertial range to the dissipative part of the spectrum, a noticeable bump can be seen in which the spectrum significantly exceeds the power-law extrapolation from larger scales. As energy is being transferred from larger to smaller scales, creating ever smaller eddies, it eventually reaches scales at which the code cannot resolve smaller eddies any more. This leads to a build-up of an energy excess at this characteristic scale, until the implicit numerical viscosity terms become strong enough to dissipate away the arriving energy flux.
This effect is commonly known in numerical studies of turbulence and is referred to as the ``bottleneck'' effect. It should be pointed out that experimental determinations of turbulent velocity spectra do not show such a bottleneck, i.e.~it is a purely numerical effect that renders the velocity power spectrum incorrect on the affected scales. The bottleneck effect cannot be fixed by using higher resolution, or higher order for that matter. Indeed, in the top two panels of Figure~\ref{fig:turbulence_euler_ns_order_vs_nc} we can see that the bottleneck moves to ever smaller scales with increasing cell number at a fixed spatial order, and similarly it moves towards smaller scales if we increase the spatial order of our method at a fixed number of cells. While both avenues of adding further degrees of freedom successfully widen the inertial range and push the dissipative regime to smaller scales, they unfortunately cannot eliminate the ``bump'' of the bottleneck, or address the equally incorrect detailed shape of the dissipation regime itself. This detailed shape changes slightly as we vary the order $p$, because the precise way in which numerical dissipation interacts with the flow is modified, while in contrast increasing the number of cells leaves the shape unchanged, because this just moves the dissipation regime to smaller scales in a scale-invariant fashion. The only way around this, and to get closer to the velocity spectra seen in experimental studies of turbulence, is to solve the full compressible Navier-Stokes equations, where the dissipative regime is set not by numerics, but by the physical viscosity of the gas itself. If this viscosity is large enough, it will effectively dissipate energy at scales larger than our numerical viscosity, completely eliminating the bottleneck effect. We consider this case in the following subsection.
\subsubsection{Viscous treatment of gas} We now consider driven turbulence results akin to the simulations just discussed, with the only difference being that we are now solving the full compressible Navier-Stokes equations as described in Sec.~\ref{SecNavierStokesEquations}. In the bottom two panels of Fig.~\ref{fig:turbulence_euler_ns_order_vs_nc}, we display compensated velocity power spectra with physical viscosity added. Such full Navier-Stokes simulations do not exhibit the ``bottleneck'' effect, and moreover, they converge to a single, eventually resolution-independent solution. Such simulations are referred to in the literature as ``direct numerical simulations'' or DNS. Our code can achieve DNS for turbulence by either increasing the resolution or the spatial order, as is evident in the bottom two panels of Fig.~\ref{fig:turbulence_euler_ns_order_vs_nc}. To determine whether increasing the order of our method or its resolution is more beneficial, we compare three simulations with approximately the same number of degrees of freedom, but different resolutions and orders, in Fig.~\ref{fig:turbulence_order_over_cells}. The orange line shows a run we can consider a converged DNS result, with $N_\textrm{c}=128$ and $p=3$. A simulation with identical $N_\textrm{c}$ but lower $p$, shown in blue, fails to fully converge. On the other hand, the green dashed line shows a simulation with \textit{eight times fewer} cells in total, but at a higher spatial order. It has as many degrees of freedom as the simulation shown in blue, and yet its power spectrum matches that of the simulation shown in orange. We can therefore conclude that running driven turbulence at higher order is preferable to increasing the cell resolution. Or to put it another way, if there is a limited number of degrees of freedom that can be represented due to memory constraints, it is better to ``spend'' the memory on higher $p$ than on larger $N_\textrm{c}$.
In the present case, a comparison of the wall-clock time between the high cell resolution and high order runs shows that the high-order calculation is about twice as fast as the one using a higher cell resolution. For even higher orders, this CPU-time advantage may not persist, but the memory advantage will. Given that turbulence simulations tend to be memory-bound, this in itself can already be a significant advantage. \section{Code details} \label{SecCode} \subsection{Parallelization strategy} \label{SecParallelizationStrategy} Modern supercomputers by now consist of thousands to millions of computing cores, a trend which is bound to continue. Recently, however, the most significant gains in computational performance (measured in floating point operations per second -- FLOPS) have come from dedicated accelerator cards. These are most commonly, but not always, graphics processing units (GPUs) that have been repurposed to do general computational work. Accelerators achieve a large number of FLOPS by foregoing large per-core caches and advanced control circuitry for single compute units, while at the same time they are able to execute large sets of threads concurrently in a data-parallel fashion. Current GPU-accelerated computers typically consist of normal, CPU-equipped compute nodes that are outfitted with attached GPU cards. Utilising the power of both CPUs and GPUs efficiently on such heterogeneous machines is challenging. It requires not only a suitable subdivision of the work, but often also an algorithmic restructuring of the computations such that they can be mapped efficiently onto the massively parallel execution model of GPUs, as well as prescriptions for data placement and movement between the separate memory of CPUs and GPUs. The problem becomes even harder when multiple compute nodes with distributed memory, each with their own GPUs, are supposed to work together on a tightly coupled problem.
Efficient and scalable massively parallel codes for such machines must decompose the problem into multiple parts, distribute the parts among the available compute units, and only exchange data between the various parts when really needed. In the present version of our code {\small TENETGPU}\footnote{While our code is written from scratch for GPUs, its first version has been heavily inspired by the code {\small TENET} of \citet{Schaal2015}, hence we named ours {\small TENETGPU}.}, we address this by an implementation that can execute a given hydrodynamical problem flexibly either on one or several GPUs, on one or multiple CPU cores, or a mixture thereof. Independently of how GPUs and CPU cores are distributed onto different compute nodes, {\small TENETGPU} can in this way make use of whatever is available, up to extremely powerful systems such as the first exascale supercomputers that are presently being put into service (which are GPU-accelerated, such as `Frontier', ranked the fastest in the world according to the Top500 list released May 30, 2022). To achieve this flexibility, we split the mesh along the $x$-axis into different slabs, which can have different thicknesses, if desired. Each slab is either computed by a different GPU, or by one CPU core. The communication between slabs, which is realized with the Message Passing Interface (MPI), thus needs to happen along the $x$-dimension between neighboring slabs only, as all the needed data along the other two axes is locally available for the corresponding slab. The data that is communicated consists of surface states or surface fluxes at Gauss points needed for integrations over cell areas. For driving the GPU computations, each GPU requires a separate CPU core as well.
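A speed-proportional slab assignment of this kind can be illustrated with a small, hypothetical helper (the function and its name are ours, for illustration only, and not part of {\small TENETGPU}):

```python
def assign_slabs(n_cells, n_gpus, n_cpu_cores, gpu_speed):
    """Split the n_cells mesh planes along x into one slab per compute unit,
    with thickness roughly proportional to each unit's speed (gpu_speed is
    the speed of one GPU in units of one CPU core). GPU slabs come first."""
    total_speed = n_gpus * gpu_speed + n_cpu_cores
    cpu_thickness = max(1, round(n_cells / total_speed))
    gpu_thickness = (n_cells - n_cpu_cores * cpu_thickness) // n_gpus
    slabs = [gpu_thickness] * n_gpus + [cpu_thickness] * n_cpu_cores
    slabs[0] += n_cells - sum(slabs)       # hand any remainder to a GPU
    return slabs
```

For a balanced split, the assumed speed ratio should of course reflect the actual relative throughput of the devices.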
For example, if one has a compute node with 32 cores and 2 GPUs as accelerator cards, a simulation with $256^3$ mesh cells could be run by assigning slabs with a thickness of 98 cells to each of the GPUs, while letting the remaining 30 compute cores each work on a slab with a thickness of 2 cells. Of course, this particular mixed execution example would only make sense if each of the GPUs were around $\sim 50$ times faster than a single CPU core. In practice, the speed difference is typically considerably larger, so most of the work should normally be assigned to GPUs if those are available. We also note that for the moment our code supports only meshes with uniform and fixed resolution. However, a more general domain decomposition than the present slab-based one is planned for the future and is in principle straightforward. This can, in particular, remove the obvious scaling limitation of our current approach, where the number of cells per dimension sets the maximum number of GPUs or CPU cores that can be employed. \subsection{GPU computing implementation} The above parallelization strategy makes it clear that our code is neither a plain CPU code nor a pure GPU code. Rather, it implements its core compute functionality twice where needed, in a CPU-only version and in a GPU-only version. Both versions can be used interchangeably for any given slab taken from the global computational mesh, and they produce the same results. While this approach evidently requires some extra coding, we have found that it is actually quite helpful for code validation, as well as for quantifying the relative performance of the CPU and GPU versions. Further, the extra coding effort can be greatly alleviated by using, wherever possible, functions that can be compiled and executed both by GPUs and CPUs based on a single implementation. For the GPU code, we have used the CUDA programming model available for Nvidia GPU devices.
All our code is written in low-level C++, and we presently do not make use of programming models such as OpenACC, special GPU-accelerated libraries, or new C++ language features that allow GPU-based execution of standard library algorithms via execution policies. Our programming model is thus best described as MPI-parallel C++, accelerated with CUDA\footnote{We presently use the CUDA toolkit version 11.4, the GNU g++ 11 compiler and the C++17 standard. For message passing, we prefer the OpenMPI-4 library, for Fourier transforms we use FFTW-3, and for random number generation we rely on GSL 2.4.} when GPUs are available. If no GPUs are available, the code can still be compiled into a CPU-only version. For storing static data such as coefficients of Legendre polynomials or Gaussian quadrature weights, we try to make use of the special constant memory on GPUs, which offers particularly high performance compared to the ordinary global memory. Likewise, for computing parallel reductions across individual cells, we make use of the special shared memory. However, the size of the corresponding memory spaces is quite limited, and varies between different GPU hardware models. This can necessitate adjustments of the algorithms used at compile time, depending on code settings such as the expansion order and on which execution platform is used. We address this by defining appropriate compile-time switches, such that these adjustments are largely automatic. We note that the data of slabs that are computed with GPUs needs to fit completely into GPU memory, as we refrain from transmitting the data from the front-end host computer to the GPU on every timestep. Instead, the data remains on the GPU for maximum performance, and only when a simulation is finished, or an intermediate result should be output to disk, is it copied back from the GPU to the front-end host. Wherever such transfers are needed, we use pinned memory on the front end to achieve maximum bandwidth between the host and GPUs.
GPUs can access such pinned memory directly, without going through the host CPU first. The problem sizes we are able to efficiently tackle with GPUs are therefore limited by the total combined GPU memory available to a run. Modern GPUs typically have some tens of GB of main memory, but the exact amount can vary greatly depending on the model, and is of course a matter of price as well. The communication between adjacent slabs is organized such that communication and computation can in principle overlap. This is done such that first the surface states are computed and a corresponding MPI exchange with the neighbouring slabs is initiated. While this proceeds, the volume integrals for the slabs are carried out by the GPU, and only once this is completed does the work continue with the received surface data. Because slabs that are computed on GPUs need to be executed in a massively thread-parallel fashion with shared-memory algorithms, some changes in the execution logic compared to the effectively serial CPU code are required. For example, to avoid race conditions in our GPU code without needing to introduce explicit locks, we process the mesh in a red-black checkerboard fashion. Finally, we note that we also implemented a scheme that makes our results binary identical when the number of mesh slabs is changed. This ultimately relates to the question of how the wrap-around between the leftmost and rightmost planes of the mesh in our periodic domain is implemented. Here, the order in which fluxes from the left and right neighboring cells are added to a cell needs to be unique and independent of the location and number of slabs in the box, in order to avoid different floating-point rounding errors being introduced when the number of slabs is changed. \begin{table} \begin{center} \begin{tabular}{lrc} \hline $N_c$ & $p$ & min.
memory need \\ \hline 128 & 2 & 512 MB\\ 128 & 3 & 1440 MB\\ 128 & 4 & 3520 MB\\ 128 & 6 & 9856 MB\\ 128 & 10 & 37.81 GB \\ 2048 & 2 & 2048 GB\\ 2048 & 3 & 5760 GB\\ 2048 & 4 & 13.75 TB\\ 2048 & 6 & 38.5 TB \\ 2048 & 10 & 151.3 TB \\ \hline \end{tabular} \end{center} \caption{Minimum memory need of our DG code for a 3D simulation with $(N_\textrm{c})^3$ cells and expansion order $p$, including an allowance for the artificial viscosity field. Double precision with 8 bytes per floating point number has been assumed. } \label{tab:mem} \end{table} \subsection{Memory usage} Before closing this section, it is perhaps worthwhile to discuss the memory need of our DG simulations, as this ultimately determines the maximum size of simulations that can be done for a given number of GPUs. To represent a scalar field such as the density $\rho$ at order $p$, we need for every cell a certain number of basis function weights $N^{d{\rm D}}(p)$, where $d$ is the number of spatial dimensions, see equations (\ref{eqn:N3D}) and (\ref{eqn:N2D}). When multiplied by the number of cells, this gives the number of degrees of freedom, which is identical to the number of floating point variables needed to store the full density field. If we write the total number of cells as $(N_c)^d$, then the total number of variables that need to be stored for the DG weight vector is \begin{equation} N_w = (2+d) (N_c)^d N^{d{\rm D}}(p). \end{equation} Here we assumed that we simulate the plain Euler equations without viscosity, where we need $(2+d)$ conserved fields to describe the flow. If we account for our artificial viscosity field, which will always be required for problems involving shocks, this number goes up by one further unit, yielding \begin{equation} N_w = (3+d) (N_c)^d N^{d{\rm D}}(p). \end{equation} A passive tracer field, if activated, would add a further unit to the prefactor.
In 2D and 3D, a conservative upper bound for $N^{d{\rm D}}(p)$ is $p^d$, but this is not particularly tight. Already for $p=2$, $N^{3{\rm D}}$ is lower than $p^3$ by a factor of 2, for $p=4$ this grows to a factor of 3.2, and for $p=10$ the difference is more than a factor of 4.5. Another significant source of memory need lies in our timestepping algorithm. At present we use stability-preserving Runge-Kutta schemes that require temporary storage of the time derivatives of the weights, evaluated at several different points in time, depending on the order of the Runge-Kutta scheme, which we adjust according to the chosen $p$. The required temporary storage space $N_{\dot{w}}$ is thus a multiple of $N_w$, with a prefactor that depends on the chosen order $p$, i.e. \begin{equation} N_{\dot{w}} = f_t(p)\, N_w. \end{equation} Here $f_t(p)$ depends on the number of stages in the Runge-Kutta scheme. Presently, we use a setup where $f_t(p) = p$ for $p \le 3$, and $f_t(p) = 5$ otherwise. The minimum amount of total storage (in terms of needed floating point numbers) required by the code is thus \begin{equation} N_w + N_{\dot{w}} = [3+d + f_t(p)]\, (N_c)^d\, N^{d{\rm D}}(p). \end{equation} During execution of our code using multiple GPUs or CPU cores, some temporary buffer space is furthermore required to hold, in particular, send and receive buffers for fluid states or fluxes along slab surfaces orthogonal to the $x$-direction. These are subdominant, however, compared to the memory required to store the weights and their time derivatives themselves. The latter thus represent the quantities that primarily need to be examined to decide about the feasibility of a simulation in terms of its memory needs. When we use the oscillatory sensor for controlling artificial viscosity, some further temporary storage is needed as well, but since only two scalar quantities per cell are required, this is again small compared to $N_w$ and does not change this conclusion.
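The entries of Table~\ref{tab:mem} follow directly from this storage estimate. As a quick check, the following Python sketch reproduces them, assuming the closed form $N^{3{\rm D}}(p) = p(p+1)(p+2)/6$ for the basis count (our assumption here, chosen to be consistent with the ratios to $p^3$ quoted above):

```python
def basis_count_3d(p):
    """Assumed number of 3D DG basis functions at order p, corresponding to
    a basis of total polynomial degree below p: p (p+1) (p+2) / 6."""
    return p * (p + 1) * (p + 2) // 6

def min_memory_bytes(n_c, p, bytes_per_float=8):
    """Minimum storage [3 + d + f_t(p)] (N_c)^d N^3D(p) in bytes for d = 3,
    i.e. the weights (including the artificial viscosity field) plus the
    Runge-Kutta time-derivative storage, in double precision."""
    f_t = p if p <= 3 else 5          # number of stored RK stages
    return (6 + f_t) * n_c**3 * basis_count_3d(p) * bytes_per_float
```

Scaling to other problem sizes is then trivial; for instance, a $512^3$ run needs $64\times$ the memory of the corresponding $128^3$ entry.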
Note that our DG approach does not need to store gradient fields for any of the fields, which is different from many finite volume methods such as, for example, {\small AREPO}. Also, use of the Navier-Stokes solver instead of simulating just the Euler equations does not increase the primary memory needs in any significant way. In Table~\ref{tab:mem}, we give a few examples of the memory need for a small set of simulation sizes and simulation orders, which illustrates the memory needs of the code, and which can be easily scaled to other problem sizes of interest. A single Nvidia A100 GPU with 40 GB of RAM could thus still run a $N_c=128$ problem at order $p=10$, or a $512^3$ problem at quadratic order $p=2$. For carrying out a $2048^3$ simulation at $p=2$, a cluster offering at least 52 such devices would already be necessary. \section{Code performance} \label{SecPerformance} In order to fully utilise large parallel supercomputers, a code has to be able to run efficiently not only on a single core on one CPU, but also on hundreds to thousands of cores on many CPUs. The degree to which this can accelerate the total runtime of a computation is encapsulated by the concept of parallel scalability. Similarly, for a GPU-accelerated code it is of interest to what extent the use of a GPU can speed up a computation compared to using an ordinary CPU. If more than a single GPU is used, one is furthermore interested in whether a code can efficiently make simultaneous use of several, perhaps hundreds of GPUs. In this section we examine these aspects and present results of weak- and strong scaling tests of our new code. \subsection{Weak scaling} Weak scaling performance describes a situation where a set of simulations of increasing size is run and compared, but where the load per computational unit, be it a CPU core or a GPU in our case, is kept constant. 
The time to perform a single timestep should remain constant in this case, increasing only due to communication-related overhead, work-load imbalances, or other types of parallelization losses, for example if a code contains residual serial work that scales with the problem size. Weak scaling results of our code are shown in Fig.~\ref{fig:weak_scaling}. For definiteness, we consider realistic driven turbulence problems in 3D of different sizes using the Navier-Stokes equations, i.e.~the type of problem is directly equivalent to the ones considered in Section~\ref{SecTurbulence}. We consider problem sizes of $128^3$, $160^3$, $200^3$, $256^3$, $320^3$, and $384^3$ cells, forming a sequence that approximately doubles in size, with a factor of 27 enlargement from the smallest to the largest runs. To compensate for the fact that the problem size does not exactly double every time we increase the number of cells, we apply a correction factor to the timing results at each resolution\footnote{The current version of the GPU part of the code can only run if $N_\textrm{c}$ and the number of slabs in the $x$-direction per rank are even. This, and the fact that $N_\textrm{c}$ has to be an integer in any case, prevents ideal doubling of the problem size. The correction factors we apply are: 128$^3$: 1.0, 160$^3$: 0.977, 200$^3$: 0.954, 256$^3$: 1.0, 320$^3$: 0.977, and 384$^3$: 0.844.}. Correspondingly, we execute these problems with one Nvidia A100 GPU for the smallest mesh size, up to 32 GPUs for the largest mesh size, keeping the load per GPU roughly constant. The results are shown in the left panel of Fig.~\ref{fig:weak_scaling}. For comparison, we also measure the execution speed when every GPU is replaced by four CPU cores of Intel Xeon-6138 processors. The corresponding results are shown in the right panel of Fig.~\ref{fig:weak_scaling}. Finally, we repeat these measurements for different DG expansion orders, $p=2$, $p=3$, and $p=4$.
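The correction factors quoted in the footnote are consistent with rescaling each timing by the ratio of the actual cell count to an ideal doubling sequence $128^3 \times 2^i$; the following sketch (our reconstruction of the rescaling, not code taken from {\small TENETGPU}) reproduces all six values:

```python
def weak_scaling_correction(n_c, step, base=128):
    """Ratio of the actual cell count n_c^3 to an ideally doubled problem
    size base^3 * 2^step, used to rescale the per-timestep timings."""
    return n_c**3 / (base**3 * 2**step)

sizes = [128, 160, 200, 256, 320, 384]
factors = [round(weak_scaling_correction(n, i), 3) for i, n in enumerate(sizes)]
```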
The results in the figure show generally good weak scalability, but also highlight some performance losses for large problem sizes. These arise in part because our domain is split into slabs rather than cubes: larger problems lead to ever thinner slabs with a larger surface-to-volume ratio, and thus more communication between different slabs. We also see the influence of enhanced communication on weak scalability when data needs to be transferred across node boundaries, and for higher orders, which have more degrees of freedom at the same number of cells. \subsection{Strong scaling} Strong scaling is a test where one runs a problem of fixed size on an ever increasing number of compute units. Contrary to weak scaling, the load per compute unit decreases in this test, and the time to perform a single timestep should decrease in inverse proportion to the increasing computational power applied to solve the problem. We show a strong scaling result in Fig.~\ref{fig:strong_scaling}, again carried out for a realistic driven turbulence problem in 3D using the Navier-Stokes equations. For definiteness, we use a simulation with $256^3$ cells, and consider orders $p=2$ to $p=4$. The left panel of Figure~\ref{fig:strong_scaling} shows the average execution time for a single step when 1, 2, 4, 8, or 16 Nvidia A100 GPUs are used. In contrast, the right panel of Figure~\ref{fig:strong_scaling} shows the average execution time when CPU cores on a cluster with 2 Intel Xeon-6138 CPUs per node are used, with 40 cores per node. We show results from 1 core to 256 cores. Especially in the latter case, one sees clear limits of strong scalability, as communication costs become quite large when the problem is decomposed into slabs that are just a single cell wide. This stresses that there is always a limit to strong scalability, as expressed by Amdahl's law. By enlarging the problem size, this limit can however usually be pushed to larger parallel partition sizes.
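Amdahl's law can be stated compactly: if a fraction $f$ of the runtime is perfectly parallelizable, the speedup on $n$ compute units is $1/[(1-f)+f/n]$, which saturates at $1/(1-f)$ no matter how many units are added. A one-line sketch (the value of $f$ used below is purely illustrative):

```python
def amdahl_speedup(f, n):
    """Amdahl's law: speedup on n compute units when a fraction f of the
    runtime is perfectly parallelizable and (1 - f) remains serial."""
    return 1.0 / ((1.0 - f) + f / n)
```

Enlarging the problem size effectively increases $f$, which is why the strong-scaling limit can be pushed to larger partition sizes.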
\subsection{CPU vs GPU benchmark} Another interesting question is how the absolute speed of GPU execution of our code compares to running it only on ordinary CPU cores. To estimate this speedup, we take the average execution times needed to compute a timestep from our weak scaling results for both the GPU and CPU runs and consider their ratio. We do this for the three considered DG orders $p=2$ to $p=4$, and for the varying problem sizes and numbers of compute units used. Since we had used 4 CPU cores to pair up with 1 GPU, we rescale the results in two different ways, to either compare the execution performance of four Nvidia A100 GPUs with 40 Intel Xeon-6138 cores -- which is how one of our compute nodes is equipped -- or the performance of a single GPU with that of one CPU core (which thus gives 10 times higher values). The corresponding results are illustrated in Fig.~\ref{fig:gpu_speedup}. The speedup of GPU execution at the node level is modest for order $p=2$, as there are not enough floating point operations to fill up the GPUs. At $p=3$ we reach the highest node-level speedup observed among this set of runs; it peaks at just over 16x the CPU speed for large problems. Order $p=3$ runs show better performance because there are many floating point operations to perform at the same time, and all intermediate results still fit into the GPU's limited shared memory. Such shared memory is ``on chip'' and therefore about 100x faster than global memory. In the current version of our code, runs with $p>3$ need to store the intermediate DG weight updates from each quadrature point in global, rather than shared, memory, which we think is the main reason that the performance shows a small dip again for $p=4$. A further complication of storing intermediate results in global memory is that once all the weight updates are calculated, they need to be additionally summed in a reduction step over all quadrature points of a single cell.
In the future we will overhaul this part of the code by modifying how threads operate on quadrature points in a cell, eliminating the need for global memory and the additional reduction steps. We therefore think there is still room for substantial improvements in the performance of the GPU code. At this point, a single GPU is about 160 times as fast as a CPU core, but when comparing a fully equipped GPU node to a fully equipped CPU node, more realistic numbers are in the ballpark of $\sim 16$. This is not far from the ratio of the nominal peak performances of the involved compute devices for double precision arithmetic (which we have used here throughout), but this comparison also suggests that there is still some modest room for improvement in our GPU implementation. \section{Summary and Conclusions} \label{SecSummary} In this study, we have described a novel hydrodynamical simulation code based on the mathematical Discontinuous Galerkin approach. In this method, the fluid state is expanded into a set of spatially varying basis functions with time-variable weights, yielding a separation of the temporal and spatial dependencies. The time evolution of the weights is obtained from a weak formulation of the underlying partial differential equations of fluid dynamics. Our work builds on the earlier development of a DG code by \citet{Schaal2015} and \citet{guillet_high_order_2019}, but extends it in several crucial directions. First of all, we have developed a novel GPU implementation from scratch, thereby demonstrating the substantial potential of these acceleration devices for achieving higher computational performance in astrophysical applications. This potential has already been identified in a few first finite-volume hydrodynamical GPU codes in astrophysics, but ours is the first that can carry out DG calculations of the full Navier-Stokes equations at very high order of $p= 10$ and beyond.
Secondly, we have introduced a novel approach to shock-capturing at high order, addressing the long-standing problem that standard slope-limiting techniques do not work well at high order and tend to discard, in troubled cells, much of the advantage that a high-order approach is supposed to deliver. The latter can only be rescued if the DG method is able to capture physical discontinuities in a sub-cell fashion. By means of our new source routines for a time-dependent artificial viscosity field, we have demonstrated very good shock-capturing ability of our code, with a shock broadening that closely tracks the effective spatial resolution $h/p$ that we expect from the method based on its number of degrees of freedom per dimension. While this does not give high-order approaches an advantage in representing a shock compared with a lower-order method with the same number of degrees of freedom, it is at least not worse, and a high-order approach will in any case still be beneficial for all smooth parts of a flow. If it at the same time performs as well as a lower-order method in places where there is a shock, this can be a significant advantage. For contact discontinuities, similar considerations apply, but here high-order methods have the additional advantage of exhibiting greatly reduced numerical diffusivity. Contact discontinuities that move over substantial timespans therefore also benefit from the use of higher order. Third, we have stressed that the use of physical viscosity is often key to making problems well posed and amenable to direct numerical solution. Here we have introduced a novel method to define the viscous surface fluxes at cell interfaces. It is based on arriving at unambiguous derivatives at interfaces by projecting the two piece-wise solutions in the adjacent cells onto a continuous basis function expansion covering both cells. 
The derivatives can then be computed in terms of analytic derivatives of the basis functions. We have shown that this technique is robust, consumes much less memory and computational effort than the uplifting technique, and, most importantly, converges at the expected rapid rate when high order is used. In fact, in several of our test problems we could show that for smooth problems our DG code exhibits exponential convergence as a function of expansion order $p$, while for fixed order the $L_1$ error norm declines as a power law of the spatial resolution, $L_1\propto h^{p}$. These favourable properties suggest that it is often worthwhile to invest additional degrees of freedom into the use of higher expansion order rather than employing more cells. However, since every DG cell effectively represents a small spectral problem in which the required solution evaluations and volume integrations are carried out in real space, the computational cost to advance a single cell also increases rapidly with order $p$. In practice, this can make the optimal order quite problem dependent. With our present implementation we could obtain excellent agreement with the reference Kelvin-Helmholtz solution computed by \citet{Lecoanet2016} with the spectral code {\small DEDALUS}. Remarkably, we achieved this already with 64 cells and order $p=4$, for which our results are as accurate as those obtained with the finite volume code {\small ATHENA} at second order using 2048 cells. This again shows the potential of the DG approach. Given that in this work we could overcome one of its greatest weaknesses in an accurate, simple, and robust way -- namely the treatment of shocks at high order -- we are confident that the DG method could soon turn into a method of choice in astrophysical applications, rivaling the traditional finite volume techniques. 
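The fixed-order convergence measurement can be reproduced schematically: one fits a power law to the $L_1$ error as a function of resolution $h$ and reads off the exponent. The sketch below uses synthetic errors, not our test data:

```python
import numpy as np

def convergence_order(h, l1_errors):
    """Estimate p from L1 ∝ h^p by a straight-line fit in log-log space."""
    slope, _intercept = np.polyfit(np.log(h), np.log(l1_errors), 1)
    return slope

# Synthetic errors for a hypothetical fixed-order p = 3 run: L1 = 0.5 * h^3.
h = np.array([1/16, 1/32, 1/64, 1/128])
print(convergence_order(h, 0.5 * h**3))  # ~3.0
```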
Our next planned steps to make this a reality are to add additional physics, such as radiative cooling and self-gravity, to our code, and to provide functionality for local refinement and derefinement ($h$-adaptivity), as well as for varying the expansion order used in a single cell ($p$-adaptivity). The high performance we could realize with our GPU implementation, which outperforms modern multi-core CPUs by a significant factor, further strengthens the case for pushing in this direction, which also seems a necessity if we are eventually to harness the power of the most powerful supercomputers at the exascale level for unsolved problems in astrophysical research. \section*{Acknowledgements} The authors acknowledge helpful discussions with numerous students at MPA and hail the printf(*) debugging technique.\vspace*{-0.5cm} \section*{Data Availability} Data of specific test simulations can be obtained upon reasonable request from the corresponding author.\vspace*{-0.5cm} \bibliographystyle{mnras} \bsp % \label{lastpage}
Title: Multi-frequency angular power spectrum of the 21~cm signal from the Epoch of Reionisation using the Murchison Widefield Array
Abstract: The Multi-frequency Angular Power Spectrum (MAPS) is an alternative to spherically-averaged power spectra: it computes local fluctuations in the angular power spectrum without the need for a line-of-sight spectral transform. We aim to test different approaches to MAPS and to the treatment of foreground contamination, and to compare with the spherically-averaged power spectrum and the single-frequency angular power spectrum. We apply the MAPS to 110~hours of data in $z=6.2-7.5$ obtained for the Murchison Widefield Array Epoch of Reionisation experiment to compute the statistical power of 21~cm brightness temperature fluctuations. In the presence of bright foregrounds, a filter is applied to remove large-scale modes prior to MAPS application, significantly reducing MAPS power due to systematics. The MAPS shows a contrast of 10$^2$--10$^3$ to a simulated 21~cm cosmological signal for spectral separations of 0--4~MHz after application of the filter, reflecting results for the spherically-averaged power spectrum. The single-frequency angular power spectrum is also computed. At $z=7.5$ and $l=200$, we find an angular power of 53~mK$^2$, exceeding a simulated cosmological signal power by a factor of one thousand. Residual spectral structure, inherent to the calibrated data, and not spectral leakage from large-scale modes, is the dominant source of systematic power bias. The single-frequency angular power spectrum yields slightly poorer results than the spherically-averaged power spectrum, after application of a spectral filter to reduce foregrounds. Exploration of other filters may improve this result, along with consideration of wider bandwidths.
https://export.arxiv.org/pdf/2208.06082
\title{Multi-frequency angular power spectrum of the 21~cm signal from the Epoch of Reionisation using the Murchison Widefield Array} \titlerunning{Multi-frequency angular power spectrum of the 21~cm signal from the EoR using the MWA} \author{Cathryn M. Trott\inst{1,2}\fnmsep\thanks{Email: cathryn.trott@curtin.edu.au} \and Rajesh Mondal\inst{3,4} \and Garrelt Mellema\inst{3} \and Steven G. Murray\inst{5} \and Bradley Greig\inst{6,2} \and Jack L. B. Line\inst{1,2} \and Nichole Barry\inst{1,2} \and Miguel F. Morales\inst{7} } \institute{International Centre for Radio Astronomy Research, Curtin University, Bentley, WA 6102, Australia \and ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Bentley, WA 6102, Australia \and The Oskar Klein Centre, Department of Astronomy, Stockholm University, AlbaNova, SE-10691, Stockholm Sweden \and Department of Astrophysics, School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel \and School of Earth and Space Exploration, Arizona State University, Tempe AZ, USA \and School of Physics, The University of Melbourne, 3010 Australia \and Department of Physics, The University of Washington, Seattle WA, USA} \date{Received ; accepted } \abstract {The Multi-frequency Angular Power Spectrum (MAPS) is an alternative to spherically-averaged power spectra, and computes local fluctuations in the angular power spectrum without need for line-of-sight spectral transform. } {To test different approaches to MAPS and treatment of the foreground contamination, and compare with the spherically-averaged power spectrum, and the single-frequency angular power spectrum.} {We apply the MAPS to 110~hours of data in $z=6.2-7.5$ obtained for the Murchison Widefield Array Epoch of Reionisation experiment to compute the statistical power of 21~cm brightness temperature fluctuations. 
In the presence of bright foregrounds, a filter is applied to remove large-scale modes prior to MAPS application, significantly reducing MAPS power due to systematics. } {The MAPS shows a contrast of 10$^2$--10$^3$ to a simulated 21~cm cosmological signal for spectral separations of 0--4~MHz after application of the filter, reflecting results for the spherically-averaged power spectrum. The single-frequency angular power spectrum is also computed. At $z=7.5$ and $l=200$, we find an angular power of 53~mK$^2$, exceeding a simulated cosmological signal power by a factor of one thousand. Residual spectral structure, inherent to the calibrated data, and not spectral leakage from large-scale modes, is the dominant source of systematic power bias. The single-frequency angular power spectrum yields slightly poorer results compared with the spherically-averaged power spectrum, having applied a spectral filter to reduce foregrounds. Exploration of other filters may improve this result, along with consideration of wider bandwidths.} {} \keywords{instrumentation:interferometers, methods:statistical; cosmology: dark ages, reionization, first stars} \section{Introduction} \label{sec:int} Exploration of the Epoch of Reionisation (EoR; $z=5.3-10$) through the 21~cm neutral hydrogen transition remains a primary goal of many current and upcoming interferometric low-frequency radio telescopes, including the Murchison Widefield Array \citep[MWA,][]{tingay13_mwasystem,bowman13_mwascience,wayth18}, LOFAR{\footnote[1]{http://www.lofar.org}} \citep{vanhaarlem13}, Hydrogen Epoch of Reionization Array (HERA){\footnote[3]{http://reionization.org}} \citep{deboer17} and the upcoming Square Kilometre Array (SKA){\footnote[4]{http://skatelescope.org}} \citep{koopmans15}. Evolution of the signal over redshift is also explored through the global spatially-averaged temperature, using both single element and short-spacing interferometric arrays \citep{edges18,saras322,2022arXiv220310466T}. 
The EDGES experiment \citep{edges18} reported the detection of a deep absorption trough at 78~MHz, suggested to mark the birth of the first stars and the commencement of Lyman-$\alpha$ coupling; however, the cosmological origin of this measurement has been disputed by recent measurements from the SARAS3 experiment \citep{saras322}. These substantial international efforts have steadily moved the field closer to the detection, and characterisation, of this cosmological signal, but as yet experiments have not reported success, being hampered by the complex structure of the bright foreground signal from radio galaxies and our Galaxy, and the difficulties associated with performing precision experiments with low-frequency radio telescopes. As such, the use of even seemingly simple metrics, such as the spatial power spectrum, has not yielded success. The Multi-Frequency Angular Power Spectrum (MAPS) was first proposed under that name by \citet{2007MNRAS.378..119D}, although it had been studied earlier by, for example, \citet{2005MNRAS.356.1519B} and \citet{2005ApJ...625..575S}. \citet{mondal18,mondal19} developed it further, and a recent paper provided the first study of its performance in the estimation of EoR parameters using an MCMC framework \citep{mondal22}. \citet{pal21} recently applied MAPS to 150~MHz data from the GMRT telescope, using the Tapered Gridded Estimator, and placed limits on the power spectrum at $z=8.28$. MAPS computes the angular power spectrum as a function of spectral separation, characterising the 21~cm signal correlation as a function of scale. It is a useful statistic in the presence of light-cone effects, whereby the evolution of the signal along the line of sight destroys signal ergodicity for three-dimensional power spectrum analyses. MAPS is straightforward to implement, without the need for a line-of-sight transform, which otherwise can be problematic. 
Application to real data, however, is complicated by spectrally-correlated continuum foreground sources, which can be 3--4 orders of magnitude brighter than the EoR signal. In this work, we apply MAPS to 110 hours of data from the EoR0 observing field obtained with the MWA between 2013 and 2016 in the frequency range 167--197~MHz ($z=6.2-7.5$). MAPS is applied before and after application of a foreground filter, which is designed to suppress structure on delays shorter than a set value (e.g., 180~ns), while retaining signal on faster-varying scales. The MAPS software is available from Github\footnote{\url{https://github.com/rajeshmondal18/MAPS}}; however, this uses image-based datacubes as inputs. We will apply the MAPS algorithm to gridded visibility data from the MWA experiment, and therefore develop our own software, following the algorithm described in \citet{mondal20,mondal22}. The dataset is identical to that used in \citet{trott20} to extract cylindrically- and spherically-averaged Fourier power spectra. \section{METHODS} The single-frequency, $\nu$, angular power spectrum is defined by: \begin{equation} C_l^s(\nu) = C_{2\pi U}(\nu) = \frac{1}{\Omega}\langle \tilde{T}_b({\bf U})\tilde{T}_b(-{\bf U}) \rangle, \end{equation} for beam field-of-view $\Omega$ sr and angular multipole $l=2\pi U$. By extension, MAPS is defined by \citet{mondal18}: \begin{equation} C_l(\nu_1,\nu_2) = C_{2\pi U}(\nu_1,\nu_2) = \frac{1}{\Omega}\langle \tilde{T}_b({\bf U},\nu_1)\tilde{T}_b(-{\bf U},\nu_2) \rangle. \end{equation} If we impose ergodicity and periodicity along the frequency direction, we have $C_l(\nu_1, \nu_2) \equiv C_l(\Delta \nu)$. The dimensionless MAPS is: \begin{equation} \mathcal{D}_l(\nu_1, \nu_2) = \frac{l(l+1)C_l(\nu_1, \nu_2)}{2\pi}, \end{equation} which has units of temperature squared. The dimensionless single-frequency angular power spectrum is computed likewise: \begin{equation} \mathcal{D}_l^s(\nu) = \frac{l(l+1)C_l^s(\nu)}{2\pi}. 
\label{eqn:aps} \end{equation} With an interferometer, we measure visibilities (coherence of the electric field) as a function of scale (baseline length), in units of Jansky. For power spectrum estimation, it is typical to grid measurements onto a common $uv$-plane to allow for coherent addition of data (lower noise). For this, a gridding kernel, $K$, is employed, which matches or mimics the Fourier representation of the instrument primary beam\footnote{In an optimal power spectrum estimator, the gridding kernel is the Fourier Transform of the telescope response function to the sky (i.e., the beam), because this correctly represents the smearing of information due to the finite station size. In practice, the MWA beam is highly structured with sidelobes, and we instead employ a size-matched Blackman-Harris 2D window function as the kernel, as is done in \citet{trott16chips}.}. The averaged sky signal after this process for cell $u,v$ is given by: \begin{equation} \mathcal{V}(u,v) = \frac{\sum_i V(u_i,v_i)K(u-u_i,v-v_i)}{\sum_i K(u-u_i,v-v_i)} \,\,\text{Jy}. \end{equation} In order to remove power bias due to additive noise, power spectrum estimators such as CHIPS \citep{trott16chips} also typically use some spectral or temporal differencing, such that data with different noise realisations, but matched signal, are multiplied: \begin{equation} P(k) \propto \langle \mathcal{V}(t_1) \mathcal{V}^\ast(t_2) \rangle = \frac{1}{4}(P_{tot}(k) - P_{diff}(k)), \end{equation} where $P_{tot} = |\mathcal{V}(t_1) + \mathcal{V}^\ast(t_2)|^2$ and $P_{diff} = |\mathcal{V}(t_1) - \mathcal{V}^\ast(t_2)|^2$, which can be numerically easier to implement. CHIPS outputs these gridded total and differenced visibilities, and their weights. 
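These estimator building blocks can be sketched in a few lines (a schematic with our own function names and synthetic inputs, not the CHIPS implementation; the stacking of channel pairs of equal separation anticipates the ergodic estimator discussed next):

```python
import numpy as np

def grid_cell(vis, kernel_vals):
    """Kernel-weighted average of the visibilities falling in one uv cell (Jy)."""
    return np.sum(vis * kernel_vals) / np.sum(kernel_vals)

def interleaved_power(v1, v2):
    """Noise-unbiased power from two time-interleaved copies of the data.

    With the conjugation convention chosen here this equals Re(v1 * conj(v2));
    additive noise that is independent between the two interleaved copies
    cancels in expectation."""
    p_tot = np.abs(v1 + v2) ** 2
    p_diff = np.abs(v1 - v2) ** 2
    return 0.25 * (p_tot - p_diff)

def ergodic_stack(C_local):
    """Average C_l(nu1, nu2) over all channel pairs of equal separation,
    trading information on signal evolution for signal-to-noise."""
    n = C_local.shape[0]
    return np.array([np.diagonal(C_local, k).mean() for k in range(n)])
```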
For gridded visibility data, the MAPS algorithm can be applied directly, such that: \begin{equation} C_l(\nu_1,\nu_2) = \frac{1}{\Omega}\frac{\sum \mathcal{V}(U,\nu_1)\mathcal{V}^\ast(-U,\nu_2)W(\nu_1)W(\nu_2)}{\sum W(\nu_1)W(\nu_2)}, \end{equation} where $W(\nu)$ gives the weight at that frequency. For time-interleaved data, the final power is: \begin{equation} C_l(\nu_1,\nu_2) = \frac{1}{4}(C_{l,tot} - C_{l,diff}), \end{equation} and the dimensionless form is: \begin{equation} \mathcal{D}_l(\nu_1,\nu_2) = \frac{l(l+1)}{8\pi}(C_{l,tot}(\nu_1,\nu_2)-C_{l,diff}(\nu_1,\nu_2)) \,\,\,\, \text{mK}^2. \label{eqn:dcl} \end{equation} Equation \ref{eqn:dcl} computes the local MAPS between all pairs of spectral channels, providing the greatest flexibility for studying the signal and natively allowing for signal evolution, but at the expense of signal-to-noise. That is, each spectral channel difference is individually computed, and therefore the evolution of a given $\Delta\nu$ can be studied. Alternatively, one may consider a smaller cube, where the signal is ergodic throughout (i.e., no significant signal evolution from the back to the front of the cube), and spectral differences are stacked to increase the signal-to-noise ratio. In this calculation, any evolution of the signal on a given scale is erased. This latter approach matches more closely the conditions for a spherically-averaged power spectrum analysis, as shown by \citet{mondal22}. In this case, we compute: \begin{equation} \mathcal{D}_l(\Delta\nu) = \frac{l(l+1)\displaystyle\sum_{\nu_1,\nu_2}(C_{l,tot}(\Delta\nu)-C_{l,diff}(\Delta\nu))}{8\pi} \,\,\,\, \text{mK}^2. \label{eqn:ergodic} \end{equation} Both approaches are implemented in this work. The noise is obtained by considering the differenced visibilities: \begin{equation} \Delta \mathcal{D}_l(\nu_1,\nu_2) = \frac{l(l+1)}{8\pi}(C_{l,diff}(\nu_1,\nu_2)) \,\,\,\, \text{mK}^2. 
\end{equation} \subsection{Foregrounds} As discussed by \citet{mondal22}, foreground contaminating sources are problematic for MAPS due to their large spectral coherence (continuum sources). Despite some bright point sources being removed from the dataset, foregrounds remain a significant problem for 21~cm studies. To combat this, we employ a non-parametric foreground filter designed to match band-limited, discrete, flat-spectrum sources: DAYENU. The DAYENU filter \citep{dayenu} was introduced to provide a clean filter that suppresses signal on line-of-sight modes slower than a defined delay, while leaving faster modes (including those with EoR signal) unfiltered. We employ an adjusted version of this filter to suppress foregrounds. Missing frequency channels are first handled through a least-squares \citep[CLEAN; ][]{parsons09} solution, such that we apply a restoration filter: \begin{equation} \mathcal{R} = \mathcal{I}w - \mathcal{R}_D, \end{equation} where \begin{equation} \mathcal{R}_D = A (A^T w A)^{-1} A^T w \mathcal{I}, \end{equation} and $A$ is a matrix of the eigenvectors of the DAYENU matrix (Discrete Prolate Spheroidal Series), \begin{equation} \mathcal{C}_{mn} = \frac{2\pi\Delta\nu}{\epsilon} \mathrm{sinc}(2\pi\tau(\nu_m-\nu_n)), \end{equation} which operates on the spectral data. Fundamentally, the sinc function is the Fourier Transform of a top-hat covering a set of foregrounds extending to a fixed delay (e.g. the horizon) in the power spectrum. The delay ($\tau$ ns) and depth ($\epsilon$) of the filter can be adjusted to shape the filter. A value of $\epsilon=10^9$ is used throughout. \subsection{Data} The data used for this work are those processed to a Fourier power spectrum in \citet{trott20}. These data were obtained with the MWA over 2013--2016, and comprise 110~hours of observation on the EoR0 field (RA=0h, Dec=$-$27deg) spanning 30.72~MHz from 167--197~MHz, with 80~kHz spectral resolution. 
Only the East-West polarization is used, because it has a much lower response to power from the Galaxy for this field. MWA data contain regular missing spectral channels due to the channelisation filter, yielding two missing channels in each 1.28~MHz coarse band. These are treated as part of the filter process, but are omitted in the calculation of the MAPS (assigned zero weights). These data were calibrated with the RTS software pipeline, with 1000 sources subtracted (residual source flux density less than 50~mJy). In addition to the data, we also have a wide ($\sim7.5$~Gpc on a side) 21cmFAST \citep{mesinger11} simulation cube designed to match the large MWA primary beam and bandwidth in the 167--197~MHz frequency range \citep{greig22}. 21cmFAST efficiently generates 21~cm brightness temperature cubes using an excursion set formalism to calculate ionisations by UV photons emitted from galaxies described using a simple, six-parameter astrophysical model. Specifically, these parameters describe a mass-dependent star-formation and escape-fraction efficiency, a star-formation time-scale, and a minimum mass threshold for active star-forming galaxies. Here we use the parameters defined in \citet{barry19}, consistent with those from \citet{park19}. This simulation additionally assumes that the intergalactic neutral gas is sufficiently heated by a background heating source (e.g. X-rays) such that the spin temperature is larger than that of the cosmic microwave background. The 21cmFAST cube has been projected onto 384 Healpix nside 2048 maps, each corresponding to a frequency channel matching those of the real data detailed above. These maps were fed into the \texttt{WODEN} simulator~\citep{Line2022}, with each pixel in the Healpix maps input as a point source (a delta function). \texttt{WODEN} then calculates the measurement equation for all directions, essentially performing a direct Fourier transform for $\sim$25 million directions per frequency channel. 
In addition, a frequency-interpolated version of the MWA Fully Embedded Element primary beam model was calculated for all directions and applied to add the instrumental response. The simulated outputs can be processed through the same framework, using the gridded visibilities to produce the MAPS. \section{RESULTS} \label{sec:res} \subsection{Filter} The filter is designed to suppress structure on scales with delay smaller than a specified value. For this work, we aim to suppress the large-scale modes, while retaining signal on spectral separations smaller than $\sim$5~MHz; \citet{mondal22} shows that little power is expected on larger separations. The filter is applied to the full-band (30.72~MHz) data, prior to any sub-division of the band into smaller cubes (8--10~MHz). This produces the cleanest filter response. Figure~\ref{pdur} shows the recovery of MAPS as a function of spectral separation, for four different delays that are typical for foreground-dominated modes: 200~ns, 180~ns, 150~ns and 100~ns. The recovery is computed as the ratio of the post-filter to the pre-filter power in a unity-amplitude complex sinusoid, $s(\nu,\tau)=\exp(2\pi{i}\nu\tau)$. The 180~ns delay is chosen herein because it has excellent recovery below 5~MHz, but with the maximum suppression of larger scales. Despite the excellent recovery, there is some signal loss, which is corrected after application of the filter. We start by demonstrating the effect of the filter on the output of the MAPS algorithm. The ergodic MAPS algorithm is considered, being most akin to the power spectrum, whereby only the spectral difference is considered and the data are aggregated. \subsection{MAPS - no filter} Figure \ref{fig:filtercompare} (dashed) shows the output of MAPS for the same 192 channels (15~MHz) at the lower end of the high-band data ($z=6.8-7.5$), but without the 180~ns filter, for eight computed $l$ modes. Note that $l=100$ corresponds to $k_\bot=l/DM=0.017 h$~Mpc~$^{-1}$. 
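The filter construction and the recovery computation described above can be sketched as follows. This is our own minimal reconstruction, not the pipeline code: it assumes uniform weights and no missing channels, and `n_modes` is a free parameter of the sketch, chosen near the time-bandwidth product $2B\tau$:

```python
import numpy as np

def foreground_filter(freqs_hz, tau_s, n_modes):
    """Projection filter R = I - A (A^T A)^{-1} A^T built from the leading
    eigenvectors (DPSS modes) of the sinc covariance; uniform weights assumed."""
    nu = np.asarray(freqs_hz, float)
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(2 pi tau dnu) -> np.sinc(2 tau dnu)
    C = np.sinc(2.0 * tau_s * (nu[:, None] - nu[None, :]))
    eigvals, eigvecs = np.linalg.eigh(C)
    A = eigvecs[:, np.argsort(eigvals)[::-1][:n_modes]]  # dominant foreground modes
    return np.eye(len(nu)) - A @ np.linalg.inv(A.T @ A) @ A.T

def recovery(R, freqs_hz, tau_s):
    """Post- to pre-filter power ratio for s(nu) = exp(2 pi i nu tau)."""
    s = np.exp(2j * np.pi * freqs_hz * tau_s)
    return np.sum(np.abs(R @ s) ** 2) / np.sum(np.abs(s) ** 2)

# 384 channels of 80 kHz (30.72 MHz), 180 ns cut; 2*B*tau is roughly 11 modes.
freqs = 167e6 + 80e3 * np.arange(384)
R = foreground_filter(freqs, 180e-9, 16)
print(recovery(R, freqs, 0.0))      # smooth (low-delay) mode: strongly suppressed
print(recovery(R, freqs, 2000e-9))  # fast (high-delay) mode: passes nearly unchanged
```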
The power is consistently high across all estimated $l$-modes, with performance worsening on scales larger than 3~MHz. These values exceed the expected 21~cm power by at least six orders of magnitude, and are not usable for science. \subsection{Filtered MAPS -- ergodic} The output of Equation \ref{eqn:ergodic} is plotted for the first eight $l$ modes for the same $z=6.8-7.5$ set with the 180~ns filter, using logarithmic scales (Figure \ref{fig:filtercompare}, solid lines). The absolute value is plotted but, in general, some modes are negative owing to the filter response. A correction is applied to the power for spectral differences smaller than 4~MHz to alleviate the small signal loss that occurs due to the filter. It is clear that application of the filter reduces the signal by approximately 2--5 orders of magnitude depending on scale, and is therefore worthwhile to apply. There are noticeable features present in the filtered MAPS, however, and these correspond to the oscillatory structure observed in the filter response in Figure \ref{pdur}. Further tuning of the filter may be of benefit, but for these data is unlikely to yield further improvements. Excellent results are produced for $l=100, 200$, with larger modes showing commensurately poorer results in line with the $l^2$ scaling of the dimensionless MAPS. As a comparison, the 21cmFAST cube output is shown for the same redshift range and binning (Figure \ref{fig:21cm}). The results are similar to those in \citet{mondal22}, but with lower amplitude, commensurate with the higher redshift of this cube. In general, the data yield MAPS amplitudes two orders of magnitude higher than the simulated signal, matching the Fourier power spectrum results produced for the \citet{trott20} work. We also plot the first three $l$-modes for the full data (no filter), full data (180~ns filter), the 21~cm simulation (with filter), and the data noise level, in Figure \ref{fig:lrange}. 
The filter is again shown to have an impact on the contaminating power, but the data remain above the expected noise level, and the 21~cm signal power. The measured power bias relative to the thermal noise is evidence of residual systematics in the data. These plots also demonstrate that these data are not sufficient to detect this model signal, even in the absence of systematics, but are within an order of magnitude. \subsection{Filtered MAPS -- local} The local MAPS treats the power for each set of two spectral channels individually, without averaging. Without the assumption of ergodicity, it can be applied to the full-band (30.72~MHz) data as long as we only consider spectral differences for which the filter does not suppress signal ($\Delta\nu <$ 4~MHz). Figure \ref{fig:2d_data} shows contour plots of the dimensionless MAPS as a function of frequency, for $l=200, 300$. The inset shows a subset of the data for clarity. The data have been corrected for the signal loss due to the 180~ns filter for $\Delta\nu <$ 4~MHz. The red diagonal stripe at a spectral difference of $\sim$4~MHz is an artifact of the filter (where the cross power is transitioning from positive to negative, and the ratio of the power to the weights is undefined). Spectral separations larger than 4~MHz have been omitted. In general, there are frequencies (redshifts) where the MAPS is enhanced. Figure \ref{fig:2d_21cm} then shows the equivalent filtered MAPS for the 21~cm simulation. The same filter artifact can be observed here. Figure \ref{fig:2d_contrast} then shows the contrast ratio between the expected 21~cm power and the measured data (logarithmic scale). The scale is kept the same for both $l$-modes to show the differences. Greatest contrast is seen for $l=200$ ($k_\bot=0.034h$~Mpc~$^{-1}$) where the contrast ratio is generally 10$^2$--10$^3$, but observed as low as 10$^{1}$ in some regions. 
The contrast is poorer than for the ergodic MAPS, where data averaging reduces the noise and systematics further. We also tried averaging the data to 160~kHz resolution and performing the local MAPS on those data. As expected, the contrast between the 21~cm signal and the data did not change significantly, due to the data being systematics-limited. \section{Angular Power Spectrum} The data are also used to compute the single-frequency angular power spectrum, to connect with more traditional studies of this statistic; this corresponds to the $\nu_1=\nu_2$ term of Equation \ref{eqn:dcl} and to Equation \ref{eqn:aps}. All spectral channels are foreground dominated, and so the same filter is applied to reduce the level of contamination. The single-channel case also allows the data distribution to be studied, in contrast to the sample variance estimator, which uses all of the data blindly in the summation described in Equation \ref{eqn:dcl}. Figure \ref{fig:aps} shows data, 21~cm and noise dimensionless angular power spectra as a function of $l$ and for a set of redshifts, after application of the same 180~ns filter. The data are binned into coarse $\Delta{l}=100$ ($\Delta k_\bot=0.016h$~Mpc~$^{-1}$) bins, and have been averaged over the central 880~kHz in each 1.28~MHz coarse channel. The filtered data show the lowest values at low redshift, but are significantly higher than the expected signal, with a contrast that exceeds that observed in the spherically-averaged power spectrum, although only by a factor of 3--5. The expected 21~cm signal is also lowest at this redshift. The residual foreground power in each spectral channel is therefore less easily treated with the filter than with a line-of-sight spectral transform, but the results are not too degraded compared with the spherically-averaged power spectrum, and allow for greater granularity to study the signal evolution. The variance (power spectrum) can also be extracted from the data histograms. 
Typically, the data would be expected to be close to Gaussian-distributed, reflecting the near-Gaussian expectation for the signal and the Gaussian radiometric noise. Estimation of the data variance directly from the sample data histograms should yield the same value as the sample variance computation (which is normally used for power spectrum estimation, e.g. Equation \ref{eqn:dcl}). The presence of bright foregrounds may skew this distribution, leading to outlier $k$-modes in the dataset. To compute the sample data distributions, the real and imaginary components of the gridded visibilities are used to compute a histogram of the data for each $l$-mode and at each coarse channel, with data being combined across spectral channels to form an effective 880~kHz bandwidth. A Gaussian is then fitted to the gridded visibilities of the `totals' and the `differences' to estimate the sample variance. A histogram resolution of 0.1~mK is chosen to avoid discretization effects. Figure~\ref{fig:apsh} (left) shows example histogrammed data for the dimensionless angular power spectrum for the totals visibilities (red) and the differenced visibilities (blue), for $l=200$ and $z=6.8$. Figure \ref{fig:apsh} (right) shows a comparison of the angular power spectra using the sample variance (dashed) and computed from Gaussian fits to the histograms (solid). Use of the histograms generally yields improved results (lower power) across redshift and angular mode. At $l=200$, where some of the best results have been found with these data, the angular power is 53~mK$^2$ at $z=7.5$, compared with the 21~cm simulated power of 0.05~mK$^2$, yielding a data-to-signal contrast ratio of $\sim$1000, which exceeds that found in \citet{trott20} for the spherically-averaged power spectrum with the same data by a factor of 3--5. A similar ratio is obtained at $l=100$, where the measured power is $\Delta^2=10$~mK$^2$. 
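The histogram-based variance estimate can be sketched as follows (our reconstruction with synthetic Gaussian samples, not the pipeline code; fitting a parabola to the log of the counts is one simple way to realise the Gaussian fit, and the 0.1 bin width follows the text):

```python
import numpy as np

def histogram_variance(samples, bin_width=0.1):
    """Variance from a Gaussian fit to the sample histogram.

    Fitting log N(x) = c0 + c1*x + c2*x^2 gives sigma^2 = -1/(2*c2);
    unlike the plain sample variance, the fit is insensitive to a
    handful of bright outliers in the tails of the distribution."""
    edges = np.arange(samples.min(), samples.max() + bin_width, bin_width)
    counts, edges = np.histogram(samples, bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    good = counts >= 10  # ignore sparsely filled tail bins
    c2, _c1, _c0 = np.polyfit(centres[good], np.log(counts[good]), 2)
    return -1.0 / (2.0 * c2)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 200_000)
print(histogram_variance(x))  # close to the true variance of 4.0
```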
\section{Discussion} The foreground filter does a reasonable job of removing much of the correlated spectral structure, but spectral structure remains on EoR scales, leaving excess power in the MAPS. In spherically-averaged power spectrum estimation, a line-of-sight spectral transform must be performed to move from configuration space to $k$-space, and this is problematic due to the data being discrete and band-limited. Typically, a spectral taper is applied prior to the transform to suppress sidelobes, but this is prone to error when applied to discrete data, and comes at the expense of a broader main lobe (correlating line-of-sight $k$-modes). Several groups have studied the effect of the taper application and spectral transform on leakage of foreground power into EoR modes \citep[e.g.,][]{lanman20,barry22}, showing that leaked power can be a source of excess power. With MAPS, no such transform is required, and foregrounds need to be treated separately. Thus, the residual (excess) power relative to the noise level observed in these data is indicative of inherent spectral variability on those scales, and not of leaked power from larger modes. This conclusion is not unreasonable, because the data calibration only considered unpolarized point sources and extended foregrounds, with unpolarized diffuse, polarized diffuse, and polarized point-source models omitted. Polarized emission, which is known to be significant in this observing field \citep{bernardi13,lenc17}, will imprint spectral structure for sources with non-zero Faraday depth. In general, the single-frequency angular power spectrum yields a higher contrast ratio than the local MAPS, suggesting that there are benefits to considering small spectral differences, where the 21~cm signal power is reduced, but the foregrounds also partially decohere. 
\section{Conclusions} The Multi-Frequency Angular Power Spectrum (MAPS) has been applied to a deep 110-hour integration of the EoR0 field from the MWA EoR project over $z=6.5$--$7.5$. These data have been previously used with a spherically-averaged power spectrum to produce scientifically-relevant upper limits on the power spectrum of brightness temperature fluctuations in the hyperfine transition line of cosmological neutral hydrogen. The angular power spectrum has the advantage of not requiring a band-limited line-of-sight spectral transform, which mixes line-of-sight $k$-modes and needs to be performed on (approximately) ergodic subsets of a full dataset. However, it suffers from contamination from residual continuum foreground signal, which is highly dominant and cannot easily be distinguished without spectral information. Here, a filter is applied to the broadband dataset prior to estimation of the angular power spectrum, to remove smoothly-varying signal structure. This improves the power spectrum estimation by two orders of magnitude, but still yields poorer results relative to the expected 21~cm signal compared with the spherically-averaged power spectrum. Treatment of foregrounds differs between the angular and spherically-averaged power spectrum approaches; the former uses a filter, while the latter relies on the properties of the Fourier Transform. The filter employed here is shown to improve its performance with larger initial bandwidths \citep{dayenu}, whereas the Fourier Transform cannot be used over bandwidths that destroy the assumption of signal ergodicity. Thus, increasing the bandwidth of experimental data may improve the performance of the APS compared with the spherically-averaged power spectrum. Additionally, other foreground filters may be explored and employed \citep{pal21}. In future, a combination of spectral and spatial \citep[e.g., GMCA, ][]{chapman14} filtering may be required to yield improved results. 
\begin{acknowledgement} This research was partly supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. CMT is supported by an ARC Future Fellowship under grant FT180100321. The International Centre for Radio Astronomy Research (ICRAR) is a Joint Venture of Curtin University and The University of Western Australia, funded by the Western Australian State government. The MWA Phase II upgrade project was supported by Australian Research Council LIEF grant LE160100031 and the Dunlap Institute for Astronomy and Astrophysics at the University of Toronto. This scientific work makes use of the Murchison Radio-astronomy Observatory, operated by CSIRO. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. We acknowledge the Pawsey Supercomputing Centre which is supported by the Western Australian and Australian Governments. Data were processed at the Pawsey Supercomputing Centre. RM is supported by the Wenner-Gren Postdoctoral Fellowship and GM acknowledges support by Swedish Research Council grant 2020-04691. RM is also supported by the Israel Academy of Sciences and Humanities \& Council for Higher Education Excellence Fellowship Program for International Postdoctoral Researchers. \end{acknowledgement} \bibliographystyle{aa} %
Title: Classical OBe Stars as Post-Supernova Runaways: Confirming Binary Origins
Abstract: Massive binaries play an important role in fields ranging from gravitational wave astronomy to stellar evolution. We provide several lines of evidence that classical OBe stars in the Small Magellanic Cloud (SMC) obtain their rapid rotation from mass and angular momentum transfer in massive binaries, which predicts that the subsequent supernovae should often eject OBe stars into the field. We find that (1) OBe stars have a higher field frequency than OB stars; (2) our cumulative distribution function (CDF) of stellar distances from O stars shows that OBe stars are indeed much more isolated than ordinary OB stars of corresponding spectral types; (3) the CDFs of OBe stars approach that of high-mass X-ray binaries (HMXBs), which are confirmed post-supernova objects; and (4) Oe stars are as isolated from clusters as Be stars, implying that their final masses are relatively independent of their initial masses, consistent with major mass transfer. Lastly, we also find that the spatial distribution of supergiant OBe stars differs from that of classical OBe stars, consistent with the different mechanism responsible for their emission-line spectra.
https://export.arxiv.org/pdf/2208.10408
\title{CLASSICAL OBe STARS AS POST-SUPERNOVA RUNAWAYS: CONFIRMING BINARY ORIGINS} \correspondingauthor{M. S. Oey} \email{msoey@umich.edu} \author[0000-0001-9692-9751]{Matthew M. Dallas} \affil{Department of Astronomy, University of Michigan, 1085 South University Ave., Ann Arbor, MI 48109-1107, USA \\ } \affil{Present address: Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218} \author[0000-0002-5808-1320]{M. S. Oey} \affil{Department of Astronomy, University of Michigan, 1085 South University Ave., Ann Arbor, MI 48109-1107, USA \\ } \author[0000-0003-0521-473X]{Norberto Castro} \affiliation{Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, 14482, Potsdam, Germany} \keywords{Be stars --- Oe stars --- core-collapse supernovae --- stellar evolutionary models --- high-mass X-ray binary stars --- circumstellar disks --- interacting binary stars --- multiple star evolution --- compact objects --- star clusters --- Small Magellanic Cloud --- runaway stars} \section{Introduction} \label{sec:intro} Classical OBe stars are non-supergiant OB stars that exhibit Balmer emission lines in their spectra, first observed in 1866 \citep{Secchi1866}. The emission lines result from circumstellar disks that are expelled by near-critical stellar rotation \citep[e.g.,][]{Rivinius2013}, and how the stars obtained their fast rotation has not been well understood. The transfer of mass and angular momentum in binary interactions \citep[e.g.,][]{Kriz, Pols} is a model that has recently gained much traction. The vast majority of OB stars are binaries \citep[e.g.,][]{Sana, Chini}, and binary population synthesis models produce fast-rotating populations consistent with the parameters of the observed Be star population in the Milky Way \citep{Shao, Boubert}. 
Moreover, this model predicts that Be stars should often have post main-sequence binary companions, and these have been observationally verified for a number of objects \citep[e.g.,][]{Wang, Klement2019, Bodensteiner}. In the binary model for OBe stars, the more massive primary fills its Roche lobe and becomes a mass donor to the companion, thereby increasing the mass gainer's angular momentum enough to generate the decretion disk \citep[]{Pols}. Massive donors later explode as supernovae (SNe), accelerating the mass gainers and often unbinding them from star clusters \citep[e.g.,][]{Blaauw1961}. This type of ejection is dubbed the binary supernova scenario \citep[BSS; ][]{Hoogerwerf}. Be stars have higher proper motions than B stars \citep[]{Berger}, supporting this scenario. Furthermore, the number of OBe field stars in the Small Magellanic Cloud (SMC) is consistent with the number of expected BSS ejections \citep[]{Johnny}, and almost all high-mass X-ray binaries (HMXBs) in that galaxy are emission line stars \citep[e.g.,][]{Maravelias}. Recently, \citet{Hastings21} find that the frequency of OBe stars in clusters is consistent with binary mass-transfer origins. However, the possibility also remains that some OBe stars originate from a different mechanism \citep[e.g.,][]{Langer98}. Here, we further examine whether massive OBe stars are largely post-SN objects. If so, their enhanced transverse velocities \citep[e.g.,][]{Renzo} would cause them to be more isolated than they would be in single-star models for the OBe phenomenon \citep[e.g.,][]{Ekstrom2008}. \citet[][hereafter ST15]{ST} used this principle to demonstrate that luminous blue variables are likely mass gainers that are "kicked" into the field by BSS ejections. We apply the same spatial analysis used by ST15 to test whether OBe stars systematically avoid clusters compared to non-emission line stars. 
Following ST15, we compile the cumulative distribution function (CDF) of the projected separations between OBe stars and their nearest O stars, and we compare with the non-OBe stars. Since we expect O stars not to travel far from their birth clusters, the distance to the nearest O star effectively measures the relative isolation of the target star. \section{The SMC OB Star Sample} \label{sec:Sample Description} We use the \citet[][hereafter OKP]{OKP} sample of 1360 OB stars in the SMC with spectral types earlier than $\sim$B2. This sample is photometrically selected, based on the $UBVI$ survey of \citet[]{Massey2002}, and it is spatially complete over most of the star-forming body of the SMC. The field component is largely complete for masses $\gtrsim 20\ \rm M_\odot$ \citep{Lamb2013, RIOTS4}, with a higher limit for clusters. Existing OB spectral types are obtained from spectroscopic data compiled in the SIMBAD database, primarily from the SMC surveys of \citet[][RIOTS4]{RIOTS4}, \citet{Massey2002}, and \citet{Evans2004}. In cases where two conflicting spectral types were reported for a given star, we generally adopted the RIOTS4 value if available, and otherwise the type with the highest spectral resolution was chosen; if more than two spectral types were found, the most frequently identified type was adopted. Stars with no published spectral type are retained as OB candidates. The adopted spectral types are given in Table~\ref{tab:Catalog}. 
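The adoption rule for conflicting spectral types can be expressed as a small selection function. This is an illustrative sketch only; the survey labels, tuple layout, and `resolution` field are hypothetical stand-ins, not the actual catalog format:

```python
from collections import Counter

def adopt_spectral_type(entries):
    """Adopt one spectral type from a list of (survey, sptype, resolution)
    tuples for a single star, following the rule described in the text:
    prefer the RIOTS4 value; with more than two reports, take the most
    frequently identified type; otherwise take the higher-resolution type."""
    riots4 = [e for e in entries if e[0] == 'RIOTS4']
    if riots4:
        return riots4[0][1]
    if len(entries) > 2:
        # Most frequently identified type among all reports.
        return Counter(e[1] for e in entries).most_common(1)[0][0]
    # Two conflicting reports: take the higher spectral resolution.
    return max(entries, key=lambda e: e[2])[1]
```

For example, a star reported as `('RIOTS4', 'O8.5 V', ...)` and `('Evans2004', 'O9 V', ...)` would adopt the RIOTS4 type.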
\begin{deluxetable*}{ccclcccc} \tablecaption{OKP Sample} \tablewidth{0pt} \tablehead{ \colhead{[M2002]\tablenotemark{a}} & \colhead{RAdeg (J2000)} & \colhead{DEdeg} & \colhead{SpType\tablenotemark{b}} & \colhead{Field/Group\tablenotemark{c}} & \colhead{OBe\tablenotemark{d}} & \colhead{$m_{\rm up}\ (M_\odot)$\tablenotemark{e}} & \colhead{$m_{\rm low}\ (M_\odot)$\tablenotemark{e}} } \startdata 107 & 10.1171 & --73.5425 & Be3 & F & e & 48 & 48\\ 298 & 10.1832 & --73.4063 & B1e3+ & F & e & 17 & 17\\ 1037 & 10.3880 & --73.4256 & B0.5 V & F & \nodata & 14 & 14\\ 1600 & 10.5417 & --73.2323 & O8.5 V / Be? & F & e? & 22 & 22\\ 1631 & 10.5515 & --73.3866 & B1e2 & F & e & 22 & 22\\ \enddata \tablenotetext{a}{ID from \citet{Massey2002} SMC catalog.} \tablenotetext{b}{Spectral types obtained as described in the text. RIOTS4 OBe classifications include Arabic numerals corresponding to \citet{Lesh1968} emission classes following the "e" designation. } \tablenotetext{c}{Field or Group membership (see text), denoted as F and G, respectively.} \tablenotetext{d}{OBe and candidate OBe stars are indicated with "e" and "e?", respectively. Note that OBe supergiants are also identified.} \tablenotetext{e}{Empty values are those where our method was unable to obtain a mass estimate.} \tablecomments{Table~\ref{tab:Catalog} is published in its entirety in machine-readable format. A portion is shown here for guidance regarding its form and content.} \label{tab:Catalog} \end{deluxetable*} OKP OBe stars not identified from the main spectroscopic surveys above were mostly identified from \citet[]{Meyssonnier1993}, who carried out a spatially complete, objective prism survey of the entire SMC, targeting H$\alpha$ emission-line objects down to a photographic magnitude of 18 in the continuum. We also utilize \citet[]{Mennickent2002}, who identified Be candidates primarily from photometric variability. 
If a star was listed in these catalogs as an emission-line object, and was given a spectral type of O or B in a different catalog, it was included in our certain or candidate Oe and Be star lists and identified as "Em*" \citep{Meyssonnier1993} and/or "Be?" \citep{Mennickent2002} in Table~\ref{tab:Catalog}. \citet{Aadland2018} discuss how the nearest O-star CDFs for different OB spectral classes can depend on the completeness of spectral type availability. Since \citet[]{RIOTS4} obtained spectra of nearly all the field stars, the parent OKP sample has a strong bias for available spectral types in the field relative to groups. Appendix~\ref{sec:completeness} reviews the completeness of the OKP sample relative to other SMC OBe and OB surveys. However, we stress that what matters here is whether there is a {\it relative} bias against OKP OBe detections in clusters compared to field. Comparison with the Evans surveys shows no evidence of such a bias, especially considering that \citet{Evans2006} specifically considers the rich clusters NGC 330 and 346 (Table~\ref{tab:Completeness}). Indeed, OBe stars are more easily detected than OB stars since they are unlikely to be newborn, embedded objects, and therefore have enhanced luminosities relative to the ZAMS. Moreover, the disk emission contributes to their optical fluxes and provides easily-detected line emission. Thus we caution that these effects generate a bias favoring the detection of OBe stars, potentially setting a lower mass selection than for OB stars. On the other hand, general incompleteness in a few highly reddened regions may dilute this selection bias, but this mainly applies to the densest clusters like NGC~330 \citep[e.g.,][]{BodensteinerN330} and NGC~346. 
\begin{deluxetable*}{llcccccccc} \tablecaption{Populations of SMC OB stars} \tablewidth{0pt} \tablehead{ & & \multicolumn{4}{c}{Numbers in Populations\tablenotemark{a}} & \multicolumn{3}{c}{Nearest O-Star Separations (pc)\tablenotemark{b}} \\ \colhead{Row} & \colhead{Spectral Type} & \colhead{Field} & \colhead{Groups} & \colhead{Total} & \colhead{Proportion in Field} & \colhead{Median} & \colhead{ Mean} & \colhead{Std Err} } \startdata (1) & Early O (O3--O7) & 15 & 39 & 54 & $0.28\pm0.08$ & 19 & 28 & 5.1 \\ (2) & Late O (O8--O9) & 60 & 75 & 135 & $0.44\pm0.07$ & 28 & 40 & 4.2 \\ (3) & B (B0--B2) & 91 & 102 & 193 & $0.47\pm0.06$ & 30 & 45 & 4.4 \\ (4) & OB\tablenotemark{c} & 11 & 383 & 394 & $0.03\pm0.01$ & 13 & 21 & 1.6 \\ (5) & Oe & 22 & 16 & 38 & $0.58\pm0.16$ & 34 & 56 & 9.4 \\ (6) & Be & 115 & 108 & 223 & $0.52\pm0.06$ & 39 & 63 & 4.9 \\ (7) & OBe\tablenotemark{c} & 11 & 107 & 118 & $0.09\pm0.03$ & 21 & 30 & 2.6 \\ (8) & OB or OBe\tablenotemark{c} & 5 & 24 & 29 & $0.17\pm0.08$ & \nodata & \nodata & \nodata \\ (9) & O,B HMXB\tablenotemark{d} & 4 & 3 & 7 & $0.57\pm0.36$ & 54 & 54 & 5.5 \\ (10) & OBe HMXB & 19 & 20 & 39 & $0.49\pm0.14$ & 47 & 61 & 8.1 \\ \hline % (11) & {Total confirmed O\tablenotemark{e,f} }& 81 & 121 & 202 & $0.40\pm0.05$ & 26 & 37 & 3.1 \\ (12) & Total OB\tablenotemark{f,g} & 174 & 599 & 773 & $0.23\pm0.02$ & 22 & 36 & 1.8\\ (13) & Total OBe\tablenotemark{g} & 167 & 251 & 418 & $0.40\pm0.04$ & 36 & 57 & 3.3\\ (14) & Total OB and OBe\tablenotemark{f,h} & 346 & 874 & 1220 & $0.28\pm0.02$ & \nodata & \nodata & \nodata \\ (15) & {Total HMXB\tablenotemark{i}} & 24 & 25 & 49 & $0.49\pm0.12$ & 48 & 58 & 6.5\\ \hline (16) & Cross-class binaries\tablenotemark{f} & 7 & 3 & 10 & $0.47\pm0.21$ & \nodata & \nodata & \nodata \\ (17) & O,B I/II & 21 & 59 & 80 & $0.26\pm0.06$ & 24 & 42 & 4.9 \\ (18) & OBe I/II\tablenotemark{d} & 6 & 35 & 41 & $0.15\pm0.06$ & 29 & 46 & 7.7 \\ \enddata \tablenotetext{a}{OKP stars with spectral type $\leq$ B2. 
Supergiants (luminosity class I/II) are excluded from all categories except in rows 15, 17, and 18. HMXBs are excluded from rows 1--8, but included in rows 11--15. "Group" stars are non-field stars.} \tablenotetext{b}{Listed values are calculated for the CDFs shown in Figure~\ref{fig:f2}a.} \tablenotetext{c}{"OB" stars have uncertain classification between Early vs Late O or between O vs B; "OBe" stars between Oe vs Be. "OB or OBe" stars have uncertain emission-line status. There are no published spectral types for 373 "OB" stars and 107 "OBe" stars. } \tablenotetext{d}{Includes one star of uncertain emission-line status.} \tablenotetext{e}{Includes 5 Field and 7 Group stars of uncertain Early vs Late O status from row 4, and 1 Field O HMXB from row 9.} \tablenotetext{f}{For our analysis, binaries with two stars of the same classification are treated as a single star of the relevant class, as are binaries with a member not meeting our selection criteria. Ten binaries in row 16 have components individually included in rows 1--4, but treated as single stars in rows 11, 12 and 14.} \tablenotetext{g}{Total number excluding row 8.} \tablenotetext{h}{Total number including row 8.} \tablenotetext{i}{All OKP HMXBs, including 1 B[e] HMXB and 2 supergiant HMXBs, which are excluded from rows 9--14. These all represent post-SN objects.} \label{tab:Populations} \end{deluxetable*} We sorted the stars into the categories shown in Table~\ref{tab:Populations}, separating out supergiants (luminosity classes I and II). We exclude 16 other stars such as B[e] stars, WR stars, and stars with spectral types $>$ B2. Our final sample size, omitting these 16, is 1344 stars. Tables~\ref{tab:Catalog} and \ref{tab:Populations} indicate field and group stars, with groups defined as having at least 3 stars associated by the friends-of-friends algorithm for a clustering length of $l_c=28$ pc \citep{OKP}. Single stars and those with only one other OB star within $l_c$ are defined here to be field stars. 
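The field/group split can be sketched with a simple friends-of-friends pass over projected positions. Only the $l_c=28$ pc linking length and the three-star group threshold come from the text; the positions below are mock values:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def fof_field_flags(xy, l_c=28.0, n_min=3):
    """Flag field stars: members of friends-of-friends groups with fewer
    than n_min stars, for clustering length l_c (projected positions in pc)."""
    tree = cKDTree(xy)
    # Link every pair of stars separated by less than l_c.
    pairs = np.array(list(tree.query_pairs(l_c)))
    n = len(xy)
    if len(pairs):
        i, j = pairs.T
        adj = coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n))
    else:
        adj = coo_matrix((n, n))
    # Connected components of the linkage graph are the candidate groups.
    _, labels = connected_components(adj, directed=False)
    sizes = np.bincount(labels)
    return sizes[labels] < n_min   # True = field star

# Mock positions (pc): a chained five-star group plus two isolated stars.
xy = np.array([[0, 0], [10, 0], [20, 5], [5, 15], [15, 20],
               [200, 200], [400, -100]], float)
field = fof_field_flags(xy)   # the two isolated stars are flagged as field
```

Because the linkage is transitive, a chain of stars each within $l_c$ of a neighbour forms a single group, matching the friends-of-friends definition.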
We adopt an SMC distance modulus of 18.9 \citep{Harries2003}. We see that of the total 418 OBe stars, $40\%\pm4$\% occur in the field, despite the field star sample making up only $28\%\pm2$\% of the sample. Figure \ref{fig:f1} also shows that OBe stars have a greater spatial extent than OB stars, extending to larger distances from the star-forming body of the SMC. \section{Spatial isolation of OBe stars} \label{sec:Results} Figure \ref{fig:f2}a plots the CDFs of separations from the nearest O stars that are not Oe, HMXB, or supergiant, for the different stellar classes in Table~\ref{tab:Populations}. These are obtained by first constructing the "maximum" CDF using all stars in a given population, including those with uncertain spectral types; and second, the "minimum" CDF using only stars with spectral types that are not reported to be uncertain. This range is shown by the shaded regions for the O and Be star populations as representative examples. The adopted CDFs are then the medians between these extremes. Binary stars with members in two different populations are included in both when calculating the CDF. Table~\ref{tab:accounting} gives the numbers and detailed accounting of the membership in the OB and OBe CDFs, and Table~\ref{tab:Populations} gives the separations from nearest O stars for each category as obtained from the CDFs. Table \ref{tab:Populations} shows that almost all the OB stars with uncertain classifications are in groups, and moreover, they constitute almost half of our entire OB sample. However, we note that the OKP sample is based on uniform photometric selection of OB stars, and the uncertainty in the spectral classifications is accounted for by our method based on the maximum and minimum CDFs. Our sample does include 63 stars with spectral types identified by ranges such as "B0-5" or "B1-3", meaning that they may be later than the nominal limit of B2. Excluding these from our sample results in an insignificant offset to the CDF medians. 
For B, Be, and OB supergiants, their removal affects the median by less than 1.0 pc; and for HMXBs and OBe supergiants, the median changes by 2.4 pc and 1.4 pc, respectively. Thus, the inclusion of these stars does not make an appreciable difference to our results. The CDFs in Figure~\ref{fig:f2}a allow all of the OB stars with uncertain classifications (Table~\ref{tab:Populations}, line~4) to serve as "home base" O stars, i.e., candidate nearest O stars. Note that these do not necessarily represent the parent clusters of the target stars, since the clustering length for OB stars is only 28 pc \citep{OKP}; thus the CDF should simply be regarded as a measure of relative isolation. If we allow only confirmed O stars (Table~\ref{tab:Populations}, line~11) to serve as "home base" stars, we obtain the CDFs in Figure~\ref{fig:f2}b. This can also be compared with Figure~\ref{fig:f2}c, which allows only field O stars, including "OB" stars, to serve as "home base" stars, thereby forcing all populations to follow the field star CDF. Since almost all of the uncertain "OB" stars belong to groups, we see that Figure~\ref{fig:f2}b effectively removes many groups, which moves the CDFs for some populations farther into the field. Therefore, Figure~\ref{fig:f2}a is more realistic than Figure~\ref{fig:f2}b. Figure \ref{fig:f2}a indeed shows the expected progression of early O stars being the least isolated, followed by late O stars, and then B stars, confirming the trend reported by ST15. However, we also immediately see that both Oe and Be stars are farther out in the field than their non-emission-line counterparts. Moreover, Oe and Be stars are farther in the field than even evolved supergiant OB stars. These trends are supported by Mann-Whitney statistics, which test for difference in location of distributions. The $p$-values for comparing the Late O vs Oe and B vs Be CDFs are 0.089 and 0.005, respectively. In contrast, $p > 0.5$ for both Late O vs B, and Oe vs Be. 
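Location tests of this kind can be reproduced with the standard two-sample Mann-Whitney U test. The separations below are synthetic lognormal stand-ins with an assumed shift, not the measured catalog values:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Synthetic nearest-O-star separations (pc) for two classes, with the
# emission-line sample shifted to larger field distances (assumed shift).
sep_b  = rng.lognormal(mean=3.4, sigma=0.8, size=190)
sep_be = rng.lognormal(mean=3.8, sigma=0.8, size=220)

# Two-sided test for a difference in the location of the two distributions.
stat, p = mannwhitneyu(sep_b, sep_be, alternative='two-sided')
```

A small $p$-value rejects identical distribution locations, as found here for B vs Be; two samples drawn from the same distribution, as for Oe vs Be, would typically give $p$ well above 0.05.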
The trends are corroborated by our findings above that OBe stars have both a higher field star frequency and greater spatial distribution than their OB counterparts (Section~\ref{sec:Sample Description}). The Oe and Be CDFs in Figure~\ref{fig:f2}a are statistically indistinguishable ($p = 0.83$), in contrast to the corresponding OB distributions. This implies that on average, Be stars do not drift further into the field than Oe stars, contrary to expectations based on the relative lifespans of O and B stars. This is consistent with the mass-transfer scenario for OBe stars, whereby the masses of the gainers are altered by varying amounts of mass transfer, which can also affect both the spectral type and nuclear burning lifetime, diluting the expected relationship between spectral type and age, and therefore, field distance. Thus, these results support the scenario that OBe stars originate from BSS ejections in massive binary systems. This is consistent with the findings of \citet{Johnny}, who find that the field OBe population statistics and kinematics are broadly consistent with BSS origins. The effect may also be amplified by the two-step ejection process \citep{Pflamm2010} discussed in that work. We caution that certain effects may complicate our findings. First, the CDF trend could be mimicked if the OBe stars originate from systematically lower ZAMS masses than the OB stars. Since OBe stars in the SMC tend to be slightly older than other OB stars \citep{Martayan2007b} and have decretion disks, they appear slightly redder, which may introduce a slight bias toward selecting lower-mass objects with our photometric selection criteria. Second, there are likely many binary mass gainers and quiescent OBe stars without currently active decretion disks, which appear in the OB CDF. The two effects counteract each other, so it is unclear how they affect our final results. 
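The isolation measure itself, the empirical CDF of projected separations to the nearest "home base" O star, can be sketched as follows; all positions here are mock values on a hypothetical 1 kpc field:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_o_star_cdf(targets_xy, ostars_xy):
    """Empirical CDF of projected separations (pc) from each target star
    to its nearest 'home base' O star."""
    d, _ = cKDTree(ostars_xy).query(targets_xy)
    sep = np.sort(d)
    frac = np.arange(1, len(sep) + 1) / len(sep)
    return sep, frac

# Mock positions: O stars and target stars scattered over a 1 kpc field.
rng = np.random.default_rng(0)
ostars = rng.uniform(0, 1000, (50, 2))
targets = rng.uniform(0, 1000, (200, 2))

sep, cdf = nearest_o_star_cdf(targets, ostars)
median_sep = np.interp(0.5, cdf, sep)   # separation where the CDF crosses 0.5
```

In the paper's construction, such a CDF is built twice per population, once including stars of uncertain type (maximum) and once excluding them (minimum), and the adopted curve is the median between the two.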
We therefore also construct CDFs similar to Figure~\ref{fig:f2}, but for OB and OBe stars binned by mass ranges instead of spectral type. The masses are obtained using photometry from \citet{Massey2002} and the spectral types compiled above. We convert from spectral type to $T_{\rm eff}$ following \citet[][see also Lamb et al. 2013]{Castro2021}; luminosities are obtained from $\log g$ based on the photometry, $T_{\rm eff}$, and the extinction curve of \citet{Fitzpatrick2007}. The masses (Table~\ref{tab:Catalog}) are obtained by interpolating the positions of the stars on the H-R diagram using the \citet{Brott2011} evolutionary tracks for SMC metallicity. Stars with ambiguous spectral types of simply "O", "B", and "OB" are assigned ranges of O3 -- O9, B0 -- B3, and O3 -- B3, respectively. This causes an artificial bimodality in the resulting mass distribution (Figure~\ref{fig:MassDistrib}) because most of these ranges allow spectral types as early as O3 for any given star. In reality, the vast majority of stars with these uncertain types must have spectral types near the late end of their ranges, as dictated by the initial mass function. Thus, the number of high-mass stars is significantly overestimated. The main point of the comparison, however, is to demonstrate that there is no significant offset in the OBe masses relative to those of OB stars, which indeed appears to be the case. For completeness, Figure~\ref{fig:CDFmass} shows the resulting CDFs for $m> 30\ \msun$, $20 < m \leq 30\ \msun$, and $m \leq 20\ \msun$. As in Figure~\ref{fig:f2}a, the adopted CDFs are the medians between the maximum CDF, which uses all stars in the given mass range, including those which have only an upper or lower limit within the range, or where a mass estimate could not be obtained; and the minimum CDF, which uses only stars for which both the upper and lower mass limits are within the given range. These CDFs are therefore constructed analogously to those in Figure~\ref{fig:f2}a. 
Representative examples of maximum and minimum CDFs are shown in the shaded regions for the OB stars with $> 30\ \msun$ and OBe stars having $< 20\ \msun$. As a result of the overestimated upper masses for stars with uncertain spectral types, the maximum CDFs tend to be overpopulated and therefore the median CDFs are biased toward smaller field distances, especially for the highest-mass population. A similar, but less pronounced, effect therefore applies to Figure~\ref{fig:f2}. Overall, Figure~\ref{fig:CDFmass} confirms the trends in Figure~\ref{fig:f2}, including the trend that OBe stars are at larger field distances than OB stars in the same mass ranges. \subsection{Comparison with HMXBs} If OBe stars are BSS-ejected objects, then their CDFs should reside at a locus similar to that of other known BSS populations. We therefore consider the HMXBs in our sample \citep[]{Haberl}, which are known post-SN objects. Table~\ref{tab:Populations} shows that 39 of the 49 HMXBs are confirmed OBe stars, thus the comparison is not independent, but this demonstrates a close link between OBe stars and HMXBs. Figures \ref{fig:f2}a, \ref{fig:CDFmass} and Table~\ref{tab:Populations} compare the CDFs for the non-HMXB Oe and Be stars, and all the HMXBs, since there are so few non-OBe HMXBs. The similarity of the Oe and Be CDFs with HMXBs ($p=0.28$ in Figure~\ref{fig:f2}) supports the scenario that post-SN systems may dominate the classical OBe star population \citep{Vinciguerra2020}. This is consistent with \citet[]{Bodensteiner}, who find no main-sequence companions for any Be systems in the Milky Way. Figures~\ref{fig:f2}a, \ref{fig:CDFmass} and Table~\ref{tab:Populations} are also consistent with HMXBs being the population furthest into the field, consistent with expectations that they represent the tightest binaries, capable of surviving the supernova kicks. Models show that they are more strongly accelerated than unbound objects \citep[][]{Renzo, Podsiadlowski}. 
Comparing Figures~\ref{fig:f2}a and \ref{fig:f2}b, we see that HMXBs are especially sensitive to the removal of groups, implying that they originate from sparser OB groups than the other populations. This suggests that they originate from more evolved groups, and therefore may correspond to lower-mass, rejuvenated progenitors. This needs to be confirmed, but is consistent with expectations that, at the lower SMC metallicity, HMXBs dominate at lower progenitor masses than at high metallicity \citep[e.g.,][]{Heger2003}. \section{Discussion} \label{sec:Discussion} The relative isolation of OBe stars supports the model whereby these emission-line stars form through massive binary interaction, since the companion gains both mass and angular momentum \citep[e.g.,][]{Pols}. If the mass gainer approaches its critical rotation velocity as a consequence of stellar evolution \citep[e.g.,][]{Zhao2020}, a decretion disk forms, and the gainer becomes an OBe star. The donor continues to evolve, likely becoming a stripped, He-burning star, and subsequently a core-collapse SN, kicking the system. The end state of non-disrupted massive binaries may often be double compact systems. The high frequency of massive OBe stars in the SMC implies that mass transfer in our objects is dominated by Case A or B, and unlikely to have significant contributions at later evolutionary stages. This is also consistent with Be X-ray binary population synthesis models \citep{Vinciguerra2020}. The scenario that most OBe stars represent objects that have been spun up by mass transfer predicts that we should expect to see pre-SN examples of this interaction. Two of our OBe stars, [M2002]SMC-30744 and 41095, are double-lined spectroscopic binaries, one of which is also an eclipsing binary. These interacting binaries are not classical OBe stars, and thus excluded from the OBe sample. But they may represent examples of actively accreting systems that will evolve into classical OBe stars. 
We note that OBe supergiants are found at the same distances as OB supergiants (Mann-Whitney $p=0.33$), and not with the other OBe stars (Figure~\ref{fig:f2}a, Table~\ref{tab:Populations}). This is consistent with an evolved population that has a different origin for the emission lines, which are believed to form in the stellar winds rather than a decretion disk \citep{Puls2008}. \section{Conclusion} \label{sec:Conclusion} Binary mass transfer is increasingly seen as the mechanism for spinning up classical OBe stars, enabling formation of their decretion disks. In particular, previously hidden subdwarf companions are now observed for lower-mass Be stars \citep[e.g.,][]{Wang}, binary population synthesis models produce Be star populations that are consistent with observations \citep{Shao, Boubert}, and a lack of main-sequence companions to massive Be stars appears to be established \citep[]{Bodensteiner}. Moreover, the numbers of OBe stars seem generally consistent with post-SN binary origins \citep{Johnny}. Using the spatially complete sample of 1344 OB stars in the SMC from \citet{OKP}, we show that the spatial distribution of OBe stars further confirms that these objects experienced BSS ejections, as follows. (1) We find that our field stars correspond to $40\%\pm4$\% vs $28\%\pm2$\% of OBe and OB stars, respectively, consistent with the visual impression of their relative spatial distribution (Figure~\ref{fig:f1}). Using the CDF of stellar distances to nearest O stars to evaluate relative isolation for different populations, we find that, (2) OBe populations are more isolated than their counterpart OB populations (Figure~\ref{fig:f2}a, Table~\ref{tab:Populations}). Moreover, (3) OBe stars reside at distances into the field similar to that of HMXBs, which are known post-SN binaries. Finally, (4) the Oe and Be-star CDFs occupy the same locus, contrary to their spectroscopic life expectancies. 
This implies that their observed spectral types are inconsistent with their birth masses, again supporting the mass-transfer scenario. Thus, \textit{several lines of evidence} indicate that OBe stars largely originate as mass gainers in close, post-SN, massive binary systems. We also find that OBe supergiants are not as isolated as classical OBe stars, as expected from their different origin. \acknowledgments We thank Grant Phillips, Irene Vargas-Salazar, Johnny Dorigo Jones, and Lena Komarova for help and discussions. We also thank Max Moe, Mathieu Renzo, and the anonymous referees for discussions and comments that greatly improved this paper. This work was supported by the National Science Foundation, grant AST-1514838 to M.S.O., who also acknowledges office space hospitality from MDRS, LLC. N.C. acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG), CA 2551/1-1. The Python package astropy was integral to the coding in this research \citep[]{Astropy}. \clearpage \appendix \section{Sample completeness}\label{sec:completeness} Table~\ref{tab:Completeness} examines the completeness of the OKP sample relative to other SMC OBe and OB surveys, applying OKP selection criteria of spectral type $\leq$ B2, luminosity class III--V, and $B\leq15.21$ or $V\leq15.5$. We see that we recover $>50$\% of Oe stars in the \citet{Martayan2007, Martayan2010} OBe surveys, which target 85 SMC clusters. Overall, \citet{Schootemeijer2021} estimate that the SMC has a total of $\sim600$ O-stars. Including Oe stars, OKP has 239 confirmed O-stars and 778 confirmed plus candidate O-stars (Table~\ref{tab:Completeness}); thus our O-star sample is also largely complete. For comparison, the \citet{Bonanos2010} spectroscopic catalog has 250 O-stars applying the OKP criteria {\it without} the magnitude cut. 
\section{Subsample membership} Table~\ref{tab:accounting} gives a detailed accounting of the membership for the ``maximum'' CDFs constructed for Figure~\ref{fig:f2}, i.e., each spectral-type subsample including stars whose uncertain spectral types allow possible membership in the given population. The corresponding ``minimum'' CDF memberships can be found in Table~\ref{tab:Populations}. \begin{deluxetable*}{lcccccc} \tablecaption{ Catalog Comparisons\tablenotemark{a}} \tablewidth{0pt} \tablehead{ \colhead{Populations} & \colhead{OKP\tablenotemark{b}} & \colhead{MA93\tablenotemark{c}} & \colhead{Martayan07\tablenotemark{d}} & \colhead{Martayan10\tablenotemark{e}} & \colhead{Evans04\tablenotemark{f}} & \colhead{Evans06\tablenotemark{g}} } \startdata O-star Total & 778 & \nodata & \nodata & \nodata & 107 & 13 \\ O-stars in OKP & 778 & \nodata & \nodata & \nodata & 94 & 9 \\ Oe Total & 171 & \nodata & 1 & 11 & 2 & 2 \\ Oe in OKP & 171 & \nodata & 1 & 6 & 2 & 2 \\ OB Total & 1220 & \nodata & \nodata & \nodata & 459 & 54 \\ OB in OKP & 1220 & \nodata & \nodata & \nodata & 239 & 30 \\ OBe Total & 447 & 461 & 14 & 23 & 29 & 16 \\ OBe in OKP & 447 & 250 & 3 & 10 & 21 & 11 \\ \hline O-star Recovery & \nodata & \nodata & \nodata & \nodata & 0.88 & 0.69 \\ Oe Recovery & \nodata & \nodata & 1.0 & 0.55 & 1.0 & 1.0 \\ OB Recovery & \nodata & \nodata & \nodata & \nodata & 0.52 & 0.56 \\ OBe Recovery & \nodata & 0.54 & 0.21 & 0.44 & 0.72 & 0.69 \enddata \tablenotetext{a}{ Recovery fractions of OKP stars in the given surveys are shown, based on the spectral types in the given survey, for OB stars of spectral type $\leq$ B2 and luminosity classes III -- V. Stars of uncertain types are included if the range is consistent with these criteria. 
Here, OBe stars are included in the OB survey numbers.} \tablenotetext{b}{\citet{OKP} survey, for $B\leq 15.21$, retaining 480 stars with no published spectral types as OB candidates.} \tablenotetext{c}{\citet{Meyssonnier1993} slitless SMC OBe survey, for $V\leq 15.5$, selecting their parameters ``spec''=``em'' or ``em:'' and excluding objects with an ``Obj'' classification or with H$\alpha$ classified as sharp, ``Sp=c1''. MA93 do not provide spectral types.} \tablenotetext{d}{\citet{Martayan2007} OBe spectroscopic survey of rich cluster NGC 330, for $V\leq 15.5$.} \tablenotetext{e}{\citet{Martayan2010} OBe slitless survey of 84 SMC open clusters, for $B\leq 15.21$.} \tablenotetext{f}{\citet{Evans2004} SMC spectroscopic survey, for \citet{Massey2002} $B_M\leq 15.21$ and $\leq15.21$ if no $B_M$ available.} \tablenotetext{g}{\citet{Evans2006} spectroscopic survey of rich clusters NGC 330 and NGC 346, for $V\leq 15.5$.} \label{tab:Completeness} \end{deluxetable*} \begin{deluxetable*}{lcl} \tablecaption{Membership for Maximum CDF Sub-populations} \tablewidth{0pt} \tablehead{ \colhead{Subsample} & \colhead{Total} & \colhead{Membership\tablenotemark{a}} } \startdata Early O & 464 & Table~\ref{tab:Populations} rows 1, 4, 8 \\ \multicolumn{3}{l}{Subtracted from row 4: 6908, 19481, 41095, 44634, and 83235} \\ \multicolumn{3}{l}{Subtracted from row 8: 1600, 18871, 23352, 15380, 27712, 31574, 34457, 49580} \\ \hline Late O & 552 & Table~\ref{tab:Populations} rows 2, 4, 8 \\ \multicolumn{3}{l}{Subtracted from row 4: 19481} \\ \multicolumn{3}{l}{Subtracted from row 8: 15380, 18871, 23352, 27712, and 31574} \\ \hline B stars & 600 & Table~\ref{tab:Populations} rows 3, 4, 8 \\ \multicolumn{3}{l}{Subtracted from row 4: 7382, 10505, 11238, 16734, 22178, 38695, 38703, 41095, 53324, 55808, 56834, 60439, and 66160}\\ \multicolumn{3}{l}{Subtracted from row 8: 1600, 34457, and 49580}\\ \hline Oe stars & 159 & Table~\ref{tab:Populations} rows 5, 7, 8 \\ \multicolumn{3}{l}{Subtracted from row 8: 
1600, 8178, 11019, 11209, 12308, 12403, 13168, 13986, 14592, 15321, 15380, 18871, 21534, 23352, } \\ \multicolumn{3}{l}{27610, 27712, 28076, 31574, 34457, 49580, 54723, 56139, 58751, 60684, 64736, 68629} \\ \hline Be stars & 370 & Table~\ref{tab:Populations} rows 6, 7, 8 \enddata \tablenotetext{a}{Each subsample consists of stars from the given rows of Table~\ref{tab:Populations}, omitting the listed stars. These are removed since they do not meet the subsample's spectral type criteria or are binary stars that are already counted in another included row for the subsample.} \label{tab:accounting} \end{deluxetable*} \clearpage \bibliographystyle{aasjournal}{} \bibliography{Dallas}
Title: Unveiling the contribution of Pop III stars in primeval galaxies at redshift $\geq 6$
Abstract: Detection of the first stars has remained elusive so-far but their presence may soon be unveiled by upcoming JWST observations. Previous studies have not investigated the entire possible range of halo masses and redshifts which may help in their detection. Motivated by the prospects of detecting galaxies up to $z\sim 20$ in JWST early data release, we quantify the contribution of Pop III stars to high-redshift galaxies from $6 \leq z \leq 30$ by employing the semi-analytical model A-SLOTH, which self-consistently models the formation of Pop III and Pop II stars along with their feedback. Our results suggest that the contribution of Pop III stars is the highest in low-mass halos of $\rm 10^7-10^9~M_{\odot}$. While high-mass halos $\rm \geq 10^{10}~M_{\odot}$ contain less than 1\% Pop III stars, they host galaxies with stellar masses of $\rm 10^9~M_{\odot}$ as early as $z \sim 30$. Interestingly, the apparent magnitude of Pop~III populations gets brighter towards higher redshift due to the higher stellar masses, but Pop~III-dominated galaxies are too faint to be directly detected with JWST. Our results predict JWST can detect galaxies up to $z\sim 30$, which may help in constraining the IMF of Pop III stars and will guide observers to discern the contribution of Pop~III stars to high-redshift galaxies.
https://export.arxiv.org/pdf/2208.01673
\begin{document} \title{Unveiling the contribution of Pop III stars in primeval galaxies at redshift $\geq 6$} \correspondingauthor{Muhammad A. Latif} \email{latifne@gmail.com} \author[0000-0003-3518-0235]{Shafqat Riaz} \affiliation{Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 200438 Shanghai, China} \author[0000-0001-6742-8843]{Tilman Hartwig} \affiliation{Institute for Physics of Intelligence, School of Science, The University of Tokyo, Bunkyo, Tokyo 113-0033, Japan} \affiliation{Department of Physics, School of Science, The University of Tokyo, Bunkyo, Tokyo 113-0033, Japan} \affiliation{Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan} \author[0000-0003-2480-0988]{Muhammad A. Latif} \affiliation{Physics Department, College of Science, United Arab Emirates University, PO Box 15551, Al-Ain, UAE} \keywords{Population~III stars (1285) --- Population~II stars (1284) --- High-redshift galaxies (734) --- James Webb Space Telescope (2291)} \section{Introduction} \label{sec:intro} Tremendous progress on the observational frontier has enabled astronomers to detect galaxies up to the cosmic dawn during the past two decades. About a thousand galaxies have been detected at $z>6$ \citep{Bowens16,Oesch16,Harikane22,schaerer22,Finkel22}, with candidates up to $z\sim 20$ being revealed in the James Webb Space Telescope (JWST) early data release \citep{carnall22,castellano22,naidu22,Yan22,Adam22}, which may just be the tip of the iceberg \citep{dayal18}. One of the primary goals of JWST is to unveil primeval galaxies that contain Pop III stars and revolutionize our understanding of the high-redshift universe. 
In fact, the commissioning of JWST has shown that the unprecedented sensitivity of NIRCam can detect objects with a flux of $\sim 10$\,nJy (equivalent to an apparent magnitude of $\sim 29$) at SNR$=10$ for a standard exposure time of 10\,ks \citep{rigby22}. With longer exposure times and gravitational lensing, JWST may discover even more and fainter galaxies at redshift $>10$. Therefore, it is very timely to make predictions about the contribution and presence of Pop III stars. Such work will help in guiding forthcoming JWST observations. Pop III stars are expected to form in pristine minihalos of a few times $\rm 10^6 \Msun$ at $z \geq 10$ \citep{skinner20,schauer21}. They ushered the universe out of the cosmic dark ages, initiated the process of re-ionization and shaped the formation of high-redshift galaxies via their feedback. Depending on their mass, they are expected to have short lifetimes, may explode as supernovae (SNe), and enrich the surrounding ISM with metals \citep{Heger02}. In the aftermath of Pop III SNe, second-generation stars, known as Pop II stars, form from metal-enriched gas with metallicities as low as $\rm 10^{-5}~Z_{\odot}$ \citep{Schneider03,Omukai05}. Recent numerical simulations including UV radiative feedback from stars suggest Pop III characteristic masses of a few tens of $\rm \Msun$ \citep{Clark11, Stacy16,Sugi20,Latif22}, substantially lower than previously thought \citep{Bromm02,Abel02,Yoshida08}. However, direct observations are required to constrain their mass spectrum, which might be achieved with upcoming observations of high-redshift galaxies with JWST. In fact, the star formation rate density (SFRD) of Pop~III stars dominates at $z \geq 15$ \citep{hartwig22}, suggesting their significant role in shaping high-redshift galaxies and the necessity of taking into account the contribution of Pop III stars in modeling their SEDs. 
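As a sanity check of the sensitivity just quoted, the flux-to-magnitude conversion can be sketched in a few lines of Python; the 3631 Jy zero point defines the AB system, and this snippet is our illustration, not part of the cited analysis.

```python
import math

def ab_magnitude_from_njy(flux_njy):
    """AB magnitude for a flux density given in nanojanskys.

    The AB system is defined by a zero point of 3631 Jy, so
    m_AB = -2.5 log10(F_nu / 3631 Jy).
    """
    flux_jy = flux_njy * 1e-9
    return -2.5 * math.log10(flux_jy / 3631.0)

# The ~10 nJy NIRCam sensitivity corresponds to m_AB ~ 28.9,
# consistent with the ~29 quoted above
print(round(ab_magnitude_from_njy(10.0), 1))  # -> 28.9
```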
\citet{zackrisson11,zackrisson17} investigated the spectral evolution of the first galaxies, finding that Pop III galaxies with stellar masses as low as $\rm 10^5 \Msun$ can be detected at $z \sim 10$, and discussed various observational strategies. The Renaissance simulations have examined the properties of high-redshift galaxies such as their stellar masses, SFRs, UV luminosity functions and escape fractions of ionizing radiation \citep{oShea15,xu16}. Their results suggest that large fractions of Pop III stars may remain elusive to direct detection at $z=15$ \citep{barrow18}. \citet{jaacks19} studied the legacy of Pop III star formation and found that the Pop~III contribution to SFRDs significantly increases beyond $z\sim15$, up to 50\%, while their contribution to the ionizing emissivity is about 60\%. Recently, \citet{katz22} simulated a halo of $\rm 3 \times 10^8 \Msun$ at $z=10$ to investigate the possibility of Pop~III detection with JWST and found that key signatures of Pop III stars fade away quickly due to their short lifetimes. These studies could not investigate the entire range of possible halo masses and redshifts due to numerical limitations, but they indicate that Pop III stars might be detectable at high redshift. Motivated by the prospects of detecting galaxies with JWST up to $z\sim 20$ \citep{Yan22}, in this letter we perform a comprehensive study which self-consistently models the formation of both Pop III \& Pop II stars along with their chemical, mechanical and radiative feedback for a statistical sample of high-redshift galaxies. We simulate a wide range of galaxies forming in halos of different masses at $z=6-30$, because of the expected dominance of Pop III stars in this era, and report their properties, such as masses and luminosities for Pop~III and Pop~II stars. These results will help to identify the possible contributions of Pop~III stars in the upcoming data of JWST. 
\section{Methodology} \label{sec:methods} We use the semi-analytical model \asloth\ to simulate the formation of the first galaxies \citep{hartwig22,magg22}. The model is based on dark matter merger trees, which are generated with the Extended Press-Schechter formalism \citep[EPS,][]{PressSchechter,Bond1991}. Given a halo mass and final redshift, the code first generates a dark matter merger tree backwards in time and then simulates the baryonic physics and feedback forward in time. Stars form once a halo is above the critical mass for efficient gas cooling \citep{schauer21}, which accounts for the baryonic streaming velocity and a Lyman-Werner background, following \citet{hartwig22}. \asloth\ includes chemical, radiative, and mechanical feedback from stars and different types of SNe and distinguishes between Pop~III and Pop~II star formation based on the ISM composition \citep{chiaki17}, which results in an effective threshold metallicity of around $10^{-5}\,Z_\odot$. We sample individual stars based on pre-defined initial mass functions (IMFs) for Pop~III and Pop~II stars. This allows us to trace the lifetime and feedback of stars and their supernova explosions accurately in time. Moreover, we can precisely determine the surviving stellar mass at any redshift based on their lifetimes. Hence, our model neither relies on analytical star formation histories nor assumes a single stellar population. Instead, we model the formation of individual stars in high-redshift galaxies self-consistently. The model is calibrated based on six observables, such as the ionization history and the cosmic SFRD, which guarantees reliable predictions up to high redshifts. For Pop~II stars, we assume a Kroupa IMF in the mass range $0.1-100\,\Msun$, while for Pop~III stars we employ a logarithmically flat IMF in the mass range $5-210\Msun$, which best reproduces observations \citep{hartwig22}. 
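The stochastic sampling of individual stars from a logarithmically flat IMF can be sketched with inverse-transform sampling. The following is a minimal illustration under the stated mass limits; the function names are ours and do not reflect A-SLOTH's actual API.

```python
import random

def sample_logflat_imf(m_min=5.0, m_max=210.0, n=1, rng=random):
    """Draw stellar masses (in Msun) from a logarithmically flat IMF,
    dN/dlog(m) = const.  Inverse-transform sampling: log(m) is uniform
    on [log(m_min), log(m_max)], so m = m_min * (m_max/m_min)**u
    with u ~ U(0, 1)."""
    return [m_min * (m_max / m_min) ** rng.random() for _ in range(n)]

def populate_star_forming_event(target_mass, m_min=5.0, m_max=210.0, rng=random):
    """Draw individual stars until their total mass reaches the target,
    one simple way a semi-analytic model can realise a stellar population."""
    stars, total = [], 0.0
    while total < target_mass:
        m = sample_logflat_imf(m_min, m_max, 1, rng)[0]
        stars.append(m)
        total += m
    return stars
```

The median of a log-flat IMF on $[5, 210]$ is the geometric mean, $\sqrt{5\times210}\approx 32\,M_\odot$, so a typical star-forming event is realised by a few tens of stars per $10^3\,M_\odot$ formed.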
However, the lower-mass end of the Pop~III IMF is poorly constrained, and for this research we allow Pop~III stars to form down to $3\Msun$. This does not change the global properties of the model, but it allows us to show more fine-grained results due to their slightly longer lifetimes. \asloth\ resolves minihalos with $M_h \geq 10^6\Msun$ at the highest redshifts. While this high mass resolution is excellent for following the physics accurately, such small galaxies might suffer from stochastic sampling effects because they only contain a handful of stars. Therefore, we resample each galaxy several times with different random seeds and report the median value and the central $68\%$ interval to quantify the cosmic variance. \section{Results\label{sec:results}} We have simulated a large number of high-redshift galaxies using \asloth, which allows us to self-consistently model the formation of Pop III and Pop II stars. Moreover, we have explored a variety of halos with masses ranging from $\rm 10^7-10^{11}~\Msun$ from $z=30$ down to $z=6$. To mimic cosmic stochasticity, each combination of redshift and halo mass is simulated about 100 times. This comprehensive study enables us to estimate the properties of high-redshift galaxies, such as stellar masses, star formation rates, metallicities and luminosities, and to quantify the relative contribution of Pop III stars. The average Pop III stellar mass varies from $\rm 10-10^5 ~\Msun$ for halos of $\rm 10^7-10^{11}~\Msun$ and increases with redshift, see Fig.~\ref{fig1}. At $z \leq 10$, Pop III stars form only in $<25\%$ of simulated halos due to metal pollution. The Pop II stellar mass varies from $\rm 100-10^9 ~\Msun$ for $\rm 10^7-10^{11}~\Msun$ halos. Overall, the Pop II stellar mass does not significantly change in halos of similar mass from $z=30$ to $z=6$, as shown in Fig.~\ref{fig1}. 
Statistical variations in Pop III stellar mass from halo to halo are within a factor of a few and are most prominent in low-mass halos, which are more prone to stellar feedback and random-sampling effects. We have selected here a fiducial value of $\rm M_{min} =3 \Msun\ $ for the lower cutoff mass, which is the logarithmic mean of the possible range of minimum Pop III stellar masses between $\rm 0.8-10 \Msun\ $ \citep{hartwig22} and therefore the most representative value. We also investigated the impact of lower cutoff masses on the survival of Pop III stars. Our findings suggest that for a cutoff mass of $\rm 5 \Msun$, Pop III stars stop contributing already at $z=18$, but for a lower cutoff mass of $\rm 0.8 \Msun$ they can survive down to $z \sim 6$. Overall, for the lowest cutoff mass, the Pop III stellar mass is a factor of a few higher than our fiducial case at all redshifts. The typical luminosity of Pop III stellar populations varies from $\rm 10^3-10^7~L_{\odot}$, while for Pop II it ranges from $\rm 10^3-10^{12}~L_{\odot}$, as depicted in Fig. \ref{fig1}. Both Pop III \& Pop II luminosities increase with redshift and are highest at $z\sim 30$. For high-mass halos, the Pop III luminosity is a few orders of magnitude smaller than Pop II, but for halos with masses $\rm 10^7-10^8~\Msun$, the Pop III luminosity is roughly comparable to Pop II. These results suggest that some massive galaxies with stellar mass of $\rm 10^{9}~\Msun$ at $z>15$ have luminosities of $\rm 10^{12}~L_{\odot}$ and are as bright as their counterparts at $z \sim 10$ \citep{naidu22}. Our estimates for stellar mass versus halo mass for the entire sample of galaxies from $z=30-6$ are shown in Fig. \ref{fig2}. The total stellar mass varies from $\sim 10^2$ to $10^9~\Msun$ for halo masses of $\rm 10^7-10^{11}~\Msun$ and monotonically increases with halo mass. Scatter in the plot is due to the statistical variations in the merger trees and IMF sampling. 
We find no statistically significant change in the stellar mass to halo mass relation in the redshift range $6 \leq z \leq 30$, i.e., the results at all redshifts lie within their uncertainty range. Overall, our results are in good agreement with previous studies \citep{oShea15,Ceverino17,ma18,jaacks19,pallottini22}. However, the Renaissance simulation predicts more stars at a given halo mass, which according to \citet{Ceverino17} is due to inefficient feedback. To further elucidate the contribution of Pop III stars to high-redshift galaxies, we show the ratios of Pop III to total stellar mass and Pop III to total luminosity in Fig. \ref{fig3}. It shows that the contribution of Pop III stars is close to unity at $z>25$ and highest in low-mass halos at all times from $z=30$ down to $z=10$. This contribution drops to below 1\% for halos of $\rm \geq 10^{10}~\Msun$. Furthermore, statistical variations in the Pop III contribution to high-redshift galaxies span two orders of magnitude. Our findings suggest that low-mass galaxies forming at $z\geq 12$ are the best targets to find Pop III stars. The contribution of Pop III stars in high-mass galaxies is much lower than that of Pop II stars, which may make them challenging to identify. To compare our results with observations, we estimated the bolometric apparent magnitudes of our galaxies and their statistical variations, which are shown in Fig. \ref{fig4}. To calculate the bolometric apparent magnitude, we used $m = -26.83 - 2.5\log\left(F/F_{\odot}\right),$ where $F_{\odot}$ is the Solar flux, and $F$ is the flux of our simulated galaxy, which is estimated using the luminosity distance relation for a given redshift. We compute the luminosities of Pop III \& Pop II stars separately. For Pop III stars, we use the fitting function given in Eq. 3 of \cite{Winhorst18}, while for Pop II stars, we use a standard luminosity-mass relation. 
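The magnitude estimate just described can be sketched as follows. The flat $\Lambda$CDM parameters, the solar constant, and the pure-Python luminosity-distance integral are our assumptions for illustration; the text does not specify the cosmology used.

```python
import math

# Assumed flat LCDM parameters and constants (illustrative, not from the text)
H0 = 67.7            # Hubble constant, km/s/Mpc
OMEGA_M = 0.31       # matter density parameter
C_KMS = 299792.458   # speed of light, km/s
MPC_M = 3.0857e22    # metres per megaparsec
L_SUN = 3.828e26     # solar luminosity, W
F_SUN = 1361.0       # bolometric solar flux at Earth, W/m^2

def luminosity_distance_m(z, steps=10000):
    """Luminosity distance in metres for flat LCDM (trapezoidal integration)."""
    def inv_E(zp):
        return 1.0 / math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + (1.0 - OMEGA_M))
    dz = z / steps
    integral = 0.5 * (inv_E(0.0) + inv_E(z)) + sum(inv_E(i * dz) for i in range(1, steps))
    d_c_mpc = (C_KMS / H0) * integral * dz   # comoving distance, Mpc
    return (1.0 + z) * d_c_mpc * MPC_M

def bolometric_apparent_magnitude(L_lsun, z):
    """m = -26.83 - 2.5 log10(F / F_sun), the formula quoted in the text."""
    flux = L_lsun * L_SUN / (4.0 * math.pi * luminosity_distance_m(z) ** 2)
    return -26.83 - 2.5 * math.log10(flux / F_SUN)

# A 1e11 L_sun galaxy at z = 10 comes out near m ~ 27.4 with these
# assumed parameters, within the 27-50 range quoted for Pop II populations
print(round(bolometric_apparent_magnitude(1e11, 10.0), 1))
```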
We also show the expected apparent magnitudes of Pop III and Pop II stars separately, which enables us to quantify their contributions and compare them with the detection limit of JWST. The apparent magnitude of Pop III stars varies from 40-50, increases with redshift, and is brightest for a $\rm 10^{11}\,\Msun$ halo at $z\sim 30$, much fainter than the JWST detection limit of 29.2. The range of apparent magnitude for Pop II stars varies from 27-50, which is much larger than for Pop III stars, but only the most massive galaxy will be visible to JWST. In fact, such a galaxy can be detected as early as $z=30$. We further show the fraction of Pop III luminosity against the total luminosity/AB magnitude for the entire sample of simulated galaxies in Fig. \ref{fig5}. This figure provides a convenient way to estimate the relative contribution of Pop~III stars to the total luminosity of newly detected galaxies at high redshift. We find that faint galaxies with AB magnitude below $-20$ are the best candidates for finding Pop III stars across all redshifts but are well below the detection limits of JWST. The brightest galaxies contain less than 1\% Pop III stars, but their AB magnitudes are within the range of JWST even at $z=30$. Based on the halo mass function of \cite{Warren06}, the expected number density of such galaxies is $\leq 1$ per Gpc$^3$ at $z=26$. Therefore, these galaxies are expected to be rare at earlier times. Nevertheless, we expect such galaxies can be unveiled in upcoming wide-survey JWST observations. \section{Discussion and Conclusions} We have simulated a large ensemble of high-redshift galaxies using the semi-analytical model \asloth, which has been calibrated against six independent observables and simultaneously models both Pop~III \& Pop~II stellar populations along with their feedback. 
This unique sample of galaxies allowed us to study the statistical variations among the properties of high-redshift galaxies, such as stellar masses, luminosities, star formation rates, and the fraction and contribution of Pop III stars to their host galaxies. Our results suggest that the best candidates to search for Pop III stars are low-mass galaxies at $10 \leq z \leq 30$, which are challenging to detect with JWST, and that the contribution of Pop III stars decreases to less than 1\% in massive galaxies. We further predict that JWST can detect galaxies up to $z \sim 30$, as their AB magnitudes lie within its range. These findings may guide observers in planning their observations and also help to improve spectral modeling of high-redshift galaxies. We consider the impact of both baryonic streaming motions and LW radiation based on \cite{schauer21}, which increase the halo threshold mass above $\rm 10^6 \Msun$. Therefore, Pop III stars cannot form in halos with masses lower than this at $z<20$. Pop III stars can still form in halos of a few times $\rm 10^6 \Msun\ $ at $z>20$. They have typical masses of about $10^3\Msun$, but their feedback limits the formation of Pop II stars in the host halos. Interestingly, these halos hosting very young Pop III stars may be directly detected with the ULT in the future. We also investigated the influence of the Pop III IMF on our findings by varying its slope from logarithmically flat to Salpeter and found that it has negligible impact on the number of Pop III survivors and total stellar mass. Furthermore, we found that the low cutoff mass of the Pop III IMF influences the number of Pop III survivors as well as their masses. We find that Pop III stars can only survive to $z \sim 6$ if their low cutoff mass is $\rm < 3 ~\Msun$; otherwise they die on relatively short timescales and may not survive to such redshifts. A higher cutoff mass ($\rm 5 \Msun$) decreases the number of Pop III survivors. 
These results suggest that finding Pop III survivors at $z\leq 10$ may help in constraining the lower mass end of the Pop III IMF. Our results are in agreement with previous works, which simulated only a limited number of high-redshift galaxies (see Fig.~\ref{fig2}). In addition, \citet{barrow18} find similar Pop III stellar masses at a given halo mass in the Renaissance simulation. They also report the fraction of Pop~III stellar mass to be in the range $10^{-6}-0.3$ for halos of $10^7-10^{10}\Msun$, similar to our results. Recently, \citet{Yan22} report the discovery of three galaxy candidates with a photometric redshift of $z \sim 20$ with JWST. These galaxies have stellar masses of $\sim 10^8\Msun$. Based on our results, we can estimate the dark matter halo mass of such galaxies to be $10^{10}-10^{11}\Msun$ (Fig.~\ref{fig2}). This allows us to estimate the contribution of Pop~III stars to the bolometric luminosities of such objects to be only $\lesssim 10^{-3}$ (Fig.~\ref{fig3}). The contribution of Pop~III stars to the luminosity of such objects is hence negligible. In this work, we employed the stochastic feedback model of \asloth, which is based on the EPS merger trees. Although this stochastic feedback is sufficiently accurate \citep{hartwig22}, future studies can improve by using \asloth's spatial feedback model based on merger trees extracted from N-body simulations to obtain more realistic results. In full 3D cosmological simulations, which are prohibitively expensive for such a large sample of galaxies, pockets of metal-free gas may persist down to lower redshifts due to inhomogeneous mixing of metals. For example, \cite{Liu20} found from cosmological simulations that such pockets of metal-free gas exist even down to $z \sim 4$ in massive halos of $\rm \geq 10^{9} ~\Msun$. Under these conditions, some Pop III stars may form at lower redshifts. \begin{acknowledgments} TH acknowledges funding from JSPS KAKENHI Grant Numbers 19K23437 and 20K14464. 
MAL thanks UAEU for funding via SURE Plus 3835 and UPAR grant No. 31S390. \end{acknowledgments} \software{\textsc{a-sloth} \citep{hartwig22,magg22}, python \citep{python09}, numpy \citep{harris20}, scipy \citep{virtanen20}, matplotlib \citep{hunter07}, astropy \citep{price2018astropy}.} \bibliographystyle{aasjournal}
Title: Tsallis holographic dark energy model with event horizon cutoff in modified gravity
Abstract: We consider the Tsallis holographic dark energy model in the framework of Nojiri-Odintsov gravity with $f(R)=R+\lambda R^2-\sigma{\mu}/{R}$, taking the event horizon as the IR cutoff. The cosmological evolution of such a universe is investigated for various initial conditions and parameter values. The Hubble parameter $H$ oscillates with time in the future. It is shown that for $\mu \neq 0$ the appearance of singularities is typical, and the time up to these singularities can be relatively small from a cosmological viewpoint. The singularity is associated with a zero of the second derivative of $f(R)$ with respect to $R$. It is interesting to note that these models can describe the observational data from Type Ia supernovae and the dependence of the Hubble parameter on redshift $z$ at least as well as the canonical $\Lambda$CDM model.
https://export.arxiv.org/pdf/2208.13320
\markboth{Artyom V. Astashenok, Alexander S. Tepliakov} {Tsallis holographic dark energy model with event horizon cutoff in modified gravity} \catchline{}{}{}{}{} \title{Tsallis holographic dark energy model with event horizon cutoff in modified gravity } \author{ARTYOM V. ASTASHENOK} \address{Immanuel Kant Baltic Federal University, Nevskogo, 14\\ Kaliningrad 236041, Russia\\ aastashenok@kantiana.ru } \author{ALEXANDER S. TEPLIAKOV} \address{Immanuel Kant Baltic Federal University, Nevskogo, 14\\ Kaliningrad 236041, Russia\\ atepliakov@kantiana.ru } \begin{history} \received{(Day Month Year)} \revised{(Day Month Year)} \end{history} \keywords{Dark energy; holographic energy; modified gravity.} \section{Introduction} The classical theory of gravitation, which describes the gravitational interaction, was developed by Isaac Newton in the second half of the 17th century. Based on Kepler's empirical laws, Newton showed that the observed motion of the planets is governed by a central force and that this central gravitational force leads to motion along second-order curves (circle, ellipse, parabola, hyperbola), with the central body at a focus. Although the theory described the observed motions of planets and objects in the Solar System well, it suffered from a number of problems, such as instantaneous action at a distance (the gravitational force was transmitted infinitely fast through empty space) and the gravitational paradox (in an infinite universe with Euclidean geometry and non-zero average matter density, the gravitational potential takes an infinite value everywhere). By the end of the 19th century, the French astronomer Urbain Le Verrier, who developed the theory of the motion of Mercury, discovered a discrepancy between theory and observations: the perihelion of Mercury's orbit shifted somewhat faster than the theory predicted. These problems prompted the search for new theories describing the gravitational interaction. 
The solution was proposed by Albert Einstein, who in 1915 developed the general theory of relativity (GTR), which solved the problems of the classical theory of gravitation and postulated gravitation as a manifestation of spacetime geometry. Considering the evolution of the Universe, Einstein assumed it to be stationary, and for this reason he included the constant $\Lambda$ in the equations. But Edwin Hubble showed that the universe expands, and Einstein eventually admitted that $\Lambda$ was a major mistake of his life. In 1998 an analysis of the luminosity of Type Ia supernovae in distant galaxies showed that the Universe expands with acceleration \cite{1}, \cite{2}. To explain this acceleration we need a new substance in the Universe, the so-called dark energy, distributed in space with a high degree of homogeneity, having low density and interacting with ordinary matter only through gravity. The simplest explanation for dark energy is nothing else than the Einstein cosmological constant (or vacuum energy) $\Lambda$. Such a model of the Universe is called the $\Lambda$CDM model \cite{LCDM-1,LCDM-2,LCDM-3,LCDM-4,LCDM-5,LCDM-6,LCDM-7}. This model fits observational data with high precision and is currently the standard cosmological model. However, there are more complex descriptions of the dark energy phenomenon that are consistent with observations. In particular, many models of holographic dark energy \cite{Miao, Qing-Guo, Qing-Guo-2} have been proposed. These models are based on the holographic principle \cite{Wang,3,4,5} derived from black hole thermodynamics. According to this principle, all physical quantities inside the Universe, including the dark energy density, can be described by specifying some quantities on its boundary. Tsallis' generalization of the Boltzmann-Gibbs entropy expression for black holes \cite{Tsallis, Tsallis-2} led to a new class of dark energy models, namely the Tsallis holographic dark energy model (THDE) \cite{Tavayef,Moosavi,AA}. 
There are many papers devoted to THDEs. For example, various infrared (IR) cutoffs, including the particle horizon, the Ricci horizon and the Grande-Oliveros (GO) cutoffs, have been studied \cite{Zadeh}. The authors of \cite{Ghaffari,Jawad} studied the cosmological consequences of THDE in the framework of Brans-Dicke gravity and modified Brans-Dicke gravity. The most general Tsallis HDE was introduced in \cite{Nojiri:2019skr}. We considered THDE in cosmology on the Randall-Sundrum brane \cite{AA}, and recently investigated the THDE model with the inclusion of a possible interaction between matter and dark energy \cite{AA2}. Another possible way to describe cosmological acceleration is modified gravity \cite{reviews1, reviews2, reviews3, reviews4, reviews5, book, reviews6}. Modified gravity models assume that the universe is accelerating due to deviations of real gravity from GTR on cosmological scales. $f(R)$ gravity is known as the simplest modification of GTR \cite{Harko, reviews3}. In this theory, the Einstein-Hilbert action is changed by replacing the Ricci scalar $R$ with some function $f(R)$. It is interesting to consider Tsallis holographic dark energy on the background of $f(R)$ gravity for the following reasons. $f(R)$ gravity can explain some features of inflation, i.e., early cosmological acceleration, while late cosmological acceleration may be caused by another source, namely holographic energy. Therefore, the consideration of holographic dark energy on a non-GTR background makes sense. Also, some new effects from holographic dark energy may appear in the case of modified gravity. In \cite{Ens}, various $f(R)$ gravity models have been considered with the inclusion of THDE with the simple IR cutoff $L\sim 1/H$, where $H$ is the Hubble parameter. We investigate the more realistic case (in GTR) of THDE with the event horizon as IR cutoff in a model of gravity with $f(R)=R+\lambda R^2 - \sigma {\mu}/{R}$. In the next section, the basic cosmological equations for a universe containing THDE in $f(R)$ gravity are presented. 
In GTR the cosmological equations contain second derivatives of the scale factor $a(t)$, but in $f(R)$ gravity a third derivative appears. This allows one to construct solutions with similar behaviour in the past which nevertheless diverge in the future. We then consider in detail solutions for various parameters of the THDE and of $\lambda$, $\mu$. For the considered gravity model, the main feature of the cosmological evolution is that the Hubble parameter oscillates with time around the dependence corresponding to pure GTR. A singularity also appears in the future; it corresponds to a zero of the second derivative $f''(R)$. The time before the singularity can be relatively small. We perform a brief analysis of observational constraints and demonstrate that such models are in principle viable from the astrophysical viewpoint. \section{Basic equations} In $f(R)$ gravity the action is written in the following form (hereafter we use the natural system of units, in which $8\pi G=c^2=1$): \bea S = \frac{1}{2}\int f(R)\sqrt{-g}\,d^4x + S_M, \eea where $g$ is the determinant of the spacetime metric $g_{\mu \nu}$, $S_M$ is the matter action, and $f(R)$ is a continuous, twice differentiable function of its argument. Varying the action with respect to the metric, we obtain the equations of the gravitational field: \bea R_{\mu\nu}f'(R)-\frac{1}{2}g_{\mu\nu}f(R)-\nabla_\mu\nabla_\nu f'(R)+g_{\mu\nu}\Box f'(R)= \kappa\, T_{\mu\nu}. \eea Here the prime denotes the derivative with respect to the scalar curvature $R$, $\nabla_\mu$ is the covariant derivative with respect to the coordinate $x_\mu$, $\Box\equiv\nabla_\mu\nabla^\mu $ and $T_{\mu\nu} $ is the energy-momentum tensor defined by the relation \be T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta S_M}{\delta g^{\mu\nu}}. \ee Next, consider the spatially flat universe described by the Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) metric: \bea ds^2 = -dt^2 + a^2(t)\left[dr^2 + r^2d\theta^2 + r^2\sin^2\theta\, d\phi^2\right], \eea where $a(t)$ is the scale factor. 
For this metric the cosmological equations can be written as follows: \be H^2 = \frac{1}{3f'(R)}\left(\rho + \frac{Rf'(R)-f(R)}{2} - 3H\dot{R}f''(R)\right),\label{friedmann00} \ee \be \dot{\rho }+3H(\rho +p)=0. \label{friedmann01} \ee Here $H = {\dot{a}}/{a}$ is the Hubble parameter and $\rho = \rho_{de}+\rho_{m}$ is the total energy density in the Universe (we neglect the contribution of radiation, which is essential only at early times of the cosmological evolution), where $\rho_{de}$ is the dark energy density and $\rho_{m}$ is the matter density. The continuity equation (\ref{friedmann01}) is valid for each of the components separately in the absence of interaction between them. The scalar curvature for the FLRW metric is \begin{equation} \label{eq:R} R = 6\left ( \dot{H} + 2H^2\right ). \end{equation} The density of the Tsallis holographic dark energy is given by the expression \begin{equation} \label{eq:rho_de} \rho_{de} = \frac{3C^2}{L^{\alpha}}, \end{equation} in which $\alpha = 4-2\gamma$ and $C = \mbox{const}$. For $\gamma=1$ we obtain the simplest holographic model. As the IR cutoff $L$ one can choose the Hubble horizon, the event horizon, the particle horizon or a combination of them. We investigate in detail the case when $L$ is the event horizon: \begin{equation} \label{eq:8} L = a\int_{t}^{t_{s}}\frac{dt}{a}. \end{equation} Here $t_{s}$ is the time of a possible future singularity. For $\gamma=1$ this choice appears to be the most relevant in terms of consistency with the observational data. Then the total energy density is \begin{equation} \label{eq:10} \rho =\rho_{de} + \rho_m = \frac{C^2}{\tilde{L}^\alpha a^\alpha} + \frac{1-\Omega}{a^3}, \end{equation} where $$\tilde{L}= \dfrac{L}{a} = \int_{t}^{t_{s}}\dfrac{dt}{a}$$ and $$\rho_m = \dfrac{1-\Omega}{a^3}$$ is the matter density. 
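As an illustration of the event-horizon cutoff (\ref{eq:8}), note that $\tilde{L}$ can be computed numerically by integrating $d\tilde{L}/dz = 1/H$ from $z=-1$ (the far future) to $z=0$ (today). The following sketch, in units with $H_0=1$ and with a $\Lambda$CDM expansion history taken purely as an assumed illustrative background (not the $f(R)$ model of this work), checks the construction against the known de Sitter result $L=1/H_0$.

```python
import numpy as np

# Comoving event horizon Ltilde(0) = integral of dz/H(z) from z = -1
# (far future) to z = 0 (today), in units with H0 = 1. The LCDM
# background and Omega_Lambda = 0.72 are illustrative assumptions.
def hubble_lcdm(z, omega_lambda=0.72):
    return np.sqrt((1.0 - omega_lambda) * (1.0 + z) ** 3 + omega_lambda)

def comoving_event_horizon(hubble, n=200001):
    z = np.linspace(-1.0 + 1e-9, 0.0, n)
    f = 1.0 / hubble(z)
    # trapezoidal rule, written out for portability across NumPy versions
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

L_desitter = comoving_event_horizon(lambda z: np.ones_like(z))  # exact: 1/H0
L_lcdm = comoving_event_horizon(hubble_lcdm)
```

For constant $H=H_0$ the integral gives exactly $1/H_0$; for the $\Lambda$CDM background, $H$ interpolates between $H_0$ and $\sqrt{\Omega_\Lambda}\,H_0$ over $-1<z<0$, so $\tilde{L}(0)$ lies between $1/H_0$ and $1/(\sqrt{\Omega_\Lambda}H_0)$.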
The parameter $\Omega$ has the simple sense of the dark energy fraction in the total energy budget of the Universe at the current moment of time $t=0$ (without loss of generality $a=1$ at $t=0$ is assumed). We take $\Omega=0.72$, similar to the standard cosmological model. {Note that Eq. (\ref{eq:rho_de}) can be considered as a limit of the generalised holographic dark energy first introduced in \cite{Nojiri:2005pu}. In that work the following class of models was proposed}: $$ \rho_{de} = \frac{3C^2}{L^{2}_{\Lambda}}, $$ $$ {L_\Lambda}=\frac{ \left( \frac{t_s B\left( 1+h_0,1-h_0 \right)}{L_p + L_f}\right)^{1/h_0}}{ h_0 \left\{ 1 + \left(\frac{t_s B\left(1+h_0,1-h_0\right)}{L_p + L_f}\right)^{1/h_0}\right\}^2} \ ,\quad h_0>0\ . $$ {Here $B$ is the beta function and $L_{p}$ and $L_{f}$ are the particle and future horizons respectively. If $t_{s}$ is large (for a cosmological model without singularity $t_{s}=\infty$) we can neglect the first term in the denominator and obtain simply that} $$ {L_\Lambda} \sim (L_{p}+L_{f})^{1/h_{0}}. $$ {Therefore model (\ref{eq:rho_de}) is a particular case of the general model from \cite{Nojiri:2005pu} with the replacements $L_{p}+L_{f} \rightarrow L_{f}$ and $t_{s}\rightarrow \infty$.} We consider the gravity model obtained by combining the models proposed by Starobinsky and Carroll-Duvvuri-Trodden-Turner (CDTT) \cite{Kausar, Sharif2}, first considered in \cite{gCDTT} (see also \cite{Nojiri:2017opc}, \cite{Nojiri:2021iko} for applications of the generalised Nojiri-Odintsov HDE to modified gravity): \begin{equation} \label{eq:f(R)} f(R)= R+\lambda R^2- \sigma \frac{\mu}{R}, \end{equation} where $\lambda$ and $\mu$ are small positive constants and $\sigma = \pm 1$. At $\mu=0$ we obtain the usual Starobinsky model. \textbf{Below we will refer to this gravity model as the Nojiri-Odintsov gravity model.} For convenience, instead of the time variable let us pass to the variable $z = {a(0)}/{a}-1$. For past times the variable $z$ has the meaning of a redshift. 
Then the time derivative ${d}/{dt}$ is related to ${d}/{dz}$ by the relation \begin{equation} \label{eq:05} \dfrac{d}{dt} = -H(z+1)\frac{d}{dz}. \end{equation} The first and second derivatives of the function $f(R)$ with respect to its argument are \begin{equation} \label{eq:f(R)2} {f}'(R) = 1+2\lambda R+ \sigma \frac{\mu}{R^2}, \quad {f}''(R) = 2\lambda -\sigma \frac{2\mu}{R^3}. \end{equation} By substituting the expressions (\ref{eq:f(R)}), (\ref{eq:f(R)2}) into equation (\ref{friedmann00}) and passing to the variable $z$, we obtain the equation: \begin{equation} \frac{d^2H}{dz^2} = - \frac{f'(R)}{6H(z+1)^2f''(R)} + \frac{1}{18H^3(z+1)^2f''(R)}\left ( \rho + \frac{Rf'(R)-f(R)}{2} \right ) + \nonumber \label{eq:13} \end{equation} $$ + \frac{1}{H(z+1)}\left ( \left ( \frac{dH}{dz} \right )^2 (z+1)- 3H\frac{dH}{dz} \right ). $$ Supplementing (\ref{eq:13}) with the formula for the total energy density (\ref{eq:10}) and the equation for the variable $\tilde{L}$, $$\frac{d\tilde{L}}{dz}=\frac{1}{H},$$ we obtain a system of equations whose solution for given initial conditions determines the cosmological evolution of the Universe. \section{Analysis of the model} To integrate the equations describing the cosmological evolution in modified gravity, it is necessary to specify not only the initial scale factor and Hubble parameter $H(0)$ (the Einstein-Friedmann equations are of second order in derivatives of $a$), but also the second derivative of the scale factor $a$ with respect to time at $t=0$. This is equivalent to an initial condition imposed on $\dot{H}$ at $t=0$. In the analysis below (for $\lambda\neq 0$), we assume that $\ddot{a}(0)$ has the same value as in the $\Lambda$CDM model with $\Omega_{\Lambda} = 0.72$. This value is determined from the equation for $\dot{H}$ in Friedmann cosmology: $$ \dot{H}=-\frac{1}{2}\left(\rho+p\right). $$ As the initial density condition, we take $\rho(0)=\rho_d(0) + \rho_m(0) = 3H_{0}^{2}$. 
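The analytic derivatives in Eq. (\ref{eq:f(R)2}) can be checked against finite differences. The short sketch below (with purely illustrative parameter values of the order used later in the text) also locates the zero of $f''(R)$, which for $\sigma=+1$ lies at $R^3=\mu/\lambda$ and is where the future singularity discussed in this work occurs.

```python
# Finite-difference check of f'(R) and f''(R) for
# f(R) = R + lambda*R^2 - sigma*mu/R. Parameter values are illustrative.
lam, mu, sigma = 0.005, 0.001, 1.0

def f(R):
    return R + lam * R**2 - sigma * mu / R

def f1(R):
    # f'(R) = 1 + 2*lambda*R + sigma*mu/R^2
    return 1.0 + 2.0 * lam * R + sigma * mu / R**2

def f2(R):
    # f''(R) = 2*lambda - 2*sigma*mu/R^3
    return 2.0 * lam - 2.0 * sigma * mu / R**3

R0, eps = 1.7, 1e-5
f1_num = (f(R0 + eps) - f(R0 - eps)) / (2.0 * eps)          # central difference
f2_num = (f(R0 + eps) - 2.0 * f(R0) + f(R0 - eps)) / eps**2  # second difference

# Zero of f''(R), i.e. the curvature at which the singularity occurs (sigma = +1):
R_sing = (mu / lam) ** (1.0 / 3.0)
```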
Given that $p_{\Lambda}=-\rho_{\Lambda}$ and $p_{m}=0$, we obtain for $\Omega=0.72$ that $\dot{H}(0)=-0.42 H_{0}^{2}$. In terms of the variable $z$, this condition means that $dH/dz=0.42 H_{0}$ at $z=0$. For $\lambda = \mu = 0$, $\dot{H}(0)$ is determined by the parameters $C$ and $\Omega$. If $\gamma=1$, the current equation-of-state parameter $w=p_{d}/\rho_{d}$ is $$ w=-\frac{1}{3}-\frac{2}{3}\frac{\sqrt{\Omega}}{C} $$ and therefore $$ \dot{H}(0)=-\frac{3}{2}\left(1-\Omega + \frac{2}{3}\left(1-\frac{\sqrt{\Omega}}{C}\right)\Omega\right)H_{0}^{2}, $$ which differs somewhat from the value $-0.42 H_{0}^{2}$ in $\Lambda$CDM cosmology. Let us analyze the case when $\mu = 0$ in the expression (\ref{eq:f(R)}). For $\lambda \neq 0$, in the future the Hubble parameter $H(t)$ begins to oscillate around the dependence observed for the holographic dark energy model in GTR. The amplitude of these oscillations increases with time and with the value of $\lambda$. The analysis of the time derivative of the Hubble parameter $\dot{H}$ is also interesting. In GTR the value of $H$ tends to a constant, which corresponds to $\dot{H}\rightarrow 0$, with the derivative tending to zero from below. For $\lambda \neq 0$, the function $\frac{dH}{dt}$ oscillates around zero at late times. The same future oscillations are observed for the scalar curvature $R$. In the past, the holographic dark energy model with $\lambda \neq 0$ also behaves differently in comparison with GTR: the Hubble parameter and $|\dot{H}|$ grow less strongly with time. It is interesting to note that the relation between $H^2$ and $\dot{H}$ changes so that in modified gravity the scalar curvature oscillates near zero in the past. In the case of Friedmann cosmology $R$ grows strongly and tends to $\infty$, which corresponds to the Big Bang singularity. If $\mu > 0$, the resulting dependences are very similar to the case $\mu=0$. 
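These initial values can be verified numerically. The sketch below (units with $H_0=1$, and $C=1$, $\Omega=0.72$ as in the tables below, chosen here as illustrative values) evaluates $\dot{H}(0)=-\frac{1}{2}(\rho+p)$ with $\rho(0)=3H_0^2$, both for $\Lambda$CDM and for THDE using the equation-of-state parameter $w$ quoted above.

```python
from math import sqrt

# Check of the initial condition Hdot(0) = -(1/2)(rho + p) with
# rho(0) = 3*H0^2 (units H0 = 1). THDE case: gamma = 1, C = 1,
# Omega = 0.72 (illustrative values matching the tables).
H0 = 1.0
omega = 0.72
rho0 = 3.0 * H0**2

# LCDM: p_Lambda = -rho_Lambda and p_m = 0, so rho + p = rho_m.
Hdot_lcdm = -0.5 * rho0 * (1.0 - omega)          # -> -0.42 H0^2

# THDE with event-horizon cutoff: w = -1/3 - (2/3)*sqrt(Omega)/C.
C = 1.0
w = -1.0 / 3.0 - (2.0 / 3.0) * sqrt(omega) / C
Hdot_thde = -0.5 * rho0 * ((1.0 - omega) + (1.0 + w) * omega)
```

For these parameters the THDE value of $\dot{H}(0)$ is more negative than the $\Lambda$CDM value $-0.42H_0^2$, since $w>-1$ for the holographic component.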
For $\mu=0.001$ the amplitude of the oscillations of $H$ becomes smaller compared to the case of pure $R^2$-gravity (see Fig. 2). The main peculiarity is that singularities appear in the future, corresponding to the zero point of the second derivative $f''(R)$. For $\sigma=-1$ the amplitude of the Hubble parameter oscillations, on the contrary, increases (Fig. 3). When the value of $\mu$ increases, the amplitude of the oscillations of the Hubble parameter decreases for $\sigma=+1$ and increases for $\sigma=-1$. \begin{table}[ht] \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline \backslashbox{$\lambda$}{$\sigma\mu $} & -0.003 & -0.001 & 0 & 0.001 & 0.003 \\ \hline \multicolumn{6}{|c|}{$\gamma = 1$} \\ \hline 0.001 & 1.5005 & 1.4696 & 2.3678 & 0.9069 & 0.6107 \\ \hline 0.005 & 2.0362 & 1.6923 & 1.9583 & 1.4120 & 0.8412 \\ \hline \multicolumn{6}{|c|}{$\gamma = 1.25$} \\ \hline 0.001 & 1.5112 & 1.4551 & 3.2991 & 0.6302 & 0.3409 \\ \hline 0.005 & 1.5234 & 1.7937 & 2.6955 & 0.8382 & 0.7934 \\ \hline \end{tabular} \caption{\label{tab:singularity}Time to the final singularity (in units of $1/H_0$) for different $\mu$ and $\lambda$ ($C = 1$, $\Omega = 0.72$).} \end{center} \end{table} We calculated the time before the singularity for these models (see Table \ref{tab:singularity}). Let us consider the cosmological evolution for different values of $\dot{H}(0)$. For the $\Lambda$CDM model with $\Omega_{\Lambda} = 0.72$, $\dot{H}_{\Lambda}(0)= - 0.42\, H^{2}_{0}$, i.e. $\dot{H}(0) = - 0.42$ in units of $H_{0}^{2}$. As mentioned above, in the $f(R)$-gravity model $\dot{H}(0)$ should be given as an initial condition for solving the equations. We consider several values: $\dot{H}(0) = \dot{H}_{\Lambda}(0)$, $2\dot{H}_{\Lambda}(0)$ and $0.5\,\dot{H}_{\Lambda}(0)$. The calculations show that the evolution in the past depends weakly on the initial condition imposed on $\dot{H}(0)$ (see fig. 
\ref{fig6}), but the time before the future singularity is significantly determined by this condition (Table \ref{tab:singularity1}). Can these models be consistent with observational data? We analyzed this question by comparing the canonical $\Lambda$CDM model and the holographic dark energy model with $\lambda\neq 0$ and $\mu \neq 0$. The following observational tests are frequently used for cosmological models: (i) the relationship between apparent magnitude and redshift for Type Ia supernovae from the Supernova Cosmology Project, and (ii) the dependence of the Hubble parameter on redshift obtained from cosmic chronometry and baryonic acoustic oscillation data. Let us consider these observational limits in detail. The apparent magnitude $\mu(z)$ of a source at redshift $z=a_{0}/a-1$ is \be \mu(z)=\mu_{0}+5\log D(z)\, , \ee where $D(z)=H_{0}D_{L}(z)/c$ is the dimensionless photometric distance, with $D_{L}(z)$ defined by the relation (for a spatially flat universe) \be \label{DLSC} D_{L}(z)=\frac{c}{H_{0}}(1+z)\int_{0}^{z} h^{-1}(z)d z, \quad h^{2}(z)=\rho(z)/\rho_{0}. \ee Here $c$ is the speed of light. The best fit for SNe Ia is given in the framework of the $\Lambda$CDM cosmological model, for which $h(z)$ is \be h(z)=(\Omega_{m}(1+z)^{3}+\Omega_{\Lambda})^{1/2}. \ee Here $\Omega_{m}$ is the matter density fraction, and $\Omega_{\Lambda}=1-\Omega_{m}$ is the vacuum energy fraction. The constant $\mu_{0}$ depends on the current value of the Hubble parameter: $$ \mu_{0}=42.384-5\log h,\quad h=H_{0}/100 \mbox{ km/s/Mpc}. $$ To analyze the SNe data, it is necessary to calculate the parameter $\chi^{2}$, defined by the standard relation \begin{equation} \chi^{2}_{SN}=\sum_{i}\frac{(\mu_{obs}(z_{i})-\mu(z_{i}))^{2}}{\sigma^{2}_{i}}, \end{equation} where $\sigma_{i}$ is the corresponding $1\sigma$ error. We use the known data for 580 SNe Ia from \cite{Amman}. The parameter $\mu_{0}$ is a nuisance parameter independent of the cosmological evolution, so we can minimize $\chi^2$ with respect to $\mu_{0}$ analytically. 
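The distance modulus above can be evaluated numerically for the $\Lambda$CDM case. In the sketch below the dimensionless distance $D=H_0D_L/c$ is used, so the speed of light cancels against $\mu_0=42.384-5\log h$; the values $h=0.7$ and $\Omega_m=0.28$ are illustrative assumptions, not fitted parameters.

```python
import numpy as np

# Distance modulus mu(z) for flat LCDM, via the dimensionless
# distance D = (1+z) * integral of dz/h(z). Assumed h = 0.7, Omega_m = 0.28.
def h_of_z(z, omega_m=0.28):
    return np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))

def distance_modulus(z, H0=70.0, omega_m=0.28, n=20000):
    zz = np.linspace(0.0, z, n)
    f = 1.0 / h_of_z(zz, omega_m)
    integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz)))
    D = (1.0 + z) * integral                 # dimensionless H0*D_L/c
    mu0 = 42.384 - 5.0 * np.log10(H0 / 100.0)
    return mu0 + 5.0 * np.log10(D)
```

At small redshift $D\approx z$, so $\mu(z)\approx \mu_0+5\log_{10}z$, which provides a quick sanity check of the normalisation.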
It should be noted that \begin{equation}\label{chi} \chi^{2}_{SN}=A-2\mu_{0}B+\mu_{0}^{2}C, \end{equation} where $$ A=\sum_{i}\frac{(\mu_{obs}(z_{i})-\mu(z_{i};\mu_{0}=0))^{2}}{\sigma^{2}_{i}}, $$ $$ B=\sum_{i}\frac{\mu_{obs}(z_{i})-\mu(z_{i};\mu_{0}=0)}{\sigma^{2}_{i}},\quad C=\sum_{i}\frac{1}{\sigma^{2}_{i}}. $$ The value of $\chi^{2}$ in (\ref{chi}) has a minimum at $\mu_{0}=B/C$, and this minimum is $$ \bar{\chi}_{SN}^{2}=A-B^{2}/C. $$ One can minimize $\bar{\chi}_{SN}^{2}$ instead of ${\chi}_{SN}^{2}$ and calculate the corresponding optimal value of $H_{0}$. For the sample of 580 SNe Ia and the $\Lambda$CDM model, the minimal value is $\bar{\chi}_{SN}^{2}=553.231$, attained for $\Omega=0.722$ and $H_{0}=70.05$ km/s/Mpc. There are various methods of measuring the Hubble parameter at different redshifts $z$. A lot of data have been obtained using cosmic chronometry. The Hubble parameter is related to the differential age of the Universe as a function of redshift: $$ dt=-\frac{1}{H}\frac{dz}{1+z}. $$ Measurements of $dz/dt$ (and, as a consequence, of $H(z)$) are possible because the absolute ages of galaxies can be determined by fitting stellar population models. The results of these measurements are given in \cite{Zhang}, \cite{Simon}, \cite{Moresco2}, \cite{Moresco3}, \cite{Stern}, \cite{Ratsimbazafy}. There are also three correlated $H(z)$ measurements from the radial BAO signal in the galaxy distribution \cite{Alam} and two values at large redshift ($z=2.34$ and $2.36$) measured from the BAO signal in the Lyman-$\alpha$ forest distribution \cite{Delubac}, \cite{Font-Ribera}. These 36 measurements of the Hubble parameter $H(z)$ are listed in Table \ref{Table1}. 
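The analytic marginalisation over $\mu_0$ described above can be checked numerically. The sketch below uses synthetic "observations" (an assumption for illustration only, not the SNe Ia sample) and compares the analytic minimum $A-B^2/C$ at $\mu_0=B/C$ with a brute-force grid search.

```python
import numpy as np

# Verify that chi2 = A - 2*mu0*B + mu0^2*C is minimised at mu0 = B/C
# with minimum A - B^2/C, using synthetic data (illustrative only).
rng = np.random.default_rng(1)
z = np.linspace(0.05, 1.0, 40)
mu_th = 5.0 * np.log10(z * (1.0 + z))     # placeholder model curve, mu0 = 0
sigma = np.full_like(z, 0.15)
mu_obs = 43.0 + mu_th + rng.normal(0.0, 0.15, z.size)

A = float(np.sum((mu_obs - mu_th) ** 2 / sigma**2))
B = float(np.sum((mu_obs - mu_th) / sigma**2))
C = float(np.sum(1.0 / sigma**2))

mu0_best = B / C
chi2_min_analytic = A - B**2 / C

# Brute-force minimisation over a fine grid of mu0 values.
grid = np.linspace(mu0_best - 1.0, mu0_best + 1.0, 20001)
chi2_grid = np.array([np.sum((mu_obs - mu_th - m) ** 2 / sigma**2) for m in grid])
chi2_min_numeric = float(chi2_grid.min())
mu0_numeric = float(grid[chi2_grid.argmin()])
```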
\begin{table} \begin{centering} \begin{tabular}{|c|c|c||c|c|c|} \hline $z$ & $H_{obs}(z)$ & $\sigma $ & $z$ & $H_{obs}(z)$ & $\sigma$ \\ & km s$^{-1}$ Mpc$^{-1}$ & km s$^{-1}$ Mpc$^{-1}$ & & km s$^{-1}$ Mpc$^{-1}$ & km s$^{-1}$ Mpc$^{-1}$ \\ \hline 0.070 & 69 & 19.6 & 0.480 & 97 & 62 \\ 0.090 & 69 & 12 & 0.510 & 90.8 & 1.9\\ 0.120 & 68.6 & 26.2 & 0.593 & 104 & 13\\ 0.170 & 83 & 8 & 0.610 & 97.8 & 2.1\\ 0.179 & 75 & 4 & 0.68 & 92 & 8\\ 0.199 & 75 & 5 & 0.781 & 105 & 12\\ 0.200 & 72.9 & 29.6 & 0.875 & 125 & 17\\ 0.270 & 77 & 14 & 0.880 & 90 & 40 \\ 0.280 & 88.8 & 36.6 & 0.900 & 117 & 23 \\ 0.352 & 83 & 14 & 1.037 & 154 & 20 \\ 0.38 & 81.9 & 1.9 & 1.300 & 168 & 17 \\ 0.3802 & 83 & 13.5 & 1.363 & 160 & 33.6\\ 0.400 & 95 & 17 & 1.430 & 177 & 18 \\ 0.4004 & 77 & 10.2 & 1.530 & 140 & 14 \\ 0.4247 & 87.1 & 11.2 & 1.750 & 202 & 40 \\ 0.4497 & 92.8 & 12.9 & 1.965 & 186.5 & 50.4\\ 0.470 & 89 & 50 & 2.34 & 223 & 7\\ 0.4783 & 80.9 & 9 & 2.36 & 227 & 8\\ \hline \end{tabular} \caption{The observed values of the Hubble parameter $H(z)$ used in our analysis of the THDE model in gCDTT gravity.}\label{Table1} \end{centering} \end{table} The parameter $\chi^{2}_{H}$ is \begin{equation} \chi^{2}_{H}=\sum_{i}\frac{(H_{obs}(z_{i})-H(z_{i}))^{2}}{\sigma^{2}_{i}}. \end{equation} We can also minimize analytically over the unknown parameter $H_{0}$. Writing $H(z_{i})=H_{0}h(z_{i})$, we obtain $$ \chi^{2}_{H}=A_{1}-2B_{1}H_{0}+H_{0}^{2}C_{1}, $$ $$ A_{1}=\sum_{i}\frac{H_{obs}(z_{i})^{2}}{\sigma^{2}_{i}},\quad B_{1}=\sum_{i}\frac{h(z_{i})H_{obs}(z_{i})}{\sigma^{2}_{i}},\quad $$ $$ C_{1}=\sum_{i}\frac{h(z_{i})^2}{\sigma^{2}_{i}}. $$ For $H_{0} = B_{1}/C_{1}$ the parameter $\chi^{2}_{H}$ is minimal: $$ \bar{\chi}_{H}^{2}=A_{1}-B_{1}^{2}/C_{1}. $$ As in the case of the supernova data, we can minimize $\bar{\chi}_{H}^{2}$ instead of ${\chi}_{H}^{2}$. For the $\Lambda$CDM model, $\bar{\chi}^{2}_{H}$ is minimal for $\Omega_{\Lambda}=0.737$ and equals $19.262$; the corresponding $H_{0}$ is 70.32 km/s/Mpc. 
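As a cross-check, the $H_0$-marginalised $\bar{\chi}^{2}_{H}$ for the $\Lambda$CDM model can be recomputed directly from the tabulated $H(z)$ data. The sketch below evaluates it at $\Omega_\Lambda=0.737$; up to transcription rounding, the result should be close to the values quoted in the text ($\bar{\chi}^{2}_{H}\approx 19.26$, $H_0\approx 70.3$ km/s/Mpc).

```python
import numpy as np

# (z, H_obs, sigma) transcribed from the H(z) table; H in km/s/Mpc.
data = np.array([
    (0.070, 69.0, 19.6), (0.090, 69.0, 12.0), (0.120, 68.6, 26.2),
    (0.170, 83.0, 8.0), (0.179, 75.0, 4.0), (0.199, 75.0, 5.0),
    (0.200, 72.9, 29.6), (0.270, 77.0, 14.0), (0.280, 88.8, 36.6),
    (0.352, 83.0, 14.0), (0.38, 81.9, 1.9), (0.3802, 83.0, 13.5),
    (0.400, 95.0, 17.0), (0.4004, 77.0, 10.2), (0.4247, 87.1, 11.2),
    (0.4497, 92.8, 12.9), (0.470, 89.0, 50.0), (0.4783, 80.9, 9.0),
    (0.480, 97.0, 62.0), (0.510, 90.8, 1.9), (0.593, 104.0, 13.0),
    (0.610, 97.8, 2.1), (0.68, 92.0, 8.0), (0.781, 105.0, 12.0),
    (0.875, 125.0, 17.0), (0.880, 90.0, 40.0), (0.900, 117.0, 23.0),
    (1.037, 154.0, 20.0), (1.300, 168.0, 17.0), (1.363, 160.0, 33.6),
    (1.430, 177.0, 18.0), (1.530, 140.0, 14.0), (1.750, 202.0, 40.0),
    (1.965, 186.5, 50.4), (2.34, 223.0, 7.0), (2.36, 227.0, 8.0),
])
z, H_obs, sig = data.T

def chi2_marginalised(omega_lambda):
    # h(z) = H(z)/H0 for flat LCDM; returns (chi2_min, best-fit H0).
    h = np.sqrt((1.0 - omega_lambda) * (1.0 + z) ** 3 + omega_lambda)
    A1 = float(np.sum(H_obs**2 / sig**2))
    B1 = float(np.sum(h * H_obs / sig**2))
    C1 = float(np.sum(h**2 / sig**2))
    return A1 - B1**2 / C1, B1 / C1

chi2, H0_best = chi2_marginalised(0.737)
```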
Therefore the analysis of the two data sets gives similar results for $H_{0}$ and for the optimal value of $\Omega_\Lambda$. The analysis of the THDE model in gCDTT gravity reveals some interesting points. Firstly, the observational data favour $\gamma=1$ (the canonical HDE model). Secondly, the Hubble parameter data set is described better by the holographic model than by the $\Lambda$CDM model. For example, if $\lambda=0.001$, $\mu=0.001$, $\sigma = 1$, we have the following minima of $\bar{\chi}_{H}^{2}$ for various $\dot{H}(0)$: 17.097 ($H_0=67.89$ km/s/Mpc) for $\Omega=0.717$, $\dot{H}(0)=-0.21 H_{0}^{2}$; 16.955 ($H_0=67.64$ km/s/Mpc) for $\Omega=0.713$, $\dot{H}(0)=-0.42 H_{0}^{2}$; 16.799 ($H_0=67.13$ km/s/Mpc) for $\Omega=0.705$, $\dot{H}(0)=-0.84 H_{0}^{2}$. For these parameters, however, the SNe data are described worse: for a given $\dot{H}(0)$ there is a significant discrepancy between the optimal values of $\Omega$ from the two data sets. However, for some $\Omega$ and $\dot{H}(0)$ we found that the SNe and Hubble data sets are, in general, fitted with the same accuracy as in the $\Lambda$CDM model. In particular, for $\dot{H}(0)=-0.84 H_{0}^2$ the minimum of $\bar{\chi}^{2}_{SN}+\bar{\chi}^{2}_{H}$ is 572.6723 for $\Omega=0.745$; this corresponds to the $\Lambda$CDM model with $\Omega_{\Lambda}=0.732$. Therefore one may conclude that the THDE model in the framework of gCDTT gravity can be considered a quite realistic model of cosmological acceleration. 
\begin{table}[ht] \begin{center} \begin{tabular}{|c||c|c|c|} \hline \backslashbox{$\sigma \mu$}{$\dot{H}(0)$} & -0.21 & -0.42 & -0.84 \\ \hline \multicolumn{4}{|c|}{$\gamma = 1$} \\ \hline -0.003 & 1.2041 & 2.0521 & 1.6429 \\ \hline 0.003 & 0.8162 & 0.8412 & 1.3122 \\ \hline \multicolumn{4}{|c|}{$\gamma = 1.25$} \\ \hline -0.003 & 1.2718 & 1.5116 & 1.9824 \\ \hline 0.003 & 0.7931 & 0.7970 & 1.2959 \\ \hline \end{tabular} \caption{\label{tab:singularity1} Time before the final singularity (in units of $1/H_0$) for different $\mu$ and $\dot{H}(0)$ ($C = 1$, $\Omega = 0.72$, $\lambda = 0.005$). $\dot{H}(0)$ is given in units of $H_{0}^{2}$.} \end{center} \end{table} It is also interesting to compare the evolution of the Universe in the Nojiri-Odintsov gravity model without dark energy and in the models with holographic dark energy on the background of GTR and of the considered gravity model (see fig. 7). The calculations lead to the conclusion that future oscillations of the Hubble parameter are a specific feature of a universe filled with THDE in Nojiri-Odintsov gravity. \section{Concluding remarks} We investigated the Tsallis holographic dark energy model under the assumption that the Nojiri-Odintsov gravity model is valid. The equations describing the cosmological evolution in this case contain the third derivative of the scale factor with respect to time. Therefore an initial condition must be imposed on the second derivative of $a$ (which is equivalent to a condition on $\dot{H}(0)$). The evolution of the Universe is studied in detail for the case when $\dot{H}(0)$ coincides with the value in the standard cosmological model with $\Omega_\Lambda = 0.72$. The solutions have an interesting feature, namely that the Hubble parameter ``oscillates'' around the dependence corresponding to THDE in General Relativity. The amplitude of these oscillations grows with time in the future. For $\mu\neq 0$ a future singularity arises, corresponding to a zero of the second derivative of $f(R)$ at some $R$. 
The time before the singularity, determined by the value of $\dot{H}$ at the initial moment of time, can vary within wide limits. The dynamics of the Universe in the past is not especially sensitive to this initial condition and is close to that in the model of holographic dark energy on the GTR background (differences appear only at times close to the initial Big Bang singularity). Our analysis shows that for some parameters such models can describe the observational data for SNe Ia and the dependence $H(z)$ with sufficient accuracy, especially for $\gamma=1$ and values of $|\dot{H}(0)|$ larger than in the $\Lambda$CDM model. We also note that the $H(z)$ data are described better within THDE on a modified gravity background. This means that the models considered by us can be quite realistic. \section*{Acknowledgments} This work was supported by the Ministry of Education and Science (Russia), project 075-02-2021-1748.
Title: The formation of clusters and OB associations in different density spiral arm environments
Abstract: We present simulations of the formation and evolution of clusters in spiral arms. The simulations follow two different spiral arm regions, and the total gas mass is varied to produce a range of different mass clusters. We find that including photoionizing feedback produces the observed cluster mass radius relation, increasing the radii of clusters compared to without feedback. Supernovae have little impact on cluster properties. We find that in our high density, high gas mass simulations, star formation is less affected by feedback, as star formation occurs rapidly before feedback has much impact. In our lowest gas density simulation, the resulting clusters are completely different (e.g. the number of clusters and their masses) to the case with no feedback. The star formation rate is also significantly suppressed. The fraction of stars in clusters in this model decreases with time flattening at about 20\%. In our lowest gas simulation model, we see the formation of a star forming group with properties similar to an OB association, in particular similar to Orion Ia. We suggest that low densities, and stronger initial dynamics are conducive to forming associations rather than clusters. In all models cluster formation is complex with clusters merging and splitting. The most massive clusters which form have tended to undergo more mergers.
https://export.arxiv.org/pdf/2208.14930
\label{firstpage} \date{\today} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \pubyear{2022} \begin{keywords} galaxies: star clusters: general, galaxies: star formation, stars: formation, ISM: evolution, galaxies: spiral \end{keywords} \section{Introduction} Whilst numerous studies now follow cluster formation on molecular cloud, and even galaxy, scales, we do not yet have a good understanding of how different types of clusters or stellar groups form, under what conditions, and what sets the properties of stellar clusters. Groups of stars can be categorised broadly as globular clusters, massive clusters, open clusters, OB associations, T associations and moving groups. In our own Milky Way Galaxy, young massive clusters, characterised by masses $\gtrsim10^4$ M$_{\odot}$ \citep{PZ2010} and usually small age spreads ($\sim$ Myr) \citep{Longmore2014}, appear to be rare, with only a few examples away from the Galactic Centre \citep{Joshi2016}. Most ongoing sites of star formation appear to be lower mass, likely open clusters or larger, more dispersed associations. For example, in \citet{PZ2010}, 8 clusters are shown as lying in the parameter space of YMCs (their Fig.~2) compared with tens of open clusters with masses up to a few 1000 M$_{\odot}$. For external galaxies, studies of clusters (e.g. mass and age functions) are often limited to $\gtrsim5000$ M$_{\odot}$ (e.g. \citealt{Bastian2012,Chandar2014,Mulia2016, Messa2018,Krumreview2019}), except for Local Group galaxies (e.g. \citealt{Johnson2017}). However, using the recent LEGUS \citep{Calzetti2015} survey, \citet{Brown2021} study over 6000 clusters from many nearby galaxies down to lower masses, including unbound associations, although they find LEGUS still preferentially selects bound clusters. 
\citet{Elmegreen1977} supposed that larger associations of stars were produced by consecutive feedback events which triggered successive star formation events, which would explain the relatively large age spreads and multiple episodes of star formation observed (e.g. \citealt{Zari2019}). \citet{Elmegreen1985} proposed that such associations, rather than bound clusters, could arise in low mass clouds, or in smaller quiescent regions of higher mass clouds. Following \cite{Lada2003}, associations were supposed to be remnants of bound clusters, produced by gas expulsion, stellar evolution and/or N-body interactions \citep{Goodwin2006,Bastian2008,Pfalzner2013}. However, recent kinematic studies of associations suggest there is no simple picture of expansion \citep{Wright2016,Melnik2020,Ward2020,MApellaniz2020}. Rather, the structure of associations is set by the structure of the gas when they are formed \citep{Wright2018,Goul2018,Lim2019,Ward2018}. Simulations which trace back currently observed associations suggest that they originate from sub-virial and highly structured initial conditions \citep{Farias2018,Schoettler2019}. Numerical simulations of galaxies have generally tended to follow higher mass clusters rather than smaller clusters or associations. Again this is a consequence of resolution, and of the time for which the clusters are evolved. On galaxy scales, it is difficult to fully resolve clusters. Simulations of dwarf galaxies resolve the formation and evolution of clusters, following the mass of stars or clusters down to the mass of massive stars \citep{Hu2016,Emerick2018,Lahen2019,Lahen2020,Hislop2022}. \citet{Hislop2022} find that at higher star formation rates, some clusters disperse, although they are not able to form realistic clusters $<1000$ M$_{\odot}$. 
On smaller scales, \citet{Grudic2021} perform simulations of isolated, massive, bound, turbulent GMCs and find that they typically evolve to form a large number of small bound clusters which reside in a larger association, along with many unbound stars. \citet{Hajime2021} find that unless the star formation efficiency is high, individual clusters tend to disperse. Ionising feedback leads to less dense clusters at lower surface densities, but has little effect at high surface densities where globular clusters or YMCs are likely to form \citep{Hajime2022}. Isolated cloud simulations, however, may miss external gas dynamics, such as continuing gas inflow which may sustain star formation. \citet{Bending2020} found that ionising radiation propagating into the surrounding medium can also trigger further star formation. Furthermore, even on molecular cloud scales, it is difficult to resolve stars fully down to brown dwarfs, limiting the total mass of such simulations to around $10^4$ M$_{\odot}$. Simulations on molecular cloud scales also do not necessarily have the capacity to follow the clusters for long enough to see whether they would likely remain open clusters, or disperse (e.g. \citealt{Bate2003,Bonnell2003}). Simulations can take the surrounding gas into account by resimulating, or zooming in on, small sections of whole galaxy simulations \citep{vanloo2013,Smilgys2017,Bending2020,Ali2021,Rieder2022,Smith2020,Dobbs2022}. \citet{Dobbs2022} showed that colliding flows on larger scales are significant in determining the final cluster mass, partly because they lead to more mergers between clusters. Generally, simulations show that, at least for the formation of more massive clusters, cluster formation is hierarchical and mergers are frequent \citep{Bonnell2003,Smilgys2017,Fujii2015,Fujii2021a,Guszejnov2022,Dobbs2022}. 
To fully model clusters, one possibility for cosmological or galactic simulations is to include the cluster evolution separately, using a theoretical or empirical model \citep{Pfeffer2018} or by separately evolving a sample of clusters \citep{Rodriguez2022}. An alternative approach is to model the full population of stars with N-body dynamics \citep{Fujii2021a,Rieder2022,Liow2022}, even if the gas resolution is low, thus achieving the same resolution in the stars as individual cluster simulations. This has the advantage that the cluster dynamics can be followed explicitly. However, the resolution of the stellar component would still be limiting on larger galaxy scales modelled for longer periods of time. In this paper, we simulate cluster formation and evolution in spiral arm regions, taken from global spiral galaxy simulations, with photoionizing and supernova feedback. We simply include sink particles when star formation occurs, so we are unable to resolve the full stellar population and dynamics, but we do follow the clusters rather than including any subgrid description of cluster evolution. We look at the effect of feedback on the star formation rate for the different regions, which are characterised by different masses (Section~\ref{sec:sfr}). We compare the evolution of clusters as determined by finding clusters at each time frame (Section~\ref{sec:cluster_time}), and by following constituent cluster sink particles (Section~\ref{sec:partevol}). We compare cluster properties with and without feedback (Section~\ref{sec:properties}) and also compare the outcomes of our models to known OB associations (Section~\ref{sec:OBassociations}). In Section~\ref{sec:supernovae} we examine the impact of including supernovae, versus ionization only. \section{Method} In \cite{Dobbs2022}, we performed simulations of two sections of a Milky Way-like spiral galaxy from \citet{Pettitt2015}. 
In that paper we denoted these sections Regions 1 and 2, where Region 1 exhibited strongly converging flows. We concluded that the velocities and gas densities, in particular for Region 1 of those simulations, represented the most extreme environment of the initial galaxy scale simulation and so resulted in very massive clusters. Even in our second region, Region 2, the gas densities were still high compared to typical Milky Way densities. In this paper we extend the work of \citet{Dobbs2022} by rerunning the regions presented in the previous work but with lower initial gas masses. Otherwise we keep the physics included in the simulations exactly the same as in the previous work, so the only parameter which is varied is the gas mass. We could have rerun the original galaxy simulations with a lower gas mass; however, as well as avoiding running multiple whole galaxy simulations, the approach here allows us to compare simulations where the initial conditions still reflect large scale processes such as spiral arms but are identical except for the initial densities. Rerunning the whole galaxy would likely strengthen the conclusions from our different density runs, as, for example, the gas would likely also be less concentrated in the midplane, with less cold gas, further increasing the difference between lower and higher density models. \begin{table*} \begin{tabular}{c|c|c|c|c|c} \hline Model & Region & Gas mass & Feedback & Time evolved & Time of first\\ & & ($10^5$ M$_{\odot}$) & & (Myr) & supernova (Myr) \\ \hline M1R1 & 1 & 1 & N & 40 & - \\ M1R1FB & 1 & 1 & Y & 40 & 6.26 \\ M5R1 & 1 & 5 & N & 4 & - \\ M5R1FB & 1 & 5 & Y & 4 & (4.73) \\ M5R2 & 2 & 5 & N & 6.5 & - \\ M5R2FB & 2 & 5 & Y & 6.5 & 5.1 \\ M25R1 & 1 & 25 & N & 1.2 & - \\ M25R1FB & 1 & 25 & Y & 1.2 & (4.13) \\ M25R2 & 2 & 25 & N & 4.5 & - \\ M25R2FB & 2 & 25 & Y & 4.5 & 4.35\\ \hline \end{tabular} \caption{Table showing the different simulations presented. 
The lower four were all shown in \citet{Dobbs2022}. As indicated in the Table, some simulations run past the point where supernovae occur, whilst others do not reach the point where supernovae start to occur.} \label{tab:simulations} \end{table*} \subsection{Details of simulations} The physics included in the simulations is essentially the same as in \citet{Dobbs2022}, except for the inclusion of supernovae, but we include a summary of the previous methods here as well. We use the Smoothed Particle Hydrodynamics (SPH) code sphNG to carry out our simulations. The code is based on an original version by \citet{Benz1990}, but has since been substantially modified to include sink particles, cooling and heating, and stellar feedback. We model the whole stellar disc of the galaxy using star particles, but only include gas particles in the regions we select. The halo of the galaxy is represented by an NFW potential \citep{NFW1997}. For the gas we include self-gravity, and heating, cooling, and H$_2$ and CO chemistry according to \citet{Dobbs2008}, originally from \citet{Glover2007}. Sink particles are inserted once gas reaches a certain density and is converging and bound, according to \citet{Bate1995}. We use the same sink criteria and parameters as the Set 2 (highest resolution) simulations from \citet{Dobbs2022}. Sinks are created for densities $>1000$ cm$^{-3}$ if they meet the above criteria, but similarly to \citet{Dobbs2022}, we enforce sink creation if densities reach $10^5$ cm$^{-3}$. The sink radius is set to 0.1 pc, and the merger radius to 0.001 pc. Sink masses can be of order 10 M$_{\odot}$ in our lowest mass simulation, thus potentially representing a single very massive star, but more likely a number of low mass stars. In our highest mass simulations (i.e. M25R1, M25R1FB, M25R2 and M25R2FB shown in \citealt{Dobbs2022}), the sinks typically represent hundreds of solar masses and therefore sample a full IMF of stars.
\subsubsection{Stellar feedback} Once sinks are formed, they are allocated a population of stars according to a Kroupa \citep{Kroupa2001} Initial Mass Function (IMF). Stars above a given mass are treated as ionising sources. The photoionisation scheme for our models is the same as described in \citet{Dobbs2022} and \citet{Bending2020}. The ionisation is calculated using a line of sight method, which determines the change in ionisation fraction for every gas particle which lies within a given distance (see below) of each sink particle. Column densities are computed by summing over all particles whose smoothing length overlaps with the line of sight. Specifically, the ionisation fraction is evolved as \begin{equation} \begin{aligned} \frac{{\rm d}H_{\rm II}}{{\rm d}t} = \frac{h^2}{r^2} \left( \frac{Q_{\rm H}}{4 \pi} \right. & - \int^{r}_{0} r^{\prime 2} n(r^\prime)^2 \alpha_{\rm B} {\rm d}r^\prime \\ & - \left. \frac{1}{\delta t} \int^{r}_{0} r^{\prime 2} n(r^\prime) [1-H_{\rm II}(r^\prime)] {\rm d}r^\prime \right), \label{eq:evol_ionisation} \end{aligned} \end{equation} where $h$ is the smoothing length, $Q_{\rm H}$ is the ionising flux, $n$ is the number density, $\alpha_B$ is the case B recombination coefficient (here $2.7\times 10^{-13}$ cm$^3$ s$^{-1}$), $\delta t$ is the time interval and \ion{H}{II} is the ionisation fraction of the gas (from 0 to 1). Gas which is ionised is heated to a temperature of $10^4$ K. As in \citet{Dobbs2022}, the ionisation is only calculated out to a certain distance to save computational time; this distance is roughly half the size of the simulated region. As in \citet{Dobbs2022}, we use an efficiency of 50\% such that half the mass of sinks is converted to stars, which correspondingly sets the number of massive stars undergoing feedback.
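As an illustration, the discretised update implied by Equation~\ref{eq:evol_ionisation} can be sketched as follows. This is a minimal standalone sketch, not the sphNG implementation: the line-of-sight integrals are approximated by sums over radial shells, and the function name and shell discretisation are our own assumptions (only the value of $\alpha_{\rm B}$ is taken from the text).

```python
import numpy as np

ALPHA_B = 2.7e-13  # case B recombination coefficient [cm^3 s^-1], value from the text

def d_ionisation_fraction(h, r, Q_H, shell_r, shell_n, shell_HII, dt):
    """Sketch of the line-of-sight update: the source term Q_H/(4 pi)
    minus the recombination integral and the integral over gas newly
    ionised in the last interval dt, both evaluated as simple sums
    over radial shells along the ray from the source to the particle."""
    dr = np.gradient(shell_r)                       # shell thicknesses
    recomb = np.sum(shell_r**2 * shell_n**2 * ALPHA_B * dr)
    newly_ionised = np.sum(shell_r**2 * shell_n * (1.0 - shell_HII) * dr) / dt
    return (h**2 / r**2) * (Q_H / (4.0 * np.pi) - recomb - newly_ionised)
```

With empty shells the rate reduces to the pure source term $(h^2/r^2)\,Q_{\rm H}/4\pi$, as expected.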
Unlike \citet{Dobbs2022}, we include this efficiency when working out stellar and cluster masses; this efficiency reflects that not all gas will be turned into stars on the scales we model, and means that when we compare the masses of clusters with the number of massive stars they contain, this will be consistent. However, there is an inconsistency in that 50\% of the gas should ideally be returned to the ISM, whereas it is retained in the sink particles. In addition to ionisation, we also include supernovae. Supernovae are inserted in the same way as described in \citet{Bending2022}, which in turn is similar to \citet{Dobbs2011}. We insert supernovae for stars $>18$ M$_{\odot}$. For each star $>18$ M$_{\odot}$ we determine the star's lifetime using the SEBA program \citep{PZ2012, Toonen2012}. Once stars reach the end of their lifetimes, they are assumed to undergo supernovae. The energy of each supernova is inserted as both kinetic and thermal energy and follows a snowplough solution. Supernovae do not occur until late in our simulations, and some simulations do not reach the point where supernovae occur. The time of the first supernova, and the total evolution time for the simulations, are shown in Table~\ref{tab:simulations}. Winds are not included; however, we found in \citet{Ali2022} that winds have little effect in our simulations. We note that in our sampling and feedback prescription, sinks may have masses which are much less than that required to resolve the IMF. However, as described in \citet{Bending2020}, we presample the stellar masses, similar to \citet{Geen2018}, and use the total mass of the simulation to assign stars of a given mass to sink particles (according also to the size of the sink particles). As such, the IMF is sampled from the collective mass rather than from individual sinks.
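The presampling of stellar masses can be illustrated with inverse-transform draws from a two-segment Kroupa (2001) IMF until a mass budget is spent. This is a hedged sketch, not the \citet{Bending2020} implementation: the function name, the mass limits, and the neglect of the brown-dwarf segment are our own simplifying assumptions.

```python
import numpy as np

def sample_kroupa(total_mass, m_min=0.08, m_break=0.5, m_max=120.0, seed=0):
    """Draw stellar masses [Msun] from a two-segment Kroupa (2001) IMF
    (slopes -1.3 below 0.5 Msun, -2.3 above, continuous at the break;
    brown dwarfs ignored) until the total mass budget is reached.
    Each segment is sampled by inverse-transform sampling."""
    rng = np.random.default_rng(seed)
    a1, a2 = -1.3, -2.3

    def seg_int(a, lo, hi):               # integral of m^a dm over [lo, hi]
        return (hi**(a + 1) - lo**(a + 1)) / (a + 1)

    # relative numbers of stars per segment, with continuity at m_break
    w1 = seg_int(a1, m_min, m_break)
    w2 = m_break**(a1 - a2) * seg_int(a2, m_break, m_max)
    p1 = w1 / (w1 + w2)

    masses = []
    while sum(masses) < total_mass:
        if rng.random() < p1:
            a, lo, hi = a1, m_min, m_break
        else:
            a, lo, hi = a2, m_break, m_max
        u = rng.random()
        # inverse CDF of a power law m^a on [lo, hi]
        masses.append((lo**(a + 1) + u * (hi**(a + 1) - lo**(a + 1)))**(1.0 / (a + 1)))
    return np.array(masses)
```

Stars above the feedback threshold (e.g. $>18$ M$_\odot$ for supernova progenitors, as in the text) can then be selected from the returned array and assigned to sinks.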
\citet{Liow2022} propose a method whereby sinks are grouped together to assign stars, which we do not use here; our method is more similar to the `All grouped' method in their paper, which reproduces the IMF well. \subsubsection{Initial conditions} We take our initial conditions from \citet{Pettitt2015} as described in \citet{Dobbs2022}. In the latter paper we performed two successive resimulations of regions in the galaxy such that we obtained a resolution suitable to resolve clusters. We selected two regions, one associated with two merging or joining spiral arms where the densities and converging velocities are high (Region 1) and another from a typical region along a spiral arm (Region 2) with moderate converging velocities. For both these regions, the gas densities were fairly high and massive clusters were formed. Here we run simulations with lower gas densities for comparison. We run a model of each Region with five times lower gas mass, and we also run a further model of Region 1 with 25 times lower gas mass. We list these different simulations in Table~\ref{tab:simulations}. The models M25R1FB, M25R1, M25R2FB and M25R2 are those presented in \citet{Dobbs2022}, whilst the other six simulations (which we primarily focus on) are presented here for the first time. The initial conditions for the new Region 2 simulations (M5R2 and M5R2FB) are exactly the same as those used in \citet{Dobbs2022}, so we can compare these directly with M25R2FB and M25R2. For the Region 1 simulations, the initial conditions from \citet{Dobbs2022} already contained sink particles, so we re-ran the first resimulation (denoted R1z85sI in \citealt{Dobbs2022}) with five times lower mass, and used these as initial conditions for the Region 1 simulations presented here (M1R1, M1R1FB, M5R1, M5R1FB). Unlike \citet{Dobbs2022}, the initial conditions for the Region 1 simulations presented here contain no initial sink particles.
All the new Region 1 simulations here start with the same initial conditions, except that the particle masses differ. The initial conditions will be slightly different from M25R1 and M25R1FB, but we only use these previous simulations for Region 1 to compare the star formation rates; the rest of the analysis does not involve those simulations. In all simulations we include 1.1 million particles to model the stellar disc. The initial numbers of gas particles for the Region 1 and Region 2 simulations are 6\,126\,990 and 5\,025\,840 respectively. The mass of the particles is 0.1 M$_{\odot}$ in the models where we decrease the mass by a factor of 5 (M5R1, M5R1FB, M5R2 and M5R2FB), and 0.02 M$_{\odot}$ in M1R1 and M1R1FB. In Figure~\ref{fig:tanfigure} we indicate where our simulations would lie in a plot of surface density versus mass, which is a simplified version of the figure shown in \citet{Tan2013}. We only used the simulations with feedback for this figure. The filled regions show the range of gas surface densities in the simulations, calculated at times midway through, when stars are forming and there is still dense gas in the vicinity of the clusters. The width in the $x$ direction simply spans one order of magnitude, up to the total gas mass in the simulation. The dashed box regions show the typical range of cluster masses and surface densities. As indicated by Figure~\ref{fig:tanfigure}, our high mass simulation M25R2FB encompasses massive clusters in the LMC, Milky Way and M82, such as R136 and Quintuplet. The lowest mass simulation (M1R1FB) lies in the same regime as the ONC, and the lower mass clusters indicated by the SF clumps. \section{Results} \subsection{Evolution of models} We show the evolution of the Region 1 simulations in Figure~\ref{fig:region1}; the lowest gas mass simulations (M1R1 and M1R1FB) are presented in the top panels and the moderate gas mass simulations (M5R1 and M5R1FB) in the lower panels.
In each case the model with feedback included is shown on the top. For both we see that feedback has a significant effect on both the gas and the distribution of stars. For example, if we look at the middle ($t=4$ Myr) panels for the low gas mass simulations, with feedback included we see large regions which are relatively devoid of gas, whilst the remaining gas is arranged into fairly dense, sharp features. By contrast, without feedback the gas is more uniformly distributed. At first glance, the ionization appears to have a greater effect in the lowest mass simulation. However, if we compare at an equivalent time, e.g. the 4 Myr panels, at the low mass (top panels) the ionization produces regions almost devoid of gas, whereas at the medium mass (lower panels) the ionization produces large amounts of low density diffuse gas; in both cases the ionization has a strong effect on the gas morphology. In the low gas mass model, by 7 Myr, ionization has largely emptied the regions around the clusters of gas. Gas is present further away, so the rest of the evolution of the cluster is largely due to the N-body dynamics of the sinks. The main difference we see in the stars is that the clusters appear larger (i.e. over a larger size scale) and more dispersed in the models with feedback compared to those without. Particularly in the medium gas mass models, the clusters cover a larger area. We also see for the low gas mass models (top panels) that at 7 Myr there are around four apparent groups of stars which are quite dispersed, whereas in the no feedback case we see three or possibly four very concentrated small clusters. In Figure~\ref{fig:region2} we show the evolution of the Region 2 simulations, showing the medium gas mass runs (M5R2 and M5R2FB) and the high gas mass models (M25R2 and M25R2FB); again both are shown before supernovae occur. Again we see that the clusters are spatially much more extended with feedback.
We also see the impact of the ionization producing very low density regions in the medium mass models (top) and creating large amounts of diffuse gas in the high mass models. We also see a difference between the medium and high gas mass runs. At 2.5 Myr in the medium gas mass case, there are clear separate filaments in which clusters are forming. At 4.5 Myr, these separate filaments and clusters are still apparent. However in the high mass models (M25R2 and M25R2FB), these filaments have more or less merged into one single structure. So in the higher mass model, the gravity from the higher gas mass, as well as the initial gas velocities, produces a different morphology and clusters which are spatially closer together, and which are likely in the process of merging. \subsection{Star formation rates} \label{sec:sfr} We show the mass of stars formed versus time for the different models, with and without feedback, in Figure~\ref{fig:massofstars}. In all cases, feedback reduces the mass of stars formed. We also see that the rate at which stars form slows down with time. In the lowest mass model with feedback, M1R1FB, star formation has largely ceased by around 7 Myr, though in the other models star formation is ongoing. We see that whilst star formation ceases in model M1R1FB, star formation continues throughout the duration of the equivalent model with no feedback, M1R1. This is because in the model with feedback, gas has been mostly dispersed by 7 Myr and is low density, whereas without feedback, there is still quite a lot of dense gas in and around the central clusters. Consequently, after 20 Myr, the mass of stars formed in the model with feedback is around half that of the model without feedback. The reduction in mass with feedback included is lower in the other models; however, these run for shorter times. If we compare at the same time (e.g. 5 Myr) then the difference with and without feedback is fairly similar for all the models.
This suggests that the main difference between the models is the timescale over which star formation occurs. For the higher mass models, star formation occurs over relatively short periods before ionisation has much effect, whilst for the low mass models, there is opportunity for massive stars to form and significantly ionise the surrounding gas before further star formation occurs. Similarly to the results in Figures~\ref{fig:region1} and \ref{fig:region2}, the ionisation has a similar impact in the models at different times, but for the high mass models, large amounts of star formation have already occurred by later times. We further compare the reduction in the mass of stars formed with and without feedback in Figure~\ref{fig:massratio}, where we plot the mass of stars in the model with feedback divided by the mass of the stars in the equivalent model without feedback, versus time. This figure supports the impression from Figure~\ref{fig:massofstars} that the effect of feedback does not significantly differ between the different models at a given time frame, although for the low mass models, feedback has an impact straight after star formation, possibly having a larger effect on dense gas around new sink particles compared to the higher mass cases (in the M5R2FB model the ionization actually has a slight net triggering effect at early times, though there has been relatively little star formation at this point). \subsection{Stellar clusters} \subsubsection{Clusters, OB associations, and the approach used here} In the next sections we look at the evolution of clusters based on i) identifying clusters at different time frames (Section~\ref{sec:cluster_time}), and ii) identifying clusters at one time frame and following the constituent sink particles over time (Section~\ref{sec:partevol}). For i) we refer to these simply as clusters.
For ii) we identify objects where the sink particles stay clustered together and where they disperse, grouping these into clusters which remain or which disperse. For identifying clusters, we simply use a clustering algorithm, as typically used by observers (e.g. \citealt{2022A&A...661A.118C}, \citealt{2021A&A...646A.104H}, \citealt{2020A&A...635A..45C}, \citealt{2019ApJS..245...32L}, \citealt{2017A&A...600A.106C}). As such the clusters do not necessarily need to be gravitationally bound. OB associations are groups of stars which are thought to originate from the same star formation event but are spread out on the sky. In their review, \citet{Wright2022} suggest that densities of Galactic OB associations are typically $0.001-0.1$ M$_{\odot}$ pc$^{-3}$, based on their typical sizes and masses. Although the OB associations listed in \citet{PZ2010} tend to be fairly compact, other authors identify OB associations as spatially larger. The Galactic OB associations in \citet{Wright2020} can be 100 pc or more in size, and contain within them open clusters or smaller OB associations, whilst OB associations listed in external galaxies are typically $\sim$ 80 pc in size (e.g. \citealt{Elmegreen1999}). We simply compare the properties of our star forming regions with the characteristics of Galactic OB associations, using \citet{Wright2020} as a basis (Section~\ref{sec:OBassociations}). \subsubsection{Evolution of clusters} \label{sec:cluster_time} Our FoF algorithm is similar to the density-based clustering algorithm DBSCAN \citep{Ester1996} but slightly simpler and easier to automate. Sinks which are within a given distance of each other are selected and assigned into groups. We choose a distance of 2 pc, which is larger than the value of 0.5 pc used in \citet{Dobbs2022}, for a few reasons. Firstly, this distance identifies clusters which would be picked out by eye across the different data sets.
Secondly, when following the clusters over time, their relative size, shape, positions and memberships are liable to change, whilst the number of sinks unassociated with clusters increases (see e.g. Fig.\,\ref{fig:region1}). With this distance the algorithm is able to correctly identify a cluster at an earlier time and then find the same cluster (as would be picked out by eye) at later times, after the changes. Thirdly, we found that 2 pc was reasonable using the approach of \citet{Rahmah2016} to find the optimal distance scale of a dataset. Their approach works by calculating the distance from every star to its $N^{\rm th}$ nearest neighbour (here, $N=20$), then sorting these distances from lowest to highest. Plotting order number versus distance, the scale is defined as the distance at which the maximum change in gradient of the slope occurs (see also \citealt{Buckner2022}). We checked the results of our algorithm against those obtained using DBSCAN, finding them to be very similar; often the clusters identified are identical. When deciding the best algorithm to use we also considered HDBSCAN \citep{HDBSCAN_ref}, a hierarchical implementation of DBSCAN which is better at finding clusters in varying density datasets. Unlike the original implementation, HDBSCAN does not require the user to specify a distance scale for a dataset, only the minimum cluster size (number of members). We found it produced clusters which agreed well with what would be picked out by eye, but as the optimum minimum cluster size needed to be found for each dataset, it was not ideal for our intended purpose. Furthermore, in some instances additional settings (e.g. a minimum distance between clusters for them to be considered separate) were needed to produce sensible results, eliminating the advantage of the algorithm that it does not require user-defined distance scales. As such, we decided against using HDBSCAN in favour of our FoF algorithm.
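A minimal version of such a friends-of-friends grouping, with a fixed linking length and a minimum membership, can be sketched as below. This is our own illustrative implementation (a k-d tree plus connected components), not the code used in the paper, and the scale estimate is a simplified reading of the \citet{Rahmah2016} knee criterion using a discrete second difference.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def fof_clusters(pos, linking_length=2.0, min_members=10):
    """Friends-of-friends grouping of sink positions (N x 3 array, pc):
    link all pairs closer than `linking_length`, then return the
    connected components with at least `min_members` sinks."""
    n = len(pos)
    pairs = np.array(list(cKDTree(pos).query_pairs(linking_length)))
    if len(pairs) == 0:
        return []
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return [np.flatnonzero(labels == l) for l in np.unique(labels)
            if np.sum(labels == l) >= min_members]

def knee_scale(pos, N=20):
    """Simplified Rahmah et al. (2016)-style scale estimate: sort the
    distances to each star's Nth nearest neighbour and return the
    distance where the change in gradient (second difference) peaks."""
    d, _ = cKDTree(pos).query(pos, k=N + 1)   # column 0 is the point itself
    dN = np.sort(d[:, -1])
    return dN[np.argmax(np.diff(dN, 2)) + 1]
```

Applied to two well-separated groups of sinks, `fof_clusters` recovers each group as one cluster, with no linking across the gap.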
To be considered a cluster by our FoF algorithm, spatial groupings are required to have more than 10 sink particles, although we reduced this to 7 for the M1R1 run because there were so few sink particles in this model. In all models, the number of sinks was larger (but their masses smaller) in the models with feedback. We show the evolution of the clusters as cluster merger trees (see also \citealt{Guszejnov2022}), similar to galaxy merger trees, in Figure~\ref{fig:mergertree}. We follow the evolution of clusters over time by running our algorithm at different timeframes, and identifying which particles lie in the clusters at different times (as seen in the appendix, the clusters' evolution also makes sense by eye). In Section~\ref{sec:partevol}, we take an alternative approach, where we take clusters at a particular time, and follow what happens to the constituent sink particles over time. Figure~\ref{fig:mergertree} shows that mergers and splitting of clusters (as denoted by the clustering algorithm picking up multiple versus single clusters at adjacent times) are commonplace in all models. There is also a tendency for more mergers and splits to occur in the models with feedback. As shown in \citet{Dobbs2022}, the massive clusters tend to form via the merger of a number of smaller clusters. The lower mass clusters, unless they have recently split from more massive clusters, tend to be clusters which have formed with a lower mass and stayed low mass for the duration of the simulation. There are more mergers in the simulations with higher gas masses, where gravity is stronger, driving additional mergers as well as promoting the formation of more clusters. As indicated by the figures in Appendix~\ref{sec:appendix1}, clusters are not spherically symmetric (though they become more so with time), and sinks in the outer parts can be more loosely associated and may be unbound from the rest of the cluster and later split off.
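Tracking clusters between snapshots to build such merger trees reduces to matching sets of sink-particle IDs across timeframes. The following is a minimal sketch of our own (the function name and the overlap threshold are assumptions, not the paper's code):

```python
def link_clusters(clusters_t0, clusters_t1, min_shared=5):
    """Link clusters between two snapshots by shared sink-particle IDs.

    `clusters_t0` and `clusters_t1` are lists of sets of particle IDs.
    Returns (i, j) edges where cluster i at t0 shares at least
    `min_shared` members with cluster j at t1. A t1 cluster with
    several progenitors marks a merger; a t0 cluster with several
    descendants marks a split."""
    return [(i, j)
            for i, a in enumerate(clusters_t0)
            for j, b in enumerate(clusters_t1)
            if len(a & b) >= min_shared]
```

For example, two clusters at $t_0$ whose members all end up in a single cluster at $t_1$ produce two edges into that cluster, which the merger tree renders as a merger.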
Groups of stars which are not bound to the cluster could arise because they are spatially coincident: either they simply form close to the cluster, or they dynamically come close together, without being bound. The final cluster masses (at least for the more massive clusters) mostly tend to be similar with and without feedback, which is perhaps not surprising given that the total stellar masses are also similar (Figure~\ref{fig:mergertree}). The exception is the lowest gas mass models, M1R1FB and M1R1. At the end point shown in Figure~\ref{fig:mergertree}, the model without feedback, M1R1, produces a cluster which is an order of magnitude more massive than any of the clusters which occur with feedback. We also see that typically there are more clusters in the model with feedback; for example, after 15 Myr, with no feedback there is only one cluster compared to typically three with feedback. Even allowing for the caveat that the sink particle numbers are different, by eye there appear to be more clusters in the model with feedback at later times, whereas in the model with no feedback, the sinks all merge into the cluster seen at $x=3.95$ and $y=4.315$ in Figure~\ref{fig:region1} at the 7 Myr time (see also Figures~\ref{fig:app1} and \ref{fig:app2} in the appendix). Even though at earlier times in model M1R1FB there is a more massive cluster, this is fairly asymmetric, and at least at 5 and 6 Myr it is bipartite, and simply splits apart by 7 Myr. Some clusters cannot be followed to the final time; sometimes these are ones which occur as a result of a split and then disperse. In addition, in some of the no feedback models, the number of particles in the clusters is low, which artificially leads to cluster dispersal (see Section~\ref{sec:partevol}). Whether stars form in clusters or not has been the subject of considerable debate \citep{Bressert2010,Ward2018,Ward2020,Grudic2021}.
In Figure~\ref{fig:fraction} we show the fraction of stars, by mass, which lie in clusters over time for each model. All show a decreasing fraction of stars with time. The high fractions at early times suggest that stars do tend to form in clusters, but due to the dynamics, many are ejected. Some are only loosely associated with the cluster and simply not picked up by the nearest neighbour algorithm at later times. The low mass models are the only models which run to longer timescales. In these cases, the model with feedback (M1R1FB) stays around 20\% up to 40\,Myr (the model with no feedback is higher, at $\sim30\%$). This is not dissimilar to the fiducial simulation of \citet{Grudic2021}, where the fraction is 10\%, though they see a wide range of fractions of stars in bound clusters across their models. \subsubsection{Cluster properties} \label{sec:properties} In Figure~\ref{fig:brownfigure} we show the masses and radii of clusters formed in all the simulations except M25R1 and M25R1FB. We use the half mass radius, as is done in observations; in \citet{Dobbs2022}, we did not use the half mass radius as there were too few stars, but the larger spatial scale for our clustering algorithm here is probably a better choice and means that taking the half mass radius is more practical\footnote{The results here with the half mass radius are similar to taking a lower spatial scale in \citet{Dobbs2022} with the full radius.}. The clusters are taken from the end points of the simulations, except for the low mass simulations, where we take a time of 9 Myr, and are overplotted on observed clusters from \citet{Brown2021}. The observed clusters are taken from a sample of nearby galaxies (LEGUS). Figure~\ref{fig:brownfigure} shows that the clusters from the simulations with feedback match the observations much better than those without feedback. We see that without feedback, the cluster radii are on average too small compared to the observed clusters.
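The half mass radius can be computed straightforwardly from sink positions and masses. The following is an illustrative sketch of our own, assuming the radius is measured from the mass-weighted centre of mass:

```python
import numpy as np

def half_mass_radius(pos, mass):
    """Radius from the centre of mass enclosing half of the total mass:
    sort members by radius and locate where the cumulative mass first
    reaches 50 per cent of the total."""
    com = np.average(pos, axis=0, weights=mass)
    r = np.linalg.norm(pos - com, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(mass[order])
    return r[order][np.searchsorted(cum, 0.5 * cum[-1])]
```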
Figure~\ref{fig:brownfigure} confirms what we could see by eye in Figures~\ref{fig:region1} and \ref{fig:region2}, that the clusters are larger in size with feedback. This increase of radius with feedback is expected, since as gas is dispersed away from the cluster, the potential changes, and the cluster will then expand \citep{Geyer2001,Pfalzner2011}. We see significant expulsion of gas from comparing the gas mass within the vicinity of the cluster with and without feedback (see Section~\ref{sec:gas}). We also examined whether the velocity dispersion of the stars contributed to expansion, but there was not a significant difference with and without feedback. We compared radii of clusters with and without feedback at different times, and found that they start to diverge when clusters are around 1-1.5 Myr old. We also see that there is a slight tendency for higher mass clusters to have larger radii. \citet{Brown2021} find that the radii (in pc) of the clusters over their full sample vary with the mass (in M$_{\odot}$) according to \begin{equation} R=2.55\bigg(\frac{M}{10^4}\bigg)^{0.24}. \end{equation} For our models with feedback, we find a relation of \begin{equation} R=2.26\bigg(\frac{M}{10^4}\bigg)^{0.27}, \end{equation} in excellent agreement with the data. By contrast, without feedback, we find a relation of \begin{equation} R=1.09\bigg(\frac{M}{10^4}\bigg)^{0.085}. \end{equation} The simulated clusters still have a slightly larger spread in mass and radius than the observations, even with the better matching clusters from the simulations with feedback. The higher masses could reflect that our star formation rates are too high; in reality there would likely be a lower efficiency of star formation when we form sink particles. The observations are also not complete, particularly at lower masses and radii. Consequently we cannot compare the relative numbers of different mass clusters.
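Power-law relations of the form $R = A\,(M/10^4)^{b}$, as quoted above, can be recovered from a set of cluster masses and radii by a least-squares fit in log-log space. This is a minimal sketch and not necessarily the fitting method used for the quoted relations:

```python
import numpy as np

def fit_mass_radius(M, R):
    """Fit R = A (M / 1e4)^b by linear least squares in log-log space.
    M in Msun, R in pc; returns (A, b)."""
    b, logA = np.polyfit(np.log10(M / 1e4), np.log10(R), 1)
    return 10**logA, b
```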
In Figure~\ref{fig:massradevol} we take ten clusters from the different simulations which we are able to follow for a significant fraction of the simulation (and whose evolution is independent of each other, so for example not the result of splitting from another cluster which is shown), and show how their mass and radius evolve over time. All clusters are from the simulations with feedback. Taking clusters where we can follow the evolution biases us towards clusters which end up relatively massive. As expected, most clusters show a tendency to increase in mass and radius and thus move diagonally upwards across the plot. The observed relation between mass and radius likely reflects this typical evolution. The increase in mass comes from additional star formation and mergers of clusters, whilst the increase in radius likely comes from the clusters tending to be virialised, so that the radius increases as they grow in mass, plus the effect of feedback increasing the radius. However, not all clusters follow a path of increasing mass and radius. We see some, e.g. the grey and coral lines, where the radius increases but the cluster does not experience a significant increase in mass. We would expect some clusters to behave like this, both to fill the parameter space of observed clusters with smaller masses and larger radii, and because we know that OB associations with large radii and small masses exist in the Galaxy (the black line is from the M1R1FB model). The cluster represented by the green line shows quite different behaviour: its radius decreases with time. This is the result of a cluster splitting into two subclusters; the cluster grows by attaining more stars or merging with another group of stars, but then splits apart.
We also looked at the kinetic and gravitational energy (see also \citealt{Dobbs2022}), but found that the ratio of kinetic to gravitational energy was strongly concentrated around 0.5 ($0.5 \pm 0.05$), indicating the clusters were virialised, and showed no particular trends. We found that the 1D expansion / contraction velocity was a slightly better predictor of cluster evolution, and we discuss this further in the next section. \subsubsection{Evolution of constituent sink particles in a cluster} \label{sec:partevol} In this section, rather than identifying clusters at different times and determining which correspond to the same object (constituting a largely similar but not necessarily identical set of constituent sink particles), we simply take the particles which constitute clusters at one time and show them at a later time. This has the advantage that we can follow any clusters which disperse as well as bound clusters. In Figure~\ref{fig:partevol1} we show clusters from the low mass Region 1 simulations, M1R1 and M1R1FB, at a time of 7 Myr, and the locations of these particles at a time of 16 Myr. We also show the average expansion / contraction velocity calculated over all particles in the clusters at 7 Myr. This velocity is calculated as \citep{Kuhn2019,Buckner2022} \begin{equation} v_{\rm out}=\mathbf{v} \cdot \hat{\mathbf{r}}, \end{equation} where $\mathbf{v}$ and $\hat{\mathbf{r}}$ are calculated relative to the centre of mass of the cluster. We show all six clusters found from the nearest neighbour algorithm at 7 Myr for the M1R1FB model. At 16 Myr, it is evident that in the model with feedback (M1R1FB) these clusters have expanded considerably. The clusters which are picked out by the algorithm at 16 Myr tend to be the cores of the distributions highlighted at 7 Myr, plus possibly stars previously identified in another cluster. Many stars of the `original' clusters are now at larger radii from these cores. The clusters shown in magenta and blue have more or less dispersed by 16 Myr.
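The mean expansion / contraction velocity defined above can be evaluated as in the following sketch, which assumes positions and velocities are taken relative to the mass-weighted centre of mass, with positive values indicating expansion:

```python
import numpy as np

def expansion_velocity(pos, vel, mass):
    """Mean outward velocity v_out = <v . rhat>: project each member's
    velocity, in the cluster's centre-of-mass frame, onto the unit
    radial vector from the centre of mass, then average over members."""
    com = np.average(pos, axis=0, weights=mass)
    vcom = np.average(vel, axis=0, weights=mass)
    dr = pos - com
    rhat = dr / np.linalg.norm(dr, axis=1, keepdims=True)
    return np.mean(np.sum((vel - vcom) * rhat, axis=1))
```

A purely radial (Hubble-like) outflow gives a positive $v_{\rm out}$, while a contracting cluster gives a negative value.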
We find that poorly resolved clusters ($<20$ particles) tend to disperse; these are stated explicitly in the text as poorly resolved. By the end of the simulation (40 Myr, not shown), there are three clusters picked out with the algorithm. Two contain constituent stars from the black, cyan and blue clusters at 7 Myr. A third contains constituent stars from the cluster shown in grey. The clusters shown in green (which is poorly resolved) and magenta (which is well resolved) at 7 Myr have dispersed, i.e. the clusters picked out at 40 Myr do not contain any of these sink particles. The velocities for the clusters in Figure~\ref{fig:partevol1} (top panel) are all fairly low, and most are positive or zero, indicating expansion or no evolution. The cluster shown in blue has the largest expansion velocity, and by eye this cluster does appear to cover the greatest spatial extent at 16 Myr. Nevertheless some sink particles from this cluster still belong to clusters identified at 16 Myr, and as discussed above, even at 40 Myr. In the lower panels of Figure~\ref{fig:partevol1}, we show clusters at the same times from the model with no feedback (M1R1). Here only two clusters are picked out. The cluster in blue has a high positive velocity but is poorly resolved, and is dispersed after 16 Myr, though by this time it has actually collided with the cluster in green. The one cluster picked out at 16 Myr comprises some sinks from the 7 Myr clusters, but also newly formed sinks, since as we show in Figure~\ref{fig:massofstars}, star formation continues in the no feedback case, unlike the feedback model. The cluster in green (well resolved) remains more intact, and has a lower expansion velocity, and also a lower virial parameter. \begin{table} \begin{tabular}{c|c|c} \hline Model & No. clusters which & No.
clusters \\ & remain in some form & which completely disperse \\ \hline M1R1FB & 4 & 1 \\ M1R1 & 1 & 0 \\ M5R2FB & 7 & 0 \\ M5R2 & 5 & 0 \\ M5R1FB & 4 & 4 \\ M5R1 & 2 & 4 \\ \hline \end{tabular} \caption{Table showing the resultant evolution of clusters in the different simulations, as measured by the eventual positions of the particles which make up clusters at an earlier time. Clusters where just some core remains (more typical in the M1R1FB model) are listed as `remain in some form'. The table only includes clusters with $>20$ sink particles.} \label{tab:cluster_evolution} \end{table} In Figure~\ref{fig:partevol2}, we show clusters from the M5R2 and M5R2FB runs. Because there are many more clusters in M5R2FB, for clarity we only study clusters with masses greater than $2\times 10^3$ M$_{\odot}$, which means that the selected clusters have more particles (at least 39). For the M5R2 model, all clusters are $>2\times 10^3$ M$_{\odot}$ and the minimum number of sink particles is 11. Although the clusters in the feedback model are spatially larger than without feedback, we otherwise see much greater similarity between the clusters selected with and without feedback than in the lower mass M1R1 models. At 4 Myr (left panels) the same clusters are selected in each simulation. Thus feedback appears to have less effect on the clusters, as is also the case for the other higher mass simulations. We also see from the velocities that there are more negative values, indicative of contraction, compared to the lower mass models. In the M5R2FB model, the velocities are all positive or only borderline negative. Most of the clusters are readily identifiable at the 6 Myr timeframe, and most are still fairly compact. We see in the feedback model that some of the clusters exhibit some expansion (e.g. green, grey), whilst others appear very similar (e.g. cyan, brown), and these mostly exhibit negative velocities at both time frames. 
The cluster shown in grey has positive velocities at both times, and shows more evident dispersal. In the model without feedback (lower panels), the velocities appear to be a fairly good indicator of short term cluster evolution. The clusters shown in green and blue have high negative velocities and visually appear to contract slightly. The clusters shown in grey and cyan have high positive velocities, are poorly resolved and disperse rapidly, so the velocities are in agreement with the evolution. We also performed the same analysis on clusters from the M5R1FB and M5R1 models at timeframes of 2 and 4 Myr, though we do not show the plots. Between 2 and 4 Myr, nearly all the clusters picked out undergo collisions or interactions with each other. Hence the velocity analysis is not very meaningful, because the clusters do not evolve in isolation at all (though to some extent this is also true of M1R1FB). Unlike the Region 2 models, we do see some examples of clusters identified at 2 Myr for which, by 4 Myr, the constituent sink particles are fairly widely dispersed (and certainly no longer identifiable as clusters). However, due to the compression of the gas, the clusters all end up quite close together, and some constituent sinks may have joined another cluster. There is some suggestion of similar behaviour to M1R1FB, whereby the region is compressed together, the clusters form and merge or collide, and in the resulting stellar population some clusters remain as clusters, whereas others have dispersed or are dispersing. For model M5R2FB, where the dynamics are less strong and the clusters interact less, the clusters seem to remain intact at least for the duration over which we study them. We also show the evolution of clusters from the different models in Table~\ref{tab:cluster_evolution}, indicating only those that are well resolved. Only the more dynamic, Region 1 simulations contain clusters which disperse. 
\subsubsection{OB associations} \label{sec:OBassociations} Given the small number of OB associations in our Galaxy with detailed information, we simply compare with those listed in \citet{Wright2020} for which we have information on mass, size, ages and number of subgroups. For our higher mass simulations, the stars tend to form in fairly bound clusters, roughly at similar times, and so are atypical of nearby OB associations. The M1R1FB region, however, has a lower stellar surface density ($\lesssim 0.1$ M$_{\odot}$ pc$^{-2}$), stars form over a prolonged time, and groups can be unbound themselves, and with respect to each other. We compare the region in M1R1FB with specific regions given in \citet{Wright2020} in Table~\ref{tab:OBassociations}. As indicated in Table~\ref{tab:OBassociations}, some nearby OB regions are larger, and correspondingly tend to be older than M1R1FB, whilst some lesser known ones tend to be smaller and have fewer subgroups. Of the observed associations we have extensive information for, our simulated region is probably closest to Orion Ia. We show a visual comparison with Orion Ia in Appendix B. M1R1FB is of similar mass and size to Orion Ia: its age spread is a little lower, it is slightly smaller, and there is likely more gas still in the vicinity of Orion Ia. These differences could be related to the amount of feedback in the simulation, e.g. with less feedback, star formation may be more prolonged and larger volumes of gas would remain. Prolonged gas inflow into the region could also lead to more gas being present at later times. With no feedback at all, however (model M1R1), there is simply one cluster, which is not at all like Orion Ia, or indeed any of the other listed associations. \begin{table*} \begin{threeparttable} \begin{tabular}{c|c|c|c|c|c} \hline Region & Mass & Size & No. 
& Age & Comparison \\ & (M$_{\odot}$) & (pc) & subregions & spread (Myr) & to M1R1FB \\ \hline Sco OB2 & 4000 & $>100$ & 3 & $\sim 20$ & too large, star formation occurring over longer time \\ Orion Ia & 8500$\pm$1500 & $100$ & $\sim5$ & $\sim12$ & slightly bigger, slightly larger age spread \\ Vela OB2 & $>2300$ & - & 8 & 40 & older, more subgroups \\ Cygnus OB2 & 16500 & 200 & - & $\sim 7$ & too big \\ Perseus OB2 & 6000 & 40 & - & $>5$ & too small \\ Carina OB1 & $>2\times 10^4$ & $\sim 100$\tnote{\textdagger} & 9 & - & too massive \\ Lacerta OB1 & 1 O star & - & 2 & few Myr & too small \\ \hline M1R1FB & $10^4\pm1500$ & 70 & $\sim4$ & $\sim 7$ & - \\ \hline \end{tabular} \begin{tablenotes} \item[\textdagger] \citet{Melnik2020} \end{tablenotes} \end{threeparttable} \caption{Table showing observed OB associations \citep{Wright2020}, their properties and a comparison to the stellar distribution in model M1R1FB. The observed OB association which best resembles M1R1FB is Orion Ia (see Appendix~\ref{sec:appendix2}).} \label{tab:OBassociations} \end{table*} \subsubsection{Evolution of gas} \label{sec:gas} In this section we examine the evolution of gas, and also of the stars, within a particular radius of the clusters in the simulations. This is to see how quickly, and how much, photoionization clears away the gas from clusters, and its effect on the stellar mass. In \citet{Dobbs2022}, we chose radii of 1 and 2 pc. Here we choose a larger radius of 5 pc, partly because the ionisation has a stronger effect on the surrounding gas here, and also because the lower densities, lower amounts of star formation, and smaller number of clusters mean that in most cases (though not always) a radius of 5 pc will contain just one cluster. We show results for a sample of clusters from the M1R1 and M1R1FB (top), M5R2 and M5R2FB (middle) and M5R1 and M5R1FB simulations (lower panels) in Figure~\ref{fig:gasplot}, which correspondingly have the lowest to highest mass clusters forming. 
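The aperture measurement used throughout this section, the gas or stellar mass inside a fixed 5 pc radius of a cluster centre, amounts to a simple radial cut. A minimal NumPy sketch (function and variable names are ours, not the actual analysis code):

```python
import numpy as np

def mass_within(centre, radius, pos, mass):
    """Total mass of the particles lying within `radius` of `centre`.

    pos  : (N, 3) array of particle positions
    mass : (N,) array of particle masses
    """
    d = np.linalg.norm(pos - centre, axis=1)
    return float(mass[d < radius].sum())
```

Evaluating this separately for the gas and sink particles at each snapshot, about each cluster's centre of mass, would give aperture-mass curves of the kind described in this section.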
We only show clusters which can be followed for most of the simulation, exclude other clusters which merge or split with those already shown, and only include those with a clear counterpart in the no feedback model. The mass of gas within 5 pc is shown on the left, and the mass of stars on the right. Lines of the same colour represent the same cluster identified from models with and without feedback. Recall that in \citet{Dobbs2022}, where very massive clusters formed, we saw a relatively small reduction in the stellar mass, up to around 25\%, whilst the gas mass could decrease by up to an order of magnitude. Starting with the low mass simulations (top panels), we see that the inclusion of ionisation has a dramatic effect on the mass of gas within 5 pc. We find that within 5 Myr (from 2 to 7 Myr) the mass of gas within 5 pc drops by 1 or 2 orders of magnitude for the four clusters when ionisation is included. By contrast, with no ionisation, the gas mass within 5 pc stays relatively constant. The mass of stars within 5 pc is also substantially reduced, by a third or so at earlier times and by more later, again a substantially larger reduction than in \citet{Dobbs2022}. In the M5R2FB and M5R2 simulations (middle panels), the ionisation again has a clear effect on the gas, decreasing the gas mass by one and a half orders of magnitude for the clusters shown in red, green and blue over 3 or 4 Myr. The effect on the cluster in magenta is much less, though this cluster evolves for less time. Again, for all the clusters with ionisation, the gas mass decreases over time, whereas it stays more or less constant with no ionisation. The stellar mass shows a more moderate change: unsurprisingly there is minimal change for the cluster shown in magenta, though for the cluster shown in red, the mass with ionisation is less than half that without. 
Finally, the lower panels show the M5R1FB and M5R1 models, where higher mass clusters form, closer to, although still somewhat less massive than, those in \citet{Dobbs2022}. Again ionisation has a clear effect on the gas mass. The gas mass decreases by a factor of around 2 to 4 over a timescale of 2 to 2.5 Myr. This is a smaller reduction than in the other simulations, but over a shorter timescale. The gas mass in the models without ionisation shows some small decrease, possibly because comparably more gas mass has been converted into stars. The ionisation reduces the stellar mass by a factor of 2 for the clusters shown in red and green, but has minimal effect on the cluster shown in blue, which is also the most massive in any of the simulations. Overall there is a tendency for ionisation to have a greater effect on lower mass clusters. The trend is not completely clear because the timescales are longer for the lower masses. However, more gas is removed, and the stellar masses tend to be more reduced by ionisation after 3 Myr, for the lower mass clusters compared to the higher mass clusters. Generally, gas is expelled from the vicinity of clusters on Myr timescales, in agreement with observations. As discussed in Section~\ref{sec:sfr}, for the high mass clusters, stellar masses of $10^4$ or $10^5$ M$_{\odot}$ are accumulated before feedback has much effect, whereas for the lower mass clusters, mass is accumulated over a longer time and feedback has more chance to impact the cluster properties. \subsection{Impact of supernovae} \label{sec:supernovae} In the results presented so far, we include supernovae. We also repeated runs M1R1FB, M5R2FB and M25R2FB with only ionisation, starting from just before the first supernovae. Like \citet{Bending2022}, we did not switch ionisation off when stars were old enough to undergo supernovae (see Herrington et al., in prep. for this). We compare the M1R1FB, M5R2FB and M25R2FB models with and without supernovae in Figure~\ref{fig:supernovae}. 
The models are shown at their end points for M5R2FB and M25R2FB; M1R1FB is shown at 12 Myr, but the evolution of M1R1FB does not change significantly after 7 or 8 Myr, the gas simply becoming more diffuse. As seen in \citet{Bending2022}, the supernovae do not have a large impact and are secondary compared with the ionisation. The supernovae simply appear to fill the low density regions created by the ionisation with even lower density, hotter ($10^8$\,K) gas, marked explicitly for the M5R2FB and M25R2FB models. The supernovae appear to have a more marked effect in the lowest gas mass model, M1R1FB, which has had a similar number of supernovae to the M5R2FB model. The supernovae create a very clear cavity, and at the edges of this cavity the gas is denser and more compressed compared to the case with only ionisation. The size of this cavity is also quite large, of order 100 pc. The supernovae have a very minor impact on the star formation rate and clusters. The supernovae slightly increase the star formation rate compared to without supernovae; the density enhancement we see at the edges of the supernovae bubbles in Figure~\ref{fig:supernovae} can be enough to lead to slightly more stars. The supernovae can also slightly change the positions of the sinks, which changes the groupings of the sinks into clusters, but their spatial distribution is still fairly similar, and the overall properties and trends, e.g. the cluster radius--mass relation, do not change. \section{Conclusions} We have performed simulations of two sections along spiral arms with different initial gas densities, including photoionizing and supernova feedback. The initial conditions are taken from previous galaxy scale simulations, and exhibit converging flows in spiral arms. One region exhibits strongly converging flows, and the other moderately converging flows. 
We change the initial gas mass to run lower gas density simulations in which lower mass clusters form, to produce a population of clusters across a range of masses. We find that photoionising feedback has a notable effect on cluster radii, and is required to produce the observed cluster radius--mass relation (in agreement with \citealt{Hajime2022}). As photoionization clears away gas from the vicinity of the cluster, the gravitational potential is reduced and the cluster expands. Similarly, N-body simulations of clusters where gas is explicitly removed also show that the radius of the cluster thereafter increases \citep{Goodwin2006,Moeckel2010,Lughausen2012}. Supernovae have little impact on cluster properties; typically they occur after the cluster masses and radii have become established. In terms of cluster masses, spatial distribution and simply the number of clusters, we find that the effect of photoionization on these properties is much greater when lower mass clusters form, i.e. at lower gas densities and in more weakly converging flows. This is because star formation is prolonged, so photoionization has time to act, whereas the high mass clusters form quickly with high star formation rates. The star formation rate, and the mass of clusters formed, is suppressed more in the lower density regions (although the total stellar mass is only reduced by a factor $\lesssim 2$). This is similar to previous results studying individual molecular clouds \citep{Dale2012,Dale2017,Colin2013,Gavagnin2017,Ali2019,Kim2021b}. One caveat to this result is that we don't include radiation pressure, which is expected to be more relevant in high density regimes \citep{Krumholz2009,Kim2018}. 
IR radiation from dust is also expected to be relevant in high density, dust rich regimes \citep{Skinner2015,Raskutti2016,Tsang2018}, although recent work by \citet{Menon2022} finds that radiation from dust has only a small effect on the star formation rate, and it may not be that effective at dispersing the gas \citep{Tsang2018,Ali2021}. During cluster evolution, mergers and splits are common, particularly in higher density, more dynamic regions. The most massive clusters which form in each simulation tend to be those which have undergone mergers. Some clusters move diagonally towards higher masses and larger radii in the cluster radius--mass plot, as might be expected if clusters grow. However, clusters can stay at the same mass, or even move to lower masses, either because the cluster is relatively isolated or because it has split into lower mass clusters. The fraction of stars in clusters also decreases with time, as the clusters interact and stars are ejected, and through dynamical ejection. For our longest run, lowest mass model, this fraction flattens out at around 20\%. We only see a low density group of stars similar to an OB association form in our low mass model with feedback (M1R1FB). This is not surprising, as this is the least gravitationally dominated, most dynamic model. Unlike \citet{Grudic2021}, who model a $10^7$ M$_{\odot}$ cloud and obtain a dozen or so clusters in one association, our region is lower density and lower mass, and produces a smaller number of groups, some of which are themselves more like associations. The star forming region here is not dissimilar to Orion Ia. The evolution of this region supports the findings from recent work \citep{Wright2016,Ward2018} that OB associations are not simply clusters which are expanding; their evolution is more complex and there is no simple picture of uniform expansion. \section*{Data Availability} The data underlying this paper will be shared on reasonable request to the corresponding author. 
\section*{Acknowledgments} We thank the referee for a useful report which helped clarify some aspects of the paper. This work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. Some of the figures in this paper were made using splash \citep{splash2007}. We also used the graphviz package to make Figure~\ref{fig:mergertree}. CLD, TJRB and ASMB acknowledge funding from the European Research Council for the Horizon 2020 ERC consolidator grant project ICYBOB, grant number 818940. ARP acknowledges the support of The Japanese Society for the Promotion of Science (JSPS) KAKENHI grant for Early Career Scientists (20K14456). \bibliographystyle{mn2e} \bibliography{Dobbs} \bsp \appendix \section{Evolution of clusters for each model} \label{sec:appendix1} In this appendix we show the clusters identified by our friends of friends algorithm over a series of timesteps for the M1R1, M1R1FB, M5R1, M5R1FB, M5R2 and M5R2FB models, i.e. all the medium and low mass models presented, on which most of the analysis is focused. In Figures~\ref{fig:app1} and \ref{fig:app2} we show the cluster evolution for M1R1FB and M1R1 respectively up to a time of 15.5 Myr, though we follow the clusters up to 40 Myr. In Figures~\ref{fig:app3} and \ref{fig:app4} we show the cluster evolution for M5R1FB and M5R1, and in Figures~\ref{fig:app5} and \ref{fig:app6} we show the cluster evolution for M5R2FB and M5R2. \section{Comparison to Orion} \label{sec:appendix2} We show a visual comparison of our M1R1FB model with Orion in Figure~\ref{fig:Orion}. The simulation figure (right panel) shows the region in the $y-x$ plane (i.e. as would be face on). 
Although not shown, the region displays a similar morphology in terms of more gas at the lower part of the image, and the stars occupying a broad diagonal distribution to the top right, in the $z-y$ plane (i.e. as would be viewed through the plane of the disc in our Galaxy). \label{lastpage}
Title: Mitigation of the Magnetic Field Susceptibility of Transition Edge Sensors using a Superconducting Groundplane
Abstract: Transition edge sensor (TES) microcalorimeters and bolometers are used for a variety of applications. The sensors are based on the steep temperature-dependent resistance of the normal-to-superconducting transition, and are thus intrinsically sensitive to magnetic fields. Conventionally the detectors are shielded from stray magnetic fields using external magnetic shields. However, in particular for applications with strict limits on the available space and mass of an instrument, external magnetic shields might not be enough to obtain the required shielding factors or field homogeneity. Additionally, these shields are only effective for magnetic fields generated external to the TES array, and are ineffective at mitigating the impact of internally generated magnetic fields. Here we present an alternative shielding method based on a superconducting groundplane deposited directly on the backside of the silicon nitride membrane on which the TESs are located. We demonstrate that this local shielding for external magnetic fields has a shielding factor of at least ~75, and is also effective at reducing internal self-induced magnetic fields, as demonstrated by measurements and simulation of the eddy current losses in our AC biased detectors. Measurements of 5.9 keV X-ray photons show that our shielded detectors have a high resilience to external magnetic fields, showing no degradation of the energy resolution or shifts of the energy scale calibration for fields of several microtesla, values higher than expected in typical real-world applications.
https://export.arxiv.org/pdf/2208.10775
\title[Mitigation of B-field Susceptibility TESs]{Mitigation of the Magnetic Field Susceptibility of Transition Edge Sensors \\ using a Superconducting Groundplane} \author{M. de Wit} \affiliation{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \email{M.de.Wit@sron.nl} \author{L. Gottardi} \affiliation{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \author{M.L. Ridder} \affiliation{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \author{K. Nagayoshi} \affiliation{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \author{E. Taralli} \affiliation{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \author{H. Akamatsu} \affiliation{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \author{D. Vaccaro} \affiliation{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \author{J.-W.A. den Herder} \affiliation{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \affiliation{Anton Pannekoek Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, the Netherlands} \author{M.P. Bruijn} \affiliation{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \author{J.R. Gao} \affiliation{NWO-I/SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, The Netherlands} \affiliation{Optics Research Group, Department of Imaging Physics, Delft University of Technology, Van der Waalsweg 8, 2628 CH, Delft, The Netherlands} \date{\today} \section{Introduction} \label{sec:intro} Transition Edge Sensors (TESs) are among the most sensitive micro-calorimeters currently available. 
A TES is a cryogenic detector that can be used to measure the energy of a photon or particle with very high resolving power by utilizing the steep temperature-dependent resistance of the \Mark{superconducting-to-normal} transition \cite{Irwin1996}. They are often used as X-ray spectrometers, both for ground-based experiments such as HOLMES \cite{Puiu2018} and several setups at beamline facilities \cite{Doriese2017}, as well as for future space-borne systems such as X-IFU \cite{Barret2020} and HUBS \cite{Cui2020}. TESs for this application typically consist of a superconducting bilayer with a finely tuned critical temperature ($T_c$) and normal resistance ($R_n$) fabricated on a membrane to ensure sufficient thermal isolation between the bilayer and its surroundings. \Mark{A schematic overview of a TES is visible in Fig. \ref{figure:Array}(a).} In most applications, large numbers of TESs have to be \Mark{read out} simultaneously at cryogenic temperatures, requiring the \Mark{use} of multiplexing techniques. These can be either based on DC-biased readout of the detector, such as Time Division Multiplexing (TDM) \cite{Doriese2016}, Code Division Multiplexing (CDM) \cite{Morgan2016}, and microwave SQUID multiplexing \cite{Nakashima2020}, or based on AC-biased readout, such as Frequency Domain Multiplexing (FDM) \cite{Akamatsu2021}. Each readout scheme sets specific requirements on the highly tuned detectors in terms of response times, resistance, and uniformity across the TES array. Considering the challenging environments in which TES arrays often operate, one of the most pressing issues to solve is the sensitivity of the TES to magnetic fields. The main reason for this sensitivity is that, because of the coupling between the TES bilayer and the higher $T_c$ leads of the electrical circuit, the TES acts like a weak link, similar to a Josephson junction \cite{Sadleir2010, Kozorezov2011}. 
The application of magnetic fields on such a structure induces Fraunhofer-like oscillations in the (critical) current \cite{Smith2013, Gottardi2018}. Additionally, the magnetic field influences the steepness of the superconducting transition, typically parameterized using the dimensionless $\alpha = \frac{T}{R}\frac{\partial R}{\partial T}$ and $\beta = \frac{I}{R}\frac{\partial R}{\partial I}$, which in turn are related to important device properties such as the time constants and energy resolution. Changes in the pixel responsivity can lead to shifts of the energy scale calibration \footnote{\label{Vaccaro}D. Vaccaro et al., submitted to Journal of Low Temperature Physics}. The TESs are particularly sensitive to magnetic fields perpendicular to the plane of the bilayer. The sensitivity to parallel magnetic fields is much smaller, primarily due to the reduced cross-section in this direction \cite{Hijmering2013}. For perpendicular fields, only a few $\upmu$T are enough to have a significant impact on the individual device performance \cite{Zhang2017}. At the full-array level, spatial variations of the magnetic field could be even more detrimental. These spatial variations cannot be compensated by a field coil near the TES array, and they reduce the uniformity of the optimal operating parameters of the different pixels. This is a problem in particular for the DC-biased readout schemes in which multiple pixels are biased in series, which does not allow the individual tuning of pixels to their optimal operating point. In a typical experimental setup or instrument there are many potential sources of stray magnetic fields that might cause issues for the performance of the TESs. In general we can divide them into two categories. 
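Given any smooth model (or interpolation of measured data) for the resistance surface $R(T, I)$, the parameters $\alpha$ and $\beta$ defined above can be estimated by central finite differences. A sketch in Python; the power-law transition model used in the example is purely illustrative, not a fit to real devices:

```python
def alpha_beta(R, T, I, dT=1e-7, dI=1e-9):
    """Logarithmic sensitivities of the TES resistance:
    alpha = (T/R) dR/dT at fixed I,  beta = (I/R) dR/dI at fixed T,
    estimated with central finite differences."""
    R0 = R(T, I)
    dRdT = (R(T + dT, I) - R(T - dT, I)) / (2.0 * dT)
    dRdI = (R(T, I + dI) - R(T, I - dI)) / (2.0 * dI)
    return (T / R0) * dRdT, (I / R0) * dRdI

# Illustrative power-law transition R = Rn (T/Tc)^a (I/I0)^b,
# for which alpha = a and beta = b exactly.
def R_model(T, I):
    return 0.1 * (T / 0.095) ** 50 * (I / 1e-5) ** 2
```

For this model, `alpha_beta(R_model, 0.095, 1e-5)` recovers $(\alpha, \beta) \approx (50, 2)$, illustrating how a steep transition (large $\alpha$) is read off from the $R(T, I)$ surface.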
The first is external magnetic fields originating from outside of the TES array and readout, \Mark{such as Earth's magnetic field}, stray fields from nearby equipment, or fields generated by mechanical cryocoolers or adiabatic demagnetization refrigerators \cite{Jackson2018}. In the particular case of X-IFU, another source is the bias currents in the anti-coincidence detector located a very short distance below the TES array \cite{DAndrea2020}. The second is internal magnetic fields that arise from the electrical currents within the devices themselves, such as the current used to bias the TESs within the superconducting transition. The magnitude of external magnetic fields is generally reduced using a combination of mu-metal and superconducting shields \cite{Bergen2016, Vavagiakis2021}. However, the use of these magnetic shields can be problematic, especially in space applications where there are strict limitations on the size and weight of an instrument. Additionally, the effectiveness of the external magnetic shields is always limited by the openings necessary to let in the light to be detected or for wiring or thermalization. This not only reduces the attainable shielding factor, but can also lead to spatial variations in the residual field. The internal magnetic fields are inherently difficult to mitigate, as they are generated within or very close to the detectors. One proposed solution is the fabrication of devices with turn-around style electrical leads, where the return current follows the same path as the incoming current \cite{Ishisaki2008,Swetz2012}. However, this solution requires significant changes to the fabrication process, and has for this reason not been widely adopted in high performance TES arrays. 
In this work, we have followed an alternative approach to mitigate the effects of both internal and external magnetic fields, namely using an on-chip superconducting layer to reduce the magnitude of magnetic fields perpendicular to the TES bilayer. Superconducting layers have been applied for this purpose before, by embedding the layer in the membrane or making it part of the wiring \cite{Luukanen2003, Ishisaki2008}, or by burying it in the substrate \cite{Finkbeiner2011}. An alternative approach was taken by Harwin \textit{et al.} \cite{Harwin2020}, who fabricated an array of micro-pillars capped with a superconducting film, designed such that the pillars would fit within the silicon grid of the membranes of the TES array. Here, we extend these previous studies, and shield our detectors using a superconducting groundplane (SGP) located underneath the TES at the backside of the silicon nitride membrane. The advantage of this method is that the deposition of this layer can be done on fully finished arrays, requiring no changes to the fabrication process. At the same time, it allows us to place the SGP very close to the TESs, improving the shielding factor, with the spacing given by the thickness of the silicon-nitride membrane. We have directly compared devices with and without SGP on the same chip. We have observed no impact on important detector properties, implying that the SGP can be applied to TES arrays without requiring alterations to the readout circuit. At the same time, we demonstrate the effectiveness of the shielding against both external magnetic fields and self-generated internal magnetic fields. \section{TES Array} \label{sec:setup} We have studied a selection of pixels on a $5 \times 5$ array containing five different TES designs of varying geometries and aspect ratios. The TESs consist of a rectangular Ti/Au bilayer with a sheet resistance of about 26~m$\upOmega$/$\square$ and $T_c$ between 93 and 96~mK, depending on the geometry. 
The TESs are connected to niobium leads and coupled to 240$\times$240~$\upmu$m$^2$ gold X-ray absorbers. The absorbers have a thickness of 2.35~$\upmu$m and are placed at a height of 3.5~$\upmu$m above the TES. The TESs are fabricated on top of a 0.5~$\upmu$m thick silicon-nitride membrane for thermal isolation between the TES and the thermal bath. Typical values for the thermal conductance between the bilayer and the thermal bath are 50~-~120~pW/K, depending on the TES geometry. Extensive measurements of the fabrication, properties, and performance of this type of array are presented elsewhere \cite{Nagayoshi2019, Wit2020}. A schematic overview of the TES design is shown in Fig.~\ref{figure:Array}(a). There is a small variation in the design of the support stems between the top two rows of the array, which have a stem diameter of 5~$\upmu$m, and the bottom three rows, which have a diameter of 10~$\upmu$m. This change causes a reduction of the thermal conductance of the top two rows compared to the bottom three rows of about 20~pW/K \cite{Taralli2021}. This difference does not influence the results presented in the next sections. A Nb film was deposited through the silicon grid on the backside of the membrane by magnetron sputter deposition, \Mark{coating both the membrane and muntins with a layer of Nb}, as indicated in Fig.~\ref{figure:Array}(a). During the deposition the silicon grid acts as a collimator, limiting the angles with which the sputtered Nb strikes the backside of the membrane. The reduction of the deposition rate due to this collimator effect was compensated for. The final film thickness is 65~nm, with a low $T_c$ of approximately 4~K due to the sub-optimal deposition conditions. This was measured on a representative monitor structure for the deposition of the Nb on the backside of the array. \Mark{The thickness on the vertical surfaces has not been measured, but is expected to be less than 65~nm due to the partial shadowing of the deposition. 
The thickness of 65~nm was selected as a compromise between having a layer thick enough to have a sufficiently high $T_c$, and thin enough to minimize the risk of introducing stress in the membrane or overheating the TES during deposition. Additionally, a thin layer keeps the parasitic thermal conductance and heat capacity as low as possible. A part of the array was covered during the deposition such that} the Nb film only covers the bottom three rows of pixels, leaving the top two rows without SGP to serve as a reference for comparison. This is clearly visible in Fig.~\ref{figure:Array}(b), in which the niobium film appears at the edges of the array as the blue-greenish color at the backside of the silicon-nitride membranes. For these proof-of-principle tests, the SGP was fabricated at the backside of a fully finished detector array. Comparison measurements between the devices with and without SGP show that, in the absence of magnetic fields, the impact of adding the Nb on the most important properties of the TESs is negligible. \Mark{Theoretical estimates indicate that the additional heat capacity is negligible and the parasitic thermal conductance between the TESs and the thermal bath is very small. Indeed, measurements of the critical temperature and thermal conductance show no difference between the pixels with and without SGP within the measurement uncertainty.} There also seems to be no significant impact on the shape of the superconducting transition ($\alpha$ and $\beta$), the noise characteristics, or the energy resolution. At low bath temperatures we measure an increased critical current for the pixels with SGP, presumably due to \Mark{an increased uniformity of the current \cite{Barone1982}.} However, for temperatures close to the $T_c$ of the bilayer the differences in the critical current between the pixels with and without SGP vanish. 
\Mark{Some exemplary data comparing devices with and without SGP can be found in the Supplemental Material \footnote{See Supplemental Material at [URL will be inserted by publisher] for theoretical estimations of the thermal impact of the SGP, as well as exemplary data comparing devices with and without SGP}.\label{SM}} The array is mounted on a copper bracket at the mixing chamber of a dilution refrigerator. Magnetic shielding is achieved using a lead and cryoperm shield around the setup at the mixing chamber, and a mu-metal shield around the outside of the cryostat. These measures reduce the magnitude of stray magnetic fields at the TES array to $\ll 1~\upmu$T, as measured using the reference pixels. A small Helmholtz coil placed around the copper bracket can be used to apply a uniform magnetic field perpendicular to the TES array. All TESs are biased using an alternating current in an FDM readout system operated in single-pixel mode. In this scheme, each pixel is connected to a superconducting LC resonator with a specific bias frequency between 1 and 5~MHz \cite{Bruijn2012}. This allows us to characterize many pixels in a single cryogenic run. Details of the FDM readout and measurement setup are given elsewhere \cite{Akamatsu2021, Wit2020}. In general, the bias frequency used to read out a pixel influences its performance due to the frequency-dependent weak-link effect. Therefore, when looking for effects of the SGP we compare pixels of the same geometry measured at similar bias frequencies (within $\sim$~200~kHz). Furthermore, we do not study any devices in the center row of the array, since this row is located at the edge of the SGP, where the quality of the coverage of the Nb layer is uncertain. \section{Shielding External Magnetic Fields} \label{sec:External} We start by investigating the effects of the SGP on external magnetic fields. 
For best operation of a TES, the component of the magnetic field perpendicular to the TES should be as close to zero as possible. Typically this is achieved using magnetic shielding (superconducting and high-magnetic-permeability materials) around the cryogenic setup and TES array, in combination with a Helmholtz coil to tune the local magnetic field to zero. However, particularly for large TES arrays, the presence of local variations in the magnetic field cannot be fully prevented using these methods. The effectiveness of the SGP in shielding the TESs from external magnetic fields can be expressed in terms of a shielding factor ($SF$), defined as the ratio between the magnitude of the residual magnetic field at the location of the TES with the SGP and the magnitude of the external magnetic field without SGP. Assuming a perfect Meissner state, this shielding factor is purely dependent on the geometry of the system. For simplicity, we ignore the complex corrugated shape of the SGP due to the silicon grid, and instead assume a flat superconducting disk with radius $a$. For that situation the axial component of the magnetic field at a height $z$ above the disk is given by Claycomb \textit{et al.} \cite{Claycomb1999}: \begin{equation} B(z) = B_0 \left[ 1 - \frac{2}{\pi} \left( \tan^{-1}{ \left( \frac{a}{z} \right)} - \frac{az}{a^2 + z^2} \right) \right] \approx B_0 \left[ \frac{4}{\pi}\frac{z}{a} \right], \end{equation} where the right-hand side of the equation is valid in the limit $a \gg z$. For our system $z = 0.5~\upmu$m, as the separation between the TES and the SGP is given by the thickness of the membrane. \Mark{Defining an effective radius $a$ of the SGP is not straightforward for our array. A conservative estimate is made by assuming that only the SGP below the membrane itself contributes to the screening, in which case $a/z \sim 200$. Thus, from a purely geometric point of view, the residual field at the TES is attenuated by roughly two orders of magnitude. 
In practice, the attenuation is further limited by the London penetration depth, defects in the Nb film, and flux-focusing effects \cite{Brandt1998}.} We characterize the shielding against external magnetic fields provided by the SGP using measurements of the TES current-voltage characteristic (IV curve) for a number of applied magnetic fields, shown in Fig.~\ref{figure:B-shielding}. Fig.~\ref{figure:B-shielding}(a) shows the IV curves for a pixel without SGP as a reference. The external magnetic field effectively lowers the $T_c$ of the bilayer, reducing the power required to bias the TES in the transition. As a result, the IV curve shifts downwards, as visible in the main figure. In the inset, the magnetic field dependence of the current in the TES ($I(B)$) is shown for three different bias points defined by the resistance at $B = 0~\upmu$T. A Fraunhofer-like pattern is visible, resulting from the interaction between the magnetic field and the TES bilayer acting as a weak link under the influence of the niobium leads. Now let us compare these results with Fig.~\ref{figure:B-shielding}(b), which shows the IV curves for one of the pixels with SGP. Whereas for the unshielded pixel the magnetic field clearly induces a shift of the IV curves towards lower power, no significant reduction in the detector power is observed for the shielded pixel (visible from the overlap of the IV curves measured at different fields). The measured $I(B)$ curve shown in the inset confirms the strongly reduced impact of the external magnetic field on the superconducting state of the TES. To obtain an estimate for the shielding factor for external magnetic fields, we can look at the field dependence of the TES current of the most sensitive pixel design \Mark{($L \times W = 80 \times 40~\upmu$m$^2$)}. This data is shown in Fig.~\ref{figure:B-shielding}(c). 
For this type of pixel, at the highest applied magnetic fields the pixel with SGP (red line) shows a small reduction in the TES current of $\sim 0.2$\% at $B \sim 40~\upmu$T. By comparing this data to a similar pixel without SGP (blue line), we find which magnetic field is required to achieve the same relative reduction in the TES current. From this we can estimate the effective field felt by the TES, given by $B/SF$. The best match between the pixels is found for $SF \sim 80$, indicating that the SGP can reduce the magnitude of external magnetic fields by nearly two orders of magnitude. An important remark concerning external magnetic fields and the use of an SGP is that the setup must be cooled in zero field. When the SGP transitions from the normal to the superconducting state in the presence of a magnetic field, this magnetic field can be trapped even after the external field is removed. Trapping of magnetic flux in SGP structures has been reported before \cite{Ishisaki2008, Finkbeiner2011}. We have tested the effect of field cooling by warming up the full setup to approximately 10~K, a temperature well above the $T_c$ of the niobium film. At this temperature, we applied an external magnetic field using the Helmholtz coil, which was maintained during the cooldown to a base temperature of $T = 50$~mK. After removal of the external magnetic field, the residual field due to flux trapping was determined by measuring $I(B)$ for the pixels without SGP. We observed a residual field of \Mark{$\sim$20-30~\%} of the field that was applied during the cooldown \Mark{(see the supplemental material for more information [30])}. Whether the only partial trapping of the applied field is due to the intrinsic properties of the film or a geometric effect is unknown. We assume the measured residual field for the pixels without SGP is representative of those with SGP. The trapped field can be removed with a thermal cycle above the $T_c$ of the SGP. 
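As a cross-check of the geometric estimate above, the disk formula can be evaluated numerically. The following sketch (our own illustration, assuming an ideal Meissner state and the values quoted in the text, $z = 0.5~\upmu$m and $a/z \sim 200$) reproduces the small-$z$ limit and gives a purely geometric shielding factor of order $10^2$:

```python
import math

def residual_field_fraction(a_over_z: float) -> float:
    """B(z)/B0 above the center of an ideal superconducting disk of
    radius a, at height z (Claycomb et al.), perfect Meissner state."""
    r = a_over_z
    return 1.0 - (2.0 / math.pi) * (math.atan(r) - r / (r * r + 1.0))

# Conservative geometry from the text: a/z ~ 200
frac = residual_field_fraction(200.0)
print(f"B/B0 ~ {frac:.1e}, geometric shielding factor ~ {1.0 / frac:.0f}")

# Small-z limit quoted in the text: B/B0 ~ (4/pi)(z/a)
print(f"(4/pi)(z/a) = {4.0 / (math.pi * 200.0):.1e}")
```

The measured $SF \sim 80$ falls somewhat below this ideal-geometry value, as expected from the finite penetration depth, film defects, and flux focusing discussed above.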
In order to prevent trapping of flux in the film, zero-field cooling is required. The magnitude of the field that can be accepted during the cooldown depends on the sensitivity of the detectors and readout chain. For narrow devices (width 20~$\upmu$m or less) measured under AC bias, detectors with SGP can be cooled down in fields of up to 1-2~$\upmu$T without impact on the detector performance. \section{Shielding Self-Induced Magnetic Fields} \label{sec:Internal} The second category of magnetic fields that interact with the TESs consists of the self-induced magnetic fields, generated by the bias currents in the bilayer and leads. These self-induced fields can be particularly important when using low-resistance devices, for which relatively high bias currents are needed to bias the detectors within the superconducting transition. Self-induced fields have been shown to reduce the steepness of the superconducting transition, reducing the detector sensitivity \cite{Swetz2012, Sakai2018}. Here we show that the SGP is also an effective method to reduce the impact of self-induced fields. How the self-induced fields can be measured depends on how the detectors are biased. In the case of DC-biased detectors, the presence of the self-induced field is visible as a skewing of the $I(B)$ curve proportional to the magnitude of the bias current. Related to this is the appearance of discontinuities in $\alpha$ and $\beta$, the magnitude of which depends on the geometric coupling between the bilayer and bias leads \cite{Smith2013}. Under AC biasing, it is more difficult to identify the effects of the self-induced fields. In general, TESs designed for operation under AC bias have a higher normal resistance \cite{Wit2020}, meaning typical operating currents (and hence self-induced fields) are smaller. 
Additionally, the shift of the $I(B)$ curve can no longer be used as an indicator, since the self-induced field alternates at MHz frequencies, smoothing out the effects. However, we are able to see the presence of self-induced fields via their interaction with normal metal in the vicinity of the leads and bilayer: the alternating fields induce eddy currents in these metal structures, leading to AC losses \cite{Gottardi2017, Sakai2018}. These losses are mainly located in the X-ray absorber, as it is located merely $3.5~\upmu$m above the plane of the TES. The AC losses are measured as an additional parasitic resistance in the LC-resonator circuit. While for our devices these AC losses are sufficiently small that they do not affect the detector performance, here we use them to demonstrate the effectiveness of the SGP in reducing the impact of self-fields. We can theoretically estimate the effect of the SGP on self-induced fields using the method of images (assuming perfect Meissner screening) \cite{Jackson2018}. We calculate the magnetic field from two current-carrying wires representing the bias leads, placed at a height $h$ above an infinite superconducting plane located at $z = 0$. The magnetic field distribution above the SGP is then given by the superposition of the original current source and an image current flowing in the opposite direction at a height $-h$ below the plane. The field produced by the screening currents is identical to the field emanating from the image current \cite{Claycomb1999}. The field of each wire is calculated using the Biot-Savart law. In Fig.~\ref{figure:B_calc} we illustrate the resulting field distribution with (top) and without (bottom) the SGP. The magnetic field without SGP resembles that of a magnetic dipole, while the SGP effectively changes this into a quadrupole field, which falls off at a much faster rate as the distance to the wires increases. 
The precise attenuation of the field due to the presence of the SGP depends on the separation between the SGP and the current-carrying wire. For the simplest case, in which we only consider the field at a height $z$ directly above a single wire, the attenuation $\Gamma$ is given by: \begin{equation} \Gamma = \frac{B_{w.SGP}}{B_{w/o.SGP}} = 1 - \frac{(z-h)}{(z+h)} \end{equation} Here $\Gamma = 1$ corresponds to zero attenuation, while $\Gamma = 0$ corresponds to total attenuation of the field. At the bottom of the absorber (corresponding to $z = 4~\upmu$m with $h = 0.5~\upmu$m), this leads to $\Gamma = 0.22$. A higher shielding factor can be attained by fabricating the SGP directly underneath the TES instead of on the backside of the silicon-nitride membrane, or alternatively by increasing the height of the absorber. We experimentally estimate the AC losses by measuring the damping of each of the LC resonators coupled to a detector. \Mark{This is done by acquiring the SQUID power spectral density (PSD) with all the TESs in the superconducting state. The only excitation is the thermal noise in the LC resonator. In this way, the measured power spectrum reflects the transfer function of the LCR circuit formed by the LC resonator in series with any resistance. The contribution to the PSD of each resonator is fitted with a Lorentzian function to extract the Q-factor and resonance frequency $\omega_0 = 2\pi f_0$. The measured Q-factor for each peak is shown in Fig.~\ref{figure:ACLosses}(a), with in the inset an example of a Lorentzian fit to the PSD for one of the resonators.} The \Mark{Q-factors for the different pixels range from} 5\,000 to 20\,000, increasing with the bias frequency due to the decreasing capacitance $C$. For the highest frequencies we see a saturation as the AC losses increase with frequency. The Q-factors of the pixels with SGP (red) are slightly higher than those of the pixels without SGP (blue). 
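The image-current picture behind this attenuation factor can be checked with a few lines of code. In the following illustrative sketch (our own check, not part of the published analysis), an infinite straight wire at height $h$ is combined with an opposite image wire at $-h$; the field of each wire scales as $1/r$, and the common prefactor $\mu_0 I / 2\pi$ drops out of the ratio:

```python
def field_above_wire(z: float, h: float, with_sgp: bool) -> float:
    """Relative field at height z directly above an infinite wire at
    height h; the SGP is modeled as an opposite image wire at -h."""
    b = 1.0 / (z - h)          # source wire: B ~ 1/(distance to wire)
    if with_sgp:
        b -= 1.0 / (z + h)     # image wire partially cancels the field
    return b

z, h = 4.0, 0.5                # um: bottom of absorber, wire above the SGP
gamma = field_above_wire(z, h, True) / field_above_wire(z, h, False)
print(f"Gamma = {gamma:.2f}")  # 1 - (z-h)/(z+h) = 0.22, as in the text
```

Directly above the wire both fields are horizontal and antiparallel, which is why the ratio reduces exactly to the closed-form expression for $\Gamma$ given above.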
The total damping resistance in the RLC circuit is obtained from the measured parameters using $R_{tot} = \omega_0 L / Q$ (with $L = 1.17~\upmu$H the effective inductance). In Fig.~\ref{figure:ACLosses}(b), we show $R_{loss}$, the residual losses in the LC-TES circuit after subtracting the impact of the shunt resistor, given by: \begin{equation} R_{loss} = R_{tot} - R_{sh} \left(\frac{C_{bias}}{C} \right)^2 \end{equation} Here $R_{sh} = 0.72~\Omega$ and $C/C_{bias} = 25$. The obtained $R_{loss}$ is the sum of the AC losses in the TES due to the self-fields and the residual losses in the electrical circuit, such as the LC resonators themselves. The fact that these two loss factors are both intrinsic to the LC-TES circuit means they are difficult to separate. However, we can distinguish between the two using their different dependencies on the bias frequency: while the losses in the electrical circuit are expected to be frequency independent (at least for this limited frequency range), the AC losses increase with the square of the bias frequency. We estimate the residual losses in the electrical circuit to be $0.19 \pm 0.10$~m$\Omega$ based on the $R_{loss}$ measured for the pixels with SGP. This level of losses is indicated in Fig.~\ref{figure:ACLosses}(b) by the black dashed line. The remaining losses are attributed to the eddy currents in the normal-metal absorber. For the pixels without SGP, we see increasing losses as the frequency increases, while the pixels with SGP show a roughly constant value for all pixels. We have used Finite Element Method (FEM) modeling to simulate the expected losses as a function of the bias frequency, both with and without the SGP. In Fig.~\ref{figure:Heatmaps} we show an example of the calculated volumetric loss density in the absorber for an $80 \times 20~\upmu$m$^2$ TES measured at a bias frequency of 2~MHz. 
The geometry of the TES, wiring, and absorber all match those of the actual devices as outlined in Sec.~\ref{sec:setup}. The only parameter that is varied in the simulation is the electrical conductivity $\sigma$ of the gold of the absorber, which for this figure was set to $0.5 \cdot 10^9$~S/m, a reasonable value for our gold films at low temperatures \Mark{\cite{Sakai2018, Wakeham2019}}. Integration over the full volume of the absorber gives the total loss power $P_{loss}$, from which the loss factor $R_{loss}$ can be calculated by dividing by the squared bias current. Note that in practice $R_{loss}$ \Mark{does not depend on} the bias current and only depends on the bias frequency, the absorber conductance, and the geometry of the TES and wiring. The SGP is simulated as a plane located $0.5~\upmu$m below the TES with boundary conditions such that the normal component of the magnetic field must be zero at the interface. The field distribution arising from the FEM simulation closely matches the analytical results obtained from the method of images. Fig.~\ref{figure:Rloss_Sim} shows both the measured and simulated $R_{loss}$ for all pixels, separated by geometry. The central blue dashed line indicates the simulated losses assuming $\sigma = 0.5\cdot 10^9$~S/m, while the filled area around it marks the range between $\sigma = 0.25\cdot 10^9$ and $0.75\cdot 10^9$~S/m. The measured data for the pixels without SGP is reasonably well explained by the losses in the absorber, even though the scatter in the data is too large to determine the $\sigma$ of the absorber with high accuracy. \Mark{For the pixels with SGP}, the total losses are dominated by the residual losses in the electrical circuit, as the FEM results confirm that the AC losses in the absorber never exceed $\sim 10~\upmu \Omega$. Thus, we again find that the SGP has the potential to significantly reduce the impact of self-induced magnetic fields on the TESs. 
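The loss-extraction chain described above ($Q$-factor $\rightarrow$ total damping $\rightarrow$ residual loss) can be summarized in a short numerical sketch. The circuit values are taken from the text, with the shunt contribution scaled down by the divider ratio squared, which is the scaling consistent with the quoted residual-loss level; the pixel values ($f_0$, $Q$) are illustrative picks within the measured range, not data from a specific device:

```python
import math

L_EFF = 1.17e-6      # H, effective inductance (from the text)
R_SH = 0.72          # ohm, shunt resistor
C_RATIO = 25.0       # C / C_bias, capacitive-divider ratio

def r_loss(f0_hz: float, q_factor: float) -> float:
    """Residual loss resistance: total damping R_tot = w0 * L / Q minus
    the shunt resistance transformed down by the capacitive divider."""
    r_tot = 2.0 * math.pi * f0_hz * L_EFF / q_factor
    return r_tot - R_SH / C_RATIO**2

# Illustrative pixel: f0 = 2 MHz, Q = 10 000 (within the 5 000-20 000 range)
print(f"R_loss ~ {r_loss(2.0e6, 10_000) * 1e3:.2f} mOhm")
```

For these illustrative numbers the residual loss comes out at a few tenths of a m$\Omega$, i.e., at the level of the frequency-independent circuit losses quoted above.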
Note that we observe a clear deviation from the expected squared dependence of the losses on frequency. This is due to the skin effect: for the higher bias frequencies and conductances of the absorber, the skin depth $\delta = \sqrt{2 / \omega \sigma \mu}$ (with $\mu$ the magnetic permeability of gold) becomes comparable to or smaller than the thickness of the absorber, meaning the absorber effectively shields itself from the self-induced magnetic fields, reducing the integrated losses. This effect is also visible in the data presented by Sakai \textit{et al.} \cite{Sakai2018}. \section{Gain and Energy Resolution} The ultimate figure of merit for the TES is the energy resolution. Magnetic fields are known to affect the main detector properties, which could reduce the intrinsic sensitivity of the detectors. The fields also affect the gain of the detectors, which results in a shift of the energy-scale calibration. We assess the energy resolution of our devices by exposing the TES array to 5.9~keV X-rays originating from a $^{55}$Fe calibration source with a typical count rate of approximately 1 count per second per pixel. To characterize the sensitivity of the gain to changes in the magnetic field environment, we follow the method as outlined by Vaccaro \textit{et al.} [14]: we acquire about 500 X-ray and 500 noise events for a range of applied magnetic fields. The energy of the collected X-rays is determined using the optimal filtering technique, which aims to maximize the signal-to-noise ratio by appropriately weighting the various frequency components of the pulses \cite{Szymkowiak1993}. For every magnetic field value we analyze the pulses using a single filter template calculated for the events collected at $B = 0$. We then determine the shift of the energy $dE = E - E_0$, with $E$ the pulse energy calculated from the pulse and filter template, and $E_0$ the known position of the K$\alpha_1$ line at 5898.75~eV. 
In this way, for each magnetic field we can determine the shift of the K$\alpha_1$ line caused by the field-induced variations in the TES gain. Fig.~\ref{figure:Gain}(a) shows the measured shift of the energy for two pixels, one without (blue) and one with (red) SGP. For the pixel without SGP, a large shift in the calibrated energy is observed as the magnetic field is increased. The inset shows a zoom-in of the data close to $B = 0$. From the local derivative, as shown in Fig.~\ref{figure:Gain}(b), we find around $B = 0$ a gain-scale sensitivity at a level of several tens of meV/nT, and a maximum sensitivity of $\sim$~0.8~eV/nT near $B = \pm~4~\upmu$T. In contrast, for the pixel with SGP the remaining magnetic field sensitivity is on the order of a few meV/nT over the full magnetic field range, meaning no shift is observed within the statistical error of the energy measurement with only 500 counts. \Mark{Similar data for the other available geometries can be found in the supplemental material [30].} To demonstrate the impact of magnetic fields on the intrinsic detector performance, and how the SGP also mitigates this effect, we have measured a large number of X-ray events (approximately 10\,000 per pixel per setting) for a number of applied magnetic fields. At each magnetic field value, the data is analyzed using the optimal filter template based on noise events measured at that specific field (as opposed to always using the template calculated at $B = 0$, as was done for Fig.~\ref{figure:Gain}). In this way the energy resolution is not affected by changes in the calibration of the energy scale, but is determined only by the pure detector properties. Example X-ray spectra of the Mn K$\alpha$ lines are shown in Fig.~\ref{figure:Energy_Res}(a) for $B = 0$ and $B = 6.1~\upmu$T. The extracted intrinsic energy resolutions of the pixels as a function of the applied field are shown in Fig.~\ref{figure:Energy_Res}(b). 
Both pixels achieve similar energy resolutions of around 2~eV for the selected bias points. As the applied magnetic field is increased, the energy resolution of the pixel without SGP (blue) quickly degrades to over 4~eV at $B = \pm~6~\upmu$T, while the shielded pixel (red) maintains the same energy resolution within the statistical error ($\pm~0.11$~eV). The degradation of the energy resolution for the unshielded pixel is accompanied by an increase in the pulse fall-time from 0.6~ms at $B = 0$ to 1.5~ms at $B = \pm~6~\upmu$T. See Fig.~\ref{figure:Energy_Res}(c) for example pulse shapes at the different fields. The increasing fall-time indicates a smaller loop-gain of the electro-thermal feedback due to a reduction of both the TES power and $\alpha$. Fig.~\ref{figure:Energy_Res} clearly shows that even for these narrow devices measured under AC bias, which are relatively insensitive to magnetic fields when compared to the low-resistance, broad devices typically used for DC-biased readout, small magnetic fields of only a few microtesla are enough to significantly affect the TES performance. However, the SGP greatly reduces the susceptibility of the pixels to magnetic fields, relieving the strict requirements set for external magnetic shielding. \section{Conclusions} To summarize, we have studied in what capacity the presence of a superconducting groundplane affects the magnetic field sensitivity of TES X-ray calorimeters. We have shown that the SGP mitigates the effects of both external and self-induced magnetic fields, allowing the TESs to retain their optimal performance even in the presence of magnetic fields \Mark{of} several tens of microtesla. This observation is of great importance for practical applications of TES arrays in situations with strong, non-uniform stray magnetic fields and strict limitations on the available mass or volume for magnetic shielding, such as in space applications. 
The SGP has the potential to provide additional shielding against magnetic fields with a shielding factor of up to two orders of magnitude without adding fabrication complexity. \Mark{Future experiments have to confirm whether the performance of the SGP scales well when increasing the size of the TES array}. One important point to stress is the necessity to cool the SGP in zero field, as discussed in Sec.~\ref{sec:External}. The possibility of flux trapping in these structures has been reported before, and was confirmed experimentally for this TES array. In the case of trapping, a thermal cycle above the critical temperature of the SGP is required to remove the fields. This means that for most applications the SGP alone will not be enough to properly operate the TESs. Near-zero-field cooling has to be achieved by combining the SGP with (a lightened version of) traditional low-temperature magnetic shielding based on mu-metal and superconducting materials external to the TES array \cite{Bergen2016}. The shielding capability of the SGP increases when the distance between the TES and the SGP is decreased, which means that in principle integration of the SGP in the array fabrication is preferable for optimal shielding. However, here we have shown that the SGP can be effective even when deposited on the backside of an existing array. In this case, it provided significant shielding against magnetic fields without altering any of the vital detector properties in a significant way. This means that the SGP can be used as an easy-to-implement, cost-effective method to mitigate the magnetic field sensitivity of TES arrays without requiring a careful re-tuning of the full readout circuit. \section{Acknowledgement} The authors thank R. den Hartog for proofreading the manuscript. \Mark{SRON Netherlands Institute for Space Research is supported financially by NWO, the Dutch Research Council. This work was funded partly by NWO under the research programme Athena with Project No. 
184.034.002 and partly by the European Space Agency (ESA) under ESA CTP Contract No. 4000130346/20/NL/BW/os.} It has also received funding from the European Union’s Horizon 2020 Program under the AHEAD (Activities for the High-Energy Astrophysics Domain) project with Grant Agreement Number 654215. \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{References} \bibliography{SG_bibliography}
Title: On the issue of magnetic monopoles in the prospect of UHE photon searches
Abstract: (also inside: this manuscript introduces the reader to the argument against the existence of magnetic monopoles, which forms an essential part of Staruszkiewicz's Quantum Mechanics of the Electric Charge) Ultra-high energy (UHE) photons with energies exceeding $10^{18}\eV$ can potentially be observed. They are produced in various processes involving electrically charged particles. However, more exotic scenarios are also possible. UHE photons could be emitted in encounters of massive magnetically charged monopole--antimonopole pairs or in processes associated with monopoles accelerated to high energies, typically $10^{21}\eV$ or beyond. Observing UHE photons can pose constraints on the properties of magnetic monopoles. There are compelling theoretical reasons in favor of the presence of magnetic monopoles in nature. The predicted observational signatures of these particles are therefore searched for in dedicated experiments currently in operation. Despite these attempts, magnetic monopoles have yet to be empirically proved. There are also theoretical reasons why magnetic monopoles allowed by Dirac's theory might not be realized in nature in the form of isolated particles. Detection or non-detection of UHE photon signatures of magnetic monopoles would bring us closer to solving this fascinating puzzle.
https://export.arxiv.org/pdf/2208.09043
\title{On the issue of magnetic monopoles in the prospect of UHE photon searches} \author{{\L}ukasz Bratek \ and \ Joanna Ja{\l}ocha} \address{Institute of Physics, Cracow University of Technology, ul. Podchor{\k a}{\.z}ych, PL-30084 Krak{\'o}w, Poland} \email{Lukasz.Bratek@pk.edu.pl} \date{\today} \section{Introduction} The upper limit on the distance from which high-energy cosmic rays can reach us is one of the key predictions in cosmic ray physics \cite{1_greisen1966,2_zatsepin1966}. Interactions with background radiation result in significant energy loss for protons with energies greater than $5{\times}10^{19}\eV$. These particles are expected to travel only a few megaparsecs before their energy falls below the threshold for photoproduction. Similarly, if the cosmic ray particle is a heavier ion, the interaction with background photons reduces its energy. Therefore, a truncation of the cosmic ray spectrum, known as the GZK limit, is expected. Cosmic rays with energies above $10^{20}\eV$, coming from sources at distances greater than a few dozen megaparsecs, should be invisible due to the GZK limit \cite{3_cronin1992,4_yoshida1993}. The truncation of the cosmic ray spectrum can also result from the properties of the sources themselves, not just from the GZK effect. It cannot be excluded that there are no mechanisms accelerating particles above $10^{20}\eV$. If extended detection capabilities were to reveal an increase in the observed flux of cosmic radiation at the highest energies, this would require consideration of particle acceleration scenarios that currently seem rather exotic. Among possible sources of cosmic radiation of the highest energies, the most promising are those with very strong magnetic fields extending over large distances. Second-order Fermi acceleration processes \cite{fermi1949} occur in the presence of randomly moving plasma clouds carrying magnetic fields through the interstellar medium. 
An electrically charged particle can gain energy by repeatedly bouncing off plasma clouds. However, there is a limit to the efficiency of the acceleration process. The particle must stay in the magnetic field region for the acceleration to occur: if the particle's energy increases too much, the particle escapes from that region. The maximum energy gained is $E_F{\approx} Z \beta c e B L$, where $\beta c$ is the magnetic mirror's velocity, $B$ is the magnetic field strength, and $L$ is the size of the acceleration region. Particles can also be accelerated at shock fronts, bouncing back and forth across the shock; this first-order process is more efficient \cite{bell1978}. It was also shown \cite{gallant1999,blasi2000} that it is possible to accelerate particles in the vicinity of pulsars even to energies of $10^{21}\eV$, although this result is debated. However, if the accelerated particle is magnetically charged, the situation changes radically. Magnetic monopoles could explain the production of particles of the highest energies even within our Galaxy. As suggested in 1960 \cite{porter1960}, a number of effects observed in air showers could be understood if a fraction (about $10^{-14}$) of all primary cosmic ray particles were in the form of magnetic monopoles, at that time assumed to be the point-like elementary magnetic charges predicted by Dirac in 1931 \cite{dirac1931}. The monopole idea arises unavoidably when considering an electrically charged quantum particle coupled to genuine Maxwell fields. An unobservable singularity string emanates from the monopole's position. The monopole mass remains unspecified in Dirac's theory. It was later discovered that a magnetic monopole can be realized in terms of field-theoretical concepts. 
Topological magnetic monopoles emerge in unified non-Abelian gauge field theories of interactions via a mechanism similar to that of the monopole discovered independently by 't~Hooft \cite{hooft1974} and Polyakov \cite{polyakov1974} in an $SO(3)$ realization of the Georgi--Glashow model \cite{georgi1972}. The monopoles are spherically symmetric, massive, spatially extended solitonic solutions without a singularity string. The solitons are characterized by a topological charge -- an integer describing the winding degree of a finite-energy mapping between the physical space domain and the field configuration domain. In the low-energy regime of unified theories of interactions, the topological charge can be reinterpreted in terms of the magnetic charge of a $U(1)$ gauge potential equivalent to that of a genuine Maxwell theory. The resulting magnetic field is asymptotically equivalent to that of Dirac's monopole. The problem of the relationship of magnetic monopoles to the highest-energy cosmic radiation consists essentially of two separate questions. The first question is more experimental: {could magnetic monopoles be responsible for the production of UHE photons in the range of the highest energies observed for cosmic rays?} The second question is more theoretical: {do magnetic monopoles exist?} The acceleration of monopoles in magnetic fields would happen directly and efficiently; no Fermi-type processes would be needed. Magnetic monopoles, so far hypothetical particles, could form in the dense matter of neutron stars \cite{milton2006} and could be produced and efficiently accelerated by neutron stars' magnetic fields (especially if the stars contain quark matter). Magnetic monopoles may also play a role in various astrophysical processes, such as mechanisms that reduce magnetic fields in the interiors of neutron stars \cite{harvey1984}. 
Magnetic monopoles could be produced in strong magnetic fields via a mechanism analogous to the Schwinger pair-production mechanism, in which electrically charged particles are created by tunneling through a barrier in an extremely strong electric field \cite{schwinger1951}. By the duality of the electromagnetic equations, magnetically charged particles are also expected to be produced by tunneling in sufficiently strong magnetic fields. This expectation is confirmed by rigorous semiclassical calculations of the rate of monopole pair production in a constant magnetic field. In the case of point-like monopoles \cite{affleck1982}, the resulting production rate resembles Schwinger's rate for electron-positron pair production, at least in the leading term. The production rate for the non-point-like 't~Hooft--Polyakov monopole was calculated in \cite{ho2021}. It turns out that for heavy monopoles the magnetic field required for this process would exceed those of neutron stars. However, extremely strong electromagnetic fields can be generated in high-energy heavy-ion collisions, in which nuclei are accelerated to nearly the speed of light. Magnetic fields are induced by the electric currents of the positively charged nuclei and are the largest component of the electromagnetic field in the vicinity of the collision center. In Earth-based laboratories one can achieve energies of $200\GeV$ for Au+Au collisions (RHIC) and $2.76\TeV$ for Pb+Pb collisions (LHC), giving rise to huge magnetic fields of $10^{18}$--$10^{19}\Gs$ and $10^{20}\Gs$, respectively, as calculated in \cite{huang2016}, exceeding the highest values known for astrophysical sources, such as $10^{15}\Gs$ on the surfaces of magnetars \cite{magnetars2017}. Recently, the first search for finite-size monopoles created via the Schwinger mechanism was presented. The search was conducted by the MoEDAL experiment, designed specifically to detect monopoles directly. 
The experiment observes Pb--Pb collisions producing a strong magnetic field in which monopole-antimonopole pairs may be created via the tunneling effect. The analysis of the data, based on nonperturbative cross-section calculations, allowed a (conservative) lower mass limit of $75\GeV$ to be placed on magnetic charges between $1$ and $3$ Dirac charges produced via the Schwinger mechanism \cite{acharya2022}. It is clear that magnetic monopole signatures have the potential to be detected and should be sought. Several other experiments have searched for magnetic monopoles, and various upper limits have been established so far: MACRO \cite{ambrosio2002}, the Baikal neutrino telescope \cite{baikal2005}, the Amanda-II Detector \cite{wissing2007}, RICE \cite{rice2008}, SLIM \cite{slim2008}, the ANITA II interferometer \cite{anita2011}, the IceCube detector \cite{icecube2013} and the Pierre Auger Observatory \cite{fujii2016}. The current state of magnetic monopole searches is discussed in \cite{giacomelli2005}. A discussion of the theoretical and experimental status of magnetic monopoles and the limits on monopole masses can be found in \cite{milton2006}. Recent advances in the theoretical and experimental physics of magnetic monopoles are discussed in \cite{mavromatos2020}. A comprehensive review of various aspects of the theory of magnetic monopoles can be found in \cite{shnir2005}. In the context of UHE photons, it is important to note that processes characterized by photon emission from magnetic monopoles with the lowest nonzero magnetic charge allowed by Dirac's theory \cite{dirac1931} will be enhanced by a factor of $4692$ over similar processes involving a unit electronic charge \cite{dooher1971}. 
Magnetic charges of monopoles realized in nature can be $n$ times greater than the allowed minimum, where $n$ is an integer; hence the number $4692$ can still be increased by a factor $n^2$ (for example, $n{=}2$ in Schwinger's field-theoretical model of monopoles \cite{schwinger1966}). In effect, one may expect radiation enhanced by many orders of magnitude in processes in which magnetic monopoles take part. The highly increased electromagnetic coupling to matter also implies strong ionizing properties of magnetic monopoles, which is experimentally advantageous. Magnetic monopoles, or their electrically charged counterparts called dyons, are hoped to be found among highly ionizing particles in the MoEDAL search experiment at the LHC \cite{mitsou2022}. It seems, however, that the highest energy scale of $10^{21}\eV$ or more, exceeding the highest energies of $10^{20}\eV$ observed so far for UHECR \cite{aab2020}, can be ascribed to the interaction of magnetic monopoles with magnetic fields in the astrophysical context in a universal way, independently of the monopole mass (this issue will be discussed later). The possibility of accelerating monopoles to extreme energies \cite{porter1960,kephart1996}, or of monopolia \cite{hill1983} being present in nature, is very important in the context of the detection of UHE photons. If an already accelerated high-energy monopole enters a region where it interacts with matter, photons will be emitted. Could this energy then be deposited in several, or even just two, UHE photons? The answer seems affirmative, as discussed later. The possibility of detecting such photons in cosmic radiation is very promising. If monopoles exist, are accelerated, and then decay, such high-energy photons should be a component of cosmic radiation. However, a photon, through pair-creation processes, can ultimately also give rise to charged particles of the highest-energy cosmic radiation. 
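The enhancement factor quoted above follows from the squared ratio of the couplings, $(g/e)^2{=}(1/2\alpha)^2$, together with its $n^2$ scaling for a charge $ng$. A quick numerical check (a sketch; the small difference from $4692$ traces to the historical value of $\alpha$ used in \cite{dooher1971}):

```python
# Check the photon-emission enhancement factor (g/e)^2 = (1/(2*alpha))^2
# for the minimal Dirac charge g = e/(2*alpha), and its n^2 scaling.
alpha_inv = 137.035999084      # CODATA recommended value of 1/alpha

enhancement = (alpha_inv / 2.0) ** 2
print(round(enhancement))      # -> 4695 (4692 for the older 1/alpha ~ 137)

# For a monopole carrying n elementary magnetic charges:
n = 2                          # e.g. Schwinger's model
print(round(enhancement * n ** 2))
```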
Observing such photons could place constraints on the properties of magnetic monopoles. UHE photons with energies above $10^{19}\eV$ can be detected with the Surface Detector array of the Pierre Auger Observatory \cite{bleve2015}. As suggested in \cite{epele2008,epele2009}, monopolia might be easier to detect than free monopoles and could be discovered at the LHC. In this context, photon production by monopole loops in photon fusion was studied in \cite{epele2012}; in particular, the cross-sections for the annihilation of monopoles into two photons were calculated and discussed. Such loops may manifest themselves in light-by-light scattering at the LHC \cite{ellis2017} or at future colliders \cite{ellis2022}. Due to the very strong coupling between monopoles, monopole-antimonopole pairs (forming neutral monopolium bound states rather than unbound free states, owing to the strong coupling) may finally annihilate into highly energetic photons and thus be observed. Before making any claims regarding the presence or absence of magnetic monopoles in nature, it should be noted that this term refers to two classes of objects. The first class consists of elementary point-like magnetic monopoles described by Dirac \cite{dirac1931}, resembling in structure the Dirac electron more than a composite particle. This is an ideal mathematical construct that explains why any magnetic monopole realized as a physically existing particle must have a magnetic charge that is an integer multiple of some elementary magnetic charge $\frac{e}{2\alpha}$ (here, $e$ is the electronic charge and $\alpha$ is the fine structure constant). The second class consists of particular realizations of the ideal elementary magnetic monopole within a field-theoretical framework characteristic of the standard model of particles or other theories of unified interactions. 
These solitonic-like particles resemble the composite proton with its complicated internal structure rather than the elementary electron. The classic example is provided by the magnetic monopole of 't~Hooft and Polyakov \cite{hooft1974,polyakov1974}. This model will be briefly discussed in order to show the conceptual difference between the elementary monopole and its physical realizations. In the context of UHE photons produced in processes involving magnetic monopoles, the present work addresses the theoretical question of the existence of a magnetic monopole as an isolated particle with a nonzero multiple of the elementary magnetic charge. It suffices to focus on Dirac's elementary magnetic monopole for this purpose. The problem of a particular field-theoretical realization of the magnetic monopole is different. Although isolated magnetically charged solitonic configurations can be considered as solutions in the standard model of particles, it might be that only magnetically neutral configurations of magnetic monopoles are realized in nature (similarly, only color-neutral quark configurations are observed in the form of isolated particles). The arguments against the existence of magnetic monopoles are formulated within the framework of the asymptotic (infrared) structure of the Maxwell theory and the principles of quantum theory (unfortunately, this regime of the electromagnetic field presents some theoretical difficulties \cite{gervais1980,herdegen2012}). In Herdegen's argument \cite{herdegen1993}, the absence of magnetic charges serves as the consistency condition for the possible unambiguous extension of the definition of angular momentum to the typical situation where massive charged particles or fields (such as those described by the Dirac or Klein--Gordon equations) are scattered from initial to final free asymptotic states. This argument will not be discussed here. 
The present paper focuses on Staruszkiewicz's argument against the existence of magnetic monopoles \cite{AStar1998a}. This argument is an important part of the quantum theory of the electric charge \cite{AStar1989a}. This is an `emergent' theory, separated from Quantum Electrodynamics by a nontrivial limit $r{\to}\infty$. Apart from explaining the quantization of electric charge in terms of a single universal constant \cite{AStar1998c} (which is done independently of the concept of the magnetic monopole), the theory has a nontrivial dependence on the numerical value of the fine structure constant $\alpha{=}\frac{e^2}{\hbar c}$. In particular, this theory leads to mathematically critical spectra of $\alpha$ values (however, it is not yet known whether this theory also predicts the experimentally determined value of $\alpha$). In Dirac's theory \cite{dirac1931} the very presence of a single magnetic monopole leads to the quantization of electric charge; however, the value of the fine structure constant remains an arbitrary parameter. In Dirac's theory, the condition for quantization of electric charge is topological in character, whereas in Staruszkiewicz's theory the quantization of electric charge arises from a quantal eigenvalue problem. \section{Magnetic monopoles and UHE photons} In the classical electrodynamics of Maxwell, based on a single gauge-potential four-vector, there is no place for magnetic monopoles, and the reason for this is structural. Introducing magnetic monopoles is, nevertheless, still possible with only a slight modification of the theory to account for topological changes introduced by considering quantum charged systems. The Dirac monopole theory is introduced in the framework of ordinary Maxwell theory, where there is an analytical global correspondence in spacetime between the gauge-invariant field-strength bivector $F$ and a single gauge potential $A$. 
Considering the magnetic monopole within the Maxwell theory framework is possible only by allowing this correspondence to be broken locally. In quantum mechanics, the phase of quantum states is unobservable. This fact, in conjunction with Maxwell theory, led Dirac to discover that certain singularities in phase can manifest themselves as point-like magnetic monopoles. The monopoles can be consistently considered on theoretical grounds, since the arbitrariness in the quantum phase can be absorbed by the arbitrariness of the gauge potential of the electromagnetic field. This issue is addressed later in more detail. \subsection{Accelerated monopoles and a coincidence with the highest energies observed for cosmic rays} In the sequel to his paper on the magnetic monopole \cite{dirac1931}, Dirac investigated the general problem of the motion of electric and magnetic charges interacting electromagnetically with each other \cite{dirac1948}. Dirac assumed that the motion of a point magnetic charge $g$ of mass $m$ in the electromagnetic field $F_{\mu\nu}$ is described by an equation quite analogous to Lorentz's equation $m\ddot{x}_{\mu}{=}e\dot{x}^{\nu}F_{\mu\nu}(x)$ for the motion of a point electric charge $e$ of mass $m$ in the same field. 
Namely, the equation reads: $m\ddot{x}_{\mu}{=}g\dot{x}^{\nu}G_{\mu\nu}(x)$, where $G_{\mu\nu}{\equiv}\frac{1}{2}\epsilon_{\mu\nu}^{\phantom{\mu\nu}\alpha\beta}F_{\alpha\beta}$ is dual to $F_{\mu\nu}$ (more explicitly, $G_{01}{=}F_{23}$, $G_{23}{=}{-}F_{01}$, and the other components are obtained by cyclic permutations of the indices $1,2,3$).\footnote{Here, $\epsilon$ is the completely antisymmetric Levi--Civita pseudotensor; the indices are understood to be raised or lowered with the help of the Lorentz metric tensor; the dot sign denotes differentiation with respect to the proper time of a particle.} The field includes both external fields and the particle's own field, with singularities along its worldline that must be appropriately avoided in the process of solving the equations of motion. From the forms of the two general equations it is evident, in particular, that the equation of motion for a magnetic monopole in an external magnetic field is mathematically the same as the equation of motion for an electric charge in an external electric field (similarly, we can expect circular motion of a magnetic monopole in a uniform electric field and the associated synchrotron emission). Owing to this correspondence, we can use known textbook formulas for the motion of an electric charge in a uniform electric field to obtain analogous results for the motion of a magnetic charge in a uniform magnetic field. This will be useful for estimating the order of magnitude of the energies that one can expect for magnetic charges in the astrophysical context in large-scale ordered magnetic fields. In such fields monopoles could be accelerated to very high energies, assuming that the energy loss due to interaction with the environment is not severe and can be neglected. The energy loss in such a process is indeed negligible \cite{kephart1996}. 
For the purpose of an order of magnitude estimate needed here, it is enough to consider the relativistic accelerated motion of a magnetic monopole in a uniform magnetic field. To put the reference scales into the astrophysical context, we may consider pulsar magnetospheres. In a first approximation, the magnetic field of a magnetosphere can be described by a magnetic dipole. Owing to the size of magnetospheres, we can assume that on a characteristic scale of $L_o{=}1{\rm\, km}$ the dipolar field lines are roughly uniform. As the characteristic scale of the magnetic field one can choose $B_o{=}10^{12}{\rm\, Gs}$, considerably lower than the maximum field strengths observed in magnetospheres. Now, using the solution to a textbook problem for the relativistic accelerated motion of an electric charge in a uniform electric field \cite{landau1980} and the equivalence with the motion of a magnetic monopole in a uniform magnetic field, one can see that the minimum magnetic charge predicted by Dirac's theory, $g{=}\frac{e}{2\alpha}$, gets accelerated to a Lorentz factor $\gamma{=} 1{+}\frac{e B L}{2\alpha m c}$ after it has traveled a distance $L$ in a uniform magnetic field $B$. Multiplying $\gamma$ by the rest energy $mc^2$ of the monopole, one obtains the total energy $mc^2{+}E_{\rm acc}$, where $E_{\rm acc}{=}\frac{eBLc}{2\alpha}$ is the kinetic energy acquired during the acceleration process. This can be compared with a similar-looking formula for the quite unrelated first-order Fermi shock acceleration mechanism, in which the energy gain is substantially lower, namely $2\alpha\beta{\sim}\beta{/}68$ times the above value ($\beta$ is the speed of the shock front in units of $c$). Unlike electrons, for which synchrotron radiation in a strong uniform magnetic field would be expected, the magnetic monopole can accelerate along a straight line without emitting radiation. 
It is possible that the GZK cutoff will not be important for very massive monopoles, even though the magnetic monopole should couple to matter $\frac{eg}{e^2}{=}\frac{1}{2\alpha}{\approx} 68$ times more strongly than electrically charged particles (the rest mass of the monopole is estimated to be in the range from the electro-weak scale of $10^2\GeV$ up to the GUT scale of $10^{16}\GeV$). The simple estimate of the energy gain could also be arrived at from dimensional analysis alone and may seem unreliable, for example, because it does not take into account the energy loss due to possible multiple scattering during acceleration or other radiative processes (emission of electromagnetic radiation in general accelerated motion, e.g. synchrotron radiation in electric fields). On the other hand, one cannot exclude that much of the energy will nevertheless be carried by the monopole during its journey to the observer. If somewhere in the Universe such conditions exist as assumed above, and at the same time there are no strong large-scale electric fields, it is possible to accelerate the monopole directly and efficiently to extreme energies. Remarkably, the energy gain $E_{\rm acc}$ so obtained is independent of the invariant mass of the accelerated monopole. For the adopted conservative scales $B_o$ and $L_o$ characteristic of pulsar magnetospheres, $$E_{\rm acc}{=}\frac{eB_oL_oc}{2\alpha}{\approx} 2.1{\times} 10^{21}\eV \quad {\rm for}\quad B_oL_o{=}10^{17}\,{\rm Gs}{\cdot}{\rm cm}{=}3.2{\times} 10^4\muGs{\cdot}{\rm pc}. $$ This value coincides with the highest energies observed so far for UHECR (for the expected magnetic monopole mass of $100\GeV$, as for the electroweak unification energy scale, the corresponding Lorentz factor would be $\gamma{\approx}2{\times}10^{10}$). 
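A minimal numerical sketch of the estimate above, in SI units, with the pulsar-magnetosphere scales $B_o$ and $L_o$ adopted in the text:

```python
# Kinetic energy E_acc = e*B*L*c/(2*alpha) gained by a minimally charged
# Dirac monopole (g = e/(2*alpha)) over L = 1 km in B = 10^12 Gs,
# and the corresponding Lorentz factor for a 100 GeV monopole.
c = 2.99792458e8           # speed of light [m/s]
alpha_inv = 137.035999084  # inverse fine structure constant

B = 1e8                    # 10^12 Gs expressed in tesla
L = 1e3                    # 1 km in metres

# e*B*L*c/(2*alpha) in joules, divided by e to convert to eV:
E_acc_eV = B * L * c * alpha_inv / 2.0
gamma = 1.0 + E_acc_eV / 100e9      # rest energy m*c^2 = 100 GeV

print(f"E_acc ~ {E_acc_eV:.1e} eV, gamma ~ {gamma:.1e}")
```

This reproduces $E_{\rm acc}{\approx}2.1{\times}10^{21}\eV$ and $\gamma{\approx}2{\times}10^{10}$; note that the monopole mass enters only through $\gamma$, not through $E_{\rm acc}$.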
To be more realistic, the product $B_oL_o$ of the characteristic scales should be corrected by a dimensionless factor accounting for possible inhomogeneities of the field and for local changes in the relative directions of the magnetic field and velocity vectors along the monopole trajectory. A similarly large value of the product $B_oL_o$ as for magnetospheres can be obtained in other astrophysical contexts. A magnetic field of $2\muGs$, typical for the Galaxy, on a scale length of $100\,{\rm pc}$ gives an acceleration energy in the interstellar medium of $1.3{\times} 10^{19}\eV$. For other example sources the estimates are the following: interplanetary space ($50\muGs$, $1{\,\rm AU}$, $1.5{\times}10^{13}\eV$); Sun-spots ($10^3\,{\rm Gs}$, $10^{4}{\,\rm km}$, $2.1{\times}10^{16}\eV$); white dwarfs ($5{\times}10^6\,{\rm Gs}$, $10^{4}{\,\rm km}$, $10^{20}\eV$); radio-galaxy lobes ($10\muGs$, $10\,{\rm kpc}$, $6.3{\times} 10^{21}\eV$); clusters of galaxies ($1\muGs$, $100\,{\rm kpc}$, $6.3{\times}10^{21}\eV$); active galactic nuclei ($10^4\,{\rm Gs}$, $5{\,\rm AU}$, $1.5{\times} 10^{22}\eV$); and the intergalactic medium ($10^{-2}\muGs$, $3{\,\rm Gpc}$, $1.9{\times}10^{24}\eV$). The reference values for the $BL$ product shown here are estimated based on figure 1 in \cite{hillas1984}. Judging from these estimates, it appears that UHE values should be commonly attainable across the Universe in processes involving magnetic monopoles. \subsection{Selected mechanisms of production of UHE photons associated with magnetic monopoles} The acceleration energy $E_{\rm acc}$ discussed in the previous section could be released in various processes dual to those associated with high-energy electrically charged particles. The phenomenology is simple. 
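The source-by-source estimates listed in the previous subsection can be reproduced with a short script (a sketch; unit conversions only, evaluating $E_{\rm acc}{=}eBLc/2\alpha$ in SI units for the $(B,L)$ pairs quoted above):

```python
# Reproduce the order-of-magnitude acceleration energies
# E_acc = e*B*L*c/(2*alpha) for the astrophysical sources listed in
# the text (B converted to tesla, L to metres; result in eV).
c = 2.99792458e8
alpha_inv = 137.035999084
Gs, muGs = 1e-4, 1e-10            # gauss, microgauss in tesla
km, AU, pc = 1e3, 1.496e11, 3.0857e16
kpc, Gpc = 1e3 * pc, 1e9 * pc

sources = {
    "interstellar medium":   (2 * muGs, 100 * pc),
    "interplanetary space":  (50 * muGs, 1 * AU),
    "Sun-spots":             (1e3 * Gs, 1e4 * km),
    "white dwarfs":          (5e6 * Gs, 1e4 * km),
    "radio-galaxy lobes":    (10 * muGs, 10 * kpc),
    "clusters of galaxies":  (1 * muGs, 100 * kpc),
    "active galactic nuclei":(1e4 * Gs, 5 * AU),
    "intergalactic medium":  (1e-2 * muGs, 3 * Gpc),
}

for name, (B, L) in sources.items():
    E_acc_eV = B * L * c * alpha_inv / 2.0   # e*B*L*c/(2*alpha), in eV
    print(f"{name:22s} E_acc ~ {E_acc_eV:.1e} eV")
```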
The Lorentz transformation law of electric and magnetic fields implies that the magnetic field $B$ of an ultra-relativistic monopole `induces' an electric field of magnitude $\gamma c B$ (times a geometric factor of order unity, irrelevant here) acting on the electrically charged constituents of the medium at rest. It follows that the electromagnetic energy loss expected for a monopole with a unit magnetic charge $g$ should be $\frac{ge}{e^2}{=}\frac{1}{2\alpha}{\sim}68$ times greater than for an ultra-relativistic unit electric charge of comparable invariant mass and Lorentz factor $\gamma$ (this order of magnitude argument derives from the reasoning presented in \cite{kephart1996}). If the accelerated magnetic monopole were to decay, annihilate with an anti-partner, or interact with charged particles, the released photon energy would be of the order of $10^{21}\eV$ or beyond, recalling the energy estimates of the previous section. Furthermore, if the accelerated monopole decays, it will produce photons of sufficiently high energy, which in turn can produce particle-antiparticle (e.g., proton-antiproton) pairs. It seems not unlikely that the emission of UHE photons from monopoles accelerated to extremely high energies could occur near the Earth and be observed. The energy of a monopole could be significantly reduced as a result of the emission of UHE photons even before it reaches the Earth's atmosphere, perhaps even at very high altitudes (e.g., in the Van Allen belts), which would completely change the picture when it comes to the detection and identification of monopoles. A set of atmospheric showers could then be dispersed, perhaps over a large area (depending on where the photons are emitted). Even in the absence of detection, observational limits on monopole properties could be set. Besides the acceleration processes, massive monopolia \cite{hill1983} can also be considered as sources of UHE photons. 
A bound state of a magnetic monopole and its anti-partner, called monopolium, may form naturally in the Universe and be easier to find than single monopoles. If massive enough, relict GUT monopolia with masses of $10^{16}\GeV$ produced in the very young Universe may have survived up to the present in ultra-long-lived metastable states \cite{hill1983}. The monopolia would then decay by emitting radiation in the form of gluons, Z-bosons and photons, releasing a total energy of $2{\times}10^{16}\GeV$ in less than $10^{-38}\,{\rm sec}$ in the final annihilation stages. Such events, apart from the multitude of particles visible as cosmic rays, may produce high-energy gamma radiation. The number of various gluons, Z-bosons and photons produced in such an event in a given energy window can be calculated (taking into account fragmentation into secondary products). {\bf One can still expect on average several UHE photons with energy of $10^{19}\eV$ or more}, as follows from diagrams 2 and 3 in \cite{hill1983}. Due to the strong coupling between photons and monopoles, one can also consider the multi-photon annihilation of monopole--antimonopole pairs, or of their bound state -- monopolium -- into $2n$ photons. Such decays were investigated in \cite{fanchiotti2017}. These short considerations allow us to conclude that, if magnetic monopoles are present in the Universe, not only should UHE values be accessible across the Universe, but this energy may also be released in the form of UHE photons. Given this, it is rather surprising that magnetic monopole searches have so far come up empty-handed. In this respect it is worth recalling that gravitational radiation was predicted by the theory of General Relativity. This prediction was a mathematical construct following from the relativistic structure of the gravitation theory. Scientists did not give up their trust in this prediction in light of the theory's prior success, its consistency, and its aesthetic appeal. 
It took almost a century to reach a sensitivity sufficient to detect gravitational waves. Could a similar scenario occur with magnetic monopoles? \section{A magnetic monopole. What is it and why might it not exist?} A Dirac magnetic monopole is a structureless point particle of unspecified mass, which resembles the point-like electron more than the composite proton. The monopole is described in the framework of Abelian Maxwell field theory in the presence of electrically charged quantum states. The proton, by contrast, is not a purely electromagnetic particle, and field-theoretical models of it as an extended electrically charged system are considered. The magnetic monopole might likewise be realized in nature as an extended particle, more similar to the proton than to the electron. Such particles can be described in terms of non-Abelian gauge field theory as magnetically charged solitons. It is important to understand the difference between these two kinds of magnetic monopoles. \subsection{Dirac magnetic monopole} The electric charge of isolated particles is always an integer multiple of the elementary charge $e$, and there is no explanation for this quantization within the framework of the standard model of particles \cite{ellis1983}. Particles quite unconnected with each other, such as the proton (a large composite particle) and the electron (a point-like particle), have equal absolute values of electric charge (defined in terms of the Gauss integral over a sphere of infinite radius) with an observational accuracy of $10^{-21}$ \cite{bressi2011}. In relativistic quantum theory, the elementary charge corresponds to a pure number $\alpha{\equiv}\frac{e^2}{\hbar c}$ independent of the units chosen ($\alpha^{-1}{=}137.035\,999\,084(21)$ is the currently recommended value \cite{CODATA}). In his work on {\it Quantised Singularities in the Electromagnetic Field} \cite{dirac1931}, Dirac put forward an idea concerning the reason for the existence of a smallest electric charge $e$, known experimentally to exist. 
Although it can be understood why the electric charges of elementary particles are mathematically equal (for example, if a magnetic pole exists, then electric charges must be integer multiples of some elementary charge, as follows from Dirac's formula), it is not known why the $\alpha$ corresponding to the reference electron's charge has precisely this value, or how it could be computed. A pure number of this kind calls for an explanation: it would have to be named, that is, constructed algorithmically, say, by identifying it with a unique convergent series, as is the case for $\pi$. It is known that the set of numbers that can be named is of measure zero in the set of all numbers on the real axis. It may be that no mathematical structure exists that could be used to name $\alpha$; then nobody will ever be able to compute $\alpha$, and one will have to rely on its arbitrary experimental value. Dirac was looking for an explanation of this value. The problem remains unsolved. According to Dirac, it is perhaps the most fundamental unsolved problem of physics, and he doubted whether any really big progress would be made in understanding the fundamentals of physics until it has been solved \cite{dirac1978}. Dirac begins his exposition of magnetic monopole theory \cite{dirac1931} with the observation that the phase of a normalized wave function $\psi$ can be globally changed by an arbitrary additive constant. The phase of $\psi$ at a particular point is thus not definite; only the phase difference between any two points is definite. The difference need not be independent of the curve connecting these points unless the points are close enough to each other. A change in phase may therefore occur around a closed curve. 
Dirac deduced from his analysis of various observables that, for this not to give rise to ambiguity, the change in phase around any closed curve must be the same for all $\psi$'s (leaving aside the role of representation). This universality implies that the phase increase between any given pair of points along a particle's worldline $x^{\mu}$ connecting these points is determined solely by the ambient electromagnetic field $A_{\mu}$ and the worldline, and thus by the line integral $\frac{e}{\hbar c}\int \!A_{\mu}\ud{x}^{\mu}$ evaluated along that worldline. Hence, the total increase in phase, $\Delta \Phi{\equiv}\frac{e}{\hbar c}\oint\! A_{\nu}\ud{x}^{\nu}$, around a closed path will be nonzero for a non-integrable phase, as already stated. In view of the Bohm--Aharonov effect \cite{aharonov1959}, it is clear that only the phase factor $\exp{(\I\Delta \Phi)}$, and not $\Delta \Phi$ alone, is physically meaningful \cite{wu1975}. By Stokes' theorem applied to a continuously differentiable $A_{\mu}$, the $\Delta \Phi$ can be recast as a surface flux integral $\frac{e}{2\hbar c}\iint F_{\mu\nu}\ud{x}^{\mu}\wedge \ud{x}^{\nu}$ evaluated over any surface stretched across the closed curve. For a genuine Maxwell field, $\Delta \Phi$ will vanish in the limit when the closed curve is shrunk to a point, even if the integration surface were to close and remain finite in that limit.\footnote{By Stokes' theorem, the discussed flux integral over a closed surface can be recast as a volume integral $\iiint\! \partial_{\alpha}F_{\mu\nu}\ud{x}^{\alpha}\wedge \ud{x}^{\mu}\wedge \ud{x}^{\nu}{\equiv}0$ identically vanishing on account of the structural identity $\partial_{\alpha}F_{\mu\nu}{+}\partial_{\mu}F_{\nu\alpha}{+}\partial_{\nu}F_{\alpha\mu}{\equiv}0$ satisfied by Maxwell fields. 
} However, for a closed spatial surface $\Sigma$ as perceived in some inertial reference frame, the surface integral defines the total magnetic flux through $\Sigma$ (which must be zero). Therefore, in order to stay within the Maxwell theory framework and yet allow for a nonzero magnetic charge, the potential $A_{\mu}$ must be singular in some way. The singularity might reside in a gradient contribution $\kappa_{\mu}{\equiv}\partial_{\mu} \kappa$ from a function $\kappa$ for which the integrability condition requiring $\partial_{\nu}\kappa_{\mu}{=} \partial_{\mu}\kappa_{\nu}$ everywhere (equivalent here to Schwarz's theorem for a twice continuously differentiable function) is violated.\footnote{The classical example of such a singular function is the angular phase $\phi{=}\arctan(y/x)$ of a complex function $\exp{(\I\phi)}$ considered as a function on a plane: the line integral of the gradient field $w{=}[\partial_x\phi,\partial_y\phi]$ along a circle of any radius centered at $x{=}y{=}0$ is $2\pi$, whereas the corresponding `curl' field $\partial_xw_y{-}\partial_yw_x$ vanishes outside the center and is not defined there in terms of the classical definition of derivatives. To interpret this in terms of Stokes' theorem on the plane, one has either to assume that the plane is punctured at the center and there is no field at that point, or that the plane is smooth and the curl field is a distribution supported entirely at the center.} This singularity can arise because one considers a qualitatively more complicated system than merely the field alone, that is, a quantum particle interacting with a field. Dirac showed that non-integrable derivatives $\kappa_{\mu}$, such as those originating from the indefinite quantum phase component $\kappa$, are of this kind and can be consistently reinterpreted in terms of the electromagnetic field. 
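The footnote's example can be checked numerically: the line integral of the gradient of $\phi{=}\arctan(y/x)$ around a circle enclosing the origin equals $2\pi$, even though the field is curl-free away from the center (a sketch using plain midpoint quadrature on the unit circle):

```python
import math

# Line integral of w = grad(arctan(y/x)) = (-y, x)/(x^2 + y^2) around
# the unit circle; the result is 2*pi although curl(w) = 0 away from
# the origin, illustrating the violated integrability condition.
N = 100000
total = 0.0
for k in range(N):
    t = 2.0 * math.pi * (k + 0.5) / N           # midpoint parameterization
    x, y = math.cos(t), math.sin(t)             # point on the unit circle
    r2 = x * x + y * y
    wx, wy = -y / r2, x / r2                    # gradient field components
    dx, dy = -math.sin(t), math.cos(t)          # tangent d(x,y)/dt
    total += (wx * dx + wy * dy) * (2.0 * math.pi / N)

print(total)   # ~ 6.283185... = 2*pi
```

The same value $2\pi$ is obtained for a circle of any radius, since only the winding around the singular point matters.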
Namely, Dirac identified the $\kappa$ responsible for the change in phase around any closed curve by noticing that any wave-function $\psi$ can be represented as a product $\psi{=}\tilde{\psi} \exp(\I \kappa)$. Here, $\tilde{\psi}$ is normalized and has a definite phase, while $\exp(\I \kappa)$ carries the indefinite part of the phase, which nevertheless has definite derivatives $\kappa_{\mu}{\equiv}\partial_{\mu}\kappa$.\footnote{Earlier, it was assumed that the phase difference between two points is definite, and therefore, this should hold in the limit of infinitesimally distant points.} Furthermore, for all wave functions and their linear combinations with constant coefficients to acquire the same change $\Delta \Phi$ along a given closed path, it suffices that the $\kappa_{\mu}$'s of different $\psi$'s differ from one another by the gradient of a smooth function (this means that it is allowed to change $\kappa$ in each instance of $\psi$ by adding an arbitrary smooth function). Now, one can suppose that $\psi$ is a state of a particle in free motion for which the indefiniteness of phase occurs. Acting with the quantum momentum operator $\hat{p}_{\mu}{=}\I\hbar\partial_{\mu}$, one can see that $ ({\rm e}^{\I\kappa} \hat{p}_{\mu} {\rm e}^{{-}\I\kappa})\psi{=}(\hat{p}_{\mu}{+}\hbar \kappa_{\mu})\psi$, which means that the classical four-momenta $p_{\mu}$ and $\tilde{p}_{\mu}$ corresponding to the respective states $\psi$ and $\tilde{\psi}$ are related by $\tilde{p}_{\mu}{=}p_{\mu}{+}\hbar \kappa_{\mu}$, and so the definite phase state $\tilde{\psi}$ corresponds to the motion of charge $-e$ in the electromagnetic potential $A_{\mu}{=}{-}\frac{\hbar c}{e}\kappa_{\mu}$. Since for an indefinite phase the integrability condition $\partial_{\nu}\kappa_{\mu}{=}\partial_{\mu}\kappa_{\nu}$ is violated somewhere, there must be an effective electromagnetic field $F_{\mu\nu}$ present. 
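The operator identity invoked above can be verified symbolically. A one-dimensional sketch (assuming SymPy), using the text's sign convention $\hat{p}{=}\I\hbar\partial$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
kappa = sp.Function('kappa', real=True)(x)   # indefinite phase component
psi = sp.Function('psi')(x)                  # arbitrary wave function

# Momentum operator in the convention of the text: p = i*hbar*d/dx
p = lambda f: sp.I * hbar * sp.diff(f, x)

# Conjugated operator e^{i kappa} p e^{-i kappa} acting on psi ...
lhs = sp.exp(sp.I * kappa) * p(sp.exp(-sp.I * kappa) * psi)
# ... equals (p + hbar * kappa') psi
rhs = p(psi) + hbar * sp.diff(kappa, x) * psi

print(sp.simplify(sp.expand(lhs - rhs)))  # -> 0
```

The same one-line computation goes through for each spacetime component $\partial_{\mu}$ separately, which is all the four-dimensional statement requires.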
A similar calculation for a state $\psi$ in a non-vanishing field leads to gauge transformation relations between momenta and between potentials in agreement with Weyl's Principle of Gauge Invariance \cite{weyl1929}. This is the way that Dirac consistently incorporated indefiniteness of phase into the framework of ordinary quantum theory of charged particles in the presence of Maxwell fields. This can be regarded as the first essential step in the construction of magnetic monopoles by Dirac. The second essential step in Dirac's construction starts with noticing that a wave function $\psi$ is not changed when its phase gets increased by $2\pi n$, with $n$ being an integer. This implies that the difference between total increments in the phases of different wave functions around a loop will also be integer multiples of $2\pi$ in general, including the special case discussed earlier when $n{=}0$. However, by the continuity of wave functions as solutions of wave equations, this is not so when a loop is sufficiently small, in which case the total increment is only due to the encompassed flux, which also must be small and vanishing in the limit of the stretched surface (and thus also the loop) being shrunk to a point. However, there is one exception possible at nodal points where a given $\psi$ vanishes, and so the phase of $\psi$ cannot be defined. The locus of nodal points forms a line as perceived in space, because a vanishing complex valued function provides two independent scalar constraints involving four spacetime coordinates. Hence, the total change in phase around a small loop encircling the nodal line of $\psi$ will be equal to an integer multiple of $2\pi$ (depending on the particular $\psi$ considered) plus a universal contribution from the flux through a surface stretched on that small loop. 
The flux contribution in this case is universal for all $\psi$'s, irrespective of whether a particular $\psi$ has or has not a nodal line passing through the considered surface. Then, Dirac recalls the standard network of closed small loops argument to conclude that this result will be similar for any large closed curve -- now the nodal line of a continuous $\psi$ can pass several times through a large surface stretched on the closed curve, therefore the integer multiple of $2\pi$, now being a sum $2\pi\sum_i n_i$, takes into account the respective positive and negative integer contributions, while the flux integral is now taken over that large surface. The final step is to close the considered surface and to see that for any closed surface $\Sigma$ -- that is, one without a boundary closed curve -- the sum $2\pi\sum_i n_i$ must be proportional to the total magnetic flux $4\pi\Phi_{B}$ through $\Sigma$ (with a proportionality factor implied by the formula defining the contribution from the non-integrable phase $\kappa$ to the electromagnetic potential $A_{\mu}$, as discussed earlier), namely $$2\pi\sum_i n_i{+}\frac{e}{\hbar c}\cdot 4\pi \Phi_B{=}0,\qquad \Phi_B{\equiv}\frac{1}{4\pi}\cdot\frac{1}{2}\oiint_{\Sigma} F_{\mu\nu}\ud{x}^{\mu}\wedge \ud{x}^{\nu}.$$ The $\Phi_B$ is by definition the total magnetic charge enclosed within $\Sigma$. Since the flux in a given field must be the same irrespective of the particular $\psi$ being considered in that field, the result of the summation in $\sum_i n_i$ must be universal too (although the set of integers $n_i$ may be different for different $\psi$'s) and so it can be denoted by some unique integer. If the integer is nonzero, it means that the nodal line of a given $\psi$ ends at some point inside the considered closed surface. 
Moreover, since the closed surface can be made infinitesimally small, one can infer that this point must be the point of origin for nodal lines of all $\psi$'s (although the lines may be different for different $\psi$'s).\footnote{It can be recalled that the nodal line of a $\psi$ is something different from the singularity line of the part of $A_{\mu}$ connected with the non-integrability of the phase of $\psi$; while the nodal line of a $\psi$ is a physically objective (phase independent) characteristic of a $\psi$ in a given field, the latter singularity line can be deformed freely by means of smooth gauge transformations such that the endpoint of the singularity line remains fixed at the monopole position.} At the position of the point, there is a magnetic monopole present with magnetic charge $g_n$ connected with the electric charge $e$ through Dirac's simple formula \begin{equation}g_n{=}ng, \qquad g{=}\frac{\hbar c}{2e},\quad n{=}0,\pm1,\pm2,\dots\label{eq:Dirac_formula}\end{equation} with an integer $n$ characterizing the nodal lines ending at the monopole position. There is an opposite singular magnetic flux along the string-like nodal line, compensating the flux of the magnetic charge through the spherical surface encompassing that charge. The singular flux could potentially be detected by means of the Bohm--Aharonov effect. However, since the corresponding change in phase is an integer multiple of $2\pi$, it has no effect on a quantum particle in the field of the string. The above relation between the smallest magnetic and electric charges $g$ and $e$ ensures that the string attached to the magnetic monopole is unobservable. Therefore, the Dirac monopole acts as a genuine magnetic charge. The presence of the nodal line (called the Dirac string) emanating from the magnetic monopole position is associated with a singularity line of the gauge potential. 
In Cartesian coordinates in Minkowski spacetime, the static field of a Dirac magnetic monopole can be described by the gauge potential $A{=}g\frac{x \ud{y}{-}y \ud{x}}{r(r{+}z)}$ (with $r{=}\sqrt{x^2{+}y^2{+}z^2}$, up to a constant dimensional factor), which is singular on the straight semi-line $z{=}{-}r{\leqslant}0$ ($x{=}0{=}y$). The corresponding gauge-invariant field $F{\equiv} d A$ reads \\$F{=}g\frac{x \ud{y}\wedge\ud{z}{+}y \ud{z}\wedge\ud{x}{+}z\ud{x}\wedge\ud{y}}{r^3}$ and is singular only at the center $r{=}0$ (the position of the magnetic monopole). The form $A$ is indeterminate up to the total derivative of a scalar field. Another possible gauge-transformed form, $A'{=}A{-}2g d\phi{=}{-}g\frac{x \ud{y}{-}y \ud{x}}{r(r{-}z)}$, where $\phi{=}\arctan(y/x)$, is singular on the straight semi-line $r{=}z{\geqslant}0$ ($x{=}0{=}y$), while $F$ remains the same (note that the scalar field used to generate the gauge transformation is not smooth). It is seen that the singularity line of the gauge potential can be altered by means of singular gauge transformations such that the end-point stays fixed at the center, and so the singularity line of the gauge potential has no physical meaning. The elementary flux of a magnetic monopole can be obtained in a more straightforward way using the non-integrability of the angular variable. This was presented on the occasion of introducing Dirac's electric monopole \cite{AStar1984}, whose charge is determined by means of an improper gauge transformation on a null plane. Although the electromagnetic potential $A_{\mu}$ is not gauge-invariant, the sum $\frac{e}{c}A_{\mu}{+}\partial_{\mu}S$, where $S$ is the phase of a quantum charged system, is gauge invariant: $\frac{e}{c}\delta A_{\mu}{+}\partial_{\mu}\delta S{=} 0$, where the $\delta$ symbol denotes the change acquired after performing a gauge transformation (not necessarily infinitesimal). 
Now, performing an improper gauge transformation ${-}\delta S/\hbar{=}\phi{\equiv} \arctan{(y/x)}$, which leads to the allowable $2\pi\hbar$ increase in the phase round a closed loop (here, encircling the $z$ axis), it follows that $\frac{e}{\hbar c}\delta A_{\mu}{=}\partial_{\mu}\phi$, and hence $g{\equiv}\frac{1}{4\pi}\oint \delta A_{\mu}\ud{x}^{\mu}{=}\frac{\hbar c}{2e}$ as expected for the elementary magnetic charge. There is also an interesting result due to Jackiw \cite{jackiw1985} which shows that it is feasible to introduce the Dirac magnetic monopole in a gauge-invariant manner without the singular electromagnetic four-vector potential. Jackiw's construction relies on the non-commutativity of kinematical momenta in the Lorentz--Heisenberg system defined by the brackets $[r^i,r^j]{=}0$, $[r^i,\pi^j]{=}\I\hbar\delta^{ij}$, $[\pi^i,\pi^j]{=}\I\frac{e\hbar}{c}B^{k}\epsilon_{k}^{\phantom{k}ij}$ and a Hamiltonian $H{=}\frac{1}{2m}{\pi^i\pi_i}$. The corresponding equations of motion make sense for any (not necessarily source-free) magnetic vector $B^i$, however, with a violated Jacobi identity, $\epsilon_{ijk}[\pi^i,[\pi^j,\pi^k]]{=}\frac{2e\hbar^2}{c}\partial_k B^k{\neq}0$, implying non-associativity in the composition of three finite translations in the presence of magnetic sources. To regain the associativity required by the usual quantum formalism, a condition must be imposed on the total flux of the magnetic field emerging out of a tetrahedron formed from a composition of three arbitrary translation vectors. The condition turns out to be equivalent to Dirac's formula for the magnetic charge \eqref{eq:Dirac_formula}. Importantly, Jackiw's construction demonstrates that {\it quantal magnetic sources must be structureless point particles} \cite{jackiw2003}. This observation points to an essential difference between the elementary Dirac monopole and the field-theoretical realizations of magnetic monopoles discussed in the next subsection. 
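Returning to the explicit monopole potentials $A$ and $A'$ given above, their field content can be checked by direct computation. A sketch (assuming SymPy) that evaluates $F{=}\ud{A}$ as a curl in Cartesian coordinates; away from the respective strings, both potentials give the same radial monopole field $g\,\vec{x}/r^3$:

```python
import sympy as sp

x, y, z, g = sp.symbols('x y z g', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# A = g (x dy - y dx) / (r (r+z)), singular on the half-line x = y = 0, z <= 0
A = sp.Matrix([-g * y / (r * (r + z)), g * x / (r * (r + z)), 0])

def curl(V):
    return sp.Matrix([
        sp.diff(V[2], y) - sp.diff(V[1], z),
        sp.diff(V[0], z) - sp.diff(V[2], x),
        sp.diff(V[1], x) - sp.diff(V[0], y),
    ])

B_monopole = g / r**3 * sp.Matrix([x, y, z])  # field of a point pole at the origin
print([sp.simplify(c) for c in curl(A) - B_monopole])   # -> [0, 0, 0]

# Gauge-transformed potential A' = A - 2 g d(phi), phi = arctan(y/x):
# singular on the opposite half-line, but with the same field F
phi = sp.atan2(y, x)
Ap = A - 2 * g * sp.Matrix([sp.diff(phi, v) for v in (x, y, z)])
print([sp.simplify(c) for c in curl(Ap) - B_monopole])  # -> [0, 0, 0]
```

The computation is valid pointwise away from the singular half-lines; the strings themselves carry the compensating singular flux discussed in the text.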
Finally, it should be remarked that using Dirac's formula one can predict a much stronger coupling constant for a magnetic monopole--photon system than that for the electron--photon system. Given an electric charge in accelerated motion, various radiation formulas are obtained in terms of the Li{\'e}nard-Wiechert potential method and then improved by QED corrections. According to Dirac's formula \eqref{eq:Dirac_formula}, the interaction force between two quanta of magnetic monopoles is ${\sim} 4692$ times stronger than the interaction between two elementary electric charges. Hence, the emission amplitudes in the leading order are expected to be $(\frac{g}{e})^2{=}(n/2\alpha)^2{\sim}4692n^2$ times greater in the case of a magnetic charge than the analogous amplitudes for electric charges. Roughly speaking, this conclusion is reached by appealing to a symmetry of Maxwell equations, thus replacing in the formulas the electric charge with the magnetic charge and the Faraday tensor $F_{\mu\nu}$ with its dual $\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}F^{\mu\nu}$ (modulo some subtleties not important here); see \cite{dooher1971}. \subsection{Field-theoretical realization of elementary magnetic monopoles} The purpose of this section is to illustrate the difference between the elementary point-like magnetic monopole of Dirac and topological magnetic monopoles, which are field-theoretical realizations of the elementary monopole. To obtain a field-theoretical realization of a Dirac magnetic monopole, a general unifying compact gauge group is chosen. It is such that it can be spontaneously broken down to the symmetry group of the Standard Model, $SU(3){\times}SU(2){\times}U(1)$, so that the electromagnetic field can emerge. The $SU(5)$ grand unified theory \cite{georgi1974} is an example in which monopoles arise \cite{scott1980}. 
Topologically nontrivial solutions are interpreted as magnetic monopoles, whose charge, appropriately identified with the topological charge of a given finite-energy field configuration, becomes naturally quantized \cite{weinberg1996}. Such solutions are static, non-point-like, massive soliton lumps of finite width, whose matter field quickly fades away with distance. Asymptotically, the dominant part of the system is the one mathematically equivalent to the magnetic field of an isolated point pole, but without the string characteristic of the Dirac monopole. Predicted masses of magnetic monopoles within such theories vary by orders of magnitude and can be as high as $10^{16}\GeV$ for super-massive monopoles, which might have appeared in high abundance in the early Universe before or after the inflation epoch and survived to the present \cite{preskill1984,weinberg1996}. In 1974, 't~Hooft \cite{hooft1974} and Polyakov \cite{polyakov1974} found a stable monopole solution in a Yang-Mills-Higgs system with an $SO(3)$ gauge group and an isovector field, which is the simplest model in which magnetic monopoles appear (see \cite{witten1979}). Their construction (which avoids the introduction of Dirac's string) can be repeated for any group-valued gauge field in which the ordinary $U(1)$ gauge group of electromagnetism can be defined as a subgroup (see, for example, a construction for general non-commutative gauge fields \cite{wu1975}). The construction will be sketched below for completeness. One starts with a gauge field theory with some matter field, forming a closed coupled system described by some Lagrangian density. The classic example considered in this respect is a non-Abelian model with a compact covering group because it has a spherically symmetric monopole as a solution. 
The model belongs to the class of Georgi-Glashow non-Abelian models introduced to describe the hierarchy of particle masses emerging under spontaneously broken gauge symmetry \cite{georgi1972}. Assuming the $SO(3)$ local gauge group for illustration in the simplest case, the respective Lagrangian density is a linear combination of three parts: the Yang-Mills term ${\propto} F^a_{\mu\nu}F_a^{\mu\nu}$, the matter field term ${\propto} D_{\mu}\Psi_aD^{\mu}\Psi^a$ (involving coupling to Yang-Mills fields through a covariant derivative $D$), and the potential term for a triplet of matter scalar fields ${\propto}\lambda(\Psi_a\Psi^a{-}\rho^2)^2$. Here, the summation is understood with respect to both kinds of indices separately, and $\rho,\lambda$ are fixed constants. The covariant derivative acting on $\Psi^a$ is defined as $D_{\mu}\Psi^a{=}\partial_{\mu}\Psi^a{+} \tilde{g}\epsilon^a_{\phantom{a}bc}A^b_{\mu}\Psi^c$, while $F^a_{\mu\nu}{=}\partial_{\mu}A^a_{\nu}{-} \partial_{\nu}A^a_{\mu}{+}\tilde{g}\epsilon^a_{\phantom{a}bc}A^b_{\mu}A^c_{\nu}$. Here, $A^a$ is the connection field (gauge potential), $F^a$ is the associated curvature (gauge field strength), and $\Psi$ is a (vector-valued) isovector Higgs field. From these group-valued fields, one can construct a Maxwell-like field based on 't~Hooft's \cite{hooft1974} definition $F_{\mu\nu}{\equiv} n_a(F^a_{\mu\nu}{-}\tilde{g}^{-1}\epsilon^a_{\phantom{a}bc} D_{\mu}n^bD_{\nu}n^c)$, invariant with respect to $SU(2)$ gauge symmetry. 
Now, the divergence of the dual field $F^{\star\mu\nu}{\equiv}\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}$ (which would vanish identically for an ordinary Maxwell field) is non-vanishing and reads $\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}\partial_{\nu}F_{\alpha\beta}{=}{-}{4\pi}\tilde{g}^{-1}J^{\mu}$, where $J^{\mu}{\equiv}\frac{1}{8\pi}\epsilon^{\mu\nu\alpha\beta}\epsilon_{abc}\partial_{\nu}n^a \partial_{\alpha}n^b\partial_{\beta}n^c$ is a conserved topological current, $\partial_{\mu}J^{\mu}{=}0$; here, $n^{a}{\equiv}\frac{\Psi^a}{\sqrt{\Psi^b\Psi^b}}$ is a unit director of the Higgs field. With this current, one can associate a topological charge that can be calculated, similarly as for the ordinary electric charge, from a volume integral of the time component of the current density: $Q{=}\int J^0\ud^3{x}$. In particular, in the static field case, the definition of $Q$ with the expression for $J^0$ substituted into the volume integral is, in terms of topology, simply the winding number -- a degree or index of the mapping $\vec{x}{\to} \vec{n}(\vec{x})$ between physical space and the domain of finite energy static fields. The charge $Q$ so defined attains only integer values. However, by analogy with Maxwell fields, we can write the $0$ component of the above divergence of the dual static field as $\vec{\nabla}{\cdot}\vec{B}{=}4\pi \tilde{g}^{-1} J^0$, which upon integration gives the total flux of the magnetic field, and in this way one obtains the associated magnetic charge $g_Q{=}Q\tilde{g}^{-1}$ of a soliton solution (in the static case $F_{0i}{=}0$; thus, there is no corresponding electric field). The lowest energy static field in the sector $Q{=}1$ was found by 't~Hooft and Polyakov. The particular form of this solution is not important here. It suffices to say that the solution describes a spatially extended finite energy stable soliton concentrated about the origin. 
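The identification of $Q$ with the degree of the map $\vec{x}{\to}\vec{n}(\vec{x})$ can be illustrated numerically. A sketch (assuming NumPy) that evaluates the winding number as a surface integral over a large sphere, $Q{=}\frac{1}{4\pi}\oint \vec{n}\cdot(\partial_{\theta}\vec{n}\times\partial_{\phi}\vec{n})\,\ud\theta\,\ud\phi$, for the asymptotic `hedgehog' configuration $n^a{=}x^a/r$ of the $Q{=}1$ sector:

```python
import numpy as np

# Winding number Q = (1/4pi) \oint n . (dn/dtheta x dn/dphi) dtheta dphi,
# evaluated with the midpoint rule and central-difference derivatives.
def winding_number(n_of_angles, n_theta=100, n_phi=200, eps=1e-6):
    dth, dph = np.pi / n_theta, 2 * np.pi / n_phi
    Q = 0.0
    for t in (np.arange(n_theta) + 0.5) * dth:
        for p in (np.arange(n_phi) + 0.5) * dph:
            n0 = n_of_angles(t, p)
            dn_dth = (n_of_angles(t + eps, p) - n_of_angles(t - eps, p)) / (2 * eps)
            dn_dph = (n_of_angles(t, p + eps) - n_of_angles(t, p - eps)) / (2 * eps)
            Q += np.dot(n0, np.cross(dn_dth, dn_dph)) * dth * dph
    return Q / (4 * np.pi)

# Hedgehog configuration n^a = x^a / r, restricted to a sphere
hedgehog = lambda t, p: np.array([np.sin(t) * np.cos(p),
                                  np.sin(t) * np.sin(p),
                                  np.cos(t)])
print(round(winding_number(hedgehog), 3))  # -> 1.0
```

A configuration with constant $\vec{n}$ gives $Q{=}0$ in the same way; the integer comes out independent of the sphere's radius, as a degree must.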
The associated magnetic field of the 't~Hooft tensor $F_{\mu\nu}$ asymptotically overlaps with that of a point magnetic monopole. In effect one can identify the solution as a monopole with magnetic charge $\tilde{g}^{-1}$ and finite mass. If the 't~Hooft tensor is to be interpreted as a genuine Maxwell field, the coupling constant $\tilde{g}$ must be proportional to $e$. Now, the dimensional analysis shows that one must have $\tilde{g}{=}\frac{e}{\hbar c}{=}\frac{\alpha}{e}$. Hence, the quantum of magnetic charge in this model is $g{=}\frac{e}{\alpha}$ which is twice the value obtained by Dirac. Additionally, in Schwinger's quantum field theory of electric and magnetic charges \cite{schwinger1966}, magnetic monopoles have charges twice the Dirac value. Schwinger made a comment on this discrepancy by saying that space-reflection considerations in his model require an infinite discontinuity line rather than the semi-infinite line appearing in Dirac's theory. He gave several arguments to support his more restrictive value for the quantum of magnetic charge \cite{schwinger1966}. However, 't~Hooft points out that the Dirac quantization condition is fully restored in the Georgi--Glashow model when one considers isospin $1/2$ representations of the group $SU(2)$ \cite{hooft1974}. \section{Staruszkiewicz argument against the existence of magnetic monopole} The argument against the existence of a magnetic monopole is rooted in the infrared or zero-frequency regime of electromagnetic fields. This regime is natural to consider since the information about electric and magnetic charges is encoded in the asymptotic part of the electromagnetic field that must decrease slowly enough in order to introduce a non-zero contribution to the Gauss flux integral (large distances mean low frequencies). 
By solving for the bremsstrahlung radiation field, it also becomes clear that the asymptotic electromagnetic fields, which are emitted when a charge $Q$ changes its four-velocity, are localized entirely outside the light cone. It is therefore clear that spatial infinity must play the central role in the quantum theory of the electric charge. Staruszkiewicz formulated such a theory \cite{AStar1989a}, and its structure turns out to depend in a nontrivial way on the numerical value of the fine structure constant $\alpha$. As an important part of his theory, Staruszkiewicz gave a simple argument against the existence of magnetic monopoles. The argument was first proposed in an essay in honour of Yakir Aharonov \cite{AStar1998a}, then mentioned in \cite{AStar1997,AStar1998b,AStar2002a}. The argument will be described in what follows. However, first one has to understand the residual structure of the electromagnetic field at spatial infinity that carries the information about electric and magnetic charges. The structure of such fields has been described by Alexander and Bergmann \cite{alexander1984}. It turns out that this structure can be equivalently described by two independent noninteracting scalar massless fields living on $2{+}1$ dimensional de Sitter spacetime. \subsection{Spatial infinity} The outer part of the light cone (sufficient to investigate spatial infinity) can be covered with the (hyper)spherical coordinates $$x^0{=}\chi \sinh{(\psi)},\qquad \vec{x}{=}\chi \cosh{(\psi)}\cdot\vec{n}(\theta,\phi),$$ with $\chi$ being the spatial coordinate ($0{<}\chi{<}{+}\infty$), $\psi$ the time-like coordinate (${-}\infty{<}\psi{<}{+}\infty$), and a unit vector $\vec{n}$ parameterized with ordinary spherical angles $0{\leqslant}\theta{\leqslant}\pi$, $0{\leqslant}\phi{<}2\pi$. 
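The coordinates just introduced can be checked with a short symbolic computation. A sketch (assuming SymPy) that verifies $\vec{x}^2{-}(x^0)^2{=}\chi^2$ and computes the metric induced on the unit hyperboloid ($\chi{=}1$); the Minkowski signature is taken here as $({-},{+},{+},{+})$, so that the signs match the metric components quoted in the next paragraph:

```python
import sympy as sp

chi = sp.symbols('chi', positive=True)
psi, theta, phi = sp.symbols('psi theta phi', real=True)

# Hyperspherical parameterization of the region outside the light cone
n = sp.Matrix([sp.sin(theta) * sp.cos(phi),
               sp.sin(theta) * sp.sin(phi),
               sp.cos(theta)])
X = sp.Matrix([chi * sp.sinh(psi)]).col_join(chi * sp.cosh(psi) * n)  # (x^0, x^1, x^2, x^3)

# The hyperboloid relation x.x - (x^0)^2 = chi^2 holds identically
assert sp.simplify(X[1]**2 + X[2]**2 + X[3]**2 - X[0]**2 - chi**2) == 0

# Induced metric on the unit hyperboloid chi = 1, signature (-,+,+,+)
eta = sp.diag(-1, 1, 1, 1)
J = X.subs(chi, 1).jacobian((psi, theta, phi))
g = sp.simplify(J.T * eta * J)
print(g)  # diagonal: (-1, cosh(psi)**2, cosh(psi)**2 * sin(theta)**2)
```

The off-diagonal entries vanish identically, confirming that $\psi$ is a global time coordinate on the hyperboloid with unit two-spheres as constant-time slices.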
A time-like de Sitter hyperboloid $\vec{x}^2{-}(x^0)^2{=}\chi^2$ (with $\chi$ regarded as a constant parameter) is a simply connected three-dimensional curved spacetime with constant Ricci curvature $R{=}6/\chi^2$. For considering asymptotic fields (fields in the limit $\chi{\to}{+}\infty$) or scale-independent fields, one can instead consider the unit hyperboloid $\vec{x}^2{-}(x^0)^2{=}1$ as representing spatial infinity outside the light cone \cite{sommers1978}. The intrinsic metric on the unit de Sitter hyperboloid is diagonal when expressed in the adopted coordinates: $$g_{ij}{\equiv}\chi^{-2}\eta(\partial_ix,\partial_jx){=}{\rm diag}[{-}1,\cosh^2(\psi),\cosh^2(\psi)\sin^2\theta],\quad i,j{=}1,2,3.$$ It can be regarded as defining the metric of spatial infinity. It is seen that spatial infinity is homeomorphic and isometric to the unit hyperboloid \cite{alexander1984}. The coordinate $\psi$ plays the role of time, while spaces of constant time are unit two-spheres parameterized with $\theta,\phi$. The spatial infinity is the spacetime scene for asymptotic Maxwell fields and for their quantization. It is important to observe that spatial infinity has a well-defined Cauchy surface, which is a feature desirable in the quantum field theoretical context. \subsection{Zero frequency electromagnetic field and electron's charge as a quantum object} The information on charges is encoded in the asymptotic part of the electromagnetic field that decreases slowly enough (this is reminiscent of the Gauss integral formula for the total charge -- as one can say, the electric charge resides at spatial infinity \cite{AStar1998b}). Described in terms of the electromagnetic gauge potential $A^{\mu}(x)$, this part is homogeneous of degree ${-}1$, depending on the direction of the radius vector $x$ in a Lorentzian reference frame distinguished by the homogeneity property. Hence, the asymptotic field is effectively a function defined on the de Sitter hyperboloid described earlier. 
The asymptotic part can be extracted from any field by taking the Gervais--Zwanziger limit $A_{\mu}(x){\to} \lim_{\lambda{\to}{+}\infty}\lambda A_{\mu}(\lambda x)$ \cite{gervais1980} (leaving invariant fields $A_{\mu}$ homogeneous of degree ${-}1$). Fields $A_{\mu}$ of this homogeneity still exhibit a residual gauge freedom: $A_{\mu}{\to} A_{\mu}{+}\partial_{\mu}\Phi$, where $\Phi$ is a function homogeneous of degree $0$, since then $x^{\mu}\partial_{\mu}\Phi(x){=}0$ as follows from the Euler theorem on homogeneous functions. The asymptotic free fields satisfying the wave equation $\square A_{\mu}{=}0$ can be called zero frequency fields. One can see this by performing the Gervais--Zwanziger limit on a representation of $A_{\mu}$ as an integral of Fourier amplitudes over some invariant volume on the light cone of null wave-vectors. In this limit, the integral reduces to an integral of the limiting Fourier amplitudes effectively localized on the tip of the light cone. In terms of the field tensor $F_{\mu\nu}$, if the limit $\lim_{\lambda\to{+}\infty}\lambda^2F_{\mu\nu}(\lambda x)$ exists and is non-zero, the field has a zero-frequency part; otherwise, if the limit is vanishing, there is no zero-frequency part in $F_{\mu\nu}$. It should be noted that being a zero-frequency field is a Lorentz-invariant feature, while a non-zero frequency wave can be Doppler-shifted to an arbitrary frequency wave. These two regimes of electromagnetic radiation are qualitatively different. When an electric charge is scattered, the radiation field can be complicated. However, the zero-frequency asymptotic part of the field, which can be extracted from the total radiation field by taking the Ger\-vais--Zwanziger limit, is quite simple. 
It is equivalent to the field emitted by an electric charge that abruptly changes its four-velocity at the scattering point at the origin of the coordinate system, and which before and after being scattered remains in inertial motion, with four-velocities $u^{\mu}$ and $w^{\mu}$, respectively. Finding the radiation field emitted in such a process is a standard problem in classical electrodynamics. The result is that the field vanishes both in the past light cone and in the future light cone and is non-vanishing only outside the light cone. The radiation field can be described as a difference of two Coulomb fields, one at rest with respect to observer $u^{\mu}$ and the other at rest with respect to observer $w^{\mu}$. It is now clear that in any scattering process, the emitted zero-frequency field is universal and determined only by the initial and the final four-velocity, $u^{\mu}$ at $t{=}{-}\infty$ and $w^{\mu}$ at $t{=}{+}\infty$. It is now also clear that although an infinite amount of time is available to the zero-frequency field at spatial infinity, the averaging time is limited by the opening of the light cone, $|x^0|{<}|\vec{x}|$, because the zero-frequency radiation field is nonzero only outside the light-cone. This property of a zero-frequency radiation field is perfectly tailored to understand why the elementary charge is a quantum object. The Berestetskii--Lifshitz--Pitaevskii criterion, $\langle \vec{E}^2\rangle{\gg}{\hbar c}/(c\Delta t)^4$ \cite{BLP1982}, allows one to decide whether an electromagnetic field averaged over a time interval $\Delta t$ can be treated as classical. A static field would therefore always be classical, as concluded in \cite{BLP1982}. This conclusion is paradoxical in view of the quantization of electric charges (a remark on this subject in a more philosophical context was made in \cite{AStar2002b}, where the resolution of this paradox, presented for the first time in \cite{AStar1997}, is recalled). 
This conclusion is true, for example, when it comes to solving the Dirac equation for the hydrogen atom -- one considers quantum states of the electron in a classical Coulomb field of the proton, and one obtains good agreement of the predicted frequencies of emission lines with those observed. However, in the case of the zero-frequency field emitted in a scattering process, the criterion must be used carefully. The radiation field in a given reference frame is a Coulomb field $q/r^2$ times some kinematic factor which can be made to be of order $1$ by a suitable choice of the velocity change \cite{AStar1997}. For a Coulomb field of charge $q{=}ne$: $\langle \vec{E}^2\rangle{=}n^2e^2/r^4$ and, by substituting $c\Delta t{=}2r$ into this inequality (which is the available averaging time at spatial infinity, from the opening of the light cone argument discussed earlier), it follows that the field will be classical if $n{\gg}\frac{\sqrt{\hbar c}}{4e}{\approx}\frac{1}{4}\sqrt{137}{\approx}2.93$. This observation was made in \cite{AStar1997}. This means that the Coulomb field ceases to be classical in the neighborhood of spatial infinity \cite{AStar2002b}. It means also that electric charges of the order of the elementary charge $e$ (as determined upon application of the Gauss law) are not classical; in particular, the electron's charge is a genuine quantum object. If the value of the fine structure constant were substantially larger, one would not obtain such a sensible inequality. With this observation it becomes clear how it is possible that the Coulomb field of the proton in the quantum theory of the hydrogen atom is a purely classical object, while its amplitude, determined by the proton's electric charge, is quantized and involves the Planck constant at the same time. \subsection{Staruszkiewicz argument} It is an experimentally established fact that all electrically charged particles observed in nature are massive. 
It means that the electrically charged currents of these particles are confined to the future and past light cones, and quantum amplitudes fade out exponentially at large distances owing to the nonzero masses of the particles. From this, one can draw a physical conclusion that the classical electromagnetic field $F_{\mu\nu}$ at spatial infinity is totally free. Additionally, it must be homogeneous of degree ${-}2$ at spatial infinity if it is to carry an electric charge. Namely, $F_{\mu\nu}(\lambda x){=}\lambda^{-2}F_{\mu\nu}(x)$ for all positive $\lambda$ and for $(x^0)^2{-}\vec{x}.\vec{x}{\to}{-}\infty$. Such a field has two degrees of freedom described by scalar functions $\ef(x)$ and $\mf(x)$ satisfying the two equations \begin{equation}\label{eq:emfuncts} x^{\mu}F_{\mu\alpha}{=} \partial_{\alpha}[ {-}e x^{\mu}A_{\mu}(x)]{\equiv}\partial_{\alpha}\ef(x), \quad \text{and}\quad \frac{1}{2}\epsilon_{\alpha\beta}^{\phantom{\alpha\beta} \mu\nu}x^{\beta}F_{\mu\nu}{=}\partial_{\alpha}\mf(x). \end{equation} The first equation is proved in \cite{AStar1998a} by a simple calculation assuming that $A_{\mu}(\lambda x){=}\lambda^{{-}1}A_{\mu}(x)$ for each positive $\lambda$ and using the Euler theorem on homogeneous functions, here implying $x^{\nu}\partial_{\nu}A_{\mu}(x){=}{-}A_{\mu}(x)$. Besides the vector $x^{\mu}F_{\mu\nu}$, one can construct in Minkowski spacetime, out of the field tensor and the position vector, another independent vector linear in the field: $\epsilon_{\alpha\beta}^{\phantom{\alpha\beta} \mu\nu}x^{\beta}F_{\mu\nu}$ (more precisely, a pseudo-vector). Again, this vector can be shown to be a gradient. {This can be proved using the notation of the calculus of antisymmetric forms. Consider a $1$-form $\omega{\equiv}\frac{1}{2}\epsilon_{\alpha\beta}^{\phantom{\alpha\beta} \mu\nu}x^{\beta}F_{\mu\nu}\ud{x}^{\alpha}$. 
Then, the Maxwell equation $\partial_{\mu}F^{\mu\nu}{=}0$ and the homogeneity condition $x^{\alpha}\partial_{\alpha}F_{\mu\nu}{=}{-}2F_{\mu\nu}$ suffice to show that ${\star}(\ud{\omega}){=}0$, where ${\star}$ denotes the Hodge star operator. This means that $0{=}{\star}(0){=}$ ${\star}({\star}(\ud{\omega})){=}{-}\ud{\omega}$. Thus, $\omega$ is an exact $1$-form; that is, there is a scalar $0$-form $m$ such that $\omega{=}\ud{m}{\equiv} (\partial_{\mu}m )\ud{x}^{\mu}$.} Furthermore, the divergence of left sides of the vector equations \eqref{eq:emfuncts} vanishes for arbitrary free Maxwell fields; hence, scalars $\ef$ and $\mf$ must satisfy d'Alembert equations $\partial_{\mu}\partial^{\mu} \ef{=}0$ and $\partial_{\mu}\partial^{\mu} \mf{=}0$ and be homogeneous of degree $0$ (the latter property is seen directly from equations \eqref{eq:emfuncts}: on multiplying both sides of each equation by $x^{\alpha}$, the left hand sides are identically zero, while on the right hand sides one is left with the Euler scaling operator $x^{\alpha}\partial_{\alpha}$ acting on functions $\ef$ and $\mf$). Equations \eqref{eq:emfuncts} together summarize in the four-dimensional notation the structure of electromagnetic fields at spatial infinity \cite{AStar2002a}. The structure of these fields was described by Alexander and Bergmann \cite{alexander1984} who investigated electrodynamics at spatial infinity. Once some solutions are given, the scalars $\ef$ and $\mf$ determine the physical field $F_{\mu\nu}$ completely -- the vector equations \eqref{eq:emfuncts} can be solved for $F_{\mu\nu}(x)$ in a purely algebraic way. To give an example, one can consider the Coulomb field of a point charge $e$ in inertial motion with four-velocity $u^{\mu}$. For this field,\footnote{Here, $(xx)$, $(xu)$, and $(uu)$ represent scalar products with the signature $({+},{-},{-},{-})$.} $\ef(x){=}{e\,(ux)}/{\sqrt{(ux)^2{-}(xx)(uu)}}$ and $\mf(x){=}0$. 
The field, which is described by $A_{\mu}(x){=}e u_{\mu}\br{(ux)^2{-}(xx)(uu)}^{-1/2}$ in the Lorentz gauge $\partial^{\mu}A_{\mu}{=}0$, is homogeneous of degree ${-}1$, while the corresponding Faraday antisymmetric tensor, which is homogeneous of degree ${-}2$, reads $F_{\mu\nu}{=}e\br{ u_{\mu}x_{\nu}{-}u_{\nu}x_{\mu}}\br{(ux)^2{-}(xx)(uu)}^{{-}3/2}$. For general fields $F_{\mu\nu}(x)$ homogeneous of degree ${-}2$, the scalars $\ef(x)$ and $\mf(x)$ (homogeneous of degree zero) can be regarded as arbitrary functions defined over the unit de Sitter hyperboloid. They are effectively functions of $\psi$, $\theta$ and $\phi$ only (independent of $\chi$ and satisfying the d'Alembert equations in this curved $2{+}1$D de Sitter spacetime) and thus are straightforwardly extendable to functions defined in the whole outer part of the light cone. The Lagrangian density ${-}F_{\mu\nu}F^{\mu\nu}\ud^4{x}$ expressed in terms of such fields becomes a difference of two identical Lagrangian densities \cite{AStar1989a,AStar1998a} \footnote{ It is seen that $g^{ik}\partial_i\ef\partial_k\ef$ and $g^{ik}\partial_i\mf\partial_k\mf$ are both quadratic forms with signature $({+},{-},{-})$; thus, their difference is a quadratic form with signature $({+},{+},{+},{-},{-},{-})$, the same as the signature of an arbitrary Maxwell field, $F_{01}^2{+}F_{02}^2{+}F_{03}^2{-}F_{23}^2{-}F_{31}^2{-}F_{12}^2$ (in a given inertial frame, the latter form can be written as a difference $\vec{E}.\vec{E}{-}\vec{H}.\vec{H}$, with a contribution from the electric field $\vec{E}$ and from the magnetic field $\vec{H}$).} \begin{equation}\label{eq:quadr}\!\!\!\!{-}F_{\mu\nu}F^{\mu\nu}\ud^4{x}{=}2\frac{\ud{\chi}}{\chi}\frac{\sin(\theta)}{\sech^2(\psi)}\ud{\psi}\ud{\theta}\ud{\phi} \br{g^{ik}\partial_i\ef\partial_k\ef{-}g^{ik}\partial_i\mf\partial_k\mf}.\end{equation} Disregarding the factor $\ud{\chi}{/}\chi$, both Lagrangian densities are identical to that of a free massless scalar field on the $2{+}1$D de Sitter spacetime.
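The Coulomb example can also be verified directly. The following sketch (an illustration added here, not part of the original derivation) checks numerically, for a boosted four-velocity with $(uu){=}1$, that the gradient of $\ef(x){=}e\,(ux)/\sqrt{(ux)^2{-}(xx)(uu)}$ reproduces $x^{\mu}F_{\mu\alpha}$ for the Faraday tensor quoted above, using central finite differences:

```python
import math

def mdot(a, b):
    # Minkowski scalar product, signature (+,-,-,-)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def lower(v):
    # lower the index of a four-vector
    return (v[0], -v[1], -v[2], -v[3])

E = 1.0                      # charge (illustrative value)
U = (1.25, 0.75, 0.0, 0.0)   # boosted four-velocity, (uu) = 1

def ef(x):
    # electric function e(x) = e (ux) / sqrt((ux)^2 - (xx)(uu))
    ux, xx, uu = mdot(U, x), mdot(x, x), mdot(U, U)
    return E*ux/math.sqrt(ux**2 - xx*uu)

def faraday(x):
    # F_{mu nu} = e (u_mu x_nu - u_nu x_mu) [(ux)^2 - (xx)(uu)]^{-3/2}
    ux, xx, uu = mdot(U, x), mdot(x, x), mdot(U, U)
    D = ux**2 - xx*uu
    ud, xd = lower(U), lower(x)
    return [[E*(ud[m]*xd[n] - ud[n]*xd[m])*D**-1.5 for n in range(4)]
            for m in range(4)]

def identity_residual(x, h=1e-6):
    # max over alpha of | x^mu F_{mu alpha} - d(ef)/dx^alpha |
    F = faraday(x)
    res = 0.0
    for a in range(4):
        lhs = sum(x[m]*F[m][a] for m in range(4))
        xp, xm = list(x), list(x)
        xp[a] += h
        xm[a] -= h
        rhs = (ef(tuple(xp)) - ef(tuple(xm)))/(2*h)
        res = max(res, abs(lhs - rhs))
    return res

# residual at a point outside the light cone: numerically zero
print(identity_residual((0.3, 1.2, 0.8, -0.5)) < 1e-6)  # -> True
```

The test point is spacelike with respect to the charge's world line, so the square root is real; the residual is limited only by the finite-difference step.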
The scalars $\ef$ and $\mf$ appear to be completely independent fields, Lorentz-invariantly separated from each other. The action integral for this system, regarded as confined to the de Sitter unit hyperboloid $\mathcal{H}_1$, can be defined to be $$S[\ef,\mf]{=}C\int_{\mathcal{H}_1} \br{g^{ik}\partial_i\ef\partial_k\ef{-}g^{ik}\partial_i\mf\partial_k\mf}\sqrt{{-}g}\,\ud^3{\xi}.$$ Here, $g_{ik}$ is the metric tensor in arbitrary intrinsic coordinates $\xi^0,\xi^1,\xi^2$ on $\mathcal{H}_1$ (it is assumed that $g_{00}{>}0$); $C$ is a positive dimensional constant introducing the correct absolute physical scale of the action integral. The function $\ef$ is called the electric part, while the function $\mf$ is called the magnetic part of the field. To clarify the use of these names in simple terms, one may recall that in equation \eqref{eq:emfuncts} the electric function arises from Maxwell's tensor $F_{\mu\nu}$, while the magnetic function arises in the same way from the dual Maxwell tensor $\frac{1}{2}\epsilon_{\mu\nu}^{\phantom{\mu\nu}\alpha\beta}F_{\alpha\beta}$; in particular, the magnetic function vanishes for the Coulomb field considered earlier. It is evident that the fields $\ef$ and $\mf$ enter the Lagrangian density on quite the same footing. Both fields satisfy the free wave equation on $\mathcal{H}_1$. The only difference lies in the opposite sign in front of the quadratic forms $g^{ik}\partial_i\ef\partial_k\ef$ and $g^{ik}\partial_i\mf\partial_k\mf$, and this difference, although not harmless classically, is crucial when one wants to quantize the system. Before proceeding further, a remark should be made concerning this sign difference. The overall sign of an action integral is not a matter of convention and cannot be changed arbitrarily. It is known that the duality operation is a symmetry of classical electromagnetism with electric and magnetic charges.
The symmetry is very useful in the practice of performing calculations and making predictions in this theory. However, the sign of the Maxwell action integral gets reversed when magnetic fields are replaced with electric fields. As pointed out by Hawking and Ross \cite{hawking1995} on the occasion of investigating electrically and magnetically charged black holes, the duality symmetry of the classical equations \emph{does not imply that it is a symmetry of the quantum theory, as the action is not invariant under duality}.\footnote{Semi-classical calculations leading to equal rates at which electrically and magnetically charged black holes are created led the authors to conclude that duality is a symmetry of the quantum theory, but in a very non-obvious way \cite{hawking1995}. } If the sign of the action integral were not important, there would be no essential difference between the electric and magnetic functions of the asymptotic electromagnetic fields, and so $\ef$ and $\mf$ would be present in the theory on the same footing. The difference in sign of the electric and magnetic parts mentioned above is essential and has physical consequences -- the wrong sign implies the existence of negative norm states, which do not belong to the framework of quantum mechanics. Namely, upon quantization only one of the quadratic forms in the above action integral would lead to a positive definite inner product. The one yielding a negative definite inner product, introducing negative norm states, should be considered non-physical and therefore rejected. A Lagrangian density with the correct sign is the one that introduces a field with a positive kinetic term (that is, the term involving partial derivatives of the field with respect to the time-like variable -- for example, $\psi$ in the discussed spherical coordinates on $\mathcal{H}_1$).
It is seen that the kinetic term involving $\ef$ in the above action integral is positive, while the kinetic term involving $\mf$ is negative. Therefore, Staruszkiewicz concludes that at spatial infinity, the part of the electromagnetic field with the incorrect sign should be absent, which is achieved by putting $\mf{=}0$. This statement can be rephrased by saying that magnetic monopoles should not exist. As shown in \cite{AStar1998b}, this conclusion is not changed when the original Maxwell Lagrangian is extended by adding to ${-}\frac{1}{16\pi}F_{\mu\nu}F^{\mu\nu}$ the CP-symmetry violating $\Theta$ term $\Theta \epsilon^{\alpha\beta\mu\nu}F_{\alpha\beta}F_{\mu\nu}$ involving the second independent invariant of the electromagnetic field. By adding the $\Theta$ term, one cannot change the signature of the quadratic form in \eqref{eq:quadr}. Indeed, the extended Lagrangian density, when expressed in terms of new electric and magnetic functions $\ef'$ and $\mf'$, attains the same quadratic form as in equation \eqref{eq:quadr} in which $\ef$ and $\mf$ have been replaced with $\ef'$ and $\mf'$. The new and old sets of functions are related to each other as follows: $\ef'{=}\ef \cosh{\upsilon}{-}\mf \sinh{\upsilon}$ and $\mf'{=}\ef \sinh{\upsilon}{+}\mf \cosh{\upsilon}$, where $\sinh{(2\upsilon)}{=}8\pi\Theta$; one can see from this that the part with the negative kinetic term is increased with respect to the case without the $\Theta$ term. Therefore, the argument against magnetic monopoles still holds for the extended Lagrangian. Now, one should assume $\mf'{=}0$ to ensure that upon quantization, the negative norm states do not arise.
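The algebra behind this hyperbolic mixing can be checked in a few lines. The sketch below (an illustrative numerical check added here, not part of the original argument) verifies the pointwise identity $\ef'^2{-}\mf'^2{=}\ef^2{-}\mf^2{-}2\,\ef\,\mf\sinh(2\upsilon)$, which, applied to the gradient terms, shows that the rotation preserves the difference-of-squares structure of \eqref{eq:quadr} while the cross term absorbs the $\Theta$ contribution:

```python
import math
import random

def primed(ef, mf, v):
    """Hyperbolic ("duality") mixing of the electric and magnetic parts."""
    return (ef*math.cosh(v) - mf*math.sinh(v),
            ef*math.sinh(v) + mf*math.cosh(v))

def residual(ef, mf, v):
    """| (ef'^2 - mf'^2) - (ef^2 - mf^2 - 2 ef mf sinh 2v) |"""
    efp, mfp = primed(ef, mf, v)
    return abs((efp**2 - mfp**2)
               - (ef**2 - mf**2 - 2*ef*mf*math.sinh(2*v)))

# the identity holds for arbitrary values of ef, mf and the rotation angle
random.seed(1)
worst = max(residual(random.uniform(-2, 2), random.uniform(-2, 2),
                     random.uniform(-1, 1)) for _ in range(1000))
print(worst < 1e-12)  # -> True
```

Since $\cosh^2\upsilon{-}\sinh^2\upsilon{=}1$, the difference of squares survives with a cross term $-2\,\ef\,\mf\sinh(2\upsilon)$, which is exactly the piece identified with $16\pi\Theta$ times the product of the two parts.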
Witten showed that field-theoretical realizations of magnetic monopoles in CP-violating theories with the theta mechanism would not be electrically neutral and would carry even irrational fractions of the electric charge, $Q{=}e(n{-}\frac{\vartheta}{2\pi})$, depending on the vacuum angle parameter $\vartheta$ introduced by the $\Theta$ term \cite{witten1979}. On the other hand, in the framework of Staruszkiewicz's theory \cite{AStar1989a}, it can be shown that electric charge must be quantized in units of the electronic charge \cite{AStar1998a}. Thus the non-existence of magnetic monopoles is compatible with electric charge quantization \cite{AStar1998b}. \section{Conclusions} Ultra-high energy (UHE) photons with energies exceeding $10^{18}\eV$ could be detected by Earth-based observatories. UHE photons are produced in various processes in which electrically charged particles take part. However, more exotic processes are also possible. There are compelling theoretical premises in favor of the existence of isolated magnetic monopoles, especially when it comes to the realization of the monopoles in the framework of non-Abelian gauge fields. UHE photons could be produced in encounters of massive magnetically charged monopole-antimonopole pairs or in processes associated with monopoles accelerated to high energies, typically $10^{21}\eV$ or beyond. Observing UHE photons can place constraints on the properties of magnetic monopoles. The predicted observational signatures of these particles are being sought in many dedicated experiments currently in operation. To date, no magnetic monopoles have been found despite many attempts to detect them. Neither intermediate mass monopoles below the GUT scale that could be produced in accelerators nor monopoles that should exist in the Universe and produce observable signatures have been observed.
Monopole masses predicted by the field-theoretical realizations of magnetic monopoles are apparently too high, which might explain the negative experimental evidence for magnetic monopoles so far. However, there is another possibility. Arguments have been formulated for why magnetic monopoles allowed by Dirac's theory might not be realized in nature \cite{herdegen1993,AStar1998a}. In particular, Staruszkiewicz's argument against magnetic monopoles is an important part of his quantum theory of the electric charge \cite{AStar1989a}. This argument invokes the positivity of the norm in Hilbert space, which is violated by quantum states of the magnetic part of zero-frequency fields. If these arguments cannot be refuted, one would have to conclude that, while isolated magnetically charged solitonic configurations can be considered as solutions in the standard model of particle physics, only magnetically neutral configurations of magnetic monopoles could be realized in nature. \bibliographystyle{ieeetr} \bibliography{2022LBJJ_UHEphVerMgnMnpl_v2_ArXiv}
Title: Is ozone a reliable proxy for molecular oxygen? I. The O2-O3 relationship for Earth-like atmospheres
Abstract: Molecular oxygen (O2) paired with a reducing gas is regarded as a promising biosignature pair for the atmospheric characterization of terrestrial exoplanets. In circumstances when O2 may not be detectable in a planetary atmosphere (e.g., at mid-IR wavelengths) it has been suggested that ozone (O3), the photochemical product of O2, could be used as a proxy to infer the presence of O2. However, O3 production has a nonlinear dependence on O2 and is strongly influenced by the UV spectrum of the host star. To evaluate the reliability of O3 as a proxy for O2, we used Atmos, a 1D coupled climate/photochemistry code, to study the O2-O3 relationship for "Earth-like" habitable zone planets around a variety of stellar hosts (G0V-M5V) and O2 abundances. Overall, we found that the O2-O3 relationship differed significantly with stellar hosts and resulted in different trends for hotter stars (G0V-K2V) vs cooler stars (K5V-M5V). Planets orbiting hotter host stars counter-intuitively experience an increase in O3 when O2 levels are initially decreased from 100% Earth's present atmospheric level (PAL), with a maximum O3 abundance occurring at 25-55% PAL O2. As O2 abundance initially decreases, larger amounts of UV photons capable of O2 photolysis reach the lower (denser) regions of the atmosphere where O3 production is more efficient, resulting in these increased O3 levels. This effect does not occur for cooler host stars (K5V-M5V), as the weaker incident UV flux does not allow O3 formation to occur at dense enough regions of the atmosphere where the faster O3 production can outweigh a smaller source of O2 from which to create O3. Overall it will be extremely difficult (or impossible) to infer precise O2 levels from an O3 measurement, however, with information about the UV spectrum of the host star and context clues, O3 will provide valuable information about potential surface habitability of an exoplanet.
https://export.arxiv.org/pdf/2208.09415
\title{Is ozone a reliable proxy for molecular oxygen?} \subtitle{I. The \om-\oz\ relationship for Earth-like atmospheres} \author{Thea Kozakis \inst{1} \and Jo\~ao M. Mendon\c{c}a \inst{1} \and Lars A. Buchhave \inst{1}} \institute{National Space Institute, Technical University of Denmark, Elektrovej, DK-2800 Kgs. Lyngby, Denmark} \date{} \abstract {Molecular oxygen (\om) paired with a reducing gas is regarded as a promising biosignature pair for the atmospheric characterization of terrestrial exoplanets. In circumstances when \om\ may not be detectable in a planetary atmosphere (e.g., at mid-IR wavelengths) it has been suggested that ozone (\oz), the photochemical product of \om, could be used as a proxy to infer the presence of \om. However, \oz\ production has a nonlinear dependence on \om\ and is strongly influenced by the UV spectrum of the host star. To evaluate the reliability of \oz\ as a proxy for \om, we used \texttt{Atmos}, a 1D coupled climate and photochemistry code, to study the \om-\oz\ relationship for ``Earth-like'' habitable zone planets around a variety of stellar hosts (G0V-M5V) and \om\ abundances. Overall, we found that the \om-\oz\ relationship differed significantly with stellar hosts and resulted in different trends for hotter stars (G0V-K2V) versus cooler stars (K5V-M5V). Planets orbiting hotter host stars counter-intuitively experience an increase in \oz\ when \om\ levels are initially decreased from 100\% Earth's present atmospheric level (PAL), with a maximum \oz\ abundance occurring at 25-55\% PAL \om. As \om\ abundance initially decreases, larger amounts of UV photons capable of \om\ photolysis reach the lower (denser) regions of the atmosphere where \oz\ production is more efficient, thus resulting in these increased \oz\ levels. 
This effect does not occur for cooler host stars (K5V-M5V), since the weaker incident UV flux does not allow \oz\ formation to occur at dense enough regions of the atmosphere where the faster \oz\ production can outweigh a smaller source of \om\ from which to create \oz. Additionally, planets experiencing higher amounts of incident UV possessed larger stratospheric temperature inversions, leading to shallower \oz\ features in planetary emission spectra. Overall, it will be extremely difficult (or impossible) to infer precise \om\ levels from an \oz\ measurement; however, with information about the UV spectrum of the host star and context clues, \oz\ will provide valuable information about the potential surface habitability of an exoplanet. } \keywords{astrobiology -- Planets and satellites: terrestrial planets -- Planets and satellites: atmospheres} \titlerunning{\oz\ as a Proxy for Molecular Oxygen I} \authorrunning{Kozakis, Mendon\c{c}a, \& Buchhave} \section{Introduction} In the search for life in the Universe, molecular oxygen (\om) is commonly recognized as a promising atmospheric biosignature gas. However, while \om\ is largely created by biological sources on Earth, it can also be produced abiotically in a variety of settings, and thus alone would not constitute a guarantee of life (e.g., \citealt{hu12,word14,doma14,tian14,luge15,gao15,harm15}). Instead of being a standalone biosignature, \om\ as a biosignature will be most powerful when detected simultaneously with a reducing gas as a ``disequilibrium biosignature pair'' (e.g., \citealt{love65,lede65,lipp67}), and when evidence of abiotic \om\ production scenarios can be ruled out (see \citealt{mead17,mead18} for a review). In scenarios where \om\ is not directly detectable, it has been suggested that its photochemical product ozone (\oz) could be used as a proxy for \om\ (e.g., \citealt{lege93,desm02,segu03,lege11,mead18}).
Using \oz\ as a proxy for \om\ would be extremely useful in two particular scenarios: 1) at wavelengths where \om\ features are not present (i.e., mid-infrared wavelengths), and 2) when \om\ is present in small amounts (as it was for a significant fraction of Earth's geological history). The mid-infrared wavelength region (MIR; 3-20 $\mu$m) provides an excellent opportunity for the search for life, as it contains features for multiple biosignature gases, as well as for gaseous species that could provide evidence for or against biological \om\ production \citep{desm02,schw18,quan21}. Furthermore, thermal emission observations are less impacted by clouds (e.g., \citealt{kitz11}), and could also allow measurements of a planet's surface temperature \citep{desm02}. The collisionally-induced absorption \om\ feature at 6.4 $\mu$m is the only MIR feature that allows for the direct detection of \om, although it would be extremely difficult to use for abundances of \om\ consistent with biological production \citep{fauc20}. It will, however, be useful for identifying high-\om\ desiccated atmospheres, a possible mechanism for abiotic \om\ production \citep{luge15,tian15}. Inferring the presence of biologically produced \om\ will therefore be restricted to indirect detections via the 9.7 $\mu$m \oz\ feature in the MIR. In addition, although \om\ has existed in appreciable amounts on Earth for a significant part of its history, it has only existed in large amounts for a relatively short period of time, posing a fundamental drawback to \om\ as a biosignature \citep{mead18}. Molecular oxygen was first produced biologically $\sim$2.7 Ga (billion years ago) by oxygenic photosynthesis via cyanobacteria, although it did not build up to appreciable amounts in Earth's atmosphere until the Great Oxidation Event (GOE) $\sim$2.45 Ga (see e.g., \citealt{catl17} for a review).
Although the Phanerozoic era (541 Ma - present day) saw the widespread colonization of land plants and \om\ levels comparable to our present atmospheric level (PAL), during the Proterozoic era (2.5 Ga - 541 Ma) \om\ levels are expected to have been significantly lower \citep{catl17,lent17,dahl20}. As a result, it is likely that \om\ would have been detectable on Earth only for the last $\sim$0.5 Gyr. However, since \oz\ is a logarithmic tracer of \om, it is possible that \oz\ could be capable of revealing small, undetectable amounts of \om\ (e.g., \citealt{kast85,lege93,desm02,segu03,lege11}). Additionally, a detection of \oz\ could provide information about UV shielding, and whether surface life is adequately protected from high-energy UV capable of DNA damage. Some studies have suggested or already adopted O$_3$ as a substitute for O$_2$ (e.g.,\ \citealt{segu03}, \linktocite{KalteneggerMacDonald2020}{Kaltenegger \& MacDonald} \citeyear{KalteneggerMacDonald2020},\ \citealt{lin21}), and others have noted a potentially powerful ``triple biosignature'' in planetary emission with CO$_2$, H$_2$O, and O$_3$, where O$_2$ spectral features are absent \citep{sels02}. O$_3$ is also expected to build up in the stratospheres of planets, allowing characterization via transmission spectroscopy (e.g., \citealt{betr13,betr14,misr14,mead18}). However, it is uncertain how reliably a measurement of \oz\ could allow us to infer the amount of \om. Ozone is known to have a nonlinear relationship with \om, as well as a strong dependence on the UV spectrum of the host star \citep{ratn72,kast80,kast85,segu03,rugh13}. Although several studies have modeled the \om-\oz\ relationship for varying \om\ abundances and different stellar hosts (e.g., \citealt{ratn72,levi79,kast80,kast85,segu03,greg21}), there has been no in-depth study evaluating the ability of \oz\ to predict \om\ as a biosignature.
In this series of papers, we will explore the \om-\oz\ relationship in depth for a variety of stellar hosts and atmospheric conditions. In this first paper, we focus on the \om-\oz\ relationship for ``Earth-like'' planets around different stellar hosts. Here we take Earth-like to mean a planet that has the same composition and size as Earth, receives the same total incident flux from its host star as modern Earth receives from the Sun, and has a similar atmospheric composition. This study currently contains the largest number of models run with a fully coupled climate and photochemistry code dedicated to understanding the \om-\oz\ relationship, and explores the widest range of stellar hosts as well as the largest number of different \om\ atmospheric abundances. \section{Chemistry of \oz\ production and destruction \label{sec:chem}} In this section we give a brief overview of the most important reactions for the production and destruction of \oz. The wavelength-dependent absorption cross sections for \om\ and \oz\ are shown in Fig.~\ref{fig:XS} as a reference for the reader, as they determine photolysis rates in different wavelength regions. The incident stellar UV flux, as well as the amount of nitrogen- and hydrogen-bearing species, primarily controls the concentration of \oz\ in the atmosphere. \subsection{The Chapman mechanism} Ozone is primarily created in the stratosphere by a set of reactions called the Chapman mechanism \citep{chap30}. These reactions begin with the photolysis of \om, \begin{equation} \m{O}_2 + \m{h}\nu \rightarrow \m{O} + \m{O (}175 < \lambda < 242\ \m{nm}), \label{r:PO2_O} \end{equation} which creates ground-state O atoms (also written as O($^3$P)), which are highly reactive due to their two unpaired electrons. These O atoms then combine with \om\ molecules to form \oz, \begin{equation} \m{O + O}_2 + M \rightarrow \m{O}_3 + M, \label{r:O2M} \end{equation} where $M$ is a background molecule that carries away excess energy.
Reaction~\ref{r:O2M} is a 3-body reaction, meaning it is more efficient at lower temperatures and higher atmospheric densities. It is faster in denser atmospheric regions with a larger availability of O atoms, causing the bulk of \oz\ on Earth to exist in the stratosphere rather than at higher altitudes. Photolysis of \om\ can also occur higher in the atmosphere with higher energy photons, \begin{equation} \m{O}_2 + \m{h}\nu \rightarrow \m{O }+ \m{O(}^1\m{D) (}\lambda < 175\ \m{nm}), \label{r:PO2_O1D} \end{equation} where the \od\ radical is created along with a ground state O atom. Free radicals are by nature extremely reactive, as they have at least one unpaired valence electron; thus, they tend to have extremely brief lifetimes. The \od\ radical can return to the ground state by being ``quenched'' via a collision with a background molecule, \begin{equation} \m{O(}^1\m{D)} + M \rightarrow \m{O} + M , \label{r:quench} \end{equation} or it can react with another molecule. Reactions with other molecules will be further explored in Sects.~\ref{sec:cc} and \ref{sec:chemresults}. Although \om\ absorption cross sections are significantly larger at wavelengths that produce \od\ radicals ($<$ 175 nm; see Fig.~\ref{fig:XS}), these photons are absorbed high in the atmosphere and therefore do not contribute to the creation of stratospheric \oz\ on modern Earth. Lyman-$\alpha$ photons (121.6 nm) are generally absorbed in the mesosphere, and photons in the Schumann-Runge continuum (130-175 nm) are absorbed in the thermosphere. Also note that some wavelengths shorter than Lyman-$\alpha$ can ionize \om, although that wavelength region is not included in our photochemistry model due to the low number of photons in that region emitted by GKM stars (see Sect.~\ref{sec:atmos}).
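As a quick sanity check on these wavelength thresholds (an illustrative calculation, not taken from the photochemistry model), the photon energy corresponding to a given wavelength follows from $E{=}hc/\lambda$:

```python
HC_EV_NM = 1239.84  # h*c expressed in eV nm

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength in nm (E = hc/lambda)."""
    return HC_EV_NM / wavelength_nm

# The 242 nm threshold of O2 photolysis corresponds to ~5.1 eV, close to
# the O2 bond dissociation energy, while the 175 nm threshold producing
# the excited O(1D) radical requires ~7.1 eV photons.
print(round(photon_energy_ev(242.0), 2))  # -> 5.12
print(round(photon_energy_ev(175.0), 2))  # -> 7.08
```
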
Although \om\ photolysis from these higher energy photons ($<$ 175 nm) occurs above the stratosphere on modern Earth, planets with different amounts of atmospheric \om\ would experience absorption of these photons at varying atmospheric altitudes. Less \om\ would allow high energy photons to travel deeper into the atmosphere before absorption via \om\ photolysis. Although this will not cause the bulk of \oz\ formation, it will impact the upper atmospheric chemistry by creating more \od\ at lower altitudes. The effects of this will be discussed at length in Sect.~\ref{sec:chemresults}. Once \oz\ is created by Reaction~\ref{r:O2M}, it is often quickly photolyzed. Photolysis of \oz\ by a photon in the Hartley band (200-300 nm) will create an \od\ radical, while photons from the lower energy Huggins bands (310-350 nm), Chappuis bands (410-750 nm), and longer wavelengths will create a ground state O atom, \begin{equation} \m{O}_3 + \m{h}\nu \rightarrow \m{O}_2 + \m{O(}^1\m{D)} (\lambda < \m{310\ nm}), \label{r:PO3_O1D} \end{equation} \vspace{-0.7cm} \begin{equation} \m{O}_3 + \m{h}\nu \rightarrow \m{O}_2 + \m{O (310} < \lambda < \m{1140\ nm}). \label{r:PO3_O} \end{equation} Photons with wavelengths shorter than 200 nm are often absorbed high in the atmosphere by \om\ and other molecules. As with \om\ photolysis, \od\ radicals created by Reaction~\ref{r:PO3_O1D} will either be quenched by a background molecule and returned to the ground state (Reaction~\ref{r:quench}), or they will react with other molecules. Photolysis of \oz\ is not in itself a loss of \oz, as the resulting O atom and \om\ molecule often quickly recombine into \oz\ via Reaction~\ref{r:O2M}. Due to the rapid cycling between \oz\ and O, it is instead the conversion of \oz\ + O (called ``odd oxygen'') into \om\ that actually results in a loss of \oz, as occurs in the final reaction of the Chapman mechanism, \begin{equation} \m{O}_3 + \m{O} \rightarrow 2\m{O}_2.
\label{r:O3_O} \end{equation} Odd oxygen (\oz\ + O) being converted into \om\ is considered a loss of \oz\ because the photolysis of \om\ (Reactions~\ref{r:PO2_O} and \ref{r:PO2_O1D}) is the slowest of the Chapman mechanism reactions, and the limiting factor in \oz\ production. Therefore, the loss of odd oxygen on long timescales causes a true decrease in \oz. \subsection{Catalytic cycles of HO$_x$ and NO$_x$ \label{sec:cc}} The Chapman mechanism on its own overestimates the amount of atmospheric \oz\ because it does not take into account catalytic cycles that destroy \oz. These destruction cycles follow the format, \begin{equation*} \begin{aligned} \m{X + O}_3 \rightarrow \m{XO + O}_2 \\ \m{XO + O} \rightarrow \m{X + O}_2 \\ \hline \m{Net:} \hspace{0.5cm} \m{O}_3 + \m{O} \rightarrow 2\m{O}_2 \end{aligned} \end{equation*} where X is a free radical. During this process X and XO will cycle between each other while converting odd oxygen (\oz\ + O) into \om, similarly to the last step of the Chapman mechanism (Reaction~\ref{r:O3_O}). As stated above, this results in the overall loss of \oz\ because \om\ photolysis is the limiting reaction of \oz\ formation. X and XO can cycle between each other and continuously destroy \oz\ until reactions occur that convert either X or XO into non-reactive ``reservoir'' species. The primary catalytic cycles of \oz\ destruction in modern Earth's atmosphere are the HO$_x$ (hydrogen oxide) and NO$_x$ (nitrogen oxide) catalytic cycles. We note that on modern Earth there are also \oz-destroying catalytic cycles that are powered by molecular compounds primarily created anthropogenically (e.g.,\ chlorine and bromine cycles; \citealt{crut01}), but they will not be included in this study. The HO$_x$ catalytic cycle is powered by the OH (hydroxyl) and HO$_2$ (hydroperoxyl) radicals.
When an \od\ radical is created either by photolysis of \om\ (Reaction~\ref{r:PO2_O1D}) or \oz\ (Reaction~\ref{r:PO3_O1D}) it can react with H$_2$O to form OH, \begin{equation} \m{H}_2\m{O} + \m{O(}^1\m{D)} \rightarrow \m{OH} + \m{OH}. \label{r:H2O_OH} \end{equation} The OH radical is a major sink for multiple atmospheric gases (e.g., CH$_4$, CO) and is often called the `detergent of the atmosphere' for this reason. It destroys \oz\ during the HO$_x$ catalytic cycle as follows, \begin{equation} \m{OH} + \m{O}_3 \rightarrow \m{HO}_2 + \m{O}_2, \label{r:HOx_OH} \end{equation} \vspace{-0.6cm} \begin{equation} \m{HO}_2 + \m{O} \rightarrow \m{OH} + \m{O}_2. \label{r:HOx_HO2} \end{equation} In addition to this primary destruction cycle, other HO$_x$ cycles can contribute significantly to \oz\ destruction via, \begin{equation} \m{OH} + \m{O}_3 \rightarrow \m{HO}_2 + \m{O}_2, \tag{\ref{r:HOx_OH}} \end{equation} \vspace{-0.6cm} \begin{equation} \m{HO}_2 + \m{O}_3 \rightarrow \m{OH} + 2\m{O}_2, \end{equation} resulting in two \oz\ molecules converted to three \om\ molecules, or, \begin{equation} \m{OH} + \m{O} \rightarrow \m{H} + \m{O}_2, \end{equation} \vspace{-0.6cm} \begin{equation} \m{H} + \m{O}_2 + M \rightarrow \m{HO}_2 + M, \label{r:HM} \end{equation} \vspace{-0.6cm} \begin{equation} \m{HO}_2 + \m{O} \rightarrow \m{OH} + \m{O}_2, \tag{\ref{r:HOx_HO2}} \end{equation} with a net result of two O atoms converted into an \om\ molecule. Because OH production via \od\ is a byproduct of the Chapman mechanism, HO$_x$ catalytic cycle efficiency can be increased with higher rates of \oz\ formation. This process can be slowed through reactions that convert OH/HO$_2$ into a reservoir species such as H$_2$O, HNO$_2$, or H$_2$O$_2$, which are significantly less reactive. The NO$_x$ catalytic cycle destroys \oz\ with the NO (nitric oxide) and NO$_2$ (nitrogen dioxide) radicals. 
The primary source of these radicals in the stratosphere is N$_2$O (nitrous oxide), which is biologically produced by nitrification and denitrification processes within soil. N$_2$O can additionally be produced anthropogenically, primarily through agriculture. It is converted into NO by interactions with the \od\ radical, \begin{equation} \m{N}_2\m{O} + \m{O(}^1\m{D)} \rightarrow \m{NO} + \m{NO}. \label{r:N2O_NO} \end{equation} A secondary source of NO is production via lightning in the upper troposphere, which can then be transported into the lower stratosphere. The NO$_x$ catalytic cycle destroys O$_3$ as follows, \begin{equation} \m{NO} + \m{O}_3 \rightarrow \m{NO}_2 + \m{O}_2, \label{r:NOx_NO} \end{equation} \vspace{-0.6cm} \begin{equation} \m{NO}_2 + \m{O} \rightarrow \m{NO} + \m{O}_2. \end{equation} NO$_x$ can destroy \oz\ with the following cycle as well, \begin{equation} \m{NO} + \m{O}_3 \rightarrow \m{NO}_2 + \m{O}_2, \tag{\ref{r:NOx_NO}} \end{equation} \vspace{-0.6cm} \begin{equation} \m{NO}_2 + \m{O}_3 \rightarrow \m{NO}_3 + \m{O}_2, \end{equation} \vspace{-0.6cm} \begin{equation} \m{NO}_3 + \m{h}\nu \rightarrow \m{NO} + \m{O}_2, \end{equation} with a net conversion of two \oz\ molecules into three \om\ molecules. NO$_x$ reactions are highly temperature dependent and are faster at higher temperatures. The main reservoir species associated with NO$_x$ are HNO$_3$ and N$_2$O$_5$, which have slow photolysis rates. We note that although NO$_x$ destroys \oz\ in the stratosphere through this catalytic cycle, lower in the atmosphere it can help create \oz\ through the ``smog mechanism'' (see Sect.~\ref{sec:o2o3relationship}). This low-altitude \oz\ is a pollutant that can cause biological damage. In this study we will focus on the majority of \oz, which is created in the stratosphere by the Chapman mechanism.
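The interplay between the Chapman reactions can be made concrete with the classical odd-oxygen steady-state estimate, $[\m{O}_3]/[\m{O}_2]\approx\sqrt{j_{\m{O}_2}k_2[M]/(j_{\m{O}_3}k_4)}$, obtained by balancing odd-oxygen production against Reaction~\ref{r:O3_O} and assuming rapid cycling between O and \oz. The sketch below (an illustration with textbook-style rate coefficients, not the values or reaction network used in \texttt{Atmos}, and neglecting the HO$_x$ and NO$_x$ cycles) shows why \oz\ formation is favored in denser, colder regions of the atmosphere:

```python
import math

def chapman_o3_ratio(j_o2, j_o3, n_M, T):
    """Steady-state [O3]/[O2] from the Chapman cycle alone.

    j_o2, j_o3 : photolysis rates of O2 and O3 [1/s]
    n_M        : background number density [molecules/cm^3]
    T          : temperature [K]
    Rate coefficients are illustrative JPL-style parameterizations.
    """
    k2 = 6.0e-34 * (T/300.0)**-2.4       # O + O2 + M -> O3 + M [cm^6/s]
    k4 = 8.0e-12 * math.exp(-2060.0/T)   # O + O3 -> 2 O2       [cm^3/s]
    # odd-oxygen balance 2 j_o2 [O2] = 2 k4 [O][O3], combined with the
    # fast O <-> O3 cycling k2 [O][O2][M] ~ j_o3 [O3], gives:
    return math.sqrt(j_o2*k2*n_M/(j_o3*k4))

# denser, colder lower stratosphere vs thinner, warmer upper layers
dense = chapman_o3_ratio(j_o2=1e-11, j_o3=1e-3, n_M=1e18, T=220.0)
thin  = chapman_o3_ratio(j_o2=1e-11, j_o3=1e-3, n_M=1e17, T=250.0)
print(dense > thin)  # -> True
```

With these illustrative numbers the ratio is largest where $[M]$ is large and $T$ is low, mirroring the behavior described above; it also overestimates the true \oz\ burden, as expected when the catalytic destruction cycles are omitted.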
Specifically, we examine the efficiency of the Chapman mechanism, along with the ability of the HO$_x$ and NO$_x$ catalytic cycles to destroy \oz, for varying \om\ levels around different host stars. \section{Methods \label{sec:methods}} \subsection{Atmospheric models \label{sec:atmos}} We modeled planetary atmospheres with \texttt{Atmos}\footnote{https://github.com/VirtualPlanetaryLaboratory/atmos}, a 1D coupled climate and photochemistry code, to explore \oz\ formation for varying levels of \om\ on Earth-like planets around a variety of host stars. Numerous studies have used the climate and photochemistry modules individually, as well as both coupled together (e.g., \citealt{arne17,mead18a,linc18L,madd20,greg21,teal22}). We give a brief overview of \texttt{Atmos} and refer readers to \cite{arne16} and \cite{mead18a} for extensive details. The photochemistry model originates from \cite{kast79} and was expanded upon and updated by \cite{zahn06}. It has been used extensively by many studies (e.g.,\ \citealt{kast80,segu03,segu05,segu10,doma14,greg21}). The atmosphere is broken up into 200 plane-parallel layers from 0 to 100 km. The abundance of each gaseous species is calculated simultaneously with the flux and continuity equations using a reverse-Euler method for individual atmospheric layers. Vertical transport between different layers includes molecular and eddy diffusion. Radiative transfer is computed with a $\delta$-2-stream method as described in \cite{toon89}. For modern Earth \texttt{Atmos} uses 50 gaseous species, with nine of them being short-lived and thus not included in transport calculations. The photochemistry model is considered converged when its adaptive time step length reaches 10$^{17}$ seconds within 100 time steps. The climate model was originally developed by \cite{kast86}, but has been significantly updated as described in \cite{kopp13} and \cite{arne16}.
Multiple studies have used this code to study habitable zones and atmospheres of Earth-like planets around a variety of stellar hosts (e.g.,\ \citealt {kopp13,segu03,segu05,segu10}). The atmosphere is broken up into 100 plane-parallel layers from the surface to an atmospheric pressure of 1 mbar. A correlated-$k$ method computes the absorption of \oz, H$_2$O, CH$_4$, CO$_2$, and C$_2$H$_6$ throughout the atmosphere. Total absorption of incident stellar flux is calculated for each atmospheric layer with a $\delta$-2-stream scattering algorithm \citep{toon89}, and outgoing IR radiation is calculated with correlated-$k$ coefficients for each species individually. Updated H$_2$O cross sections from \cite{ranj20} have been incorporated into the code. Convergence is reached when both the change in temperature and the flux out of the top of the atmosphere are sufficiently small ($<$10$^{-5}$). We run the climate and photochemistry models coupled, with inputs including the host stellar spectrum (121.6 - 45450 nm), initial mixing ratios of atmospheric species, upper and lower boundary conditions for individual species, and initial temperature/pressure profiles. Using these initial conditions, the photochemistry code runs first and then transfers the computed H$_2$O, CH$_4$, CO$_2$, and C$_2$H$_6$ mixing ratio profiles to the climate code. The climate code then updates the temperature and H$_2$O vapor profiles to feed back into the photochemistry. These processes iterate, with profiles from the photochemistry allowing for more accurate climate calculations and vice-versa, until a converged solution is reached. The climate code has not been successfully run to convergence for the same atmospheric height as the photochemistry code \citep{arne16}, so temperature and H$_2$O profiles of the upper, thin part of the atmosphere (typically above 60-70 km) are held constant at the highest computed value from the climate code.
Sensitivity tests from \cite{arne16} suggest that the impact on the radiative transfer and climate of these models is not significant. This study also implements the ``short-stepping'' method of convergence, as described in \cite{teal22}. When iterating back and forth between the photochemistry and climate code, occasionally the code will oscillate between two different solutions. For example, if the photochemistry code computes a large quantity of \oz, the climate code will respond with a large amount of atmospheric heating. However, due to the temperature sensitivity of \oz\ production (Reaction~\ref{r:O2M}), this hotter atmosphere will cause lower amounts of \oz\ on the subsequent photochemistry iteration. Using the ``short-stepping'' method we do not allow the climate code to fully adjust to the updated atmospheric profiles from the photochemistry on a single iteration, and instead reach convergence slowly by iterating back and forth between the climate and photochemistry codes until a stable solution is reached. \begin{table}[t!] \begin{center} \caption{Stellar hosts} \label{tab:stars} \begin{tabular}{crrr} \hline \hline Host Star & Model & FUV/NUV & UV data \\ & T$_{\m{\scriptsize eff}}$ (K) & & source \\ \hline G0V & 6000 & 0.0028 & a \\ Sun & 5800 & 0.0010 & a \\ K2V & 5000 & 0.0010 & a \\ K5V & 4500 & 0.0012 & a \\ M5V & 3000 & 0.0084 & b \\ \hline \hline \end{tabular} \end{center} \small{ a - \cite{rugh13}\\ b - \cite{fran16} } \end{table} We modeled planetary atmospheres orbiting a variety of stellar hosts (see Sect.~\ref{sec:stellarspectra}) at the Earth-equivalent distance with varying levels of \om. Here we take Earth-equivalent distance to mean that the planet receives the same total incident flux from its parent star as modern Earth receives from the Sun. We set \om\ as a constant mixing ratio for all cases, with values varying from 0.01-150\% PAL \om\ (mixing ratios of 2.1$\times10^{-5}$ - 0.315).
Higher \om\ levels are not explored because such large \om\ levels would be unstable in the presence of biological compounds \citep{kump08}. Lower \om\ levels are not modeled because \om\ abundances from $\sim10^{-3}$\% to $\sim$1\% PAL are not expected to be stable in an Earth-like atmosphere, as calculated by \cite{greg21} (details on these limits in Sect.~\ref{sec:O3biosignature}). Other initial conditions for the models were chosen to resemble modern Earth, including atmospheric mixing ratios, planetary composition, and size. \texttt{Atmos} haze production was not used. All models were run at a zenith angle of 60$^\circ$ (Lambertian average) and with cloudless skies. Fixed mixing ratios were used for CH$_4$ (1.8$\times10^{-6}$), N$_2$O (3.0$\times10^{-7}$), and CO$_2$ (3.6$\times10^{-4}$). All other species used initial atmospheric profiles and boundary conditions as defined in \texttt{Atmos}'s modern Earth template, and surface pressure remained constant at 1~bar. We note that defining CH$_4$ at a constant mixing ratio resembling modern Earth differs from several studies modeling ``Earth-like'' planets, which have adjusted CH$_4$ mixing ratios to reflect the CH$_4$ ground flux of modern Earth, resulting in much higher atmospheric CH$_4$ mixing ratios (e.g., \citealt{rugh15,wund19,teal22}). We chose to maintain the CH$_4$ mixing ratio of modern Earth to better isolate the effects of different stellar hosts on the \om-\oz\ relationship. The impact of changing CH$_4$ levels on \oz\ abundance is discussed further in Sect.~\ref{sec:o2o3relationship}. \subsection{Input stellar spectra \label{sec:stellarspectra}} All host star spectra inputted into \texttt{Atmos} comprise actual UV observations supplemented with synthetic ATLAS model spectra \citep{kuru79} for the visible and IR. Table~\ref{tab:stars} contains information about the host stars, and their spectra are shown in Fig.~\ref{fig:stellarspectra}.
The G0V-K5V stellar spectra were created in \cite{rugh13} and are a combination of UV data from the \emph{International Ultraviolet Explorer} (IUE) data archives\footnote{http://archive.stsci.edu/iue} and model ATLAS spectra for the same stellar temperature \citep{kuru79}. UV data for the M5V host is from GJ 876 observations obtained by the \emph{Measurements of the Ultraviolet Spectral Characteristics of Low-mass Exoplanetary Systems} (MUSCLES) survey \citep{fran16}. The UV spectrum of a planet's host star is extremely important in the photochemical modeling of \oz\ production. Not only does the total amount of UV dictate photolysis rates, but the UV spectral slope determines the creation and destruction rates of \oz. The far-UV (FUV; $\lambda <$ 200 nm) is primarily responsible for photolysis of \om\ (and the creation of \oz), while the mid- and near-UV (abbreviated NUV, for brevity; 200 nm $< \lambda <$ 400 nm) is responsible for the photolysis of \oz. The NUV additionally can photolyze H$_2$O, which creates the HO$_x$ species responsible for destroying \oz, causing NUV flux to destroy \oz\ both directly and indirectly. Hence, a higher FUV/NUV flux ratio will create \oz\ more efficiently. Low-mass, active stars tend to have higher FUV/NUV flux ratios, as activity causes excess FUV chromospheric radiation, while NUV wavelengths are often absorbed in cool stars by TiO \citep{harm15}. \subsection{Radiative transfer model} After \texttt{Atmos} computes the compositions of our model atmospheres, the \emph{Planetary Intensity Code for Atmospheric Scattering Observations} (\texttt{PICASO}) computes planetary emission spectra \citep{bata19,bata21}. \texttt{PICASO} is a publicly available\footnote{https://natashabatalha.github.io/picaso/index.html} radiative transfer code capable of producing transmission, reflected light, and emission spectra for a diverse range of planets.
Our emission spectra were calculated at a phase angle of 0$^\circ$ (full phase) with altitude dependent pressure, temperature and mixing ratio profiles computed by \texttt{Atmos}. Output spectra cover a wavelength range of 0.3 - 14 $\mu$m, although particular focus is put on the \oz\ 9.7 $\mu$m feature in this study. \section{Results} \subsection{\om-\oz\ relationship \label{sec:chemresults}} Figure~\ref{fig:O3-O2_MR_all} shows the \om-\oz\ relationship for all of our model planetary atmospheres. The \om-\oz\ relationship is highly dependent on the stellar host, with different trends for model atmospheres having hotter host stars (G0V, Sun, K2V) versus cooler host stars (K5V, M5V). Since \oz\ is produced via the Chapman mechanism by converting \om\ into \oz, one would naively expect the \oz\ concentration to increase as the \om\ mixing ratio increases, which is the case for the cooler host stars. However, the \om-\oz\ relationship for hotter stellar hosts behaves unexpectedly: \oz\ abundance first increases, peaks, and then decreases as the abundance of \om\ decreases from modern Earth levels. Maximum \oz\ abundance occurs in the 25\% PAL \om\ models for the G0V and Sun hosts, and the 55\% PAL \om\ model for the K2V host. This effect does not occur for cooler host stars, with \oz\ abundance dropping consistently for models with less \om, though not in a linear fashion. As a result, the K5V and M5V models with the maximum \om\ considered (150\% PAL) created the maximum amount of \oz. These results are summarized in Table~\ref{tab:maxO3}.
To allow for a simple parameterization of the \om-\oz\ relationships shown in Fig.~\ref{fig:O3-O2_MR_all} that can be used as an approximation in, for instance, GCM and retrieval modeling, we fit a fifth degree polynomial of the form, \begin{equation} y = ax^5 + bx^4 + cx^3 + dx^2 + ex + f, \end{equation} where $y$ is the integrated \oz\ column density (cm$^{-2}$), $x$ is the base 10 logarithm of the O$_2$ mixing ratio, and $a$, $b$, $c$, $d$, $e$, and $f$ are the best fit polynomial coefficients listed in Table~\ref{tab:coefs}. This fit is valid over the range of \om\ abundances modeled in this study (0.01\%-150\% PAL). \begin{table}[t!] \begin{center} \caption{Maximum Integrated O$_3$ Column Density \label{tab:maxO3}} \begin{tabular}{c|r|r} \hline \hline Host Star & Max Int. \oz\ Col. & Max \oz\ Model \\ & Density (10$^{18}$ cm$^{-2}$) & (\% PAL \om) \\ \hline G0V & 7.74 & 25 \\ Sun & 5.64 & 25 \\ K2V & 4.37 & 55 \\ K5V & 4.96 & 150 \\ M5V & 4.65 & 150 \\ \hline \hline \end{tabular} \end{center} \end{table} \begin{table*}[h!] \begin{center} \caption{Coefficients of polynomial fit of the \om-\oz\ relationship \label{tab:coefs}} \begin{tabular}{lcccccc} \hline \hline Host Star & a & b & c & d & e & f\\ \hline G0V & 4.582e+16 & 6.021e+17 & 2.481e+18 & 2.618e+18 & -1.126e+18 & 5.742e+18 \\ Sun & 6.203e+16 & 7.716e+17 & 3.147e+18 & 4.157e+18 & 8.412e+17 & 4.640e+18 \\ K2V & 3.417e+16 & 3.846e+17 & 1.283e+18 & 7.391e+17 & -8.266e+17 & 3.734e+18 \\ K5V & 6.826e+15 & 2.379e+16 & -3.424e+17 & -1.848e+18 & -9.440e+17 & 4.898e+18 \\ M5V & -2.144e+16 & -3.381e+17 & -1.933e+18 & -4.458e+18 & -1.941e+18 & 4.542e+18 \\ \hline \hline \end{tabular} \end{center} \end{table*} The seemingly counterintuitive phenomenon of hotter hosts having \oz\ levels \textit{increase} as \om\ levels \textit{decrease} can be explained by two factors: UV shielding abilities of \om, and the pressure dependency of \oz\ formation. First, we will address the UV shielding ability of \om. 
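As an illustration of how this parameterization might be applied, the following minimal Python sketch (not part of the modeling pipeline; the coefficients are those for the Sun host from Table~\ref{tab:coefs}) evaluates the fit at 100\% and 25\% PAL \om:

```python
import math

# Best-fit polynomial coefficients for the Sun host (Table 3).
a, b, c, d, e, f = 6.203e16, 7.716e17, 3.147e18, 4.157e18, 8.412e17, 4.640e18

def o3_column(o2_mixing_ratio):
    """Integrated O3 column density (cm^-2) from the fifth-degree fit.

    Valid only over the fitted range, 0.01%-150% PAL O2
    (mixing ratios of ~2.1e-5 to 0.315).
    """
    x = math.log10(o2_mixing_ratio)
    # Horner evaluation of a*x^5 + b*x^4 + c*x^3 + d*x^2 + e*x + f.
    return ((((a * x + b) * x + c) * x + d) * x + e) * x + f

y_100 = o3_column(0.21)         # 100% PAL O2
y_25 = o3_column(0.25 * 0.21)   # 25% PAL O2, where Table 2 places the maximum
print(y_100, y_25)  # ~5.2e18 and ~5.6e18 cm^-2
```

Evaluated this way, the fit recovers the behavior summarized in Table~\ref{tab:maxO3} for the Sun host: the 25\% PAL value ($\approx$5.6$\times$10$^{18}$ cm$^{-2}$) exceeds the 100\% PAL value, reflecting the \oz\ peak at sub-modern \om\ levels.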
Despite the fact that \om\ UV absorption cross sections are either significantly smaller than those of \oz\ or require far higher energy photons (see Fig.~\ref{fig:XS}), \om\ remains an important UV shield on modern Earth, primarily due to its large abundance. Although \om\ is less efficient at absorbing UV photons than \oz, \om\ makes up $\sim$21\% of the atmosphere, whereas \oz\ is a trace gas with a maximum value of $\sim$10 ppm on modern Earth. This allows the far larger number of \om\ molecules to compensate for their smaller absorption cross sections and absorb many photons with wavelengths shorter than 240~nm (the required wavelength for \om\ photolysis, see Reaction~\ref{r:PO2_O}). As a result, as \om\ decreases, UV shielding in that wavelength range decreases, allowing photolysis to occur deeper in the atmosphere. This is illustrated in Fig.~\ref{fig:genchem}, where mixing ratio profiles of \oz, H$_2$O, CH$_4$, and N$_2$O are shown for all host stars at \om\ abundances of 100\%, 10\%, 1\%, and 0.1\% PAL. Photolysis occurs at lower atmospheric altitudes as \om\ decreases, leading the \oz\ layer to shift downward in the atmosphere. This effect is more pronounced for hotter host stars with high UV fluxes (particularly high FUV fluxes capable of \om\ photolysis), and therefore higher photolysis rates. Secondly, the depth at which \om\ photolysis occurs is of particular importance to the altitudes at which the \oz-forming Chapman mechanism takes place, because as \om\ decreases, photolysis reaches not only deeper but also denser regions of the atmosphere. This is highly relevant to \oz\ formation due to the pressure dependency of the Chapman mechanism: Reaction~\ref{r:O2M}, in which an O atom and an \om\ molecule combine (with the help of a background molecule) to form \oz, is a 3-body reaction, and is therefore faster at higher atmospheric densities.
Denser regions allow O, \om, and background molecules to come together and react more rapidly than in a thinner region of the atmosphere. For hotter host stars in our sample (G0V, Sun, K2V), the UV fluxes are strong enough to allow \om\ photolysis to reach much denser atmospheric layers as \om\ decreases, allowing the benefit of faster \oz\ production via Reaction~\ref{r:O2M} to outweigh the smaller source of \om, resulting in peak \oz\ abundance at lower \om\ levels. Our cooler host stars (K5V, M5V), however, have weaker UV fluxes, meaning photons capable of \om\ photolysis cannot travel as deep in the atmosphere as for hotter hosts when \om\ decreases. The additional speed of the Chapman mechanism for lower \om\ does not make up for the smaller amount of \om, causing \oz\ abundance to decrease with decreasing \om\ abundance for these cooler host stars. This result of an increase in \oz\ production as \om\ levels decrease has been noted for the Earth-Sun system by several studies (e.g., \citealt{ratn72,levi79,kast80,kast85,lege93}), although this is the first time it has been explored for Earth-like planets around different stellar hosts. Whether or not this occurs for an Earth-like planet will depend on whether the UV flux (particularly the FUV flux) from its host star is strong enough to incite \om\ photolysis at dense enough atmospheric levels that the increased rate of \oz\ production can counter the decreased amount of \om\ from which \oz\ can form. This effect contributes to the strong dependency of the \om-\oz\ relationship on the spectral type of the host star. For example, the G0V host star models have more \oz\ at 10\% PAL \om\ than at 100\% PAL \om, whereas for the M5V host star \oz\ abundance in the 10\% PAL \om\ model is nearly 60\% less than it is for the 100\% PAL \om\ model. When looking at specific \oz\ mixing ratios, Fig.~\ref{fig:O3-O2_MR_all} also indicates an increase in \oz\ above the stratosphere for all models.
This upper atmosphere \oz\ (the ``secondary \oz\ layer'') is produced primarily by \om\ photolysis from higher energy photons ($\lambda <$ 175 nm; Reaction~\ref{r:PO2_O1D}), which produces the radical \od. Photons of these wavelengths are absorbed high in the atmosphere but do not contribute significantly to stratospheric \oz, even when a decrease in \om\ allows photolysis to reach deeper layers of the atmosphere. Instead, these photons create \oz\ above the primary \oz\ layer, generally in the mesosphere and thermosphere (see Sect.~\ref{sec:chem} for more details). Although \oz\ mixing ratios are high at these altitudes, due to the thin atmosphere \oz\ creation at these elevations does not add considerably to the total amount of \oz. Also note that although the K2V host star produces enough UV photons capable of \om\ photolysis to have peak \oz\ production in its 55\% PAL \om\ model, both the K5V and M5V host models show a larger amount of \oz\ than the K2V host for \om\ levels near 100\% PAL (Fig.~\ref{fig:O3-O2_MR_all}). This is because although the K2V host has more FUV than the K5V and M5V hosts (Fig.~\ref{fig:stellarspectra}), the cooler hosts have higher FUV/NUV ratios (Table~\ref{tab:stars}), allowing more efficient \oz\ production without as much NUV \oz\ destruction. In summary, the \om-\oz\ relationship is highly dependent on the UV flux of the host star, with different trends for hotter and cooler host stars. Hotter host stars with high FUV fluxes experience peak \oz\ abundance at lower \om\ levels due to \oz\ formation occurring in deeper, denser parts of the atmosphere where the Chapman mechanism is more efficient. Cooler host stars do not emit enough FUV flux for this effect to occur, and experience consistently decreasing \oz\ as \om\ decreases. \subsection{Impact of varying \om\ on H$_2$O, CH$_4$, \& N$_2$O} Figure~\ref{fig:genchem} shows the impact of varying \om\ levels on the biologically relevant atmospheric species H$_2$O, CH$_4$, and N$_2$O.
As \om\ decreases, photons usually absorbed by \om\ ($\lambda <$ 240 nm) travel deeper into the atmosphere and drive the majority of atmospheric changes. This allows photolysis in general to reach lower altitudes, including photolysis by high energy photons that create the \od\ radical, which reacts quickly with many species. \od\ is produced via photolysis of \om, \oz, N$_2$O, and CO$_2$ as follows, \begin{equation} \m{O}_2 + \m{h}\nu \rightarrow \m{O} + \m{O(}^1\m{D) (}\lambda < 175\ \m{nm}), \tag{\ref{r:PO2_O1D}} \end{equation} \vspace{-0.7cm} \begin{equation} \m{O}_3 + \m{h}\nu \rightarrow \m{O}_2 + \m{O(}^1\m{D) (}\lambda < 310\ \m{nm}), \tag{\ref{r:PO3_O1D}} \end{equation} \vspace{-0.7cm} \be \m{N}_2\m{O} + \m{h}\nu \rightarrow \m{N}_2 + \m{O(}^1\m{D) (}\lambda < 200\ \m{nm}), \label{r:PN2O_O1D} \ee \vspace{-0.7cm} \be \m{CO}_2 + \m{h}\nu \rightarrow \m{CO} + \m{O(}^1\m{D) (}\lambda < 167\ \m{nm}). \label{r:PCO2_O1D} \ee As \om\ decreases, \od\ creation moves deeper into the atmosphere for all these species. For our models, \oz\ photolysis consistently creates the most \od, particularly at lower atmospheric heights. \om\ photolysis is also a significant producer of \od, although it is limited to the stratosphere and above, even for the lowest \om\ levels modeled in this study. CO$_2$ and N$_2$O photolysis contribute to \od\ production as well, although CO$_2$ photolysis is constrained to the upper atmosphere similarly to \om\ photolysis, while N$_2$O photolysis can occur much closer to the planetary surface for low \om\ levels. Increased photolysis rates as \om\ shielding decreases, together with increased \od\ production reaching lower atmospheric levels, cause the depletion of many species. H$_2$O is increasingly depleted for decreasing levels of \om\ due to both photolysis in the atmosphere and \od\ reactions lower in the atmosphere.
Both of these reactions create the OH radical while removing H$_2$O, \be \m{H}_2\m{O} + \m{h}\nu \rightarrow \m{H} + \m{OH}, \label{r:PH2O} \ee \vspace{-0.7cm} \begin{equation} \m{H}_2\m{O} + \m{O(}^1\m{D)} \rightarrow \m{OH} + \m{OH}. \tag{\ref{r:H2O_OH}} \end{equation} On modern Earth, Reaction~\ref{r:H2O_OH} is the primary source of OH in the stratosphere, which is a major sink for several species. As H$_2$O levels in the upper atmosphere drop with decreasing \om, OH production moves to lower levels of the atmosphere, as seen in Fig.~\ref{fig:catalytic}. Upper atmospheric depletion of H$_2$O and OH production at lower altitudes are seen more strongly for hotter host stars, as they have higher incident UV for photolysis and \od\ creation. CH$_4$, an important biosignature gas, is also depleted in the upper atmosphere for models around all host stars, primarily through oxidation via OH (created via reactions with H$_2$O), along with photolysis and reactions with \od, \be \m{CH}_4 + \m{OH} \rightarrow \m{CH}_3 + \m{H}_2\m{O}, \label{r:CH4_H2O} \ee \vspace{-0.7cm} \be \m{CH}_4 + \m{O(}^1\m{D)} \rightarrow \m{CH}_3 + \m{OH}. \label{r:CH4_OH} \ee In the upper atmosphere, depletion is dominated by photolysis. Reaction~\ref{r:CH4_H2O} is the main sink of both stratospheric CH$_4$ and OH on modern Earth, with CH$_4$ and OH acting as major sinks for each other. Reactions with \od\ and OH occur deeper in the atmosphere for decreasing \om\ levels, as both these radicals are produced at lower altitudes. Reaction~\ref{r:CH4_OH} is an additional source of OH in the lower stratosphere/troposphere. CH$_4$ depletion is limited to the upper stratosphere for model atmospheres around cooler hosts, although it can reach the lower stratosphere for model atmospheres with hotter hosts.
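The wavelength thresholds quoted for the photolysis channels in this section correspond directly to minimum photon energies via $E = hc/\lambda$. A small illustrative conversion (using the standard value $hc \approx 1239.84$~eV~nm, which is a physical constant, not a quantity from this work):

```python
# Convert photolysis threshold wavelengths (nm) to photon energies (eV)
# using E = hc / lambda, with hc ~= 1239.84 eV nm.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given wavelength in nm."""
    return HC_EV_NM / wavelength_nm

# Thresholds quoted in the text for the O(1D)-producing channels.
thresholds_nm = {"O2 -> O + O(1D)": 175, "O3 -> O2 + O(1D)": 310,
                 "N2O -> N2 + O(1D)": 200, "CO2 -> CO + O(1D)": 167}
for channel, lam in thresholds_nm.items():
    print(f"{channel}: lambda < {lam} nm, i.e. E > {photon_energy_ev(lam):.2f} eV")
```

For example, the 175 nm threshold for \od-producing \om\ photolysis corresponds to photons more energetic than $\approx$7.1 eV, while the 240 nm threshold for \om\ photolysis in general corresponds to $\approx$5.2 eV.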
N$_2$O, another potential biosignature gas, experiences extreme depletion down to the troposphere at lower levels of \om\ for hotter hosts, and significantly less depletion, confined to the upper atmosphere, for cooler stars. This is due primarily to photolysis (Reaction~\ref{r:PN2O_O1D}) in the upper atmosphere, although there are contributions from interactions with \od\ as well, \begin{equation} \m{N}_2\m{O} + \m{O(}^1\m{D)} \rightarrow \m{NO} + \m{NO}. \tag{\ref{r:N2O_NO}} \end{equation} Depletion rates of N$_2$O vary significantly between different stellar hosts due to the strong dependence of N$_2$O destruction on incident UV flux. In summary, the majority of changes to an atmosphere as \om\ decreases are caused by increased photolysis rates as \om\ UV shielding decreases, as well as by \od\ production from \om, \oz, N$_2$O, or CO$_2$ photolysis occurring at lower levels of the atmosphere. These effects cause upper atmospheric depletion of H$_2$O, CH$_4$, and N$_2$O, with more depletion for hotter stellar hosts with stronger UV fluxes. \subsection{Impact of varying \om\ on catalytic cycles \label{sec:catalyticresults}} Varying \om\ levels impacts HO$_x$ (OH + HO$_2$) and NO$_x$ (NO + NO$_2$) species, which are the main contributors to the catalytic cycles that destroy \oz\ (see Sect.~\ref{sec:cc} for details). Mixing ratio profiles of these species are shown in Fig.~\ref{fig:catalytic} for our hottest and coolest host stars. Once again, the impact on these species as \om\ levels are decreased is controlled by photolysis reaching deeper levels of the atmosphere, along with \od\ production moving to lower levels as well. As \om\ decreases, HO$_x$ species (OH and HO$_2$) in all models decrease in the upper atmosphere, but increase in the lower atmosphere.
OH production via reactions with H$_2$O (Reactions~\ref{r:H2O_OH}, \ref{r:PH2O}) and CH$_4$ (Reaction~\ref{r:CH4_OH}) occurs at lower altitudes for lower \om\ levels, especially since both H$_2$O and CH$_4$ are depleted in the upper atmosphere by photolysis. This ``pushing down'' of HO$_x$ species is more noticeable for hotter host stars with higher photolysis rates. Also note that stars with lower FUV/NUV ratios can better remove \oz\ via the HO$_x$ catalytic cycle, as FUV wavelengths create \oz, while NUV wavelengths photolyze H$_2$O to form OH. However, for all host stars the efficiency of the HO$_x$ catalytic cycle of \oz\ destruction is not largely impacted for different \om\ and \oz\ abundances, since OH and HO$_2$ move down in the atmosphere along with \oz\ concentrations. Decreased \om\ UV shielding and increased photolysis do not destroy HO$_x$ species, but rather convert them into other HO$_x$ species. When HO$_2$ is photolyzed, \be \m{HO}_2 + \m{h}\nu \rightarrow \m{OH} + \m{O}, \label{r:PHO2} \ee it creates OH. The OH radical itself is extremely reactive with a short lifetime, and typically will react quickly with other species or react with \oz\ to create HO$_2$ (Reaction~\ref{r:HOx_OH}). Although with decreasing \om\ abundance HO$_x$ species are formed lower in the atmosphere rather than destroyed by photolysis, NO$_x$ species (NO + NO$_2$) can be depleted via photolysis. The main source of NO$_x$ in the stratosphere is via N$_2$O reactions with \od. However, as shown in Fig.~\ref{fig:genchem}, N$_2$O is significantly depleted in the atmosphere via photolysis, especially for hotter host stars. While H$_2$O, the primary source of HO$_x$ species in the stratosphere, creates HO$_x$ during photolysis, N$_2$O, the primary source of NO$_x$ species, does not. Instead it creates N$_2$ and \od\ (Reaction~\ref{r:PN2O_O1D}), cutting off the main source of NO production from N$_2$O.
As for NO$_x$ species themselves, NO$_2$ photolysis creates more NO, while NO photolysis simply breaks the molecule apart, \be \m{NO}_2 + \m{h}\nu \rightarrow \m{NO} + \m{O}, \label{r:PNO2} \ee \vspace{-0.7cm} \be \m{NO} + \m{h}\nu \rightarrow \m{N} + \m{O}, \label{r:PNO} \ee making NO photolysis a sink of NO$_x$. NO can be formed once again via reactions between N atoms and \om\ molecules, \be \m{N} + \m{O}_2 \rightarrow \m{NO} + \m{O}, \label{r:N_NO} \ee although the rate of NO photolysis is faster than Reaction~\ref{r:N_NO}, making it a gradual sink of NO$_x$. Often the N atom created by Reaction~\ref{r:PNO} will remove NO$_x$ via, \be \m{NO} + \m{N} \rightarrow \m{N}_2 + \m{O}, \label{r:NO_N} \ee or the N atoms will recombine with other N atoms, \be \m{N} + \m{N} \rightarrow \m{N}_2, \label{r:N} \ee with this reaction becoming more efficient as \om\ levels drop. This sink via NO photolysis has less of an impact for cooler host stars with lower photolysis rates, hence less NO$_x$ depletion. As seen in Fig.~\ref{fig:catalytic}, NO$_x$ species are depleted throughout the atmosphere for the G0V host star, while the M5V host star experiences less NO$_x$ depletion, and actually shows an increase in NO in the lower stratosphere. This is due primarily to lower photolysis rates for the cooler M5V star, which deplete less N$_2$O and NO (Reaction~\ref{r:N2O_NO}). However, for model atmospheres around all host stars the ability of the NO$_x$ catalytic cycle to deplete \oz\ diminishes consistently with decreasing \om\ levels, even for cooler stellar host models with less NO$_x$ depletion. In summary, the HO$_x$ catalytic cycles are not strongly impacted by decreasing \om\ because increased photolysis rates tend to push HO$_x$ species to lower altitudes rather than destroy them. However, NO$_x$ catalytic cycles decrease in efficiency with lower \om\ levels since photolysis of N$_2$O and NO removes NO$_x$ from the atmosphere. {\singlespace \begin{table*}[h!] 
\centering \footnotesize \caption{UV Integrated Fluxes \label{tab:UV_all}} \begin{tabular}{crrrr|rrr|rrr} Spectral & O$_2$ MR & \multicolumn{3}{c}{UVA 315 - 400 nm (W/m$^2$)} & \multicolumn{3}{c}{UVB 280 - 315 nm (W/m$^2$)} & \multicolumn{3}{c}{UVC 121.6 - 280 nm (W/m$^2$)}\\ \cline{3-5} \cline{6-8} \cline{9-11} Type & (\% PAL) & TOA & Surface & \% to surf. & TOA & Surface &\% to surf. & TOA & Surface &\% to surf. \\ \hline \hline G0V & 100 & 96.6 & 77.9 & 80.7 & 22.4 & 1.5 & 6.7 & 11.2 & 3.8e-27 & 3.4e-26 \\ G0V & 10 & 96.6 & 76.7 & 79.4 & 22.4 & 1.4 & 6.2 & 11.2 & 1.8e-08 & 1.6e-07 \\ G0V & 1 & 96.6 & 77.5 & 80.3 & 22.4 & 2.4 & 10.7 & 11.2 & 1.7e-04 & 1.5e-03 \\ G0V & 0.1 & 96.6 & 78.6 & 81.4 & 22.4 & 5.9 & 26.3 & 11.2 & 1.7e-02 & 1.5e-01 \\ \hline Sun & 100 & 82.9 & 67.5 & 81.4 & 16.2 & 1.6 & 10.2 & 6.7 & 2.8e-21 & 4.1e-20 \\ Sun & 10 & 82.9 & 66.5 & 80.2 & 16.2 & 1.6 & 9.6 & 6.7 & 3.3e-08 & 5.0e-07 \\ Sun & 1 & 82.9 & 67.1 & 81.0 & 16.2 & 2.7 & 16.4 & 6.7 & 2.3e-04 & 3.5e-03 \\ Sun & 0.1 & 82.9 & 67.7 & 81.7 & 16.2 & 5.6 & 34.8 & 6.7 & 1.5e-02 & 2.3e-01 \\ \hline K2V & 100 & 34.2 & 28.0 & 81.9 & 4.8 & 0.68 & 14.1 & 1.4 & 1.1e-18 & 8.0e-17 \\ K2V & 10 & 34.2 & 27.6 & 80.8 & 4.8 & 0.74 & 15.4 & 1.4 & 1.8e-08 & 1.3e-06 \\ K2V & 1 & 34.2 & 27.8 & 81.4 & 4.8 & 1.2 & 25.4 & 1.4 & 1.0e-04 & 7.3e-03 \\ K2V & 0.1 & 34.2 & 28.0 & 81.8 & 4.8 & 2.2 & 44.8 & 1.4 & 1.0e-02 & 7.2e-01 \\ \hline K5V & 100 & 15.3 & 12.8 & 83.6 & 0.68 & 0.10 & 14.4 & 0.16 & 2.1e-21 & 1.3e-18 \\ K5V & 10 & 15.3 & 12.6 & 82.6 & 0.68 & 0.14 & 20.1 & 0.16 & 8.4e-09 & 5.1e-06 \\ K5V & 1 & 15.3 & 12.7 & 82.9 & 0.68 & 0.23 & 33.6 & 0.16 & 5.4e-05 & 3.3e-02 \\ K5V & 0.1 & 15.3 & 12.7 & 83.0 & 0.68 & 0.34 & 50.4 & 0.16 & 4.7e-03 & 2.8e+00 \\ \hline M5V & 100 & 1.6 & 1.3 & 83.8 & 3.5e-02 & 6.5e-03 & 18.4 & 2.7e-02 & 9.8e-21 & 3.6e-17 \\ M5V & 10 & 1.6 & 1.3 & 83.1 & 3.5e-02 & 1.0e-02 & 28.8 & 2.7e-02 & 4.1e-09 & 1.5e-05 \\ M5V & 1 & 1.6 & 1.3 & 83.4 & 3.5e-02 & 1.6e-02 & 45.7 & 2.7e-02 & 3.8e-05 & 1.4e-01 \\ M5V & 0.1 & 
1.6 & 1.3 & 83.4 & 3.5e-02 & 2.0e-02 & 56.3 & 2.7e-02 & 1.1e-03 & 4.2e+00 \\ \hline \hline \end{tabular} \vspace{-0.2cm} \tablefoot{ Abbreviations: MR = mixing ratio; PAL = present atmospheric level; TOA = top of atmosphere} \end{table*} } \subsection{Surface UV flux for different \om\ and \oz\ levels \label{sec:UV}} \texttt{Atmos} was used to calculate the amount of UV flux reaching the planetary surface in each model atmosphere. Surface UV flux is strongly dependent on the incident stellar UV flux and the amount of UV shielding from both \om\ and \oz. High UV fluxes can cause substantial damage to biological organisms, hence UV surface environments will be critical for determining surface habitability. These results are summarized in Table~\ref{tab:UV_all} and shown in Fig.~\ref{fig:surfUV}. Surface UV fluxes were calculated using a zenith angle of 60$^\circ$ (see Sect.~\ref{sec:atmos}). Integrated surface UV fluxes are broken up into three biologically relevant wavelength regimes: UVA, UVB, and UVC. UVA flux (315-400 nm) is the lowest energy type of UV and is only partially shielded by \oz, so a large percentage of incident UVA on modern Earth reaches the planetary surface. UVB (280-315 nm) is more harmful for life, contributing to sunburn and skin cancer in humans and damage to other organisms (e.g., \citealt{kies01}). UVB is shielded much more efficiently by \oz\ than UVA, with a smaller fraction of incident UVB reaching the surface of modern Earth. UVC (121.6-280 nm) is capable of causing DNA damage, but is fortunately shielded almost entirely by \oz\ on modern Earth. Ozone is most efficient at shielding UV in this wavelength region, as evidenced by the \oz\ absorption cross sections shown in Fig.~\ref{fig:XS}. We note that \om\ photolysis, the first step in \oz\ formation (Reactions~\ref{r:PO2_O}, \ref{r:PO2_O1D}), requires a UVC photon ($\lambda <$ 240 nm), allowing \om\ to contribute partially to UVC shielding.
However, since \oz\ is the primary shielder of UVC, the requirement of a UVC photon to produce \oz\ creates interesting correlations between incident and surface UVC flux. Because UVA is not strongly shielded by \oz, UVA surface fluxes for all models are closely correlated with the amount of incident UVA flux (see Table~\ref{tab:UV_all} and Fig.~\ref{fig:surfUV}). For all host stars at \om\ levels of 100\%, 10\%, 1\%, and 0.1\% PAL, the fraction of incident UVA that reaches the surface of these model planets is $\sim$80\% in all cases. Because \oz\ plays only a small role in UVA shielding, all model results are quite similar. UVB surface fluxes are significantly more variable because \oz\ shielding is much more important at these wavelengths. Although the G0V host star provides a higher incident UVB flux than the Sun, G0V-hosted models still maintain slightly less surface UVB flux until \om\ decreases to 0.1\% PAL, at which point the G0V and Sun model surface fluxes become roughly equal. This is due to the larger amount of \oz\ created by the G0V host star compared to the Sun, which allows for stronger UVB shielding (see Fig.~\ref{fig:O3-O2_MR_all} for the \om-\oz\ relationship). The percentage of incident UVB flux that reaches the planetary surface varies significantly between different host stars. For our hottest host star (G0V), the amount of incident UVB flux reaching the planetary surface increases from 6.7\% to 26.3\% as \om\ levels drop from 100\% to 0.1\% PAL. As a result of less \oz\ shielding, these percentages are higher for our coolest host star (M5V), which experiences an increase from 18.4\% to 56.3\% of incident UVB reaching the surface as \om\ decreases from 100\% to 0.1\% PAL. Even though the cooler stellar hosts allow a higher percentage of UVB flux to travel through the atmosphere, they still maintain lower surface UVB values than hotter hosts due to their weaker incident UVB flux.
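The ``\% to surf.'' columns of Table~\ref{tab:UV_all} are simply surface-to-TOA flux ratios. As a quick consistency check, this minimal sketch recomputes the G0V, 100\% PAL row (values transcribed from the table; small differences reflect rounding of the tabulated fluxes):

```python
def percent_to_surface(toa_flux, surface_flux):
    """Percentage of top-of-atmosphere flux reaching the surface."""
    return 100.0 * surface_flux / toa_flux

# G0V host, 100% PAL O2 row of the UV table (integrated fluxes in W/m^2).
uva = percent_to_surface(96.6, 77.9)     # ~80.6-80.7% (table: 80.7)
uvb = percent_to_surface(22.4, 1.5)      # ~6.7% (table: 6.7)
uvc = percent_to_surface(11.2, 3.8e-27)  # ~3.4e-26% (table: 3.4e-26)
print(round(uva, 1), round(uvb, 1), uvc)
```

The vanishing UVC percentage at 100\% PAL \om\ quantifies how completely the combined \om\ and \oz\ columns shield the surface at these wavelengths.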
The strong reliance of UVC absorption on \oz\ abundance, along with the fact that \oz\ creation requires UVC photons, leads to some unexpected UVC surface flux results. A striking consequence of this is that while the G0V host star provides the highest incident UVC flux of all our host stars, it maintains the lowest surface UVC flux for the 100\% PAL \om\ model by several orders of magnitude, while the much cooler K2V stellar host model experiences the highest surface UVC flux. The much higher incident UVC flux of the G0V host drives much faster \oz\ production than for other host stars, allowing for UVC shielding strong enough to counteract the high incident UVC flux. Another interesting result for the 100\% PAL \om\ case is that the M5V model has a slightly higher UVC surface flux than the K5V model, despite the fact that the M5V model has the lowest incident UVC flux. Again, this effect is due to the higher \oz\ abundance of the K5V-hosted planet, created by its stronger incident UVC flux. For all host stars, the atmospheric models with 100\% PAL \om\ allowed only extremely tiny fractions of incident UVC flux to reach the surface ($<10^{-17}$\% in all cases). UVC surface fluxes for models with 10\% and 1\% PAL \om\ follow similar trends when comparing stellar hosts. Sun-hosted models had the largest surface UVC fluxes in both scenarios. Notice that although the model atmospheres hosted by the G0V star and the Sun have higher \oz\ levels for the 10\% PAL \om\ cases compared to their 100\% PAL \om\ cases, overall UVC shielding is significantly weaker for the 10\% PAL \om\ cases due to the smaller contribution of \om, which absorbs photons with wavelengths less than 240 nm. Even though the G0V host and the Sun produce much larger amounts of \oz\ than cooler stars, for low \om\ levels the combined decrease in \om\ and \oz\ UV shielding causes them to have higher surface UVC fluxes than cooler stars that produce significantly less \oz.
For model atmospheres with \om\ levels of only 0.1\% PAL, surface UVC levels begin to converge to the incident UVC flux, as \om\ and \oz\ levels have dropped enough that they shield UVC far less effectively. It has previously been suggested that the ability of \oz\ to shield UV drops off drastically at these \om\ values (e.g.,\ \citealt{segu03}). However, due to CO$_2$ shielding, all models in this study had virtually no photons with wavelengths less than 200 nm reach the planetary surface, even with the lowest \om\ abundance modeled (0.01\% PAL). Although the model atmospheres hosted by the hottest stars create the highest levels of \oz, they consistently experience the highest UVA and UVB surface fluxes due to the limited shielding abilities of \oz\ in these wavelength ranges. However, for the far more damaging UVC wavelengths at 100\% PAL \om, it is the G0V host star that provides the lowest UVC surface flux by orders of magnitude, with the Sun-hosted models having comparable UVC surface flux to cooler host star models. Somewhat ironically, for lower \om\ levels of 1 to 10\% PAL, it is the Sun that is the host star with the least ``hospitable'' conditions for surface life, with the highest UVC surface fluxes. As \om\ drops to 0.1\% PAL, UVC surface flux begins to converge to the incident UVC flux as \om\ and \oz\ shielding drops dramatically. However, though life on modern Earth requires a substantial \oz\ layer for UV protection, it is important to remember that evidence for life on Earth dates back to 3.7 Gyr ago \citep{rosi99}, long before \om\ levels rose during the Great Oxidation Event 2.5 Gyr ago. The lack of significant atmospheric UV shielding may prevent life as we know it, but it does not rule out its existence. Life could exist, for instance, underwater, at a depth at which significant damaging UV has been absorbed by water (e.g., \citealt{cock07}).
\subsection{\oz\ spectral features for different \om\ levels \label{sec:emissionresults}} Emission spectra from our model atmospheres are shown in Fig.~\ref{fig:emission}, zoomed in on the primary MIR \oz\ feature at 9.7 $\mu$m, along with the corresponding \oz\ mixing ratio and temperature profiles, which are necessary for interpreting the features. The temperature difference between the absorbing and emitting layers of the planet's atmosphere, rather than the overall abundance of the gaseous species, determines the depth of planetary emission spectrum features. Because \oz\ is a main contributor to stratospheric heating, the strength of \oz\ features has a highly nonlinear relationship to \oz\ abundance. Once again, we see counterintuitive trends for hotter host stars (G0V, Sun, K2V), and different, more straightforward trends for cooler host stars (K5V, M5V). For all host stars, the 0.1\% PAL \om\ case has the shallowest \oz\ feature in emission spectra, but the \om\ level with the deepest feature depends on the host star. For the G0V-hosted models the \oz\ feature for the 100\% PAL \om\ case has a similar depth to the 0.1\% PAL \om\ case, despite the fact that they have significantly different integrated \oz\ column densities (7.06$\times10^{18}$ cm$^{-2}$ for 100\% PAL \om; 1.23$\times10^{18}$ cm$^{-2}$ for 0.1\% PAL \om). For the two hottest host stars (G0V, Sun), the 10\% and 1\% PAL \om\ cases have the deepest features. The strong features of the 10\% PAL \om\ models are not surprising, since both the G0V and Sun models have higher \oz\ abundances at 10\% PAL than at 100\% PAL \om, but the 1\% PAL \om\ models have significantly less \oz\ than both the 10\% and 100\% PAL \om\ cases (see Fig.~\ref{fig:O3-O2_MR_all} for reference).
Conversely, \oz\ features for the coolest host star models correspond more intuitively to \om\ and \oz\ levels, with the highest \om\ and \oz\ abundances having the deepest features, and the lowest \om\ and \oz\ abundances having the shallowest features. The relationship between the depth of \oz\ spectral features and actual \oz\ abundance is dictated by atmospheric temperature profiles. The temperature difference between the emitting and absorbing layers of a gaseous species determines feature depth; \oz\ feature depth is therefore determined by the difference between the temperature at the altitude of peak \oz\ concentration in the stratosphere and the planet's surface temperature. Because \oz\ NUV absorption is a dominant source of stratospheric heating, a higher \oz\ concentration with significant incident NUV flux for \oz\ to absorb results in higher stratospheric temperatures and thus a shallower spectral feature. This explains why an atmosphere with a large amount of \oz\ and high incident NUV flux (and more stratospheric heating) has a weaker \oz\ feature than an atmosphere with less \oz\ and weaker incident NUV, but a larger temperature difference between the stratosphere and the surface. Cooler host star models with less \oz\ formation and lower incident NUV flux have significantly less stratospheric heating (Fig.~\ref{fig:emission}), and therefore \oz\ spectral feature depths that correspond more strongly with the actual abundance of \oz\ in their atmospheres. In summary, interpreting \oz\ features in planetary emission spectra and retrieving \oz\ (and \om) abundances will require modeling of atmospheric temperature profiles. Both photochemistry and climate modeling will be essential in this process.
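The role of the stratosphere-surface temperature contrast can be illustrated with blackbody radiances at the centre of the 9.7 $\mu$m band. This is a minimal sketch assuming pure blackbody emission from each layer; the temperatures are illustrative round numbers, not outputs of our models.

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Spectral radiance B_lambda (W m^-3 sr^-1) of a blackbody."""
    a = 2 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1
    return a / b

lam = 9.7e-6  # centre of the O3 band (m)

# Feature depth scales with the radiance contrast between the surface
# continuum and the O3-rich stratospheric layer. Illustrative values:
t_surface = 288.0
deep = planck(lam, t_surface) - planck(lam, 200.0)     # cold stratosphere
shallow = planck(lam, t_surface) - planck(lam, 270.0)  # heated stratosphere
print(deep, shallow)
```

A stratosphere heated toward the surface temperature (270 K here) yields a much smaller radiance contrast than a cold one (200 K), which is why abundant, strongly NUV-heated \oz\ can produce a shallower feature than a smaller \oz\ column above a cold stratosphere.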
\section{Discussion \label{sec:discussion}} \subsection{Comparison to other studies} Multiple studies have explored \oz\ formation in Earth-like atmospheres using a variety of models, each providing valuable insight on the \om-\oz\ relationship. Here, we briefly describe relevant past studies on this topic. With 1D modeling of Earth's atmosphere, early \oz\ studies revealed the nonlinear link between \oz\ and \om. Both \cite{ratn72} and \cite{levi79} discuss the phenomenon of the \oz\ layer moving down in the atmosphere as \om\ levels decreased (see Sect.~\ref{sec:chemresults} for details on this process) and agreed on peak \oz\ abundance occurring at $\sim$10\% PAL \om. Total \oz\ abundances calculated in these studies differed because they each included different chemical reactions. The model in \cite{ratn72} contained only the Chapman mechanism, while \cite{levi79} additionally included HO$_x$ and NO$_x$ catalytic cycle destruction of \oz\ in their model. Later, \cite{kast85} replicated the peak in \oz\ abundance at lower \om\ levels using a more sophisticated model including chemistry beyond the Chapman mechanism and catalytic cycles, incorporating 20 gaseous species overall. They predicted maximum \oz\ production to occur at 50\% PAL \om, a higher \om\ level than previous estimates. It is important to note that none of these studies included a climate model to calculate self-consistent atmospheric temperatures. Because the Chapman mechanism is temperature dependent, this omission helps account for discrepancies with later \oz\ calculations. Studies in later years began to model \oz\ production in planetary atmospheres with different types of host stars. \cite{segu03} used what they described as a ``loosely coupled'' 1D climate and photochemistry code (based in part on the \citealt{kast85} model) for different \om\ levels around F2V, G2V, and K2V type stars. We note that this model is a predecessor of the model used in this study: \texttt{Atmos}.
No host star displayed a peak \oz\ abundance at an \om\ level less than 100\% PAL, but this is likely because the modeled \om\ levels were evenly spaced on a logarithmic scale from 0.001-100\% PAL \om, whereas more finely spaced \om\ levels are required to capture this effect. Atmospheric chemistry and temperature profiles computed in \cite{segu03} are similar to those of our Sun and K2V host star models. There are slight differences in total \oz\ abundance in these models compared to those in this study (our models tend to have lower \oz), although this is likely due to differences in input UV spectra, boundary conditions, and model updates. Overall this is the most similar study to ours in terms of the variety of \om\ levels and host stars. Other studies have also modeled \oz\ formation in Earth-like planetary atmospheres around different stellar hosts. The effect of varying orbital separations inside the habitable zone on \oz\ formation was explored for F2V, G2V, and K2V hosts (same as \citealt{segu03}) in \cite{gren07}, and for a variety of M dwarfs in \cite{gren14}. Both of these studies used the 1D ``loosely coupled'' climate and photochemistry model developed in \cite{segu03}. An increase in star-planet separation for FGK stars caused cooler atmospheric temperatures, which led to an increase in \oz. This is because the 3-body reaction that creates \oz\ (Reaction~\ref{r:O2M}) is faster at cooler temperatures. However, this \oz\ increase was not large, because larger orbital distances also caused higher levels of the HO$_x$ and NO$_x$ species which destroy \oz\ \citep{gren07}. When repeating this study for M dwarfs they found what they described as a ``Goldilocks'' effect, in which there was a range of UV that was best for creating the most detectable \oz.
If incident UV flux is too low it will create small amounts of \oz, making it harder to detect, but if the UV flux is too high it will create enough \oz\ to cause significant stratospheric heating, making it more difficult to detect in planetary emission spectra (see Sect.~\ref{sec:emissionresults} for details on this phenomenon). M7V spectral types were found to produce the amount of UV that was ``just right'' for creating detectable amounts of \oz\ \citep{gren14}. Although these models were run only at 100\% PAL \om, their results are consistent with this study. The impact of stellar host UV on \oz\ formation has also been modeled using \emph{Exo-Prime}, a 1D coupled climate and photochemistry code originally based on the same codes as \texttt{Atmos}, for Earth-like planets orbiting FGKM stars \citep{rugh13}, M dwarfs \citep{rugh15}, cool white dwarfs \citep{koza18}, and red giants \citep{koza19}. However, all these studies were constrained to \om\ abundances of 100\% PAL, although our corresponding model results are consistent. Another \emph{Exo-Prime} study \citep{rugh15b} modeled Earth at different points throughout geological history for FGKM stars, including four different \om\ abundances, although the large variations in the abundances of many gaseous species (e.g., CH$_4$, CO$_2$) do not allow for a straightforward comparison of results with our study. Of the 1D models discussed in this study, only \texttt{Atmos} models photochemistry in the mesosphere and lower thermosphere (up to an altitude of 100 km), whereas other photochemistry models are limited to atmospheric heights below $\sim$65 km \citep{kast80,kast85,segu03,gren07,gren14,rugh13,rugh15,rugh15b,koza18,koza19}. This is relevant for \oz\ formation because of the ``secondary \oz\ layer'' on Earth above the stratosphere (see details in Sects.~\ref{sec:chem} and \ref{sec:chemresults}).
High energy photons ($\lambda <$ 175 nm) are normally absorbed above the stratosphere by \om\ photolysis, creating the \od\ radical in the process (Reaction~\ref{r:PO2_O1D}). This could yield different findings for a photochemistry model that does not include higher altitudes, since the high-energy photons will then be absorbed at far lower altitudes than in reality. This would change both the \od\ and \oz\ atmospheric profiles, and could account for the differences in \oz\ production that we see between different models. However, overall results from the \cite{segu03} model, \emph{Exo-Prime}, and \texttt{Atmos} remain fairly consistent. Along with 1D models, \oz\ formation has been modeled in 3D. In reality, \oz\ formation and abundance are dependent on both the atmospheric latitude and the time of day. On the night side of a planet, \oz\ cannot be generated by the Chapman mechanism nor destroyed by photolysis. During the day \oz\ is created most efficiently at the equator, where incident UV flux is highest, and is then transported toward the poles by Brewer-Dobson circulation, causing peak \oz\ abundance to vary in altitude depending on the latitude. On modern Earth, there is only a $\sim$2\% difference in \oz\ between the day and night sides, while planets with differing rotation periods may have more unevenly distributed \oz. Despite the fact that 1D models (like ours) can use a zenith angle to represent the ``average'' of incoming radiation, they cannot accurately predict \oz\ formation and transport for slowly rotating planets, especially ones that are tidally locked. Several studies have used 3D modeling to explore \oz\ formation and distribution on tidally locked planets. \cite{proe16} used a 3D climate and photochemistry model to compare \oz\ distribution on modern Earth and a tidally locked version of Earth. They found that \oz\ could be distributed to the night-side of the planet and accumulate there in the absence of photolysis.
Hemispheric maps of \oz\ distribution at different phases demonstrated that the amount of detectable \oz\ will be phase-dependent during observations. Tidally locked Earth-like planets are most likely found around lower mass stars, where the tidal-locking radius is within the habitable zone and rotation periods are substantially shorter than in the ``tidally-locked Earth around the Sun'' scenario investigated in \cite{proe16}. \cite{caro18} used a 3D GCM to model how the rapid rotation of a tidally locked planet would affect \oz\ transport on a planet with a 25-day period. This faster rotation created an ``anti-Brewer-Dobson circulation'' effect, with \oz\ accumulating at the equator rather than being transported toward the poles as on modern Earth. However, this study did not employ a photochemistry model, only circulation effects. \cite{chen18} and \cite{yate20} also performed 3D modeling of tidally locked habitable zone planets orbiting M dwarfs, although they incorporated both climate and photochemistry models as well. \cite{chen18} used CAM-Chem, a 3D model including 97 species in chemistry computations, while \cite{yate20} used the Met Office Unified Model, including only Chapman mechanism and HO$_x$ catalytic cycle chemistry. Both studies found that \oz\ created on the day-side would be transported to the night-side, where it could accumulate to a higher quantity than on the day-side. HO$_x$ species that were also transported to the night-side would be the primary sink of night-side \oz. However, \cite{yate20} computed a thinner \oz\ layer than \cite{chen18}, with the latter's model also computing that \oz\ would exist at higher altitudes. These differences are likely due to differing chemical networks, land mass fraction (only \citealt{chen18} had continents), and input stellar spectra.
Overall these results agreed well with \oz\ abundances calculated by \cite{rugh15} for a similar host star, although H$_2$O mixing ratios (important for creating HO$_x$ species) were shown to vary significantly more, showing that 1D models do not include important feedback loops contained within 3D models. The 3D modeling work most similar to ours is \cite{cook22}, which uses a 3D climate-chemistry code to model the Earth-Sun system across geological history at various \om\ levels from 0.1\% to 150\% PAL. Comparing their results to our Sun-hosted models, there is agreement between trends in the time-averaged mixing ratios for different gaseous species. However, a main finding of \cite{cook22} is that their 3D model predicts lower \oz\ abundances for \om\ levels of 0.5\% to 50\% PAL when compared to 1D models, including those calculated here as well as in \cite{segu03} and \cite{rugh15b}. The cause of these lower estimates from this 3D model is uncertain, although it is possibly related to how 1D codes simulate diurnal averages, or to different CO$_2$ abundances/boundary conditions. Further inter-model comparison will be needed in order to clarify these discrepancies \citep{cook22}. Overall our results for \oz\ formation are consistent with previous 1D studies and roughly similar to time-averaged results from 3D models. Despite this, it is important to remember that the night-sides of slowly rotating and tidally locked planets may have significantly more \oz\ than the day-side, introducing a phase-angle dependency on the amount of detectable \oz\ for observations. \subsection{Factors that impact the \om-\oz\ relationship \label{sec:o2o3relationship}} The \om-\oz\ relationship is strongly dependent on the host star as well as the planetary atmospheric composition. Here we briefly describe several ways the \om-\oz\ relationship can diverge from the results in this study.
We have shown that the \om-\oz\ relationship is highly influenced by the UV spectrum of the host star, both in terms of the total amount of UV flux and the FUV/NUV flux ratio, with FUV primarily responsible for creating \oz\ and NUV for destroying it. In this study we selected stellar hosts from a range of spectral types, but have not yet explored the variation of UV activity and FUV/NUV ratios within specific spectral types. This is of particular importance for K and M dwarfs, as they are subject to larger amounts of UV variability, and thus greater variations in the \om-\oz\ relationship for the planets such stars host (e.g., \citealt{fran13,fran16,youn16,loyd18}). For instance, the UV spectrum for our M5V host star comes from GJ 876, which displays low amounts of chromospheric activity \citep{fran16}. If the stellar host in question were a more active star of a similar type, such as Proxima Centauri (classified as an M5.5V star; \citealt{boya12,angl16}), an orbiting Earth-like planet would be subject to a different \om-\oz\ relationship due to the significant change in the UV spectral slope of the star. A more in-depth study of the impact of varying UV activity levels for K and M dwarf planetary hosts will be necessary to fully understand how \oz\ production would vary for different \om\ atmospheric abundances. Another important aspect of this study to note is that the initial conditions of atmospheric species are kept constant across all models to better understand how the \om-\oz\ relationship differs for different host star spectra. However, the \om-\oz\ relationship could be altered by a variety of scenarios due to the potentially huge diversity of terrestrial planet atmospheric compositions. The HO$_x$ and NO$_x$ catalytic cycles are the most prominent sinks for \oz\ on modern Earth, and could significantly impact \oz\ formation if there were an increase or decrease in the species powering these cycles.
Therefore, changes in the amount of stratospheric H$_2$O or N$_2$O would alter the efficiency of \oz\ destruction, as they are the primary sources of stratospheric HO$_x$ and NO$_x$. On modern Earth H$_2$O is generally prevented from traveling into the stratosphere by the cold trap, although it can be created in the stratosphere via CH$_4$ reactions with OH (Reaction~\ref{r:CH4_H2O}), implying that a change in CH$_4$ abundance will additionally impact \oz\ destruction. The impacts on the \om-\oz\ relationship as these abundances change will be explored at length in the next paper of this series. Reducing gases in general (e.g., CH$_4$, H$_2$) can impact \om\ and \oz\ levels, whether produced biologically or through volcanic outgassing (e.g., \citealt{hu12,blac14,greg21,cook22}). \oz\ can also be depleted by cometary impacts (e.g., \citealt{marc21}) and through solar flares (e.g., \citealt{pett18}). In addition, \oz\ can vary throughout different seasons on modern Earth \citep{olso18}. Oxygen-bearing species in general can also influence the \om-\oz\ relationship, especially in situations where \om\ is produced abiotically via photolysis-driven production (see \citealt{mead17} for a full review). In particular, CO$_2$-rich atmospheres may create significant amounts of \oz\ through CO$_2$ photolysis \citep{hu12,doma14,tian14,harm15,gao15} around host stars with high FUV/NUV flux ratios. FUV photons ($\lambda <$ 200 nm) photolyze CO$_2$, \be \m{CO}_2 + \m{h}\nu \rightarrow \m{CO} + \m{O}, \label{r:PCO2} \ee to produce an O atom (or the \od\ radical if $\lambda < 167$ nm; Reaction~\ref{r:PCO2_O1D}). Oxygen atoms can combine to create \om, \be \m{O} + \m{O} + M \rightarrow \m{O}_2 + M, \label{r:O_O2} \ee which can then combine with O atoms to create \oz\ (Reaction~\ref{r:O2M}).
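The abiotic chain above (CO$_2$ photolysis followed by the two three-body recombinations) can be sketched as a toy kinetics integration. All rate coefficients, the time step, and the initial CO$_2$ reservoir below are made-up dimensionless values for illustration only; they are not taken from our chemical network, and the scheme ignores CO chemistry, photolytic destruction of \om\ and \oz, and all back-reactions.

```python
# Toy forward-Euler integration of the abiotic chain sketched above:
# CO2 + hv -> CO + O;  O + O + M -> O2 + M;  O2 + O + M -> O3 + M.
# All constants are invented, dimensionless illustration values.
J_CO2 = 1e-3  # CO2 photolysis rate
K_O2 = 1e-2   # O + O + M -> O2 + M (M folded into the constant)
K_O3 = 1e-2   # O2 + O + M -> O3 + M (M folded into the constant)

def integrate(steps=20000, dt=0.05):
    co2, o, o2, o3 = 1.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        p_o = J_CO2 * co2        # O atoms released by CO2 photolysis
        r_o2 = K_O2 * o * o      # O2 formation consumes two O atoms
        r_o3 = K_O3 * o2 * o     # O3 formation consumes one O2 and one O
        co2 += -p_o * dt
        o += (p_o - 2 * r_o2 - r_o3) * dt
        o2 += (r_o2 - r_o3) * dt
        o3 += r_o3 * dt
    return co2, o, o2, o3

co2, o, o2, o3 = integrate()
print(co2, o, o2, o3)
```

The point of the sketch is qualitative: \om\ and then \oz\ accumulate from nothing but a CO$_2$ reservoir and photons, and the scheme conserves oxygen atoms (each photolyzed CO$_2$ yields one free O, bookkept as $\m{O} + 2\,\m{O}_2 + 3\,\m{O}_3$), mirroring how abiotic \oz\ can build up without any biological \om\ source.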
Because \om\ is photolyzed at shorter wavelengths than \oz\ ($\lambda <$ 240 nm, see Fig.~\ref{fig:XS}), stellar hosts with high incident FUV/NUV flux ratios can allow abiotic \oz\ accumulation without significant corresponding \om\ buildup \citep{hu12,doma14,tian14,harm15}. In such scenarios the \oz/\om\ ratio would be higher than what would be predicted if \oz\ were formed directly from \om, implying that a high \oz/\om\ ratio could indicate non-biological \oz\ (and \om) creation \citep{doma14}. However, it remains uncertain which types of stellar hosts would be favorable for this scenario. Some studies find that Sun-like stars can accumulate significant \oz\ through CO$_2$ photolysis if outgassing rates of reduced species are low \citep{hu12}, while others restrict this scenario to K and M dwarfs with high FUV/NUV flux ratios \citep{tian14,harm15}. This scenario might likewise be produced by F star hosts with their strong FUV fluxes, although with enough NUV flux, \oz\ destruction rates could prevent \oz\ buildup \citep{doma14}. Differences in model lower boundary conditions, which control the impact of different \om\ ground sinks, are likely to blame for the disparity between different studies in the capacity of \oz\ to accumulate \citep{doma14,tian14,harm15,mead17}. Despite uncertainties in \om\ surface sinks, it is clear that K and M dwarfs with high FUV/NUV ratios are susceptible to this scenario, as potentially are hotter stars with low abundances of reduced gaseous species. The effect of a CO$_2$-rich atmosphere on the \om-\oz\ relationship will be highly influenced by the host star and the atmospheric abundances of reduced gaseous species. Another method of creating \oz\ without the Chapman mechanism is the ``smog mechanism'', which can produce \oz\ photochemically using a volatile organic compound (e.g., CH$_4$) and NO$_x$.
This process is responsible for the smog pollution often occurring in large cities on modern Earth, but could also have occurred during the Proterozoic (2.5 Ga - 541 Ma) with high levels of CH$_4$ and NO$_x$ \citep{gren06}. Under some circumstances \cite{gren06} computed that nearly double the amount of \oz\ on modern Earth could have been produced with just 1\% PAL \om\ via the smog mechanism. Additionally, \cite{gren13} found that the \oz\ smog mechanism may become more efficient than the Chapman mechanism for habitable zone planets around late M dwarfs, whose low UV flux is less efficient at \om\ photolysis. Not only would \oz\ created primarily by the smog mechanism rather than the Chapman mechanism change the \om-\oz\ relationship, but ``smog'' \oz\ can be harmful for life. Smog mechanism \oz\ is created in the troposphere rather than the stratosphere, and could result in significant ground-level \oz. Although on Earth our stratospheric \oz\ protects life by shielding harmful UV, ground-level \oz\ on a smog-dominated planet can become fatal to Earth organisms at $\sim$1 ppm. Overall the \om-\oz\ relationship could be subject to large variations based both on the UV spectral slope of the host star and on atmospheric composition. Ozone formation via either CO$_2$ photolysis or hydrocarbon reactions would not be expected to resemble the \om-\oz\ relationship that ``Earth-like'' atmospheres would demonstrate. However, the FUV/NUV flux ratio of the host star may allow us to rule out certain scenarios without observations of the planetary atmosphere. \subsection{Can we infer \om\ abundance from an \oz\ measurement? \label{sec:O2fromO3}} Returning to the question that prompted this study: is \oz\ a reliable proxy for \om? Variations in the \om-\oz\ relationship (Sect.~\ref{sec:o2o3relationship}) would increase the difficulty of using \oz\ to infer \om\ abundance, and would require additional atmospheric information to provide the proper context.
For the sake of simplicity, we will discuss the possibility of inferring \om\ from an \oz\ measurement from our ``Earth-like'' models in this paper. But even in this simplified case, where we keep the initial conditions of all atmospheric species constant (apart from \om\ and \oz, which we let adapt to the different stellar hosts), precisely determining \om\ from \oz\ is not straightforward due to the nonlinear \om-\oz\ relationship. Figure~\ref{fig:O3-O2_MR_all} clearly demonstrates that not only does the amount of \oz\ created at different \om\ levels change significantly for different spectral types, but the trend that the \om-\oz\ relationship follows as \om\ is changed will also depend on the stellar host. Section~\ref{sec:chemresults} details how planets around hotter stars with higher UV flux (G0V, Sun, K2V) all experience their maximum \oz\ formation efficiency at \om\ levels lower than 100\% PAL, while there is a continuous (albeit nonlinear) decrease of \oz\ production for cooler hosts with lower UV flux (K5V, M5V). Whether a model atmosphere experiences an increase in \oz\ production as \om\ decreases (as seen for hotter stars) depends primarily on whether \om-photolyzing radiation ($\lambda <$ 240 nm) can reach deep into the atmosphere. The total amount of \oz\ depends on the FUV/NUV flux ratio of the host star as well, with FUV flux creating \oz\ and NUV flux causing its destruction (see Sects.~\ref{sec:stellarspectra} and \ref{sec:chemresults}). Although the K2V-hosted models demonstrate this effect with the maximum amount of \oz\ production occurring at 55\% PAL \om, for models at \om\ levels near 100\% PAL the K5V and M5V hosts have higher \oz\ abundances due to their larger FUV/NUV ratios. Therefore, even with knowledge of ``Earth-like'' conditions, predicting the \om-\oz\ relationship for a given star will require knowing both the total UV flux and the UV spectral slope of the host star.
Idealized planetary emission spectra of the 9.7 $\mu$m \oz\ feature in Fig.~\ref{fig:emission} show a non-trivial relationship between \om\ and \oz\ abundance and spectral feature depth for hotter stellar hosts, with a more ``straightforward'' correlation of \om\ and \oz\ abundances with feature depth for cooler hosts (K5V, M5V). This is due to the dependence of feature strength in emission spectra on the atmospheric temperature profile, with \oz\ measurements being particularly complicated by the fact that \oz\ strongly influences stratospheric heating. Measuring \oz\ abundance from an emission spectrum will require modeling of atmospheric temperature and pressure profiles for an accurate estimate, especially for stars emitting enough UV to create \oz\ and drive significant stratospheric heating. The smaller amounts of \oz\ created by cooler stars have less of an impact on stratospheric heating, and therefore maintain a more consistent temperature profile even for large variations in \om\ abundance. Even operating under the unlikely assumption that an accurate measurement of atmospheric \oz\ could be made, inferring \om\ abundance will still not be straightforward, especially for hotter host stars (G0V, Sun, K2V) where \oz\ does not always decrease as \om\ levels decrease. For example, in our Sun-hosted models the total integrated \oz\ column density at 150\% PAL \om\ is roughly the same as the amount at 5\% PAL \om\ (4.92$\times10^{18}$ cm$^{-2}$ and 4.80$\times10^{18}$ cm$^{-2}$, respectively). This implies that for hotter stellar hosts, it is unlikely \om\ could be well constrained from an \oz\ measurement at relatively high levels of \om. It could, however, be possible to use an \oz\ measurement to differentiate between pre- and post-GOE \om\ levels, as well as to infer the existence of a substantial \oz\ layer providing surface UV shielding.
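The degeneracy in the Sun-hosted example above can be made concrete with a small lookup sketch. Only the 150\% and 5\% PAL entries below come from the column densities quoted in the text; the intermediate values are invented purely to mimic the non-monotonic shape of the Sun-hosted curve in Fig.~\ref{fig:O3-O2_MR_all}.

```python
# Illustrative O2 level [% PAL] -> integrated O3 column [cm^-2] table.
# Only the 150 and 5 entries are values quoted in the text; the rest
# are made up to sketch the non-monotonic Sun-hosted curve.
O3_COLUMN = {
    150.0: 4.92e18,
    100.0: 5.2e18,
    10.0: 5.6e18,  # near the peak-production O2 level
    5.0: 4.80e18,
    0.1: 1.2e18,
}

def o2_candidates(measured_o3, tol=0.05):
    """All tabulated O2 levels whose O3 column matches within a fractional tolerance."""
    return sorted(
        o2 for o2, col in O3_COLUMN.items()
        if abs(col - measured_o3) / measured_o3 <= tol
    )

# A single O3 column measurement near 4.9e18 cm^-2 is consistent with
# both the ~5% and ~150% PAL O2 levels: the mapping has no unique inverse.
print(o2_candidates(4.9e18))
```

A low measured column (e.g., near the 0.1\% PAL entry) returns a single candidate, illustrating why \om\ inference is more tractable at low \oz\ abundances and for the monotonic curves of cooler hosts.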
Due to the consistent decrease in \oz\ abundance with decreasing \om\ for cooler hosts, it appears that inferring \om\ from \oz\ may be much simpler for planets orbiting cooler stars than for those around hotter stars. It is important to note that specific knowledge of the UV spectral slope for cool K and M dwarfs will be extremely important for modeling \oz\ levels, especially due to the increased likelihood of \oz\ buildup via abiotic means and the diversity of activity levels (hence FUV/NUV flux ratios) around such stars. \subsection{Is it necessary to constrain \om\ abundance for \oz\ to be a useful biosignature? \label{sec:O3biosignature}} Although inferring \om\ levels precisely from \oz\ measurements will not be possible for hot stellar hosts and will still require additional atmospheric context and knowledge of the UV spectral slope for cooler hosts, what does this mean for \oz\ as a biosignature? Would it be necessary to infer \om\ for \oz\ to be a useful indicator of life, or could it serve as a promising biosignature without precise \om\ information? Two of the strongest arguments against \om\ as a biosignature are 1) it can be produced abiotically, and 2) it has been at relatively high abundances for only a small fraction of Earth's geological history (see reviews in \citealt{mead17,mead18}). We examine these arguments as they pertain to \oz\ as a biosignature. The multiple proposed pathways for abiotic \om\ production will prevent \oz\ from being a ``standalone'' biosignature as well. These mechanisms include production via CO$_2$ photolysis as discussed in Sect.~\ref{sec:O2fromO3}, as well as via H$_2$O photolysis, either from an extremely active pre-main sequence star \citep{luge15,tian15} or in an atmosphere that has allowed H$_2$O to enter the stratosphere due to the lack of a cold trap caused by low abundances of non-condensable gases \citep{word14}.
Ruling out these scenarios could be possible via detections or non-detections of gaseous species that would be produced or destroyed during these processes. For example, \om\ and \oz\ abiotic buildup from CO$_2$ photolysis could be revealed via a detection of CO, sometimes called an ``antibiosignature'' (for detailed descriptions of these mechanisms and their spectral discriminants see \citealt{mead17,mead18}). The potential for abiotic production means that contextual knowledge of an atmosphere will be required to use either \om\ or \oz\ as a biosignature. Abiotic buildup from CO$_2$ photolysis around high FUV/NUV flux ratio stellar hosts could potentially impact \oz\ more than \om, as it is possible to accumulate \oz\ more easily than \om\ in this scenario, and could potentially allow simultaneous detection of abiotic \oz\ and CH$_4$ under certain conditions \citep{doma14}. However, predictions of abiotic \om\ and \oz\ buildup are dependent on the lower boundary conditions of the model in question, so these estimates vary \citep{hu12,doma14,tian14,harm15,gao15}. Detections or non-detections of CO and CO$_2$ will be especially important in assessing the origin of an \oz\ detection. The second main argument against \om\ as a reliable biosignature (even when accounting for abiotic sources) is that \om\ levels have been relatively high for only a short period of Earth's geological history. Oxygenic photosynthesis is thought to have been first used by cyanobacteria $\sim$2.7 Ga, although \om\ buildup during the GOE is not thought to have occurred until $\sim$2.5 Ga (e.g., \citealt{poul20}). \om\ levels comparable to modern Earth were not reached until the Phanerozoic (541 Ma - present day), sparked by the Cambrian explosion, when land began to be colonized by plants \citep{lent17,dahl20}.
Before the GOE, \om\ levels are expected to have been well below 10$^{-3}$\% PAL, and they potentially remained relatively low during the majority of the Proterozoic (2.5 Ga - 541 Ma), with estimates ranging from $\sim$0.3-10\% PAL \om\ (e.g.,\ \citealt{catl18}). Even if the lowest \om\ estimates for the Proterozoic were correct ($\sim$0.01\% PAL \om), there is evidence of an \oz\ layer after 2.4 Ga \citep{croc18}. Although an \om\ detection would be extremely difficult at this abundance, it has been suggested that \oz\ could reveal this undetectable \om\ (e.g., \citealt{lege93,desm02,segu03,lege11,harm15}). The results of this paper further support this point, especially for planets around hotter stars (see Fig.~\ref{fig:O3-O2_MR_all}). Even at a level of 0.01\% PAL \om, the G0V- and Sun-hosted planets still produce $\sim$15\% of the amount of \oz\ they do at 100\% PAL \om. This number falls to $\sim$2\% for the M5V-hosted planet, although even this represents a far less drastic decrease in \oz\ than the corresponding decrease in \om. This implies that, especially for planets orbiting hotter stars, \oz\ is a much longer-lived detectable biosignature than \om\ for an Earth-like planet, as detections may be sensitive to Proterozoic \oz\ levels. Although gaining precise information about \om\ abundance from an \oz\ measurement will be extremely difficult or impossible (see discussion in Sect.~\ref{sec:O2fromO3}), knowing the \oz\ abundance alone would still provide valuable information about the atmosphere. There appears to be a ``bistability limit'' for atmospheric \om, implying that certain \om\ levels would not be stable in the atmosphere due to \om\ sinks and geochemical cycles as the atmosphere switches from reducing to oxidizing (e.g., \citealt{segu03,gold06}). \cite{greg21} calculated that there are only a few stable solutions with \om\ abundances between 3$\times10^{-6}$ and 1\% PAL for an Earth-like planet orbiting the Sun. 
The existence of this ``bistability limit'' could explain the $\sim$300 Myr delay between the advent of oxygenic photosynthesis and appreciable \om\ accumulation in the atmosphere \citep{gold06}. \oz\ abundance falls off drastically for all our host stars under 0.01\% PAL \om, implying that an \oz\ detection would allow us to distinguish between pre- and post-GOE \om\ levels with relative ease. In the search for Earth-like planets, \oz\ appears to be a viable biosignature for a much longer portion of Earth's history, potentially allowing us to infer the existence of oxygenic photosynthesis over a far longer interval than an \om\ detection would. Although \oz\ is not created by life, its UV shielding capabilities could allow estimates of whether the surface environment is safe for life. As seen in Fig.~\ref{fig:surfUV} and Table~\ref{tab:UV_all}, the amount of UV flux reaching the planetary surface begins to converge quickly to the UV incident upon the planet when \om\ abundance drops below 1\% PAL, and it has been predicted that \om\ levels below this will not be efficient at preventing DNA damage (e.g., \citealt{segu03}). If an upper \om\ ``bistability limit'' indeed exists at 1\% PAL \citep{greg21}, a detection of \oz\ in a planetary atmosphere could imply a certain amount of UV shielding, and thus the potential for surface life. However, it is important to remember that the first evidence of life dates back to 3.7 Ga, long before the GOE, or even oxygenic photosynthesis \citep{rosi99}. Although \oz\ may be a longer-lived biosignature than \om\ and can indicate substantial UV surface shielding, a non-detection of \oz\ (or \om) could not rule out the existence of life. Although the surface UV environment would have been harsh before the GOE, it is possible that, even without a UV screen, life could thrive in the photic zone of the ocean, and perhaps colonize land \citep{cock07}. 
It has even been suggested that a significant amount of UV may have been necessary to synthesize prebiotic molecules (e.g., \citealt{pate15,ranj16,rimm18}). Even if \oz\ is substituted for \om\ in biosignature searches, life on pre-GOE Earth would remain undetectable. \section{Conclusions} In this first part of our paper series we show that the nonlinear O$_2$-O$_3$ relationship varies significantly for model atmospheres of planets orbiting different types of host stars, with different trends for planets with hotter host stars versus those with cooler host stars. As seen in Fig.~\ref{fig:O3-O2_MR_all}, planets orbiting hotter host stars display peak \oz\ abundance at lower \om\ levels than modern Earth, while planets with cooler hosts show \oz\ decreasing along with \om. The increase in \oz\ at lower \om\ levels for hotter host stars is due to the \oz\ layer shifting downward in the atmosphere as the \om\ level (and with it the atmosphere's ability to absorb UV) decreases. At these deeper and denser levels of the atmosphere the 3-body reaction that creates \oz\ (Reaction~\ref{r:O2M}) allows more efficient \oz\ production than at high \om\ abundances. Cooler stars do not experience this effect, since a stronger incident FUV flux is required to push \oz\ formation deep enough in the atmosphere to allow for much faster production. As \om\ decreases in the atmosphere, photolysis of many gaseous species as well as \od\ production will occur at lower atmospheric levels. The biologically relevant molecules H$_2$O, CH$_4$, and N$_2$O all experience upper atmospheric depletion to different degrees, with these effects significantly more prominent around host stars with higher UV fluxes (Fig.~\ref{fig:genchem}). As \om\ decreases and photolysis rates increase, HO$_x$ species are primarily pushed to lower altitudes rather than destroyed, while NO$_x$ species are both destroyed by photolysis and produced at lower atmospheric levels (Fig.~\ref{fig:catalytic}). 
UVA and UVB wavelengths are only partially shielded by \oz, so the number of photons in this wavelength range that reach the planetary surface scales with the incident UVA and UVB flux. For biologically damaging UVC photons, however, there is a much stronger dependence on both \om\ and \oz\ abundance. At high \om\ levels, our hottest host star (G0V) had the lowest surface UVC flux, despite also having the highest incident UVC flux (Fig.~\ref{fig:surfUV}). As \om\ and \oz\ abundances decrease, the UVC surface levels begin to converge to the incident UVC flux as UV shielding becomes inefficient. Ozone features in planetary emission spectra were found to require knowledge of the atmospheric temperature profiles, as the depth of a feature is dictated by the temperature difference between the emitting and absorbing layers of the gaseous species. Since \oz\ NUV absorption is a significant source of stratospheric heating, a large amount of \oz\ combined with significant incident NUV flux will cause a smaller temperature difference between the stratosphere and the planetary surface, resulting in a shallower spectral feature (Fig.~\ref{fig:emission}). For cooler stars with slower \oz\ production, and therefore less stratospheric temperature inversion, \oz\ spectral features are more intuitive to interpret. Overall it is clear that interpreting any observation of \oz\ will require the UV spectrum of the host star as well as photochemical and climate modeling of the planetary atmosphere. Now that we have explored the \om-\oz\ relationship and its impact on planetary emission spectra, let us return to our original question: is \oz\ a reliable proxy for \om? In short, the complicated nature of the \om-\oz\ relationship tells us that \oz\ is not a reliable tracer of \om. 
Our results show that for hotter stars, using \oz\ as a precise tracer for \om\ will not be possible due to the degeneracies in the \om-\oz\ relationship, and for cooler stars it will be very difficult without knowledge of the UV spectrum of the host star as well as the planetary atmospheric composition. However, an \oz\ measurement on its own is still insightful, even if it does not provide precise information on \om\ abundance. Not only is \oz\ detectable in trace amounts (unlike \om), it additionally allows for an assessment of the UV surface environment of the planet. There is likely no ``standalone'' atmospheric biosignature, but either \om\ or \oz\ along with atmospheric context could provide evidence of oxygenic photosynthesis. Both will require detections of other gaseous species to rule out various mechanisms for abiotic \om\ or \oz\ production. It is important to note, however, that a CO$_2$-rich planet may be more likely to have an abiotic buildup of \oz\ without \om, a strike against \oz\ as a biosignature gas (see Sect.~\ref{sec:o2o3relationship} for a detailed discussion). On the other hand, there is a strike against \om\ in comparison to \oz: \om\ has only existed on Earth in relatively high amounts for a short fraction of Earth's geological history, whereas ozone is detectable in trace amounts, potentially making it a longer-lived detectable biosignature for Earth-like planets. It seems that neither \om\ nor \oz\ is inherently a ``better'' biosignature than the other; rather, they give different information and can be more or less useful depending on the scenario. With knowledge of the UV spectrum of the host star, along with careful climate and photochemistry modeling, we can begin to understand the \om-\oz\ relationship and use \oz\ as a reliable indicator of oxygenic photosynthesis. 
\begin{acknowledgements} We thank both the anonymous referee and journal editor for their comments which helped improve the clarity of this manuscript. All computing was performed on the HPC cluster at the Technical University of Denmark \citep{hpc}. This project is funded by VILLUM FONDEN. \end{acknowledgements} \bibliographystyle{aa} \bibliography{main}{}
Title: Alternating north-south brightness ratio of Ganymede's auroral ovals: Hubble Space Telescope observations around the Juno PJ34 flyby
Abstract: We report results of Hubble Space Telescope observations from Ganymede's orbitally trailing side which were taken around the flyby of the Juno spacecraft on June 7, 2021. We find that Ganymede's northern and southern auroral ovals alternate in brightness such that the oval facing Jupiter's magnetospheric plasma sheet is brighter than the other one. This suggests that the generator that powers Ganymede's aurora is the momentum of the Jovian plasma sheet north and south of Ganymede's magnetosphere. Magnetic coupling of Ganymede to the plasma sheet above and below the moon causes asymmetric magnetic stresses and electromagnetic energy fluxes ultimately powering the auroral acceleration process. No clear statistically significant time variability of the auroral emission on short time scales of 100s could be resolved. We show that electron energy fluxes of several tens of mW m$^{-2}$ are required for its OI 1356 \AA$\;$ emission making Ganymede a very poor auroral emitter.
https://export.arxiv.org/pdf/2208.09057
\title{Supporting Information for "Oscillating north-south brightness ratio of Ganymede's auroral ovals: Hubble Space Telescope observations around Juno's PJ34 flyby"} \authors{ Joachim Saur\affil{1}, Stefan Duling\affil{1}, Alexandre Wennmacher\affil{1}, Clarissa Willmes\affil{1}, Lorenz Roth\affil{2}, Darrell F. Strobel\affil{3}, Fr\'ed\'eric Allegrini\affil{4,10}, Fran Bagenal\affil{5}, Scott J. Bolton\affil{4}, Bertrand Bonfond\affil{6}, George Clark\affil{7}, Randy Gladstone\affil{4}, T.K. Greathouse\affil{4}, Denis C. Grodent\affil{6}, Candice J. Hansen\affil{8}, W.S. Kurth\affil{11}, Glenn S. Orton\affil{9}, Kurt D. Retherford\affil{4}, Abigail M. Rymer\affil{7}, A.H. Sulaiman\affil{11} } \affiliation{1}{University of Cologne, Institute of Geophysics and Meteorology, Cologne, Germany} \affiliation{2}{KTH, Royal Institute of Technology, School of Electrical Engineering, Stockholm, Sweden} \affiliation{3}{Johns Hopkins University, Baltimore, MD, USA} \affiliation{4}{Southwest Research Institute, San Antonio, TX, USA} \affiliation{5}{University of Colorado, Boulder, CO, USA} \affiliation{6}{Universit\'{e} de Li\`{e}ge, LPAP - STAR Institute, Li\`{e}ge, Belgium} \affiliation{7}{Applied Physics Laboratory Johns Hopkins University, Laurel, MD, USA} \affiliation{8}{Planetary Science Institute, Tucson, AZ, USA} \affiliation{9}{Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA} \affiliation{10}{Department of Physics and Astronomy, University of Texas at San Antonio, San Antonio, TX, USA} \affiliation{11}{Department of Physics and Astronomy, University of Iowa, IA, USA} \begin{article} \end{article} \noindent\textbf{Contents of this file} \begin{enumerate} \item Text S1, S2 \item Figure S1 \item Tables S1, S2 \end{enumerate} \noindent\textbf{Introduction} The supporting information supplied here provides additional text, one figure and two tables to illustrate in more 
detail the observations and statements made in the main manuscript. \noindent\textbf{Text S1.} The essential physics in the calculation of UV emissions from an atmosphere/exosphere of O and O$_2$, including electron impact, solar resonance scattering, and solar surface reflection, has been discussed in \citeA{hall95}. The ratio of airglow/auroral emission rates for OI 1356/OI 1304 exceeds 1.3 on Ganymede \cite{hall98} and is diagnostic that OI 1356 is mostly due to dissociative electron impact excitation of O$_2$. Our calculations were performed with the best available measured O$_2$ emission cross sections from \citeA{kani03}. With these O$_2$ cross sections the emission ratio of OI 1356/OI 1304 exceeds 2 over the energy range of these measurements. Performing a comparable calculation of the same emission ratio for electron impact on O atoms yields a ratio that does not exceed 0.4. The measured O$_2$ emission cross sections were extrapolated with the Bethe-Oppenheimer expression $\sigma_{1356}(E) = 4.05 \times 10^{-19}\, E^{-1} \ln(44 E)$ cm$^2$, for $E$ in keV, above 0.6 keV. Based on \citeA{hall98}, we adopt an O$_2$ radial column density of 3 $\times$ 10$^{14}$ cm$^{-2}$, which implies that the exobase is near the surface of Ganymede. These calculations with updated O$_2$ cross sections are displayed in Figure \ref{f:darrell}. They confirm the previous results in Figure 3 of \citeA{evia01} up to approximately 300 eV and, using the cross sections extrapolated with the Bethe-Oppenheimer approximation, replace the intensities predicted for electrons accelerated into the keV energy range. \noindent\textbf{Text S2.} The error bars in Figures 3 and 4 are calculated based on the counts provided in the flt-files. Each data point in Figure 3a and Figure 4 is calculated based on the counts within the respective exposures and the considered area on Ganymede. 
The total signal $S = A + R + B$ within the specified areas of Ganymede is composed of photons from the auroral emission $A$, photons due to reflected light on the surface of Ganymede $R$, and background noise $B$ from the detector and the sky. The signal-to-noise ratio of the Poisson distributed photon fluxes is then given by $\mathrm{SNR} = (S - R - B)/(S + R + B)^{1/2}$. The error bars on the ratio of signals in Figure 3b are calculated based on standard error propagation \cite{bevi03}. \newpage \begin{table*} \caption[]{Exposure details of HST/STIS observations of Ganymede in support of the Juno flyby on 2021-06-07 (ID: 16499). Juno's closest approach to Ganymede occurred at 17:35 UTC, including the light travel time of 39 min to Earth. Orbit 3 of visit 1 and orbit 1 of visit 2 were split up into two exposures each for technical reasons. $^a$: UTC at exposure start (light travel time included), $^b$: magnetic latitude, $^c$: disk averaged auroral brightness for OI 1356 \AA$\;$ including limb emission up to 200 km, normalized to $\pi (R_G + 200 \mbox{km})^2$, for complete HST orbits. } \label{table:1} \centering \begin{tabular}{c c c r r r r r r r r} % \hline\hline Visit & Orbit & Exp & Root name &Obs. Time$^a$ &Exp. 
Time&$\Psi_m$ $^b$&brightness$^c$ \\ \# & \# & \# & & hh-mm-ss & s & degree&R\\ \hline 1 & 1 & 1 & oejj01010 & 00:54:04 & 1631&-3.5 &128.8 $\pm$4.1 \\ \hline 1 & 2 & 2 & oejj01020 & 02:29:29& 2291&4.7 & 66.3 $\pm$2.5\\ \hline 1 & 3 & 3 & oejj01030 & 03:43:16 & 1000 & 9.3& 50.8 $\pm$2.3\\ 1 & 3 & 4 & oejj01040 & 04:04:38 & 1076& 9.5& \\ \hline 2 & 1 & 5 & oejj02010 & 19:44:20& 866& -9.5&67.3 $\pm$2.9 \\ 2 & 1 & 6 & oejj02020 & 20:01:06& 800 &-9.5&\\ \hline 2 & 2 & 7 & oejj2A010 & 21:32:56 & 1514&-5.5&88.2 $\pm$3.5\\ \hline 2 & 3 & 8 & oejj2A020 & 23:08:23 & 2331&2.5& 76.2 $\pm$2.6\\ \hline \end{tabular} \end{table*} \newpage \begin{table*}[b] \caption[]{ Statistical properties of the 100s sub-exposures of all 6 HST orbits: $\bar{b}$ is the average brightness during each orbit, $\sigma_{\mathrm{sub-exp}}$ is the standard deviation of the brightness variability of the sub-exposures during each orbit, $\bar{\sigma}_{\mathrm{indiv}}$ is the average over the individual brightness uncertainties in each sub-exposure based on count statistics, and $r = \sigma_{\mathrm{sub-exp}} / \bar{\sigma}_{\mathrm{indiv}}$ is the ratio of the brightness variability within the sub-exposures to the averaged one-sigma brightness uncertainty of the individual sub-exposures; a value of $r < 1$ indicates that the variability could be consistent with statistical noise alone. 
} \label{table:2} \centering \begin{tabular}{c c c r r r r r r r r} % \hline \\ \vspace*{0.1cm} Visit & Orbit & Hemisphere & $\bar{b}$& $ \sigma_{\mathrm{sub-exp}} $ & $ \bar{\sigma}_{\mathrm{indiv}} $ & $r= \sigma_{\mathrm{sub-exp}} / \bar{\sigma}_{\mathrm{indiv}} $\\ \# & \# & &R &R & R & \\ \hline 1 & 1 & North & 231.1 &39.7 & 47.3 & 0.84 \\ 1 & 1 & South & 148.0 &39.1 & 35.5 & 1.10 \\ 1 & 2 & North & 42.7 & 32.7 & 30.3 & 1.08 \\ 1 & 2 & South & 108.1& 21.5 & 30.7 & 0.70\\ 1 & 3 & North & 58.3 &27.6 & 31.7& 0.87 \\ 1 & 3 & South & 65.5 &26.5 & 26.3& 1.01 \\ 2 & 1 & North & 99.7& 35.1 & 37.4& 0.94\\ 2 & 1 & South & 64.0 &28.9 & 23.4& 1.24\\ 2 & 2 & North & 118.1&38.1 & 41.1& 0.92\\ 2 & 2 & South & 92.4 &28.4 & 29.3& 0.97\\ 2 & 3 & North & 54.6 &30.9 & 36.5& 0.85\\ 2 & 3 & South & 121.2 &32.9 & 34.6& 0.95\\ \hline average && North & &34.0 &37.4 & 0.91\\ average && South & &29.5 & 30.0& 0.99 \\ \hline \end{tabular} \end{table*} \newpage
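The cross-section extrapolation of Text S1 and the count-statistics estimates of Text S2 can be sketched in a few lines of Python. This is a minimal sketch, not the authors' pipeline: the function names are ours, and only the formulas quoted above (the Bethe-Oppenheimer expression, the SNR definition, and standard error propagation for a ratio) are taken from the text.

```python
import math

def sigma_1356(E_keV):
    """Bethe-Oppenheimer extrapolation of the O2 electron-impact
    cross section for OI 1356 A emission (Text S1):
    sigma = 4.05e-19 * E^-1 * ln(44 E) cm^2, with E in keV (valid
    above 0.6 keV)."""
    return 4.05e-19 * math.log(44.0 * E_keV) / E_keV

def snr(S, R, B):
    """Signal-to-noise ratio of Text S2: the total signal
    S = A + R + B contains the auroral counts A, surface-reflected
    light R and background B, so the auroral signal is A = S - R - B
    and the Poisson noise of the estimate is (S + R + B)^(1/2)."""
    return (S - R - B) / math.sqrt(S + R + B)

def ratio_error(a, sig_a, b, sig_b):
    """Standard error propagation for a brightness ratio q = a/b
    with independent uncertainties (as used for the north/south
    ratios of Figure 3b)."""
    q = a / b
    return q, abs(q) * math.sqrt((sig_a / a) ** 2 + (sig_b / b) ** 2)
```

For example, a region with total counts $S=10^4$, reflected-light contribution $R=2000$ and background $B=3000$ has an auroral signal of 5000 counts and an SNR of roughly 41.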
Title: Dense Nuclear Matter with Baryon Overlap
Abstract: The possibility of new short-distance physics applicable inside the cores of NS is incorporated into the equation of state generated by the quark-meson coupling model. The contribution of this new physics to the energy density is taken to be proportional to the amount of overlap between the quark cores of the baryons involved. With no change to the properties of symmetric nuclear matter at saturation density, including an incompressibility compatible with data on giant monopole resonances, one can sustain neutron stars with a maximum mass $M_{max}>2.1$ M$_\odot$, even when hyperons are included.
https://export.arxiv.org/pdf/2208.09331
\section{Results} \label{sec:Results} The EoS generated by the QMC model supplemented by the phenomenological overlap term (Eq.~\ref{eq:overlap}) will be studied either with or without the term cubic in the $\sigma$ field. The latter is labelled $\sigma^3$, with its strength determined by the coefficient $\lambda_3$, which is chosen to be $\lambda_3=0.02$ fm$^{-1}$ or $\lambda_3=0.00$ fm$^{-1}$. The choice $\lambda_3=0.02$ fm$^{-1}$ is motivated by the study of the energies of giant monopole resonances by Martinez~\textit{et al.}~\cite{Martinez:2018xep}, where it was found that this was the smallest value of $\lambda_3$ capable of producing a value for the nuclear incompressibility compatible with that data. Where the overlap term is explicitly included, the EoS will be denoted \textit{Overlap}. When it is not included the EoS will be denoted \textit{No Overlap}. If the derived EoS includes hyperons, it will be denoted \textit{F-QMC}, while the case where only nucleons are included will be denoted \textit{N-QMC} for comparison. Given that the inclusion of $\lambda_3 = 0.02$ fm$^{-1}$ leads to a maximum NS mass that is unacceptably low when hyperons are included, various overlap parameters ($E_0$ and $b$) were explored: \begin{enumerate} \label{list:over} \item Overlap energy: $E_0=3500$, $4500$, $5500$ MeV \item Range parameter: $b=0.4$, $0.5$ fm \end{enumerate} The upper limits on both the range and strength parameters were set by the requirement that there be no significant change in the properties of symmetric nuclear matter at saturation density. Where the parameters are left unspecified in what follows, they were chosen to be $E_0=5500$ MeV and $b=0.5$ fm. These are the preferred choices, in that they lead to an acceptable maximum mass while not altering the nuclear matter parameters. The bulk properties of the NS are used to test the viability of the model at high density. 
The properties of interest are the mass-radius relationship and the tidal deformability. For brevity, only PSR J0740+6620 is shown when constraining the QMC EoS. The mass is taken to be $M=2.072^{+0.067}_{-0.066}$ M$_\odot$, with a 68\% interval around the median~\cite{Riley:2021pdl, Fonseca:2021wxt}. GW170817 serves as a constraint for the tidal deformability, $\Lambda_{\textrm{M}}$~\cite{LIGOScientific:2018cki}. \subsection{The Choice of Parameters} \label{sec:parameterfixing} The mass of the $\sigma$ meson is set at 700 MeV, while the masses of the other mesons are taken from their experimental values ($m_\delta=983$ MeV, $m_\rho=770$ MeV, $m_\omega=783$ MeV). In terms of these masses the meson-nucleon coupling strengths in free space are often written as \begin{eqnarray} G_\sigma=\frac{g_\sigma^2}{m_\sigma^2}, \quad G_\omega=\frac{g_\omega^2}{m_\omega^2}, \quad G_\rho=\frac{g_\rho^2}{m_\rho^2}. \end{eqnarray} $G_\delta=3$ fm$^2$ is chosen as the preferred value for the coupling of the $\delta$ field~\cite{Motta:2019tjc,Haidenbauer:1992tn}, but for comparison, in Appendix A all calculations are repeated for $G_\delta=0$ fm$^2$. In all cases (with and without $\lambda_3$ and $G_\delta$) we require that the bag overlap terms do not alter the properties of nuclear matter at saturation density. Those parameters are fixed at the typical values: saturation density $n_0=0.16$ fm$^{-3}$, binding energy per nucleon $E/A = -15.8$ MeV at $n_0$, and symmetry energy $S=30$ MeV \cite{Shapiro:1983du, Glendenning:1997wn, Li:2013ola}. The incompressibility and the slope of the symmetry energy are typically taken to lie in the ranges $K_\infty=250\pm50$ MeV \cite{Stone:2014wza, Dutra:2012mb} and $L_0=60\pm20$ MeV \cite{Li:2013ola}. 
We choose to use $K_\infty=260$ MeV and $L_0=62$ MeV, respectively, because, while the relation between the incompressibility and the energies of the giant monopole resonances (GMR) is somewhat complicated~\cite{Sharma:2008uy,Piekarewicz:2002jd}, calculations of the GMR using the QMC EDF tend to favor values of $K_\infty$ at the lower end of this range~\cite{Stone:2014wza}. As reported by Guichon~\textit{et al.}~\cite{Guichon:2018uew}, the inclusion of $\lambda_3 =0.02$ fm$^{-1}$ lowers $K_\infty$ by about 10\%, leading to the value 260 MeV noted earlier. This remains unchanged for $b = 0.4$ fm with $E_0$ ranging from 3500 to 5500 MeV, while it increases slightly (from 262 to 264 MeV) over this range of $E_0$ for $b=0.5$ fm. So long as $b<0.6$ fm, $K_\infty$ remains below 300 MeV and thus within the acceptable limits. On the other hand, for $\lambda_3 = 0.00$ fm$^{-1}$ the incompressibility is 295 MeV (rising to 298 MeV for $b=0.5$ fm). Physically, the range parameter sets the scale at which the extra repulsive force acts in medium. Since saturation density is relatively low, the overlap term has essentially no influence there and thus does not affect the properties of finite nuclei. In a NS, the gravitational force compacts the baryonic matter well past saturation density, allowing the baryons eventually to overlap~\cite{Ozel:2016oaf}. The extra repulsion induced by baryon overlap stiffens the EoS at supra-nuclear densities. This is explored in subsection~\ref{sec:EoS}. \subsection{\texorpdfstring{NS composition under $\beta$-Equilibrium}{NS composition}} \label{sec:composition} QMC has previously demonstrated the appearance of only the $\Lambda$ and $\Xi^{0,-}$ hyperons~\cite{Stone:2019blq,Whittenbury:2013wma}. 
Because of the enhancement of the color hyperfine interaction in-medium~\cite{Guichon:2008zz} and the repulsive three-body force generated by the scalar polarisability, the $\Sigma^{\pm,0}$ baryons experience significant repulsion and are not energetically allowed at densities $n_b\leq1.2$ fm$^{-3}$. The same physical mechanism explains why $\Sigma$-hypernuclei do not exist and also leads to the exclusion of $\Delta$ baryons in NS in this model~\cite{Motta:2019ywl}. The species fractions inside a NS, as predicted by F-QMC, are shown in Fig.~\ref{fig:specfrac}. F-QMC predicts no hyperons below 3 $n_0$. The overlap term has no bearing on the species fraction because the repulsion introduced by the overlap is independent of quark content and hence does not discriminate between baryon species (see Eq.~(\ref{eq:overlap})). There is a difference in the appearance of hyperons with (solid) and without (dashed) $\lambda_3$. The $\Xi^{0,-}$ appears slightly later when the term in $\sigma^3$ is present. The relative abundances are also modified. % \begin{table} \caption{\label{tb:chemicalpotentials} The chemical potentials, $\mu_i$ in MeV, for each baryon at saturation density, with and without overlap.} \begin{ruledtabular} \centering \begin{tabular}{c|ccccc} & \multicolumn{5}{c}{$\lambda_3=0.02$ fm$^{-1}$}\\ F-QMC & n & p & $\Lambda$ & $\Xi^0$ & $\Xi^-$ \\ \hline Overlap & 970 & 857 & 1076 & 1300 & 1326 \\ No Overlap & 970 & 857 & 1076 & 1300 & 1326 \\ \hline & \multicolumn{5}{c}{$\lambda_3=0.00$ fm$^{-1}$}\\ F-QMC & n & p & $\Lambda$ & $\Xi^0$ & $\Xi^-$ \\ \hline Overlap & 970 & 856 & 1080 & 1298 & 1333 \\ No Overlap & 970 & 856 & 1080 & 1298 & 1333 \\ \end{tabular} \end{ruledtabular} \end{table} The chemical potentials of the nucleons, hyperons and leptons are unaffected by changes in the overlap parameters. Table~\ref{tb:chemicalpotentials} lists the chemical potentials of the baryon species with and without overlap ($E_0=5500$ MeV and $b=0.5$ fm). 
We see that the $\Lambda$ experiences an attractive potential of $35-40$ MeV, while the attraction felt by the $\Xi^0$ is considerably smaller. These values are consistent with the fact that the $\Lambda$ is bound in the 1s-state in Pb by around 26 MeV~\cite{Hashimoto:2006aw}, as well as with the recent observation of a $\Xi$ weakly bound to $^{14}$N~\cite{Nakazawa:2015joa,Shyam:2019laf,Guichon:2008zz}. \subsection{QMC EoS} \label{sec:EoS} The low density crustal region enveloping the core of a NS is populated by nuclei with increasing neutron excess~\cite{Chamel:2015oqa,Yakovlev:2000jp,Antic:2020zuk}. Here the QMC EoS, which is an appropriate description of nuclear matter, is matched onto the low density EoS provided by Hempel and Schaffner-Bielich \cite{Hempel:2009mc, Hempel:2011mk}. The Hempel and Schaffner-Bielich model is a relativistic mean field model for interacting nucleons which takes into account excluded volume effects. In what follows the QMC EoS is matched to the crust region at $n\approx0.7$ $n_0$, which is appropriate in describing the transition of nuclei to nuclear matter at the crust-core boundary. The derived F-QMC (solid) and N-QMC (dashed) EoS, with crust, are shown in Fig.~\ref{fig:EoS}. Note that in all figures, unless otherwise stated, the overlap case corresponds to $b=0.5$ fm and $E_0 = 5500$ MeV. As expected, the hyperons soften the EoS when compared to that for nucleons only. While the inclusion of the overlap term has no influence on the nuclear matter parameters at saturation density, it is clear that the EoS is significantly stiffer at high density. In Fig.~\ref{fig:EoS} we see that the effect of the overlap term becomes considerable at energy densities of order $250-350$ MeV-fm$^{-3}$, or $n>2$ $n_0$. Furthermore, the degree of softening induced by the hyperons is reduced at higher densities. Comparing the two panels in Fig.~\ref{fig:EoS}, the cubic term in $V(\sigma)$ acts to soften the EoS, whether hyperons are included or not. 
In order to make the QMC EoS generated here (without crust) widely available, an analytic function has been fitted to the F-QMC EoS for both $\lambda_3=0.02$ fm$^{-1}$ and $\lambda_3=0.00$ fm$^{-1}$. The fit is valid for energy densities between $0$ and $1600$ MeV-fm$^{-3}$, but must be matched to a crust EoS for $n<0.70$ $n_0$, corresponding to an energy density $\epsilon\approx105$ MeV-fm$^{-3}$. Eq.~(\ref{eq:powerlaw}) takes the energy density in MeV-fm$^{-3}$ as its argument and returns the pressure in MeV-fm$^{-3}$. \begin{eqnarray} \label{eq:powerlaw} P(\epsilon)=N_1\epsilon^{p_1}+N_2\epsilon^{-p_2}. \end{eqnarray} The fit error is given by Eq.~(\ref{eq:error}), and the parameters are summarised in Table~\ref{tb:AnalyticEoS} below, with the energy density split into different regions. Eq.~(\ref{eq:powerlaw}) is not suitable for computing the speed of sound (Eq.~(\ref{eq:sound})), as the domain boundaries do not precisely correspond to the appearance of new species. \begin{eqnarray} \label{eq:error} \textrm{RMSE (\%)} = \sqrt{\frac{1}{N} \sum_i^N \frac{(x_i-y_i)^2}{y_i^2} }\times100. \end{eqnarray} \begin{table} \caption{\label{tb:AnalyticEoS} Parameters for Eq.~(\ref{eq:powerlaw}), corresponding to the F-QMC EoS with $\lambda_3=0.02$ fm$^{-1}$ and $\lambda_3=0.00$ fm$^{-1}$. 
The domains for the energy density, $\epsilon$ (MeV-fm$^{-3}$), have been split as denoted by the left hand column.} \begin{ruledtabular} \centering \begin{tabular}{c|ccccc} & \multicolumn{5}{c}{F-QMC with $\lambda_3=0.02$ fm$^{-1}$} \\ $\epsilon$ & $N_1$ & $p_1$ & $N_2$ & $p_2$ & RMSE \\ \hline 0-34 & 7.733$\times10^{-4}$ & 1.203 & - & - & 1.49\% \\ 35-90& 6.171$\times10^{-7}$ & 3.043 & 1.578 & 1.128 & 0.64\% \\ 91-133& 3.309$\times10^{-7}$& 3.186 & - & - & 0.10\% \\ 134-298& 1.260$\times10^{-6}$ &2.921 & - & - & 1.09\% \\ 299-550& 5.884$\times10^{-6}$ &2.656 & - & - & 1.16\% \\ 551-620& 8.387$\times10^{-5}$& 2.234 & - & - & 0.13\% \\ 621-1021& 1.730$\times10^{-3}$ & 1.764 & - & - & 0.10\% \\ 1022-1600& 8.269$\times10^{-3}$ &1.539 & - & - & 0.11\% \\ \hline & \multicolumn{5}{c}{F-QMC with $\lambda_3=0.00$ fm$^{-1}$} \\ $\epsilon$ & $N_1$ & $p_1$ & $N_2$ & $p_2$ & RMSE \\ \hline 0-24 & 9.725$\times10^{-4}$ & 1.183 & - & - & 4.04\% \\ 25-90 & 6.234$\times10^{-8}$ & 3.498 & 1.549$\times10^{-2}$ & -0.3064 & 1.10\% \\ 91-162 & 1.376$\times10^{-7}$ & 3.355 & - & - & 0.66\% \\ 163-299& 6.243$\times10^{-7}$ & 3.064& - & - & 1.02\% \\ 300-549& 7.150$\times10^{-6}$ & 2.645 & - & - & 1.59\% \\ 550-595& 1.143$\times10^{-4}$ & 2.204 & - & - & 0.06\% \\ 596-861& 4.517$\times10^{-3}$ & 1.629 & - & - & 0.05\% \\ 862-1600& 8.295$\times10^{-3}$ & 1.538 & - & - & 0.25\% \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{NS Bulk Properties} The Tolman-Oppenheimer-Volkoff (TOV) equation was used to compute the mass and radius of the NS. Assuming that the NS is non-rotating and spherically symmetric, the TOV equation is \begin{eqnarray} \label{eq:TOV} \frac{dp}{dr}=-\frac{\left[p(r)+\epsilon(r)\right]\left[M(r)+4\pi r^3p(r)\right]}{r\left(r-2M(r)\right)} \, , \end{eqnarray} where \begin{eqnarray} \label{eq:massstar} M(r)&=4\pi\int_0^r \epsilon(r')(r')^2dr' \, . 
\end{eqnarray} The central pressure is specified at $r=0$ and Eq.~(\ref{eq:TOV}) is integrated outwards until $p(R)=0$, where $R$ is the final radius of the star. This process is repeated for different central pressures to form the sequence of stars plotted in Fig.~\ref{fig:MR}. The box denotes the constraint corresponding to pulsar PSR J0740+6620, $M=2.072^{+0.067}_{-0.066}$ M$_\odot$ and $R=12.39^{+1.30}_{-0.98}$ km. The total baryon number is given by \begin{eqnarray} A=\int_0^R \frac{4\pi r^2n_B(r)}{(1-\frac{2 M(r)}{r})^\frac{1}{2}}dr \, . \end{eqnarray} GW170817 is a binary system with a total mass of $2.73^{+0.04}_{-0.01}$ M$_\odot$~\cite{LIGOScientific:2018cki}. The component masses in the low spin case are $m_1\in(1.36, 1.60)$ M$_\odot$ and $m_2\in(1.16, 1.36)$ M$_\odot$. The tidal deformability may be computed as \begin{eqnarray} \label{eq:tidal} \Lambda_M=\frac{2}{3}k_2 \left(\frac{R}{M} \right)^5 \, . \end{eqnarray} The dimensionless constant, $k_2$, is the tidal Love number; the full expression is given in Refs.~\cite{Meng:2021ijp,Chatziioannou:2020pqz}. Eq.~(\ref{eq:tidal}) gives the one-sided tidal deformability, which is constrained by GW170817 for a $1.4$ M$_\odot$ star to lie in the range $\Lambda_{1.4}=190^{+390}_{-120}$~\cite{LIGOScientific:2018cki}. This is shown as a black line in Fig.~\ref{fig:Tidal}. However, we note that other work has reported that the upper limit on $\Lambda_{1.4}$ could be as large as $800$~\cite{Kim:2018aoi}. \subsubsection{Mass-Radius Relation} Table~\ref{tb:MRtable} summarises the properties of a NS with maximum mass, $M_{max}$, predicted by QMC. All entries are for the case where hyperons are included, unless otherwise indicated. The overlap parameters alter the NS properties in predictable ways. Increasing $E_0$ has a mild effect on the maximum mass of the star, with little change to its radius (see Fig.~\ref{fig:MR}). The range parameter $b$, however, raises the mass substantially as it is increased. 
For $b=0.4$ fm and $\lambda_3 = 0.02$ fm$^{-1}$, the maximum mass is predicted to be $M_{max}<2$ M$_\odot$, which is unsatisfactory. \begin{table} \caption{\label{tb:MRtable} Macroscopic properties of the NS (including hyperons unless otherwise indicated) are computed with variations of the overlap parameters. The range parameter, $b$, and overlap energy, $E_0$, have units of fm and MeV, respectively. The results summarise the maximum mass ($M_{max}$, $M_\odot$), total baryon number ($A$, $10^{57}$) and central number density ($n_c$, fm$^{-3}$), central pressure ($P_c$, MeV-fm$^{-3}$), and central energy density ($\epsilon_c$, MeV-fm$^{-3}$) for each parameter set used.} \begin{ruledtabular} \centering \begin{tabular}{cc|ccccc} & & \multicolumn{5}{c}{$\lambda_3=0.02$ fm$^{-1}$}\\ $b$&$E_0$&$M_{max}$&A&$n_c$& $P_c$ &$\epsilon_c$ \\ \hline 0.4 & 3500 & 1.77 & 2.41 & 1.01 & 256 & 1191 \\ 0.4 & 4500 & 1.78 & 2.43 & 1.02 & 268 & 1205 \\ 0.4 & 5500 & 1.79 & 2.45 & 1.03 & 280 & 1219 \\ \hline 0.5 & 3500 & 2.02 & 2.82 & 1.02 & 408 & 1260 \\ 0.5 & 4500 & 2.08 & 2.92 & 1.01 & 448 & 1258 \\ 0.5 & 5500 & 2.14 & 3.02 & 1.00 & 492 & 1267 \\ \hline 0.5\footnote{\label{ft:Nucleonsoverlap}N-QMC (Overlap)} & 5500 & 2.25 & 3.22 & 1.00 & 680 & 1314 \\ -\footnote{\label{ft:hyperonsNoOverlap}F-QMC (No Overlap)} & 0 & 1.74 & 2.36 & 0.961 & 213 & 1111 \\ -\footnote{\label{ft:NucleonsNoOverlap}N-QMC (No Overlap)} & 0 & 1.96 & 2.74 & 1.15 & 559 & 1461 \\ \hline & & \multicolumn{5}{c}{$\lambda_3=0.00$ fm$^{-1}$}\\ $b$&$E_0$&$M_{max}$&A&$n_c$& $P_c$ &$\epsilon_c$ \\ \hline 0.4 & 3500 & 1.91 & 2.63 & 0.883 & 221 & 1030 \\ 0.4 & 4500 & 1.92 & 2.64 & 0.900 & 231 & 1055 \\ 0.4 & 5500 & 1.92 & 2.65 & 0.915 & 242 & 1078 \\ \hline 0.5 & 3500 & 2.11 & 2.97 & 0.906 & 336 & 1102 \\ 0.5 & 4500 & 2.17 & 3.05 & 0.923 & 387 & 1144 \\ 0.5 & 5500 & 2.21 & 3.14 & 0.903 & 406 & 1123 \\ \hline 0.5 $^{\textrm{a}}$ & 5500 & 2.34 & 3.37 & 0.934 & 643 & 1222 \\ - $^{\textrm{b}}$ & 0 & 1.89 & 2.60 & 0.867 & 201 & 
1005 \\ - $^{\textrm{c}}$ & 0 & 2.11 & 2.97 & 1.03 & 529 & 1305 \\ \end{tabular} \end{ruledtabular} \end{table} Table~\ref{tb:Central} reflects the central properties of NSs of different mass predicted by F-QMC with overlap. The central number density, pressure and energy density are all greater when $\lambda_3=0.02$ fm$^{-1}$, for all masses. For a star of mass $M=1.4$ M$_\odot$, the central number density is lower than the threshold density for hyperons and hence there are no hyperons in these stars. As the central density increases, however, the star's core becomes populated by hyperonic matter. \begin{table} \caption{\label{tb:Central} The central number density ($n_c$, fm$^{-3}$), pressure ($P_c$, MeV-fm$^{-3}$) and energy density ($\epsilon_c$, MeV-fm$^{-3}$) for stars of different mass (M$_\odot$) predicted by F-QMC with overlap ($E_0=5500$ MeV, $b=0.5$ fm).} \begin{ruledtabular} \centering \begin{tabular}{cccc} \multicolumn{4}{c}{$\lambda_3=0.02$ fm$^{-1}$}\\ Mass & $n_c$ & $P_c$ & $\epsilon_c$\\ \hline 1.0 & 0.341 & 30 & 336 \\ 1.4 & 0.427 & 59 & 432 \\ 1.6 & 0.482 & 86 & 496 \\ 1.8 & 0.547 & 124 & 576 \\ 2.0 & 0.663 & 196 & 733 \\ \hline \multicolumn{4}{c}{$\lambda_3=0.00$ fm$^{-1}$}\\ Mass & $n_c$ & $P_c$ & $\epsilon_c$\\ \hline 1.0 & 0.317 & 27 & 311 \\ 1.4 & 0.395 & 53 & 397 \\ 1.6 & 0.438 & 74 & 447 \\ 1.8 & 0.487 & 102 & 507 \\ 2.0 & 0.564 & 154 & 607 \\ \end{tabular} \end{ruledtabular} \end{table} In Fig.~\ref{fig:MR}, the mass-radius curve shows that the overlap term is essential in predicting a heavy NS, $M>2$ M$_\odot$, once the incompressibility is reduced to the preferred range (i.e., with $\lambda_3=0.02$ fm$^{-1}$). Without the overlap term, the mass of the star is significantly lower. The inclusion of the $\sigma^3$ term acts to reduce the radius and lower the star's mass. The mass reduction is caused by the additional scalar meson attraction, with a consequent softening of the EoS.
For $\lambda_3=0.02$ fm$^{-1}$, the radius of the star slightly increases as the mass decreases from $1.5$ M$_\odot$ to $1.0$ M$_\odot$, in contrast to the case $\lambda_3=0.0$ fm$^{-1}$, where there are no significant changes to the radius. In the phenomenologically interesting region, $M \, \approx \, 1.4 \, M_\odot$, the radius of the star is significantly lower when $\lambda_3 = 0.02$ fm$^{-1}$. The presence of hyperons reduces the maximum mass, as well as increasing the radius at maximum mass. From Fig.~\ref{fig:MR} we see that the overlap term decreases the radius at maximum mass for F-QMC, whereas for N-QMC the radius at maximum mass is increased when the overlap term is present. Figure~\ref{fig:Tidal} illustrates the tidal deformability for the QMC EoS. The dashed line for the nucleon-only case cannot be distinguished from F-QMC because the QMC model predicts no hyperons in a $1.4$ M$_\odot$ star. \subsection{Speed of sound} \label{sec:speed of sound} In the absence of direct observations of the composition of the core of a NS, theoretical calculations of the speed of sound (equivalently, the polytropic index, $\gamma$) offer valuable insights. It has previously been suggested that non-trivial changes in the EoS correspond either to a phase change from hadronic matter to quark matter~\cite{Annala:2019puf} or to the threshold for the creation of hyperons~\cite{Motta:2020xsg}. The EoS at low and extremely high densities has been extensively studied in effective field theory ($n<1.1$ $n_0$) and perturbative QCD ($n>40$ $n_0$), respectively. Annala~\textit{et al.}~\cite{Annala:2019puf} suggest that for stars with $M=1.4$ M$_\odot$, hadronic nuclear theory is adequate to predict the EoS of canonical-mass stars. The QMC model is consistent with their EoS in that region.
However, for $M>2$ M$_\odot$, Annala~\textit{et al.} suggest that the central density becomes so large that the cores of the stars may be populated by deconfined quarks and gluons~\cite{Annala:2019puf}. Quark matter, being conformal and scale invariant, would then have a speed of sound with $c_s^2$ approaching $\frac{1}{3}$ logarithmically from below as the density increases. On the other hand, it has been shown that this behaviour is not a distinct feature of conformal matter~\cite{Motta:2020xsg,Stone:2019blq}. Since the QMC model does not contain deconfined quarks, $c^2_s$ approaching $\frac{1}{3}$ from below may instead be interpreted as signalling the creation of hyperons. The speed of sound is computed from the EoS as \begin{eqnarray} c^2_s= \frac{dP}{d\epsilon} \, . \label{eq:sound} \end{eqnarray} Figure~\ref{fig:Sound} shows three sudden changes in $c_s^2$, which occur at the number densities where the different species of hyperons first appear (see Fig.~\ref{fig:specfrac}). In order of appearance, these are the $\Lambda$, $\Xi^-$ and $\Xi^0$. Without the overlap term (red lines), the results reflect those reported earlier for the QMC model~\cite{Motta:2020xsg,Stone:2019blq}. However, when the overlap terms are included (blue lines), $c^2_s$ can be as large as 0.5 or more. This is still consistent with pure hadronic matter, as discussed in Ref.~\cite{Annala:2019puf}.
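Given a tabulated EoS, Eq.~(\ref{eq:sound}) can be evaluated by finite differences. The following schematic example uses an invented piecewise-linear toy EoS with a kink at $\epsilon=500$ MeV-fm$^{-3}$, mimicking the sudden drop in $c_s^2$ at a hyperon threshold; the table and kink position are illustrative assumptions, not QMC output.

```python
import numpy as np

def sound_speed_squared(eps, P):
    """Finite-difference estimate of c_s^2 = dP/d(epsilon) from a tabulated
    EoS; eps and P must share units (e.g. MeV fm^-3) and be sorted in eps."""
    return np.gradient(P, eps)

# toy piecewise-linear EoS with an artificial kink at eps = 500,
# mimicking the softening when a hyperon species appears
eps = np.linspace(100.0, 1000.0, 901)
P = np.where(eps < 500.0, 0.4 * (eps - 100.0), 160.0 + 0.2 * (eps - 500.0))
cs2 = sound_speed_squared(eps, P)

assert np.all(cs2 <= 1.0)          # causality: c_s <= c
assert np.all(np.diff(P) >= 0.0)   # thermodynamic stability: P monotonic
```

Sudden drops in $c_s^2$, like the one at the kink in this toy table, are the kind of signature visible in Fig.~\ref{fig:Sound} at each hyperon threshold.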
Title: The Mass Scale of High-Redshift Galaxies: Virial Mass Estimates Calibrated with Stellar Dynamical Models from LEGA-C
Abstract: Dynamical models for $673$ galaxies at $z=0.6-1.0$ with spatially resolved (long-slit) stellar kinematic data from LEGA-C are used to calibrate virial mass estimates defined as $M_{\rm{vir}}=K \sigma'^2_{\star,\rm{int}} R$, with $K$ a scaling factor, $\sigma'_{\star,\rm{int}}$ the spatially-integrated stellar velocity second moment from the LEGA-C survey and $R$ the effective radius measured from a S\'ersic profile fit to HST imaging. The sample is representative for $M_{\star}>3\times10^{10}~M_{\odot}$ and includes all types of galaxies, irrespective of morphology and color. We demonstrate that using $R=R_{\rm{sma}}$~(the semi-major axis length of the ellipse that encloses 50\% of the light) in combination with an inclination correction on $\sigma'_{\star,\rm{int}}$~produces an unbiased $M_{\rm{vir}}$. We confirm the importance of projection effects on $\sigma'_{\star,\rm{int}}$ by showing the existence of a similar residual trend between virial mass estimates and inclination for the nearby early-type galaxies in the ATLAS$^{\rm{3D}}$~survey. Also, as previously shown, when using a S\'ersic profile-based $R$ estimate, then a S\'{e}rsic index-dependent correction to account for non-homology in the radial profiles is required. With respect to analogous dynamical models for low-redshift galaxies from the ATLAS$^{\rm{3D}}$~survey we find a systematic offset of 0.1 dex in the calibrated virial constant for LEGA-C, which may be due to physical differences between the galaxy samples or an unknown systematic error. Either way, with our work we establish a common mass scale for galaxies across 8 Gyr of cosmic time with a systematic uncertainty of at most 0.1 dex.
https://export.arxiv.org/pdf/2208.12605
\title{The Mass Scale of High-Redshift Galaxies: Virial Mass Estimates Calibrated with Stellar Dynamical Models from LEGA-C} \correspondingauthor{Arjen van der Wel} \email{arjen.vanderwel@ugent.be} \author{Arjen van der Wel} \affil{Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, 9000 Gent, Belgium} \author[0000-0002-0786-7307]{Josha van Houdt} \affil{Max-Planck Institut f\"{u}r Astronomie K\"{o}nigstuhl, D-69117, Heidelberg, Germany} \author{Rachel Bezanson} \affil{University of Pittsburgh, Department of Physics and Astronomy, 100 Allen Hall, 3941 O'Hara St, Pittsburgh PA 15260, USA} \author{Marijn Franx} \affil{Leiden Observatory, Leiden University, P.O.Box 9513, NL-2300 AA Leiden, The Netherlands} \author{Francesco D'Eugenio} \affil{Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, 9000 Gent, Belgium} \author{Caroline Straatman} \affil{Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, 9000 Gent, Belgium} \author{Eric F.~Bell} \affil{Department of Astronomy, University of Michigan, 1085 South University Ave., Ann Arbor, MI 48109, USA} \author{Adam Muzzin} \affil{Department of Physics and Astronomy, York University, 4700 Keele St., Toronto, Ontario, M3J 1P3, Canada} \author{David Sobral} \affil{Department of Physics, Lancaster University, Lancaster LA1 4YB, UK} \author{Michael V.~Maseda} \affil{Leiden Observatory, Leiden University, P.O.Box 9513, NL-2300 AA Leiden, The Netherlands} \author{Anna de Graaff} \affil{Leiden Observatory, Leiden University, P.O.Box 9513, NL-2300 AA Leiden, The Netherlands} \author{Bradford P.~Holden} \affil{UCO/Lick Observatory, University of California, Santa Cruz, CA 95064, USA} \keywords{galaxies: high-redshift -- galaxies: kinematics and dynamics -- galaxies: structure} \section{Introduction} \label{section:intro} The total mass of a galaxy is perhaps the single most important of its properties. 
At all cosmic times, it is related to a host of other properties, such as the stellar mass \citep[e.g.,][]{Taylor2010}, star formation rate (\citealt{Brinchmann2004, Noeske2007}), size \citep[e.g.,][]{Shen2003, VanderWel2014a}, average stellar age (e.g. \citealt{Gallazzi2005, Gallazzi2014, Wu2018}) and rotational properties (e.g., \citealt{Emsellem2007}). Furthermore, a galaxy can be plausibly linked to its progenitors through its mass, which changes through time due to passive growth or mergers \citep[e.g.,][]{Bezanson2009, Naab2009}. Deriving accurate, unbiased masses is therefore an obvious priority in any galaxy survey. For spatially integrated kinematic measurements we define the virial mass using the scalar virial theorem: \begin{equation} M_{\rm{vir}} = K\frac{\sigma^{2}R}{G} \label{eq:virial_mass} \end{equation} \noindent with $G$ the gravitational constant, $R$ the radius of the galaxy, $\sigma$ the velocity second moment (strictly speaking, equal to the velocity dispersion only for a non-rotating galaxy), and $K$ a scaling factor. All of these quantities, with the exception of $G$, are at this point only generically defined; their exact definition is, in essence, the topic of this paper. The parameters used in this $M_{\rm{vir}}$ estimate only crudely approximate the kinematic and geometric structure of galaxies and are therefore susceptible to both random and systematic uncertainties. Two approaches have been used to address this issue: comparison with dynamical models, and comparison with stellar mass estimates from photometry. First, dynamical models provide an accurate absolute mass scale with which $M_{\rm{vir}}$ estimates can be compared. Such an empirical calibration was done for the SAURON survey in \citet{Cappellari2006} and revisited by the ATLAS$^{\rm 3D}$ collaboration in \citet{Cappellari2013+jam} (hereafter C06 and C13, respectively).
This volume-limited sample of morphologically selected early-type galaxies produces a large dynamic range in mass, but the number of high-mass galaxies is relatively small. Larger surveys across all galaxy types (\citealt{CALIFA2012,MANGA2015,SAMI_DR2}) have not revisited this calibration, even though this would be relatively straightforward with the dynamical models for 2000 galaxies in MANGA at $z\lesssim 0.1$ published by \citet{Li+2018}. Second, a comparison between $M_{\rm{vir}}$ and stellar mass estimates $M_*$ provides an idea of the precision of both parameters (\citealt{Taylor2010}). The average value and scatter in $M_*/M_{\rm{vir}}$ are informative, but both quantities suffer from systematics, again leaving the absolute mass scale uncertain. This method is popular at higher redshifts (\citealt{VdWel2006,VdSande2013,Belli2014}) where (until now) dynamical models have been difficult to construct, resulting in small and biased samples of old, massive galaxies (\citealt{DokkumMarel2007,WelMarel2008, Shetty2015,Guerou2017,Newman2018}). In this paper we make progress in addressing these issues by using nearly 800 galaxies in the $z=0.6-1$ redshift range from the LEGA-C survey (\citealt{LEGAC2016, vanderwel21}) with dynamical models from \citet[][hereafter, vH21]{houdt21}. This sample includes all types of galaxies as the selection was blind to structure and color. Using mass estimates from the dynamical models we will establish the normalization $K$, show that the use of a circularized radius should be avoided, and introduce a necessary but simple inclination correction for the integrated velocity second moment. We assume a $\Lambda$CDM cosmology with $H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}=0.7$ and $\Omega_{M}=0.3$. \section{Data \& Methods} \label{section:data} This work is based on the Large Early Galaxy Astrophysics Census survey (LEGA-C; the survey description and further details can be found in \citealt{LEGAC2016,LEGAC2018,vanderwel21}).
This survey provides exceptionally deep, spatially resolved spectra for approximately $4000$ magnitude-limited galaxies from the UltraVISTA photometric parent catalog (\citealt{UltraVista2013}), targeted at redshifts between $0.6$ and $1.0$. Spectra have been obtained with the VIMOS instrument on the Very Large Telescope. With $\sim20$ hours of integration per object, $R\sim3500$ spectra are produced with a wavelength coverage between $\sim6300$ and $\sim8800$\AA. In this paper, we use the subset of galaxies drawn from the third data release \citep[DR3;][]{vanderwel21} suitable for kinematic modeling, described in full by \citet{houdt21}. Galaxies are selected to have a high signal-to-noise ratio ($>10$\AA$^{-1}$), a measured $\sigpr$ (see below for the definition), major axes aligned within $45$ degrees of the direction of the slit, and imaging data showing a regular morphology that is well described by a S\'{e}rsic profile ({\tt FLAG\_MORPH}~$=0$ in DR3). Stellar- and gas kinematics are derived from the spectra using pPXF\footnote{v6.0.0, via \url{http://www-astro.physics.ox.ac.uk/~mxc/software/}} (\citealt{pPXF-2004,pPXF-2017}). In summary, a combination of single stellar population (SSP) templates and Gaussian emission lines is fit to the observed spectra. The theoretical spectra are broadened and shifted to find the spatially resolved rotation and dispersion, independently for the gas and stars. This is done for 2D and 1D spectra, where the former are used for the Jeans models (see below) and the latter are used to extract the integrated velocity second moment $\sigpr$, which is used to calculate the virial masses. For further details on the spectral modeling, see \citet{Bezanson2018,Bezanson2018b}. The notation $\sigpr$, introduced by \citet{Bezanson2018b}, is chosen to differentiate between the spatially integrated velocity broadening along the line of sight and the intrinsic velocity dispersion $\sigma_{\star}$.
Optical imaging is available for each galaxy with HST/ACS F814W observations (\citealt{COSMOS2007}). Structural parameters are extracted with single-component S\'{e}rsic fits using GALFIT\footnote{v3.0.5, available at \url{https://users.obs.carnegiescience.edu/peng/work/galfit/galfit.html}} (\citealt{GALFIT2010}) as described in \citet{VdWel2012} and DR3. All three main parameters from the S\'ersic fit -- S\'ersic index, effective radius, and projected axis ratio -- play a key role in this work and their use in Equation \ref{eq:virial_mass} is, essentially, the topic of this paper. The dynamical masses to which the virial masses will be scaled were obtained from Jeans models as presented by \citet{houdt21}. Summarising, the galaxies are modelled as oblate axisymmetric spheroids as implemented in the Jeans Anisotropic Multi-Gaussian Expansion (JAM) code (\citealt{JAM2008})\footnote{v5.0.17 from https://pypi.org/project/jampy/}. The surface brightness is parameterised by the S\'{e}rsic profiles derived from the HST/ACS F814W imaging, decomposed into a series of Gaussians using the MGE\footnote{v5.0.12, from \url{https://pypi.org/project/mgefit/}} (\citealt{MGE2002}) code. The probability density of the inclination is assumed to be a function of the observed axis ratio, using observationally derived intrinsic shape distributions (\citealt{Chang2013,VdWel2014b,houdt21}). The slit geometry of the LEGA-C spectroscopy is included in the models: the Jeans equations are integrated through rectangular $1\arcsec \times 0.205\arcsec$ apertures, instrumental velocity gradients are subtracted, and the centering of the slit is marginalized over in the Bayesian fitting approach. 
The model predictions of $v_{\rm{rms}}$ are compared with the measured $\sqrt{v'^2_{\rm{\star}}+\sigma'^2_{\star}}$, where $v'_{\rm{\star}}$ and $\sigma'_{\star}$ are the measured line-of-sight velocity first and second moments for each spatial element; the data typically reach $1-1.5\arcsec$, or $1-3$ effective radii, along the slit direction. There are two components in the gravitational potential: the stellar component (for which we assume that mass follows light as seen in the HST image) and a dark matter component, parameterized by an NFW halo. We do not claim to constrain the dark matter mass directly, but the inclusion of a dark component is required by the data and allows greater flexibility in fitting a gradient in the mass-to-light ratio regardless of its origin (stellar $M/L$, stellar Initial Mass Function (IMF), gas, dark matter), and therefore produces more realistic uncertainties. In vH21 we already published mass estimates and proxies for dynamical structure (e.g., $V/\sigma$), but to facilitate comparisons with other datasets and models we present in Appendix A the fitted parameters and model components. In this paper we use the sample of 673 galaxies selected to have JAM mass estimates within $R_e$ with a precision better than 0.5 dex. This is 22\% of the sample of galaxies for which we can estimate virial masses based on the integrated $\sigma'_{\star}$, which is essentially the full LEGA-C primary, K-band selected sample described by \citet{vanderwel21}. As explained by vH21, many galaxies do not have a Jeans mass estimate because their major axis is misaligned with the LEGA-C slit by more than 45 degrees ($\sim 50$\%, given the random position angle distribution), while other galaxies do not have spectra with sufficient signal-to-noise.
However, as we will discuss further below, we find no correlations between the virial-to-Jeans mass ratio and any other parameter, which implies that our derived virial mass estimates can be applied generally to galaxies in the mass and redshift range of LEGA-C, which samples galaxies with stellar masses $\gtrsim2\times10^{10}~M_{\odot}$ and $50\lesssim \sigma'_{\star}/ (\rm{km s}^{-1}) \lesssim 300$, at $0.6<z<1$. \section{Calibration of the Virial Mass} \label{section:results} Conceptually, there are two critical aspects that need to be addressed when estimating virial masses based on integrated velocity dispersions (or, to be more precise, second moments) and effective radius measurements: how non-homology in radial structure affects the virial mass (discussed in Section \ref{sec:results_p2}) and how to take into account non-homology in the 3D geometry of galaxies and the resulting projection effects on the observables (discussed below in Section \ref{sec:results_p1}). \subsection{Non-Homology in Radial Structure} \label{sec:results_p2} We define the total dynamical mass from the best-fitting Jeans models as 2$\times$ the Jeans model mass enclosed within a sphere with radius $R_{\rm{sma}}$, the semi-major axis of the ellipse that contains 50\% of the (projected) S\'ersic light model: \begin{equation} M_{\rm{JAM}} \equiv 2\times M(r<R_{\rm{sma}}). \end{equation} This ensures that both mass estimates are approximately based on the same luminosity; essentially, our comparison is between mass-to-light ratios. The choice for $R_{\rm{sma}}$ is motivated in Section \ref{sec:radius}, which also includes a broader discussion on the concept of using any galactic radius as a proxy for virial radius. 
As a starting point we calculate a simple virial mass estimate that is only proportional to $R_{\rm{sma}}$ and $\sigprsq$, the observed (projected) velocity second moment: \begin{equation} M_{5} = 5\frac{\sigprsq R_{\rm{sma}}}{G} \label{eq:mvir5} \end{equation} \noindent The constant scaling factor 5 has often been used as a practical tool without explicit justification \citep[e.g.,][]{bender92, Jorgenson1996, VdWel2006, Toft2012}. To place this normalization on a firmer basis, C06 provided a calibration using detailed dynamical models, as we do here. They found that, when the effective radius is measured in the then-``classic'' way using an $r^{1/4}$~growth-curve extrapolation \citep{dressler87, Jorgenson1996}, the best-fitting coefficient was indeed $K=5.0\pm0.1$. In Figure \ref{fig:mass_mass} we compare $M_5$ with $M_{\rm{JAM}}$. There is a strong trend with S\'{e}rsic index $n$; clearly, galaxies are not self-similar in detail. In Figure \ref{fig:n_k} we explicitly show this $n$-dependence in comparison with the proportionality factor \begin{equation} K(n) = 8.87 - 0.831 n + 0.0241 n^{2}. \label{eq:kn} \end{equation} \noindent $K(n)$ is the scaling factor taken from C06 (Eq.~20). The residual correlation between $M_5$ and $M_{\rm{JAM}}$ follows this description very well, indicating that this non-homology correction is required. This agrees with the finding by \citet{Taylor2010}, who compared stellar masses from population synthesis models with virial mass estimates. It also agrees with the results of C13, who used JAM dynamical models as we do here and also concluded that the above non-homology correction is needed when the effective radius is measured from S\'ersic models, as we do here.
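The correction of Eq.~(\ref{eq:kn}) can be illustrated with a short numerical sketch. The value of $G$ in astrophysical units and the example galaxy parameters below are assumptions made purely for illustration:

```python
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def K_n(n):
    """Sersic-index-dependent virial coefficient K(n) = 8.87 - 0.831 n + 0.0241 n^2."""
    return 8.87 - 0.831 * n + 0.0241 * n**2

def virial_mass(sigma_prime, R_sma, n):
    """Virial mass in M_sun: M_n = K(n) * sigma'^2 * R_sma / G,
    with sigma' in km/s and R_sma (semi-major-axis effective radius) in kpc."""
    return K_n(n) * sigma_prime**2 * R_sma / G

# hypothetical galaxy: sigma' = 200 km/s, R_sma = 5 kpc, Sersic n = 4
M_n = virial_mass(200.0, 5.0, 4.0)
```

For an $n=4$ profile, $K(n)\approx5.9$ rather than the constant $K=5$ of Eq.~(\ref{eq:mvir5}); differences of this size drive the S\'ersic-index trend seen in Figure \ref{fig:n_k}.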
However, we are now left with a strong residual correlation with projected axis ratio: round galaxies have larger $M_{\rm{JAM}} / M_{\rm{vir}}$ than flat galaxies, signaling the importance of non-homology in 3D galaxy structure (in essence, spheres and disks) and the resulting projection effects on both the kinematics and the light distribution. \subsection{Non-Homology in 3D Galaxy Structure and Projection Effects} \label{sec:results_p1} The projected axis ratio $q$ reflects the combination of the intrinsic, 3-dimensional geometry of a galaxy and the viewing angle. We use the residual trend with $q$ in Figure \ref{fig:n_k} to derive a second structural homology correction to account for variations in galaxy geometry that also includes projection effects. We do so purely empirically, by removing the residual trend with axis ratio $q$. In the left-hand panel of Figure \ref{fig:sigma_correction} we show the same residual trend with $q$~as in Figure \ref{fig:n_k} but now explicitly as a function of $q$ and adopting the virial mass \begin{equation} M_{\rm{n}} = K(n)\frac{\sigprsq R_{\rm{sma}}}{G} \label{eq:mvirk} \end{equation} that includes the radial non-homology correction $K(n)$. With this definition, galaxies that are round in projection have underestimated virial masses; that is, the projected velocity second moment of a nearly face-on, rotating galaxy does not `see' galactic rotation. It is important to note here that the velocity dispersion used in the virial mass estimate must include all sources of motion in the galactic potential: not only the quasi-random motions associated with the true velocity dispersion at a given location in a galaxy, but also organized motions such as rotation. But a striking feature is that there is no systematic offset between $M_{\rm{n}}$ and $M_{\rm{JAM}}$.
Apparently, variations in geometry and projection effects do not cause a systematic difference between simple virial mass estimates and more accurate dynamical models using spatially resolved kinematics. We now introduce the homology correction $K(q)$: \begin{equation} K(q) = (0.87+0.38 e^{-3.78(1-q)})^2 \label{eq:kq} \end{equation} which is the inverse of the solid line in the left-hand panel of Figure \ref{fig:sigma_correction}. This analytical form is purely practical and has no physical basis. For a given geometry and dynamical structure an inclination correction can be derived from the dynamical model or calculated directly from the tensor virial theorem \citep{bender92}, but our sample consists of a set of galaxies with a large variety in structure. The middle panel of Figure \ref{fig:sigma_correction} shows the distribution of $M_{\rm{vir}}/M_{\rm{JAM}}$ according to our best-effort virial mass estimate: \begin{equation} M_{\rm{n,q}} = K(n)K(q)\frac{\sigprsq R_{\rm{sma}}}{G} \label{eq:mvir} \end{equation} \noindent which is now independent of $q$ (by construction) and for which the variance is reduced by $\sim1/3$ (the new rms is 0.12 dex)\footnote{LEGA-C DR3 includes two quantities related to the stellar velocity dispersion: the measured stellar velocity second moment {\tt SIGMA\_STARS\_PRIME} (written as $\sigpr$ in this paper) and {\tt SIGMA\_STARS\_VIR} = $\sqrt{K(q)} \times$~{\tt SIGMA\_STARS\_PRIME}.}. Importantly, no dependence on S\'ersic index is re-introduced: on average, the inclination correction works well for both high- and low-$n$ galaxies. As mentioned before, the norm in the literature on virial mass estimates of high-redshift galaxies has been to choose the circularized radius $R_{\rm{circ}} \equiv \sqrt{q}R_{\rm{sma}}$.
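Combining Eqs.~(\ref{eq:kn}), (\ref{eq:kq}) and (\ref{eq:mvir}) gives the full best-effort estimate, sketched below; the value of $G$ and the example parameters are again assumed for illustration only:

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def K_n(n):
    # radial non-homology correction (repeated here for self-containment)
    return 8.87 - 0.831 * n + 0.0241 * n**2

def K_q(q):
    # empirical projection/geometry correction; q is the projected axis ratio
    return (0.87 + 0.38 * math.exp(-3.78 * (1.0 - q)))**2

def M_nq(sigma_prime, R_sma, n, q):
    """Best-effort virial mass M_{n,q} = K(n) K(q) sigma'^2 R_sma / G (M_sun),
    with sigma' in km/s and R_sma in kpc."""
    return K_n(n) * K_q(q) * sigma_prime**2 * R_sma / G

# the correction boosts a nearly face-on (round, q ~ 1) galaxy relative to
# an edge-on (flat, q ~ 0.3) one, compensating for unseen rotation
boost = M_nq(200.0, 5.0, 4.0, 0.95) / M_nq(200.0, 5.0, 4.0, 0.3)
```

Since $K(q)$ grows monotonically with $q$, masses of round, nearly face-on systems are raised the most, exactly the sense needed to remove the residual trend in the left-hand panel of Figure \ref{fig:sigma_correction}.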
The use of $R_{\rm{circ}}$ is motivated by the result that the stellar-to-virial mass ratio shows smaller scatter when $R_{\rm{circ}}$ is used in the virial mass estimate, both at low redshift (C13) and at high redshift \citep{Belli2017}; the latter interpret this as evidence for rotational support via tentative residual correlations with axis ratio and S\'ersic index. In the right-hand panel of Figure \ref{fig:sigma_correction} we show the result of using $R_{\rm{circ}}$ (and $K(n)$ but not $K(q)$). Compared to the left-hand panel there is a much weaker trend with axis ratio. The factor $\sqrt{q}$ -- replacing our $K(q)$ -- acts as a reasonably good homology correction, but this comes at the expense of a systematic offset. This offset is to be expected since the virial mass is now derived on the basis of a smaller radius than the Jeans model mass (by approximately a factor $\sqrt{q}$), but reducing the Jeans model mass by re-calculating it within a sphere of radius $R_{\rm{circ}}$ instead of $R_{\rm{sma}}$ would re-introduce the same axis ratio trend as seen in the left-hand panel. \section{Discussion} \label{section:Discussion} \subsection{The Choice of Virial Radius} \label{sec:radius} Defining the virial radius of a galaxy is conceptually problematic. The only true virial radius is that of the dark matter halo, but this is not traced by the luminous body. In this paper, implicitly making several assumptions and approximations, we equate the virial radius with $R_{\rm{sma}}$, the semi-major axis of the ellipse that contains 50\% of the light in the HST ACS/F814W image, and we compare the inferred virial mass estimate with the Jeans model mass calculated within a {\it sphere} with the same radius. Specifically, we assume that: 1) the sphere with radius $R_{\rm{sma}}$ contains 50\% of the 3D luminosity distribution; 2) $R_{\rm{sma}}$, measured at a rest-frame wavelength of $\sim4000-5000$\AA~by fitting a 2D S\'ersic profile, can be used as a proxy for the spatial extent of galaxies.
The first of these assumptions was first addressed by \citet{ciotti91}, who showed that for a spherical galaxy with a S\'ersic profile $r_{1/2}/R_{\rm{sma}}=1.34-1.36$ for S\'ersic indices $n=2-10$ (here, $r_{1/2}$ is the radius of a sphere that contains 50\% of the 3D light distribution). But for disk galaxies this value decreases and approaches unity (C13). A key consideration is, then, that most galaxies ($\sim90\%$) in the LEGA-C sample analyzed in this paper are rotating, rather flat (intrinsic $c/a\approx 0.3$) and nearly axisymmetric systems, as evidenced by both their projected shape distribution \citep{Chang2013, VdWel2014b} and their kinematics (vH21). This large fraction of highly flattened galaxies is not specific to the LEGA-C sample: galaxies in the present-day Universe, including massive quiescent/early-type galaxies, generally have similar shapes \citep[e.g.,][]{Chang2013} and commonly show a large degree of rotational support \citep[e.g.,][]{Emsellem2011}. \citet{van-de-ven21} show that for such flattened, oblate galaxies the difference between the projected $R_{\rm{sma}}$ and the $r_{1/2}$ of a sphere is negligible. A minor caveat is that for slowly rotating triaxial galaxies (and more generally, galaxies with non-disklike geometries, in total about 10\% of the LEGA-C sample) $R_{\rm{sma}}$ and $r_{1/2}$ can differ: \citet{van-de-ven21} find that for massive, triaxial ellipticals $r_{1/2}/R_{\rm{sma}} = 1.18\pm0.18$, where the error reflects the galaxy-to-galaxy scatter due to variations in intrinsic shape and viewing angle. These considerations generalize the conclusions from previous work by \citet{Hopkins2010} and C13, who showed that $R_{\rm{sma}}$ is largely independent of inclination and is therefore the preferred size proxy (rather than $R_{\rm{circ}}$).
We return to this issue in Section \ref{sec:atlas_sersic} when we examine the mass offset we see in Figure \ref{fig:sigma_correction} when using circularized radii (right-hand panel) and comparing with $M/L$ measurements from the ATLAS$^{\rm{3D}}$ survey of nearby early-type galaxies. The second crucial assumption made explicit above (that the rest-frame $\sim4000-5000$\AA~$R_{\rm{sma}}$ from a S\'ersic fit is a good proxy for galaxy size) is more difficult to defend. The observed color gradients \citep[e.g.,][]{VdWel2012} imply that $R_{\rm{sma}}$ is wavelength dependent \citep[also see][]{kelvin12}, which in turn implies the presence of mass-to-light gradients \citep{Szomoru2013, Mosleh2017, suess19}. If the inner parts of galaxies are dominated by stars, then the more sensible choice of the virial radius might be a mass-weighted half-light radius. At the same time, gas and dark matter fractions increase with radius, creating $M/L$ gradients in the opposite direction. Color gradient information is currently not available for the full sample of galaxies studied here. We should therefore keep in mind that our definition of the galaxy mass scale is set by our choice of $R_{\rm{sma}}$ as the optical half-light radius, measured at $\sim4000-5000$\AA, a choice that is to some extent arbitrary as it is determined by the available data. In addition, our $R_{\rm{sma}}$ (and the stellar profile used in the Jeans dynamical model) relies on the S\'ersic profile. A comparison between S\'ersic model magnitudes and large-aperture ground-based photometric magnitudes convinces us that the S\'ersic profile is appropriate: the difference (accounting for differences in filter transmission curves) is, on average, just 0.02 mag, with 0.15 mag scatter. In particular, the S\'ersic model does not unduly extrapolate the light profile, artificially increasing the luminosity and the radius. We therefore believe the total luminosities to be accurate. 
This, in turn, implies that both $M_{\rm{JAM}}(R<R_{\rm{sma}})$ and $M_{\rm{vir}} / 2$ -- the approximate mass estimates within radius $R_{\rm{sma}}$ -- are accurate in relation to each other. The multiplication by a factor 2 is an unverified extrapolation and only serves to account for the total luminosity and to enable comparisons with, e.g., total stellar mass inferred from spatially integrated photometry. Finally, for some purposes the circularized radius $R_{\rm{circ}}$ can be more useful. Traditionally, Fundamental Plane studies use $R_{\rm{circ}}$, and the projected axis ratio $q$ does not factor in. When $q$ is not available (for example, when sizes are derived from growth curves and circular apertures), our $K(q)$ correction does not apply. Also, for extremely elongated, prolate galaxies $R_{\rm{circ}}$ is the more stable size proxy (compared to $r_{1/2}$). But overall, given the weak viewing angle dependence of $R_{\rm{sma}} / r_{1/2}$ for most galaxy geometries encountered in nature, we recommend the use of $R_{\rm{sma}}$ and the virial mass estimate from Eq.~\ref{eq:mvir}. \subsection{Residual Correlations} As discussed in Section \ref{section:results}, correlations in $M_{\rm{vir}}/M_{\rm{JAM}}$ with S\'{e}rsic $n$ and projected shape have been accounted for and removed in our final $M_{\rm{vir}}$ estimate. We find no significant correlations with any other parameter that is available for our sample. In particular, there is no difference between large and small galaxies, or between high- and low-mass galaxies, and no dependence on star-formation activity. Furthermore, we do not find a trend with redshift. Most importantly, in Figure \ref{fig:rot_mass} we show that there is no residual correlation with the rotation parameter $\kappa$ derived from the Jeans models \citep[see][for details]{houdt21}. This is in contrast with the findings of \citet{WelMarel2008}, who find that the virial masses of fast-rotating galaxies are overestimated by as much as $0.2$ dex. 
However, that comparison is done at fixed $K=5$. As we show here, and as was demonstrated earlier for present-day galaxies (C13), a non-homologous scale factor should be used to derive unbiased masses, at least when using a S\'ersic profile-based effective radius. Using a fixed $K=5$, we find a difference of at most $0.1$ dex between the fast-rotating galaxies ($\kappa>0.5$) and the slow-rotating galaxies ($\kappa<0.5$). Whether or not this entirely explains the results from \citet{WelMarel2008} remains unclear, but this discrepancy is indicative of the importance of using consistent measurements when deciding which normalisation to use and when comparing galaxies across different epochs. \\ The newly calibrated virial mass estimates also apply equally, in a systematic sense to within 10\% or 0.04 dex, to quiescent and star-forming galaxies (Figure \ref{fig:jam_vir}). The sample is separated based on rest-frame $U-V$~and $V-J$~colors. Both types show no systematic difference between $M_{\rm{vir}}$~and $M_{\rm{JAM}}$, but star-forming galaxies show larger scatter. The scatter is consistent with the formal uncertainties in the $M_{\rm{JAM}}$~estimates. These are slightly larger than the formal uncertainties on $M_{\rm{vir}}$ due to the added flexibility provided by the dark-matter component in the JAM model. The uncertainties for the star-forming galaxies are larger than for the quiescent galaxies for several reasons: 1) the $\sigpr$~measurements are less precise as a result of lower $S/N$; 2) the stellar light profiles likely suffer from stronger deviations from the mass-follows-light assumption for the stellar component due to dust and star-forming regions; and 3) gas and dark matter fractions are likely higher. \section{Comparison with ATLAS$^{\rm 3D}$} \label{A3D} The JAM-based mass models from the ATLAS$^{\rm 3D}$ survey (C13) have served as the standard benchmark for virial mass estimates. 
The motivation for our work is to directly determine the normalization of the virial mass for galaxies at large lookback time, reducing potential observational biases and evolutionary effects. The self-consistent, mass-follows-light dynamical models for the ATLAS$^{\rm 3D}$ data from C13 take Multi-Gaussian Expansion (MGE) models as the light and mass tracer; S\'ersic profiles are fitted independently by \citet{Krajnovic2013} but are not used in the modeling. C13 provide two separate virial mass estimates for the MGE light model (here referred to as $M_{\rm{vir}}$) and the S\'ersic light model (here referred to as $M_{\rm{vir,n}}$). In Section \ref{sec:atlas_mge} we examine the residual trend with projected axis ratio in the $M_{\rm{vir}}$ estimates for the ATLAS$^{\rm 3D}$ sample, and in Section \ref{sec:atlas_sersic} we discuss a systematic difference of 0.1 dex between LEGA-C and ATLAS$^{\rm 3D}$ mass estimates. \subsection{A Dependence of the Virial Mass on Projected Axis Ratio in \rm{ATLAS$^{\rm 3D}$}}\label{sec:atlas_mge} The left-hand panel of Figure \ref{fig:dml_q__mge_atlas3d} compares $(M/L)_{\rm{JAM}}$ and $(M/L)_{\rm{vir}}$, where both are based on the MGE light model, and where $M_{\rm{vir}}$ is defined in C13. This reveals a significant and hitherto hidden dependence on projected axis ratio, analogous to the trend seen for LEGA-C in this paper. With this new insight we define a new virial mass estimate for ATLAS$^{\rm 3D}$ (shown in the right-hand panel of Figure \ref{fig:dml_q__mge_atlas3d}): \begin{equation} M_{\rm{vir,A3D}} = 3.6\frac{\langle V_{\rm rms}^2 \rangle_e R_{\rm{MGE,maj}}}{G} \label{eq:mvir_a3d} \end{equation} \noindent where $\langle V_{\rm rms}^2 \rangle_e$ is the deprojected second moment of the velocity, from Eq. 
29 of C13.\footnote{The values are available from https://purl.org/atlas3d} This is the quadratic sum of the velocity dispersion $\sigma$ and the inclination-corrected velocity $V/\sin i$, averaging (light-weighted) over all spatial elements within the MGE half-light ellipse. Here, $i$ is the inclination inferred from the JAM. This inclination-corrected $\langle V_{\rm rms}^2 \rangle_e$ is not an empirical function of the observed axis ratio (as in Eq.~\ref{eq:kq} of this paper), but derives from the projected shape of the kinematics and the inclination, simultaneously removing the trend with axis ratio and reducing the scatter in the virial mass estimate. Specifically, the scatter in the ratio decreases from 0.08 dex in the left-hand panel to 0.06 dex in the right-hand panel. This result confirms one of the main findings of this paper for ATLAS$^{\rm 3D}$: the scatter in the virial estimates and the dependency on $q$ is partially due to the effect of inclination on the measured second velocity moment. The normalization factor is reduced from 3.9 to 3.6 in order to remove an offset with respect to $(M/L)_{\rm{JAM}}$ (since generally $\langle V_{\rm rms}^2 \rangle_e > \sigma^2_{\rm{Re}}$). Note that no S\'ersic index dependence enters in the above: the mixed use of MGE light models for JAM and a S\'ersic index-based homology correction is generally not recommended (also see C13). \subsection{A Systematic Offset between \rm{LEGA-C} and \rm{ATLAS$^{\rm 3D}$}}\label{sec:atlas_sersic} C13 also provide a S\'ersic-based $M_{\rm{vir,n}}$ estimate, which uses $K(n)$ (here, Eq.\ref{eq:kn}) and $R_{\rm{circ}}$. We already saw in Section \ref{sec:results_p1} and Figure \ref{fig:sigma_correction} (right-hand panel) that this definition produces a mass offset with respect to the LEGA-C JAM estimates, suggesting a systematic difference between the LEGA-C and ATLAS$^{\rm 3D}$ mass scales, which we examine here. 
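To make the unit bookkeeping of the estimator in Eq.~\ref{eq:mvir_a3d} explicit, it can be sketched numerically. This is our own illustration, not code from C13: the function names and the km/s--kpc unit convention are assumptions, and whereas C13 light-weight $\langle V_{\rm rms}^2 \rangle_e$ over all spatial elements within the half-light ellipse, the sketch evaluates a single element.

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def vrms2_element(sigma_kms, v_kms, incl_deg):
    """Inclination-corrected second velocity moment of one spatial element:
    the quadratic sum of the dispersion sigma and the velocity V / sin(i)."""
    return sigma_kms**2 + (v_kms / math.sin(math.radians(incl_deg)))**2

def m_vir_a3d(vrms2_e, r_mge_maj_kpc):
    """ATLAS3D-style virial estimate: M = 3.6 <V_rms^2>_e R_MGE,maj / G."""
    return 3.6 * vrms2_e * r_mge_maj_kpc / G
```

For example, $\langle V_{\rm rms}^2\rangle_e = (200\,{\rm km\,s^{-1}})^2$ and $R_{\rm{MGE,maj}} = 2$\,kpc give $M \approx 6.7\times10^{10}\,M_\odot$.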
As a first step we show for ATLAS$^{\rm 3D}$ that $(M/L)_{\rm{vir,n}}$ is independent of axis ratio (left-hand panel of Fig.~\ref{fig:legac_atlas3d}), reaffirming that using $R_{\rm{circ}}$ instead of $R_{\rm{sma}}$ serves as a first-order inclination correction on $M_{\rm{vir,n}}$. Note that the sign of the y-axis is reversed with respect to Figure \ref{fig:dml_q__mge_atlas3d} because we now wish to compare various JAM $M/L$ estimates to a common virial mass estimator ($M_{\rm{vir,n}}$ from C13). We also note that the scatter (0.12 dex) is larger than in Figure \ref{fig:dml_q__mge_atlas3d} because the S\'ersic models have different $R_e$ and $L$ compared to the MGE models that are used for both the JAM and the $M_{\rm{vir}}$ in Figure \ref{fig:dml_q__mge_atlas3d}. For LEGA-C we have two JAM flavors: our preferred model, which we use in this paper, includes a dark matter component (necessitated by the relatively large extent of the kinematic data, typically $1-3R_e$), but vH21 also presented self-consistent mass-follows-light models, analogous to the ATLAS$^{\rm 3D}$ modeling results used in this section. We calculate JAM $M/L$ for LEGA-C by integrating both the mass and the luminosity within a sphere with radius $R_{\rm{sma}}$. The LEGA-C mass-follows-light model $M/L$ shows a 0.09 dex offset with respect to $(M/L)_{\rm{vir,n}}$, whereas the model with dark matter produces a 0.12 dex offset. This implies that it is not the inclusion of a dark component that elevates the LEGA-C JAM $M/L$ by $\approx 0.1$ dex. The other implication is that the offset in $M$ in the right-hand panel of Figure \ref{fig:sigma_correction} is due to an offset in $M/L$. This is not self-evident, as this proportionality rests on the assumption that the sphere with radius $R_{\rm{sma}}$ includes 50\% of the luminosity. 
This is not an exact equality, but for the LEGA-C JAM models the difference is small: the fraction of the luminosity within a sphere of radius $R_{\rm{sma}}$ is, on average, 0.47$\pm$0.02, where the error is the average random error on the individual estimates. This justifies our choice to compare the virial mass estimates with 2$\times$~the dynamical model mass calculated within a sphere with radius $R_{\rm{sma}}$ (see also Sec.~\ref{sec:radius}). The offset in $M/L$ implies that, for a given set of observables ($q$, $n$, $R_e$, and $\sigma$), $M_{\rm{JAM}}$ is 0.1 dex larger for LEGA-C than for ATLAS$^{\rm 3D}$. It is not clear whether this offset is physical or the result of an unknown systematic error. There are many differences between higher- and lower-redshift galaxies and between the ATLAS$^{\rm 3D}$ and LEGA-C samples. About 50\% of the LEGA-C sample are late-type galaxies, with presumably high gas and dark matter fractions relative to the early-type galaxies in ATLAS$^{\rm 3D}$. The early-type galaxies in LEGA-C are also not equivalent to those in the ATLAS$^{\rm 3D}$ sample. The average $\sigma_{\star}^2$ is $>$2$\times$ higher for LEGA-C, and higher-redshift early-type galaxies are more compact and more rotation dominated. It is therefore not implausible that the dark matter fraction and overall structure are different. On the pragmatic side, the LEGA-C stellar kinematic data for quiescent (early-type) galaxies typically probe out to 2$R_e$, which necessitates the inclusion of a dark matter component, whereas for ATLAS$^{\rm 3D}$ this is $1R_e$, for which models with and without dark matter produce very similar total $(M/L)(<R_e)$ (C13). It is possible that the LEGA-C $M/L(<R_e)$ estimates are biased upward by the statistical weight of kinematic data outside $R_e$ in combination with the low spatial resolution. Kinematic data with higher spatial resolution from, e.g., the ELT are required to resolve this issue. 
Remaining agnostic about the interpretation of the offset in mass scale between LEGA-C and ATLAS$^{\rm 3D}$, we conclude that we have established a common mass scale for galaxies across 8 Gyr of cosmic time with a small systematic uncertainty of 0.1 dex. \section{Summary \& Conclusions} \label{section:conclusions} In this paper we provide a new calibration of the mass scale of galaxies at $z=0.6-1$ that is applicable to galaxies of all morphological types. Jeans axi-symmetric models for 673 galaxies based on spatially resolved long-slit stellar kinematics from LEGA-C serve as the baseline. Integrated stellar velocity dispersions (the second moment) and S\'ersic profiles from HST imaging then allow for virial mass estimates (Eq.~\ref{eq:mvir}) with a systematic uncertainty with respect to the locally calibrated mass scale of at most 0.1 dex and with 20\% random uncertainty for quiescent galaxies and 40\% random uncertainty for star-forming galaxies (Figure \ref{fig:jam_vir}). The combination of elements to arrive at this level of consistency is as follows: \begin{itemize} \item {\bf Non-homology in radial structure} as parameterized in Eq.~\ref{eq:kn} in order to remove any dependence on galaxy structure (S\'ersic index $n$). Without this correction, and adopting a standard proportionality factor of $K=5$, disk galaxies will have their dynamical masses underestimated by $>50\%$~(Figure \ref{fig:mass_mass}). We note that the use of $K(n)$ is contingent on the use of the S\'ersic profile as a proxy for the light profile, as demonstrated previously by \citet{Cappellari2013+jam}. \item {\bf Non-homology in 3D galaxy structure} as parameterized in Eq.~\ref{eq:kq} in order to remove any dependence on projected axis ratio. This accounts for the combined effect of variations in intrinsic, 3D galaxy shape and projection effects. 
Without such a correction, using the measured, projected velocity second moment (referred to as $\sigpr$ in this paper), face-on galaxies will have underestimated masses, and edge-on galaxies overestimated masses (Figure \ref{fig:sigma_correction}, left-hand panel). \item {\bf Half-light radius} as measured along the major axis ($R_{\rm{sma}}$), as previously demonstrated by \citet{Cappellari2013+jam}. \end{itemize} For convenience we repeat the relevant equations here. The calibrated virial mass estimate is defined as: \begin{equation} M_{\rm{vir}} = K(n)K(q)\frac{\sigprsq R_{\rm{sma}}}{G} \end{equation} \noindent where \begin{equation} K(n) = 8.87 - 0.831 n + 0.0241 n^{2} \qquad \textrm{(from C13)} \end{equation} \noindent and \begin{equation} K(q) = (0.87+0.38 e^{-3.78(1-q)})^2 \end{equation} \noindent with the projected axis ratio $q\equiv b/a$. A comparison with the low-redshift sample of early-type galaxies with IFU data from ATLAS$^{\rm{3D}}$ shows that the LEGA-C dynamical masses are systematically higher by 0.1 dex (Section \ref{sec:atlas_sersic}). It is not clear whether this offset is due to structural differences between the galaxies in the two samples, or due to an unknown systematic error. Nonetheless, we stress that a common mass scale for galaxies across 8 Gyr of cosmic time with a systematic uncertainty of at most 0.1 dex should be considered a success. Numerous applications, spin-offs and expansions are possible. 
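The three equations above combine into a one-line mass estimate. As a minimal numerical sketch (ours; the function names and the km/s--kpc unit convention are assumptions), using $G=4.301\times10^{-6}\,{\rm kpc\,(km\,s^{-1})^2\,M_\odot^{-1}}$:

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def K_n(n):
    """Non-homology correction for Sersic index n (from C13)."""
    return 8.87 - 0.831 * n + 0.0241 * n**2

def K_q(q):
    """Correction for projected axis ratio q = b/a."""
    return (0.87 + 0.38 * math.exp(-3.78 * (1.0 - q)))**2

def m_vir(sigma_kms, r_sma_kpc, n, q):
    """Calibrated virial mass: M_vir = K(n) K(q) sigma'^2 R_sma / G."""
    return K_n(n) * K_q(q) * sigma_kms**2 * r_sma_kpc / G
```

For a de Vaucouleurs profile ($n=4$), $K(n)\approx5.9$, close to the traditional $K=5$, while for an exponential disk ($n=1$), $K(n)\approx8.1$; this is the origin of the $>50\%$ underestimate for disk galaxies noted above.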
Practical applications of our calibrated virial mass scale include quick, unbiased dynamical mass estimates for large samples with rudimentary measures of the velocity second moment and size, and the cross-comparison of dynamical masses based on measurements from different instruments\footnote{Subtle differences in integrated $\sigma$ measurements can be expected when comparing slit- and fiber-based spectra, but at higher redshift the apertures are sufficiently large to render potential aperture corrections small compared to the uncertainties on the mass estimates.}. Future work will include a one-to-one comparison between dynamical models based on ionized gas kinematics and stellar kinematics at large look-back time; a comparison with stellar mass estimates, augmented with either direct or inferred gas mass estimates; separating the radial dependence of stellar $M/L$, gas mass fractions and dark matter fraction (for a smaller subset with highly significant deviations from the mass-follows-light assumption); and a comparison with total masses of galaxies in cosmological hydrodynamical simulations. \section*{Acknowledgments} Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 194-A.2005 (The LEGA-C Public Spectroscopy Survey). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 683184). \bibliographystyle{aasjournal} \bibliography{ms_v2} \restartappendixnumbering \appendix{} \label{AppendixA} In this Appendix we provide, as a supplement, the dynamical model parameters presented by vH21. The virial mass values were published by \citet{vanderwel21}. 
In vH21 we already published mass estimates and proxies for dynamical structure (e.g., $V/\sigma$), but to facilitate comparisons with other datasets and models we present here all fitted parameters, including those without precise constraints (essentially, nuisance parameters required only to marginalize over astrophysically motivated priors). For a full description of the models we refer to vH21; here we only repeat the important caveats that should be kept in mind when using these data. In Table \ref{tab:cat1} we include the $M/L$ estimates for the mass-follows-light model as well as the model that includes a dark component: in addition to the stellar component (as traced by the HST ACS/F814W images), a dark matter component is incorporated. This component serves to account for any $M/L$ gradient, regardless of whether it is actual dark matter, or rather due to gas or stellar $M/L$ variations. While poorly constrained for most individual galaxies (this is reflected in the uncertainties in $f_{\rm{DM}}$, which are much larger than the uncertainties in the total $M/L$), we systematically find for the ensemble that a marginally positive $f_{\rm{DM}}$ is required to model our kinematic data that probe out to $1-3R_e$. In short, constraints on $f_{\rm{DM}}$ reflect a deviation from the mass-follows-light assumption and should not be taken at face value: careful analysis of individual galaxies is required to interpret these quantities. Inclinations are almost always unconstrained by the kinematic data due to the relatively poor spatial resolution and large width of the slit. The estimates in the table are therefore the result of the marginalization over the prior, which is set by the projected shape of the HST light profile (see vH21 for further details). 
\begin{deluxetable*}{cccccccc} \tabletypesize{\scriptsize} \tablecolumns{8} \tablecaption{LEGA-C JAM parameters (models with dark component) \label{tab:cat1}} \tablehead{ \colhead{ID1} & \colhead{ID2} & \colhead{$\log{(L)}$} & \colhead{$\log{(M/L)}$~(no DM)} & \colhead{$\log{(M/L)}$~(with DM)} & \colhead{$f_{\rm{DM}}$} & \colhead{$\beta_{\rm{z}}$} & \colhead{$i$}\\ \colhead{} & \colhead{} & \colhead{$L_{\odot\rm{,g}}$} & \colhead{$M_{\odot}/L_{\odot\rm{,g}}$} & \colhead{$M_{\odot}/L_{\odot\rm{,g}}$} & \colhead{} & \colhead{} & \colhead{deg.} } \startdata 5 & 4792 & 10.87 & $0.05^{+0.05}_{-0.03}$ & $0.34^{+0.27}_{-0.24}$ & $0.62^{+0.38}_{-0.42}$ & $0.03^{+0.33}_{-0.35}$ & $28\pm4$ \\ 26 & 10462 & 10.73 & $-0.24^{+0.06}_{-0.05}$ & $0.15^{+0.46}_{-0.36}$ & $0.71^{+0.29}_{-0.58}$ & $0.02^{+0.32}_{-0.36}$ & $44\pm5$ \\ 27 & 10902 & 11.35 & $0.09^{+0.05}_{-0.03}$ & $0.43^{+0.48}_{-0.42}$ & $0.62^{+0.38}_{-0.52}$ & $-0.03^{+0.34}_{-0.33}$ & $31\pm4$ \\ 38 & 14375 & 11.25 & $0.14^{+0.02}_{-0.02}$ & $0.17^{+0.07}_{-0.08}$ & $0.10^{+0.76}_{-0.09}$ & $-0.07^{+0.34}_{-0.29}$ & $74\pm5$ \\ 39 & 14729 & 11.37 & $0.47^{+0.05}_{-0.04}$ & $0.42^{+0.32}_{-0.40}$ & $0.00^{+1.00}_{-0.00}$ & $0.19^{+0.23}_{-0.41}$ & $67\pm6$ \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \enddata \tablecomments{JAM model parameters described by vH21. (1): LEGA-C ID (DR3); (2): UltraVISTA ID \citep{Muzzin2013b}; (3): total $g$-band luminosity; (4): $g$-band mass-to-light ratio from the mass-follows-light model; (5): total $g$-band mass-to-light ratio from the light plus dark matter model; (6): dark matter fraction; (7): vertical anisotropy $\beta_{z}\equiv 1 - \langle v_{z}^{2}\rangle/\langle v_{R}^{2}\rangle$; (8): inclination. All quantities are calculated within spherical apertures with radius $R_{\rm{Sersic,sma}}$. Values and uncertainties are based on the 16th, 50th, and 84th percentiles of the posterior parameter distributions. 
The machine-readable table has 861 entries and is matched to the catalog published in vH21.} \end{deluxetable*}
\title{Synthetic ngVLA line observations of a massive star-forming cloud} \author{M. Juvela \inst{1}, E. Mannfors \inst{1}, T. Liu\inst{2}, \and L.V. T{\'o}th\inst{3}} \institute{ Department of Physics, P.O. Box 64, FI-00014, University of Helsinki, Finland, \email{mika.juvela@helsinki.fi} \and Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, People's Republic of China \and Department of Astronomy, E{\"o}tv{\"o}s Lor{\'a}nd University, P{\'a}zm{\'a}ny P{\'e}ter s{\'e}t{\'a}ny 1/A, H-1117 Budapest, Hungary } \date{Received September 15, 1996; accepted March 16, 1997} \abstract {Studies of the interstellar medium and of pre-stellar cloud evolution require spectral-line observations that have high sensitivity and high angular and velocity resolution. Regions of high-mass star formation are particularly challenging, because of line-of-sight confusion, inhomogeneous physical conditions, and potentially very high optical depths.} {We wish to quantify to what accuracy the physical conditions within a massive star-forming cloud can be determined from observations. We are particularly interested in the possibilities offered by the Next Generation VLA (ngVLA) interferometer.} {We use data from a magnetohydrodynamic simulation of star formation in a high-density environment. We concentrate on the study of a filamentary structure that has physical properties similar to a small infrared-dark cloud. We produce synthetic observations for spectral lines observable with the ngVLA and analyse these to measure column density, gas temperature, and kinematics. Results are compared to ideal line observations and the actual 3D model.} {For a nominal cloud distance of 4\,kpc, ngVLA provides a resolution of $\sim$0.01\,pc even in its most compact configuration. For abundant molecules, such as HCO$^+$, NH$_3$, N$_2$H$^+$, and CO isotopomers, cloud kinematics and structure can be mapped down to sub-arcsec scales in just a few hours. 
For ${\rm NH_3}$, a reliable column density map could be obtained for the entire $15\arcsec \times 40\arcsec$ cloud, even without the help of additional single-dish data, and kinetic temperatures are recovered to a precision of $\sim$1\,K. At higher frequencies, the loss of large-scale emission becomes noticeable. The line observations are seen to accurately trace the cloud kinematics, except for the largest scales, where some artefacts appear due to the filtering of low spatial frequencies. The line-of-sight confusion complicates the interpretation of the kinematics, and the usefulness of collapse indicators based on the expected blue asymmetry of optically thick lines is limited. } {The ngVLA will be able to provide accurate data on the small-scale structure and the physical and chemical state of star-forming clouds, even in high-mass star-forming regions at kiloparsec distances. Complementary single-dish data are still essential for estimates of the total column density and the large-scale kinematics.} \keywords{ ISM: clouds -- ISM: molecules -- Radio lines: ISM -- Stars: formation -- Stars: protostars } \section{Introduction} Progress in star-formation (SF) studies is made via a combination of observations and numerical simulations. Many open questions remain regarding cloud formation, the balance between gravity, turbulence, and magnetic fields at different scales, how filamentary structures are formed, how these fragment, and how mass accretion takes place at different scales, from clouds to filaments and finally to protostellar cores -- and how all these processes differ between star-forming regions \citep{Motte2018,Hacar2022,Pattle2022}. We also need a better understanding of the interstellar medium (ISM) itself, the gas and the dust, which are used as tracers of the SF process and affect, and are affected by, the SF. 
Understanding high-mass SF is particularly important, as massive stars are crucial to many astrophysical processes, and their ionising radiation and stellar winds, as well as the heavy elements produced by supernovae, affect their host galaxies \citep{Kennicutt2005}. Massive stars are formed partly in infrared dark clouds (IRDCs), which are massive ($M\ga1000$\,M$_{\sun}$), cold (on average $T \sim $10--20\,K), and dense ($\Sigma \sim 0.02$\,g\,cm$^{-2}$) \citep{Peretto2010,Kainulainen2013,Tan2014,Lim2016}. Studies suggest that high-mass star-forming cores fragment as soon as they lose turbulent and magnetic support \citep{Csengeri2011}, and numerical simulations show clouds not to be in equilibrium \citep{Padoan2001,Vasquez2007}. Although simulations provide a direct handle on the dependencies between SF and the environment where SF takes place, different views exist regarding the main causes of high-mass SF in particular \citep{Zinnecker2007,Tan2018,Motte2018}. It is still not clear whether high-mass stars form through competitive accretion \citep{Larson1992,Bonnel2001,Bonnell2004} or core accretion \citep{Larson1981,McKee1999,McKee2003}, or via large-scale processes that are driven mainly either by gravity \citep{VazquezSemadeni2019,NaranjoRomero2022} or by turbulent inertial flows \citep{Padoan2020,Pelkonen2021} -- or what the relative importance of the different mechanisms is in different physical environments. The SF theories are all supported by numerical simulations, although with some differences in the assumed boundary conditions and included physics. It is essential that these paradigms are tested against real observations in an objective way. Observations can be used to estimate physical parameters (column densities, volume densities, temperatures), the kinematics and chemistry in different phases of the SF process, and to quantify the resulting populations of clumps, filaments, and cores. 
These can all be compared to the model predictions using synthetic observations that take into account, at least partially, the complexity of the source structure and the variations in the physical conditions along the line of sight (LOS) and inside the finite telescope beam. To accurately test star-formation theories, one needs observations that cover large areas with high sensitivity and fidelity and, on the other hand, have the resolution to probe the small-scale core fragmentation and the initial stages of protostellar collapse. Recent continuum observations, for example the surveys conducted with the Herschel Space Observatory \citep{Pilbratt2010}, have resulted in significant advances in the understanding of the structure of star-forming clouds and the large-scale context of SF. However, spectral line observations remain crucial, to mitigate the effects of LOS confusion and to gain direct access to the physical properties, kinematics, and chemistry of the gas component. Radio interferometers are essential for studies of the small structures: the cloud cores with sizes below $\sim$0.1\,pc and, at still smaller scales, the core fragmentation down to $\sim$1000\,au \citep{Tokuda2020,Sahu2021}. Interferometry is also needed for the more distant targets, including the typical high-mass star-forming clouds that are at kiloparsec distances \citep{Liu2020_ATOMS-I,Beuther2021,Oneill2021,Motte2022_IMF_I}. Instruments like the Atacama Large Millimeter/submillimeter Array (ALMA\footnote{https://www.almaobservatory.org}), the Submillimetre Array (SMA\footnote{https://lweb.cfa.harvard.edu/sma/}), and the IRAM NOrthern Extended Millimeter Array (NOEMA\footnote{https://www.iram-institute.org/EN/noema-project.php}) already provide sub-arcsecond resolution. The Next-generation Very Large Array (ngVLA\footnote{https://ngvla.nrao.edu}) is the planned extension of the Karl G. Jansky Very Large Array (VLA\footnote{https://www.vla.nrao.edu}). 
It will cover frequencies of 1.2-116\,GHz with (at the highest frequencies) a maximum instantaneous bandwidth of 20\,GHz, and with a sensitivity and angular resolution potentially exceeding those of both the current VLA and ALMA instruments. The ngVLA will also complement the radio-frequency coverage of ALMA and the SKA\footnote{https://www.skatelescope.org}, which operate mainly at higher and lower frequencies, respectively. It will also be complementary by providing high-resolution observations of the northern sky. The ngVLA will be able to observe basic transitions of several key molecules, such as NH$_3$, some of which are not accessible to ALMA but are important for SF studies, and especially for the study of the cold ISM and the early phases of the SF process. In this paper we study, as a test case, synthetic observations of a massive filamentary cloud. The target has properties similar to an infrared dark cloud (IRDC) capable of giving birth to high-mass stars. The cloud model is obtained from a magnetohydrodynamic (MHD) simulation, which is post-processed with radiative transfer calculations to predict line emission in several transitions that are accessible to the ngVLA. The simulated line data are processed with the CASA program\footnote{https://casa.nrao.edu} to make predictions for actual ngVLA observations, including realistic noise and the effects of the interferometric mode of observation. This work is part of an ngVLA community study, in which we investigate the use of the ngVLA interferometer for observations of star-forming clouds. In the present paper, the emphasis is on moderately extended structures, and we concentrate on observations with the most compact ngVLA antenna configuration. Smaller scales (e.g. the sub-fragmentation of cores) will be addressed in future publications. The contents of the paper are as follows. 
Section~\ref{sect:simulations} describes the MHD runs (Sect.~\ref{sect:MHD}), the radiative transfer modelling (Sect.~\ref{sect:RT}), and the CASA simulations (Sect.~\ref{sect:CASA}). The synthetic observations are analysed in Sect.~\ref{sect:results}, regarding column densities (Sect.~\ref{sect:colden}), temperatures (Sect.~\ref{sect:temperature}), and cloud kinematics (Sect.~\ref{sect:kinematics}). We discuss the results in Sect.~\ref{sect:discussion} before listing the final conclusions in Sect.~\ref{sect:conclusions}. \section{Simulated observations} \label{sect:simulations} \subsection{MHD simulations} \label{sect:MHD} As a starting point for the synthetic observations, we use the density and velocity fields from the MHD simulation described in \citet{Haugbolle2018}. The MHD run covers a volume of (4\,pc)$^3$, using octree spatial discretisation with a root grid of 256$^3$ cells and up to five levels of refinement, reaching a best linear resolution of 100\,au ($4.88 \times 10^{-4}$\,pc). The refinement was based on density, especially to resolve the collapsing cores. Sink particles were used to represent collapsed regions where the density exceeded the average density by a factor of 10$^5$-10$^6$ and where stars are thus forming. The total mass contained in the (4\,pc)$^3$ box is about 3000\,M$_{\sun}$. In the snapshot used in this paper, over 400 stars have already formed, and these are also used in the subsequent radiative transfer modelling as radiation sources. Details of the MHD simulation and a study of the formed stellar populations can be found in \citet{Haugbolle2018}. \subsection{Radiative transfer modelling} \label{sect:RT} We concentrate on the densest part of the model and extracted a (1.84\,pc)$^3$ subvolume out of the full (4\,pc)$^3$ MHD run. 
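The quoted finest cell size follows directly from the grid geometry, and the corresponding angular scales at the nominal 4\,kpc cloud distance can be checked the same way. A quick arithmetic sketch (our own, not code from the paper):

```python
AU_PER_PC = 206264.8       # astronomical units per parsec
ARCSEC_PER_RAD = 206264.8  # arcseconds per radian

box_pc, root, levels = 4.0, 256, 5
cell_pc = box_pc / (root * 2**levels)   # finest cell: 4 pc / 8192
cell_au = cell_pc * AU_PER_PC

distance_pc = 4000.0   # nominal cloud distance (4 kpc)
beam_pc = 0.01         # ~resolution of the most compact ngVLA configuration
beam_arcsec = beam_pc / distance_pc * ARCSEC_PER_RAD

print(f"finest cell: {cell_pc:.2e} pc = {cell_au:.0f} au")
print(f"0.01 pc at 4 kpc subtends {beam_arcsec:.2f} arcsec")
```

This reproduces the quoted $4.88\times10^{-4}$\,pc ($\approx$100\,au) cell size and shows that the $\sim$0.01\,pc beam corresponds to about $0.5\arcsec$ at 4\,kpc, so the synthetic beam remains roughly twenty times coarser than the native grid.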
The subvolume contains 230 young stars with masses ranging from sub-solar to 36\,$M_{\sun}$ and includes two main density enhancements that, in the selected view direction ``x'', appear as a single elongated structure (Fig.~\ref{fig:colden}a). We refer to this structure as ``the filament''. In 3D, it consists of a northern and a southern clump, which are connected only by narrow bridges that are inclined by about 45 degrees with respect to the observer LOS (Fig.~\ref{fig:colden}b). The 3D nature of the object is therefore different from its appearance in the projected images. Figure~\ref{fig:mass_centres} shows the mass surface density towards the direction ``x''. The figure also shows the outlines of the northern and southern clumps, which are in 3D defined by density isosurfaces. We started by modelling the dust component, using the dust properties from \citet{Weingartner_Draine_2001}, adopting their Case B with a selective extinction of $R_{\rm V}=5.5$, a high value that is appropriate for dense clouds. The calculations were carried out with the radiative transfer programme SOC \citep{Juvela_2019_SOC}, using the full octree discretisation of the MHD model. The dust heating included an isotropic external field \citep{Mathis1983} and the contribution of the embedded stars, which were modelled as blackbody point sources with the given luminosities and effective temperatures \citep{Haugbolle2018}. The radiative transfer calculations solved the dust temperature $T_{\rm dust}$ for each model cell, assuming that the grains are at equilibrium with the radiation field. The dust-temperature distribution has its mode at 14.3\,K and extends below 10\,K in dense, non-protostellar cores. The temperature structure is not fully resolved in the immediate vicinity of stars, but temperatures at and above 100\,K are reached in nearly 0.1\% of the cells.
Because the spatial grid is highly refined close to the star-forming cores, this corresponds to a much smaller fraction of the cloud volume. The analysis of the synthetic observations of dust emission (and of polarisation as a proxy of the magnetic fields) is deferred to a future publication. In this paper, only the information on the estimated large-grain dust temperatures is used. Appendix~\ref{sect:Tdust} shows a colour-temperature map, i.e. how the dust temperature distribution would appear based on 160-500\,$\mu$m continuum observations. The line modelling is based on the density and velocity fields of the MHD simulation and the results of the dust modelling, assuming that the dust temperature serves as a proxy for the gas kinetic temperature, $T_{\rm kin}\approx T_{\rm dust}$. The line modelling was carried out with the radiative transfer program LOC \citep{Juvela_2020_LOC}, with the molecular data obtained from the LAMDA database \citep{Schoier2005}. The calculations solve the non-LTE excitation of the molecules in each cell and provide spectral line maps towards chosen directions. Spectra were computed for the $^{13}$CO(1-0), C$^{18}$O(1-0), HCO$^{+}$(1-0), ${\rm H^{13}CO^{+}}$(1-0), ${\rm N_2 H^{+}}$(1-0), ${\rm NH_3}$(1,1), and ${\rm NH_3}$(2,2) lines. In the case of ${\rm N_2 H^{+}}$(1-0) and ${\rm NH_3}$(1,1), the hyperfine structure was taken into account by assuming LTE conditions between the hyperfine components \citep{Keto1990}. The assumed peak fractional abundances are $2\times 10^{-6}$ for $^{13}$CO, $3\times 10^{-7}$ for ${\rm C^{18}O}$, $5\times 10^{-10}$ for HCO$^{+}$, $1\times 10^{-11}$ for ${\rm H^{13}CO^{+}}$, $3\times 10^{-8}$ for ${\rm NH_3}$, and $1\times 10^{-9}$ for ${\rm N_2 H^{+}}$. The abundances are further scaled with an additional density dependence $n({\rm H}_2)^{2.45}/(3.0\times 10^8+n({\rm H}_2)^{2.45})$ \citep[cf.][]{Glover2010}.
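The density-dependent abundance scaling can be written as a short function. This is a minimal sketch of the expression given above; the function name and the use of NumPy are our own, and only the exponent 2.45 and the constant $3.0\times10^{8}$ come from the text.

```python
import numpy as np

def abundance_scaling(n_h2):
    """Density-dependent abundance scaling factor,
    n(H2)^2.45 / (3.0e8 + n(H2)^2.45), applied on top of the
    peak fractional abundances.  n_h2 is in cm^-3."""
    p = np.power(np.asarray(n_h2, dtype=float), 2.45)
    return p / (3.0e8 + p)
```

At $n({\rm H}_2)=10^{4}$\,cm$^{-3}$ the factor is already close to unity, while at $10^{3}$\,cm$^{-3}$ the abundances are suppressed by roughly an order of magnitude, as described below.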
This decreases the abundances at densities below $n({\rm H}_2)=10^{4}$\,cm$^{-3}$, reduces them by a factor of ten at $n({\rm H}_2)=10^{3}$\,cm$^{-3}$, and makes them negligible at the lowest densities, where the gas can be assumed to be mostly atomic. We do not model the depletion that could decrease the abundances at the highest densities, especially within the pre-stellar cores. The assumption $T_{\rm kin}\approx T_{\rm dust}$ is accurate at densities close to and above $10^5$\,cm$^{-3}$, where gas-dust collisions bring the gas kinetic temperature to within a few degrees of the dust temperature \citep{Goldsmith2001, Young2004, Juvela2011}. For example, for ${\rm N_{2}H^{+}}$(1-0), most emission comes from such high-density gas, because the critical density of the transition is close to $n({\rm H}_2) = 10^5$\,cm$^{-3}$. There is a range of densities around $n({\rm H}_2) \approx 10^4$\,cm$^{-3}$ where we may underestimate the kinetic temperature while the ${\rm N_{2}H^{+}}$ abundance is not yet negligible. However, the effect of this uncertainty is smaller than that of the assumed absolute abundances. HCO$^{+}$ has a similarly high critical density. The $T_{\rm kin}\approx T_{\rm dust}$ approximation is less appropriate for ${\rm C^{18}O}$ and especially for $^{13}$CO, whose higher optical depth leads to significant excitation at lower densities. However, in the absence of local heating sources, $T_{\rm dust}$ remains close to 20\,K in the less dense regions, and if the gas is assumed to be in molecular form (the lack of photodissociation also implying reduced heating via the photoelectric effect), the gas kinetic temperatures should not be much higher. The fractional abundances are generally subject to large uncertainty. The typical $^{13}$CO abundance is $\sim 10^{-6}$ \citep[e.g.][]{Dickman1978, Roueff2021}, but the [$^{13}$CO]/[${\rm C^{18}O}$] ratio can vary significantly around the cosmic abundance ratio $\sim$5.5 \citep{Myers1983}.
Values of $3-10$ have been reported in dark clouds and high-mass star-forming regions alike \citep{Paron2018, Areal2018, Roueff2021}. In our simulations, the ratio is [$^{13}$CO]/[${\rm C^{18}O}$]=6.67, which is close to the average values found in IRDCs \citep{Du2008, Areal2019}. \citet{Pirogov1995} reported an average value of [HCO$^{+}$]/[${\rm H^{13}CO^{+}}$]=29 for a sample of dense clouds. \citet{RodriguezBaras2021} examined nearby clouds and, for example in Orion, found roughly similar values. However, the distribution of estimates is very wide and extends up to values similar to the isotopic ratio $^{12}$C/$^{13}$C$\sim$90 in the solar neighbourhood \citep{Milam2005}. We use a ratio of [HCO$^{+}$]/[${\rm H^{13}CO^{+}}$]=50. The assumed abundance [HCO$^{+}$]=$5 \times 10^{-10}$ is close to the lower limit found in the IRDC survey of \citet{Vasyinina2011} and in \citet{RodriguezBaras2021}. For most clouds, the estimated abundances are higher, by up to a factor of ten, and our simulations are therefore somewhat pessimistic regarding the HCO$^{+}$ and ${\rm H^{13}CO^{+}}$ line intensities \citep[see also][]{Blake1987,Sanhueza2012}. For ammonia, the selected value [${\rm NH_3}$]=10$^{-8}$ is typical of IRDCs \citep{Chira2013, Sokolov2017}. Finally, the estimates of [${\rm N_{2}H^{+}}$] vary by more than one order of magnitude even in samples of dense clouds. \citet{Ryabukhina2021} found values of [${\rm N_{2}H^{+}}$]=0.5-2.5$\times10^{-10}$ in the IRDC G351.78-0.54, which are similar to the values of 1-4$\times 10^{-10}$ reported for low-mass cores \citep{Caselli2002a}. However, in \citet{Gerner2014}, the values tended to be slightly below [${\rm N_{2}H^{+}}$]=$10^{-9}$ for IRDCs and slightly higher for high-mass protostellar objects. Using data from the MALT90 survey \citep{Foster2011}, \citet{Miettinen2014} found an average of [${\rm N_{2}H^{+}}$]=$1.6\times 10^{-9}$, with infrared-dark and infrared-bright sources exhibiting similar abundances.
Our assumed value of [${\rm N_{2}H^{+}}$]=$1\times 10^{-9}$ is therefore closer to these higher estimates. In the present paper, our main interest is in the comparison between ``a model'' and the synthetic observations made of that particular model. A high degree of physical accuracy in the underlying cloud description is therefore less crucial, and this applies both to the assumed temperature and abundance distributions. The radiative transfer modelling resulted in spectral-line maps that cover the model with a pixel size that was set equal to the smallest cell size in the 3D model. For the assumed cloud distance of 4\,kpc, one pixel (equal to 100\,au or $4.88\times 10^{-4}$\,pc) corresponds to an angular size of 0.025$\arcsec$. The velocity resolution of the extracted spectra was set to $\sim$0.1\,km\,s$^{-1}$, which is of the order of the thermal line broadening and much below the total observed line widths that are typically $\sim$1\,km\,s$^{-1}$ or larger. The spectra that are obtained directly from the radiative transfer calculations are in the following called the ``ideal'' observations. The radiative transfer program LOC is not based on the Monte Carlo method, and the spectra therefore do not contain random Monte Carlo noise. The errors are only due to the sampling of the radiation field with a finite number of rays. This could have a small effect when line data are compared directly to the properties of the 3D model, but it does not directly affect the comparison between the ``ideal'' observations and the synthetic ngVLA observations, because the latter are based on the former. \subsection{CASA simulations} \label{sect:CASA} The spectral cubes produced by the radiative transfer calculations were converted to synthetic ngVLA observations with the help of the CASA simulator\footnote{https://casaguides.nrao.edu/index.php/Simulating\_ngVLA\_Data-CASA5.4.1}.
Because our emphasis is on extended rather than sub-core-scale structure, we concentrate on observations made with the Core Subarray, which consists of 94 antennas with maximum baselines of 1.3\,km. The ngVLA observations were simulated with the CASA {\tt simobserve} task, adding noise according to the published ngVLA characteristics\footnote{https://ngvla.nrao.edu/page/performance}. The simulated raw observations were also processed with the CASA program, where maps were made with the {\tt tclean} algorithm, with robust weighting (robust=0.5) and automatic multi-threshold masking. The target field was set at zero declination, which results in a small ellipticity of the synthesised beam. For the Core antenna configuration, after adding a small amount of tapering, the resolution ranges from $2.6\arcsec \times 2.2\arcsec$ for the ammonia lines near 23.7\,GHz to about $0.58\arcsec \times 0.50\arcsec$ for the C$^{18}$O and $^{13}$CO lines near 110\,GHz. We used two pointings that are separated by 14$\arcsec$ in latitude. The corresponding map sizes range from nearly circular ammonia maps, with a diameter of 3.6$\arcmin$, to the more elliptical coverage of $0.87\arcmin\times 0.77\arcmin$ at the highest frequencies. This is sufficient to cover the cloud filament, which has a length of less than $\sim$0.7$\arcmin$ (0.8\,pc at the distance of 4\,kpc). The nominal simulations correspond to an observing time of six hours with the Core antenna configuration. All observations are corrected for the main beam. \section{Results} \label{sect:results} \subsection{Line maps} \label{sect:maps} Figure~\ref{fig:W} shows maps of integrated intensity for the ideal observations (the direct output from the radiative transfer modelling, without beam convolution, observational noise, or the effects of interferometric observations).
Apart from differences in the signal level, the molecules (${\rm C^{18}O}$, ${\rm N_2 H^{+}}$, HCO$^{+}$, and ${\rm NH_3}$) show only small differences that result from different optical depths and critical densities. The ${\rm N_2 H^{+}}$ and ${\rm NH_3}$ integrated intensities include the emission from all hyperfine components. Figure~\ref{fig:Wobs} is the corresponding plot for the nominal synthetic ngVLA observations (six hours with the Core array). The noise is sufficiently low not to be clearly visible in the integrated intensity. The main difference compared to Fig.~\ref{fig:W} is the loss of some extended emission, for example in the south-west part of the ${\rm C^{18}O}$ map, which is also visible as lower integrated intensity. There are some regions of negative signal around the filament. These artefacts might be reduced by more careful manual data reduction and especially by the inclusion of single-dish data. However, at small scales, the correspondence to the ideal maps is good and mainly limited by the beam size of the synthetic observations. Figure~\ref{fig:plot_spectra} shows ${\rm NH_3}$ and ${\rm N_2 H^{+}}$ spectra that are averaged over parts of the northern and southern clumps where the maximum LOS density exceeds $2\times 10^6$\,cm$^{-3}$ (about half of the areas shown in Fig.~\ref{fig:mass_centres}). In spite of the complex 3D structure of the target areas (see Appendix~\ref{app:LOS_emission}), the average spectra are not far from Gaussian. Figure~\ref{fig:W_vs_true} shows the observed integrated line intensities as a function of column density. The x-axis is here a modified value, $N^{\prime}({\rm H}_2)$, which is the true model column density scaled with the density dependence that was used in setting the molecular abundances. The density dependence was the same for all molecules. Thus, the actual column density of each molecule is equal to $N^{\prime}({\rm H}_2)$ multiplied by the peak abundance of the molecule (Sect.~\ref{sect:RT}).
In an ideal case, the relationship between $W$ and $N^{\prime}({\rm H}_2)$ should be linear. The first frame of Fig.~\ref{fig:W_vs_true} shows the results for the ideal observations. There are minor deviations from linear relationships, with only small effects from temperature (and excitation-temperature) variations and some saturation at the highest optical depths. The same factors affect the dispersion, which is of the order of a factor of two in the direction of the $W$ axis. Figure~\ref{fig:W_vs_true}b shows the same for the nominal synthetic ngVLA observations (without single-dish data). The observed $W$ is plotted against the same modified column density $N^{\prime}({\rm H}_2)$ as above, but convolved to the angular resolution of the observations. The correlation between column density and integrated intensity $W$ remains fair at the highest column densities, but the observed $W$ values are now lower because of interferometric filtering. The ammonia observations benefit from the lower frequency and the correspondingly larger synthesised beam (lower noise) and larger main beam (larger maximum recoverable scale). If the line area were plotted against the true column density $N({\rm H}_2)$ rather than against $N^{\prime}({\rm H}_2)$, the $W$ values at the low-column-density part of the plot would appear to drop, although the actual change is in the x-axis variable. The values are lower because the average fractional abundances decrease towards lower column densities. In our case, this amounts to more than a factor of three reduction in the $W$ values at $N({\rm H}_2)\sim 5\times 10^{21}$\,cm$^{-2}$, compared to the value seen at $N^{\prime}({\rm H}_2)\sim 5\times 10^{21}$\,cm$^{-2}$. However, this difference is entirely dependent on the ad hoc assumption of the fractional abundances. \subsection{Column densities} \label{sect:colden} Column densities can be estimated with different spectral lines or line combinations (Appendix~\ref{app:colden}).
We examine four cases: (1) assuming optically thin ${\rm C^{18}O}$ lines with $T_{\rm ex}$=15\,K, (2) using a combination of the ${\rm H^{13}CO^{+}}$ and HCO$^{+}$ lines, or using the hyperfine structure of either (3) the ${\rm N_2 H^{+}}$(1-0) or (4) the ${\rm NH_3}$(1,1) lines. All estimates are converted to $N({\rm H}_2)$ by using the maximum fractional abundance in the model cloud. To eliminate the effect of abundance variations, the true model column densities are also rescaled to take into account the LOS abundance variations, and, once these are convolved to the resolution of the observations, there should ideally be a one-to-one correspondence with the values derived from the synthetic observations. We fit the spectra with one or more Gaussian components or, in the case of ${\rm N_2 H^{+}}$ and ${\rm NH_3}$, with the hyperfine structure and one velocity component. Appendix~\ref{sect:spectral_fits} shows examples of ${\rm C^{18}O}$ spectra observed towards the northern core that are fitted with different numbers of Gaussian components. Both the northern and southern cores show a very complex velocity structure, but the relevant line parameters can still mostly be approximated using a single Gaussian. Towards the centre of the field, the emission from the two LOS regions overlaps, and the spectra show two equally strong velocity components that are separated by some 2\,km\,s$^{-1}$. In Fig.~\ref{fig:colden_maps}, the first column shows the true column density that is modified according to the common density dependence of the abundances. The second column shows the corresponding estimates derived from the ideal observations, and the third column the estimates based on the synthetic ngVLA observations. The analysis is based on Gaussian fits or, in the case of the ${\rm N_2 H^{+}}$ and ${\rm NH_3}$ lines, hyperfine fits, all with a single fitted velocity component.
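For the optically thin LTE case (1), the column density scales linearly with the integrated intensity $W$. The following is a sketch of that standard estimate, not the exact formulation of Appendix~\ref{app:colden}: the molecular constants are quoted approximately from the LAMDA database, the partition function uses the linear-rotor approximation, and background and opacity corrections are omitted.

```python
import numpy as np

# Physical constants (cgs)
k_B = 1.3807e-16   # erg/K
h   = 6.6261e-27   # erg s
c   = 2.9979e10    # cm/s

def n_c18o_thin(W_K_kms, T_ex=15.0):
    """Optically thin LTE column density of C18O [cm^-2] from the
    (1-0) integrated intensity W [K km/s], neglecting background
    and opacity corrections.  Molecular constants are approximate
    LAMDA values."""
    nu  = 109.782e9      # Hz, C18O(1-0) frequency
    A10 = 6.27e-8        # s^-1, Einstein A coefficient
    E_u = 5.27           # K, upper-level energy
    g_u = 3.0            # upper-level statistical weight
    B0  = 54.891e9       # Hz, rotational constant
    W = W_K_kms * 1.0e5  # K km/s -> K cm/s
    # Upper-level column density from the integrated intensity
    N_u = 8.0 * np.pi * k_B * nu**2 / (h * c**3 * A10) * W
    # Linear-rotor partition function, Q ~ kT/(hB) + 1/3
    Q = k_B * T_ex / (h * B0) + 1.0 / 3.0
    return N_u * Q / (g_u * np.exp(-E_u / T_ex))
```

With $T_{\rm ex}=15$\,K and $W=1$\,K\,km\,s$^{-1}$ this gives $N({\rm C^{18}O})\sim10^{15}$\,cm$^{-2}$, i.e. $N({\rm H}_2)$ of a few times $10^{21}$\,cm$^{-2}$ for the peak abundance of Sect.~\ref{sect:RT}.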
There are already significant differences between the first two columns, which are caused by the assumptions of the column-density estimation: LOS homogeneity, the fixed $T_{\rm ex}$ value in the case of ${\rm C^{18}O}$, and the inaccuracy of representing the spectra with single Gaussian velocity components. Figure~\ref{fig:plot_colden_cor} plots the column-density estimates from ${\rm C^{18}O}$, ${\rm N_2 H^{+}}$, and ${\rm NH_3}$ against the true column density. The plots include estimates from both the ideal observations and the synthetic ngVLA observations. The ideal ${\rm C^{18}O}$ observations provide accurate values, the adopted $T_{\rm ex}=15$\,K not resulting in a significant offset. Bias appears only at high column densities, $N({\rm C^{18}O})>10^{16}$\,cm$^{-2}$, where the lines are no longer optically thin or, possibly, where the $T_{\rm kin}$ values of the model cloud are systematically lower in regions of high density. In contrast, the ngVLA estimates are clearly lower, mirroring the trends in Fig.~\ref{fig:W_vs_true}. Gaussian fits with two velocity components usually result in only small changes, except for some low-column-density regions. For ${\rm N_2 H^{+}}$, the synthetic ngVLA observations reach a high S/N only in the high-density part of the filament (Fig.~\ref{fig:colden_maps}i), and the estimates are below the true values. For ${\rm NH_3}$, the beam size is larger (2.4$\arcsec$ compared to 0.64$\arcsec$ for ${\rm N_2 H^{+}}$), the S/N (and the main-beam size) is correspondingly larger, and the ngVLA observations provide more accurate estimates up to the highest column densities. At ammonia column densities of $\sim 10^{15}$\,cm$^{-2}$ and below, the estimates fall below the true values. This is probably due to the filtering of large-scale emission, although the assumption of a single velocity component may also bias the results in some regions.
\subsection{Kinetic temperature} \label{sect:temperature} We estimated the gas kinetic temperature with the combination of ${\rm NH_3}$(1,1) and ${\rm NH_3}$(2,2) lines (Appendix~\ref{app:colden}). The analysis assumes a homogeneous and isothermal medium, but the observations should give a good approximation for the mass-weighted temperature, as long as the LOS variations and the optical depths are not extreme \citep{Juvela2012}. We used hyperfine fits of the ${\rm NH_3}$(1, 1) line and Gaussian fits of the ${\rm NH_3}$(2, 2) line, assuming a single velocity component. Figure~\ref{fig:plot_Tkin} compares the true density-weighted kinetic temperature $T_{\rm kin}^{\rm true}$ of the model cloud with the values derived from ideal and from synthetic ngVLA ammonia observations. All data are convolved to 3$\arcsec$ resolution. The ideal ${\rm NH_3}$ observations result in correct values to within a couple of degrees, with a small positive bias across all temperatures. In the centre of the map, the presence of multiple velocity components can also contribute to the differences. For synthetic ngVLA observations, the locus coincides with that of ideal observations. However, there is some positive bias at higher temperatures (mostly lower column densities). The maps suggest that these deviations are associated with the border regions with lower S/N, especially for ${\rm NH_3}$(2,2), and large-scale effects from the filtering of extended emission. \subsection{Cloud kinematics} \label{sect:kinematics} Regarding the cloud kinematics, we look first at the differences between the true mass-weighted average LOS velocity in the model cloud and the mean velocities estimated from line observations. Second, we look at infall indicators and how they are related to the actual infall motions in the model cloud. 
\subsubsection{Small-scale velocity field} \label{sect:LOS_velocity} We compare the observed mean line velocities $\langle v \rangle$ to the mass- and abundance-weighted LOS velocity $\langle v_{\rm LOS} \rangle$. The observed $\langle v \rangle$ values are calculated as the intensity-weighted average radial velocity over the line profiles, both for the ideal spectra and the synthetic ngVLA spectra. The parameter $\langle v_{\rm LOS} \rangle$ is read directly from the 3D model cloud. We include in its definition also the abundance variations, so that ideally $\langle v_{\rm LOS} \rangle \approx \langle v \rangle$ for optically thin emission. Figure~\ref{fig:plot_VLOS_1} compares the observed $\langle v \rangle$ of the ${\rm C^{18}O}$ and ${\rm N_2 H^{+}}$ spectra to the actual $\langle v_{\rm LOS} \rangle$ values in the model. The observed $\langle v \rangle$ values differ from $\langle v_{\rm LOS} \rangle$ by up to $\sim$1\,km\,s$^{-1}$. These differences can be caused by temperature variations that affect the observed spectra but not the $\langle v_{\rm LOS} \rangle$ values. A second explanation is radiative transfer effects, because both ${\rm C^{18}O}$ and ${\rm N_2 H^{+}}$ reach optical depths slightly above one. The radial velocities derived from the two species are clearly correlated but also show differences below 1\,km\,s$^{-1}$. Although the synthetic ngVLA observations may be affected by the interferometric filtering, their $\langle v \rangle$ values are still clearly closer to the velocities of the ideal spectra than to the mass-weighted average velocities $\langle v_{\rm LOS} \rangle$ of the model cloud. Figure~\ref{fig:plot_VLOS_2} compares the radial velocities of different types of idealised spectra. It shows that the differences between the non-LTE spectra and $\langle v_{\rm LOS} \rangle$ are explained more by the variations in kinetic temperature (frame f) than by non-LTE effects (frame e).
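The intensity-weighted mean velocities $\langle v \rangle$ used above can be computed per spectrum as follows. This is a minimal sketch of the weighted average; the optional noise threshold is our own addition, not part of the definition.

```python
import numpy as np

def mean_velocity(v, T_b, threshold=0.0):
    """Intensity-weighted mean radial velocity of one spectrum.
    v: channel velocities [km/s], T_b: brightness temperatures [K].
    Channels at or below the (optional) threshold are ignored."""
    v, T_b = np.asarray(v, float), np.asarray(T_b, float)
    m = T_b > threshold
    return float(np.sum(v[m] * T_b[m]) / np.sum(T_b[m]))
```

For a symmetric, single-component line profile this recovers the line centre; for the complex multi-component profiles discussed below, the result depends on all the LOS emission components.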
Together these account for about half of the differences to $\langle v_{\rm LOS} \rangle$, with self-absorption and other radiative-transfer effects providing further contributions. Figure~\ref{fig:plot_VLOS_2_ZOOM} shows the same data for the northern core. There are several interesting kinematic features, such as a linear vertical filament (south of the central core, with a length of $\sim$0.05\,pc and a width of $\sim$0.001\,pc) and spiralling structures close to the core centre, which itself shows velocity differences of more than 4\,km\,s$^{-1}$ between its northern and southern parts. The area of the vertical filament shows a particularly large difference between $\langle v_{\rm LOS} \rangle$ and the $\langle v \rangle$ values derived from the non-LTE spectra. The structure almost completely disappears in the latter, because of its low excitation compared to other material along the LOS. Figure~\ref{fig:H13COp_velocity} further compares the ideal and synthetic ${\rm H^{13}CO^{+}}$ observations of the northern core. The radial velocities are taken from Gaussian fits with a single velocity component\footnote{Compared to direct intensity-weighted mean velocities, the single-component Gaussian fits are here better in extracting the velocity of the dominant LOS emission structure.}, and they are traced along four paths starting at the core location. The ngVLA observations follow the results of the ideal observations, typically with a precision of a fraction of 1\,km\,s$^{-1}$. Towards the core, limited by the resolution of the observations, the velocity gradients increase up to $\sim$50\,km\,s$^{-1}$\,pc$^{-1}$. However, large gradients are seen also further out, as the selected path crosses different LOS emission regions. The complexity of the LOS structure is demonstrated further in Appendix~\ref{app:LOS_emission}, where we show plots of the 3D density cube and the position-position-velocity (PPV) cube derived from synthetic ngVLA observations of the $^{13}$CO line.
\subsubsection{Large-scale velocity field} \label{sect:PCA} Principal component analysis (PCA) uses the eigenvalue decomposition of the spectral observations to examine velocity fluctuations as a function of scale \citep{Heyer_1997}. This usually results in a scaling relation $\delta v \sim R^{\alpha}$, which can be further related to the energy power spectrum \citep{Brunt_Heyer_2002, Brunt_Heyer_2013}. Figure~\ref{fig:plot_PCA} shows the calculated scaling relations for $^{13}$CO and C$^{18}$O, for an area of $27\arcsec\times 41\arcsec$ centred on the filament. In the ideal observations, both molecules give the same value of 0.78 for the exponent of the scaling relation, in spite of the optical depths differing by a factor of several (fractional-abundance ratio of 6.7). The slopes are significantly steeper for the synthetic ngVLA observations, with little difference between the $^{13}$CO and ${\rm C^{18}O}$ lines. The downturn and truncation of the relationships around 0.004\,pc are due to the beam size ($0.55\arcsec \sim 0.001$\,pc). \subsubsection{Infall indicators} \label{sect:infall} The main structure of the model cloud consists of the northern and the southern clumps. These have complex 3D velocity fields, but the net flow of gas is directed towards the clump centres and many individual cores. However, observations provide information only on the LOS velocities. To characterise the corresponding LOS motions in the 3D model, we define an inflow index $\xi$ with the equation \begin{equation} \xi = \langle {\rm sign}(x_0-x) \, \, n^{\prime}(x) \,\, (v(x)-v_0) \rangle \, \, / \, \, \langle n^{\prime}(x) \rangle, \label{eq:infall_xi} \end{equation} where $v(x)$ and $n(x)$ are the velocity and the density along the LOS coordinate $x$.
The location of the main LOS density maximum is $x_0$, the radial velocity at that position is $v_0$, and the sign is selected so that infall towards the maximum corresponds to positive $\xi$ values, the $\xi$ parameter having units of velocity. The averaging can be done over the whole LOS but, to measure gas motions above a given density threshold $n_0$, we use $n^{\prime}=\max(0, n(x)-n_0)$. Figure~\ref{fig:infall_index} shows the $\xi$ values for the whole filament area, for the threshold values $n_0=10^4$\,cm$^{-3}$ and $n_0=10^6$\,cm$^{-3}$. The high-density gas shows almost exclusively positive $\xi$ values (especially in frame d), indicating the systematic contraction of the structure. When the clumps are examined in more detail, larger $\xi$ values can be resolved towards the central cores at sub-arcsecond scales (Fig.~\ref{fig:infall_index_zoom}). The mean values of $\xi$ are clearly positive, but there are also small intertwined areas with negative $\xi$ values. When observed at lower resolution, these can thus be expected to dilute the overall infall signature. In line observations, infall regions might be identified through an analysis of the line profiles, for example by comparing optically thin and thick tracers. This presumes that the excitation temperature increases towards the infall centre, which leads to asymmetric self-absorption, shifts the net emission of optically thick lines to lower radial velocities, and thus results in a blueshifted profile. The assumption is likely to be correct for isolated, symmetric, and centrally concentrated structures, rendering it very useful in the study of isolated cores.
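Equation~(\ref{eq:infall_xi}) can be evaluated along a sampled LOS as in the following sketch; the discretisation onto uniform LOS samples and the function interface are our own choices.

```python
import numpy as np

def infall_index(x, n, v, n0=0.0):
    """Inflow index xi of Eq. (1) for one line of sight.
    x: LOS coordinate, n: density [cm^-3], v: LOS velocity;
    n0 is the density threshold, with n' = max(0, n - n0).
    Positive xi means net motion towards the main density maximum."""
    x, n, v = (np.asarray(a, float) for a in (x, n, v))
    i0 = np.argmax(n)                    # main LOS density maximum
    x0, v0 = x[i0], v[i0]
    n_prime = np.maximum(0.0, n - n0)    # thresholded density weight
    w = np.sign(x0 - x) * n_prime * (v - v0)
    return float(np.mean(w) / np.mean(n_prime))
```

For gas approaching the density peak from both sides the sign convention gives $\xi>0$, while coherent expansion gives $\xi<0$.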
However, if the line profile contains emission from completely separate LOS structures (with different radial velocities and excitation conditions), the line profile cannot be expected to be closely correlated with the infall motions of individual cores. We examined this question using a parameter \begin{equation} \delta v = \frac{v_{\rm thin} - v_{\rm thick}}{\Delta v_{\rm thin}} \label{eq:infall_dv} \end{equation} that compares the radial velocities of one optically thin and one optically thick line \citep[cf.][]{Mardones1997}, adopting a convention where {\em positive} values of $\delta v$ indicate infall or collapse. We calculated $\delta v$ for three pairs of lines: ${\rm C^{18}O}$(1-0) and $^{13}$CO(1-0), ${\rm H^{13}CO^{+}}$(1-0) and HCO$^{+}$(1-0), and ${\rm N_2 H^{+}}$(1-0) and HCO$^{+}$(1-0). Two alternative sets of input values were used in Eq.~(\ref{eq:infall_dv}). In the first case, the velocities and the line width were obtained from Gaussian fits with a single velocity component. In the second case, we examined directly the channels with optically thin emission above a given brightness-temperature threshold and estimated $v_{\rm thin}$ and $v_{\rm thick}$ directly as weighted averages over these channels. Figure~\ref{fig:LOS_collapse_LOC} shows the results for the ideal ${\rm C^{18}O}$(1-0) and $^{13}$CO(1-0) observations. For comparison, we also plot in Fig.~\ref{fig:LOS_collapse_LOC} the skewness of the $^{13}$CO(1-0) profile (for channels around the peak of the ${\rm C^{18}O}$(1-0) spectrum). The values of $\delta v$ remain well below the level $\delta v \sim 2$ that has been considered significant in studies of protostellar cores \citep{Mardones1997}. This is due to the very large total linewidths. The two methods of computing $\delta v$ mostly agree, but there are also areas where the Gaussian fits lead to clearly different or even opposite results. This is not surprising, considering the complexity of some of the spectral profiles.
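The channel-based variant of Eq.~(\ref{eq:infall_dv}) can be sketched as follows. This is our own minimal implementation; in particular, estimating $\Delta v_{\rm thin}$ as the Gaussian-equivalent FWHM of the intensity-weighted width of the thin line is an assumption, not a detail taken from the text.

```python
import numpy as np

def delta_v(v, T_thin, T_thick, threshold=0.5):
    """Infall indicator of Eq. (2), channel-based variant.
    v_thin and v_thick are intensity-weighted velocities over the
    channels where the optically thin line exceeds the brightness
    threshold [K].  Positive values correspond to a blueshifted
    optically thick profile, i.e. infall."""
    v, T_thin, T_thick = (np.asarray(a, float) for a in (v, T_thin, T_thick))
    m = T_thin > threshold
    v_thin = np.sum(v[m] * T_thin[m]) / np.sum(T_thin[m])
    v_thick = np.sum(v[m] * T_thick[m]) / np.sum(T_thick[m])
    # Width of the thin line: intensity-weighted std -> Gaussian FWHM
    sigma = np.sqrt(np.sum(T_thin[m] * (v[m] - v_thin)**2) / np.sum(T_thin[m]))
    fwhm = np.sqrt(8.0 * np.log(2.0)) * sigma
    return float((v_thin - v_thick) / fwhm)
```

An optically thick line whose emission is shifted to lower radial velocities than the thin line then gives $\delta v>0$, matching the sign convention adopted above.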
There is very little correlation between the $\delta v$ and the $\xi$ values of Fig.~\ref{fig:infall_index_zoom}, and there are marked differences also between the $\delta v$ maps derived from the ideal observations (Fig.~\ref{fig:LOS_collapse_LOC}) and from the synthetic ngVLA observations (not shown). The scenario where $\delta v$ is able to probe collapse (an isolated core with a centrally peaked $T_{\rm ex}$ distribution and a symmetric velocity field) will also result in positive kurtosis in the profiles of optically thick lines. The correlation between kurtosis and $\delta v$ is weak for the ideal observations but becomes significant in the synthetic ngVLA observations. This shows that these statistics are not efficient in localising regions of infall motion, not only because of the more complex physical situation but also because they are sensitive to systematic errors that affect the data at larger scales (e.g. self-cancellation due to the extended emission from optically thick species). We examine the LOS emission further in Appendix~\ref{app:LOS_emission}. There the spectra are seen to consist of emission from a number of disjoint density peaks, making $\delta v$ insensitive to actual infall motions in any single LOS density peak. We return to this question in the following discussion (Sect.~\ref{disc:infall}). \section{Discussion} \label{sect:discussion} We have investigated the use of line observations for studies of star-forming clouds, comparing ideal and synthetic observations to the known properties of the model cloud. In particular, we have simulated observations for the planned ngVLA radio interferometer. We discuss below the main results. \subsection{Cloud model and synthetic observations} The model cloud has a mass of some 3000\,M$_\sun$ in a volume of (4\,pc)$^3$. It thus has a mean density of nearly $n({\rm H}_2)=700$\,cm$^{-3}$ and is capable of star formation up to high-mass stars \citep{Haugbolle2018}.
We concentrated on the densest part of the cloud, where the column densities reach up to $N({\rm H_2}) \sim 10^{24}$\,cm$^{-2}$. The target is thus similar to a small filamentary infrared dark cloud (IRDC) \citep[e.g.][]{JimenezSerra2014, Barnes2021, Liu2022}. As shown by Fig.~\ref{fig:colden}, the appearance of a single filament can easily result from projection effects in a turbulent cloud \citep{Juvela2012b}. It is noteworthy that in two out of the three orthogonal view directions the same disjoint 3D regions align to form a single filamentary structure. They also have a velocity difference of only a couple of km\,s$^{-1}$ and thus cannot be easily separated even in velocity space. The cloud also includes several hundred newly-born stars that provide local heating and increase the LOS temperature variations. The average temperatures are low (with a mode of 14.3\,K) but exceed 100\,K in small regions. Therefore, the model provides a good analogue for a high-mass star-forming region with some of its observational challenges. Nevertheless, there are limitations to the realism of the cloud model. First, it does not include a detailed description of molecular outflows, which could further complicate the kinematics considerably. Second, the chemical abundances were set using an ad hoc density dependence, instead of detailed time-dependent chemical modelling. Third, the computed dust temperatures were used as a proxy for the gas temperatures, instead of a direct computation of the gas thermal balance. This is justified at high densities (Sect.~\ref{sect:RT}) but is increasingly inaccurate when densities fall below $\sim 10^5$\,cm$^{-3}$. The gas heating varies with the local conditions, partly in ways that are not considered in our model. Cosmic-ray ionisation rates in excess of $\zeta=10^{-14}$\,s$^{-1}$ have been observed towards some high-mass star-forming regions. These are much higher than the typical values in dense clouds ($\sim 10^{-17}$\,s$^{-1}$ and above; \citealp{Caselli1998,Gerin2010}). 
However, the high rates are found preferentially in regions of low density and low molecular abundances \citep{Bayet2011}. Cosmic rays are attenuated only by thick columns of gas ($N({\rm H_2})\sim 10^{23}$\,cm$^{-2}$) but are also affected by magnetic fields. \citet{Owen2021} investigated cosmic-ray propagation in filamentary clouds and the resulting effects on gas temperatures. The effect reaches almost 20\,K when the cosmic-ray energy density is increased by a factor of ten above its normal Milky Way value. However, those calculations correspond to lower densities and therefore do not include the gas--dust coupling. In models where the gas is heated to similarly high temperatures via the photoelectric effect, collisions with dust are able to bring the gas temperature down to within a few degrees of the dust temperature \citep{Goldsmith2001, Juvela2011}. Therefore, although the gas temperatures in our model are not calculated precisely, they are still representative of the temperature fields in real clouds. More locally, the formed high-mass stars create photon-dominated regions (PDRs), and the young stars can produce X-rays with distinct effects on the chemistry \citep{Hollenbach1997, Spaans2005, Meijerink2006}. The modelling of the detailed temperature structure and chemistry of PDRs is clearly beyond the scope of this paper. The MHD simulation contains a few bona fide massive stars, but, in their surroundings, the predicted line emission will be more reminiscent of earlier evolutionary stages. The selected angular resolution (1$\arcsec$, corresponding to 4000\,au) means that the synthetic observations are not sensitive to structures at the scale of protostellar disks. The above factors do not directly affect our main goal, the comparison of a 3D model and synthetic observations made of that model. Both the column-density and temperature distributions are similar to those found in real IRDCs. 
Furthermore, the synthetic line observations were calculated with full radiative transfer calculations, which take into account not only the varying densities and kinetic temperatures but also deviations from LTE conditions. \subsection{Column densities} As long as the observed lines are not completely optically thick, column densities can be estimated using a single line with an assumed excitation temperature, or with two or more lines with an assumed or measured opacity ratio. The cases of the ${\rm N_2 H^{+}}$ and ${\rm NH_3}$ column densities fall into the second category, making use of the known optical-depth ratios of the hyperfine components. All our column-density estimates include the assumption of a homogeneous medium, and the resulting errors can be expected to increase as the optical depths and the density and kinetic-temperature variations increase. Because of the low average kinetic temperatures, the brightness temperatures in our simulations are typically only of the order of 10\,K. High sensitivity is therefore required to reliably measure the line intensities outside the densest cores, especially in the case of the ${\rm N_2 H^{+}}$ and ${\rm NH_3}$ satellite lines. Of these two species, the ${\rm NH_3}$ integrated intensities are 2-3 times larger (Fig.~\ref{fig:W}). The largest differences between ${\rm N_2 H^{+}}$ and ${\rm NH_3}$ are caused, however, by the almost factor-of-four difference in frequencies. As a result, the S/N of ${\rm NH_3}$ is much higher, but the angular resolution is correspondingly lower (2.43$\arcsec$ vs. 0.64$\arcsec$; Fig.~\ref{fig:colden_maps}). Both species provide accurate column-density estimates in ideal observations. In the synthetic ngVLA observations, ${\rm N_2 H^{+}}$ shows a larger dispersion (for a significantly higher angular resolution) and systematically lower values (Fig.~\ref{fig:plot_colden_cor}). 
The low values are naturally explained by the filtering of extended emission, which is evident for example in Fig.~\ref{fig:W}. Similar to ${\rm N_2 H^{+}}$, reliable observations of ${\rm C^{18}O}$ and ${\rm H^{13}CO^{+}}$ are limited to regions of the highest column density ($N({\rm H_2})\ga5\cdot 10^{22}$\,cm$^{-2}$). Previous observations of IRDCs have shown that, for example, H$^{13}$CO$^{\rm +}$(1-0) emission matches continuum emission quite well in dense regions \citep{Liu2022}. For the ngVLA Core antenna configuration, the maximum recoverable scales are almost 80$\arcsec$ at the frequency of the ${\rm NH_3}$(1,1) line and slightly less than 20$\arcsec$ at the frequency of the ${\rm N_2 H^{+}}$(1-0) line. Our cloud falls between these scales, the filament ($N({\rm H}_2)>5\times 10^{22}$\,cm$^{-2}$) extending over an area of some $40 \times 10$ arcsec. As pointed out in ngVLA memos\footnote{https://library.nrao.edu/ngvla.shtml, memo \#14}, at scales above 5$\arcsec$ it becomes important for image fidelity and sensitivity to combine interferometric observations with data from a large single-dish telescope. We return to the importance of single-dish data in Sect.~\ref{sect:singleDish}. We also estimated column densities with ${\rm C^{18}O}$(1-0) spectra and the assumption of a constant excitation temperature. For ideal observations, this resulted in fairly accurate estimates, the breakdown of the assumption of optically thin lines causing significant underestimation only at the highest column densities (Fig.~\ref{fig:plot_colden_cor}a). However, in our cloud model the kinetic temperature also tends to decrease with increasing density, which could explain part of the bias. In the case of synthetic ngVLA observations, the column densities are underestimated, similar to the ${\rm N_2 H^{+}}$ results, confirming that the bias is common for observations towards the upper end of the ngVLA frequency range and related to the lack of information on the lowest spatial frequencies. 
Column densities were calculated using Gaussian fits or, in the case of hyperfine structure, a set of Gaussians for the hyperfine components but only a single velocity component. It is difficult to predict how much error the Gaussian approximation produces, because the result can depend even on the initial values used in the fits. The fitted component might cover multiple velocity components or could converge to just one of those. The effect on ${\rm C^{18}O}$ column-density estimates is easy to understand, because these are directly proportional to the integrated line intensity. Even in the case of multi-modal spectra (examples shown in Fig.~\ref{fig:fit_c18o_CASA}), the differences between the one-component and two-component fits were small at high column densities (Fig.~\ref{fig:plot_colden_cor}). At lower column densities, the two-component fits resulted in larger column densities, by up to a factor of two. Such differences could be expected in the centre of the field, where the spectra show two equally strong velocity components. The hyperfine structure of the ${\rm N_2 H^{+}}$ and ${\rm NH_3}$ lines was fitted using only a single velocity component. Unlike the simple Gaussian fits, these fits directly provide estimates of the excitation temperature and optical depth. In this case, multi-component fits could easily lead to unphysical solutions. For example, one component could have $T_{\rm ex}<T_{\rm bg}$, providing an ``absorption'' feature to help the fitting of some non-Gaussian line profiles, or one could even have $T_{\rm ex} \approx T_{\rm bg}$ combined with an arbitrary column density. Some unphysical solutions could be automatically rejected or avoided by using suitable regularisation. However, the fitting of multi-component models to hyperfine spectra remains susceptible to degeneracies, and the automatic selection of the physically most likely solution remains a challenge. In the purely technical sense, the fits are still easy. 
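As a minimal sketch of such one- and two-component Gaussian decompositions (using a hypothetical noiseless toy spectrum; the actual fits in this work also include the hyperfine components):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss1(v, t0, v0, sigma):
    """Single Gaussian velocity component."""
    return t0 * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def gauss2(v, t1, v1, s1, t2, v2, s2):
    """Sum of two Gaussian velocity components."""
    return gauss1(v, t1, v1, s1) + gauss1(v, t2, v2, s2)

# Toy spectrum: two equally strong components separated by 2 km/s.
v = np.linspace(-8.0, 8.0, 161)
spec = gauss2(v, 3.0, -1.0, 0.6, 3.0, 1.0, 0.6)

p1, _ = curve_fit(gauss1, v, spec, p0=[3.0, 0.0, 1.0])
p2, _ = curve_fit(gauss2, v, spec, p0=[3.0, -0.5, 0.5, 3.0, 0.5, 0.5])

# Integrated intensity W = sqrt(2*pi)*T0*sigma per component.
w1 = np.sqrt(2.0 * np.pi) * p1[0] * abs(p1[2])
w2 = np.sqrt(2.0 * np.pi) * (p2[0] * abs(p2[2]) + p2[3] * abs(p2[5]))
```

On such a noiseless blend the two-component fit recovers the true integrated intensity, whereas the one-component result depends on how the optimiser settles on the double-peaked profile, illustrating the sensitivity to initial values noted above.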
For example, the maps of ideal observations in Fig.~\ref{fig:colden_maps} contained some 5 million individual spectra but could be fitted in just a few minutes. \subsection{Estimates of kinetic temperature} Figure~\ref{fig:plot_Tkin} showed kinetic-temperature estimates derived from the ${\rm NH_3}$(1,1) and ${\rm NH_3}$(2,2) spectra. The accuracy of the estimates obtained with synthetic ngVLA observations is mostly comparable to that of ideal observations, with statistical noise of the order of 1\,K. However, there is an increasing positive bias towards higher temperatures, which mirrors the behaviour in Fig.~\ref{fig:plot_colden_cor}c for $N({\rm H}_2)$ at low column densities. This can be attributed partly to imperfections of the fits (with a single velocity component) and to the different optical depths of the transitions, when combined with the large LOS temperature and density variations. \citet{Shirley2015} estimated that at 10-20\,K temperatures the effective critical density of ${\rm NH_3}$(2,2) is about two times higher than that of ${\rm NH_3}$(1,1). In the $T_{\rm kin}$ analysis, the ${\rm NH_3}$(2,2) spectra were fitted using fixed radial velocities obtained from the ${\rm NH_3}$(1,1) hyperfine fits. If the radial velocities of the ${\rm NH_3}$(2,2) line are kept as free parameters, the appearance of Fig.~\ref{fig:plot_Tkin} changes very little, which shows that the two transitions are still essentially probing the same volume of gas. The temperature determination may also be influenced by imperfections in the interferometric observation and the data reduction, the larger self-cancellation in ${\rm NH_3}$(1,1) observations naturally leading to a positive bias in the $T_{\rm kin}$ estimates. The S/N of the synthetic ngVLA observations was clearly sufficient to map the kinetic temperature over large cloud areas, beyond the $N({\rm H}_2)=5\times 10^{22}$\,cm$^{-2}$ threshold used in Fig.~\ref{fig:plot_Tkin}. 
\subsection{Velocity fields} \label{sect:velocity_fields} Analysis in Sect.~\ref{sect:LOS_velocity} showed that the mean radial velocity of observed spectral lines can significantly differ from the actual mass-weighted mean radial velocity in the model cloud. For ideal observations, which were computed at the full resolution of the model, the differences were up to $\pm$1\,km\,s$^{-1}$. This corresponds to a large fraction of the total range of velocities. The total velocity dispersion of our model cloud is $\sim$4\,km\,s$^{-1}$, which is also consistent with observations of many IRDCs \citep{Li2022, Liu2022}. The large differences between the observed and true (mass-weighted) velocities are limited to small regions, typically with large volume densities and large optical depths. The synthetic ngVLA observations and ideal line observations show differences at two distinct scales. At the smallest scales, close to the resolution of the observations, these result from observational noise. In the area with $N({\rm H_2})>5\times 10^{22}$\,cm$^{-2}$, the rms differences are quite small, $\sim$0.1\,km\,s$^{-1}$ and $\sim$0.15\,km\,s$^{-1}$ for ${\rm C^{18}O}$ and ${\rm N_2 H^{+}}$, respectively. These values were measured at scales below 4$\arcsec$, by filtering out the larger-scale signal in the velocity maps. The absolute differences could be several times higher, but these are limited to larger spatial scales ($>4\arcsec$) and are observed to be similar for both molecules. Therefore, based on the similar frequencies of the lines, these are most likely related to the filtering of large-scale emission and other imperfections in the imaging procedures. Further investigations showed that the differences between the radial velocity of spectra and the true gas velocities are caused by temperature variations and optical-depth effects, which change the relative contribution of different LOS regions. 
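The separation of the small-scale velocity differences quoted above (rms at scales below 4$\arcsec$) amounts to a high-pass filtering of the velocity maps. A minimal sketch, assuming maps on a regular grid and Gaussian smoothing with periodic boundaries (function names hypothetical):

```python
import numpy as np

def smooth2d(img, fwhm_pix):
    # Gaussian smoothing via FFT (periodic boundaries; adequate for a sketch).
    sigma = fwhm_pix / np.sqrt(8.0 * np.log(2.0))
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    g = np.exp(-2.0 * np.pi**2 * sigma**2 * (kx**2 + ky**2))
    return np.fft.ifft2(np.fft.fft2(img) * g).real

def smallscale_rms(v_map, cutoff_pix):
    # rms of the velocity differences after structures larger than the
    # cutoff scale have been removed with a high-pass filter.
    highpass = v_map - smooth2d(v_map, cutoff_pix)
    return np.sqrt(np.mean(highpass**2))
```

Applied to the difference map between the synthetic and ideal velocity fields, this isolates the noise-like small-scale component from the larger-scale deviations attributed to the filtering of extended emission.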
The non-LTE effects (i.e., $T_{\rm ex}$ variations in addition to $T_{\rm kin}$ variations) had a smaller impact (Figs.~\ref{fig:plot_VLOS_2}-\ref{fig:plot_VLOS_2_ZOOM}). The synthetic ngVLA observations traced the large-scale velocity structures well, from the beam size up to at least $\sim 4\arcsec$. Thanks to the hyperfine structure, the velocity determination is particularly accurate with ${\rm N_2 H^{+}}$ observations (Fig.~\ref{fig:plot_VLOS_1}). Zooming into the cores, one interesting detail in Fig.~\ref{fig:plot_VLOS_2_ZOOM} is the presence of a filament that is feeding the central core. Although this was clearly identifiable in the 3D model due to its different LOS velocity, it almost completely disappeared from the line-velocity maps of the synthetic observations. This could be attributed to its much lower kinetic (and excitation) temperature. The filament was detected in channel maps (for example in HCO$^{+}$), but only marginally. This is because the width of the structure is below the beam size and the structure is weak compared to other emission along the LOS. The detectability of such kinematic features should be studied further, especially with models that more accurately describe the temperature and chemical composition of such structures. The fact that radial velocities measured from synthetic ngVLA observations are close to those in ideal observations will be important for studies of the accretion associated with cores, especially in the context of high-mass star formation and hub-filament systems. Figure~\ref{fig:H13COp_velocity} shows that ${\rm H^{13}CO^{+}}$(1-0) provides accurate measurements of the radial velocity and velocity gradients, as judged by the comparison with ideal observations. This will be important when gradients are used to detect infall and to estimate infall rates. For example, \citet{Zhou2022_hubs} presented ALMA observations of hub-filament systems in a number of proto-clusters. 
The systematic velocity gradients at scales of a few $0.1$\,pc were up to tens of km\,s$^{-1}$\,pc$^{-1}$, suggesting almost free-fall motions towards the massive hubs. In Fig.~\ref{fig:H13COp_velocity}, 0.1\,pc would correspond to $\sim 5\arcsec$, but the most systematic velocity gradients are seen only close to the resolution limit, in agreement with the lower mass of this system. However, in such complex regions, the estimation of velocity gradients and the corresponding gas-inflow rates is complicated not only by the random inclinations of the structures but also by the presence of many LOS structures, which can lead to high apparent and partly artificial gradients. For example, in Fig.~\ref{fig:H13COp_velocity}, the blue path crosses from diffuse background to a filament, leading to an apparent gradient of $\sim50$\,km\,s$^{-1}$\,pc$^{-1}$ (around the offset of $3 \arcsec$). Gradients should clearly be measured only along well-defined and continuous structures. The risk of artificial gradients, caused by the chance alignment of LOS structures, may also increase towards the central regions of hub-filament systems. In Sect.~\ref{sect:PCA}, we looked briefly at the global velocity statistics with the help of the PCA method. The PCA estimates of the $\delta v \sim R^{\alpha}$ scaling relation were well defined, and the slopes were almost identical for the $^{13}$CO and ${\rm C^{18}O}$ data, with $\alpha=0.78$. The synthetic observations resulted, however, in clearly steeper relationships ($\alpha=1.00$ and $\alpha=1.05$ for $^{13}$CO and ${\rm C^{18}O}$, respectively), again suggesting some effect from the interferometric filtering. Of course, for such analysis of large-scale density or velocity fields, the coverage of low spatial frequencies with additional single-dish data becomes crucial (Sect.~\ref{sect:singleDish}). 
In spite of the S/N differences, and the high-S/N ${\rm C^{18}O}$ observations being limited just to the central filament, the $^{13}$CO and ${\rm C^{18}O}$ values were quite consistent with each other. \subsection{Failure of infall indicators} \label{disc:infall} We compared the collapse indicator $\delta v$ of Eq.~(\ref{eq:infall_dv}) to the actual LOS flows in the model. The latter were quantified with the parameter $\xi$, the density-weighted net mass flow towards the highest LOS density peak (Eq.~(\ref{eq:infall_xi})). The comparison of $\xi$ and $\delta v$ showed little correlation (Figs.~\ref{fig:infall_index_zoom} and \ref{fig:LOS_collapse_LOC}, respectively). This was true even in the case of ideal line observations. The parameter $\delta v$ is based on the picture of a single core, where the excitation temperature increases towards the centre. This leads to an asymmetry, where the optically thick line suffers more self-absorption on its red-shifted side, leading to positive values of $\delta v$ (with the signs used in Eq.~(\ref{eq:infall_dv})). This is clearly not an appropriate description for more complex regions, where the LOS crosses several density peaks. In this case, $\delta v$ depends more on the random superposition of emission from structures with different excitation temperatures and radial velocities. Already in the case of two LOS clumps, the sign of $\delta v$ is likely to depend more on their relative velocities than on the infall within individual clumps. The same complexity also affects the parameter $\xi$, but that parameter is at least based on the true velocity field and concentrates on the main LOS density peak. Appendix~\ref{app:LOS_emission} illustrates the complexity of the density and velocity fields for selected sightlines, where large $\xi$ values indicate a particularly strong inflow motion. We examine here the LOS for the fourth core (counted from north) that is indicated in Fig.~\ref{fig:collapse_spectra}. 
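The composition of an observed spectrum from several LOS regions can be illustrated with a toy slab model, in which the emission of each region is attenuated by the total opacity in front of it. This is only a sketch (a single velocity channel with prescribed excitation temperatures and opacities), not the full non-LTE radiative transfer used for the synthetic observations:

```python
import numpy as np

H, K = 6.626e-27, 1.381e-16          # Planck and Boltzmann constants (cgs)

def j_nu(t, nu):
    """Radiation temperature J_nu(T) = (h nu / k) / (exp(h nu / k T) - 1)."""
    return (H * nu / K) / np.expm1(H * nu / (K * t))

def slab_contributions(t_ex, tau, nu, t_bg=2.73):
    """Per-slab contributions to the observed brightness temperature.
    Slab 0 is nearest to the observer; each slab's emission is attenuated
    by the summed opacity of the slabs in front of it."""
    t_ex, tau = np.asarray(t_ex, float), np.asarray(tau, float)
    fg = np.concatenate(([0.0], np.cumsum(tau)[:-1]))   # foreground opacity
    contrib = j_nu(t_ex, nu) * (1.0 - np.exp(-tau)) * np.exp(-fg)
    total = contrib.sum() - j_nu(t_bg, nu) * (1.0 - np.exp(-tau.sum()))
    return contrib, total
```

For a single opaque slab this reduces to the familiar $T_{\rm b} = J_{\nu}(T_{\rm ex})-J_{\nu}(T_{\rm bg})$, while an opaque foreground slab completely hides the emission of the structures behind it.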
Figure~\ref{fig:plot_LOS_spectra_2} shows how an observed HCO$^{+}$(1-0) spectrum is composed of the emission and absorption at different locations along the full LOS through the model cloud. In the case of the optically thinner ${\rm H^{13}CO^{+}}$, up to 30\% of the local emission is absorbed by foreground layers, but the shape of the total spectrum is only slightly affected by these optical-depth effects. In contrast, HCO$^{+}$ emission can be completely absorbed by foreground structures, and most of the observed emission originates on the front side of the cloud. For individual cores, one can often observe the change from redshifted emission on the near side to more blueshifted emission on the far side. However, several LOS structures contribute to the emission and absorption at any given radial velocity, and the spatial separation of the structures is $>$0.1\,pc, much larger than the size of individual cores. Because of the random radial-velocity offsets between the structures, the observed spectrum does not show a clear blue asymmetry. The figures in Appendix~\ref{app:LOS_emission} further illustrate how different LOS structures contribute to different spectral lines. In particular, Fig.~\ref{fig:LOS_spectra_2} splits a spectrum into its blue-shifted and red-shifted components in the case of another LOS. Also in that case, the dominant core is consistent with the basic assumptions of the $\delta v$ diagnostic (red-shifted emission on the front side and blue-shifted emission on the far side of the core), but this signature is masked by other LOS structures. A similar concrete example was provided by \citet{Zhou2021}, who carried out ALMA observations of the high-mass star-forming clump G286.21+0.17. The clump contains a filament that, based on the blue asymmetry of the single-dish HCO$^{+}$ spectra, had been interpreted to be in global collapse. 
Interferometric observations revealed two separate clumps, and the asymmetry could be seen to be caused by the relative velocity of the clumps rather than by any systematic infall. If all LOS structures were at clearly different radial velocities, the $\delta v$ statistics might be applied to individual velocity components. However, it is difficult to ascertain from observations that one has correctly isolated a single core in velocity space, especially if the object is not fully spatially resolved. Therefore, the $\delta v$ diagnostic will remain more applicable to nearby regions of low-mass star formation, where the LOS confusion is lower and the spatial resolution typically higher. \subsection{Combination of interferometer and single-dish data \label{sect:singleDish} } Interferometric measurements are usually combined with single-dish data to recover information on large spatial scales \citep{Cornwell1993}. We simulated single-dish N$_{\rm 2}$H$^{\rm +}$(1-0) observations for an 8.1$\arcsec$ beam, which corresponds to a 100-m dish such as the Green Bank Telescope (GBT)\footnote{https://greenbankobservatory.org/science/telescopes/gbt/}. The observational noise was set to 0.1\,K, which is realistic for GBT data \citep{Pingel2021}. The single-dish observations were used as a model in the CASA tclean\footnote{https://casa.nrao.edu/docs/taskref/tclean-task.html} procedure, resulting in a spectral cube with information also on the largest scales. As an alternative, we reduced the ngVLA data separately and joined them with the single-dish data with the feathering procedure. These two alternatives should produce comparable results, although in practice differences up to tens of per cent might be observed in individual spectra \citep{Barnes2021}. We used the uvcombine\footnote{https://github.com/radio-astro-tools/uvcombine} tool for the feathering of the ngVLA and single-dish images. 
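In essence, feathering combines the two images in the Fourier domain, weighting the single-dish data at low spatial frequencies and the interferometer data at high spatial frequencies. A toy sketch, assuming both images are on the same grid and in the same units, a Gaussian single-dish beam, and no deconvolution of that beam (unlike full feathering implementations):

```python
import numpy as np

def feather(interferometer, single_dish, sd_fwhm_pix):
    """Replace the low spatial frequencies of the interferometer image with
    those of the single-dish image, weighting by the single-dish beam."""
    sigma = sd_fwhm_pix / np.sqrt(8.0 * np.log(2.0))
    ky = np.fft.fftfreq(interferometer.shape[0])[:, None]
    kx = np.fft.fftfreq(interferometer.shape[1])[None, :]
    w = np.exp(-2.0 * np.pi**2 * sigma**2 * (kx**2 + ky**2))  # SD response
    combined = w * np.fft.fft2(single_dish) + (1.0 - w) * np.fft.fft2(interferometer)
    return np.fft.ifft2(combined).real
```

At $k=0$ the weight is unity, so the total flux (the zero-spacing information missing from the interferometer) comes entirely from the single-dish image.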
Figure~\ref{fig:SD_comparison} shows the results in terms of column densities and sample spectra at the 0.55$\arcsec$ resolution. The results are very similar, whether the ngVLA and single-dish data are reduced together or joined by feathering. The spectral profiles are identical to within $\sim$10\%, and the column densities along the selected stripe are also well correlated. Some larger differences are observed close to the position $C$, where the wide lines and the presence of multiple velocity components cause problems for our analysis, which is based on hyperfine fits with a single velocity component. With the exception of the position $C$, all observations tend to underestimate the true column density. This is despite the fact that the latter values are here again corrected for the spatial abundance variations. Excluding the peak $N({\rm N_2 H^{+}})>2\times 10^{14}$\,cm$^{-2}$ around the position $C$, the differences between the two alternative combinations of ngVLA and single-dish data were $\sim$14\%, with feathering resulting in an only 5\% lower average value. Figure~\ref{fig:SD_comparison} also shows that the difference between the synthetic and the ideal ${\rm N_2 H^{+}}$ observations is smaller than the difference between the ideal observations and the true column densities. This applies both to the mean value and to the variations along the selected stripe. In Sect.~\ref{sect:colden}, column densities were calculated using single lines, because the interferometric filtering appeared to have a different effect on, for example, the optically thin ${\rm C^{18}O}$ and the much more extended $^{13}$CO emission. Figure~\ref{fig:SD_others} shows, after the combination of the ngVLA and single-dish data, the correlations between the true column density (obtained directly from the model) and estimates calculated using the ${\rm C^{18}O}$ vs. $^{13}$CO and the ${\rm H^{13}CO^{+}}$ vs. HCO$^{+}$ line ratios. 
The situation is much improved over the ngVLA-only results in Fig.~\ref{fig:plot_colden_cor}, but the use of the $^{13}$CO vs. ${\rm C^{18}O}$ line ratios results in only a marginal improvement over the simpler assumption of optically thin ${\rm C^{18}O}$ emission. There is also some bias that is still larger in the synthetic observations than in the ideal observations. Rather than having a real physical cause, this might thus result from some imperfections in the data reduction or in the inclusion of the single-dish data. Apart from the systematic errors, the dispersion is small and mostly below 25\%. The plot is limited to within 10$\arcsec$ of the two pointing centres, where the hydrogen column densities are $N({\rm H_2})\sim 10^{23}$\,cm$^{-2}$ or even higher. The estimates calculated from the ${\rm H^{13}CO^{+}}$ vs. HCO$^{+}$ line ratio do not show bias, but the increase in dispersion towards lower column densities is more noticeable. In Fig.~\ref{fig:SD_others}a, the assumed $T_{\rm ex}=15$\,K resulted in quite accurate estimates. With $T_{\rm ex}=25$\,K, the column densities would be overestimated by more than 50\%, the effect being almost the same over the whole map. Thus, a single optically thin line can already provide much information on the relative structures, and, depending on the science case, the addition of good-quality single-dish data can be more important than the more precise excitation-temperature information obtained from line ratios. The emission at the brightest parts of the filament was captured quite well already in the ngVLA-only observations. Section~\ref{sect:velocity_fields} showed that the same conclusion applies to the velocity fields observed towards the densest sub-structures. Therefore, ngVLA data without any additional single-dish data could be sufficient for studies of the individual densest regions. 
On the other hand, single-dish data remain essential for extended sources -- such as the examined IRDC model -- and any studies into the large-scale velocity or density statistics (Sect.~\ref{sect:colden}). \section{Conclusions} \label{sect:conclusions} We have simulated observations of a high-column-density molecular cloud to study the foreseen performance of the ngVLA interferometer for studies of the ISM and the early phases of the star-formation process. As the baseline case, we examined observations with the ngVLA Core antenna configuration and a total observing time of six hours. The comparison of the ngVLA synthetic observations, the corresponding ideal line observations, and the actual 3D cloud model resulted in the following conclusions. \begin{enumerate} \item The ngVLA observations described the general structure of the filamentary cloud accurately. In the measurements of the integrated line intensity (e.g. HCO$^{+}$, ${\rm N_2 H^{+}}$, and ${\rm NH_3}$), the noise does not become significant even close to the maximum angular resolution (down to $\sim$0.5$\arcsec$). \item At high frequencies, observations of our model cloud suffer from some loss of extended emission. For ${\rm NH_3}$ the effect remains small at the scale of the examined cloud ($\sim 15\arcsec \times 40\arcsec$), and the column-density estimates remained accurate to about 30\%, even without the use of single-dish data. \item The kinetic temperatures derived from ${\rm NH_3}$ observations were mostly accurate to within $\sim$1\,K. However, at lower column densities some positive bias was observed, which could be attributed to some loss of extended signal even at the ${\rm NH_3}$(1,1) frequency. \item The synthetic ngVLA observations provided an accurate image of the kinematics at intermediate scales. For example, velocity gradients associated with the main cores could be traced mostly with a precision better than $\sim$0.3\,km\,s$^{-1}$. 
At higher frequencies (e.g., the ${\rm C^{18}O}$ and ${\rm N_2 H^{+}}$ lines), the radial-velocity data show low-spatial-frequency deviations, because the target cloud is larger than the maximum recoverable scale. The loss of low-spatial-frequency information is reflected in the global statistics, but the effects remained moderate, for example, in the PCA analysis of the $^{13}$CO (in spite of its very extended emission) and ${\rm C^{18}O}$ (in spite of its much lower signal-to-noise ratio) lines. \item The ngVLA observations traced the kinematics within the cores down to the resolution limit. However, at the smallest scales some important features remain undetectable, either because of beam dilution or because their spectral signatures (even in ideal observations) are weak. The emission from some dense regions was masked by the emission and absorption of other LOS regions, either because of temperature differences or because of radiative-transfer effects. \item We compared standard collapse indicators to the actual infall motions in the 3D model cloud. The blue asymmetry of optically thick lines was not significantly correlated with the actual LOS motions in the cloud. The spectra were complex, containing contributions from many LOS density peaks. This makes it difficult to unambiguously interpret any observed spectral asymmetries in terms of a local collapse. \item The addition of single-dish data recovers the lost large-scale emission, and, for example, the synthetic ${\rm N_2 H^{+}}$ spectra were found to be very similar to the ideal observations. However, there can still be significant differences between the true column densities and the estimates, even if the latter were derived from ideal observations. The complex velocity structure can lead to large errors or even a complete failure of the column-density estimation. \end{enumerate} \begin{acknowledgements} EM is funded by the University of Helsinki doctoral school in particle physics and astrophysics (PAPU). 
Tie Liu acknowledges the support of the National Natural Science Foundation of China (NSFC) through grants No.12073061 and No.12122307, the international partnership program of the Chinese Academy of Sciences through grant No.114231KYSB20200009, the Shanghai Pujiang Program 20PJ1415500, and the science research grants from the China Manned Space Project with no. CMS-CSST-2021-B06. We thank Troels {Haugb{\o}lle} for providing the data for the MHD simulations, and we acknowledge PRACE for awarding access to Curie at GENCI@CEA, France to carry out those simulations. \end{acknowledgements} \bibliographystyle{aa} \bibliography{my.bib} \appendix \section{Maps of dust emission and temperature} \label{sect:Tdust} The line simulations assumed that the kinetic temperature is equal to the dust temperature (Sect.~\ref{sect:simulations}). Figure~\ref{fig:plot_Tdust} shows for reference the maps of the 250\,$\mu$m surface brightness and the dust colour temperature. The temperatures are estimated with modified blackbody fits to the synthetic 160, 250, 350, and 500\,$\mu$m surface-brightness maps, giving equal weight to the data in all four bands. The dust opacity spectral index was assumed to be a constant $\beta$=1.8, which is close to the value of the adopted dust model. \section{Origin of observed emission} \label{app:LOS_emission} Figure~\ref{fig:PPP_PPV} shows the main structures of the 3D density field and the kinematic structures seen in the PPV cube of the synthetic $^{13}$CO data. The density values were thresholded at $n=5\times 10^5$\,cm$^{-3}$ and the detected regions, each with an individual label, were further extended to the region with $n>3\times 10^5$\,cm$^{-3}$. Regions with fewer than 100 cells were removed, the cell size corresponding to 0.25$\arcsec$. The $^{13}$CO data were analysed using the results from Gaussian fits with up to three velocity components, using only the components with brightness temperatures exceeding 3.5\,K. 
These provide discrete points in the PPV space, and neighbouring points were further connected if they fulfilled the criterion \begin{equation} \sqrt{ \left(\frac{\Delta x}{\delta x}\right)^2 + \left(\frac{\Delta v}{\delta v}\right)^2} < 2. \end{equation} Here $\Delta v$ and $\Delta x$ are the distances along the velocity and spatial coordinates. The scales were set to $\delta v = 0.2$\,km\,s$^{-1}$ and to $\delta x$ equal to half of the beam FWHM. Each connected PPV region (velocity-coherent region) was given a unique label. Figure~\ref{fig:PPP_PPV} shows the extracted PPP and PPV regions as stereographic images. Because the usual collapse indicator $\delta v$ showed little correlation with the actual infall motions in the model cloud, we examined further the contributions of different LOS regions to the observed spectra. Figure~\ref{fig:collapse_spectra} shows spectra towards five positions with large $\xi$ values. These all contain several velocity components. We examine further the northernmost LOS, marked with a cross in Fig.~\ref{fig:collapse_spectra}. Figure~\ref{fig:LOS_spectra_1} plots the density, kinetic temperature, and relative abundance along the full LOS. The LOS consists of a number of smaller density peaks, some of which are close in density to the strongest one. The lower frames of Fig.~\ref{fig:collapse_spectra} show the contributions of different LOS regions to the intensity in the observed spectrum. This takes into account not only the local emission but also how the intensity is attenuated by foreground absorption. There are several density peaks with almost equal contributions to the observed spectra. In Fig.~\ref{fig:LOS_spectra_2}, we examine further two pairs of spectra: ${\rm C^{18}O}$ and $^{13}$CO on the one hand, and ${\rm H^{13}CO^{+}}$ and HCO$^{+}$ on the other. We separate the lines into their red-shifted and blue-shifted parts using the mean velocity of the optically thinner line.
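The connection criterion above amounts to a friends-of-friends grouping in PPV space. The following is a minimal illustrative sketch (our own implementation, not the code used for the analysis; the example points are hypothetical):

```python
import numpy as np

def connect_ppv(points, dx_scale, dv_scale, limit=2.0):
    """Group PPV points (x, y, v) whose normalised distance
    sqrt((dx/dx_scale)^2 + (dv/dv_scale)^2) is below `limit`.
    Returns an integer group label per point (friends-of-friends)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    labels = np.arange(n)              # each point starts in its own group

    def find(i):                       # union-find with path compression
        while labels[i] != i:
            labels[i] = labels[labels[i]]
            i = labels[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            dx = np.hypot(pts[i, 0] - pts[j, 0], pts[i, 1] - pts[j, 1])
            dv = abs(pts[i, 2] - pts[j, 2])
            if np.hypot(dx / dx_scale, dv / dv_scale) < limit:
                labels[find(i)] = find(j)

    return np.array([find(i) for i in range(n)])

# Example: the first two points are close in position and velocity,
# the third is offset by ~2 km/s and ends up in its own group.
labels = connect_ppv([(0.0, 0.0, 0.0), (0.5, 0.0, 0.1), (0.2, 0.0, 2.0)],
                     dx_scale=1.0, dv_scale=0.2)
```

With $\delta x$ set to half the beam FWHM and $\delta v=0.2$ km s$^{-1}$ as in the text, each resulting group corresponds to one velocity-coherent PPV region.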
The main density peaks (indicated with arrows) are associated with red-shifted emission on the front side and blue-shifted emission on the back side, which is consistent with collapse motions and agrees with the assumptions of the $\delta v$ diagnostic. However, in the final spectra such local effects are masked by the superposition of emission from many density peaks. This supports a scenario where, even if there were systematic inflow at all scales, the values of $\delta v$ would depend more on the relative density, excitation, and radial velocity of the individual density structures than on the emission asymmetry of individual cores. \section{Column density estimates} \label{app:colden} The observed brightness temperatures are \begin{equation} T_{\rm b} = \eta \, [ J_{\nu}(T_{\rm ex})-J_{\nu}(T_{\rm bg}) ] \, (1-e^{-\tau}), \end{equation} where we assume an efficiency (including beam filling) of $\eta=1$ and a background temperature of $T_{\rm bg}$=2.73\,K (the same as in the radiative transfer simulations), and the function $J_{\nu}$ is defined as \begin{equation} J_{\nu}(T) = \frac{h \nu / k}{e^{h \nu / k T}-1}. \end{equation} For the ${\rm NH_3}$(1,1) and ${\rm N_2 H^{+}}$ lines, the optical depth $\tau$ and the excitation temperature $T_{\rm ex}$ are obtained directly by fitting the first equation to the observed spectra, simultaneously including all hyperfine components with their known relative intensities. The optical depth can also be estimated from the intensity ratio of lines with a known opacity ratio. This is often done with the ${\rm C^{18}O}$ and $^{13}$CO lines, and we apply the method to the HCO$^{+}$ and ${\rm H^{13}CO^{+}}$ lines. Assuming the same excitation temperature, beam filling, and line width for both lines leads to the expression \begin{equation} \frac{T_{\rm b}(A)}{T_{\rm b}(B)} = \frac{1-e^{-r \times \tau(B)}}{1-e^{-\tau(B)}} \end{equation} for the ratio of the measured brightness temperatures \citep{Myers1983}.
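For a given opacity ratio $r$, the ratio equation can be inverted numerically for $\tau(B)$, for example by bisection, since the ratio decreases monotonically from $r$ (optically thin) to 1 (fully saturated). A sketch, with illustrative example values of our own choosing:

```python
import math

def tau_from_ratio(Tb_ratio, r, tau_lo=1e-6, tau_hi=100.0, tol=1e-10):
    """Solve (1 - exp(-r*tau)) / (1 - exp(-tau)) = Tb_ratio for tau = tau(B),
    where Tb_ratio = T_b(A)/T_b(B) and tau(A) = r * tau(B), r > 1."""
    def f(tau):
        return (1.0 - math.exp(-r * tau)) / (1.0 - math.exp(-tau)) - Tb_ratio

    lo, hi = tau_lo, tau_hi            # f(lo) > 0 > f(hi) brackets the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical example: opacity ratio r = 60 and a measured brightness
# ratio of 10; the rarer isotopologue stays thin while the main line
# is strongly saturated (tau(A) = 60 * tau_B).
tau_B = tau_from_ratio(10.0, 60.0)
```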
With the assumed ratio $r$ between the optical depths of the two species $A$ and $B$, the optical depths can be solved. Once the optical depth $\tau_{\nu}$ and the excitation temperature $T_{\rm ex}$ are known, the column density of molecules in the upper energy level of the transition can be calculated as \begin{equation} N_u = \frac{8 \pi \nu^3}{c^3 A_{u l}} \frac{\int \tau_{\nu} d v}{e^{\frac{h \nu}{k T_{\rm ex}}} - 1}, \end{equation} or, using the peak optical depth $\tau_{0}$ and the FWHM line width $\Delta v$ from Gaussian fits, \begin{equation} N_u = \frac{4 \pi^{3 / 2}}{\sqrt{\ln 2}} \frac{\nu^3 \tau_{0} \Delta v}{c^3 A_{u l} \left( e^{\frac{h \nu}{k T_{\rm ex}}} - 1 \right)} . \end{equation} The total column density of the molecules is obtained by scaling with the ratio of the sum of the populations of all energy levels to the population of the level $u$, \begin{equation} \Gamma = \frac{Q}{g_u e^{-E_{\rm u}/kT_{\rm ex}}} = \frac{\sum_{i} g_i e^{-E_i/k T_{\rm ex}} }{g_u e^{-E_{\rm u}/kT_{\rm ex}}}, \end{equation} assuming the same excitation temperature for all transitions. Here $g_i$ are the statistical weights and $E_i$ the energies of the levels, and $Q$ is the partition function. If the emission is optically thin, the column density estimate can be written in terms of an assumed excitation temperature $T_{\rm ex}$ and the integrated line intensity $W$ \citep{Caselli2002b}, \begin{equation} N_{\rm tot} = \frac{8 \pi \nu^3 W Q}{g_u c^2 A_{ul}} \frac{e^{E_l/k T_{\rm ex}}}{1-e^{-h \nu / k T_{\rm ex}}} \frac{1}{J_{\nu}(T_{\rm ex})-J_{\nu}(T_{\rm bg})}. \end{equation} We use this in calculations based on Gaussian fits, where \begin{equation} W = \sqrt{\frac{\pi}{4 \log 2}} \, T_{\rm b} \, \Delta v.
\end{equation} In particular, for the $J = 1 \rightarrow 0$ transition of C$^{18}$O, Mangum \& Shirley (2015) provide the formula \begin{equation} N_{\rm tot} \left( {\rm C^{18}O} \right) = \frac{2.48 \cdot 10^{14} \, (T_{\rm ex} + 0.88) \, e^{T_0 / T_{\rm ex}} \int T_b \, dv \, {\rm (km\,s^{-1})}} {\eta \, [e^{T_0 / T_{\rm ex}} - 1] \, [J_{\nu} (T_{\rm ex}) - J_{\nu} (T_{\rm bg})]}, \end{equation} where $T_0 = \frac{h \nu}{k} = 5.27$\,K. Similarly, for optically thin ${\rm N_2H^+}$ emission, \begin{equation} N_{\rm tot} \left( {\rm N_{2}H^{+}} \right) = \frac{6.25 \cdot 10^{15} \, e^{T_0 / T_{\rm ex}} \, (T_b / R_i) \, \Delta v}{\nu \, \eta \, [e^{T_0 / T_{\rm ex}} - 1] \, [J_{\nu} (T_{\rm ex}) - J_{\nu} (T_{\rm bg})]} \, [{\rm cm}^{- 2}]. \end{equation} In these formulas, the frequency $\nu$ is given in units of GHz and the line width $\Delta v$ in units of km\,s$^{- 1}$, and the result is in units of cm$^{-2}$. In the ${\rm N_2H^{+}}$ equation, $T_b$ refers to the brightness temperature of one of the hyperfine components, which is scaled with the relative intensity $R_i$ of that component ($\sum_i R_i \equiv 1$). However, if the line is not optically thin, one can estimate $T_{\rm ex}$ and $\tau$ by a simultaneous fit to all hyperfine components, as mentioned above. The hyperfine fit to the ammonia ${\rm NH}_3 (1, 1)$ spectra also provides $T_{\rm ex}$ and $\tau$. Assuming Gaussian line parameters, the integration of the optical-depth profile gives \begin{equation} N_u ({\rm NH}_3 (1, 1)) = 1.6 \cdot 10^{13} \, \frac{\tau (1, 1, m) \, \Delta v}{e^{T_0 / T_{\rm ex}} - 1} \end{equation} for the column density of the upper level, with $T_0 = 1.14$\,K and the line width $\Delta v$ given in units of km\,s$^{- 1}$ \citep{Harju1993}.
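As a numerical illustration, the quoted C$^{18}$O column-density formula can be evaluated directly; the input values below ($T_{\rm ex}=10$ K, $W=1$ K km s$^{-1}$, $\eta=1$) are hypothetical:

```python
import math

def J(T, T0):
    """Radiation temperature J_nu(T) for a transition with T0 = h*nu/k [K]."""
    return T0 / (math.exp(T0 / T) - 1.0)

def N_c18o(Tex, W, eta=1.0, Tbg=2.73):
    """Optically thin C18O(1-0) column density [cm^-2], following the
    Mangum & Shirley (2015) expression quoted in the text.

    Tex : excitation temperature [K]
    W   : integrated intensity, int T_b dv [K km/s]
    """
    T0 = 5.27                                  # h*nu/k for the 109.78 GHz line
    return (2.48e14 * (Tex + 0.88) * math.exp(T0 / Tex) * W
            / (eta * (math.exp(T0 / Tex) - 1.0) * (J(Tex, T0) - J(Tbg, T0))))

# Hypothetical inputs: Tex = 10 K, W = 1 K km/s -> N ~ 1e15 cm^-2
N = N_c18o(10.0, 1.0)
```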
As in the case of ${\rm N_2H^{+}}$, if the emission is too weak for the hyperfine fit, the column density can still be estimated from the integrated brightness temperature, assuming the emission is optically thin, \begin{equation} N_u = \frac{1}{\eta} \xi \frac{1}{1 - \frac{\exp (T_0 / T_{\rm ex}) - 1}{\exp (T_0 / T_{{\rm bg}}) - 1}} \int T_b d v \quad [{\rm cm}^{- 2}] . \end{equation} The NH$_3$(2, 2) transition can be handled in the same way, integrating only over the main component. The numerical factor $\xi$ is $1.3 \cdot 10^{13}$ for the NH$_3$(1,1) line and $6.2 \cdot 10^{12}$ for the NH$_3$(2,2) line. The total column density of NH$_3$(1,1) is obtained as the sum of the populations of the upper and lower levels, \begin{equation} N ({\rm NH}_3 (1, 1)) = N_u (1 + e^{T_0 / T_{{\rm ex}}}). \end{equation} The rotation temperature $T_{12}$ is obtained from the column density ratio \begin{equation} \frac{N (2, 2)}{N (1, 1)} = \frac{5}{3} e^{-\Delta E / k T_{12}}. \end{equation} \citet{Walmsley_Ungerechts_1983} provide the transformation from $T_{12}$ to the kinetic temperature, based on a three-level model of the (1,1), (2,2), and (2,1) levels of para-ammonia. Finally, the total ammonia column density can be estimated as \begin{equation} N ({\rm NH}_3) = N (1, 1) \left\{ \frac{1}{3} e^{23.4 / T_{12}} + 1 + \frac{5}{3} e^{- 41.5 / T_{12}} + \frac{14}{3} e^{- 101.5 / T_{12}} \right\}, \end{equation} assuming that only the metastable levels are populated \citep{Ungerechts1986, Harju1993}. \section{Fits of spectral profiles} \label{sect:spectral_fits} Most of the analysis was based on Gaussian fits or, in the case of ${\rm N_2 H^{+}}$ and ${\rm NH_3}$, on the simultaneous fitting of all hyperfine components, assuming a single velocity component. All Gaussian fits were also repeated with up to three velocity components.
The calculations were made with our own GPU-accelerated fitting routine, with a run time of approximately 1\,ms per spectrum (less for single-component Gaussians, slightly more for multiple velocity components or hyperfine spectra). In Sect.~\ref{sect:singleDish}, the pyspeckit program \citep{pyspeckit} was used for comparison and to confirm the correct performance of our own routine. When a spectrum contains multiple velocity components, a fit may converge to a wrong solution, failing to fit the main velocity feature or, in fits with $N_{\rm C}$ velocity components, failing to fit the $N_{\rm C}$ most important features. Each fit was therefore repeated four times, using different initial parameter values and keeping the results of the fit with the lowest $\chi^2$ value. Figure~\ref{fig:fit_c18o_CASA} shows examples of fits of ${\rm C^{18}O}$ spectra towards the northern core, using 1--3 velocity components. Not all fits are perfect (e.g., the three-component fits sometimes miss one of the three most important features, as in the second frame of the plot). On the other hand, even the single-component fits usually approximate the total emission well, biasing the column-density estimates by much less than 50\%. The presence of multiple velocity components causes problems especially for the hyperfine fits because, without additional constraints, some components may converge to an unphysical solution (e.g. a combination of a very low $T_{\rm ex}$ and a very large optical depth). This is not possible in the case of a single component, which alone must match the observed spectrum. Conversely, if a spectrum that contains multiple velocity components is fitted with a model that contains only a single velocity component, the results are again biased. Figure~\ref{fig:test_HF_fit_MC2} shows how the recovered optical depth and column density depend on the velocity difference between two equally strong velocity components.
In this example, the original optical depth (the sum over the hyperfine components) is ten, and the velocity difference is increased from zero to 4.0\,km\,s$^{-1}$. Both components have (before optical-depth effects) a line width of $\Delta v=$1.0\,km\,s$^{-1}$. As the velocity components move further apart and the fitted spectrum falls between them, the optical depth is increasingly underestimated. On the other hand, the estimated column density, which also depends on both $T_{\rm ex}$ and $\Delta v$, remains remarkably accurate. For large velocity differences, $\delta v \ga 2$\,km\,s$^{-1}$, we occasionally saw alternative fits that indicated much higher optical depths. These can be understood as attempts to match the two velocity components with a single wide, flat-topped (i.e. completely saturated) spectral profile. These solutions tended to have higher $\chi^2$ values than the low-$\tau$ solutions, such as the one shown in Fig.~\ref{fig:test_HF_fit_MC2}, suggesting that the alternative solutions are caused by poor convergence or a local $\chi^2$ minimum. The difference in the $\chi^2$ values between the good fit and the worse alternative was only at the 10\% level. Therefore, one might encounter more such bad fits (possibly even corresponding to the global $\chi^2$ minimum) when the signal-to-noise ratio of the observations is lower. They would then result in both the optical depth and the column density being overestimated, potentially even by a factor of several.
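The restart strategy described in this appendix can be sketched with a standard least-squares fit that is repeated from several initial guesses, keeping the solution with the lowest $\chi^2$. This is a simplified single-component stand-in using scipy, not the GPU routine used for the analysis; the test spectrum is synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, T0, v0, fwhm):
    """Gaussian line profile parametrised by peak, centre, and FWHM."""
    return T0 * np.exp(-4.0 * np.log(2.0) * (v - v0) ** 2 / fwhm ** 2)

def fit_with_restarts(v, Tb, n_restarts=4, seed=1):
    """Fit a single Gaussian, restarting from different initial values
    and keeping the fit with the lowest chi^2 (unit weights)."""
    rng = np.random.default_rng(seed)
    best, best_chi2 = None, np.inf
    for k in range(n_restarts):
        if k == 0:                      # moment-based first guess
            p0 = [Tb.max(), v[np.argmax(Tb)], 1.0]
        else:                           # randomised restarts
            p0 = [Tb.max() * rng.uniform(0.5, 1.5),
                  rng.uniform(v.min(), v.max()),
                  rng.uniform(0.3, 3.0)]
        try:
            p, _ = curve_fit(gauss, v, Tb, p0=p0, maxfev=5000)
        except RuntimeError:
            continue                    # this restart did not converge
        chi2 = float(np.sum((Tb - gauss(v, *p)) ** 2))
        if chi2 < best_chi2:
            best, best_chi2 = p, chi2
    return best, best_chi2

# Noiseless synthetic spectrum with one component at v0 = 1.0 km/s.
v = np.linspace(-5.0, 5.0, 201)
Tb = gauss(v, 3.0, 1.0, 1.2)
p, chi2 = fit_with_restarts(v, Tb)
```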
Title: A lensed protocluster candidate at $z=7.66$ identified in JWST observations of the galaxy cluster SMACS0723-7327
https://export.arxiv.org/pdf/2208.04930
\title{A lensed protocluster candidate at $z=7.66$ identified in JWST observations of the galaxy cluster SMACS0723-7327} \titlerunning{Protocluster candidate at $z=7.66$} \author{N. Laporte \inst{1,2} \and A. Zitrin \inst{3} \and H. Dole \inst{4} \and G. Roberts-Borsani \inst{5} \and L. J. Furtak \inst{3} \and C. Witten \inst{6} } \institute{ Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK \email{nl408\@cam.ac.uk} \and Cavendish Laboratory, University of Cambridge, 19 JJ Thomson Avenue, Cambridge CB3 0HE, UK \and Physics Department, Ben-Gurion University of the Negev, P.O. Box 653, Be’er-Sheva 8410501, Israel \and Université Paris-Saclay, CNRS, Institut d'Astrophysique Spatiale, 91405, Orsay, France \and Department of Physics and Astronomy, University of California, Los Angeles, 430 Portola Plaza, Los Angeles, CA 90095, USA \and Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK } \date{Received 9 August 2022; Accepted 3 October 2022} \abstract {According to the current paradigm of galaxy formation, the first galaxies likely formed within large dark matter haloes. The fragmentation of these massive haloes led to the formation of galaxy protoclusters, which are usually composed of one to a few bright objects, surrounded by numerous fainter (and less massive) galaxies. These early structures could have played a major role in reionising the neutral hydrogen within the first billion years of the Universe, especially if their number density is significant. } {Taking advantage of the unprecedented sensitivity reached by the \textit{James Webb Space Telescope (JWST)}, galaxy protoclusters can now be identified and studied in increasing numbers at $z\geq6$. Characterising their contribution to the UV photon budget could supply new insights into the reionisation process.
} {We analyse the first JWST dataset behind SMACS0723-7327 to search for protoclusters at $z\geq6$, combining the available spectroscopic and photometric data. We then compare our findings with semi-analytical models and simulations. } {In addition to two bright galaxies ($\leq$26.5 AB in F277W), separated by $\sim$11\arcsec and spectroscopically confirmed at $z_{spec}=7.66$, we identify 6 additional galaxies with similar colors in a $\theta\sim20$\arcsec radius around these (corresponding to R$\sim60-90$ kpc in the source plane). Using several methods, we estimate the mass of the dark matter halo of this protocluster, $\sim$3.3$\times$10$^{11}$M$_{\odot}$, accounting for magnification, consistent with various predictions. The physical properties of all protocluster members are also in excellent agreement with what has been previously found at lower redshifts: star-formation main sequence and protocluster size. This detection adds to just a few protoclusters currently known in the first billion years of the universe. Such $z \ge 7$ galaxy protoclusters may play an important role in cosmic reionisation.}% {} \keywords{ Galaxy: formation -- Galaxies: distances and redshifts -- Galaxies: groups: general } \section{Introduction} Understanding the formation and evolution of the first population of galaxies a few hundred million years after the Big Bang is one of the most active topics of current extragalactic astronomy. For decades, instruments have been built to push our observational limits even further. The current most distant and detailed picture of the Universe, the Cosmic Microwave Background (CMB), was obtained by the \textit{Planck} mission \citep{2020A&A...641A...6P}. It shows that, already 380,000 years after the Big Bang, the matter density of the Universe was inhomogeneously distributed, indicating that small-amplitude density fluctuations were already present in the early phase of the Universe.
These fluctuations grow, and eventually the denser regions collapse to form the first bound objects \citep{2011ARA&A..49..373B}. Moreover, the first dark matter haloes undergo a process of fragmentation, suggesting that the most massive galaxies may have formed in overdense regions, so-called protoclusters \citep{2010ApJ...719..229G}. Recent $N$-body simulations and semi-analytic models demonstrate that the first protoclusters may have contributed up to $\sim$50\% of the cosmic star formation rate density at $z\sim10$ \citep{2017ApJ...844L..23C}. Therefore, determining the number density of $z\geq6$ protoclusters could supply new insights into the reionisation process. A natural method of identifying protoclusters at $z\geq6$ is to search for overdensities of photometrically selected dropout galaxies at similar high redshifts. For example, using \textit{Hubble} Frontier Fields data \citep{2017ApJ...837...97L}, \citet{2014ApJ...795...93Z} identified a likely protocluster of several galaxies at $z\sim8$, one of which was later targeted and found to show UV and FIR emission lines placing it at $z=8.38$ \citep{2017ApJ...837L..21L}. Searching for galaxies with Lyman-$\alpha$ emission at similar redshifts is another successful route to identifying early protoclusters. Since the neutral hydrogen surrounding galaxies at $z\geq6$ makes the detectability of this line very low at high redshift \citep{2017A&A...608A.123D}, a Ly$\alpha$ detection from a galaxy at $z\geq6$ suggests that the galaxy sits in a large enough ionised bubble, the result of either the high ionising power of the galaxy itself or the cumulative contribution of several objects at the same redshift \citep{2022arXiv220701629R}. Using the \textit{Hubble} Space Telescope (HST) and deep spectroscopic follow-up campaigns, several groups have identified large ionised bubbles up to $z\sim$8.68 (e.g.
\citealt{2016ApJ...818L...3C,2022A&A...662A.115C}, \citealt{2020ApJ...891L..10T}, \citealt{2021arXiv211207675L}, \citealt{Larson2022ApJ...930..104L}) with the detection of Ly-$\alpha$ in bright galaxies. However, the sensitivity of HST is not sufficient to search for much fainter objects in the local environment of these bright galaxies. The successful launch of the JWST on December 25, 2021 from Europe's Spaceport in French Guiana has opened a new window for the study of protoclusters. Its unprecedented sensitivity allows the community to identify and spectroscopically confirm not only bright galaxies, but also fainter galaxies at similar redshifts. On July 12th 2022, the first images and spectra obtained by \textit{Webb} were released, and preliminary analyses show that galaxies had already formed at $z\geq12$ (e.g. \citealt{2022arXiv220712474F}, \citealt{2022arXiv220709434N}, \citealt{2022arXiv220712356D}), reinforcing the idea that protoclusters of galaxies may already be in place at $z\geq$10. In this paper, we analyse the first dataset released by the JWST behind the lensing cluster SMACS0723-7327 to search for protoclusters at $z\geq$6. In section \ref{sec:search}, we describe our method, which leads to the identification of a protocluster at $z=7.66$ using NIRSpec, NIRISS and NIRCam data. Then, we determine the physical properties of the protocluster members as well as the global properties of the protocluster (Section \ref{sec:properties}). Finally, in section \ref{sec:implication} we discuss the implications of our findings for the reionisation process. Throughout this paper we assume a standard $\Lambda$CDM cosmology with parameters from \citet{2020A&A...641A...6P}. All magnitudes are in the AB system \citep{1983ApJ...266..713O}.
\section{Search for a protocluster behind SMACS0723} \label{sec:search} The first JWST dataset included NIRCam images in the F090W, F150W, F200W, F277W, F356W and F444W filters; MIRI images in the F770W, F1000W, F1500W and F1800W filters; NIRSpec spectra in F170LP and F190LP; as well as NIRISS spectra obtained with the F115W and F200W filters. We first look at the NIRSpec spectra, whose integration time is 2.45\,hrs in each filter. We use the publicly available level-2 data and visually inspect the 1D spectra using \texttt{Jdaviz}\footnote{https://doi.org/10.5281/zenodo.6824713}. The spectroscopic redshift is obtained by fitting a list of nebular lines to the brightest detected lines. Two objects among the 35 observed have similar redshifts of $z=7.665$ and $z=7.659$ (hereafter SMACS0723\_PC\_6 and SMACS0723\_PC\_7) and show several strong emission lines such as [OIII]$\lambda \lambda$4959, 5007 and [OII]$\lambda \lambda$3727, 3729 (Figure \ref{fig:spectra}). Our redshift measurements are consistent with what has been previously measured by other groups (e.g. \citealt{2022arXiv220712375C}, \citealt{2022arXiv220708778C}, \citealt{2022arXiv220712338A}, \citealt{2022arXiv220713693K}, \citealt{2022arXiv220714265T}). Theoretical studies demonstrate that at $z\sim$7 the size of a protocluster is below 10 comoving Mpc, corresponding to $\sim$1 physical Mpc \citep{2017ApJ...844L..23C}. However, the protocluster core, where the large majority of galaxies is expected, is much smaller. For example, \citet{2011Natur.470..233C} identified a protocluster at $z=$5.3 with a core size of 0.14 physical Mpc. While it appears complicated to entirely cover a protocluster at such high redshift with the first JWST dataset, one can easily search for a protocluster core. In the following, we search for galaxies with similar colors in a 40\arcsec$\times$40\arcsec box centered on the two galaxies identified at $z=7.66$.
Moreover, to increase the number of constraints on the SED, we combine the JWST/NIRCam and MIRI data with HST/ACS and WFC3 data. To define the color-color criteria needed to select galaxies at $z\sim$7.66, we use BAGPIPES \citep{2018MNRAS.480.4379C} to simulate galaxies whose parameters range from 0.0 to 10 for the redshift, from $10^{6}$ to $10^{10}\,M_{\odot}$ for the stellar mass, and from -4.0 to -0.5 for the ionisation parameter. The dust attenuation follows \citet{2000ApJ...533..682C} and the dust reddening ranges from $A_v$=0.0 to 2.0 mag. IGM attenuation is included and modelled from \citet{2014MNRAS.442.1805I}. We assume 4 different star formation histories (SFHs: constant, burst, delayed, and a combination of a constant SFH with a young burst), and simulate a total of 40,000 galaxies. To account for the depth difference between the JWST and HST images, we define criteria based on NIRCam filters only. We obtain the following criteria to select objects at $z\ge$7 (Figure \ref{fig:color}): \begin{align} F090W - F150W & > 1.1 \\ F150W - F200W & < 0.3 \end{align} We combine these criteria with non-detection criteria ($<2\sigma$) in the HST/ACS filters F435W, F606W and F814W and detection criteria ($>5\sigma$) in F150W, F160W and F200W. Colors are measured on PSF-matched images degraded to the resolution of F200W (0.07\arcsec), whereas the non-detection criteria are measured on the original images. We identify 6 more objects in the 40\arcsec$\times$40\arcsec box (Figure~\ref{fig:FoV}). The photometry of all candidates is reported in Table~\ref{tab:photometry}. The SMACS0723 ERO data set also includes NIRISS direct imaging and wide-field slitless spectroscopy (WFSS) in the F115W and F200W filters, which affords sufficient wavelength coverage to observe Ly$\alpha$ at $z>7.2$ or, e.g., H$\beta$, [OIII]$\lambda$5007 and/or H$\alpha$ for $z\sim2-3$ galaxies.
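The dropout selection described above can be expressed as a simple cut on a photometric catalogue. A sketch, where the dictionary keys and the detection significances are hypothetical inputs:

```python
def is_z7_candidate(mags, sigma_det):
    """Apply the colour and detection criteria quoted in the text.

    mags      : dict of AB magnitudes per filter, e.g. {'F090W': 29.0, ...}
    sigma_det : dict of detection significances per filter

    Colour cuts: F090W - F150W > 1.1 and F150W - F200W < 0.3,
    plus non-detections (<2 sigma) in ACS F435W/F606W/F814W and
    detections (>5 sigma) in F150W, F160W and F200W.
    """
    color_ok = (mags['F090W'] - mags['F150W'] > 1.1 and
                mags['F150W'] - mags['F200W'] < 0.3)
    undetected_blue = all(sigma_det[b] < 2.0 for b in ('F435W', 'F606W', 'F814W'))
    detected_red = all(sigma_det[b] > 5.0 for b in ('F150W', 'F160W', 'F200W'))
    return color_ok and undetected_blue and detected_red

# A strong F090W dropout with a flat colour redward of the break:
cand = is_z7_candidate(
    {'F090W': 29.0, 'F150W': 26.4, 'F200W': 26.3},
    {'F435W': 0.5, 'F606W': 1.1, 'F814W': 0.3,
     'F150W': 12.0, 'F160W': 8.0, 'F200W': 15.0})
```

In practice the magnitudes would come from the PSF-matched catalogue and the non-detection significances from the original images, as described in the text.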
We reduce and analyse the data set as described by \citet{2022arXiv220711387R}, ensuring the modelling and subtraction of neighbouring sources and the optimal extraction of 1D spectra using the 2D source profiles of the galaxies. One galaxy (SMACS0723\_PC\_8) resides outside of the NIRISS WFSS footprint, while SMACS0723\_PC\_2 and SMACS0723\_PC\_1 are too faint to be detected in the pre-imaging. We visually inspect the resulting spectra of the remaining galaxies (SMACS0723\_PC\_3, SMACS0723\_PC\_4, SMACS0723\_PC\_5, SMACS0723\_PC\_6 and SMACS0723\_PC\_7) and find no clear emission lines indicative of Ly$\alpha$ or of low-$z$ interlopers. Given the spectroscopic verification of SMACS0723\_PC\_6 and SMACS0723\_PC\_7, the lack of emission lines suggests the galaxies have not carved out sufficiently large ionised bubbles for Ly$\alpha$ to escape, while the absence of strong lines in SMACS0723\_PC\_6 and SMACS0723\_PC\_7 is supportive of such a hypothesis rather than of a low-$z$ interloper solution. \noindent With these numbers in hand, we can estimate the overdensity parameter as defined in \citet{2014A&A...568A...1M}: \begin{equation} \delta = \frac{\rho}{\bar{\rho}}-1, \end{equation} where $\rho$ is the number of objects identified and $\bar{\rho}$ the number of objects expected from the shape of the UV luminosity function in our search box. To obtain the latter, one first needs to estimate the area explored by our survey, by masking all bright objects (stars and obvious low-$z$ galaxies) and by correcting each pixel for the magnification by the foreground galaxy cluster SMACS0723-7327. The $z\sim$8 UV LF published in \citet{2022arXiv220511526B} is defined between $z\sim$7.5 and $z\sim$8.5; we therefore use this effective area to estimate the volume explored within our search box. We find an overdensity parameter of $\delta=4.0^{+2.4}_{-1.6}$.
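The overdensity parameter and its small-number uncertainty can be sketched as follows. The expected count of 1.6 below is illustrative only (chosen so that $\delta=4.0$ as in the text; the actual value would come from integrating the UV LF over the effective volume), and the Poisson confidence limits use the Gehrels (1986) approximations, which is our choice of error model:

```python
import math

def overdensity(n_obs, n_expected):
    """delta = rho/rho_bar - 1, with approximate 1-sigma Poisson errors
    on the observed count (Gehrels 1986 upper/lower limits)."""
    delta = n_obs / n_expected - 1.0
    n_up = n_obs + math.sqrt(n_obs + 0.75) + 1.0
    n_lo = n_obs * (1.0 - 1.0 / (9.0 * n_obs)
                    - 1.0 / (3.0 * math.sqrt(n_obs))) ** 3
    err_up = n_up / n_expected - 1.0 - delta
    err_lo = delta - (n_lo / n_expected - 1.0)
    return delta, err_up, err_lo

# 8 candidates against a hypothetical expected field count of ~1.6
delta, err_up, err_lo = overdensity(8, 1.6)
```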
This value is comparable to what has been computed for protoclusters identified at similar redshifts, for example $\delta$= 5.11$^{+2.06}_{-1.70}$ for LAGERzOD1 at $z=$6.9 \citep{2021NatAs...5..423H} and $\delta \sim$ 4.5 for the $z\sim$8 overdensity identified in the BoRG survey \citep{2012ApJ...746...55T}. \citet{2022arXiv220802794N} also reported a similar $\sim$4 times overdensity at $z=5.3$ in the CEERS data. \section{Physical properties of the protocluster} \label{sec:properties} One of the key parameters to determine in order to characterise a protocluster is its total dark matter halo mass. Several methods have recently been used in the literature to probe the total halo masses of confirmed protoclusters. In the following, we apply some of these methods to demonstrate that this structure is a convincing protocluster at $z=7.66$. Before estimating the total halo mass, one needs to estimate the stellar mass of each candidate protocluster member (e.g. \citealt{2020ApJ...898..133L}). We use \texttt{BAGPIPES} \citep{2018MNRAS.480.4379C} and assume several SFHs: constant, burst, delayed, and a combination of a young burst with a constant SFH. We allow a redshift range of $z\in$[0.0:10.0], a stellar mass range of $\log M_{\star} \in$[6.0:12.0] and a reddening range of $A_v \in$ [0.0:2.0]. The best SED fit is defined as the one minimising the Bayesian Information Criterion (BIC; see \citealt{2021MNRAS.505.3336L} for more details), and the results of this fitting are presented in Table~\ref{tab:SED-properties}. The photometric redshifts of the 6 dropouts identified near the 2 spectroscopically confirmed galaxies are consistent with these galaxies being at $z=$7.66 (Figure \ref{fig:redshift-protocluster}, right). We apply the same method to obtain the photometric redshifts of all objects in the NIRCam field of view.
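Selecting among competing SFHs by the BIC can be sketched as below; for Gaussian errors, BIC $= \chi^2 + k \ln n$ up to a constant, so each extra free parameter must reduce $\chi^2$ by more than $\ln n$ to be preferred. The $\chi^2$ values and parameter counts are hypothetical:

```python
import math

def bic(chi2, n_params, n_bands):
    """Bayesian Information Criterion for a fit with Gaussian errors:
    BIC = chi^2 + k * ln(n). Lower BIC wins."""
    return chi2 + n_params * math.log(n_bands)

# Hypothetical comparison: constant SFH (3 free parameters) vs a
# burst + constant SFH (5 parameters), fitted to 10 photometric bands.
fits = {'constant': bic(12.0, 3, 10), 'burst+constant': bic(9.5, 5, 10)}
best = min(fits, key=fits.get)
```

Here the more complex SFH lowers $\chi^2$ by 2.5, which is less than the $2\ln 10 \approx 4.6$ penalty for its two extra parameters, so the constant SFH is retained.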
The redshift distribution of objects in our search box, compared to the distribution in the entire field of view, suggests a small excess of objects at $z\geq$7, consistent with the presence of an overdensity in this region (Figure \ref{fig:redshift-protocluster}, left). As expected in a protocluster environment (e.g. \citealt{2015MNRAS.452.2528M}, \citealt{2021MNRAS.501.1803L}, \citealt{2021MNRAS.504.5054A}, \citealt{2022A&A...664A.155G}), 2 members are more massive than the others, namely SMACS0723\_PC\_6 and SMACS0723\_PC\_7, with stellar masses of $\sim$10$^9 M_{\odot}$. Figure~\ref{fig:main-sequence} shows the positions of the 8 galaxies in our sample on a $M_{\star}$ vs SFR diagram, compared with previous findings at $z\geq$7 (\citealt{2022arXiv220711135L}, \citealt{2022arXiv220307392T}). It confirms that all these sources have properties consistent with the star-formation main sequence currently known at $z\geq$7. We also expand our search over the entire NIRCam field of view to identify other $z\sim$7.66 objects. 26 objects satisfy our selection criteria (including the 8 protocluster member candidates discussed in this letter), but no other region in the field shows an overdensity as large as the protocluster candidate region. Moreover, no candidate has a stellar mass larger than that of PC\_7, confirming that the two spectroscopically confirmed galaxies are the most massive $z=7.66$ galaxies in this field of view. We also note that the projected distance between the protocluster core discussed in this paper and the closest $z\sim$7.66 galaxy outside of the 40$\arcsec \times$ 40$\arcsec$ box is 0.38 Mpc, suggesting that some of these galaxies may also be related to the protocluster.
\setcounter{table}{1} \begin{table*} \centering \begin{tabular}{l|cc|cccc|c} \hline
ID & RA & DEC & $z_{phot}$ & $\log M_{\star}$ & SFR & $A_v$ & $\mu$ [95\% C.I.]\\
& [deg] & [deg] & & [$M_{\odot}$] & [$M_{\odot}$/yr] & [mag] & \\ \hline
PC\_1 & 110.823826 & -73.437880 & 6.68$^{+0.04}_{-0.06}$ & 7.79$^{+0.20}_{-0.13}$ & 3.37$^{+0.45}_{-0.18}$ & 0.35$^{+0.30}_{-0.23}$ & 1.90 [1.49 -- 2.06] \\
PC\_2 & 110.820512 & -73.436305 & 7.69$^{+1.88}_{-0.85}$ & 8.19$^{+0.26}_{-0.31}$ & 1.71$^{+0.88}_{-0.77}$ & 0.11$^{+0.11}_{-0.07}$ & 1.79 [1.44 -- 1.92] \\
PC\_3 & 110.845907 & -73.435998 & 7.76$^{+1.19}_{-1.02}$ & 7.96$^{+0.32}_{-0.36}$ & 0.72$^{+0.31}_{-0.25}$ & 0.12$^{+0.15}_{-0.09}$ & 1.73 [1.41 -- 1.83] \\
PC\_4 & 110.846149 & -73.435474 & 7.20$^{+0.55}_{-0.45}$ & 8.57$^{+0.22}_{-0.26}$ & 3.25$^{+1.09}_{-0.83}$ & 0.33$^{+0.14}_{-0.16}$ & 1.70 [1.39 -- 1.80] \\
PC\_5 & 110.847412 & -73.435129 & 8.19$^{+0.83}_{-0.57}$ & 8.50$^{+0.20}_{-0.23}$ & 3.23$^{+1.17}_{-1.18}$ & 0.19$^{+0.12}_{-0.11}$ & 1.68 [1.54 -- 1.77] \\
PC\_6$^{\star}$ & 110.844634 & -73.435054 & \textit{7.665} & 8.59$^{+0.19}_{-0.21}$ & 3.55$^{+1.08}_{-1.14}$ & 0.15$^{+0.09}_{-0.07}$ & 1.68 [1.38 -- 1.78] \\
PC\_7$^{\star}$ & 110.834062 & -73.434509 & \textit{7.659} & 8.95$^{+0.04}_{-0.04}$ & 15.06$^{+2.02}_{-1.89}$ & 1.02$^{+0.03}_{-0.03}$ & 1.68 [1.41 -- 1.79]\\
PC\_8 & 110.835151 & -73.429499 & 7.33$^{+1.02}_{-0.09}$ & 8.26$^{+0.16}_{-0.08}$ & 2.61$^{+2.20}_{-0.57}$ & 0.54$^{+0.04}_{-0.06}$ & 1.50 [1.29 -- 1.57] \\ \hline \end{tabular} \caption{ Physical properties, computed with \texttt{BAGPIPES}, of the 8 protocluster member candidates identified behind SMACS0723. None of the values are corrected for the (best-fit) magnification. The last column shows the magnification factor estimated from the lens model presented in \citet{2022arXiv220707102P}, assuming a redshift of $z=7.66$ for all objects.
\\ $^{\star}$ : spectroscopically confirmed at $z=7.66$} \label{tab:SED-properties} \end{table*} The individual halo mass can be estimated from the stellar mass of each galaxy using the \citet{2013ApJ...770...57B} relationship. The halo masses of the 8 galaxies range from 2$\times$10$^{10}$ to 6$\times$10$^{11}$ M$_{\odot}$ corrected for magnification, with a total protocluster halo mass of $M_h$=3.34$^{+0.59}_{-0.50}\times$10$^{11}$M$_{\odot}$. Another method of estimating the halo mass of a protocluster is to sum all stellar masses ($M^{tot}_{\star}$=1.46$^{+0.63}_{-0.41} \times 10^9 $ M$_{\odot}$), corrected for magnification, and to convert the total into a halo mass using the baryonic-to-dark matter fraction measured by \citet{2020A&A...641A...6P}. Following this method, we estimate a total halo mass of 6.52$^{+2.82}_{-1.85} \times 10^{10}$M$_{\odot}$. Finally, we can also determine the halo mass of the most massive galaxy in our sample from its stellar mass, which should dominate the total halo mass of the protocluster. SMACS0723\_PC\_7 has a stellar mass of 5.31$^{+0.54}_{-0.41} \times 10^8$ M$_{\odot}$ corrected for magnification, which converts into a minimum halo mass of 6.86$^{+2.51}_{-2.42}\times$10$^{10}$ M$_{\odot}$. In an independent analysis, we use the observed stellar-to-halo mass ratios measured by \citet{2022arXiv220310895S} with the COSMOS2020 catalog, which represents the most complete deep galaxy catalog to date \citep{2022ApJS..258...11W}, to compute an additional measurement of the halo mass of our protocluster candidate. We fit the redshift evolution of the \citet{2022arXiv220310895S} stellar-to-halo mass ratio and extrapolate it out to the redshift of our cluster ($z_{spec}=7.66$) using a Monte-Carlo Markov Chain (MCMC) analysis to rigorously propagate the uncertainties. 
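The propagation step described above can be sketched with a simple Monte-Carlo draw: sample the parameters of a redshift-dependent stellar-to-halo mass ratio (SHMR), extrapolate to $z=7.66$, and convert the tabulated stellar masses into halo masses. The fit parameters and scatter below are illustrative placeholders, not the Shuntov et al. values, and the stellar masses are the uncorrected tabulated ones:

```python
# Sketch of Monte-Carlo error propagation for an extrapolated SHMR.
# The SHMR fit parameters below are illustrative placeholders, NOT the
# published Shuntov et al. values; stellar masses are uncorrected.
import random

random.seed(42)

Z_TARGET = 7.66
LOG_MSTAR = [7.79, 8.19, 7.96, 8.57, 8.50, 8.59, 8.95, 8.26]  # log10(Mstar/Msun)

def sample_halo_masses(n_draws=5000):
    """Draw SHMR parameters, convert each stellar mass to a halo mass,
    and return the sorted distribution of the summed halo mass (Msun)."""
    totals = []
    for _ in range(n_draws):
        # Hypothetical linear fit: log10(Mstar/Mh) = a + b * z
        a = random.gauss(-1.6, 0.1)    # normalisation (placeholder)
        b = random.gauss(-0.03, 0.01)  # redshift slope (placeholder)
        log_ratio = a + b * Z_TARGET
        totals.append(sum(10 ** (m - log_ratio) for m in LOG_MSTAR))
    totals.sort()
    return totals

totals = sample_halo_masses()
median = totals[len(totals) // 2]
lo, hi = totals[int(0.16 * len(totals))], totals[int(0.84 * len(totals))]
print(f"M_h,tot = {median:.2e} (+{hi - median:.2e} / -{median - lo:.2e}) Msun")
```

With these placeholder parameters the summed halo mass comes out at the $\sim$10$^{11}$ M$_{\odot}$ level, the same order of magnitude as the estimates quoted above; an MCMC fit additionally samples the parameter covariance rather than treating the parameters as independent Gaussians.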
The resulting halo masses of the 8 cluster member galaxies range from 2$\times$10$^{10}$ to 1$\times$10$^{11}$ M$_{\odot}$, which broadly agrees with the range found using the \citet{2013ApJ...770...57B} relation above. Summing over these halo masses, we find a total halo mass of $M_h$=2.07$\pm$1.51$\times$10$^{11}$M$_{\odot}$, which is higher than our previous estimates but agrees within the uncertainties ($1\sigma$). Note that with this conversion, the halo mass of the most massive galaxy in the protocluster is also higher than our previous estimate, with a halo mass of 10.32$\pm$9.78$\times$10$^{10}$M$_{\odot}$, though with much larger uncertainties. Moreover, \citet{2013ApJ...779..127C} studied the evolution of a Coma-like cluster from $z\sim$7 to $z=0$. Assuming a smooth evolution at $z\geq$7, the expected halo mass of a protocluster at $z=7.66$ is $\sim$3$\times$10$^{11}$M$_{\odot}$. This value agrees well with the total halo mass we have estimated for our protocluster candidate. Besides the halo mass of the protocluster and the main sequence, another relevant physical property is its size. As seen in Figure \ref{fig:FoV}, the protocluster constituent galaxies are all found to sit within a 40\arcsec $\times$ 40\arcsec box, or within a radius of $\sim20\arcsec$ from the two brighter, central protocluster galaxies. We use the analytic mass model presented in \citet{2022arXiv220707102P} to estimate the magnification of the protocluster. In Table \ref{tab:SED-properties} we list the magnifications of each of the 8 galaxies associated with the protocluster. The magnifications vary roughly between 1.5 and 2, with a typical $\mu\simeq1.7$ at the center of the protocluster. We do not find a strong shear at the location of the protocluster, so that a circle of radius $\sim$20\arcsec encircling the observed protocluster galaxies is delensed into, approximately, an ellipse of axis ratio 1:1.35, located a few arcseconds closer to the brightest cluster galaxy. 
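The conversion from this angular extent to a physical size can be sketched numerically. The following is a rough cross-check assuming a flat $\Lambda$CDM cosmology with Planck-like parameters ($H_0=67.7$ km/s/Mpc, $\Omega_m=0.31$; not necessarily the exact values adopted in the paper), delensing the linear scale by the typical magnification:

```python
# Sketch: convert the ~40" angular extent into a physical size at z = 7.66,
# correcting for the lensing magnification. Assumes flat LambdaCDM with
# H0 = 67.7 km/s/Mpc, Omega_m = 0.31 (Planck-like placeholder values).
import math

C_KM_S = 299792.458
H0, OMEGA_M = 67.7, 0.31

def comoving_distance_mpc(z, n_steps=20000):
    """Trapezoidal integration of (c/H0) * Int_0^z dz'/E(z') for flat LCDM."""
    dz = z / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        zi = i * dz
        e = math.sqrt(OMEGA_M * (1 + zi) ** 3 + (1 - OMEGA_M))
        weight = 0.5 if i in (0, n_steps) else 1.0
        total += weight / e
    return (C_KM_S / H0) * total * dz

def kpc_per_arcsec(z):
    d_a = comoving_distance_mpc(z) / (1 + z)       # angular-diameter distance
    return d_a * 1000 * math.pi / (180 * 3600)     # Mpc -> kpc, rad -> arcsec

z, mu = 7.66, 1.7
diameter_kpc = 40 * kpc_per_arcsec(z) / mu         # delensed linear size
print(f"{kpc_per_arcsec(z):.2f} kpc/arcsec -> ~{diameter_kpc:.0f} kpc diameter")
```

At $z=7.66$ the angular scale is roughly 5 kpc per arcsecond, so a 40\arcsec\ box delensed by $\mu\simeq1.7$ corresponds to a diameter of order 120 kpc, consistent with the size quoted below.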
Taking the typical magnification into account ($\mu\simeq1.7$), we estimate the size of the protocluster to be about 120 kpc in diameter. This size is consistent with what is expected from the simulations of \citet{2017ApJ...844L..23C}. It is also similar to what has been observed at $z=5.3$ by \citet{2011Natur.470..233C} and at $z=6.5$ by \citet{2019ApJ...877...51C}, with protocluster core sizes of 0.14 Mpc and 0.45 Mpc, respectively. \section{Implications for Cosmic Reionisation} \label{sec:implication} The transmission and escape of Lyman-$\alpha$ and Lyman continuum photons through the intergalactic medium around a galaxy depend exponentially on the optical depth (\citealt{2004ApJ...613....1F}, \citealt{2000ApJ...530....1M}). As such, even small amounts of neutral hydrogen will absorb potentially ionizing photons. As a result, smaller and fainter galaxies can only very locally ionize their surroundings, if at all. This leads to a strong bias towards detecting Lyman-$\alpha$ from brighter and more massive galaxies in the reionization era (e.g. \citealt{2011ApJ...728L...2S}), since typically only these radiate strongly enough to ionize the surrounding hydrogen and create a sufficiently large bubble to allow these photons to escape (e.g. \citealt{2021arXiv211207675L}, \citealt{2022ApJ...930..104L}). Hence, smaller galaxies that sit close to these more massive and brighter galaxies effectively sit in an ionized bubble, and their UV photons can travel large distances to help reionize the Universe (photons that would otherwise be absorbed in the local vicinity of the galaxies). Therefore, protoclusters may play a significant and important role in the reionization process. \citet{2017ApJ...844L..23C} demonstrated, using $N$-body simulations and semi-analytical models, that protoclusters could have contributed up to $\sim$50\% of the Cosmic Star Formation Rate density at $z=10$. 
They also show that a protocluster core at $z=$7.66 can itself represent $\sim 10\%$ of the total ionising budget. Furthermore, \citet{2021MNRAS.504.4062M} analyse in their simulations a protocluster with a halo mass similar to the protocluster we report in this letter ($M_h \sim 10^{11} M_{\odot}$), and conclude that the escape fraction for this type of protocluster could reach $f_{esc}\sim$20\%, a \textit{golden number} to explain the end of the reionisation process by $z\sim$6. Moreover, the latest constraints on the shape of the UV Luminosity Function at $z\geq$6 show that the number density of the brightest galaxies within the first billion years of the Universe is $\leq$10$^{-6}$Mpc$^{-3}$ (e.g. \citealt{2022arXiv220511526B}, \citealt{2015ApJ...810...71F}), which is similar to the number density of rich clusters at $z=0$ ($\ge$10$^{15}$ M$_{\odot}$ - e.g. \citealt{2010MNRAS.407..533W}). This could suggest that the brightest $z\geq$6 galaxies detected in deep HST surveys may lie in overdense regions, with the vast majority of sources well below the detection limit of \textit{Hubble}. If this is the case, and assuming that each $z\geq$6 protocluster has an escape fraction of $\sim$20\%, as suggested by simulations, it could open a new route to solving the UV photon deficit observed during the reionisation process. Figure~\ref{fig:evol-proto-cluster} demonstrates that SMACS0723\_PC could be seen as the progenitor of a Coma-like cluster ($M_h \ge$10$^{15}$M$_{\odot}$), and illustrates the importance of studying galaxy protoclusters from Cosmic Dawn to Cosmic Noon, e.g. in this $M_h-z$ plane. Previous telescopes were only able to detect the brightest members of such structures. Even if recent HST studies have started to reveal overdensities near extremely bright objects at $z\geq$7 (e.g. \citealt{2021arXiv211207675L}, \citealt{2022A&A...662A.115C}), the density of protoclusters within the first billion years of the Universe remains unknown. 
The identification of a protocluster candidate at $z=7.66$ in less than 15 hours with JWST is encouraging. Wider and deeper data are now needed to determine how many bright $z\geq$7 sources lie in overdense regions, which could give new insights into cosmic reionisation. \section{Summary} In this letter, we report the detection of a protocluster candidate at $z=7.66$ behind the lensing cluster SMACS0723-7327 observed with the \textit{James Webb} Space Telescope. In addition to the spectroscopic confirmation of two $z=7.66$ sources, we identified 6 more galaxies with similar colors whose SED fitting suggests they may lie at the same redshift, with physical properties comparable to those of other galaxies at this redshift. Assuming they are all part of the same structure, we estimate an overdensity parameter $\delta$=4.0$^{+2.4}_{-1.6}$, consistent with previous values found for other protoclusters at $z\geq$6. Based on several methods, we estimate the total dark matter halo mass of this protocluster candidate to be $M_h=$3.6$^{+13.3}_{-2.8} \times$10$^{11}$M$_{\odot}$. This value agrees well with what is expected for progenitors of a Coma-like cluster. Furthermore, the star-formation main sequence at $z \geq 7$ and the estimated size are in line with expectations. Simulations predict that protoclusters may have played an important role in cosmic reionisation, with up to a 50\% contribution to the Cosmic Star Formation Rate density. They also suggest that protoclusters with a dark matter halo mass similar to that of SMACS0723\_PC could have an escape fraction as high as 20\%. The main unknown parameter is their number density. However, the number density of bright galaxies at $z\geq$6 is of the same order of magnitude as the number density of rich clusters at $z\sim$0, suggesting that they may be linked. 
If this is confirmed with the detection of protocluster candidates around many bright galaxies at $z\geq$6 with JWST, and given the efficiency at which these early structures ionise the neutral hydrogen, it could give new insight into the reionisation process. \begin{acknowledgements} We thank Francesco D'Eugenio for providing NIRSpec spectra and Richard Ellis for interesting comments on our manuscript. NL acknowledges support from the Kavli foundation. AZ and LF acknowledge support by Grant No. 2020750 from the United States-Israel Binational Science Foundation (BSF) and Grant No. 2109066 from the United States National Science Foundation (NSF), and by the Ministry of Science \& Technology, Israel. CW acknowledges support from the Science and Technology Facilities Council (STFC) for a Ph.D. studentship. NL and HD acknowledge the \textit{Astr'Auvergne} astronomy festival (and \textit{InfiniSciences} and \textit{Astro-Jeunes} volunteers) as it all started there. This work is based in part on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. The authors acknowledge the ERO team (program \#2736) for developing their observing program with a zero-exclusive-access period. 
\end{acknowledgements} \bibliographystyle{aa} % \bibliography{aanda} % \begin{landscape} \setcounter{table}{0} \begin{table} \centering \scriptsize \begin{tabular}{l|ccccccccccccc} \hline ID & F435W & F606W & F814W & F090W & F105W & F125W & F140W & F150W & F160W & F200W & F277W & F356W & F444W \\ & ACS/HST & ACS/HST & ACS/HST & NIRCam/JWST & WFC3/HST & WFC3/HST & WFC3/HST & NIRCam/JWST & WFC3/HST & NIRCam/JWST & NIRCam/JWST & NIRCam/JWST & NIRCam/JWST \\ \hline PC\_1 & > 27.22 & > 27.84 & > 26.80 & 29.22 $\pm$ 0.47 & > 27.63 & > 27.49 &> 27.77 & 27.71 $\pm$ 0.13 &> 27.40 & 27.67 $\pm$ 0.11 & 27.80 $\pm$ 0.24 & 28.19 $\pm$ 0.33 & 28.37 $\pm$ 0.56 \\ PC\_2 & > 27.72 & > 28.47 & > 27.55 &> 29.93 & > 27.51 & > 28.06 &> 27.10 & 28.47 $\pm$ 0.15 &> 28.00 & 28.76 $\pm$ 0.18 & 29.19 $\pm$ 0.48 & 28.71 $\pm$ 0.31 & 29.33 $\pm$ 0.72 \\ PC\_3 & > 27.67 & > 28.21 & > 27.68 &> 29.69 & > 27.38 & 27.78 $\pm$ 0.54 & 27.00 $\pm$ 0.25 & 27.38 $\pm$ 0.07 & 26.85 $\pm$ 0.25 & 27.19 $\pm$ 0.05 & 27.25 $\pm$ 0.14 & 27.16 $\pm$ 0.12 & 27.48 $\pm$ 0.23 \\ PC\_4 & > 27.59 & > 27.99 & > 27.47 &> 29.65 & 27.56 $\pm$ 0.51 & > 26.77 & 26.94 $\pm$ 0.28 & 27.19 $\pm$ 0.06 & 27.08 $\pm$ 0.31 & 27.05 $\pm$ 0.05 & 27.30 $\pm$ 0.18 & 27.28 $\pm$ 0.14 & 27.28 $\pm$ 0.18 \\ PC\_5 & > 27.76 & > 28.31 & > 27.63 & 28.21 $\pm$ 0.13 & 27.34 $\pm$ 0.37 & > 25.89 & 26.70 $\pm$ 0.20 & 26.57 $\pm$ 0.03 & 26.53 $\pm$ 0.18 & 26.58 $\pm$ 0.03 & 26.68 $\pm$ 0.08 & 26.75 $\pm$ 0.07 & 26.79 $\pm$ 0.11 \\ PC\_6 & > 27.66 & > 28.23 & > 27.72 &> 29.73 & 27.46 $\pm$ 0.43 & 26.20 $\pm$ 0.43 & 26.26 $\pm$ 0.14 & 26.02 $\pm$ 0.02 & 26.04 $\pm$ 0.12 & 25.88 $\pm$ 0.02 & 25.74 $\pm$ 0.04 & 25.49 $\pm$ 0.03 & 24.60 $\pm$ 0.02 \\ PC\_7 & > 27.76 & > 28.07 & > 27.69 &> 29.83 & 26.95 $\pm$ 0.33 & 27.06 $\pm$ 0.24 & 26.54 $\pm$ 0.35 & 26.82 $\pm$ 0.04 & 26.78 $\pm$ 0.20 & 26.92 $\pm$ 0.04 & 26.87 $\pm$ 0.07 & 26.85 $\pm$ 0.07 & 26.09 $\pm$ 0.05 \\ PC\_8 & > 27.70 & > 28.22 & > 27.65 &> 28.23 & OOF & OOF & OOF & 27.62 $\pm$ 0.19 & 
OOF & 27.60 $\pm$ 0.14 & 27.23 $\pm$ 1.10 & 27.37 $\pm$ 0.33 & >27.19 \\ \end{tabular} \caption{ Photometry of the selected protocluster member candidates. 2$\sigma$ non-detections are measured at the position of the candidate in a 0.3\arcsec radius aperture. Object PC\_8 is not covered by current HST data and is marked as Out of the Field (OOF).} \label{tab:photometry} \end{table} \end{landscape}
Title: Positivity bounds in vector theories
Abstract: Assuming unitarity, locality, causality, and Lorentz invariance of the, otherwise unknown, UV completion, we derive a new set of constraints on the effective field theory coefficients for the most general, ghost-free Generalized Proca and Proca Nuevo massive vector models. For the Generalized Proca model, we include new interactions that had not been previously considered in the context of positivity bounds and find these additional terms lead to a widened parameter space for the previously considered interactions. Although, the Generalized Proca and Proca Nuevo models are inequivalent, we find interesting analogues between the coefficients parameterizing the two models and the roles they play in the positivity bounds.
https://export.arxiv.org/pdf/2208.12631
\hspace{4.2in} \mbox{Imperial/TP/2022/CdR/02}\\\vspace{1.53cm} \flushbottom \section{Introduction} Effective field theories (EFTs) describe low energy (IR) physics up to a certain cutoff $\Lambda$, beyond which additional ingredients need to be considered. Upon constructing an EFT, every operator consistent with the field content and the symmetry of the system should in principle be included with an arbitrary Wilsonian coefficient, leading to parameter spaces which may typically be large. Consider for instance the Standard Model EFT (SMEFT): just accounting for up to dimension-6 baryon-number conserving operators already leads to a 59-dimensional parameter space \cite{Buchmuller1986621,Grzadkowski:2010es}. When considering experiments or observations where searching large parameter spaces may be costly or infeasible, narrowing down the parameter space can be invaluable, allowing for more targeted searches or even allowing one to theoretically rule out certain theories as incompatible with a standard UV completion. Even if the full (UV complete) model at high energies is unknown, the parameter space of an EFT may be restricted using constraints arising from hypotheses on the UV completion. The physical requirements of unitarity, locality, causality, and Lorentz invariance of the UV completion of an EFT translate into unitarity, analyticity and crossing symmetry of scattering amplitudes. Put together, these properties give rise to a dispersion relation, whose positivity constrains the amplitudes. 
Schematically, analyticity allows one to write the amplitude $\mathcal{A}$ in the complex plane of the Mandelstam center-of-mass energy squared variable $s$ as a contour integral and to deform the contour to obtain an integral of the imaginary part, which is positive by unitarity (optical theorem) and crossing relations, \begin{equation} \frac{1}{2}\mathcal{A}''(s) = \frac{1}{2\pi i}\oint_\mathcal{C} \d s'\frac{\mathcal{A}(s')}{(s'-s)^3} =\frac{1}{\pi} \int_\text{cuts}\d\mu\frac{\Im \mathcal{A}(\mu)}{(\mu-s)^3}>0\, . \end{equation} In practice, $\mathcal{A}''(s)$ is computed in a given EFT, whereas the last integral is not usually computed explicitly; its positivity is guaranteed by the assumptions on the UV completion and in turn ensures the positivity of $\mathcal{A}''(s)$. This procedure provides positivity bounds which can be used to constrain the parameter space of EFTs. It has long been known that analyticity and dispersion relations lead to positivity constraints, but it is only quite recently that \cite{Adams_2006} exploited these constraints to bound EFT coefficients. Since their foundational work, which applied to scalar theories in the forward limit, extensions have been worked out both away from the forward limit \cite{de_Rham_2017} and for spinning particles \cite{Bellazzini2017,de_Rham_2018,davighi2021natural}. These bounds have been applied to massive particles of spin-1 \cite{Bonifacio_2016,de_Rham_2017gal,de_Rham_2019}, as well as to massive spin-2 fields \cite{Cheung2016,Bonifacio_2016,Bellazzini:2017fep,de_Rham_2018improved,de_Rham_2019,Alberte2020,Alberte20202,Wang2021}. In particular, it was shown in \cite{Cheung2016,de_Rham_2019,Alberte2020,Alberte20202} how bounds involving mixed field polarizations can significantly reduce the allowed region of parameter space and lead to compact bounds. 
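The logic of the last inequality can be illustrated numerically: for any positive spectral density supported on a cut $\mu\geq\mu_0$, the dispersive integral for $\mathcal{A}''(s)$ is manifestly positive for $s$ below the cut. A toy sketch follows; the spectral density is an arbitrary positive function, not derived from any particular UV model:

```python
# Toy numerical check of the dispersive positivity argument: for any
# positive spectral density Im A(mu) supported on a cut mu >= mu0,
#   A''(s) = (2/pi) * Int_{mu0}^{inf} Im A(mu) / (mu - s)^3 dmu
# is positive for s below the cut. The density here is an arbitrary
# positive toy function, NOT derived from any particular UV model.
import math

MU0 = 1.0  # start of the cut (toy units)

def im_a(mu):
    """Toy positive spectral density on the cut."""
    return math.exp(-(mu - MU0)) * (mu - MU0) ** 0.5

def a_second_deriv(s, n_steps=100000, mu_max=40.0):
    """Trapezoidal evaluation of the dispersive integral for A''(s)."""
    dmu = (mu_max - MU0) / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        mu = MU0 + i * dmu
        weight = 0.5 if i in (0, n_steps) else 1.0
        total += weight * im_a(mu) / (mu - s) ** 3
    return (2 / math.pi) * total * dmu

# A''(s) > 0 for every s below the cut, and grows as s approaches the cut.
samples = [a_second_deriv(s) for s in (-2.0, -0.5, 0.0, 0.5)]
print(samples)
```

The sign of each sample is guaranteed by the positivity of the integrand alone, which is the point of the argument: the EFT-side computation of $\mathcal{A}''(s)$ must match a manifestly positive quantity.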
Recently, the parameter space has been carved out in an even more systematic way, making use of clever non-linear bounds \cite{Bellazzini_2021,Arkani-Hamed:2020blm,Chiang2021}. Insight from fully crossing symmetric dispersion relations, as highlighted in \cite{Sinha:2020win,Haldar:2021rri,Raman:2021pkf,Chowdhury:2021ynh,Sinha:2022sdo}, was for instance folded into new types of compact bounds in \cite{tolley2021new,Caron-Huot2021}, with applications to multi-field EFTs in \cite{Du_2021} and EFTs involving higher spin fields in \cite{Chowdhury:2021ynh}. Using these methods, the massless and massive Galileon have been found to admit no standard UV completion \cite{Adams_2006,tolley2021new}. Likewise, in \cite{Bonifacio_2016}, it was shown that the simplest vector Galileon model also cannot admit a standard UV completion. Nonetheless, as argued in \cite{Cheung2016,de_Rham_2018improved,de_Rham_2019}, such conclusions do not prevent massive gravity (or GP, PN) from admitting one, even though the massive Galileon emerges as their helicity-0 mode in the decoupling limit. Moreover, a violation of the bounds may not be dramatic in itself. It simply implies that if a UV completion of the model is to exist, it may not enjoy the same level of locality as required in the derivation of positivity bounds (in particular, a violation of the Froissart-like bound may occur), as for instance illustrated in \cite{Keltner:2015xda}, or as may occur in other UV finite and unitary models \cite{Tolley:2019nmm}. Due to their relevance in modern physics, some EFTs have been particularly popular for the application of the bounds. Positivity bounds have been extensively studied in the case of (massless) gravity, both for pure gravity \cite{Bern_2021,Chiang2022,CaronHuot2022,Herrero_Valea_2021} and for other fields in the gravitational context \cite{Alberte_2020spin2,Tokuda_2020,Alberte2021,Alberte_202222,Caron_Huot_2021GR}. 
Positivity bounds for gravity require special care due to its massless nature and the presence of a pole in the $t$-channel exchange, leading to additional complications. Moreover, positivity bounds in gravity can be related to the Weak Gravity Conjecture \cite{Cheung_2014,Hamada_2019,Arkani_Hamed_2022,Henriksson2022}. The Standard Model EFT has also drawn a great deal of attention in the positivity bounds community \cite{Zhang_2019,Zhou2019,Bellazzini_2018SM,Remmen_2019,Zhang_2020,Zhou2021,Fuks_2021,Remmen_2020,Bonnefoy_2021,Chala2021}. Constraining the SMEFT parameter space makes both the experimental searches and the theoretical interpretations of the data more efficient. If some violations of the bounds were detected, it would indicate that some assumptions on the UV-complete theory are erroneous. The application of the bounds in cosmology, as conducted in \cite{Melville_2020,Noller2021,Kim_2019,Herrero_Valea_2019,Ye_2020,melville2022positivity,Creminelli:2022onn}, can be combined with observational data to strongly constrain the parameter space of dark energy and modified gravity models. (See \cite{deRham:2022hpx} for a recent review of the positivity bounds and their applications.) The aim of this work is to provide new constraints on vector EFTs via the positivity bounds technology. Positivity bounds on certain Proca interactions have been considered in \cite{Bonifacio_2016, de_Rham_2019}, but these were not the most general interactions allowed within these models. We consider the most general possible parity-even interaction terms allowed for Generalized Proca theories, which include terms with scalar-vector mixing in the decoupling limit. Positivity bounds on Proca Nuevo models have never previously been considered and are studied here for the first time. We begin by presenting explicitly the two massive vector models of interest, the Generalized Proca \cite{Heisenberg2014} and Proca-Nuevo \cite{derham2020}, in section~\ref{sec.models}. 
Then, we review the spinning positivity bounds from \cite{de_Rham_2018} in section~\ref{sec.bounds}. The reader familiar with these topics may skip these first two sections. Finally, we present our results for the constraints on the two Proca theories in section~\ref{sec.results}. We work in $3+1$ dimensional flat space using the mostly-plus signature $(-,+,+,+)$. \section{Vector Models}\label{sec.models} There were initial attempts to write down derivative self-interactions for a massless Abelian vector field, but it was concluded that the Maxwell term is the only possible term compatible with gauge invariance and leading to second order equations of motion (EOM) on flat space, hence a no-go theorem for massless vector Galileons \cite{nogo2014}\footnote{Even though a non-minimal coupling of the gauge-invariant field strength to gravity exists, for most of the relevant applications it is unstable \cite{BeltranJimenez:2013btb}.}. On the other hand, it is possible to construct derivative self-interactions with second order EOM if we give up gauge symmetry and consider massive vector theories, also called Generalized Proca theories. The Generalized Proca (GP) model \cite{Heisenberg2014,Tasinato2013,Allys2016,Jimenez2016} is the most general Abelian self-interacting massive spin-1 theory\footnote{A non-Abelian version of the Generalized Proca has been investigated in \cite{BeltranJimenez:2016afo,Allys_2016}.} with second order EOM, even if the interaction terms in the Lagrangian appear to have higher order derivatives (see \cite{Heisenberg:2018vsk} for a review). Moreover, it enjoys a Vainshtein screening mechanism, making it compatible with solar system tests of gravity \cite{De_Felice_2016}. 
These models have found various phenomenological applications in astrophysics \cite{Chagoya_2017,Kase_2018,Kase_2020,Garcia_Saenz_2021,Brihaye_2022} notably for black holes \cite{Chagoya_2016,Minamitsuji_2016,Cisterna_2016,Chagoya_2017,Heisenberg_2017,Minamitsuji2017,Heisenberg_20172,Kase_20182,Kase_20183,Rahman_2019}, and in cosmology \cite{Felice_2016,De_Felice_20162,Heisenberg_2016,Nakamura_2017,Emami_2017,De_Felice_2017,Kase_20184,Dom_nech_2018,Nakamura_2019,Oliveros_2019,Felice_2020,Heisenberg_2021}. In order to avoid instabilities, a massive vector theory should propagate three degrees of freedom. This is achieved by imposing a constraint, such as the degeneracy of the Hessian matrix. In the Generalized Proca model, this constraint manifests itself by the non-propagation of the temporal component, and the absence of Ostrogradski instability is made apparent by the second order EOM. However, there is no reason that the constraint may not be realized in another way (in fact see \cite{Heisenberg:2016eld} and \cite{BeltranJimenez:2019wrd}). Another kind of self-interacting massive vector theory has recently been discovered \cite{derham2020}. In fact, a Proca theory different from GP emerges from the decoupling limit of massive gravity on AdS space \cite{Laura2018}, and inspired the construction of a new Proca theory on flat space, dubbed Proca-Nuevo (PN) \cite{derham2020}. As intended, this model propagates three degrees of freedom, but realizes the constraint non-linearly as opposed to GP, where the constraint is realized order by order. Moreover, both the GP and PN theories lead to a time-dependent vector condensate which could play the role of a dark energy fluid, driving the accelerated expansion of the universe that we currently observe. Additionally, these models have a technically natural vector mass and dark energy scale \cite{Zosso2021,derham2021quantum}. 
The application of an extended PN model in cosmology has been successfully considered in \cite{derham2021cosmology,Pozsgay2022}. \subsection{Generalized Proca}\label{sec.GP} The Generalized Proca interactions \cite{Heisenberg2014}, sometimes referred to as vector Galileons, are massive vector self-interactions built out of two requirements. One, that the equations of motion are second order. Two, that the temporal component is not dynamical. In turn, these guarantee that only three healthy degrees of freedom propagate. A way to construct this theory is to write down all possible interactions at each order in derivatives and tune the coefficients to have a degenerate Hessian \cite{Heisenberg2014}. This procedure produces the following model for the massive vector field $A_\mu$ \begin{equation} \L_{\text{GP}} = \sum_{n=2}^6 \L_n\,, \label{eq:LGP} \end{equation} with \begin{equation}\label{eq.lgp} \begin{aligned} \L_2 &= f_2(A_{\mu}, F_{\mu\nu}, \tilde{F}_{\mu\nu})\\ \L_3 &= f_3(A^2) (\partial \cdot A)\\ \L_4 &= f_4(A^2) [(\partial \cdot A)^2 - \partial_{\mu} A_{\nu} \partial^{\nu} A^{\mu}]\\ \L_5 &= f_5(A^2) [(\partial \cdot A)^3 -3 (\partial \cdot A) \partial_{\mu} A_{\nu} \partial^{\nu} A^{\mu} + 2 \partial_{\mu} A_{\nu} \partial^{\nu} A^{\rho} \partial_{\rho} A^{\mu} ] \\ & + \tilde{f}_5(A^2) \tilde{F}^{\mu \alpha} \tilde{F}^{\nu}_{\phantom{\nu} \alpha} \partial_{\mu} A_{\nu}\\ \L_6 &= \tilde{f}_6(A^2) \tilde{F}^{\mu \nu} \tilde{F}^{\alpha \beta} \partial_{\alpha} A_{\mu} \partial_{\beta} A_{\nu}\, , \end{aligned} \end{equation} where $F_{\mu\nu}=\del_\mu A_\nu-\del_\nu A_\mu$, and $\Tilde{F}^{\alpha\beta}=\frac{1}{2}\epsilon^{\alpha\beta\mu\nu}F_{\mu\nu}$ is the dual field-tensor. The $f_n$'s are arbitrary functions of their arguments. In particular, $f_2$ contains the kinetic and mass terms. This theory is the most general one respecting the two requirements. 
Any other possible interactions are related to these by total derivatives and disformal transformations of the metric. Moreover, expanding the $f_n(A^2)$ in power series of $A^2$, the full Lagrangian can be expressed perturbatively as \begin{equation} \mathcal{L}_{\text{GP}}=\sum_{n=2}^\infty\frac{1}{\Lambda_2^{2(n-2)}}\mathcal{L}_{\text{GP}}^{(n)}\, , \end{equation} where each $\mathcal{L}^{(n)}$ contains the $n$-point interactions, and $\Lambda_p=(m^{p-1}M_{\text{Pl}})^{1/p}$ is a dimensionful scale. In this work, we are interested in computing tree-level 2-2 scattering amplitudes, such that it is sufficient to consider only up to quartic interactions. The first terms of the perturbative expansion can be expressed as \begin{equation}\label{eq:LGPpert} \begin{aligned} \mathcal{L}_{\text{GP}}^{(2)}&=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{2}m^2 A^2 \\ \mathcal{L}_{\text{GP}}^{(3)}&=a_1m^2A^2\del_\mu A^\mu+ a_2\Tilde{F}^{\mu\alpha}\Tilde{F}^\nu_{\ \alpha}\del_\mu A_\nu \\ \mathcal{L}_{\text{GP}}^{(4)}&=b_1 m^4A^4+b_2m^2A^2F_{\mu\nu}^2+ b_3m^2A^2[(\del\cdot A)^2-\del_\mu A_\nu\del^\nu A^\mu]+b_4m^2A_\mu A^\nu F^{\alpha\mu}F_{\alpha\nu} \\ &\, + b_5F^{\mu\nu}F^{\alpha\beta}F_{\mu\alpha}F_{\nu\beta}+b_6(F_{\mu\nu}^2)^2 +b_7 \Tilde{F}^{\alpha\beta}\Tilde{F}^{\mu\nu}\del_\alpha A_\mu\del_\beta A_\nu \, , \end{aligned} \end{equation} where the coefficients $a_i$ and $b_i$ are dimensionless coupling constants. There exist different ways to define the theory perturbatively, but they only differ by field redefinitions and total derivatives, and are therefore equivalent. In any formulation, there are respectively 2 and 7 independent terms at cubic and quartic orders. In this work, we follow the formulation of \cite{derham2020}, which offers a good comparison with the Proca-Nuevo model. 
In order to study the decoupling limit of the theory, we perform the Stückelberg procedure, introducing the Stückelberg field $\phi$, \begin{equation} A_\mu\to A_\mu+\frac{1}{m}\del_\mu \phi\, , \end{equation} such that, in the decoupling limit, the 3 helicities ($\lambda=\pm1,0$) of the massive vector decompose into 2 of a massless vector $A_\mu$ (helicity-1 modes) and 1 of a massless scalar $\phi$ (helicity-0 mode). Then, taking the decoupling limit corresponds to sending the mass to zero and the scale to infinity while keeping the lowest interaction scale constant: \begin{equation}\label{eq.DL} m\to0, \quad \Lambda_2\to\infty,\quad \text{while} \quad \Lambda_3\equiv(m\Lambda_2^2)^{1/3}=\text{const}. \end{equation} In this limit, the perturbative Lagrangian reads \begin{equation} \mathcal{L}_{\text{DL GP}}=\sum_{n=2}^\infty\frac{1}{\Lambda_3^{3(n-2)}}\mathcal{L}_{\text{DL GP}}^{(n)}\, , \end{equation} with \begin{equation}\label{eq.GPDL} \begin{aligned} \mathcal{L}_{\text{DL GP}}^{(2)}&=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{2}(\del\phi)^2 \\ \mathcal{L}_{\text{DL GP}}^{(3)}&=a_1(\del\phi)^2\Box\phi+ a_2\Tilde{F}^{\mu\alpha}\Tilde{F}^\nu_{\ \alpha}(\del_\mu\del_\nu\phi) \\ \mathcal{L}_{\text{DL GP}}^{(4)}&=b_3(\del\phi)^2[(\Box\phi)^2-(\del_\mu\del_\nu\phi)^2] +b_7 \Tilde{F}^{\alpha\beta}\Tilde{F}^{\mu\nu}(\del_\alpha \del_\mu\phi)(\del_\beta\del_\nu\phi). \end{aligned} \end{equation} $\L^{(2)}$ contains the decoupled kinetic terms of the massless vector and scalar modes. The terms in $a_1$ and $b_3$ correspond respectively to the cubic and quartic Galileon interactions, whereas those in $a_2$ and $b_7$ mix the scalar and vector modes. In particular, \cite{de_Rham_2019,Bonifacio_2016} use models that only give Galileons in the decoupling limit, i.e.\ with $a_2=b_7=0$, to derive positivity bounds for the Generalized Proca. 
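To see why $\Lambda_3$ is the scale that survives the decoupling limit, consider for instance the pure-scalar part of the cubic term $a_1 m^2 A^2(\partial\cdot A)/\Lambda_2^2$ under the substitution above (a short consistency check using only the definitions already given):

```latex
\frac{a_1 m^2}{\Lambda_2^2}\, A^2\, (\partial\cdot A)
\;\longrightarrow\;
\frac{a_1 m^2}{\Lambda_2^2}\,
\frac{(\partial\phi)^2}{m^2}\,\frac{\Box\phi}{m}
= \frac{a_1}{m\,\Lambda_2^2}\,(\partial\phi)^2\,\Box\phi
= \frac{a_1}{\Lambda_3^3}\,(\partial\phi)^2\,\Box\phi\,,
```

which stays finite as $m\to0$ and $\Lambda_2\to\infty$ with $\Lambda_3$ held fixed, reproducing the $a_1$ term of \eqref{eq.GPDL}; terms suppressed by extra powers of $m$ or $1/\Lambda_2$ drop out in the same limit.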
\subsection{Proca-Nuevo}\label{sec.PN} The Proca-Nuevo model \cite{derham2020} is built from the same determinant formulation as dRGT massive gravity \cite{deRham:2010kj}. There, the reference metric is defined as $f_{\mu\nu}=\del_\mu\phi^a\del_\nu\phi^b\eta_{ab}$, where $a$ is a Lorentz index, such that the quadruplet of Stückelberg fields $\phi^a$ is identified as a Lorentz vector, which can subsequently be decomposed as \begin{equation} \phi^\mu=x^\mu+\frac{1}{\Lambda_2^2}A^\mu\, , \end{equation} for a vector field $A_\mu$. Then, in terms of this vector field, $f_{\mu\nu}$ reads \begin{equation}\label{eq.fofA} f_{\mu\nu}[A] = \eta_{\mu\nu} + 2 \frac{\del_{(\mu} A_{\nu)}}{\Lambda_2^2} + \frac{\del_{\mu} A_\alpha \del_{\nu} A_\beta \eta^{\alpha \beta}}{\Lambda_2^4}\,. \end{equation} Next, we keep the same deformed determinant as in massive gravity, but with $g_{\mu\nu}=\eta_{\mu\nu}$ the flat Minkowski metric and $f_{\mu\nu}$ expressed in terms of the vector field as defined in \eqref{eq.fofA}, \begin{equation} \K^\mu_{\ \nu}=\left(\sqrt{\eta^{-1}f[A]}\right)^\mu_{\ \nu}-\delta^\mu_{\ \nu}\, . \end{equation} Finally, the full Lagrangian of this theory, defined by $\det(\delta^\mu_{\ \nu}+\K^\mu_{\ \nu})$, is given by \begin{equation} \L_{\text{PN}} = \Lambda_2^4\sum_{n=0}^4 \alpha_n(A^2)\L_n[\K]\,, \end{equation} with $\L_n[\K]$, the elementary Lagrangians \begin{equation} \begin{aligned} \L_0 &= 4!\\ \L_1 &= 3![\K] \\ \L_2 &= 2!([\K]^2 - [\K^2]) \\ \L_3 &= [\K]^3 - 3[\K][\K^2] + 2[\K^3] \\ \L_4 &= [\K]^4 - 6[\K]^2[\K^2] + 3[\K^2]^2 + 8[\K][\K^3] - 6[\K^4] \,. \end{aligned} \end{equation} Note that, despite its similar construction to massive gravity, by the definition \eqref{eq.fofA} of $f_{\mu\nu}$ in terms of a vector field, this model contains no tensor degrees of freedom. It is a pure vector model, with an infinite tower of self-interactions. 
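The elementary Lagrangians $\L_n[\K]$ are, up to normalisation, the elementary symmetric polynomials of $\K$, so that $\det(\delta+\K)=\sum_{n=0}^{4}\L_n[\K]/\big(n!\,(4-n)!\big)$. This can be checked numerically with a random $4\times4$ matrix (pure Python; no physics input beyond the trace definitions above):

```python
# Numeric check that the elementary Lagrangians L_n[K] reproduce the
# deformed determinant: det(I + K) = sum_n L_n[K] / (n! * (4-n)!).
# Random 4x4 matrix; [K^p] denotes the trace of the p-th matrix power.
import math
import random

random.seed(0)
N = 4
K = [[random.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def tr(a):
    return sum(a[i][i] for i in range(N))

def det(a):
    """Determinant via Laplace expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] *
               det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

K2 = matmul(K, K)
K3 = matmul(K2, K)
K4 = matmul(K3, K)
t1, t2, t3, t4 = tr(K), tr(K2), tr(K3), tr(K4)

L = [
    math.factorial(4),                                    # L_0
    math.factorial(3) * t1,                               # L_1
    math.factorial(2) * (t1 ** 2 - t2),                   # L_2
    t1 ** 3 - 3 * t1 * t2 + 2 * t3,                       # L_3
    t1 ** 4 - 6 * t1 ** 2 * t2 + 3 * t2 ** 2
        + 8 * t1 * t3 - 6 * t4,                           # L_4
]

one_plus_k = [[(1.0 if i == j else 0.0) + K[i][j] for j in range(N)]
              for i in range(N)]
lhs = det(one_plus_k)
rhs = sum(L[n] / (math.factorial(n) * math.factorial(4 - n)) for n in range(5))
print(lhs, rhs)
```

The agreement reflects the standard expansion of $\det(1+\K)$ in elementary symmetric polynomials of the eigenvalues of $\K$, which is why truncating the sum at $\L_4$ is exact in four dimensions.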
These interactions lead to higher-order equations of motion, such that one may worry about the potential propagation of an Ostrogradski ghost. However, the authors of \cite{derham2020} exhibited a null eigenvector of the Hessian, implying the presence of a constraint that removes the ghostly degree of freedom. Therefore, the above construction gives rise to a self-interacting massive vector theory which propagates only three healthy degrees of freedom. This theory is called Proca-Nuevo. Additionally, the Proca-Nuevo theory is inequivalent to the Generalized Proca. This can be proved explicitly by trying, and failing, to match the scattering amplitudes of the two theories, as was done in \cite{derham2020}. This is not in contradiction with the uniqueness of GP, as the defining hypotheses differ. Indeed, GP is characterized by having second-order equations of motion, while PN violates this condition. Moreover, the constraint is realised order by order in GP, whereas all orders are needed for PN to realise it. As a consequence of this last point, the null eigenvectors of the two theories are fundamentally different, such that the constraint cannot be simultaneously realised \cite{derham2020}. Therefore, a model including both the GP and PN interactions would suffer from ghost instabilities unless some non-trivial restrictions are made (at least in the absence of gravity) \cite{derham2021cosmology}. Furthermore, the PN Lagrangian can also be written perturbatively, \begin{equation}\label{eq.LPNpert} \mathcal{L}_{\text{PN}}=\sum_{n=2}^\infty\frac{1}{\Lambda_2^{2(n-2)}}\mathcal{L}_{\text{PN}}^{(n)}\,, \end{equation} where the expressions for $\L^{(n)}$ are obtained by expanding the arbitrary functions $\alpha_n$ in powers of $A^2$, \begin{equation} \alpha_n(A^2)=\bar\alpha_n+\frac{m^2}{\Lambda_2^4}\bar\gamma_n A^2+\frac{m^4}{\Lambda_2^6}\bar\lambda_n A^4+\dots \end{equation} The dimensions are contained in the scale $\Lambda_2$, while the coefficients are dimensionless.
In order to get the usual normalization for the Maxwell and mass terms, we set $\bar\alpha_1=-\frac{1}{3}(1-2\bar\alpha_2)$ and $ \bar\gamma_0=-\frac{1}{48}$. We observe that the combination $(1+4\bar\alpha_2-6\bar\alpha_3)$ appears repeatedly in the results and thus redefine $\bar\alpha_2$ accordingly. We also rescale the coefficients $\bar\gamma_1$, $\bar\gamma_2$, and $\bar\lambda_0$ so that they later compare nicely to the GP parameters $a_1$, $b_3$, and $b_1$ respectively. In sum, we define the following parameters \begin{equation}\label{eq.PNredef} \begin{aligned} &\alpha_2'=1+4\bar\alpha_2-6\bar\alpha_3, \quad \alpha_3'=3(\bar\alpha_3-4\bar\alpha_4), \\ &\gamma_1'=6\bar\gamma_1, \quad \gamma_2'=2\bar\gamma_2, \quad \lambda_0'=24\bar\lambda_0. \end{aligned} \end{equation} Under this redefinition, the perturbative PN Lagrangian \eqref{eq.LPNpert} reads \begin{equation}\label{eq.LPNredef1} \begin{aligned} &\mathcal{L}_{\text{PN}}^{(2)}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{2}m^2 A^2 \\ &\mathcal{L}_\text{PN}^{(3)}= \gamma_1' m^2 A^2\del_\mu A^\mu+\frac{1}{8}(\alpha_2'-1)[F^2][\del A]+\frac{1}{4}(2-\alpha_2')F^2_{\mu\nu}\del^\mu A^\nu \\ &\mathcal{L}_\text{PN}^{(4)}= \lambda_0' m^4 A^4 +m^2 A^2\Big[\gamma_2'[\del A]^2-\frac{1}{2}\left(\frac{1}{2}\gamma_1'+\gamma_2'\right)\del_\mu A_\nu \del^\nu A^\mu +\frac{1}{2}\left(\frac{1}{2}\gamma_1'-\gamma_2'\right)\del_\mu A_\nu \del^\mu A^\nu \Big] \\ & \qquad + \frac{1}{128}(-1+\alpha_2'-2\alpha_3')[F^2]^2+\frac{1}{64}(10-5\alpha_2'-14\alpha_3')F^2_{\mu\nu}F^{2\mu\nu} \\ & \qquad+\frac{1}{8}\Big[\alpha_3'[F^2]([\del A]^2-\del_\mu A_\nu\del^\nu A^\mu)+\left(-2+\alpha_2'+4\alpha_3'\right)F^{2\mu\nu}\del^\alpha A_\mu\del_\alpha A_\nu \\& \qquad\qquad +\left(1-\alpha_2'-4\alpha_3'\right)F^{2\mu\nu}[\del A]\del_\mu A_\nu +\left(-2+\alpha_2'+2\alpha_3'\right)F^{\mu\nu}F^{\alpha\beta}\del_\mu A_\alpha\del_\nu A_\beta\Big], \end{aligned} \end{equation} where we used the notation $F^2_{\mu\nu}=F_\mu^{\ \alpha}F_{\nu\alpha}$ 
and $[F^2]=F^{\mu\nu}F_{\mu\nu}$. Following this reformulation, all of the combinations of the parameters $\{\bar\alpha_2,\bar\alpha_3,\bar\alpha_4\}$ in the Lagrangian reduce to combinations of only $\{\alpha_2',\alpha_3'\}$. We mentioned before that the PN interactions have to be related in a specific way for the constraint to be respected. This is well illustrated in the perturbative Lagrangian, where the coefficients do not each correspond to a single interaction, but rather intertwine several of them. This is in contrast with GP in \eqref{eq.GPDL}, where each interaction has its own coefficient. Finally, taking the decoupling limit \eqref{eq.DL} of this theory gives \begin{equation} \mathcal{L}_{\text{DL PN}}=\sum_{n=2}^\infty\frac{1}{\Lambda_3^{3(n-2)}}\mathcal{L}_{\text{DL PN}}^{(n)}\, , \end{equation} with \begin{equation}\label{eq.PNDL} \begin{aligned} &\mathcal{L}_{\text{DL PN}}^{(2)}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{2}(\del\phi)^2 \\ &\mathcal{L}_{\text{DL PN}}^{(3)}=\gamma_1'(\del\phi)^2\Box\phi+\frac{1}{8}(\alpha_2'-1)[F^2]\Box\phi+\frac{1}{4}(2-\alpha_2')F^2_{\mu\nu}(\del^\mu\del^\nu\phi)\\ &\mathcal{L}_{\text{DL PN}}^{(4)}=\gamma_2'(\del\phi)^2[(\Box\phi)^2-(\del_\mu\del_\nu\phi)^2] \\ & \:+\frac{1}{8}\Big[\alpha_3'[F^2][(\Box\phi)^2-(\del_\mu\del_\nu\phi)^2]+\left(-2+\alpha_2'+4\alpha_3'\right)F^{2\mu\nu}(\del^\alpha \del_\mu\phi)(\del_\alpha \del_\nu\phi) \\ & \: +\left(1-\alpha_2'-4\alpha_3'\right)F^{2\mu\nu}\Box\phi(\del_\mu\del_\nu\phi)+\left(-2+\alpha_2'+2\alpha_3'\right)F^{\mu\nu}F^{\alpha\beta}(\del_\mu\del_\alpha\phi)(\del_\nu\del_\beta\phi)\Big]\, . \end{aligned} \end{equation} The scalar self-interactions are the $\gamma_1'$ and $\gamma_2'$ terms and correspond to the cubic and quartic Galileon interactions respectively. The other terms match the vector-scalar sector of the decoupling limit of massive gravity \cite{derham2020}.
\section{Positivity Bounds}\label{sec.bounds} In this section, we review how the physical assumptions of unitarity, causality, and Lorentz invariance on the UV completion of an EFT give rise to bounds on the 2-to-2 scattering amplitudes. We consider four identical particles of mass $m$ and integer spin $S$. The generalization to particles with distinct masses or spins follows a similar derivation and can be found in \cite{de_Rham_2018}, as does the fermionic case. The amplitudes are expressed in terms of the Mandelstam variables $(s,t,u)$ \cite{Mandelstam1958}, where $s$ is the squared center-of-mass energy, $t$ the squared momentum transfer, and $u=4m^2-s-t$. In particular, $t$ is related to the scattering angle $\theta$ by $ \cos\theta = 1 + \frac{2t}{s-4m^2} $. Further definitions of the kinematic variables are presented in Appendix~\ref{sec.kinematics}. A difficulty that arises in deriving bounds for spinning particles comes from the crossing relations, which are trivial in the scalar case but not in the spinning case (except in the forward limit $t=0$). Therefore, to derive spinning bounds beyond the forward limit, it has been suggested in \cite{de_Rham_2018} to introduce a basis, known as the \textit{transversity} basis, which diagonalizes the spinning crossing relations. The singularities of the 2-to-2 amplitudes are well known: there are simple poles at the physical mass in the different exchange channels, $s,t,u=m^2$, and a branch cut starting at $s=4m^2$ corresponding to multi-particle production. Crossing symmetry between the $s$ and $u$ channels implies that $A(s,t)$ has two branch cuts on the real $s$-axis, from $s=-\infty$ to $-t$ and from $s=4m^2$ to $\infty$, which are referred to as the left hand (LH) and right hand (RH) cuts respectively. The amplitudes are usually assumed to be otherwise analytic in the whole Mandelstam complex plane \cite{Mandelstam1958}.
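These kinematic relations are easy to verify numerically. A small sketch for the equal-mass elastic configuration in the center-of-mass frame (the sample values of the mass, momentum, and angle are arbitrary) checks $s+t+u=4m^2$ and the relation between $t$ and $\cos\theta$:

```python
import numpy as np

def minkowski_sq(p):
    """Minkowski square p^2 = E^2 - |p_vec|^2, signature (+,-,-,-)."""
    return p[0]**2 - np.dot(p[1:], p[1:])

m, pmag, theta = 1.0, 0.7, 0.3   # sample mass, CM momentum, scattering angle
E = np.hypot(m, pmag)            # on-shell energy sqrt(m^2 + p^2)

# Elastic 2->2 kinematics in the center-of-mass frame
p1 = np.array([E, 0.0, 0.0,  pmag])
p2 = np.array([E, 0.0, 0.0, -pmag])
p3 = np.array([E,  pmag*np.sin(theta), 0.0,  pmag*np.cos(theta)])
p4 = np.array([E, -pmag*np.sin(theta), 0.0, -pmag*np.cos(theta)])

s = minkowski_sq(p1 + p2)
t = minkowski_sq(p1 - p3)
u = minkowski_sq(p1 - p4)

print(s + t + u, 4*m**2)                    # s + t + u = 4 m^2
print(np.cos(theta), 1 + 2*t/(s - 4*m**2))  # cos(theta) = 1 + 2t/(s - 4m^2)
```
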
Moreover, the spinning amplitudes have the same domain of analyticity as the scalar ones \cite{MartinSpin,Hara64}. Finally, the relation between causality and analyticity has been established for a long time \cite{Bogoliubov:1959bfo,Bremermann1958}. It is reviewed in Appendix A of \cite{de_Rham_2018}, and we shall not derive it again. \paragraph{Tree-level} The bounds presented in this section are valid to all orders in loops. However, in practice, it turns out to be convenient to work at tree level. The main difference is that, at tree level, $\Im[A(s,t)]$ can only be non-zero for $s\geq \Lambda^2$, where $\Lambda$ is the mass of the lightest state outside of the EFT, i.e.\ the cutoff of the EFT \cite{de_Rham_2017}. In this case, we can take the integration along the cuts to start at $\Lambda^2$. In the following, we write the integrals on the branch cuts to run from $\mu_b$ to $\infty$, where $\mu_b=4m^2$, but in general one can take $\mu_b=\Lambda^2$ at tree level. In this work, we compute the bounds at tree level. This is justified if we assume the presence of a weak coupling in the theory, corresponding to the suppression factor of the loops. As tree and loop effects are mixed in the bounds, it would make no sense to consider only tree-level amplitudes if they did not dominate the loop contributions \cite{de_Rham_2017gal}. We consider weak coupling, but there are recent works exploring the regime beyond it \cite{Bellazzini_2021,Bellazzini2021}. \subsection{Transversity Formalism} Here we review the transversity basis introduced in \cite{de_Rham_2018}.
The total bosonic\footnote{There is an additional change of signs in the crossing relations for fermions.} amplitude for the process $AB\to CD$ associated with the $s$-channel is related to the corresponding amplitude for $A\bar D\to C\bar B$ (where the bar denotes anti-particles) associated with the $u$-channel by a reordering of the particles \cite{de_Rham_2019}, \begin{equation} \mathcal{A}^{s}(p_1,p_2,p_3,p_4) = \mathcal{A}^u(p_1,-p_4,p_3,-p_2)\, . \end{equation} The right hand side (RHS) can be expressed in terms of the usual $(p_1,p_2,p_3,p_4)$ configuration via a Lorentz transformation (hence the need for the hypothesis that the UV theory is Lorentz invariant). Such a Lorentz transformation is trivial for scalar amplitudes, $ A^s(s,t)=A^u(u,t)$, but not for spinning particles. Usually, amplitudes are expressed in the helicity basis \cite{JACOB1959404}. However, the crossing relations in this basis are not convenient to deal with, except in the forward limit, where positivity bounds for spinning particles have therefore been derived and used in the helicity formalism \cite{Cheung2016,Bellazzini2017,Bonifacio_2016}. Away from the forward limit, they are neither diagonal nor sign-definite \cite{Trueman:1964zzb,Hara:1970gc,Hara71}, such that it is difficult to establish positivity along the LH cut in this formalism, which is a crucial ingredient in the derivation of the positivity bounds. Therefore, to derive bounds beyond the forward limit, we use the transversity formalism \cite{Kotanski66}, which has simplified crossing relations \cite{Kotanski70}. Helicity polarizations, denoted by $\lambda$, are defined along the momentum direction, while transversity polarizations, denoted by $\tau$, are defined along the direction transverse to the interaction plane. This is depicted in Fig.\ \ref{fig.trans}.
Amplitudes in the transversity basis, denoted by $\T$, are related to those in the helicity basis, denoted by $\H$, by \begin{equation}\label{eq.htot} \mathcal{T}_{\tau_1\tau_2\tau_3\tau_4} = \sum_{\lambda_1\lambda_2\lambda_3\lambda_4}u_{\lambda_1\tau_1}^Su_{\lambda_2\tau_2}^Su_{\lambda_3\tau_3}^{S*}u_{\lambda_4\tau_4}^{S*}\mathcal{H}_{\lambda_1\lambda_2\lambda_3\lambda_4}\, , \end{equation} where $u_{\lambda\tau}^S=D^S_{\lambda\tau}(\pi/2,\pi/2,-\pi/2)$ with $D^S$ a Wigner $D$-matrix \cite{wignergruppentheorie}. These transversity amplitudes obey simple crossing relations, which simplify further for elastic scattering or in the forward limit: \begin{equation}\label{eq.crossrel} \mathcal{T}^s_{\tau_1\tau_2\tau_3\tau_4}(s,t,u)=e^{i\pi\sum_i\tau_i}e^{-i\chi_u\sum_i\tau_i}\mathcal{T}^u_{-\tau_1-\tau_2-\tau_3-\tau_4}(u,t,s)\, , \end{equation} with \begin{equation}\label{eq.chiu} e^{\pm i\chi_u}=\frac{-su\mp2im\sqrt{stu}}{\sqrt{s(s-4m^2)u(u-4m^2)}}\, . \end{equation} \paragraph{Kinematical singularities} By its definition \eqref{eq.chiu}, the factor $e^{i\chi_u\sum_i\tau_i}$ in the crossing relation \eqref{eq.crossrel} introduces additional singularities of order $\sum_i\tau_i\leq 4S$: namely, poles at $s,u=0$, $4m^2$ and a branch point at $\sqrt{stu}=0$. These are kinematical singularities, and it is convenient to remove them. Let us consider these singularities separately. First, it has been shown in \cite{osti_4534874} that helicity amplitudes are regular at $s=0$. Then, by \eqref{eq.htot}, so are the transversity amplitudes. Second, the pole at $s=4m^2$ can be removed by multiplying by a factor $\sqrt{s(s-4m^2)}^{\sum_i\tau_i}$. In practice we use the maximal possible value of the exponent, that is $4S$, so that it works for any configuration of polarizations. Finally, as $\sqrt{stu}\sim\sin\theta$, we have that $\sqrt{stu}\to-\sqrt{stu}$ under $\theta\to-\theta$, such that any even function of $\theta$ does not contain the branch point.
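As a small numerical sketch of the change of basis \eqref{eq.htot} for $S=1$: the matrix $u^1_{\lambda\tau}=D^1_{\lambda\tau}(\pi/2,\pi/2,-\pi/2)$ can be built from the standard spin-1 Wigner (small) $d$-matrix, and one can check that it is unitary, so the map between helicity and transversity amplitudes is invertible. Sign and ordering conventions for the $d$-matrix vary between references; the one below is a common choice and is assumed here:

```python
import numpy as np

def wigner_D1(alpha, beta, gamma):
    """Spin-1 Wigner D-matrix D^1_{m'm}(alpha, beta, gamma), rows/cols ordered m', m = +1, 0, -1.
    Assumed convention: D^1_{m'm} = exp(-i m' alpha) d^1_{m'm}(beta) exp(-i m gamma)."""
    c, s = np.cos(beta), np.sin(beta)
    d = np.array([
        [(1 + c)/2,    -s/np.sqrt(2), (1 - c)/2],
        [ s/np.sqrt(2),  c,           -s/np.sqrt(2)],
        [(1 - c)/2,     s/np.sqrt(2), (1 + c)/2],
    ])
    m = np.array([1, 0, -1])
    return np.exp(-1j*m[:, None]*alpha) * d * np.exp(-1j*m[None, :]*gamma)

u = wigner_D1(np.pi/2, np.pi/2, -np.pi/2)      # u^1_{lambda tau}
print(np.allclose(u @ u.conj().T, np.eye(3)))  # True: the basis change is unitary
```

Unitarity holds regardless of the phase convention, since $D^1$ is a diagonal phase matrix times a real rotation matrix times another diagonal phase matrix.
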
Altogether, these considerations imply that the quantity \begin{equation}\label{eq.tplus} \mathcal{T}_\taus^+(s,\theta)=\left(s(s-4m^2)\right)^{2S}\left(\mathcal{T}_\taus(s,\theta)+\mathcal{T}_\taus(s,-\theta)\right)\,, \end{equation} is free of kinematical singularities. This regularized amplitude plays a role similar to that of a scalar amplitude in the derivation of the bounds which follows. \subsection{Spinning Bounds}\label{sec.spinbounds} \paragraph{Unitarity and analyticity} The requirement of unitarity imposes, via the optical theorem, that $\Abs_s\T^+_{\tau_1\tau_2}(s,0)$, together with its $t$-derivatives, is positive on the RH cut. Thanks to the crossing relations \eqref{eq.crossrel} of the transversity amplitudes, this consideration can be extended to the LH cut.\footnote{This extension is not trivial and can be found in \cite{de_Rham_2018}.} Note that $\Abs_s\T^+(s)=\frac{1}{2i}\text{Disc}\T^+(s)=\frac{1}{2i}\lim_{\epsilon\to 0}[\T^+(s+i\epsilon)-\T^+(s-i\epsilon)]$ denotes the absorptive part of the amplitude, which is equal to the imaginary part if the theory is time-reversal invariant. Next, analyticity allows one to analytically continue these properties away from $t=0$, such that \begin{equation}\label{eq.posabs} \begin{aligned} &\frac{\del^n}{\del t^n} \Abs_s\T^+_{\tau_1\tau_2}(s,t) >0, \quad \forall\, n\geq0, \, s\geq 4m^2, \, 0\leq t<m^2, \\ &\frac{\del^n}{\del t^n} \Abs_u\T^+_{\tau_1\tau_2}(s,t) >0, \quad \forall\, n\geq0, \, u\geq 4m^2, \, 0\leq t<m^2. \end{aligned} \end{equation} Furthermore, together with unitarity and analyticity, the requirement of locality in a gapped theory implies the presence of a Froissart bound \cite{Froissart,Martin1963}, which also exists for spinning particles \cite{Hara64} and can be extended to non-zero $t$ \cite{JinMartin}.
With $\epsilon(t)<1$ for $0\leq t< m^2$, it reads \begin{equation}\label{eq.spinfroissart} \lim_{s\to\infty}|\T_\taus(s,t)|< s^{1+\epsilon(t)} \, \implies\, \lim_{s\to\infty}|\T^+_\taus(s,t)|< s^{N_S}. \end{equation} The implication for $\T^+(s,t)$ comes from its definition in \eqref{eq.tplus}, where it has an additional $s^{4S}$ factor compared to $\T(s,t)$, such that we define \begin{equation}\label{eq.ns} N_S = 4S+2. \end{equation} Note that $\T^+$ could have been defined with a minimal factor of $s^{\sum_i\tau_i}$, in which case $N_S=\sum_i\tau_i+2$ is sufficient. However, a definition that depends on the polarization configuration is not convenient, especially for the study of indefinite transversities, so we shall use the definition \eqref{eq.ns}, which works for any polarization. It has been argued in \cite{de_Rham_2019} that not using the minimal number of subtractions only leads to small differences and does not change the qualitative form of the bounds. \paragraph{Dispersion relation} To derive a dispersion relation for the regularized amplitude $\T^+(s,t)$ using Cauchy's theorem, we first define the pole subtracted amplitude \begin{equation} \tilde\T^+_{\tau_1 \tau_2}(s,t) = \T^+_{\tau_1 \tau_2}(s,t) - \frac{\text{Res}\T^+_{\tau_1 \tau_2 }(s=m^2,t)}{s-m^2} - \frac{\text{Res}\T^+_{\tau_1 \tau_2} (s=3m^2-t,t)}{s+t-3m^2}\,, \end{equation} which is analytic in the whole complex $s$-plane (minus the branch cuts) and can therefore be expressed via a contour integral, see Fig.\ \ref{fig.contour}, \begin{equation}\label{eq.contour} \tilde\T^+_{\tau_1 \tau_2}(s,t) = \frac{1}{2\pi i} \oint_\mathcal{C} ds' \; \frac{\T^+_{\tau_1 \tau_2 }(s',t)}{(s'-s)}\,. \end{equation} Next, deforming the contour to $\mathcal{C}'$, the amplitude is given by the arcs at infinity and the contributions along the cuts. According to \eqref{eq.spinfroissart}, the arc contributions can be dropped by performing $N_S$ subtractions.
This introduces subtraction functions $a_n(t)$ and additional powers of $s$. Then, the contributions along the cuts are given by the discontinuity of the amplitude along the cuts, that is, the absorptive part. All in all, we can express the contour integral as \begin{equation} \label{eq.disrel1} \begin{aligned} \tilde\T^+_{\tau_1 \tau_2 }(s,t) = \sum_{n=0}^{N_S-1} a_n(t) s^n &+ \frac{s^{N_S}}{\pi} \int_{\mu_b}^\infty d \mu \frac{ {\text{Abs}}_s \T^+_{\tau_1 \tau_2 }(\mu,t) }{ \mu^{N_S} (\mu - s) } \\& +\frac{u^{N_S}}{\pi} \int_{\mu_b}^\infty d \mu \frac{ {\text{Abs}}_u \T^+_{\tau_1 \tau_2 }(4m^2-t-\mu,t) }{ \mu^{N_S} ( \mu - u) } \,. \end{aligned} \end{equation} The dispersion relation \eqref{eq.disrel1} allows us to derive positivity constraints on the derivatives of $\tilde\T^+_{\tau_1 \tau_2 }(s,t)$, using \eqref{eq.posabs}. Indeed, defining \begin{equation}\label{eq.fvtmain} f_{\tau_1\tau_2}(v,t)=\frac{1}{N_S!}\frac{\del^{N_S}}{\del s^{N_S}}\tilde\T^+_{\tau_1\tau_2}(s,t)\Big|_{s=v+2m^2-t/2}\, , \end{equation} where the $N_S$ derivatives remove the subtraction functions, the following quantities are positive \begin{equation}\label{eq.boundssum} \begin{aligned} & \del_v^{2N} f_{\tau_1\tau_2}(v,t)>0 \quad \forall\, N\geq 0, \\ &\del_t f_{\tau_1\tau_2}(v,t)+\frac{N_S+1}{2\M^2}f_{\tau_1\tau_2}(v,t)>0 \, , \end{aligned} \end{equation} where at tree level, \begin{equation} |v|<\Lambda^2, \; 0\leq t<m^2, \; \M^2=\Lambda^2\, . \end{equation} In particular, the tree-level bounds can be used to obtain constraints on the EFT cutoff $\Lambda$, as performed e.g.\ in \cite{de_Rham_2017}. The details of the derivation of \eqref{eq.boundssum} can be found in Appendix~\ref{app.bounds}. \paragraph{Indefinite transversity bounds} We can also consider initial and final particles that are not polarized in a definite transversity direction, but rather in some superposition of transversity polarizations.
The elastic scattering amplitudes for these indefinite transversity states are computed as \begin{equation} \T_{\alpha\beta}(v,t) = \sum_{\taus}\alpha_{\tau_1}\beta_{\tau_2}\alpha^*_{\tau_3}\beta^*_{\tau_4}\T_\taus(v,t)\, , \end{equation} where $\alpha$ and $\beta$ are generic unit vectors. For given indefinite polarizations, their components are the projections of the polarization along the definite transversity polarization vectors. Next, we define the indefinite polarization generalization of $f_{\tau_1\tau_2}$ as \begin{equation}\label{eq.funpol} f_{\alpha\beta}(v,t) = \sum_{\taus}\alpha_{\tau_1}\beta_{\tau_2}\alpha^*_{\tau_3}\beta^*_{\tau_4}f_\taus(v,t)\, , \end{equation} where $f_\taus$ is the inelastic generalization of $f_{\tau_1\tau_2}$, i.e.\ it is computed by \eqref{eq.fvtmain} for $\tilde\T_\taus$. The arguments we used to prove the positivity of $f_{\tau_1\tau_2}$ and its $v$-derivatives are still valid for $v=0$ and in the forward limit $t=0$ (see Appendix A of \cite{de_Rham_2019}),\footnote{It is an interesting question whether the condition $v=0$ may be relaxed.} giving \begin{equation} \del_v^{2N}f_{\alpha\beta}(0,0)>0 \quad \forall \quad N\geq 0\, . \end{equation} These bounds hold for indefinite transversities. Therefore, they also hold for definite and indefinite helicities. Finally, note that it is sufficient to study $f_{\alpha\beta}$, as it contains all the definite polarization quantities: $f_{\tau_1\tau_2}$ corresponds to the $\mathcal{O}(\alpha_{\tau_1}^2\beta_{\tau_2}^2)$ terms. \section{Bounds on Proca Theories}\label{sec.results} In the following, we obtain the $f$-quantities, to linear order in $t$, for the vector theories we study.
We can express them as \begin{equation}\label{eq.fdefmain} \begin{aligned} f_{\tau_1\tau_2}(v,t)&=f_{\tau_1\tau_2}(0,0)+ \del_t f_{\tau_1\tau_2}\cdot t\equiv \frac{1}{\Lambda_2^4}\left[ \mu_{\tau_1\tau_2}+\lambda_{\tau_1\tau_2}\frac{t}{m^2}\right], \end{aligned} \end{equation} where the $\mu$ and $\lambda$ coefficients are combinations of the EFT parameters. There are four independent quantities for the definite elastic polarizations, denoted \begin{equation} f_{SS}\equiv f_{00}\,,\quad f_{SV}\equiv f_{0\pm1}\, ,\quad f_{V_+}\equiv f_{\pm1\pm1} \, , \quad f_{V_-}\equiv f_{\pm1\mp1} \, . \end{equation} We can also write $f_{\alpha\beta}(0,0)$ as \begin{equation}\label{eq.fabmain} \begin{aligned} f_{\alpha\beta}(0,0)=\frac{1}{\Lambda_2^4}\Big[&\tilde\mu_1|\alpha_+|^2|\beta_+|^2 \\ +&\tilde\mu_2\big[|\alpha_+|^2(1-|\beta_+|^2)+|\beta_+|^2(1-|\alpha_+|^2)\big] \\ +&\tilde\mu_3\big[|\alpha_-|^2|\beta_0|^2+|\alpha_0|^2|\beta_-|^2\big] \\ +&\tilde\mu_4\big[|\alpha_-|^2|\beta_-|^2+|\alpha_0|^2|\beta_0|^2\big]\\ +&2(\tilde\mu_3-\tilde\mu_4)\Re(\alpha_-\alpha_0^*)\Re(\beta_-\beta_0^*)\\ +&\tilde\mu_5\big[\Re(\alpha_-\alpha_+^*)\Re(\beta_-\beta_+^*)-\Re(\alpha_0\alpha_+^*)\Re(\beta_0\beta_+^*)\big]\Big]>0\, . \end{aligned} \end{equation} Then, the bounds from the previous section translate into a set of 10 independent bounds: \begin{equation}\label{eq.sumboundsmain} \begin{aligned} &\tilde\mu_1>0\, , \quad \tilde\mu_2>0 \, ,\quad \tilde\mu_3>0\, , \\ &\mu_{SS}>0\, ,\quad \mu_{V_+}>0\, , \quad \mu_{V_-}>0\, , \\ &\lambda_{SS}\geq0\, , \quad \lambda_{SV}\geq0\, , \quad \lambda_{V_+}\geq0\, , \quad \lambda_{V_-}\geq0\, . \end{aligned} \end{equation} The formal derivation of this statement is presented in Appendix~\ref{app.linbounds}. The $\mu$'s are bounds in the forward limit; those with a tilde come from indefinite polarizations. The $\lambda$ bounds correspond to the $t$-derivative bounds, which are available thanks to the analysis beyond the forward limit.
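As a minimal numerical sketch of how the $\tilde\mu$ bounds follow from \eqref{eq.fabmain}, one can implement $f_{\alpha\beta}(0,0)$ (in units $\Lambda_2=1$; the helper name and the component ordering $(\alpha_+,\alpha_0,\alpha_-)$ are ours) and check that suitable choices of the unit vectors $\alpha,\beta$ isolate the individual coefficients $\tilde\mu_1,\dots,\tilde\mu_4$, so positivity of $f_{\alpha\beta}$ for all polarizations forces each of them to be positive:

```python
import numpy as np

def f_ab(alpha, beta, mu):
    """f_{alpha beta}(0,0) of the text, with Lambda_2 = 1.
    alpha, beta: complex unit vectors with components (a_+, a_0, a_-);
    mu: the five coefficients (mu1_tilde, ..., mu5_tilde)."""
    (ap, a0, am), (bp, b0, bm) = alpha, beta
    m1, m2, m3, m4, m5 = mu
    return (
        m1*abs(ap)**2*abs(bp)**2
        + m2*(abs(ap)**2*(1 - abs(bp)**2) + abs(bp)**2*(1 - abs(ap)**2))
        + m3*(abs(am)**2*abs(b0)**2 + abs(a0)**2*abs(bm)**2)
        + m4*(abs(am)**2*abs(bm)**2 + abs(a0)**2*abs(b0)**2)
        + 2*(m3 - m4)*np.real(am*np.conj(a0))*np.real(bm*np.conj(b0))
        + m5*(np.real(am*np.conj(ap))*np.real(bm*np.conj(bp))
              - np.real(a0*np.conj(ap))*np.real(b0*np.conj(bp)))
    )

mu = (1.0, 2.0, 3.0, 4.0, 5.0)                    # sample values for (mu1,...,mu5)
plus, zero, minus = (1, 0, 0), (0, 1, 0), (0, 0, 1)

print(f_ab(plus, plus, mu))    # = mu1_tilde
print(f_ab(plus, zero, mu))    # = mu2_tilde
print(f_ab(zero, minus, mu))   # = mu3_tilde
print(f_ab(minus, minus, mu))  # = mu4_tilde
```
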
We shall now compute these coefficients explicitly for the two types of Proca theories. \subsection{Generalized Proca} For convenience, we recall the perturbative interacting Lagrangian \eqref{eq:LGPpert} for Generalized Proca \begin{equation}\label{eq.lgp2} \begin{aligned} \mathcal{L}_{\text{GP}}^{(3)}&=a_1m^2A^2\del_\mu A^\mu+ a_2\Tilde{F}^{\mu\alpha}\Tilde{F}^\nu_{\ \alpha}\del_\mu A_\nu \\ \mathcal{L}_{\text{GP}}^{(4)}&=b_1 m^4A^4+b_2m^2A^2F_{\mu\nu}^2+ b_3m^2A^2[(\del\cdot A)^2-\del_\mu A_\nu\del^\nu A^\mu]+b_4m^2A_\mu A^\nu F^{\alpha\mu}F_{\alpha\nu} \\ &\, + b_5F^{\mu\nu}F^{\alpha\beta}F_{\mu\alpha}F_{\nu\beta}+b_6(F_{\mu\nu}^2)^2 +b_7 \Tilde{F}^{\alpha\beta}\Tilde{F}^{\mu\nu}\del_\alpha A_\mu\del_\beta A_\nu\, . \end{aligned} \end{equation} Computing the amplitudes from this Lagrangian, we obtain $f_{\tau_1\tau_2}$ with the following coefficients, which must be positive, \begin{equation}\label{eq.gpb1} \begin{aligned} \mu_{SS}=& 8 [2b_5 + 4 b_6+a_2(a_1+\frac{1}{4}a_2)]>0, \qquad \mu_{SV}= b_4 + 4 b_5-\frac{1}{2}a_2^2>0, \\ \mu_{V_+}=& 2[ 2b_5+4 b_6+a_2(a_1+\frac{1}{4}a_2)+a_1^2+b_1-2b_2+b_3]>0,\\ \mu_{V_-}=&2[2b_5+4 b_6+a_2(a_1-\frac{1}{4}a_2)-3a_1^2+b_1+2b_2-b_3+b_4]>0, \end{aligned} \end{equation} and \begin{equation}\label{eq.gpb2} \begin{aligned} \lambda_{SS}=& \frac{3}{2} a_2^2\geq0, \qquad \lambda_{SV}= \frac{1}{4}[3a_2^2-4b_7]\geq0,\\ \lambda_{V_+}=& \frac{3}{2}[b_3+a_1^2+ a_2(a_1+\frac{1}{4}a_2)]\geq0,\\ \lambda_{V_-}=& \frac{3}{2}[b_3+a_1^2-a_2(a_1+\frac{1}{12}a_2)]\geq0. \end{aligned} \end{equation} The indefinite polarizations in the forward limit $f_{\alpha\beta}(0,0)$ are of the form given by \eqref{eq.fabmain} with \begin{equation}\label{eq.gpb3} \begin{aligned} \tilde\mu_1 &= 8[b_1-a_1^2] >0, \qquad \tilde\mu_2 = 2b_4-a_2^2>0, \\ \tilde\mu_3 &= 8b_5>0, \qquad \tilde\mu_4= 8[2b_5 + 4 b_6+a_2(a_1+\frac{1}{4}a_2)]>0, \\ \tilde\mu_5 &= -4[4b_2+b_4-2b_3-\frac{1}{2}a_2^2-4a_1^2]\, . 
\end{aligned} \end{equation} Note that, as mentioned in \eqref{eq.defindef}, the constraint on $\tilde\mu_5$ is encoded in $\mu_{V\pm}$. Additionally, $\mu_{SS}=\tilde\mu_4$ and $\mu_{SV}\propto\tilde\mu_2+\tilde\mu_3$, so some of the previous bounds are redundant. To summarize, the constraints on the EFT coefficients from the bounds \eqref{eq.sumboundsmain} for Generalized Proca are given by \begin{equation}\label{eq.gpbounds} \begin{aligned} & b_1>a_1^2, \quad 2b_4>a_2^2, \quad b_5>0, \\ & b_3+a_1^2\geq \text{Max}\left[-a_2(a_1+\frac{1}{4}a_2),a_2(a_1+\frac{1}{12}a_2)\right],\quad 4b_7\leq 3a_2^2, \\ &2b_5 + 4 b_6+a_2(a_1+\frac{1}{4}a_2)>0,\\ & 2b_5+4 b_6+a_2(a_1+\frac{1}{4}a_2)+a_1^2+b_1-2b_2+b_3>0,\\ &2b_5+4 b_6+a_2(a_1-\frac{1}{4}a_2)-3a_1^2+b_1+2b_2-b_3+b_4>0, \end{aligned} \end{equation} where the first line shows the $\tilde\mu$ bounds, the second line the $\lambda$ bounds (without the trivial $\lambda_{SS}$), and the rest are the $\mu$ bounds. These results are in agreement with those of \cite{de_Rham_2019} (and, in the forward limit, \cite{Bonifacio_2016}\footnote{Note that the $S$ and $V$ denote helicity polarizations in their results, rather than transversity polarizations.}) up to some parameter redefinitions. We also have additional constraints arising from the interactions parameterized by $a_2$ and $b_7$, which were not previously considered. Finally, although only $f_{\alpha\beta}(0,0)$ is relevant for the bounds we consider, the full $f_{\alpha\beta}(v,t)$ contains a linear $t$ and $v$ dependence that is worth commenting on. In particular, the $v$ contribution gathers all of the $s^3$ terms in the amplitudes (every polarization is included in $f_{\alpha\beta}$) and therefore indicates the scale at which unitarity is perturbatively broken.
For the Generalized Proca this contribution is given by \begin{equation}\label{eq.fabv} \begin{aligned} f_{\alpha\beta}^v = \frac{14}{\Lambda_2^4m^2}a_2\Big[&a_2\Im(\alpha_-\alpha_0^*)\Im(\beta_0\beta_-^*)\\ &+(2a_1+a_2)[\Im(\alpha_+\alpha_0^*)\Im(\beta_0\beta_+^*)-\Im(\alpha_-\alpha_+^*)\Im(\beta_+\beta_-^*)]\Big]. \end{aligned} \end{equation} Then, perturbative unitarity breaks at $s^3\sim\Lambda_2^4m^2\sim\Lambda_3^6$, which confirms the existence of non-trivial operators at the scale $\Lambda_3$. What is interesting here is the overall $a_2$ coefficient, which means that, for any polarization, the unitarity-breaking term is proportional to $a_2$. Therefore, setting $a_2=0$ raises the cutoff of the model. \paragraph{Some examples} \begin{itemize} \item First, in addition to raising the cutoff scale, setting $a_2=0$ greatly simplifies the expression of the bounds. The $a_2$ term (along with the $b_7$ interaction term) generates the scalar-vector mixing terms in the decoupling limit \eqref{eq.GPDL} and, as seen from \eqref{eq.fabv}, controls the unitarity-breaking contribution. The bounds \eqref{eq.gpbounds} for $a_2=0$ are given by \begin{equation}\label{eq.a20bounds} \begin{aligned} & b_1>a_1^2, \quad b_4>0, \quad b_5>0, \quad b_3\geq -a_1^2, \quad b_7\leq 0, \quad 2b_5+4 b_6>0, \\ & b_1-2b_2+b_3+a_1^2+2b_5+4 b_6>0,\quad b_1+2b_2-b_3+b_4-3a_1^2 +2b_5+4 b_6>0. \end{aligned} \end{equation} Hence, having the dimension-6 operator represented by $a_2$ weakens the positivity constraints. This seems to be aligned with the findings in \cite{Zhang:2018shp}. \item Additionally, $b_7$ only appears in the beyond-forward-limit bound $\lambda_{SV}\geq 0$. Note here the importance of the analysis beyond the forward limit: without it, $b_7$ would remain unconstrained. Therefore, if we consider the case with $b_7=0$ (i.e.\ canceling the scalar-vector mixing term complementary to the one of the previous example), we get the bounds \eqref{eq.gpbounds} but without the constraint $4b_7\leq 3a_2^2$, which trivializes.
Similarly, setting both $a_2$ and $b_7$ to 0 gives again \eqref{eq.a20bounds}, but without the constraint $b_7\leq 0$. In other words, the presence or absence of $b_7$ does not influence the constraints on the other parameters. \item Then, studying the simplest vector Galileon model, keeping only the interactions in $a_1$ and $b_3$, leads to the inconsistent bounds \begin{equation} 3a_1^2<-b_3\leq a_1^2. \end{equation} As already noted in \cite{de_Rham_2019}, such a theory admits no standard UV completion. There, it was indicated that other interactions should be included in order to obtain a window of parameters satisfying the positivity bounds. Here, our analysis of indefinite polarizations provides a more precise result on which interactions to include. The bounds for any model containing the cubic vector Galileon can be satisfied by adding the purely quartic interaction parametrized by $b_1$, which must be positive due to the condition $b_1>a_1^2$ coming from $\tilde\mu_1$. This minimal model is given by the Lagrangian \begin{equation} \begin{aligned} \L=&-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{2}m^2 A^2 +\frac{1}{\Lambda_2^2}a_1m^2A^2\del_\mu A^\mu \\ &+\frac{1}{\Lambda_2^4} b_3m^2A^2[(\del\cdot A)^2-\del_\mu A_\nu\del^\nu A^\mu] +\frac{1}{\Lambda_2^4} b_1m^4A^4\, , \end{aligned} \end{equation} with the bounds reducing to \begin{equation}\label{eq.galbound} \begin{aligned} 3a_1^2-b_1 < -b_3\leq a_1^2. \end{aligned} \end{equation} The corresponding window of parameters is pictured in figure~\ref{fig.galbounds}. Note that the bounds on $\tilde\mu_1$ and $\mu_{V_+}$ ($b_1>a_1^2$ and $b_1+b_3+ a_1^2>0$) are also non-trivial, but are automatically fulfilled when \eqref{eq.galbound} is respected. \item If we only consider a model containing the interaction terms leading to scalar-vector mixing in the decoupling limit, with coefficients $a_2$ and $b_7$, then the bounds cannot be satisfied.
The positivity bound $\tilde\mu_2>0$ requires the $b_4$ interaction to also be included, with the constraints \begin{equation}\label{eq.ba2b7} 2b_4>a_2^2, \quad 4b_7\leq 3a_2^2. \end{equation} With only these interactions, $0\leq\lambda_{V_-}= -a_2^2/8 $ is violated. Hence, we need to also add either $a_1$ (and therefore $b_1$) or $b_3$, i.e.\ again the vector Galileon interactions. The minimal such model would therefore contain $\{a_2,b_3,b_4,b_7\}$ and be constrained by the bounds \begin{equation} \frac{a_2^2}{12}\leq b_3<b_4-\frac{a_2^2}{4}. \end{equation} \item Finally, by the previous considerations, the simplest model including both the vector Galileon $\{a_1, b_3\}$ and mixing interactions $\{a_2,b_7\}$ should also contain the quartic interactions with coefficients $b_1$ (to satisfy $\tilde{\mu}_1>0$) and $b_4$ (to satisfy $\tilde{\mu}_2>0$). The bounds for such a model correspond to setting $b_2,b_5,b_6=0$ in \eqref{eq.gpbounds}. Note that we could also consider only the mixing term $a_2$, setting $b_7=0$, which would give the same bounds. \item Similarly, the simplest vector Galileon model with $\{a_1, b_3\}$ and only the mixing term $b_7$ should contain $b_1$ as well, and the constraints would be given by \eqref{eq.galbound} plus $b_7\leq 0$. \end{itemize} \subsection{Proca-Nuevo} First, we recall the perturbative Proca-Nuevo Lagrangian, whose coefficients we want to constrain.
\begin{equation}\label{eq.LPNredef} \begin{aligned} &\mathcal{L}_\text{PN}^{(3)}= \gamma_1' m^2 A^2\del_\mu A^\mu+\frac{1}{8}(\alpha_2'-1)[F^2][\del A]+\frac{1}{4}(2-\alpha_2')F^2_{\mu\nu}\del^\mu A^\nu \\ &\mathcal{L}_\text{PN}^{(4)}= \lambda_0' m^4 A^4 +m^2 A^2\Big[\gamma_2'[\del A]^2-\frac{1}{2}\left(\frac{1}{2}\gamma_1'+\gamma_2'\right)\del_\mu A_\nu \del^\nu A^\mu +\frac{1}{2}\left(\frac{1}{2}\gamma_1'-\gamma_2'\right)\del_\mu A_\nu \del^\mu A^\nu \Big] \\ & \qquad + \frac{1}{128}(-1+\alpha_2'-2\alpha_3')[F^2]^2+\frac{1}{64}(10-5\alpha_2'-14\alpha_3')F^2_{\mu\nu}F^{2\mu\nu} \\ & \qquad+\frac{1}{8}\Big[\alpha_3'[F^2]([\del A]^2-\del_\mu A_\nu\del^\nu A^\mu)+\left(-2+\alpha_2'+4\alpha_3'\right)F^{2\mu\nu}\del^\alpha A_\mu\del_\alpha A_\nu \\& \qquad\qquad +\left(1-\alpha_2'-4\alpha_3'\right)F^{2\mu\nu}[\del A]\del_\mu A_\nu +\left(-2+\alpha_2'+2\alpha_3'\right)F^{\mu\nu}F^{\alpha\beta}\del_\mu A_\alpha\del_\nu A_\beta\Big]. \end{aligned} \end{equation} In contrast with the GP model, the PN parameters do not correspond to a specific interaction (except for $\lambda_0'$), but rather appear in various combinations parametrizing several interactions. This is due to the special tuning required to obtain a ghost-free theory despite higher-order equations of motion. However, note that $\lambda_0'$ parametrizes a purely quartic interaction that vanishes in the decoupling limit, $\gamma_1'$ and $\gamma_2'$ give rise to the cubic and quartic Galileon interactions respectively in the decoupling limit, and the interactions entering with the $\alpha'$'s give rise to scalar-vector mixing terms in the decoupling limit. Then, $f_{\tau_1\tau_2}(v,t)$ is again of the form \eqref{eq.fdefmain}, with coefficients that must satisfy the positivity bounds.
At constant order the bounds are given by \begin{equation}\label{eq.pnb1} \begin{aligned} &\mu_{SS}=\frac{1}{8}[2+\alpha_2'(\alpha_2' - 4 (1 + 4 \gamma_1'))]>0, \quad \mu_{SV}=\frac{1}{32}[8 - 2 \alpha_2' -\alpha_2'^2 + 4 \alpha_3']>0, \\ &\mu_{V_+}=\frac{1}{32}[10+\alpha_2'(\alpha_2'-4(1+4\gamma_1'))-16\gamma_1'(1-4\gamma_1')+96\gamma_2'+64\lambda_0']>0, \\ &\mu_{V_-}=\frac{1}{32}[2-\alpha_2'(\alpha_2'+4(1+4\gamma_1'))-48\gamma_1'(1+4\gamma_1')-96\gamma_2'+64\lambda_0']>0, \end{aligned} \end{equation} and at linear $t$ order they are given by \begin{equation}\label{eq.pnb2} \begin{aligned} & \lambda_{SS}=\frac{3}{32}\alpha_2'^2\geq 0, \quad \lambda_{SV}=\frac{1}{64}[3\alpha_2'^2 - 8 \alpha_2' +16 \alpha_3'+4]\geq 0, \\ & \lambda_{V_+}=\frac{3}{32}[16\gamma_2'+16\gamma_1'^2-\alpha_2'(1+4\gamma_1'-\frac{1}{4}\alpha_2')+2]\geq 0, \\ & \lambda_{V_-}=\frac{3}{32}[16\gamma_2'+16\gamma_1'^2+\alpha_2'(1+4\gamma_1'-\frac{1}{12}\alpha_2')-\frac{2}{3}]\geq 0. \end{aligned} \end{equation} Additionally, the indefinite polarizations quantity $f_{\alpha\beta}(0,0)$ is given by \eqref{eq.fabmain} with \begin{equation}\label{eq.pnb3} \begin{aligned} &\tilde\mu_1 = 4[2\lambda_0'-\gamma_1'(1+2\gamma_1')] >0, \quad \tilde\mu_2 = \frac{1}{16}[4-\alpha_2'^2]>0, \\ &\tilde\mu_3 = \frac{1}{8}[2-\alpha_2'+2\alpha_3']>0, \quad \tilde\mu_4= \frac{1}{8}[2+\alpha_2'(\alpha_2'-4(1+4\gamma_1'))]>0, \\ &\tilde\mu_5 = \frac{1}{8}[\alpha_2'^2+16\gamma_1'(1+8\gamma_1')+96\gamma_2'+4]. \end{aligned} \end{equation} Before analyzing the bounds, note that the contribution to $f_{\alpha\beta}$ linear in $v$, which corresponds to the $s^3$ terms in all of the amplitudes, is given by \begin{equation}\label{eq.fabvPN} \begin{aligned} f_{\alpha\beta}^v = \frac{7}{8\Lambda_2^4m^2}\alpha_2'\Big[&\alpha_2'\Im(\alpha_-\alpha_0^*)\Im(\beta_0\beta_-^*)\\ &+(\alpha_2'-8\gamma_1'-2)[\Im(\alpha_+\alpha_0^*)\Im(\beta_0\beta_+^*)-\Im(\alpha_-\alpha_+^*)\Im(\beta_+\beta_-^*)]\Big]. 
\end{aligned} \end{equation} It has exactly the same structure as the GP contribution \eqref{eq.fabv}. Again, this means that perturbative unitarity breaks at scale $\Lambda_3$. The overall coefficient of these terms is now $\alpha_2'$. Therefore, the tuning $\alpha_2'=0$ plays a special role as it raises the cutoff. Unlike for GP, this tuning does not cancel a particular interaction term in the Lagrangian, but rather relates the interactions in a specific way. The positivity constraints \eqref{eq.pnb1}$-$\eqref{eq.pnb3} are highly redundant. They reduce to only 5 independent bounds, which are given by \begin{equation}\label{eq.ReducedBoundsPN} \begin{aligned} \tilde\mu_2>0 : \quad & -2<\alpha_2'<2\, , \\ \lambda_{SV}\geq 0 : \quad &\alpha_3'\geq \frac{1}{16}\left(-3\alpha_2'^2+8\alpha_2'-4\right)\, ,\\ \mu_{SS}>0: \quad &\alpha_2'\gamma_1'<\frac{1}{16}\left(\alpha_2'^2-4\alpha_2'+2\right)\, ,\\ \lambda_{V_-}\geq 0 : \quad & \gamma_2'\geq -\gamma_1'^2+\frac{1}{192}\alpha_2'(\alpha_2'-48\gamma_1'-12)+\frac{1}{24}\, ,\\ \mu_{V_-}>0 : \quad & \lambda_0'-\frac{3}{2}\gamma_2'>3\gamma_1'^2+\frac{3}{4}\gamma_1'+\frac{1}{64}\alpha_2'(\alpha_2'+16\gamma_1'+4)-\frac{1}{32}\, . \end{aligned} \end{equation} Both the analysis of indefinite polarizations, via $\tilde\mu_2$, and beyond forward limit, via $\lambda_{SV}$ and $\lambda_{V_-}$, play an important role in constraining the model. In the following, we comment on the constraints \eqref{eq.ReducedBoundsPN} separately and show them graphically on figures~\ref{fig:a3}$-$\ref{fig.grid}, where the region excluded by each bound is depicted in its own color and the allowed region remains white. First, we note that the particular case $\alpha_2'=0$ greatly simplifies the bounds. Even if this is not necessarily manifest from the Lagrangians, $\alpha_2'$ appears to play a similar role to $a_2$ from the GP model, both in the bounds and in the breaking of perturbative unitarity. 
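The claimed redundancy can be spot-checked numerically. The following sketch (in Python, with variable and dictionary names of our own choosing; sampling ranges are illustrative) draws parameter points satisfying the five reduced bounds \eqref{eq.ReducedBoundsPN} and verifies that all quantities in \eqref{eq.pnb1}$-$\eqref{eq.pnb3} then come out non-negative:

```python
import random

def pn_bounds(a2, a3, g1, g2, l0):
    """PN positivity quantities of eqs. (pnb1)-(pnb3); a2 = alpha_2', g1 = gamma_1', etc."""
    return {
        "mu_SS":  (2 + a2 * (a2 - 4 * (1 + 4 * g1))) / 8,
        "mu_SV":  (8 - 2 * a2 - a2**2 + 4 * a3) / 32,
        "mu_V+":  (10 + a2 * (a2 - 4 * (1 + 4 * g1)) - 16 * g1 * (1 - 4 * g1)
                   + 96 * g2 + 64 * l0) / 32,
        "mu_V-":  (2 - a2 * (a2 + 4 * (1 + 4 * g1)) - 48 * g1 * (1 + 4 * g1)
                   - 96 * g2 + 64 * l0) / 32,
        "lam_SS": 3 * a2**2 / 32,
        "lam_SV": (3 * a2**2 - 8 * a2 + 16 * a3 + 4) / 64,
        "lam_V+": 3 * (16 * g2 + 16 * g1**2 - a2 * (1 + 4 * g1 - a2 / 4) + 2) / 32,
        "lam_V-": 3 * (16 * g2 + 16 * g1**2 + a2 * (1 + 4 * g1 - a2 / 12) - 2 / 3) / 32,
        "tmu_1":  4 * (2 * l0 - g1 * (1 + 2 * g1)),
        "tmu_2":  (4 - a2**2) / 16,
        "tmu_3":  (2 - a2 + 2 * a3) / 8,
    }

def sample_reduced(rng):
    """Draw a random point satisfying the five reduced bounds (eq.ReducedBoundsPN)."""
    a2 = rng.uniform(-1.99, 1.99)                               # tilde-mu_2 > 0
    a3 = (-3 * a2**2 + 8 * a2 - 4) / 16 + rng.uniform(0, 2)     # lambda_SV >= 0
    g1 = rng.uniform(-1, 1)
    if a2:                                                      # mu_SS > 0
        F = (a2**2 - 4 * a2 + 2) / (16 * a2)
        g1 = min(g1, F - 1e-3) if a2 > 0 else max(g1, F + 1e-3)
    g2 = -g1**2 + a2 * (a2 - 48 * g1 - 12) / 192 + 1 / 24 + rng.uniform(0, 2)  # lambda_V- >= 0
    l0 = (1.5 * g2 + 3 * g1**2 + 0.75 * g1 + a2 * (a2 + 16 * g1 + 4) / 64
          - 1 / 32 + rng.uniform(1e-3, 2))                      # mu_V- > 0
    return a2, a3, g1, g2, l0

rng = random.Random(0)
for _ in range(2000):
    point = sample_reduced(rng)
    # every one of the original bounds is implied by the five reduced ones
    assert all(v > -1e-12 for v in pn_bounds(*point).values()), point
```

Such a randomized check cannot replace the analytic reduction, but it quickly catches any missing implication.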
Setting $\alpha_2'=0$, the bounds $\tilde\mu_2>0$ and $\mu_{SS}>0$ are trivially satisfied, and the remaining ones simplify to \begin{equation}\label{eq.PNBa20} \begin{aligned} \lambda_{SV}\geq 0 : \quad &\alpha_3'\geq -\frac{1}{4}\, ,\\ \lambda_{V_-}\geq0 : \quad & \gamma_2'\geq-\gamma_1'^2+\frac{1}{24}\, ,\\ \mu_{V_-}>0 : \quad & \lambda_0'-\frac{3}{2}\gamma_2'>3\gamma_1'^2+\frac{3}{4}\gamma_1'-\frac{1}{32}\, . \end{aligned} \end{equation} \afterpage{ \clearpage} Now, back to the general $\alpha_2'$ case, the first bound of \eqref{eq.ReducedBoundsPN}, $\tilde\mu_2>0$, restricts one of the parameters, $\alpha_2'$, to an $\mathcal{O}(1)$ interval, independently of the rest of the model. In particular, the coefficients of the interactions $[F^2][\del A]$ and $F^2_{\mu\nu}\del^\mu A^\nu$ in $\L_\text{PN}^{(3)}$ are constrained to lie in the intervals $(-3/8,1/8)$ and $(0,1)$ respectively. Then, $\alpha_3'$ only appears in the second bound, $\lambda_{SV}\geq 0$, together with $\alpha_2'$. The allowed region for $\{\alpha_2',\alpha_3'\}$ is therefore entirely determined by the first two bounds and is shown in figure~\ref{fig:a3}. Although neither of these coefficients corresponds to a specific interaction term by itself, together they parametrize the last three lines of $\L_\text{PN}^{(4)}$, as can be seen in \eqref{eq.LPNredef}. Next, $\mu_{SS}>0$ relates $\gamma_1'$ to $\alpha_2'$ as shown in figure~\ref{fig:c1}. This region corresponds to \begin{equation}\label{eq.signc1} \begin{aligned} &\begin{cases} \gamma_1'> F(\alpha_2')\quad &-2<\alpha_2'<0, \\ \gamma_1'\in\mathbb{R} &\alpha_2'=0, \\ \gamma_1'< F(\alpha_2')\quad &0<\alpha_2'<2, \end{cases} \qquad F(\alpha_2')=\frac{1}{16}\frac{\alpha_2'^2-4\alpha_2'+2}{\alpha_2'}\,. \end{aligned} \end{equation} However, the coefficient $\gamma_1'$ also appears in the remaining bounds. Adding these constraints in figure~\ref{fig:c1} would reduce the allowed region. 
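The equivalence between $\mu_{SS}>0$ and the piecewise condition \eqref{eq.signc1} is elementary but easy to verify directly; a minimal sketch (function names are our own) compares both forms on random samples:

```python
import random

def mu_SS(a2, g1):
    # mu_SS of eq. (pnb1), with the overall factor 1/8 kept
    return (2 + a2 * (a2 - 4 * (1 + 4 * g1))) / 8

def allowed(a2, g1):
    # piecewise region of eq. (signc1), with F(a2') = (a2'^2 - 4 a2' + 2)/(16 a2')
    if a2 == 0:
        return True
    F = (a2**2 - 4 * a2 + 2) / (16 * a2)
    return g1 > F if a2 < 0 else g1 < F

rng = random.Random(0)
for _ in range(10000):
    a2, g1 = rng.uniform(-2, 2), rng.uniform(-3, 3)
    # dividing by a2 flips the inequality for a2 < 0, as eq. (signc1) encodes
    assert (mu_SS(a2, g1) > 0) == allowed(a2, g1)
```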
Finally, $\gamma_2'$ and $\lambda_0'$ are successively introduced in $\lambda_{V_-}$ and $\mu_{V_-}$. These constraints are considered together with those on $\alpha_2'$ and $\gamma_1'$. These bounds are shown both in the $\{\gamma_1',\gamma_2'\}$ and $\{\gamma_1',\lambda_0'\}$ planes for various values of the other parameters in the top and bottom lines of figure~\ref{fig.grid} respectively. These allow for a visualization of the influence of different parameters on the allowed regions of the parameter space, shown in white. In particular, we pick $\alpha_2'$ to be successively negative, zero, and positive. Regarding the top line, the islands are reproduced in figure~\ref{fig.PNgamma2} for various values of $\lambda_0'$. We observe that the islands shrink for decreasing values of $\lambda_0'$, with no allowed region for $\lambda_0'< -0.0624$. This bound on $\lambda_0'$, which parametrizes the quartic interaction $A^4$, is similar to the bound on $b_1$ in GP, which also parametrizes the $A^4$ interaction. However, in GP, $b_1$ must be positive, while $\lambda_0'$ is still allowed to be very slightly negative. From the bottom line of figure~\ref{fig.grid}, we can see that the allowed region of the $\{\gamma_1',\lambda_0'\}$ parameter space is the parabola described by $\mu_{V_-}$, which gets shifted upwards for increasing $\gamma_2'$. Also, a band around $\gamma_1'=0$ is forbidden by $\lambda_{V_-}$ for negative $\gamma_2'$. In particular, the region $\lambda_0'< -0.0624$ is always forbidden by one of the bounds, in accordance with our previous remarks. The value of $\alpha_2'$ primarily influences the green region in figure~\ref{fig.grid}, while the blue and red regions are fairly stable as $\alpha_2'$ changes. This can be seen explicitly by inspecting the last two bounds for various values of $\alpha_2'$ in its allowed interval, see figure~\ref{fig.a2influence}. 
Therefore, the $\lambda_{V_-}$ and $\mu_{V_-}$ bounds, whose expressions may seem rather complicated at first glance in \eqref{eq.ReducedBoundsPN}, are well approximated by their simplified expressions at $\alpha_2'=0$ in \eqref{eq.PNBa20}. Accordingly, the main role of $\alpha_2'$ in the three last bounds of \eqref{eq.ReducedBoundsPN} is roughly to pick the sign of $\gamma_1'$ via \eqref{eq.signc1}. This can be seen in figure~\ref{fig.PNgamma2}, where only the left/right part of the full island is allowed for positive/negative $\alpha_2'$, but the rest of the shape remains quite stable. As a last comment, we investigate the importance of the beyond forward limit analysis. It was observed in \cite{de_Rham_2019} that the bounds beyond the forward limit had little influence on the allowed island for dRGT massive gravity, and it is interesting to notice that the same does not hold for PN. Inspecting figure~\ref{fig.bflinfluence}, we see that going away from the forward limit is particularly relevant for PN. In particular, the island for $\{\gamma_1',\gamma_2'\}$ is reduced by more than half. However, we should note that we have not used the most stringent bounds in the forward limit, and by minimizing $f_{\alpha\beta}$ numerically, it is possible that the blue region could be brought closer to the orange region. In either case, this illustrates how, when applied to the decoupling limit of a theory, the impact of positivity bounds may differ significantly compared to the original theory. In itself this statement is not surprising: as noted in \cite{de_Rham_2017gal}, taking the decoupling limit of a theory is a different procedure than considering the low-energy EFT, and the mixing with other IR operators may be significant. \subsection{Comparison between GP and PN}\label{sec.comparison} Even though there exists no tuning of the coefficients that makes GP and PN equivalent, we find that some of the coefficients play similar roles in both theories. 
This identification is rendered more straightforward by our redefinition \eqref{eq.PNredef} of the PN coefficients. In particular, from our previous results, there is an obvious analogy between $a_2$ and $\alpha_2'$: although they play quite different roles at the Lagrangian level, they both control the scale at which perturbative unitarity breaks in their respective theories. Also, the pure quartic interaction $A^4$ is parametrized by $b_1$ and $\lambda_0'$ in each model. Finally, by looking at the decoupling limits \eqref{eq.GPDL} and \eqref{eq.PNDL}, we see that $a_1$ and $b_3$ are analogous to $\gamma_1'$ and $\gamma_2'$, in the sense that they correspond to the cubic and quartic Galileon interactions respectively. However, they are not truly equivalent parameters, as $\gamma_1'$ and $\gamma_2'$ also appear as the coefficients of additional interactions away from the decoupling limit. To summarize, this analogy of coefficients is given by \begin{equation}\label{eq.param} \begin{aligned} &a_1\,\longleftrightarrow\,\gamma_1'\, , \quad a_2\,\longleftrightarrow\,\alpha_2'\, , \quad b_1\,\longleftrightarrow\,\lambda_0' \, , \quad b_3\,\longleftrightarrow\,\gamma_2' \, . \end{aligned} \end{equation} The results for the ten bounds \eqref{eq.sumboundsmain} for GP and PN are reported in table~\ref{tab.bounds}. Inspecting these results, while keeping this analogy in mind, we observe a similarity between the bounds. \begin{table}[h!] 
\renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{|l|l|l|} \hline & Generalized Proca & Proca-Nuevo \\ \hline $\tilde\mu_1>0$ & $b_1-a_1^2>0$& $2\lambda_0'-\gamma_1'(1+2\gamma_1')>0$ \\ $\tilde\mu_2>0$ & $2b_4-a_2^2>0$ &$4-\alpha_2'^2>0$ \\ $\tilde\mu_3>0$ & $b_5>0$& $2-\alpha_2'+2\alpha_3'>0$\\ $\mu_{SS}>0$ & $2b_5+4 b_6+a_2(a_1+\frac{1}{4}a_2)>0$ &$2+\alpha_2'(\alpha_2'-4(1+4\gamma_1'))>0 $ \\ $\mu_{V_+}>0$ & $2b_5+4 b_6+a_2(a_1+\frac{1}{4}a_2)$&$10+\alpha_2'(\alpha_2'-4(1+4\gamma_1'))$ \\&$\quad+a_1^2+b_1-2b_2+b_3>0$ & $\quad -16\gamma_1'(1-4\gamma_1')+96\gamma_2'+64\lambda_0'>0$ \\ $\mu_{V_-}>0$&$2b_5+4 b_6+a_2(a_1-\frac{1}{4}a_2)$&$ 2-\alpha_2'(\alpha_2'+4(1+4\gamma_1'))$\\ &$\quad-3a_1^2+b_1+2b_2-b_3+b_4>0$&$\quad -48\gamma_1'(1+4\gamma_1')-96\gamma_2'+64\lambda_0'>0 $ \\ $\lambda_{SS}\geq0$&$a_2^2\geq0$ &$\alpha_2'^2\geq0$\\ $\lambda_{SV}\geq0$ &$a_2^2-4b_7\geq0$ &$ 3\alpha_2'^2 - 8 \alpha_2' +16 \alpha_3'+4\geq0$\\ $\lambda_{V_+}\geq0$ & $b_3 + a_1^2+a_2(a_1+\frac{1}{4}a_2)\geq0$ &$16\gamma_2'+16\gamma_1'^2-\alpha_2'(1+4\gamma_1'-\frac{1}{4}\alpha_2')+2\geq0$\\ $\lambda_{V_-}\geq0$ &$ b_3 +a_1^2-a_2(a_1+\frac{1}{12}a_2)\geq0$&$16\gamma_2'+16\gamma_1'^2+\alpha_2'(1+4\gamma_1'-\frac{1}{12}\alpha_2')-\frac{2}{3}\geq0$\\ \hline \end{tabular} \caption{Summary of the positivity bounds \eqref{eq.gpb1}$-$\eqref{eq.gpb3} for GP and \eqref{eq.pnb1}$-$\eqref{eq.pnb3} for PN. The positive overall factors are omitted. 
The coefficients of the GP and PN models are defined in \eqref{eq:LGPpert} and \eqref{eq.LPNredef1} respectively.} \label{tab.bounds} \end{table} Moreover, re-establishing the overall coefficients that were omitted in table~\ref{tab.bounds} for readability, there exists a tuning that makes the bounds of both theories perfectly equivalent, which is given by \begin{equation}\label{eq.boundstuning} \begin{aligned} &a_1=\pm(\gamma_1'+\frac{1}{4}-\frac{1}{3}\alpha_2'^{-1}) \\ &a_2=\pm\frac{1}{4}\alpha_2' \\ &b_1=\lambda_0'+\frac{1}{16}-\frac{1}{6}\alpha_2'^{-1}(1+4\gamma_1')+\frac{1}{9}\alpha_2'^{-2} \\ &b_3=\gamma_2'-\frac{1}{2}\gamma_1'-\frac{1}{48}+\frac{1}{6}\alpha_2'^{-1}(1+4\gamma_1')-\frac{1}{9}\alpha_2'^{-2}\\ &b_7=-\frac{1}{16}(1-2\alpha_2'+4\alpha_3') \end{aligned} \qquad \begin{aligned} &b_2=-\frac{1}{4}\gamma_2'+\frac{1}{8}\gamma_1'-\frac{1}{96} \\ &\qquad-\frac{1}{12}\alpha_2'^{-1}(1+4\gamma_1')+\frac{1}{8}\alpha_2'^{-2} \\ &b_4=\frac{1}{8} \\ &b_5=\frac{1}{64}(2-\alpha_2'+2\alpha_3') \\ &b_6=-\frac{1}{384}(11-3\alpha_2'+6\alpha_3')\, . \end{aligned} \end{equation} The first terms correspond to the intuitive matching of \eqref{eq.param}, but there are additional corrections depending mainly on $\alpha_2'$. We could imagine that the tuning takes a simpler form if we consider the bounds with $\alpha_2'=0$. However, quite surprisingly, there is no such tuning in this case. The existence of this matching does not mean that the theories are equivalent, as these bounds only contain the $s^2$ contributions to the elastic amplitudes. Inserting this solution in the full amplitudes does not make them match. For instance, considering the same amplitude in helicity basis as \cite{derham2020}, \begin{equation} \mathcal{A}_{++--}^{\text{GP}}(s,t)-\mathcal{A}_{++--}^{\text{PN}}(s,t)=\frac{1}{96\Lambda_2^4}\, s \left(3 \cos (2 \theta ) \left(s-4 m^2\right)+44 m^2-3 s\right). 
\end{equation} Therefore, there is no profound meaning to the solution \eqref{eq.boundstuning}; it is simply a tuning that makes the bounds equivalent. Another consequence of the similarity of the bounds is that the study of the simplest Galileon from GP with parameters $\{a_1,b_3,b_1\}$ corresponds closely to the $\{\gamma_1',\gamma_2',\lambda_0'\}$ sector of PN for $a_2=\alpha_2'=0$. Indeed, keeping in mind the analogy \eqref{eq.param} and reporting these bounds from \eqref{eq.galbound} and \eqref{eq.PNBa20} for GP and PN respectively \begin{equation} \begin{aligned} \lambda_{V_-}\geq0 &\, : \, b_3\geq-a_1^2, \quad\qquad \gamma_2'\geq-\gamma_1'^2+\frac{1}{24}\\ \mu_{V_-}>0 &\, : \, b_1-b_3>3a_1^2, \quad \, \lambda_0'-\frac{3}{2}\gamma_2'>3\gamma_1'^2+\frac{3}{4}\gamma_1'-\frac{1}{32}, \end{aligned} \end{equation} we observe a similar structure. It can also be seen by comparing figure~\ref{fig.galbounds} with the left of figure~\ref{fig.PNgamma2}. Note that, even in this case, PN still has a full additional sector parametrized by $\alpha_3'$ that cannot be removed. Finally, we found in both models that the quartic interaction $A^4$ is bounded from below: it must be positive for GP ($b_1>0$), while for PN $\lambda_0'>-0.06$. Possibly, some more stringent bounds on PN would constrain $\lambda_0'$ to be strictly positive as well. \section{Conclusion} In this work, we have reviewed how EFT amplitudes for spinning particles are constrained by assuming a unitary, causal, Lorentz invariant UV completion, and how this motivates the use of the transversity formalism, due to its crossing symmetry properties, to derive positivity bounds away from the forward limit. The application of these bounds to the Generalized Proca and Proca-Nuevo models has allowed us to strongly restrict their parameter space. We have first studied them separately, exhibiting a set of ten inequalities that each of their coefficients had to satisfy. 
This set further reduced to five independent bounds for PN, confining its parameters to islands, which we displayed in several figures. In particular, this work furnishes the first positivity bounds analysis of the PN model. Finally, we have highlighted an analogy between certain coefficients of the two theories, which leads to similar structures in the bounds of both. Overall, we have found that PN is more constrained by the bounds than GP. This makes sense, as the PN coefficients are associated with multiple interactions, due to the non-linear realization of the Hessian constraint, and therefore appear repeatedly in the bounds. We emphasize that this analysis has been performed at tree level; however, the one-loop corrections considered in \cite{Zosso2021,derham2021quantum} were shown to arise at a higher scale. From a technical point of view, there would be various ways to tighten the bounds we derived in this work. First, we could use insights from fully triple crossing symmetric bounds as derived in \cite{Sinha:2020win,Haldar:2021rri,Raman:2021pkf,Chowdhury:2021ynh,Sinha:2022sdo,tolley2021new,Caron-Huot2021,Du_2021}; however, applying those bounds beyond the forward limit will require further generalizing the formalism. Second, the minimization of the indefinite polarization bound \eqref{eq.fabmain} could be made more precise, either using numerical methods \cite{Cheung2016} or via a more systematic analytical minimization \cite{Zhou2019,Zhou2021}. These bounds could also potentially be further tightened or complemented with the use of pure causality bounds, as illustrated in \cite{CarrilloGonzalez:2022fwg}. The application of these causality bounds could prove particularly useful when considering these EFTs on curved backgrounds, as would be relevant for cosmology and black hole constraints. In practice, the bounds we derived could be used to restrict the parameter space of dark energy models, in complement to observational data. 
This analysis could also be combined with other constraints on the parameters arising, for example, from the presence of a Vainshtein mechanism on spherically symmetric backgrounds, as studied in \cite{De_Felice_2016} for the Generalized Proca, or else from imposing subluminal propagation of gravitational waves \cite{Melville_2020,Noller2021}. Similarly, these bounds could be joined with restrictions coming from stability analyses of quantum corrections or Swampland conjectures. Finally, it may be interesting to further study the perturbative PN model \eqref{eq.LPNredef1} under the redefinition of parameters we have proposed in \eqref{eq.PNredef}. In particular, it eliminates a free parameter. Moreover, the tuning $\alpha_2'=0$, which does not obviously play a particular role at the Lagrangian level, seems to give an interesting realisation of the model, as we have notably shown in \eqref{eq.fabvPN} that it raises the cutoff. Even though GP and PN are fundamentally different theories, if one restricts their parameter space adequately they might share equivalent positivity bounds and suffer from similar quantum corrections, which might then indicate that they descend from the same more fundamental UV-complete theory. As was shown in \cite{Zosso2021,derham2021quantum}, both have the same high energy behaviour when quantum corrections are included. Loop-induced counterterms have the exact same structure and scaling. In this work we have further shown that, despite the fact that PN is more constrained by the positivity bounds than GP, similar structures arise in the bounds of both theories, which might signal that their spin-1 fields originate from the same UV theory, whose general properties give rise to the same structures. As a final note, it is worth emphasizing that the results presented here are relevant beyond models of dark energy. 
In constraining allowed massive spin-1 interactions, this framework opens the door to a better understanding of the allowed operators for vector bosons beyond the standard model and could be relevant for models of dark matter, particularly those involving a (massive) dark photon. \newpage \appendix \section{Set-up} \label{sec.kinematics} We consider 2-2 amplitudes of four identical particles of mass $m$ and spin $S$ with momenta $p_1$, $p_2$, $p_3$ and $p_4$. Working in the center of mass frame, considering the scattering in the $xz$-plane, and treating the particles as all incoming by flipping the sign of the outgoing particles' momenta, the momenta are given by \begin{equation}\label{eq.mom} p_i = (-1)^\eta(E,p\cos\theta_i,0,p\sin\theta_i)\,, \end{equation} with $\eta=0$ for the incoming momenta $p_1$ and $p_2$, $\eta=1$ for the outgoing momenta $p_3$ and $p_4$, and \begin{equation} \theta_1 = 0, \quad \theta_2 = \pi, \quad \theta_3 = \theta, \quad \theta_4 = \pi+\theta. \end{equation} We work with the Mandelstam variables $s=-(p_1+p_2)^2, t=-(p_1+p_3)^2$, and $u=-(p_1+p_4)^2$, with $s+t+u=4m^2$. The parameters of \eqref{eq.mom} can be expressed in terms of the Mandelstam variables as \begin{equation}\label{eq.eptheta} E = \frac{\sqrt{s}}{2}, \quad p = \frac{1}{2}\sqrt{s-4m^2}, \quad \cos\theta = 1 + \frac{2t}{s-4m^2}, \quad \sin\theta=2\frac{\sqrt{tu}}{(s-4m^2)}, \end{equation} such that the physical region corresponds to $s>4m^2$. Any two independent variables are sufficient to describe the kinematics; examples of such sets are $(p, \theta)$ or $(s,t)$. Also, note that the forward limit $t=0$ corresponds to $\cos\theta=1$, or equivalently $\theta=0$, hence the name forward. 
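The kinematic relations \eqref{eq.eptheta} are simple to verify numerically; a minimal sketch (our own variable names) checks the on-shell condition, $\sin^2\theta+\cos^2\theta=1$, and total momentum conservation in the all-incoming convention \eqref{eq.mom}:

```python
import math

def kinematics(s, t, m):
    """E, p, cos(theta), sin(theta) from the Mandelstam variables (eq.eptheta)."""
    u = 4 * m**2 - s - t                    # s + t + u = 4 m^2
    E = math.sqrt(s) / 2
    p = math.sqrt(s - 4 * m**2) / 2
    ct = 1 + 2 * t / (s - 4 * m**2)
    st = 2 * math.sqrt(t * u) / (s - 4 * m**2)
    return E, p, ct, st

def momenta(s, t, m):
    """All-incoming momenta of eq. (mom), with components (E, p cos, 0, p sin)."""
    E, p, ct, st = kinematics(s, t, m)
    angles = [(1.0, 0.0), (-1.0, 0.0), (ct, st), (-ct, -st)]  # theta = 0, pi, theta, pi+theta
    signs = [1, 1, -1, -1]                                    # outgoing momenta flipped
    return [(sg * E, sg * p * c, 0.0, sg * p * sn)
            for sg, (c, sn) in zip(signs, angles)]

m, s, t = 1.0, 5.0, -0.3                    # a physical point: s > 4 m^2, t < 0
E, p, ct, st = kinematics(s, t, m)
assert abs(E**2 - p**2 - m**2) < 1e-12      # on-shell
assert abs(ct**2 + st**2 - 1) < 1e-12       # physical scattering angle
total = [sum(q) for q in zip(*momenta(s, t, m))]
assert all(abs(c) < 1e-12 for c in total)   # total momentum conservation
```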
To compute amplitudes in transversity basis, we use the polarization vectors from \cite{de_Rham_2018}, given by \begin{equation} \begin{aligned} &\epsilon^\mu_{\tau=\pm1}(p_i)=\frac{i}{\sqrt{2}m}(p,E\sin\theta_i\pm im\cos\theta_i,0,E\cos\theta_i\mp im\sin\theta_i), \\ &\epsilon^\mu_{\tau=0}(p_i)=(0,0,1,0). \end{aligned} \end{equation} \section{Bounds Derivation}\label{app.bounds} First, we define a quantity that gets rid of the subtraction functions by taking $N_S$ derivatives of $\tilde\T^+$, defined in \eqref{eq.tplus}, as follows \begin{equation} f_{\tau_1\tau_2}(s,t)=\frac{1}{N_S!}\frac{\del^{N_S}}{\del s^{N_S}}\tilde\T^+_{\tau_1\tau_2}(s,t)\, . \end{equation} Using the dispersion relation \eqref{eq.disrel1}, or equivalently taking the derivatives on the contour integral \eqref{eq.contour} and deforming the contour afterwards, we can write \begin{equation} f_{\tau_1\tau_2}(s,t)=\frac{1}{\pi}\int_{\mu_b}^\infty d\mu \frac{ {\text{Abs}}_s \T^+_{\tau_1 \tau_2 }(\mu,t) }{ (\mu - s)^{N_S+1} } +\frac{1}{\pi}\int_{\mu_b}^\infty d\mu \frac{ {\text{Abs}}_u \T^+_{\tau_1 \tau_2 }(4m^2-t-\mu,t) }{ ( \mu - u)^{N_S+1} }\, . \end{equation} We know from \eqref{eq.posabs} that the numerators are positive for $0\leq t<m^2$. The denominators are also positive for $s$ between the branch cuts, such that \begin{equation} f_{\tau_1\tau_2}(s,t)>0 \quad \text{for } \; 4m^2-t-\mu_b<s<\mu_b, \; 0\leq t<m^2. \end{equation} Now, let us consider this quantity in terms of the crossing symmetric variable $v=s+t/2-2m^2$, \begin{equation}\label{eq.fvt} f_{\tau_1\tau_2}(v,t)=f_{\tau_1\tau_2}(s=v+2m^2-t/2,t)=\frac{1}{N_S!}\frac{\del^{N_S}}{\del s^{N_S}}\tilde\T^+_{\tau_1\tau_2}(s,t)\Big|_{s=v+2m^2-t/2}\, . 
\end{equation} Then, \begin{equation} f_{\tau_1\tau_2}(v,t)=\frac{1}{\pi}\int_{\mu_b}^\infty d\mu \frac{ {\text{Abs}}_s \T^+_{\tau_1 \tau_2 }(\mu,t) }{ (\mu - 2m^2+t/2-v)^{N_S+1} } + \frac{ {\text{Abs}}_u \T^+_{\tau_1 \tau_2 }(4m^2-t-\mu,t) }{ ( \mu -2m^2+t/2+v)^{N_S+1} }\, , \end{equation} and we still have \begin{equation} f_{\tau_1\tau_2}(v,t)>0 \quad \text{for} \; |v|<\mu_b-2m^2+t/2, \, 0\leq t<m^2. \end{equation} Furthermore, the $v$-derivatives are given by \begin{equation} \begin{aligned} \del_v^N f_{\tau_1\tau_2}(v,t)=\frac{(N_S+N)!}{N!}\frac{1}{\pi}\int_{\mu_b}^\infty d\mu &\frac{ {\text{Abs}}_s \T^+_{\tau_1 \tau_2 }(\mu,t) }{ (\mu - 2m^2+t/2-v)^{N_S+1+N} }\\ +(-1)^N &\frac{ {\text{Abs}}_u \T^+_{\tau_1 \tau_2 }(4m^2-t-\mu,t) }{ ( \mu -2m^2+t/2+v)^{N_S+1+N} }\, , \end{aligned} \end{equation} such that every even $v$-derivative is positive \begin{equation} \del_v^{2N} f_{\tau_1\tau_2}(v,t)>0 \quad \forall\, N\geq 0, \quad |v|<\mu_b-2m^2+t/2, \, 0\leq t<m^2. \end{equation} Next, looking at the first $t$-derivative, \begin{equation} \begin{aligned} \del_t f_{\tau_1\tau_2}(v,t)=\,&\frac{1}{\pi}\int_{\mu_b}^\infty d\mu \frac{ \del_t{\text{Abs}}_s \T^+_{\tau_1 \tau_2 }(\mu,t) }{ (\mu - 2m^2+t/2-v)^{N_S+1} } + \frac{\del_t{\text{Abs}}_u \T^+_{\tau_1 \tau_2 }(4m^2-t-\mu,t) }{ ( \mu -2m^2+t/2+v)^{N_S+1} } \\ -\, \frac{N_S+1}{2}\,&\frac{1}{\pi}\int_{\mu_b}^\infty d\mu \frac{ {\text{Abs}}_s \T^+_{\tau_1 \tau_2 }(\mu,t) }{ (\mu - 2m^2+t/2-v)^{N_S+2} } + \frac{ {\text{Abs}}_u \T^+_{\tau_1 \tau_2 }(4m^2-t-\mu,t) }{ ( \mu -2m^2+t/2+v)^{N_S+2} }\, , \end{aligned} \end{equation} we know by \eqref{eq.posabs} that the first integral of the RHS is positive for $0\leq t<m^2$. The second line looks like $f_{\tau_1\tau_2}(v,t)$, but with an additional power of the denominator. 
Noting the integral inequality for any positive definite function $\rho(\mu)$ \begin{equation} \int_{\mu_b}^\infty d\mu \frac{ \rho(\mu) }{ (\mu - 2m^2+t/2\pm v)^{N_S+2} } < \frac{1}{\M^2} \int_{\mu_b}^\infty d\mu \frac{ \rho(\mu) }{ (\mu - 2m^2+t/2\pm v)^{N_S+1} }\, , \end{equation} for \begin{equation} \M^2 = {\rm Min}_{ \mu \geq \mu_b}(\mu - 2m^2+t/2\pm v)= \mu_b - 2m^2+t/2\pm v\, , \end{equation} we conclude that \begin{equation} \del_t f_{\tau_1\tau_2}(v,t)+\frac{N_S+1}{2\M^2}f_{\tau_1\tau_2}(v,t)>0 \, . \end{equation} Similar arguments can be used to obtain bounds on higher derivatives in $t$. Our bounds are valid, in general, for the ranges of parameters \begin{equation} |v|<2m^2+t/2, \; 0\leq t<m^2, \; \M^2=2m^2+t/2\pm v\, , \end{equation} and at tree level, with $\Lambda$ the EFT cutoff, for the ranges \begin{equation} |v|<\Lambda^2, \; 0\leq t<m^2, \; \M^2 = \Lambda^2 - 2m^2+t/2\pm v\approx\Lambda^2\, . \end{equation} \section{Linear Vector Bounds}\label{app.linbounds} In this part, we derive a set of 10 bounds from the analysis of section~\ref{sec.bounds}. First, we recall the definition of the main quantities we use here, \begin{equation} f_{\tau_1\tau_2}(v,t)=\frac{1}{N_S!}\frac{\del^{N_S}}{\del s^{N_S}}\tilde\T^+_{\tau_1\tau_2}(s=v+2m^2-t/2,t)\, , \end{equation} \begin{equation} f_{\alpha\beta}(v,t) = \sum_{\taus}\alpha_{\tau_1}\beta_{\tau_2}\alpha^*_{\tau_3}\beta^*_{\tau_4}f_\taus(v,t)\, , \end{equation} where $\T^+_\taus$ is the combination of spinning amplitudes in transversity basis defined in \eqref{eq.tplus}. This work focuses on spin-1 theories such that \begin{equation} N_S = 4S+2= 6. \end{equation} The bounds can be summarized as follows\footnote{Some higher $t$-derivative bounds could also be considered, but are not relevant here.}. 
\begin{align} &\del_v^{2N} f_{\tau_1\tau_2}(v,t)>0 \quad \forall\quad N\geq 0\, , \label{eq.sumbound1} \\ &\del_t f_{\tau_1\tau_2}(v,t)+\frac{N_S+1}{2\M^2}f_{\tau_1\tau_2}(v,t)>0\, , \label{eq.sumbound2} \\ &\del_v^{2N}f_{\alpha\beta}(0,0)>0 \quad \forall \quad N\geq 0\, , \label{eq.sumbound3} \end{align} where, at tree level, \begin{equation} |v|<\Lambda^2, \; 0\leq t<m^2, \; \M^2=\Lambda^2. \end{equation} Furthermore, the quantities we are going to find for the GP and PN models are linear in $t$ and $v$. They may be written in the schematic form \begin{equation}\label{eq.fdef} \begin{aligned} f_{\tau_1\tau_2}(v,t)&=f_{\tau_1\tau_2}(0,0)+ \del_t f_{\tau_1\tau_2}\cdot t\\ &\equiv \frac{1}{\Lambda_2^4}\left[ \mu_{\tau_1\tau_2}+\lambda_{\tau_1\tau_2}\frac{t}{m^2}\right], \end{aligned} \end{equation} and \begin{equation} f_{\alpha\beta}(v,t)=f_{\alpha\beta}(0,0)+\del_t f_{\alpha\beta}\cdot t +\del_v f_{\alpha\beta}\cdot v\, , \end{equation} where $f(0,0)$, $\del_t f$, and $\del_v f$ are linear combinations of the EFT parameters with no kinematics dependence left, where we introduce $\mu$ and $\lambda$ for the bounds that follow. These are statements coming from our explicit results rather than from any theoretical inputs\footnote{The term in $v$ may seem surprising. Indeed, if there is still an $s$ (hence $v$) dependence after taking $N_S$ derivatives, it means that the amplitude was not respecting the $s^2$ growth of the Froissart bound. That is alright, as we are working with an EFT, and only indicates that unitarity may be perturbatively broken. Our amplitudes in the definite transversity polarization basis do not have such a dependence left.}. Then, the only non-trivial bounds are the $N=0$ $v$-derivative bounds in \eqref{eq.sumbound1} and \eqref{eq.sumbound3}, and the first $t$-derivative bound \eqref{eq.sumbound2}. Moreover, the second term of the latter is suppressed by a factor of $m^2/\Lambda^2$, but is assured to be strictly positive by \eqref{eq.sumbound1}. 
Therefore, neglecting this term, we obtain a non-strict inequality for $\del_t f_{\tau_1\tau_2}$, later denoted by $\lambda_{\tau_1\tau_2}$. Explicitly, the remaining bounds are \begin{equation}\label{eq.b1} f_{\tau_1\tau_2}(v,t) =f_{\tau_1\tau_2}(0,0)+ \del_t f_{\tau_1\tau_2}\cdot t >0\, , \quad 0\leq t<m^2, \end{equation} and \begin{equation}\label{eq.b23} \begin{aligned} &\del_t f_{\tau_1\tau_2}(v,t)\geq 0\, , \\ &f_{\alpha\beta}(0,0)>0\, . \end{aligned} \end{equation} Note that \eqref{eq.b1} is included in these two last bounds (as the definite polarizations are just particular cases of indefinite ones, and $t\geq 0$). Hence, we can focus on the bounds in \eqref{eq.b23}, which have no dependence on $t$ or $v$. There are four independent quantities for the definite elastic polarizations \begin{equation} \begin{aligned} &f_{SS}\equiv f_{00}\, ,\\ &f_{SV}\equiv f_{0+1}= f_{0-1}=f_{+10}= f_{-10}\, ,\\ &f_{V_+}\equiv f_{+1+1}=f_{-1-1} \, , \\ &f_{V_-}\equiv f_{+1-1}=f_{-1+1} \, , \end{aligned} \end{equation} where we use $S$ and $V$ to denote 0- and 1-transversity modes respectively. This follows the convention used by \cite{Cheung2016}\footnote{Note that they were working in helicity basis, such that $S$ and $V$ respectively denote 0- and 1-helicity modes in that paper.}. In the following, $\mu_{SS}=f_{SS}(0,0)$, $\lambda_{SS}=\del_t f_{SS}$ and similarly for the other polarizations. Thus, the $t$-derivative bound corresponds to four distinct bounds \begin{equation} \lambda_{SS}\geq 0\, , \quad \lambda_{SV}\geq 0\, , \quad \lambda_{V_+}\geq 0\, , \quad \lambda_{V_-}\geq 0\, . \end{equation} To study the indefinite polarization bounds we introduce the quantities $\alpha_{\pm1,0}$, $\beta_{\pm1,0}$, which designate the projections along the definite polarization vectors in the transversity basis. 
Also, \begin{equation} \alpha_\pm \equiv \frac{1}{\sqrt{2}}(\alpha_{-1}\pm\alpha_{+1})\, ,\quad \beta_\pm \equiv \frac{1}{\sqrt{2}}(\beta_{-1}\pm\beta_{+1})\, , \end{equation} where we have the normalization condition \begin{equation} |\alpha|^2= |\alpha_-|^2+|\alpha_0|^2+|\alpha_+|^2=1\, , \end{equation} and similarly for $\beta$. Then, we explicitly find that the indefinite polarization $f_{\alpha\beta}(0,0)$ can be written in the following form for both GP and PN \begin{equation}\label{eq.fab} \begin{aligned} f_{\alpha\beta}(0,0)=\frac{1}{\Lambda_2^4}\Big[&\tilde\mu_1|\alpha_+|^2|\beta_+|^2 \\ +&\tilde\mu_2\big[|\alpha_+|^2(1-|\beta_+|^2)+|\beta_+|^2(1-|\alpha_+|^2)\big] \\ +&\tilde\mu_3\big[|\alpha_-|^2|\beta_0|^2+|\alpha_0|^2|\beta_-|^2\big] \\ +&\tilde\mu_4\big[|\alpha_-|^2|\beta_-|^2+|\alpha_0|^2|\beta_0|^2\big]\\ +&2(\tilde\mu_3-\tilde\mu_4)\Re(\alpha_-\alpha_0^*)\Re(\beta_-\beta_0^*)\\ +&\tilde\mu_5\big[\Re(\alpha_-\alpha_+^*)\Re(\beta_-\beta_+^*)-\Re(\alpha_0\alpha_+^*)\Re(\beta_0\beta_+^*)\big]\Big]>0\, , \end{aligned} \end{equation} where the $\tilde\mu$'s are combinations of the EFT coefficients\footnote{This expression for $f_{\alpha\beta}$ is equivalent to (4.4) of \cite{de_Rham_2019}, where we redefined their coefficients in the following way $\tilde\mu_1=\mu_1+2\mu_2$, $\tilde\mu_2=\mu_2$, $\tilde\mu_3=\mu_4$, $\tilde\mu_4=\mu_5$ and $\tilde\mu_5=\mu_3$.}. The quantity $f_{\alpha\beta}$ has to be positive for any polarization. In particular, we focus on some specific ones to deduce some bounds. First, picking $|\alpha_+|^2=|\beta_+|^2=1$ implies, by the normalization condition, that only the first line is non-zero, such that \eqref{eq.fab} reduces to $\tilde{\mu_1}>0$. Similarly, picking $|\alpha_+|^2=1$, $|\beta_+|^2=0$ implies that $\tilde{\mu_2}>0$, $|\alpha_-|^2=|\beta_0|^2=1$ (or $|\alpha_0|^2=|\beta_-|^2=1$) implies that $\tilde{\mu_3}>0$, and $|\alpha_-|^2=|\beta_-|^2=1$ (or $|\alpha_0|^2=|\beta_0|^2=1$) implies that $\tilde{\mu_4}>0$. 
In summary, we obtain the following bounds \begin{equation}\label{mubounds} \tilde\mu_1>0\, , \quad \tilde\mu_2>0\, , \quad \tilde\mu_3>0\, , \quad \tilde\mu_4>0\, . \end{equation} A condition on $\tilde\mu_5$ is less straightforward to obtain. In order to obtain the most stringent bound one could minimize $f_{\alpha\beta}$, e.g. numerically \cite{Cheung2016,Wang2021}. Whereas for spin-2 theories only a numerical approach is available, for spin-1 theories an analytic minimization is conceivable. It has been worked out for the gauge bosons in the Standard Model EFT in \cite{Zhou2019,Zhou2021}. It would be interesting to see how to apply their analytical procedure to our models. Here, we want to keep a simple analytical approach and aim to obtain only sufficient conditions on $\tilde\mu_5$\footnote{A similar philosophy is adopted in \cite{de_Rham_2019}, where they derive a slightly stronger sufficient condition than ours in their Appendix C. We rather use ours, which is given by the bound on definite polarizations, as it has a more straightforward physical meaning.}. We establish these conditions by looking at the bounds for the definite polarizations $f_{\tau_1\tau_2}(0,0)>0$. Indeed, by considering the $\mathcal{O}(\alpha^2_{\tau_1}\beta^2_{\tau_2})$ terms in $f_{\alpha\beta}$, we see that \begin{equation}\label{eq.defindef} \begin{aligned} &\mu_{SS}=\tilde\mu_4>0\, , \\ &\mu_{SV}=\frac{1}{2}[\tilde\mu_2+\tilde\mu_3]>0\, , \\ &\mu_{V_+}=\frac{1}{4}[\tilde\mu_1+2\tilde\mu_2+\tilde\mu_4+\tilde\mu_5]>0\, , \\ &\mu_{V_-}=\frac{1}{4}[\tilde\mu_1+2\tilde\mu_2+\tilde\mu_4-\tilde\mu_5]>0\, . \end{aligned} \end{equation} The bounds we use for $\tilde\mu_5$ are then the following \begin{equation} |\tilde\mu_5|< \tilde\mu_1+2\tilde\mu_2+\tilde\mu_4 \quad\Longleftrightarrow\quad \mu_{V\pm}>0\, . \end{equation} As already mentioned, stricter bounds could be found, in particular using numerical methods.
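The numerical minimization mentioned above can be sketched as follows. We restrict to real polarization vectors for simplicity (the general problem allows complex $\alpha$, $\beta$), and the $\tilde\mu$ values passed in are purely illustrative, not derived from any specific GP or PN model:

```python
import numpy as np
from scipy.optimize import minimize

def f_ab(a, b, mu):
    """Eq. (fab) restricted to real polarization vectors a = (a_-, a_0, a_+),
    b = (b_-, b_0, b_+), dropping the overall 1/Lambda_2^4 prefactor."""
    m1, m2, m3, m4, m5 = mu
    am, a0, ap = a
    bm, b0, bp = b
    return (m1 * ap**2 * bp**2
            + m2 * (ap**2 * (1 - bp**2) + bp**2 * (1 - ap**2))
            + m3 * (am**2 * b0**2 + a0**2 * bm**2)
            + m4 * (am**2 * bm**2 + a0**2 * b0**2)
            + 2 * (m3 - m4) * (am * a0) * (bm * b0)
            + m5 * ((am * ap) * (bm * bp) - (a0 * ap) * (b0 * bp)))

def min_f(mu, trials=200, seed=1):
    """Minimize f_ab over unit vectors a, b by multi-start local optimization."""
    rng = np.random.default_rng(seed)
    def objective(x):
        return f_ab(x[:3] / np.linalg.norm(x[:3]),
                    x[3:] / np.linalg.norm(x[3:]), mu)
    vals = [minimize(objective, rng.normal(size=6)).fun for _ in range(trials)]
    return min(v for v in vals if np.isfinite(v))

# With mu_tilde = (1, 1, 1, 1, 0), f_ab is identically 1 on unit vectors,
# while mu5 = -5 violates mu_{V+} > 0 and the minimum becomes negative.
```

Evaluating `f_ab` at the definite $V_+$ polarization, $a = b = (-1/\sqrt{2},\,0,\,1/\sqrt{2})$, reproduces $\mu_{V_+} = [\tilde\mu_1+2\tilde\mu_2+\tilde\mu_4+\tilde\mu_5]/4$, which is how the definite-polarization bounds above emerge from the indefinite expression.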
Furthermore, the bounds \eqref{mubounds} derived from the indefinite polarizations are consistent (as expected) with the definite-polarization bounds on $\mu_{SS}$ and $\mu_{SV}$, but they also provide additional information that would not be available from the definite-polarization analysis alone: namely, the bound on $\tilde\mu_1$, and the positivity of $\tilde\mu_2$ and $\tilde\mu_3$ not only as a sum, but also separately. In sum, we have derived a total of 10 bounds: \begin{equation}\label{eq.sumbounds} \begin{aligned} &\tilde\mu_1>0\, , \quad \tilde\mu_2>0 \, ,\quad \tilde\mu_3>0\, , \\ &\mu_{SS}>0\, ,\quad \mu_{V_+}>0\, , \quad \mu_{V_-}>0\, , \\ &\lambda_{SS}\geq0\, , \quad \lambda_{SV}\geq0\, , \quad \lambda_{V_+}\geq0\, , \quad \lambda_{V_-}\geq0\, , \end{aligned} \end{equation} where the $\mu$'s are bounds in the forward limit, the $\tilde\mu$'s come from indefinite polarizations, and the $\lambda$'s correspond to the $t$-derivative bounds, which are available thanks to the analysis beyond the forward limit. \acknowledgments LH is supported by funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement No 801781 and by the Swiss National Science Foundation grant 179740. CdR thanks the Royal Society for support at ICL through a Wolfson Research Merit Award. CdR is supported by the European Union's Horizon 2020 Research Council grant 724659 MassiveCosmo ERC--2016--COG, by a Simons Foundation award ID 555326 under the Simons Foundation's Origins of the Universe initiative, `\textit{Cosmology Beyond Einstein's Theory}' and by a Simons Investigator award 690508. CdR is also supported by STFC grants ST/P000762/1 and ST/T000791/1. \bibliographystyle{JHEP} \bibliography{biblio}
Title: Updated constraints from the effective field theory analysis of BOSS power spectrum on Early Dark Energy
Abstract: Analyses of the full shape of BOSS DR12 power spectrum using the one-loop prediction from the Effective Field Theory of Large-Scale Structure (EFTBOSS) have led to new constraints on extensions to the $\Lambda$CDM model, such as Early Dark Energy (EDE) which has been suggested as a resolution to the "Hubble tension". In this paper, we re-assess the constraining power of the EFTBOSS on EDE in light of a correction to the normalization of BOSS window functions. Overall we find that constraints from EFTBOSS on EDE are weakened, and represent a small change compared to constraints from Planck and the conventional BAO/$f\sigma_8$ measurements. The combination of Planck data with EFTBOSS provides a bound on the maximal fractional contribution of EDE $f_{\rm EDE}<0.083$ at 95% C.L. (compared to $<0.054$ with the incorrect normalization, and $<0.088$ without full-shape data) and the Hubble tension is reduced to $2.1\sigma$. However, the more extreme model favored by an analysis with just data from the Atacama Cosmology Telescope is disfavored by the EFTBOSS data. We also show that the updated Pantheon+ Type Ia supernova analysis can slightly increase the constraints on EDE. Yet, the inclusion of the SN1a magnitude calibration by SH0ES strongly increases the preference for EDE to above $5\sigma$, yielding $f_{\rm EDE}\sim 0.12^{+0.03}_{-0.02}$ around the redshift $z_c=4365^{+3000}_{-1100}$. Our results demonstrate that EFTBOSS data (alone or combined with Planck data) do not exclude the EDE resolution of the Hubble tension.
https://export.arxiv.org/pdf/2208.05930
\title{Updated constraints from the effective field theory analysis of BOSS power spectrum on Early Dark Energy} \author{Th\'eo Simon} \email{theo.simon@umontpellier.fr} \affiliation{Laboratoire Univers \& Particules de Montpellier (LUPM), CNRS \& Universit\'e de Montpellier (UMR-5299), Place Eug\`ene Bataillon, F-34095 Montpellier Cedex 05, France} \author{Pierre Zhang} \email{pierrexyz@protonmail.com} \affiliation{Department of Astronomy, School of Physical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, China \\ CAS Key Laboratory for Research in Galaxies and Cosmology, University of Science and Technology of China, Hefei, Anhui 230026, China \\ School of Astronomy and Space Science, \\ University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Vivian Poulin} \affiliation{Laboratoire Univers \& Particules de Montpellier (LUPM), CNRS \& Universit\'e de Montpellier (UMR-5299), Place Eug\`ene Bataillon, F-34095 Montpellier Cedex 05, France} \author{Tristan L.~Smith} \affiliation{Department of Physics and Astronomy, Swarthmore College, Swarthmore, PA 19081, USA} \section{\label{sec:Intro}Introduction} In recent years, several tensions between probes of the early and late Universe analyzed under $\Lambda$CDM have emerged. The ``Hubble tension'' refers to the inconsistency between local measurements of the current expansion rate of the Universe, i.e.~the Hubble constant $H_0$, and the value inferred from early-Universe data using the \lcdm\ model. This tension is predominantly driven by the discrepancy between the {\it Planck} collaboration's observation of the cosmic microwave background (CMB), which predicts a value in \lcdm\ of $H_0 = 67.27 \pm 0.60$ km/s/Mpc \cite{Planck:2018vyg}, and the value measured by the SH0ES collaboration using the Cepheid-calibrated cosmic distance ladder, whose latest measurement yields $H_0 = 73\pm1$ km/s/Mpc \cite{Riess:2021jrx,Riess:2022mme}.
Taken at face value, these observations alone result in a $\sim 5\sigma$ tension\footnote{A new calibration including cluster Cepheids and Gaia EDR3 parallaxes further increases the tension to $5.3\sigma$ \cite{Riess:2022mme}.}. Experimental efforts are underway to establish whether this discrepancy can be caused by as-yet unknown systematic effects (appearing in either the early- or late-Universe measurements, or both). It appears that various attempts to alter the modeling of dust extinction are not successful in altering the Hubble constant \cite{Mortsell:2021nzg,Mortsell:2021tcx,Follin:2017ljs}, nor is there support for a significant impact from different populations of SN Ia at low and high redshift \cite{Rigault:2014kaa,NearbySupernovaFactory:2018qkd,Jones:2018vbn,Brout:2020msh}. In fact, the SH0ES team recently provided a comprehensive measurement of the $H_0$ parameter to 1.3\% precision, addressing these potential systematic errors, and concluded that there is ``{\em no indication that the discrepancy arises from measurement uncertainties or [over 70] analysis variations considered to date}'' \cite{Riess:2021jrx}. On the side of the CMB, it has been noted that \Planck{}~data carry a number of anomalies of low statistical significance that may play a role in this tension \cite{Addison:2015wyg,Planck:2016tof,Planck:2018vyg,DiValentino:2019qzk}. Nevertheless, the appearance of this discrepancy across an array of probes\footnote{For a very short summary of alternative methods, let us mention that, on the one hand, there exists a variety of different techniques for calibrating $\Lambda$CDM at high redshifts and subsequently inferring the value of $H_0$, which do not involve {\em Planck} data.
For instance, one can use alternative CMB datasets such as WMAP, ACT, or SPT, or even remove observations of the CMB altogether and combine measurements of BBN with data from BAO \cite{2019JCAP...10..029S,2018ApJ...853..119A}, resulting in $H_0$ values in good agreement with {\em Planck}. On the other hand, alternative methods for measuring the local expansion rate have been proposed in the literature, in an attempt at removing any bias introduced from Cepheid and/or SNIa observations. The Chicago-Carnegie Hubble program (CCHP), which calibrates SNIa using the tip of the red giant branch (TRGB), obtained a value of $H_0=69.8 \pm 0.6~\mathrm{(stat)} \pm 1.6~\mathrm{(sys)}$ km/s/Mpc \cite{Freedman:2019jwv,Freedman:2021ahq}, in between the {\em Planck} CMB prediction and the SH0ES calibration measurement, and a re-analysis of the CCHP data by Anand et al. yields $H_0=71.5 \pm1.9$km/s/Mpc \cite{Anand:2021sum}. The SH0ES team, using the parallax measurement of $\omega-$Centauri from GAIA DR3 to calibrate the TRGB, obtained $H_0=72.1 \pm2.0$km/s/Mpc~\cite{Yuan:2019npk,Soltis:2020gpl}. Additional methods intended to calibrate SNIa at large distances include: surface brightness fluctuations of galaxies \cite{Khetan:2020hmh}, MIRAS \cite{Huang:2019yhh}, or the Baryonic Tully Fisher relation \cite{Schombert:2020pxm}. There also exists a variety of observations which do not rely on observations of SNIa -- these include e.g. time-delay of strongly lensed quasars \cite{Wong:2019kwg,Birrer:2020tax}, maser distances \cite{Pesce:2020xfe}, or gravitational waves as ``standard sirens'' \cite{Abbott:2019yzh}.} (although not always with strong statistical significance) suggests that a single systematic effect may not be sufficient to resolve it. For recent reviews on the topic, we refer the reader to Refs.~\cite{DiValentino:2021izs,Abdalla:2022yfr}. 
Additionally, within $\Lambda$CDM, the parameter $S_8\equiv \sigma_8(\Omega_m/0.3)^{0.5}$, where $\sigma_8$ is the root-mean-square of matter fluctuations on a $8 h^{-1}$Mpc scale and $\Omega_m$ the (fractional) matter density today, inferred from the CMB is about $2-3\sigma$ larger than that deduced from weak lensing surveys such as CFHTLenS \cite{Heymans:2012gg}, KiDS-1000 \cite{Heymans:2020gsg} and DESY3 \cite{DES:2021wwk}, as well as from {\em Planck} SZ cluster abundances \cite{Planck:2018vyg, Planck:2015lwi} and SPT \cite{SPT:2018njh}. Additionally, the measurements of $S_8$ on large scales with galaxy clustering from BOSS full-shape data also indicate a value on the low side, although not at a significant level owing to large error bars ($\sim 2\sigma$) \cite{Zhang:2021yna,Philcox:2021kcw}.~\footnote{Note however that these $S_8$ measurements might be affected by prior volume effects, as shown and quantified in~\cite{DAmico:2022osl}. Once those are accounted for, BOSS full-shape results and \Planck{} are brought into good agreement (see also~\cite{Amon:2022azi}).} It is yet to be understood whether the $S_8$ tension is due to systematic effects \cite{Amon:2022ycy}, non-linear modelling including the effect of baryons at very small scales \cite{Amon:2022azi}, or physics beyond $\Lambda$CDM. Along with experimental developments to confirm the Hubble and $S_8$ tensions, a lot of effort has gone into explaining these discrepancies with some new physical mechanism, often in the form of extensions to the $\Lambda$CDM model that may be connected to the (still unknown) nature of dark matter or dark energy.
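As a concrete illustration of the definition above, $S_8$ is a simple combination of $\sigma_8$ and $\Omega_m$; the numerical inputs below are approximate \textit{Planck}-2018 values used purely for illustration:

```python
# S8 combines the amplitude of matter fluctuations with the matter density,
# S8 = sigma8 * (Omega_m / 0.3)^0.5.
def S8(sigma8, Om):
    return sigma8 * (Om / 0.3)**0.5

# Approximate Planck-2018 LCDM best-fit values (illustrative):
planck_S8 = S8(0.811, 0.315)   # ~0.83, on the high side of weak-lensing values
```

The $2-3\sigma$ discrepancy discussed in the text is between values of this order and the lower $S_8$ preferred by lensing surveys.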
It has been argued that the most promising category of solutions to resolve the $H_0$ tension involves physics in the pre-recombination era leading to a decrease of the sound horizon at recombination \cite{Bernal:2016gxb,Aylor:2018drw,Knox:2019rjx,Camarena:2021jlr,Efstathiou:2021ocp,Schoneberg:2021qvd}, such as models involving dark radiation and/or new neutrino properties \cite{Kreisch:2019yzn,Berbig:2020wve,Ghosh:2019tab,Forastieri:2019cuf,Escudero:2019gvw,Escudero:2021rfi,Blinov:2020hmc,Ghosh:2021axu,Archidiacono:2022ich,Aloni:2021eaq, Schoneberg:2022grr}, early dark energy (EDE) \cite{Karwal:2016vyq,Poulin:2018cxd,Smith:2019ihp,Niedermann:2019olb,Niedermann:2020dwg,Ye:2020btb}, modified gravity \cite{Renk:2017rzu,Umilta:2015cta,Ballardini:2016cvy,Rossi:2019lgt,Braglia:2020iik,Zumalacarregui:2020cjh, Abadi:2020hbr,Ballardini:2020iws,Braglia:2020bym,DiValentino:2015bja,Bahamonde:2021gfp,Raveri:2019mxg,Yan:2019gbw,Frusciante:2019puu,SolaPeracaula:2019zsl,SolaPeracaula:2020vpg,Ballesteros:2020sik,Braglia:2020auw,Desmond:2019ygn,Lin:2018nxe} or exotic recombination \cite{Chiang:2018xpn,Hart:2019dxi,Sekiguchi:2020teg,Jedamzik:2020krr,Cyr-Racine:2021alc} (for reviews, see Refs.~\cite{DiValentino:2021izs,Schoneberg:2021qvd}). Interestingly, these models tend to leave signatures in the matter power spectrum on large scales that can be probed by large-scale structure surveys such as SDSS/BOSS~\cite{BOSS:2016wmc}.
In fact, developments of the one-loop prediction of the galaxy power spectrum in redshift space from the Effective Field Theory of Large-Scale Structures\footnote{See also the introduction footnote in e.g.~\cite{DAmico:2022osl} for relevant related works on the EFTofLSS.}~\cite{Baumann:2010tm,Carrasco:2012cv,Senatore:2014via,Senatore:2014eva,Senatore:2014vja,Perko:2016puo} have made possible the determination of the $\Lambda$CDM parameters from the full-shape analysis of SDSS/BOSS data~\cite{BOSS:2016wmc} at a precision higher than that of conventional BAO and redshift-space distortion analyses (which measure the product $f\sigma_8$, where $f$ is the growth rate), and even comparable to that of CMB experiments. This provides an important consistency test for the $\Lambda$CDM model, while allowing one to derive competitive constraints on models beyond $\Lambda$CDM (see e.g. Refs.~\cite{DAmico:2019fhj,Ivanov:2019pdj,Colas:2019ret,DAmico:2020kxu,DAmico:2020tty,Simon:2022ftd,Chen:2021wdi,Zhang:2021yna,Philcox:2021kcw,Kumar:2022vee,Nunes:2022bhn,Lague:2021frh}). A thorough study of the consistency of EFTBOSS analyses within the $\Lambda$CDM model is presented in a companion paper \cite{Paper1}. In this paper, we re-assess the constraints on EDE from the full shape of the most recent measurements of the power spectrum (or correlation function) of BOSS in light of a correction to the normalization of BOSS window functions (presented in App.~\ref{app:normalization}). EDE has been shown to reduce the Hubble tension to the $\sim1.5\sigma$ level, with an energy density representing at most a fraction $\fedezc\sim 12\%$ at the critical redshift $z_c\sim3500$ after which the field starts to dilute away \cite{Karwal:2016vyq,Poulin:2018cxd,Smith:2019ihp,Schoneberg:2021qvd}. There exists a variety of other EDE models which can similarly reduce the tension to the $1.5-2.5\sigma$ level \cite{Lin:2019qug,Niedermann:2019olb,Berghaus:2019cls,Ye:2020btb,Karwal:2021vpk}.
Recently, several groups have reported ``hints'' of EDE within ACT data at the $\sim3\sigma$ level, alone or in combination with WMAP (or equivalently \Planck{} temperature data restricted to $\ell < 650$) and \Planck{} polarization data \cite{Hill:2021yec,Poulin:2021bjr}, as well as with SPT-3G data \cite{LaPosta:2021pgm,Smith:2022hwi}. However, it has also been pointed out that EDE leaves an imprint in the matter power spectrum that can be constrained thanks to the EFTofLSS applied to BOSS data, or through measurements of the parameter $S_8$. Typically, in the EDE cosmology that resolves the Hubble tension, the amplitude of fluctuations $\sigma_8$ is slightly larger due to the increases in $\omega_{\rm cdm}$ and $n_s$ that are necessary to counteract some of the effects of the EDE on the CMB power spectra \cite{Poulin:2018cxd,Hill:2020osr,Vagnozzi:2021gjh}. As a result, the ``$S_8$ tension'' tends to increase by $\sim 0.5\sigma$ in the EDE cosmology, and LSS measurements may put pressure on the EDE model \cite{Hill:2020osr}. Additionally, it has been argued that the full-shape analysis of the galaxy power spectrum of BOSS disfavors the EDE model as an efficient resolution of the $H_0$ tension~\cite{Ivanov:2020ril,DAmico:2020ods}. Indeed, in order to adjust the BAO data seen either in 3D or 2D at different comoving distances in a galaxy clustering survey (typically at $z\sim0.1-1$), the EDE cosmology requires an increase in $\omega_{\rm cdm}$\footnote{A similar increase is required to keep the CMB peak heights fixed \cite{Poulin:2018cxd}, in particular through the ISW effect \cite{Vagnozzi:2021gjh}.} \cite{Poulin:2018cxd,Jedamzik:2020krr}, which can affect the fit to the full shape~\cite{Hill:2020osr,DAmico:2020ods,Ivanov:2020ril}. Thus, galaxy clustering data can provide a way to break the degeneracy introduced by EDE, in particular due to the constraints it provides on $\omega_{\rm cdm}$ and $\sigma_8$.
Although these effects are certainly relevant in constraining EDE, the original interpretation of the additional constraining power suggested in Refs.~\cite{DAmico:2020ods,Ivanov:2020ril} was disputed in Refs.~\cite{Smith:2020rxx,Murgia:2020ryi}. There, it was argued that the apparent constraining power from the BOSS full-shape analysis may be artificially amplified by (i) the impact of the prior volume favoring $\Lambda$CDM in the Bayesian context (later verified with a profile likelihood approach\footnote{For further discussion about the mitigation of projection and prior volume effects, see Ref.~\cite{Gomez-Valent:2022hkb}.} \cite{Herold:2021ksg,Reeves:2022aoi}); (ii) a potential $\sim 20\%$ mismatch in the overall amplitude (typically parameterized by the primordial power spectrum amplitude $A_s$) between BOSS and \Planck, rather than additional constraints on $\omega_{\rm cdm}$. In App.~\ref{app:normalization}, we explore the impact of the correction to the normalization of the BOSS data window function within $\Lambda$CDM, and show that it leads to a $1\sigma$ shift upwards in the value of $A_s$, now in better agreement with {\it Planck}.~\footnote{Note that in our companion paper~\cite{Paper1}, we argue that the remaining difference in the amplitude might be explained by projection effects from the prior volume associated with the marginalization of the EFT parameters.} Given that previous analyses, e.g. Refs.~\cite{DAmico:2020ods,Ivanov:2020ril}, have used measurements in which the power spectrum and the window function were inconsistently normalized (as already acknowledged in Ref.~\cite{Philcox:2022sgj} for their previous analyses), the constraints on EDE are expected to change with these corrected BOSS measurements.
While Refs.~\cite{DAmico:2020ods,Ivanov:2020ril} concluded that the BOSS data, combined with {\it Planck} data, disfavored the EDE model as a potential candidate to solve the $H_0$ tension, we find here that the conclusions reached strongly depend on the normalization of the window functions used in the BOSS measurements. Our paper is structured as follows: In Sec.~\ref{sec:modelanddata}, we review the EDE model and data considered in this work. In particular, we detail the possible choices of BOSS measurements and EFT likelihoods. In Sec.~\ref{sec:ede_eftoflss}, we assess the constraining power of corrected BOSS data alone on the EDE resolution to the Hubble tension and discuss differences between the constraints derived from the various BOSS data and EFT likelihoods. In Sec.~\ref{sec:EFTCMB}, we derive constraints on EDE from the EFTBOSS data combined with either \Planck{} data (with and without SH0ES) or ACT data. We also show the impact of the new Pantheon+ SN1a catalogue \cite{Brout:2022vxf} on the constraints on EDE. We finally present our conclusions in Sec.~\ref{sec:conclusions}. App.~\ref{app:normalization} presents details on how to consistently normalize the window function with the power spectrum measurements. App.~\ref{app:classpt_vs_pybird_EDE} provides additional comparisons between EFTofLSS likelihoods within the EDE model. Finally, App.~\ref{app:chi2} lists additional relevant information about $\chi^2$ statistics. \section{Early Dark Energy Model and Data} \label{sec:modelanddata} \subsection{Brief review of the model} The EDE model corresponds to an extension of the $\Lambda$CDM model, in which the existence of an additional sub-dominant oscillating scalar field $\phi$ is considered.
The EDE field dynamics is described by the Klein-Gordon equation of motion (at the homogeneous level): \begin{equation} \ddot{\phi} + 3 H \dot{\phi} + V_{n,\phi}(\phi) = 0\,, \end{equation} where $V_n(\phi)$ is a modified axion-like potential defined as \begin{equation} V_n(\phi) = m^2f^2\left[ 1- \cos(\phi/f) \right]^n. \end{equation} $f$ and $m$ correspond to the decay constant and the effective mass of the scalar field, respectively, while the parameter $n$ controls the rate of dilution after the field becomes dynamical. In the following, we will use the re-defined field quantity $\Theta = \phi/f$ for convenience, such that $-\pi \le \Theta \le +\pi$. At early times, when $H\gg m$, the scalar field $\phi$ is frozen at its initial value since the Hubble friction prevails, which implies that the EDE behaves like a form of dark energy and that its contribution to the total energy density increases relative to the other components. When the Hubble parameter drops below a critical value ($H \sim m$), the field starts evolving towards the minimum of the potential and becomes dynamical. The EDE contribution to the total budget of the Universe is maximal around a critical redshift $z_c$, after which the energy density starts to dilute with an approximate equation of state $w_{\phi} = P_{\phi}/\rho_{\phi}$~\cite{1983PhRvD..28.1243T,Poulin:2018dzj}: \begin{equation} w_{\phi} = \left\{ \begin{array}{ll} -1 \ &\text{if} \ z>z_c, \\ \frac{n-1}{n+1} \ &\text{if} \ z<z_c. \end{array} \right. \end{equation} In the following we will fix $n=3$, as it was found that the data are relatively insensitive to this parameter provided $2\lesssim n \lesssim 5$ \cite{Smith:2019ihp}. Instead of the theory parameters $f$ and $m$, we make use of $f_{\rm EDE}(z_c)$ and $z_c$, determined through a shooting method \cite{Smith:2019ihp}.
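The background dynamics described above (a frozen field while $H\gg m$, followed by dilution once $H$ drops to $\sim m$) can be illustrated by integrating the Klein-Gordon equation on a toy matter-dominated background. All parameter values below are arbitrary code units chosen for illustration, not a fit to data:

```python
import numpy as np
from scipy.integrate import solve_ivp

n, f, m = 3, 1.0, 1.0         # illustrative values in arbitrary code units

def V(theta):                 # V_n(phi) with theta = phi / f
    return m**2 * f**2 * (1.0 - np.cos(theta))**n

def dV_dphi(theta):           # dV_n/dphi = n m^2 f (1 - cos)^{n-1} sin(theta)
    return n * m**2 * f * (1.0 - np.cos(theta))**(n - 1) * np.sin(theta)

def rhs(t, y):
    phi, dphi = y
    H = 2.0 / (3.0 * t)       # toy matter-dominated background: H >> m early on
    return [dphi, -3.0 * H * dphi - dV_dphi(phi / f)]

theta_i = 2.0                 # initial misalignment angle Theta_i
sol = solve_ivp(rhs, (1e-4, 200.0), [theta_i * f, 0.0],
                rtol=1e-8, atol=1e-10, dense_output=True)

def rho(t):                   # EDE energy density: kinetic + potential
    phi, dphi = sol.sol(t)
    return 0.5 * dphi**2 + V(phi / f)

# The field is frozen while H >> m; once H ~ m it starts oscillating and
# its energy density dilutes away, as in the w_phi = (n-1)/(n+1) regime.
```

This toy integration reproduces the qualitative behaviour in the text: the energy density is constant at early times and drops by orders of magnitude after the field becomes dynamical.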
We also include the initial field value $\Theta_i$ as a free parameter, whose main role once $f_{\rm EDE}(z_c)$ and $z_c$ are fixed is to set the dynamics of perturbations around $z_c$, through the EDE sound speed $c_s^2$. The EDE field provides a small contribution to the expansion rate $H(z)$ around $z_c$ (we will focus on $z_c\sim 10^3-10^4$ in the context of the Hubble tension), which causes a modification of the sound horizon at recombination: \begin{equation} r_{\rm s}(z_{\rm rec}) = \int^{+\infty}_{z_{\rm rec}} \frac{c_s(z')}{H(z')}dz', \end{equation} where $c_s$ corresponds to the sound speed of the photon-baryon fluid acoustic waves. The sound horizon is observationally determined through the angular acoustic scale at recombination $\theta_s$, defined as: \begin{equation} \theta_s = \frac{r_s(z_{\rm rec})}{D_A(z_{\rm rec})} \ , \end{equation} where $D_A(z_{\rm rec}) = \int_0^{z_{\rm rec}} dz'/H(z')\propto 1/H_0$ is the comoving angular diameter distance. Given that $\theta_s$ is determined from \Planck{} CMB power spectra with very high accuracy, the change in the sound horizon must be compensated by a readjustment of the angular diameter distance in order to keep the angular acoustic scale constant. This readjustment is automatically achieved by increasing $H_0$ (with an additional shift in $\omega_{\rm cdm}$ and $n_s$ to compensate the effect of EDE on the growth of perturbations), which can, by design, bring the CMB measurements and the late-time estimate of the Hubble constant from the SH0ES collaboration into agreement. In this paper, we address the question of whether the current full shape of galaxy-clustering data, analyzed using the EFTofLSS, can accommodate EDE.
Indeed, on the one hand, the sound horizon at the baryon-drag epoch, $r_s(z_{\rm drag})$, is measured through another angular acoustic scale in galaxy surveys: \begin{equation} \theta_g = \frac{r_s(z_{\rm drag})}{D_V(z_{\rm eff})} \ , \end{equation} where $z_{\rm eff}$ is the effective redshift of the survey, and $D_V(z) = (D_A^2(z) \frac{c\cdot z}{H(z)})^{1/3}$ is a volume average of the comoving distances in the directions parallel and perpendicular to the line-of-sight, with $c$ the speed of light. The angle $\theta_g$ typically summarizes the information from the BAO, and measuring it with high precision has the potential to break the degeneracy between $r_s(z_{\rm drag})$ and $H_0$ introduced by the EDE. In practice, BAO from BOSS were shown to be well fit in combination with \Planck{} and SH0ES when allowing for EDE~\cite{Poulin:2018cxd}, at the cost of a larger $\omega_{\rm cdm}$ \cite{Jedamzik:2020zmd}, which can simultaneously allow the CMB peak heights to be kept fixed \cite{Poulin:2018cxd} through the ISW effect \cite{Vagnozzi:2021gjh}. However, the full shape of the galaxy power spectrum also contains additional information. For example, the amplitude of the small-scale galaxy power spectrum at $k>k_{\rm eq}$, where $k_{\rm eq}$ is the wavenumber entering the horizon at matter/radiation equality, contains information about $\omega_m$, $h$ and the spectral tilt $n_s$ \cite{DAmico:2019fhj,Colas:2019ret}. As the values of $\omega_{\rm cdm}$ and $n_s$ are uplifted to compensate the growth of perturbations in the presence of EDE, the full shape of the galaxy power spectrum (with $\omega_b$ fixed by CMB or a BBN prior) is also modified in that respect. In the following, we quantify whether these modifications from the EDE as a resolution of the $H_0$ tension are consistent with current cosmological data, including the full-shape galaxy power spectrum from BOSS modeled with the EFT.
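The geometric quantities entering $\theta_g$ can be evaluated with a short numerical sketch. The flat-$\Lambda$CDM parameters and the value of $r_s(z_{\rm drag})$ below are illustrative inputs only (in a full analysis they come from a Boltzmann code):

```python
import numpy as np
from scipy.integrate import quad

c = 299792.458                      # speed of light [km/s]

def H(z, H0=67.27, Om=0.315):       # flat LCDM expansion rate [km/s/Mpc]
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

def D_A(z):                         # comoving angular diameter distance [Mpc]
    return quad(lambda zp: c / H(zp), 0.0, z)[0]

def D_V(z):                         # volume-averaged distance entering theta_g
    return (D_A(z)**2 * c * z / H(z))**(1.0 / 3.0)

rs_drag = 147.0                     # r_s(z_drag) [Mpc], illustrative input
theta_g = rs_drag / D_V(0.57)       # BAO angular scale at CMASS z_eff = 0.57
```

With these inputs $D_V(0.57)$ is of order $2000$ Mpc, so $\theta_g$ is a few degrees in radians ($\sim 0.07$), which is the scale of the BAO feature the survey measures.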
\subsection{Data and method}\label{sec:data} We analyze the EDE model in light of recent cosmological observations through a series of Markov-Chain Monte Carlo (MCMC) analyses using the MontePython-v3\footnote{\url{https://github.com/brinckmann/montepython_public}} code \cite{Audren:2012wb,Brinckmann:2018cvx} interfaced with our modified\footnote{\url{https://github.com/PoulinV/AxiCLASS}} version of CLASS\footnote{\url{https://lesgourg.github.io/class_public/class.html}} \cite{Blas:2011rf}. In this paper, we carry out various analyses from a combination of the following datasets: \begin{itemize} \item {\bf PlanckTTTEEE:} The low-$l$ CMB TT, EE, and the high-$l$ TT, TE, EE data from \Planck{} 2018 \cite{Planck:2018vyg}. \item {\bf PlanckTT650TEEE:} Same dataset as \Planck{} TTTEEE, but in this case the TT power spectrum has a multipole range restricted to $l < 650$. \item {\bf Lens:} The CMB gravitational lensing potential reconstructed from \Planck{} 2018 temperature and polarization data \cite{Planck:2018lbu}. When used without high-$l$ TT, TE, EE data, we use the CMB-marginalized version of the likelihood.\footnote{We thank Oliver Philcox for his help with correcting a bug in the standard Plik implementation.} \item {\bf ACT:} The temperature and polarization angular power spectrum of the CMB from the Atacama Cosmology Telescope (ACT DR4) \cite{ACT:2020frw}. \item {\bf BBN:} The BBN measurement of $\omega_b$ \cite{Schoneberg_2019} that uses the theoretical prediction of \cite{Consiglio_2018}, the experimental Deuterium fraction of \cite{Cooke_2018} and the experimental Helium fraction of \cite{Aver_2015}. \item {\bf BAO:} The measurements of the BAO from the CMASS and LOWZ galaxy samples of BOSS DR12 at $z = 0.38$, 0.51, and 0.61 \cite{BOSS:2016wmc}, which we refer to as ``BOSS BAO DR12''. The BAO measurements from 6dFGS at $z = 0.106$ and SDSS DR7 at $z = 0.15$ \cite{Beutler:2011hx,Ross:2014qpa}, which we refer to as ``BOSS BAO low-$z$''.
\item {\bf BOSS $\bm{f\sigma_8}$ DR12:} We also sometimes include the redshift space distortions at $z = 0.38$, 0.51, and 0.61, which we refer to as $f\sigma_8$ \cite{BOSS:2016wmc}, taking into account the cross-correlation with BAO measurements. \item {\bf EFTBOSS:} The full-shape analysis of the BOSS power spectrum from the EFTofLSS, namely $P_\textsc{fkp}^\textsc{lz/cm}$ \cite{Zhang:2021yna}, cross-correlated with reconstructed BAO, namely $\alpha^\textsc{lz/cm}_{rec}$ \cite{Gil-Marin:2015nqa}. The measurements are defined in Table~\ref{tab:twopoint_summary}. The SDSS-III BOSS DR12 galaxy sample data and covariances are described in Refs.~\cite{BOSS:2016wmc,Kitaura:2015uqa}. The measurements, obtained in Ref.~\cite{Zhang:2021yna}, are from BOSS catalogs DR12 (v5) combined CMASS-LOWZ~\footnote{\url{https://data.sdss.org/sas/dr12/boss/lss/}}~\cite{Reid:2015gra}, and are divided into redshift bins LOWZ, $0.2<z<0.43 \ (z_{\rm eff}=0.32)$, and CMASS, $0.43<z<0.7 \ (z_{\rm eff}=0.57)$, with north and south galactic skies for each, respectively denoted NGC and SGC. For the EDE analyses, we analyze the full shape of CMASS NGC, CMASS SGC, and LOWZ NGC, cross-correlated with post-reconstruction BAO. The analysis includes the monopole and quadrupole between $(k_{\rm min}, k_{\rm max}) = (0.01, 0.20/0.23) \hinvMpc$ in Fourier space and $(s_{\rm min}, s_{\rm max}) = (25/20, 200) \Mpcinvh$ in configuration space~\cite{Colas:2019ret,DAmico:2020kxu,Zhang:2021yna}, for LOWZ / CMASS. The theory prediction and likelihood are made available through \code{PyBird}. We also compare \code{PyBird} to \code{CLASS-PT}. More details on the differences between these likelihoods are given in Sec. II of Ref.~\cite{Paper1}.
When computing constraints with \code{CLASS-PT}, we use the galaxy power spectrum monopole, quadrupole, and hexadecapole, for $0.01\ h{\rm Mpc}^{-1} \leqslant k \leqslant 0.2\ h{\rm Mpc}^{-1}$, as well as the real-space extension, $Q_0$, up to $k_{\rm max} = 0.4\ h{\rm Mpc}^{-1}$, and the post-reconstructed BAO parameters. We use the standard \code{CLASS-PT} priors on the bias parameters. \item {\bf Pan18:} The Pantheon SNIa catalogue, spanning redshifts $0.01 < z < 2.3$ \cite{Scolnic:2017caz}. We will also study in Sec.~\ref{sec:PanPlus} the impact of the newer Pantheon+ catalogue, which favors a larger $\Omega_m$ \cite{Brout:2022vxf}, on our conclusions. \item {\bf SH0ES:} The SH0ES determination of $H_0 = 73.04\pm1.04$ km/s/Mpc from Cepheid-calibrated SNIa, modeled as a Gaussian likelihood.\footnote{For discussions about this modeling, see Refs.~\cite{Camarena:2021jlr,Efstathiou:2021ocp,Schoneberg:2021qvd}} \end{itemize} We will refer to the combination of \Planck TTTEEE+BAO+Pan18 as ``BaseTTTEEE'', and to ``BaseTT650TEEE'' when replacing \Planck TTTEEE with \Planck TT650TEEE. In the absence of CMB TTTEEE data, we refer to the dataset EFTBOSS+BBN+Lens+BAO+Pan18 as ``BaseEFTBOSS''. For all runs performed, we use \textit{Planck} conventions for the treatment of neutrinos, that is, we include two massless and one massive species with $m_{\nu} = 0.06$ eV \cite{Planck:2018vyg}. In addition, we impose large flat priors on the dimensionless baryon energy density $\omega_b$, the dimensionless cold dark matter energy density $\omega_{\rm cdm}$, the Hubble parameter today $H_0$, the logarithm of the variance of curvature perturbations centered around the pivot scale $k_p = 0.05$ Mpc$^{-1}$ (according to the \Planck{} convention), $\ln(10^{10}\mathcal{A}_s)$, the scalar spectral index $n_s$, and the re-ionization optical depth $\tau_{\rm reio}$.
Regarding the 3 free parameters of the EDE model, we impose a logarithmic prior on $z_c$, and flat priors for $f_{\rm EDE}(z_c)$ and $\Theta_i$: \begin{align*} &3 \le \log_{10}(z_c) \le 4, \\ &0\le f_{\rm EDE}(z_c) \le 0.5, \\ &0 \le \Theta_i \le \pi\,. \end{align*} We define our MCMC chains to be converged when the Gelman-Rubin criterion satisfies $R-1 < 0.05$, except for runs combining \Planck{}+EFTBOSS+ACT, for which we use a relaxed criterion of $R-1 < 0.1$ due to the complicated nature of the parameter space for the MCMC to explore.\footnote{Most parameters are converged at $R-1$ of 0.01--0.05; the parameter with the worst convergence is $\theta_i$, which is often unconstrained or multimodal in the analyses.} Finally, we extract the best-fit parameters following the procedure highlighted in the appendix of Ref.~\cite{Schoneberg:2021qvd}, and we produce our figures with \code{GetDist} \cite{Lewis:2019xzd}. \subsection{Details on the BOSS measurements and EFT likelihoods} \label{sec:comparison_measurements} \begin{table*} \begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{6}{|c|}{\bf Pre-reconstructed measurements}\\ \hline & Ref. & Estimator & Code & Redshift split & Window \\ \hline $\mathcal{P}_\textsc{fkp}^\textsc{lz/cm}$ & \cite{Gil-Marin:2015sqa} & FKP & \code{Rustico}\footnote{\url{https://github.com/hectorgil/Rustico}}~\cite{Gil-Marin:2015sqa} & LOWZ / CMASS & Inconsistent norm. \\ $P_\textsc{fkp}^\textsc{lz/cm}$ & \cite{Zhang:2021yna} & FKP & \code{PowSpec}\footnote{\url{https://github.com/cheng-zhao/powspec}}~\cite{Zhao:2020bib} / \code{nbodykit}\footnote{\url{https://github.com/bccp/nbodykit}}~\cite{Hand:2017pqn} & LOWZ / CMASS & Consistent norm.
\\ $\xi^\textsc{lz/cm}$ & \cite{Zhang:2021yna} & Landy \& Szalay & \code{FCFC}\footnote{\url{https://github.com/cheng-zhao/FCFC}}~\cite{Zhao:2020bib} & LOWZ / CMASS & Window-free \\ $P_\textsc{fkp}^{z_1/z_3}$ & \cite{Beutler:2021eqq}~\footnote{\url{https://fbeutler.github.io/hub/deconv_paper.html}} & FKP & -- & $z_1$ / $z_3$ & Consistent norm. \\ $P_\textsc{quad}^{z_1/z_3}$ & \cite{Philcox:2021kcw} & Quadratic & \code{Spectra without Windows}\footnote{\url{https://github.com/oliverphilcox/Spectra-Without-Windows}}~\cite{Philcox:2020vbm} & $z_1$ / $z_3$ & Window-free \\ \hline \multicolumn{6}{c}{}\\[-0.5em] \hline \multicolumn{6}{|c|}{\bf Post-reconstructed measurements}\\ \hline & Ref. & -- & -- & Redshift split & Method \\ \hline $\alpha^\textsc{lz/cm}_{rec}$ & \cite{Gil-Marin:2015nqa} & -- & -- & LOWZ / CMASS & \cite{DAmico:2020kxu} \\ $\alpha^{z_1/z_3}_{rec}$ & \cite{BOSS:2016hvq} & -- & -- & $z_1$ / $z_3$ & \cite{DAmico:2020kxu} \\ $\beta^{z_1/z_3}_{rec}$ & \cite{BOSS:2016hvq} & -- & -- & $z_1$ / $z_3$ & \cite{Philcox:2020vvt} \\ \hline \end{tabular} \caption{Comparison of pre-reconstructed and post-reconstructed BOSS two-point function measurements: reference, estimator, code of the measurements, redshift split (LOWZ: $0.2<z<0.43 \ (z_{\rm eff}=0.32)$, CMASS: $0.43<z<0.7 \ (z_{\rm eff}=0.57)$; $z_1$: $0.2<z<0.5 \ (z_{\rm eff}=0.38)$, $z_3$: $0.5<z<0.7 \ (z_{\rm eff}=0.61)$), and window function treatment. For the post-reconstructed measurements, ``Ref.'' refers to the public post-reconstructed measurements used, while ``Method'' gives the references presenting the algorithm used to extract the reconstructed BAO parameters and how the cross-correlation with the pre-reconstructed measurements is performed. The SDSS-III BOSS DR12 galaxy sample data are described in Refs.~\cite{BOSS:2016wmc,Kitaura:2015uqa}.
The pre-reconstructed measurements are from BOSS catalogs DR12 (v5) combined CMASS-LOWZ~\footnote{\url{https://data.sdss.org/sas/dr12/boss/lss/}}~\cite{Reid:2015gra}. More details can be found in Sec.~IV of Ref.~\cite{Paper1}. } \label{tab:twopoint_summary} \end{table*} In this paper, we perform a thorough comparison of the constraints derived from the EFTBOSS data, in order to assess the consistency of the various analyses presented in the literature. Indeed, there are various BOSS two-point function measurements available to perform full-shape analyses, as well as different EFT codes. As described in more detail in Ref.~\cite{Paper1}, the BOSS DR12 data can be divided into two different sets of redshift splitting (``LOWZ/CMASS'' vs.~``$z_1$/$z_3$''). Furthermore, depending on the estimator, the data are sometimes analyzed by convolving the theory model with a window function, and sometimes not. For a window-free analysis, one option is to use the configuration-space correlation function, $\xi$; another is to use a quadratic estimator, which we denote with the subscript ``QUAD''. Finally, there are different ways to analyze the post-reconstructed parameters, denoted by $\alpha_{\rm rec}$ and $\beta_{\rm rec}$, which are then combined with the EFTBOSS data. These different datasets include slightly different amounts of information (due to different scale cuts), but they all represent reasonable choices on how to analyze the BOSS DR12 observations. The characteristics of each measurement are listed in Tab.~\ref{tab:twopoint_summary} and more details can be found in Sec.~IV of Ref.~\cite{Paper1}.
The EFT implementation and BOSS data we will focus on in this study are packaged in the \code{PyBird} likelihood, based on the EFT prediction and likelihood from~\code{PyBird}~\footnote{\url{https://github.com/pierrexyz/pybird}}~\cite{DAmico:2020kxu}, and the \code{CLASS-PT} likelihood, based on the EFT prediction from~\code{CLASS-PT}~\footnote{\url{https://github.com/michalychforever/CLASS-PT}}~\cite{Chudaykin:2020aoj} and likelihood from Ref.~\cite{Philcox:2021kcw}.~\footnote{\url{https://github.com/oliverphilcox/full_shape_likelihoods}} Details about the \code{PyBird} and \code{CLASS-PT} likelihoods are presented in Sec.~II of Ref.~\cite{Paper1}. Here, let us simply mention that \code{CLASS-PT} implements the IR-resummation scheme proposed in Ref.~\cite{Blas:2016sfa} and generalized to redshift space in Ref.~\cite{Ivanov:2018gjr}. This is different from the scheme implemented in \code{PyBird}, proposed in Ref.~\cite{Senatore:2014via}, generalized to redshift space in Ref.~\cite{Lewandowski:2015ziq}, and made numerically efficient in Ref.~\cite{DAmico:2020kxu}. The \code{CLASS-PT} scheme was shown in Ref.~\cite{Lewandowski:2018ywf} to be an approximation of the one used in \code{PyBird}, in which one considers only the resummation of the bulk displacements around the BAO peak, $r_{\rm BAO} \sim 110 \Mpcinvh$. For this scheme to be made practical, one further relies on a wiggle-no-wiggle split procedure to isolate the BAO part. Although this scheme has been shown to work fairly well within $\Lambda$CDM for cosmologies not too far from that of \Planck, we cautiously observe that in cosmologies as far away as the ones probed in EDE, the BAO peak location can be dramatically modified, and it thus remains to be checked that the approximations still hold in these cases.
For our prior choice (on $f_{\rm EDE}$), we have checked that at least the wiggle-no-wiggle split procedure as implemented in \code{CLASS-PT} is as numerically stable as for a fiducial case where the BAO peak is $\sim 110 \Mpcinvh$. In addition, in Ref.~\cite{Paper1}, we have checked the validity of the two pipelines by implementing in the \code{PyBird} likelihood the exact same priors as those used in the \code{CLASS-PT} likelihood, and found agreement on the 1D posteriors of the cosmological parameters at $\lesssim 0.2\sigma$ in $\Lambda$CDM, where these residual differences can be attributed to the different implementations of the IR-resummation mentioned above. \section{Updated EFTBOSS constraints on EDE} \label{sec:ede_eftoflss} \subsection{Preliminary study} In the recent literature, there have been a number of analyses showing hints of EDE and allowing for a resolution of the Hubble tension \cite{Poulin:2018cxd,Niedermann:2019olb,Hill:2021yec,Poulin:2021bjr,LaPosta:2021pgm,Smith:2022hwi}. In this preliminary study, we will take the results of two representative analyses. First, the baseline analysis of BaseTTTEEE+Lens+SH0ES data (second column of Tab.~\ref{tab:cosmoparam_planck}) has a best-fit of $f_{\rm EDE}(z_c) = 0.122$, $H_0 = 71.89$ km.s$^{-1}$.Mpc$^{-1}$. Second, the analysis of BaseTT650TEEE+ACT (first column of Tab.~\ref{tab:cosmoparam_act}) favors an EDE model with significantly larger values of $f_{\rm EDE}(z_c)$ and $H_0$ compared to the BaseTTTEEE+Lens+SH0ES analysis, namely $f_{\rm EDE}(z_c) = 0.159$, $H_0 = 73.30$ km.s$^{-1}$.Mpc$^{-1}$ (see also \cite{Hill:2021yec,LaPosta:2021pgm,Poulin:2021bjr,Smith:2022hwi}). In this section, we will gauge how these two specific models fare against BOSS data following Refs.~\cite{DAmico:2020ods,Ivanov:2020ril}.
Using the best-fit parameters listed in Tab.~\ref{tab:cosmoparam_planck} (second column) and Tab.~\ref{tab:cosmoparam_act} (first column), we perform a preliminary study where we determine the $\chi^2$ of the EFTBOSS data (using our fiducial ``$P_\textsc{fkp}^\textsc{lz/cm}+\alpha^\textsc{lz/cm}_{rec}$'' data) after optimizing the EFT parameters. Using the \code{PyBird} code, we show in Tab.~\ref{tab:EDE_preliminary} the $\chi^2$ associated with the EFTBOSS data, and we plot, in Fig.~\ref{fig:galaxy_power_spectrum_prliminary}, the residuals with respect to $\Lambda$CDM from the BaseTTTEEE+Lens+EFTBOSS analysis\footnote{When combined with EFTBOSS, we do not include the BOSS BAO+$f\sigma_8$ data.} \cite{Simon:2022ftd}. For comparison, we also show the residuals of the BOSS data with respect to the same model. First, one can see that the changes in the residuals between these various fits are almost imperceptible by eye with respect to the BOSS error bars. We find that the $\chi^2$ of the BOSS data is degraded by $+1.1$ for BaseTTTEEE+Lens+SH0ES (to be compared with $\sim+2.5$ in Ref.~\cite{Ivanov:2020ril}) and $+2.4$ for BaseTT650TEEE+ACT, compared to the best-fit $\chi^2$ of the EFTBOSS data in the $\Lambda$CDM model. Despite this small $\chi^2$ degradation, we note that the $p$-value of the BOSS data in the EDE models that resolve the Hubble tension is still very good. Nevertheless, we anticipate that the EFTBOSS data could have a non-negligible constraining power in combination with BaseTT650TEEE+ACT, while its impact should be small in the context of the BaseTTTEEE+Lens+SH0ES analysis.
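The quoted $p$-values assume uncorrelated data points, with the number of degrees of freedom reduced by the $3\cdot 9$ optimized EFT parameters (the cosmology being fixed). A sketch of this computation, using the Wilson--Hilferty approximation to the $\chi^2$ survival function (accurate to a fraction of a percent at these large degrees of freedom; helper names are ours):

```python
from math import erfc, sqrt

def chi2_pvalue(chi2, dof):
    """Approximate P(X > chi2) for X ~ chi^2 with `dof` degrees of freedom,
    via the Wilson-Hilferty cube-root normal approximation."""
    z = ((chi2 / dof) ** (1 / 3) - (1 - 2 / (9 * dof))) / sqrt(2 / (9 * dof))
    return 0.5 * erfc(z / sqrt(2))  # one-sided Gaussian tail probability

# 132 data points minus 3 sky-cuts x 9 EFT parameters (cosmology fixed)
dof = 132 - 3 * 9
```

With $\chi^2_{\rm EFTBOSS}=118.9$, $120.2$ and $117.8$, this reproduces the $16.7\%$, $14.7\%$ and $18.5\%$ quoted in Tab.~\ref{tab:EDE_preliminary}.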
\begin{table*}[t] \begin{tabular}{|l|c|c|c|} \hline & BaseTTTEEE+Lens & BaseTT650TEEE & BaseTTTEEE+Lens\\ & +SH0ES (EDE) & +ACT (EDE)& +EFTBOSS ($\Lambda$CDM)\\ \hline $\chi^2_{\text{CMASS NGC}}$ & 39.3 & 39.1 & 40.3 \\ $\chi^2_{\text{CMASS SGC}}$ & 45.2 & 46.0 & 44.0 \\ $\chi^2_{\text{LOWZ NGC}}$ & 34.4 & 35.1 & 33.5 \\ \hline $\chi^2_{\text{EFTBOSS}}$ & 118.9 & 120.2 & 117.8 \\ $\Delta\chi^2_{\rm min} (\text{EDE}-\Lambda\text{CDM})$ & +1.1 & + 2.4 & -- \\ \hline $p$-value & $16.7\%$ & $14.7\%$ & $18.5\%$ \\ \hline $N_{\rm data}$ &\multicolumn{3}{|c|}{132}\\ \hline \end{tabular} \caption{$\chi^2$ of each sky-cut of the EFTBOSS dataset for the EDE best-fit models extracted from a fit to BaseTTTEEE+Lens+SH0ES and BaseTT650TEEE+ACT, and for the $\Lambda$CDM best-fit model from a fit to BaseTTTEEE+Lens+EFTBOSS. We also indicate the $\Delta\chi^2$ with respect to the $\Lambda$CDM best-fit model. The associated $p$-value is calculated assuming that the data points are uncorrelated and taking $3 \cdot 9$ EFT parameters in each fit (given that the cosmology is fixed). } \label{tab:EDE_preliminary} \end{table*} \subsection{Constraints from various BOSS data} \label{sec:EDE_arena_main} As is done in Ref.~\cite{Paper1} for $\Lambda$CDM, we compare the constraints on EDE from the various BOSS two-point function measurements, described in Tab.~\ref{tab:twopoint_summary}, in combination with the BBN prior on $\omega_b$. The comparison of the 2D posteriors is shown in Fig.~\ref{fig:boss_ede_summary_triangle}, while the 1D posteriors of $\{f_{\rm EDE}(z_c), h,\omega_{\rm cdm},\ln(10^{10}A_s),n_s,\Omega_m,\sigma_8,S_8\}$ are shown in Fig.~\ref{fig:boss_ede_summary}.
In these figures, we also display the results from the BOSS data analyzed with the EFT predictions convolved with inconsistently-normalized window functions, namely $\mathcal{P}_\textsc{fkp}^\textsc{lz/cm}+\alpha^\textsc{lz/cm}_{rec}$, which disfavor the EDE model when they are combined with {\em Planck} data \cite{DAmico:2020ods,Ivanov:2020ril} (see the discussion in App.~\ref{app:normalization} for the impact of inconsistent normalization within the $\Lambda$CDM model). Interestingly, using the \code{PyBird} likelihood, the $\Lambda$CDM parameters are broadly consistent between $P_\textsc{fkp}^\textsc{lz/cm}+\alpha^\textsc{lz/cm}_{rec}$ and $P_\textsc{quad}^{z_1/z_3}+\alpha^{z_1/z_3}_{rec}$, as we find a shift of $\lesssim 0.3\sigma$ on $\Lambda$CDM parameters between these two measurements. However, we find that $P_\textsc{fkp}^\textsc{lz/cm}+\alpha^\textsc{lz/cm}_{rec}$ leads to stronger constraints on EDE, namely\footnote{Per convention, we cite 1-sided bounds at 95\% C.L. and 2-sided ones at 68\% C.L.} $f_{\rm EDE}(z_c) < 0.321$, while $P_\textsc{quad}^{z_1/z_3}+\alpha^{z_1/z_3}_{rec}$ yields $f_{\rm EDE}(z_c) < 0.382$. Concerning $\xi^\textsc{lz/cm}+\alpha^\textsc{lz/cm}_{rec}$, we find different constraints, even for the $\Lambda$CDM parameters: comparing $\xi^\textsc{lz/cm}+\alpha^\textsc{lz/cm}_{rec}$ to $P_\textsc{fkp}^\textsc{lz/cm}+\alpha^\textsc{lz/cm}_{rec}$, we find shifts of $\lesssim 1.2\sigma$, whereas comparing $\xi^\textsc{lz/cm}+\alpha^\textsc{lz/cm}_{rec}$ to $P_\textsc{quad}^{z_1/z_3}+\alpha^{z_1/z_3}_{rec}$, we find shifts of $\lesssim 1.0\sigma$.
Let us note that the constraints on $\Lambda$CDM parameters reconstructed from $\xi^\textsc{lz/cm}+\alpha^\textsc{lz/cm}_{rec}$ are weaker than those of $P_\textsc{fkp}^\textsc{lz/cm}+\alpha^\textsc{lz/cm}_{rec}$ and $P_\textsc{quad}^{z_1/z_3}+\alpha^{z_1/z_3}_{rec}$, which is consistent with what was found within the $\Lambda$CDM model in our companion paper~\cite{Paper1} (see also Ref.~\cite{Zhang:2021yna} and explanations therein). Regarding the EDE parameters, we obtain weaker constraints on $f_{\rm EDE}$, namely $f_{\rm EDE}(z_c) < 0.468$. It is worth noting that, for the same likelihood, the constraints on $f_{\rm EDE}(z_c)$ can be up to $\sim35\%$ different depending on the data (especially between $P_\textsc{fkp}^\textsc{lz/cm} + \alpha^\textsc{lz/cm}_{rec}$ and $\xi^\textsc{lz/cm} + \alpha^\textsc{lz/cm}_{rec}$). However, regardless of the data we consider, the BOSS full-shape data (analyzed on their own with a BBN prior) within EDE lead to reconstructed values of $H_0$ that are compatible with what is obtained by the SH0ES collaboration. This conclusion also holds for the \code{CLASS-PT} baseline (last line of Fig.~\ref{fig:boss_ede_summary}), which is less constraining than the \code{PyBird} likelihood for the EDE model. Indeed, we obtain $f_{\rm EDE}(z_c) < 0.448$, which is $\sim 15\%$ weaker than the constraint obtained with the \code{PyBird} likelihood, even for similar data ($P_\textsc{quad}^{z_1/z_3}$). Furthermore, we note that the $f_{\rm EDE}(z_c)$ constraint reconstructed from $P_\textsc{fkp}^\textsc{lz/cm} + \alpha^\textsc{lz/cm}_{rec}$, analyzed with the \code{PyBird} likelihood, is $\sim35\%$ weaker than the constraint obtained from $P_\textsc{quad}^{z_1/z_3}+\beta^{z_1/z_3}_{rec}$, analyzed with the \code{CLASS-PT} likelihood. We conclude that the standard \code{PyBird} analysis setup (which constitutes our baseline setup) shows a higher constraining power than the standard \code{CLASS-PT} analysis.
For a more detailed discussion of the differences between \code{PyBird} and \code{CLASS-PT} for the EDE model, including other data combinations, we refer to App.~\ref{app:classpt_vs_pybird_EDE}. We however warn that the cosmological constraints from EFTofBOSS at the level of the 1D posteriors might be affected by prior effects, as discussed in our companion paper~\cite{Paper1} in the context of $\Lambda$CDM. \subsection{Primary CMB-free constraints on EDE} To fully gauge the constraining power of a primary CMB-free analysis, on top of the fiducial EFTBOSS data and BBN prior, we now include other BOSS BAO measurements, {\em Planck} lensing, and the Pan18 dataset. We recall that this dataset combination is simply called ``BaseEFTBOSS'', and we plot the associated reconstructed 2D posteriors in Fig.~\ref{fig:EDE_EFT} (blue contours). We compare our results with the posteriors reconstructed from a BaseTTTEEE+Lens+SH0ES (red contours) and BaseTT650TEEE+ACT (orange contours) analysis. One can see that, while the primary CMB-free analysis does not favor EDE (in the absence of a SH0ES prior), constraints are relatively weak and the reconstructed posteriors from the BaseEFTBOSS data are not in tension with those reconstructed from the BaseTTTEEE+Lens+SH0ES and BaseTT650TEEE+ACT analyses. Nevertheless, we note a clear narrowing of the constraints in the $\{f_{\rm EDE}(z_c),\log_{10}(z_c)\}$ parameter space around $\log_{10}(z_c)\sim 3.5$, indicating that BOSS gains constraining power right around matter-radiation equality. To extract a meaningful CMB-independent bound on $f_{\rm EDE}(z_c)$, we perform an additional analysis now restricting the $\log_{10}(z_c)$-range to $\log_{10}(z_c)\in[3.4,3.7]$, which corresponds to the region favored to resolve the Hubble tension. We find that the combination of EFTBOSS+BBN+Lens+BAO+Pan18 (i.e., BaseEFTBOSS) leads to $f_{\rm EDE}(z_c)<0.2$ (95\% C.L.)
and $h = 0.710_{-0.025}^{+0.015}$, which does not exclude the EDE models resolving the Hubble tension. When performing the same analysis with \code{CLASS-PT}, we find significantly weaker constraints, with $f_{\rm EDE}(z_c)<0.284$ (95\% C.L.) and $h =0.726_{-0.04}^{+0.02}$. Constraints from \code{CLASS-PT} are shown in App.~\ref{app:classpt_vs_pybird_EDE}, Fig.~\ref{fig:EDE_PyBird_vs_CLASSPT}. \section{EFTBOSS combined with CMB data} \label{sec:EFTCMB} \begin{table*}[] \centering \scalebox{0.8}{ \begin{tabular}{|l|c|c|c|c|c|c|} \hline & \multicolumn{2}{|c|}{BaseTTTEEE+Lens} & \multicolumn{2}{|c|}{BaseTTTEEE+Lens+$f\sigma_8$}& \multicolumn{2}{|c|}{BaseTTTEEE+Lens+EFTBOSS}\\ \hline \hline $H_0$ prior? & no & yes & no & yes & no & yes\\ \hline $f_{\rm EDE}(z_c)$ & $< 0.091(0.088)$ & $0.109(0.122)^{+0.030}_{-0.024}$ & $< 0.088(0.057)$ & $0.102(0.118)^{+0.030}_{-0.024}$ & $< 0.083(0.082)$ & $0.103(0.116)^{+0.027}_{-0.023}$ \\ $\log_{10}(z_c)$ & unconstrained $(3.55)$ & $3.599(3.568)^{+0.029}_{-0.081}$ & unconstrained (3.78) & $3.603(3.569)^{+0.037}_{-0.11}$ & unconstrained (3.82) & $3.67(3.83)^{+0.21}_{-0.15}$ \\ $\theta_i$ & unconstrained (2.8) & $2.65(2.73)^{+0.22}_{-0.025}$ & unconstrained (2.94) & $2.58(2.76)^{+0.33}_{+0.034}$ & unconstrained (2.9) & $2.73(2.89)^{+0.19}_{-0.065}$ \\ \hline $h$ & $0.688(0.706)^{+0.006}_{-0.011}$ & $0.715(0.719)\pm 0.009$ & $0.687(0.694)^{+0.006}_{-0.011}$ & $0.712(0.718)\pm 0.009$ & $0.687(0.700)^{+0.006}_{-0.011}$ & $0.713(0.715)\pm 0.009$ \\ $\omega_{\rm cdm }$ & $0.1227(0.1281)^{+0.0018}_{-0.0036}$ & $0.1303(0.1319)\pm 0.0035$ & $0.1227(0.1246)^{+0.0016}_{-0.0036}$ & $0.1296(0.1314)\pm 0.0035$ & $0.1221(0.1269)^{+0.0015}_{-0.0033}$ & $0.1288(0.1297)\pm 0.0032$ \\ $10^{2}\omega_{b }$ & $2.258(2.266)^{+0.018}_{-0.020}$ & $2.283(2.303)\pm 0.020$ & $2.258(2.266)^{+0.017}_{-0.021}$ & $2.282(2.279)\pm 0.021$ & $2.257(2.275)^{+0.017}_{-0.020}$ & $2.287(2.301)\pm 0.023$ \\ $10^{9}A_{s }$ & $2.122(2.135)\pm 0.032$ & $2.153(2.145)\pm 0.032$ 
& $2.119(2.119)^{+0.029}_{-0.033}$ & $2.146(2.164)\pm 0.031$ & $2.113(2.120)\pm 0.032$ & $2.144(2.144)\pm 0.032$ \\ $n_{s }$ & $0.9734(0.9823)^{+0.0053}_{-0.0076}$ & $0.9883(0.9895)\pm 0.0060$ & $0.9730(0.9809)^{+0.0048}_{-0.0074}$ & $0.9868(0.9899)\pm 0.0062$ & $0.9715(0.9827)^{+0.0049}_{-0.0076}$ & $0.9867(0.9921)\pm 0.0065$ \\ $\tau_{\rm reio }$ & $0.0570(0.0574)^{+0.0069}_{-0.0076}$ & $0.0582(0.0579)\pm 0.0075$ & $0.0564(0.0553)\pm 0.0072$ & $0.0572(0.059)\pm 0.0073$ & $0.0562(0.0553)\pm 0.0073$ & $0.0586(0.0599)^{+0.0068}_{-0.0076}$ \\ \hline $S_8$ & $0.831(0.839)^{+0.011}_{-0.013}$ & $0.839(0.843)\pm 0.012$ & $0.831(0.833)^{+0.011}_{-0.012}$ & $0.838(0.843)\pm 0.013$ & $0.826(0.836)\pm 0.011$ & $0.833(0.835)\pm 0.012$ \\ $\Omega_{m }$ & $0.3084(0.3041)\pm 0.0058$ & $0.3008(0.3005)\pm 0.0048$ & $0.3089(0.3074)\pm 0.0054$ & $0.3019(0.3003)\pm 0.0051$ & $0.3077(0.3065)\pm 0.0054$ & $0.2998(0.3004)\pm 0.0050$ \\ \hline total $\chi^2_{\rm min}$ & 3799.2&3802.9 & 3801.8& 3806.1 & 3912.7 & 3917.3\\ $\Delta \chi^2_{\rm min}$ & -3.8 & -23.7 & -3.9&-23.0 & -4.7 & -22.7\\ \hline $Q_{\rm DMAP}$&\multicolumn{2}{|c|}{1.9$\sigma$} &\multicolumn{2}{|c|}{2.0$\sigma$}& \multicolumn{2}{|c|}{2.1$\sigma$}\\ \hline \end{tabular}} \caption{ Mean (best-fit) $\pm 1\sigma$ (or $2\sigma$ for one-sided bounds) of reconstructed parameters in the EDE model confronted to various datasets, including \Planck TTTEEE.} \label{tab:cosmoparam_planck} \end{table*} \begin{table*}[] \centering \scalebox{0.9}{ \begin{tabular}{|l|c|c|c|c|} \hline & BaseTT650TEEE+ACT & BaseTT650TEEE+ACT & BaseTT650TEEE+ACT & BaseTT650TEEE+ACT \\ & &+$f\sigma_8$ & +EFTBOSS & +Lens+EFTBOSS\\ \hline \hline $f_{\rm EDE}(z_c)$ & $0.128(0.159)^{+0.064}_{-0.039}$ & $0.116(0.148)^{+0.059}_{-0.046}$ & $0.093(0.148)^{+0.047}_{-0.066}$ & $< 0.172(0.148)$ \\ $\log_{10}(z_c)$ & $3.509(3.521)^{+0.048}_{-0.033}$ & $3.505(3.514)^{+0.056}_{-0.049}$ & $3.493(3.514)^{+0.080}_{-0.093}$ & $3.486(3.514)^{+0.091}_{-0.13}$ \\ $\theta_i$ & 
$2.63(2.77)^{+0.24}_{+0.023}$ & $2.53(2.78)^{+0.37}_{+0.094}$ & $2.54(2.78)_{0.065}^{+0.47}$ & $2.41(2.78)_{0.12}^{+0.65}$ \\ \hline $h$ & $0.723(0.733)^{+0.021}_{-0.017}$ & $0.718(0.728)\pm 0.018$ & $0.713( 0.730)^{+0.017}_{-0.021}$ & $0.708(0.725)^{+0.015}_{-0.022}$ \\ $\omega{}_{\rm cdm }$ & $0.1332(0.1369)^{+0.0071}_{-0.0059}$ & $0.1320(0.1355)\pm 0.0062$ & $0.1285(0.1355)^{+0.0057}_{-0.0067}$ & $0.1276(0.1355)^{+0.0047}_{-0.0074}$ \\ $10^{2}\omega{}_{b }$ & $2.268( 2.267)\pm 0.019$ & $2.266(2.261)\pm 0.020$ & $2.265(2.266)\pm 0.020$ & $2.263(2.265)\pm 0.019$ \\ $10^{9}A_{s }$ & $2.144(2.148)\pm 0.037$ & $2.136(2.144)\pm 0.038$ & $2.128(2.147)\pm 0.040$ & $2.127(2.143)\pm 0.034$ \\ $n_{s }$ & $0.9928(0.9963)^{+0.0092}_{-0.0078}$ & $0.9910(0.9936)^{+0.0090}_{-0.0081}$ & $0.9885(0.9936)\pm 0.0091$ & $0.9865(0.9936)\pm 0.0086$ \\ $\tau{}_{\rm reio }$ & $0.0520(0.0508)\pm 0.0077$ & $0.0511(0.0506)\pm 0.0079$ & $0.0519(0.0506)\pm 0.0077$ & $0.0523(0.0506)\pm 0.0072$ \\ \hline $S_8$ & $0.842(0.846)\pm 0.016$ & $0.841(0.845)\pm 0.017$ & $0.830(0.838)\pm 0.016$ & $0.831(0.837)^{+0.013}_{-0.014}$ \\ $\Omega{}_{m }$ & $0.2996(0.2982)^{+0.0061}_{-0.0072}$ & $0.3013(0.2995)\pm 0.0068$ & $0.2990(0.2995)\pm 0.0069$ & $0.3008(0.2995)\pm 0.0059$ \\ \hline total $\chi^2_{\rm min}$ & 3571.9&3575.8 &3688.3 &3698.4 \\ $\Delta\chi^2$(EDE$-\Lambda$CDM)& -14.6 & -13.3 & -12.0 & -11.1 \\ \hline \end{tabular}} \caption{ Mean (best-fit) $\pm 1\sigma$ (or $2\sigma$ for one-sided bounds) of reconstructed parameters in the EDE model confronted to various datasets, including \Planck TT650TEEE+ACT.} \label{tab:cosmoparam_act} \end{table*} \subsection{EFTBOSS+{\em Planck}TTTEEE} We now turn to studying the constraining power of EFTBOSS data in combination with primary CMB datasets. We start by performing joint analyses with the full {\em Planck}TTTEEE datasets. 
All relevant $\chi^2$ statistics are given in App.~\ref{app:chi2}, Tabs.~\ref{tab:chi2_Planck_LCDM} and \ref{tab:chi2_Planck_EDE}, while the reconstructed posteriors and best-fit values of parameters are given in Tab.~\ref{tab:cosmoparam_planck}. In the left panel of Fig.~\ref{fig:EDE_EFT_Planck}, we compare constraints obtained with the consistently and inconsistently normalized EFTBOSS data to those obtained with the compressed BAO/$f\sigma_8$ data. One can see that the correction of the normalization of the window function leads the new EFTBOSS data to have a constraining power only slightly stronger than the compressed BAO/$f\sigma_8$ data. We derive a BaseTTTEEE+Lens+EFTBOSS constraint of $f_{\rm EDE}(z_c)<0.083$, to be compared with $f_{\rm EDE}(z_c)<0.088$ from BaseTTTEEE+Lens+$f\sigma_8$, while the EFTBOSS data with the wrong normalization incorrectly lead to $f_{\rm EDE}(z_c)<0.054$. Moreover, as was already pointed out in various works \cite{Murgia:2020ryi,Smith:2020rxx,Schoneberg:2021qvd,Herold:2021ksg}, the posteriors are highly non-Gaussian with long tails towards high $H_0$, and therefore these constraints should be interpreted with care. This is further attested by the fact that the best-fit point lies at the $2\sigma$ limit of our constraints (e.g.~$f_{\rm EDE}$ at the best-fit is $0.082$ for BaseTTTEEE+Lens+EFTBOSS). We defer to future work the comparison of the constraints derived here with a Bayesian analysis to those derived with a profile likelihood approach (e.g.~\cite{Herold:2021ksg,Reeves:2022aoi}), which will be affected by our update to the survey window function calculation. As advocated recently, we will gauge the level of the Hubble tension by computing the tension metric $Q_{\rm DMAP}\equiv \sqrt{\chi^2({\rm w/~SH0ES})-\chi^2({\rm w/o~SH0ES})}$ \cite{Raveri:2018wln,Schoneberg:2021qvd}, which agrees with the usual Gaussian metric tension for Gaussian posteriors, but better captures the non-Gaussianity of the posterior.
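The $Q_{\rm DMAP}$ metric is straightforward to evaluate from the best-fit $\chi^2$ values reported in Tab.~\ref{tab:cosmoparam_planck}; a one-line sketch (the function name is ours):

```python
from math import sqrt

def q_dmap(chi2_with_sh0es, chi2_without_sh0es):
    """Gaussian-equivalent tension, in units of sigma, between SH0ES and the
    other datasets, from the difference of best-fit (MAP) chi^2 values."""
    return sqrt(chi2_with_sh0es - chi2_without_sh0es)
```

For BaseTTTEEE+Lens+EFTBOSS, the total $\chi^2_{\rm min}$ values of 3917.3 (with SH0ES) and 3912.7 (without) give $Q_{\rm DMAP}=\sqrt{4.6}\simeq 2.1\sigma$, as quoted in Tab.~\ref{tab:cosmoparam_planck}.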
Once the SH0ES prior is included in the BaseTTTEEE+Lens+EFTBOSS analysis, we reconstruct $f_{\rm EDE}(z_c)=0.103^{+0.027}_{-0.023}$ with $h = 0.713\pm0.009$ and find the tension metric $Q_{\rm DMAP}=2.1\sigma$ (while we find 4.8$\sigma$ in $\Lambda$CDM), see Tab.~\ref{tab:cosmoparam_planck} and Fig.~\ref{fig:EDE_EFT_Planck}, right panel. This is only a minor difference compared to the results without BOSS $f\sigma_8$ or full-shape information, for which we get $f_{\rm EDE}(z_c)=0.109^{+0.030}_{-0.024}$ with $h = 0.715\pm0.009$, and for which the $Q_{\rm DMAP}$ metric gives a $1.9\sigma$ tension between SH0ES and the other datasets.\footnote{This differs from what was reported in Ref.~\cite{Schoneberg:2021qvd} because of an updated $H_0$ prior with tighter error bars.} Similarly, when the $f\sigma_8$ information is included, we find a $2.0\sigma$ tension with $f_{\rm EDE}(z_c)=0.102^{+0.030}_{-0.024}$ and $h = 0.712\pm0.009$. Analyses with \code{CLASS-PT} are presented in App.~\ref{app:classpt_vs_pybird_EDE}, and similar results are found. Therefore, current full-shape EFTBOSS data provide little additional constraining power ($\sim 10\%$) on the EDE model over {\em Planck} and $f\sigma_8$. We conclude that the EFTBOSS data are in agreement with the model reconstructed when including a SH0ES prior, as the preliminary study suggested, and BOSS data do not exclude the EDE resolution to the Hubble tension. \subsection{EFTBOSS+{\em Planck}TT650TEEE+ACT} We now turn to the combination of {\em Planck} data with ACT. We start with a restricted version of {\em Planck} temperature data at $\ell<650$ (chosen to mimic WMAP and perform a consistency test between CMB datasets), combined with {\em Planck} polarization and ACT data. This data combination\footnote{The preference persists until {\em Planck}TT data at $\ell\gtrsim1300$ are included, while the inclusion of SPT-3G TEEE data has little impact (in fact slightly strengthening the hint of EDE) \cite{Smith:2022hwi}.
} is known to favor\footnote{As discussed by the ACT collaboration \cite{Hill:2021yec}, it is still a possibility that the apparent preference for EDE arises from remaining systematic errors in the data.} EDE at $\sim 3\sigma$ \cite{Hill:2021yec,Poulin:2021bjr,LaPosta:2021pgm,Smith:2022hwi}, with large values of $f_{\rm EDE}(z_c)=0.128^{+0.064}_{-0.039}$ and $h =0.723^{+0.021}_{-0.017} $ (see Tab.~\ref{tab:cosmoparam_act}, first column). In Ref.~\cite{Smith:2022hwi}, it was shown that BOSS $f\sigma_8$ and {\em Planck} lensing data decreased the preference\footnote{In the following, the preference is computed assuming the $\Delta\chi^2$ follows a $\chi^2$-distribution with three degrees of freedom. We stress that this is just an approximation, as the true number of degrees of freedom is more complicated to estimate due to $\log_{10}(z_c)$ and $\theta_i$ becoming ill defined when $f_{\rm EDE}\to 0$.} to 2.6$\sigma$. We now test whether the EFT analysis of BOSS data can put further pressure on this hint of EDE, as our preliminary study indicates. All relevant $\chi^2$ statistics are given in App.~\ref{app:chi2}, Tab.~\ref{tab:chi2_ACT}, while we give the reconstructed posteriors of parameters in Tab.~\ref{tab:cosmoparam_act}. We show in Fig.~\ref{fig:EDE_EFT_ACT} (left panel) the 2D posterior distributions of $\{f_{\rm EDE}(z_c),\omega_{\rm cdm},h, \log_{10}(z_c) \}$ reconstructed from the analysis of BaseTT650TEEE+ACT, compared with those reconstructed with the addition of either $f\sigma_8$ or EFTBOSS data. One can see that in this case, the EFTBOSS data do reduce the preference for EDE, with $f_{\rm EDE}$ now compatible with zero at $1\sigma$. For the BaseTT650TEEE+ACT+Lens+EFTBOSS dataset, represented by the dark blue line in Fig.~\ref{fig:EDE_EFT_ACT} (left panel), we find a weak upper limit $f_{\rm EDE} < 0.172$ and $h = 0.708^{+0.015}_{-0.022}$, with best-fit values $f_{\rm EDE}\simeq 0.148$ and $h\simeq 0.725$ in good agreement with the SH0ES determination.
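The Gaussian-equivalent significances quoted in this subsection convert $\Delta\chi^2$ assuming a $\chi^2$ distribution with three degrees of freedom, as noted in the footnote above. A sketch using only the Python standard library (helper names are ours):

```python
from math import erfc, exp, pi, sqrt
from statistics import NormalDist

def chi2_sf_3dof(x):
    """Exact survival function of a chi^2 variable with 3 degrees of freedom."""
    return erfc(sqrt(x / 2)) + sqrt(2 * x / pi) * exp(-x / 2)

def preference_sigma(delta_chi2):
    """Convert Delta chi^2 (EDE - LCDM) into a Gaussian-equivalent sigma."""
    p = chi2_sf_3dof(abs(delta_chi2))
    return NormalDist().inv_cdf(1 - p / 2)
```

With this convention, $\Delta\chi^2=-11.1$ maps to $\simeq 2.5\sigma$ and $\Delta\chi^2=-14.6$ to $\simeq 3\sigma$.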
Quantifying the preference over $\Lambda$CDM, we find a $\Delta\chi^2 = -11.1$ in favor of EDE (2.5$\sigma$), decreased from $-14.6$ without EFTBOSS and {\em Planck} lensing data. The $\chi^2$ of the EFTBOSS data is degraded by $+1.7$ in the EDE model compared to $\Lambda$CDM, while the improvement in the fit of ACT and \Planck TT650TEEE is fairly stable, with $\Delta\chi^2({\rm ACT})=-7.6$ and $\Delta \chi^2({\rm \Planck TT650TEEE}) = -6.1$, respectively. Additionally, we note that for this more extreme EDE model, the full EFTBOSS data provide stronger constraints than the conventional BAO/$f\sigma_8$ data. Although current data do not fully erase the preference for EDE over $\Lambda$CDM, this confirms that BOSS data, and more generally measurements of the matter power spectrum in the late universe, provide an important probe of large EDE fractions in the early universe. We find similar results with \code{CLASS-PT} (see App.~\ref{app:classpt_vs_pybird_EDE} for details), attesting that once BOSS data are combined with CMB data, the results obtained are robust to reasonable choices in the EFT analysis. \begin{table*}[] \centering \scalebox{1}{ \begin{tabular}{|l|c|c|} \hline & \multicolumn{2}{|c|}{BaseTTTEEE+ACT+Lens+EFTBOSS}\\ \hline \hline $H_0$ prior?
& no & yes \\ \hline $f_{\rm EDE}(z_c)$ & $< 0.110 (0.074)$ & $0.108(0.124)^{+0.028}_{-0.021}$ \\ $\log_{10}(z_c)$ & $3.48(3.51)\pm 0.21$ & $3.552(3.531)^{+0.026}_{-0.065}$ \\ $\theta_i$ & unconstrained & $2.77(2.81)^{+0.13}_{-0.070}$ \\ \hline $h$ & $0.691(0.7)^{+0.006}_{-0.013}$ & $0.715(0.72)\pm 0.009$ \\ $\omega{}_{\rm cdm }$ & $0.1229(0.1267)^{+0.0017}_{-0.0042}$ & $0.1300(0.1322)^{+0.0035}_{-0.0031}$ \\ $10^{2}\omega{}_{b }$ & $2.247(2.248)^{+0.015}_{-0.017}$ & $2.260(2.255)\pm 0.018$ \\ $10^{9}A_{s }$ & $2.126(2.133)^{+0.028}_{-0.032}$ & $2.153(2.156)\pm 0.030$ \\ $n_{s }$ & $0.9758(0.9795)^{+0.0049}_{-0.0080}$ & $0.9873(0.9893)\pm 0.0058$ \\ $\tau{}_{\rm reio }$ & $0.0540(0.0534)\pm 0.0070$ & $0.0548(0.0539)\pm 0.0070$ \\ \hline $S_8$ & $0.829(0.843)^{+0.010}_{-0.012}$ & $0.837(0.843)\pm 0.012$ \\ $\Omega{}_{m }$ & $0.3061(0.3052)\pm 0.0054$ & $0.2997(0.3)\pm 0.0047$ \\ \hline total $\chi^2_{\rm min}$ & 4157.6 & 4159.8 \\ $\Delta \chi^2_{\rm min}({\rm EDE}-\Lambda{\rm CDM})$ &-6.4 &-26.1 \\ \hline $Q_{\rm DMAP}$ &\multicolumn{2}{|c|}{1.5$\sigma$}\\ \hline \end{tabular}} \caption{ Mean (best-fit) $\pm 1\sigma$ (or $2\sigma$ for one-sided bounds) of reconstructed parameters in the EDE model confronted to BaseTTTEEE+ACT+Lens+EFTBOSS, with and without SH0ES.} \label{tab:cosmoparam_planck_act} \end{table*} \subsection{EFTBOSS+{\em Planck}TTTEEE+ACT} Except for consistency tests, there are no good reasons to remove part of the high-$\ell$ \Planck{} TT data. In the following, we present results of combined analyses of {\em Planck}TTTEEE+ACT+EFTBOSS (i.e., including the full \Planck{} data) in Tab.~\ref{tab:cosmoparam_planck_act} and Fig.~\ref{fig:EDE_EFT_ACT} (right panel). All relevant $\chi^2$ statistics are given in App.~\ref{app:chi2}, Tab.~\ref{tab:chi2_FullPlanckACT}. We quantify the residual tension with SH0ES using the $Q_{\rm DMAP}$ metric introduced previously.
In that case, we find that the preference for EDE without SH0ES is strongly reduced, in agreement with previous works, but the $2\sigma$ upper limit, $f_{\rm EDE} < 0.110$, is much weaker than in the BaseTTTEEE+Lens+EFTBOSS analysis presented previously, $f_{\rm EDE} < 0.083$. As a result, the tension metric between BaseTTTEEE+ACT+Lens+EFTBOSS and SH0ES is reduced to $1.5\sigma$, compared to $4.7\sigma$ in $\Lambda$CDM (and $2.1\sigma$ without ACT data). When the SH0ES prior is included, we find $f_{\rm EDE} =0.108_{-0.021}^{+0.028}$ and $h = 0.715\pm0.009$ (in very good agreement with the results presented earlier without ACT), with no degradation in the $\chi^2$ of EFTBOSS. This confirms that the EFTBOSS data can accommodate the amount of EDE required to resolve the Hubble tension (with $f_{\rm EDE}\sim 0.1$ and $h\sim 0.72$), but constrain more extreme EDE contributions. \subsection{Impact of Pantheon+ data} \label{sec:PanPlus} \begin{table*}[] \centering \scalebox{0.9}{ \begin{tabular}{|l|c|c|c|c|} \hline & BaseEFTBOSS & BaseTTTEEE+Lens & BaseTTTEEE+Lens& BaseTT650TEEE+ACT+Lens\\ & +PanPlus& +EFTBOSS+PanPlus & +EFTBOSS+PanPlus+SH0ES & +EFTBOSS+PanPlus\\ \hline \hline $f_{\rm EDE}(z_c)$ & $ < 0.228(0.01)$ & $< 0.079(0.056)$ & $0.123(0.141)^{+0.030}_{-0.018}$ & $ < 0.137(0.11)$ \\ $\log_{10}(z_c)$ & unconstrained $(3.91)$ & $3.59(3.57)^{+0.25}_{-0.21}$ & $3.64(3.57)^{+0.23}_{-0.13}$ & $< 3.5(3.5)$ \\ $\theta_i$ & unconstrained$(2.98)$ & unconstrained$(2.74)$ & $2.59(2.77)^{+0.31}_{-0.064}$ & unconstrained$(2.78)$ \\ \hline $h$ & $0.717(0.692)^{+0.015}_{-0.026}$ & $0.684(0.692)^{+0.006}_{-0.001}$ & $0.719(0.724)^{+0.009}_{-0.008}$ & $0.700(0.714)^{+0.013}_{-0.019}$ \\ $\omega{}_{\rm cdm }$ & $0.142(0.131)^{+0.010}_{-0.014}$ & $0.1222(0.1251)^{+0.0015}_{-0.0028}$ & $0.1317(0.1346)\pm 0.0031$ & $0.1258(0.1306)^{+0.0039}_{-0.0058}$ \\ $10^{2}\omega{}_{b }$ & $2.276(0.023)^{+0.035}_{-0.039}$ & $2.251(2.254)\pm 0.018$ & $2.291(2.275)^{+0.020}_{-0.024}$ & 
$2.258(2.259)\pm 0.019$ \\ $10^9A_s$ & $1.88(1.929)^{+0.16}_{-0.20}$ & $2.114(2.148)\pm 0.029$ & $2.155(2.157)^{+0.030}_{-0.036}$ & $2.120(2.135)\pm 0.033$ \\ $n_{s }$ & $0.873(0.889)\pm 0.049$ & $0.9700(0.9752)^{+0.0046}_{-0.0071}$ & $0.9911(0.9912)^{+0.0062}_{-0.0071}$ & $0.9827(0.9877)\pm 0.0081$ \\ $\tau_{\rm reio}$ & $-$ & $0.0562(0.0558)\pm 0.0069$ & $0.0582(0.0554)\pm 0.0077$ & $0.0519(0.0516)^{+0.0065}_{-0.0075}$ \\ \hline $S_8$ & $0.815(0.824)\pm 0.018$ & $0.832(0.837)\pm 0.010$ & $0.840(0.847)\pm 0.012$ & $0.831(0.839)^{+0.012}_{-0.011}$ \\ $\Omega{}_{m }$ & $0.321(0.324)\pm 0.013$ & $0.3116(0.3093)\pm 0.0056$ & $0.3000(0.3014)\pm 0.0047$ & $0.3041(0.3016)\pm 0.0061$ \\ \hline total $\chi^2_{\rm min}$ & 1537.9 &4304.0 & 4187.0 &4085.1 \\ $\Delta \chi^2_{\rm min}\textrm{\scriptsize (EDE-$\Lambda$CDM)}$ & 0 &-1.1 & -32.3 & -9.2 \\ \hline \end{tabular}} \caption{ Mean (best-fit) $\pm 1\sigma$ (or $2\sigma$ for one-sided bounds) of reconstructed parameters in the EDE model confronted with various datasets, including the recent PanPlus SNIa catalogue.} \label{tab:cosmoparam_PanPlu} \end{table*} To finish, we perform an analysis with the new Pantheon+ SNIa catalogue \cite{Brout:2022vxf}, which is known to favor a higher $\Omega_m = 0.338\pm0.018$, to illustrate the impact that these new data have on the EDE model. We perform analyses of four dataset combinations with Pantheon+, following our baseline data, namely BaseEFTBOSS, BaseTTTEEE+Lens+EFTBOSS(+SH0ES) and BaseTT650TEEE+ACT+Lens+EFTBOSS. The results of these analyses are presented in Tab.~\ref{tab:cosmoparam_PanPlu} and in Fig.~\ref{fig:EDE_PanPlus}, while all relevant $\chi^2$ statistics are given in App.~\ref{app:chi2}, Tab.~\ref{tab:chi2_PanPlus}. First, without information from the primary CMB, we find that the combination of EFTBOSS+BBN+Lens+BAO+PanPlus (i.e., BaseEFTBOSS+PanPlus) leads to a weak constraint on $f_{\rm EDE}(z_c)<0.228$ with $h=0.717^{+0.015}_{-0.026}$, in good agreement with SH0ES. 
In fact, even within $\Lambda$CDM we find $h=0.694_{-0.014}^{+0.012}$, which is not in significant tension with SH0ES. This data combination was recently argued to constrain new-physics solutions to the Hubble tension that affect the sound horizon, because measurements of $h$ based on the scale of matter-radiation equality $k_{\rm eq}$ (which can be extracted by marginalizing over the sound horizon information\footnote{More precisely, in Refs.~\cite{Philcox:2020vbm,Philcox:2021kcw,Philcox:2022sgj}, the marginalization over the sound horizon information is intended as a consistency test to be performed within $\Lambda$CDM.}) are in tension with the SH0ES measurement~\cite{Philcox:2020vbm,Philcox:2021kcw,Philcox:2022sgj}. We stress that we do not marginalize over the sound horizon in our EFTBOSS analysis. We do not expect that removing part of the data through the marginalization procedure would make BOSS data appear in strong tension with SH0ES, at least in EDE. Rather, we expect that the constraints would significantly weaken. We leave it for future work to test whether the determination of $h$ from $k_{\rm eq}$ is robust to changes in the cosmological model. When combining with \Planck TTTEEE, we find that constraints on EDE are tightened by $\sim 5\%$ with respect to the analogous analysis with Pantheon18, with $f_{\rm EDE}(z_c)<0.079$. This can be understood by noting that the larger $\Omega_m$ favored by Pantheon+, coupled with the positive correlation between $f_{\rm EDE}(z_c)$ and $h$, can lead to high values of $\omega_m=\Omega_m h^2$, which are constrained by CMB data. However, once the SH0ES cepheid calibration of SNIa is included, we find a strong preference for EDE, with $f_{\rm EDE}(z_c)=0.123^{+0.030}_{-0.018}$ (i.e., non-zero at more than $5\sigma$) and a $\Delta\chi^2({\rm EDE}-\Lambda{\rm CDM})=-32.3$ (compared to $-22.7$ with Pantheon18). 
The cost in $\chi^2$ for \Planck TTTEEE+Lens and EFTBOSS compared to the analysis without the SH0ES calibration is small, with $\chi^2({\rm \Planck})$ increasing by $+2.3$ and $\chi^2({\rm EFTBOSS})$ increasing by $+0.9$, which further attests to the non-Gaussianity of the posterior in the absence of the SH0ES calibration. The $Q_{\rm DMAP}$ tension metric introduced earlier cannot be used as easily here, because the SH0ES data are now modeled in a more involved way, making use of a correlation matrix connecting SNIa calibrators and high$-z$ SNIa \cite{Riess:2021jrx}, rather than a simple Gaussian prior on $h$. Finally, when combining with \Planck~TT650TEEE and ACT, we find that the preference for EDE seen within ACT data further decreases to $\Delta\chi^2=-9.2$ (2.2$\sigma$) and we derive a limit $f_{\rm EDE}(z_c)<0.137$, with $h=0.700^{+0.013}_{-0.019}$ and a $\lesssim 2\sigma$ tension with SH0ES. We defer to future work further tests of the ability of EDE (and other promising models) to resolve the Hubble tension in light of this new Pantheon+ SNIa catalogue. \section{Discussion and Conclusions} \label{sec:conclusions} Developments in the predictions of galaxy clustering statistics from the EFTofLSS have made it possible to study BOSS data beyond the conventional analyses dedicated to extracting BAO and $f\sigma_8$ information. There have been in the recent literature a number of studies aiming at measuring the $\Lambda$CDM parameters at a precision comparable with that of \Planck~CMB data (see e.g. Refs.~\cite{DAmico:2019fhj,Ivanov:2019pdj,Colas:2019ret,DAmico:2020kxu,DAmico:2020tty,Chen:2021wdi,Zhang:2021yna,Philcox:2021kcw}). Additionally, it was shown that BOSS full-shape data, when analyzed using the one-loop predictions from the EFTofLSS (here called EFTBOSS data), can lead to strong constraints on extensions of the $\Lambda$CDM model. 
In particular, the EDE model, currently one of the most promising models to resolve the Hubble tension \cite{Poulin:2018cxd,Schoneberg:2021qvd}, was shown to be severely constrained by EFTBOSS data \cite{DAmico:2020ods,Ivanov:2020ril}. However, it was subsequently argued that part of the constraints may come from a mismatch in the amplitude $A_s$ of the primordial power spectrum between EFTBOSS and \Planck~\cite{Smith:2020rxx}. Recently, it was found that the original EFTBOSS data used in these analyses were affected by an inconsistency between the normalization of the survey window function and that of the data measurements, which led to a mismatch in $A_s$. A proper reanalysis of the EFTBOSS constraints on the EDE model was lacking until now. In this paper, we have performed a thorough investigation of the constraints on EDE in light of the correctly normalized EFTBOSS data, and estimated the shifts in the reconstructed cosmological parameters and their errors between various analysis strategies. A similar analysis within the $\Lambda$CDM model is presented in Sec. IV of our companion paper~\cite{Paper1}. Our results are summarized in the following. \\ \subsection{EFTBOSS constraints on EDE alone} We have shown in Sec.~\ref{sec:EDE_arena_main} that, regardless of the BOSS data or the likelihood we consider, the BOSS full-shape data (analyzed on their own with a BBN prior) lead to reconstructed values of $H_0$ that are compatible with what is obtained by the SH0ES collaboration. 
Yet, the various EFTBOSS measurements, as well as the \code{PyBird} and \code{CLASS-PT} likelihoods, do not have the same constraining power on EDE: \begin{itemize} \item[\textbullet] When using the \code{PyBird} likelihood, we found $f_{\rm EDE}(z_c) < 0.321$ when analyzing $P_\textsc{fkp}^\textsc{lz/cm} + \alpha^\textsc{lz/cm}_{rec}$, while analyzing $P_\textsc{quad}^{z_1/z_3} + \alpha^{z_1/z_3}_{rec}$ yields $f_{\rm EDE}(z_c) < 0.382$, a $\sim 20\%$ difference.\\ \item[\textbullet] When using the same BOSS data, namely $P_\textsc{quad}^{z_1/z_3}$, we have found that the \code{PyBird} likelihood gives $f_{\rm EDE}(z_c) < 0.382$, while the \code{CLASS-PT} likelihood gives $f_{\rm EDE}(z_c) < 0.448$, i.e., a $\sim 15\%$ difference.\\ \item[\textbullet] Restricting our analysis to the range of critical redshifts $\log_{10}(z_c)\in[3.4,3.7]$ which can resolve the Hubble tension, we have shown that the combination of EFTBOSS+BBN+Lens+BAO+Pan18 leads to the constraints $f_{\rm EDE}(z_c)<0.2$ (95\% C.L.) and $h = 0.710_{-0.025}^{+0.015}$, which do not exclude the EDE models resolving the Hubble tension. \\ \item[\textbullet] The inclusion of the recent Pantheon+ data does not affect this conclusion, as we find $h=0.717^{+0.015}_{-0.026}$. We do not expect that marginalizing over the sound horizon as done in Refs.~\cite{Philcox:2020vbm,Philcox:2021kcw,Philcox:2022sgj} would alter our conclusions, as it would simply remove information from the data. This question will be thoroughly explored elsewhere. 
\end{itemize} \subsection{{\em Planck}+EFTBOSS~constraints on EDE} In combination with \Planck~TTTEEE data, we have shown that constraints on EDE have changed due to the correction of the normalization of the window function: \begin{itemize} \item[\textbullet] The combination of \Planck TTTEEE+Lens+BAO +Pan18+EFTBOSS leads to $f_{\rm EDE}(z_c)<0.083$, which is a $\sim10\%$ improvement over the constraints without BOSS data, and a $\sim 5\%$ improvement over the constraints with conventional BAO/$f\sigma_8$ data. Yet, this is much weaker than the constraint reported with the incorrect normalization, namely $f_{\rm EDE}<0.054$. We quantify that the Hubble tension is reduced to the $2.1\sigma$ level in the EDE cosmology ($1.9\sigma$ without EFTBOSS) compared to $4.8\sigma$ in the $\Lambda$CDM model, and we find $f_{\rm EDE}(z_c)=0.103^{+0.027}_{-0.023}$ at $z_c=3970^{+255}_{-205}$ when the SH0ES prior is included. \item[\textbullet] Replacing Pantheon18 with the new Pantheon+ data improves the constraints on EDE to $f_{\rm EDE}(z_c)<0.079$. Yet, the inclusion of the SH0ES cepheid calibration leads to $f_{\rm EDE}(z_c)=0.123^{+0.030}_{-0.018}$ at $z_c=4365^{+3000}_{-1100}$, i.e., a non-zero $f_{\rm EDE}(z_c)$ at more than $5\sigma$ with $\Delta\chi^2({\rm EDE}-\Lambda{\rm CDM})=-32.3$. The cost in $\chi^2$ for \Planck TTTEEE+Lens and EFTBOSS compared to the analysis without the SH0ES calibration is small, with $\chi^2({\rm \Planck})$ increasing by $+2.3$ and $\chi^2({\rm EFTBOSS})$ increasing by $+0.9$, which attests to the non-Gaussianity of the posterior in the absence of the SH0ES calibration. This deserves to be studied further through a profile likelihood approach \cite{Herold:2021ksg,Reeves:2022aoi}. 
\end{itemize} \subsection{ACT+EFTBOSS constraints on EDE} Finally, we have studied the impact of EFTBOSS data on the recent hints of EDE observed within ACT DR4 data: \begin{itemize} \item[\textbullet] EFTBOSS reduces the preference for EDE over $\Lambda$CDM seen when analyzing ACT DR4, alone or in combination with restricted \Planck TT data. The combination of \Planck TT650TEEE+Lens +BAO+Pan18+ACT+EFTBOSS leads to a mild constraint on $f_{\rm EDE}(z_c)<0.172$ with $\Delta\chi^2({\rm EDE}-\Lambda{\rm CDM})=-11.1$, to be compared with $f_{\rm EDE}(z_c)=0.128^{+0.064}_{-0.039}$ without EFTBOSS+Lens, with $\Delta\chi^2({\rm EDE}-\Lambda{\rm CDM})=-14.6$. \item[\textbullet] The inclusion of Pantheon+ data further restricts $f_{\rm EDE}(z_c)<0.137$, with $\Delta\chi^2({\rm EDE}-\Lambda{\rm CDM})=-9.2$. \item[\textbullet] When full \Planck{} data are included, we derive a constraint $f_{\rm EDE}(z_c)<0.110$, which is $\sim 30\%$ weaker than without ACT data. When all CMB data are included in combination with EFTBOSS, the Hubble tension is reduced to $1.5\sigma$ in the EDE model, to be compared with $4.7\sigma$ in $\Lambda$CDM. The inclusion of the SH0ES prior leads to $f_{\rm EDE}(z_c)=0.108^{+0.028}_{-0.021}$ at $z_c = 3565^{+220}_{-495}$. \end{itemize} We conclude that EFTBOSS data do not exclude EDE as a resolution to the Hubble tension: we consistently find $f_{\rm EDE}(z_c)\sim 10-12\%$ at $z_c\sim3500-4000$, with $h\sim 0.72$, when the cepheid calibration is included in the analyses. However, EFTBOSS data do constrain the very high EDE fractions seen when analyzing ACT DR4 data. \subsection{Final comments} There are a number of relevant caveats to stress regarding our analyses. 
First, we note that the reconstructed $S_8$ values from the various analyses that favor EDE are $\sim 2.8-3.2\sigma$ higher than those coming from weak lensing measurements (and their cross-correlation with galaxy clustering) such as DES \cite{DES:2021wwk} and KiDS \cite{Heymans:2020gsg}. As was already pointed out in the past, this indicates that weak lensing data (and the existence of a ``$S_8$ tension'') could be used to further restrict the existence of EDE. Nevertheless, it has been noted that the $S_8$ tension may be due to systematic effects \cite{Amon:2022ycy}, to non-linear modelling including the effect of baryons at very small scales \cite{Amon:2022azi}, or may point to a more complete dynamics in the dark sector \cite{McDonough:2021pdg,Sabla:2022xzj}. In fact, models that resolve the $S_8$ tension leave the EDE resolution unaffected \cite{Allali:2021azp,Clark:2021hlo}, such that, although perhaps theoretically unappealing, it is possible that the solutions to the $H_0$ and $S_8$ tensions lie in different sectors. We leave for future work a robust study of EDE in light of the combination of EFTBOSS and weak lensing data, which will require a better handling of the modelling of physical effects at scales beyond the range of validity of our EFT. Second, it will be very important to extend this work to include the bispectrum, which was recently analyzed at the one-loop level within $\Lambda$CDM~\cite{DAmico:2022osl,DAmico:2022gki}. It will also be interesting to see if the eBOSS surveys can shed light on EDE~\cite{eBOSS:2020yzd}: although the inclusion of eBOSS BAO was shown not to significantly modify the constraints on EDE (see e.g.~Refs.~\cite{Schoneberg:2021qvd,DAmico:2020ods}), the analysis of the full shape of eBOSS quasars may have the potential to put stronger limits given the large size of the survey. 
Additional constraints on EDE may also arise from measurements of the age of old objects such as globular clusters of stars \cite{Bernal:2021yli,Boylan-Kolchin:2021fvy}, or the halo mass function at high$-z$ \cite{Klypin:2020tud}. Interestingly, using $N$-body simulations, Ref.~\cite{Klypin:2020tud} showed that EDE predicts 50\% more massive clusters at $z = 1$ and twice as many galaxy-mass halos at $z = 4$ as \lcdm. These predictions can be tested by observations from JWST, and the first publicly available data are, in part, better fit by EDE than \lcdm\ \cite{Boylan-Kolchin:2022kae}. To close this work, we mention that we find here, in agreement with previous literature, that the cosmological data including SH0ES prefer a higher value for the spectral tilt $n_s$ in the EDE model than in $\Lambda$CDM, with $n_s \sim 1$ allowed at $\lesssim 2\sigma$ depending on the combination of data considered. Of interest here, we see that the inclusion of EFTBOSS data does not significantly pull $n_s$ back to lower values, and EFTBOSS data analyzed alone (with a BBN prior) also independently favor a value of $n_s$ consistent with scale-invariance at $\sim 1\sigma$. A value of $n_s$ close to that of the Harrison–Zeldovich spectrum, when put in perspective with CMB measurements of the tensor-to-scalar ratio, would dramatically change the status of the preferred inflationary models~\cite{DAmico:2021fhz} (see also~Refs.~\cite{Kallosh:2022ggf,Jiang:2022uyg}). Therefore, if EDE is firmly detected with future cosmological data, beyond serving as a resolution of the $H_0$ tension, it would also have important consequences for early-Universe physics. \begin{acknowledgements} We thank Adam Riess for interesting discussions and comments at various stages of the project, as well as for kindly sharing Pantheon+ and SH0ES data, and Dillon Brout and Dan Scolnic for their invaluable help with the implementation of the likelihood into \code{MontePython}. 
We thank Guillermo Franco Abellán and José Louis Bernal for their contribution in the early stages of this project, and Guido D'Amico and Kevin Pardede for useful discussions. PZ would like to thank the organizers of the workshop \emph{LSS2022: Recent Developments in Theoretical Large-Scale Structure - IFPU} for hospitality in Trieste during the late stage of completion of this project. This work has been partly supported by the CNRS-IN2P3 grant Dark21. The authors acknowledge the use of computational resources from the Excellence Initiative of Aix-Marseille University (A*MIDEX) of the “Investissements d’Avenir” programme. This project has received support from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 860881-HIDDeN. This work used the Strelka Computing Cluster, which is run by Swarthmore College. TLS is supported by NSF Grant No.~2009377, NASA Grant No.~80NSSC18K0728, and the Research Corporation. \end{acknowledgements} \appendix \section{Window function normalization}\label{app:normalization} As discussed in Refs.~\cite{deMattia:2019vdg,deMattia:2020fkb,Beutler:2021eqq} (see also~\cite{Sugiyama:2018yzo}), the window function measurements, which are required for an accurate theoretical calculation, have to be consistently normalized with the power spectrum measurements. The estimator for the power spectrum we are concerned with is the FKP estimator~\cite{Feldman:1993ky}, later generalized to redshift space in Refs.~\cite{Yamamoto:2002bc,Yamamoto:2005dz}. For fast estimation using FFTs~\cite{Bianchi:2015oia,Scoccimarro:2015bla}, the line-of-sight for a given galaxy pair is chosen to be in the direction of one of the galaxies in the pair, $\r_1$. For clarity in the discussion that follows, let us first gather pieces of derivations that can be found partially in Refs.~\cite{Beutler:2018vpe,DAmico:2019fhj}. 
It is easy to see that the expectation value of the FKP power spectrum estimator reads (see e.g.~\cite{Zhang:2021uyp}): \begin{widetext} \begin{equation} \braket{ \hat P_\ell(k) } = \frac{2\ell+1}{N_P} \int \frac{d\Omega_k}{4\pi} d^3r_1 d^3s \, e^{-i \k \cdot \s} \Theta(\r_1) \Theta(\r_1 + \s) \bar n_w(\r_1) \bar n_w(\r_1 + \s) \xi(\s, \r_1) \mathcal{L}_\ell(\hat k \cdot \hat r_1) \, , \end{equation} where $\mathcal{L}_\ell$ is the Legendre polynomial of order $\ell$. Here $\bar n_w(\r) \equiv w(\r) \bar n(\r)$ is the weighted mean galaxy density, with the weight $w(\r)$ being the FKP weights times some correction weights (usually to account for veto and instrumental/observational systematics), $\Theta(\r)$ is $1$ if the galaxy at position $\r$ falls inside the survey, $0$ otherwise, and $\xi(\s, \r_1)$ is the correlation function, with $\s$ the separation between two galaxies. Importantly, $N_P$ is a normalization factor that is \emph{chosen by the user}, as we will make precise below. Using the following identity: \begin{equation} \int \frac{d\Omega_k}{4 \pi} e^{-i \k \cdot \s} \mathcal{L}_\ell (\hat k \cdot \hat r_1) = (-i)^\ell j_\ell(ks) \mathcal{L}_\ell (\hat s \cdot \hat r_1) \ , \end{equation} where $j_\ell$ is the spherical Bessel function of order $\ell$, we obtain \begin{equation}\label{eq:mean_fkp_estimator} \braket{ \hat P_\ell(k)} = \frac{(2\ell+1)}{N_P} (-i)^\ell \int ds \, s^2 j_\ell(ks) \int d\Omega_s \int d^3 r_1 \Theta(\r_1) \Theta(\r_1 + \s) \bar n_w(\r_1) \bar n_w(\r_1 + \s) \xi(\s, \r_1) \mathcal{L}_\ell(\mu) \ , \end{equation} where we have introduced the notation $\mu \equiv \hat s \cdot \hat r_1$. We now make the following approximation. 
We assume that the redshift evolution of the correlation function can be neglected within the observational bin, such that $\xi(\s, \r_1) \equiv \xi(s, \mu, r_1(z)) \simeq \xi(s, \mu, z_{\rm eff}) \equiv \xi(s, \mu)$, where the latter is evaluated at the effective redshift $z_{\rm eff}$ of the survey.~\footnote{See Ref.~\cite{Zhang:2021uyp} for a BOSS analysis that does not rely on this approximation.} As such, we can pull out $\xi(s, \mu)$ from the integral over $d^3 r_1$. We can further expand in multipoles $\xi(s, \mu) = \sum_{\ell'} \xi_{\ell'}(s) \mathcal{L}_{\ell'}(\mu)$ to pull out $\xi_{\ell'}(s)$ from the angular integrals. Then, using the identity \begin{equation} \mathcal{L}_{\ell}(\mu) \mathcal{L}_{\ell'}(\mu) = \sum_L \left(\begin{matrix} \ell & L & \ell'\\ 0 & 0 & 0 \end{matrix}\right)^2 (2L+1) \mathcal{L}_L(\mu) \ , \end{equation} where $\left(\begin{matrix} \ell & L & \ell'\\ 0 & 0 & 0 \end{matrix}\right)$ are the Wigner 3-j symbols, we get \begin{equation} \braket{ \hat P_\ell(k)} = 4\pi (2\ell+1) (-i)^\ell \sum_{\ell', L} \left(\begin{matrix} \ell & L & \ell'\\ 0 & 0 & 0 \end{matrix}\right)^2 \int ds \, s^2 j_\ell(ks) \xi_{\ell'}(s) Q_L(s) \ , \end{equation} where we have defined the window functions \begin{equation}\label{eq:Q_L} Q_L(s) \equiv \frac{(2L+1)}{N_P}\int \frac{d\Omega_s}{4\pi} \int d^3 r_1 \Theta(\r_1) \Theta(\r_1 + \s) \bar n_w(\r_1) \bar n_w(\r_1 + \s)\mathcal{L}_L(\mu) \ . 
\end{equation} Inserting the relation between the multipoles of the correlation function and those of the power spectrum, \begin{equation}\label{eq:xi_from_ps} \xi_{\ell'}(s) = i^{\ell'} \int \frac{dk'}{2\pi^2} k'^2 \, P_{\ell'}(k') j_{\ell'}(k's) \ , \end{equation} we finally obtain \begin{equation}\label{eq:P_W} \braket{\hat P_\ell(k)} = \int dk' \ k'^2 \sum_{\ell'} W_{\ell \ell'}(k, k') P_{\ell'}(k') \ , \end{equation} where we have defined \begin{equation}\label{eq:W} W_{\ell, \ell'}(k, k') = \frac{2}{\pi} (2\ell+1) (-i)^\ell i^{\ell'} \int ds \ s^2 j_\ell(ks) j_{\ell'}(k's) \sum_L \left(\begin{matrix} \ell & L & \ell'\\ 0 & 0 & 0 \end{matrix}\right)^2 Q_L(s) \ . \end{equation} \end{widetext} Notice that, for clarity, we have neglected the integral constraints~\cite{deMattia:2019vdg}, as well as wide-angle contributions~\cite{Beutler:2018vpe}.~\footnote{We have checked that neglecting the integral constraints in the BOSS full-shape analysis leads to small shifts in the posteriors of $\lesssim \sigma/4$.} Our master formula is Eq.~\eqref{eq:P_W}: to predict the observed power spectrum $\braket{\hat P_\ell(k)}$, we simply need to convolve our predictions $P_{\ell'}(k')$ with $W_{\ell, \ell'}(k,k')$ given by Eq.~\eqref{eq:W}. $W_{\ell, \ell'}(k,k')$ can be pre-computed, and the only input we need is $Q_L(s)$. The window function $Q_L(s)$, Eq.~\eqref{eq:Q_L}, can be obtained in the following way~\cite{Beutler:2018vpe}. Using Eq.~\eqref{eq:xi_from_ps} and the identity: \begin{equation} \int dk \frac{(ks)^2}{2\pi^2} j_L(ks) j_L(ks') = \frac{1}{4\pi}\delta_D(s-s') \ , \end{equation} where $\delta_D$ is the Dirac delta distribution, we see that \begin{equation}\label{eq:Q_L_s_from_Q_L_k} Q_L(s) = (-i)^L \int \frac{dk}{2\pi^2} k^2 \mathcal{Q}_L(k) j_L(ks)\ , \end{equation} where $\mathcal{Q}_L(k)$ is the expectation value of a power spectrum as defined in Eq.~\eqref{eq:mean_fkp_estimator} given $\xi(\s, \r_1) \equiv 1$. 
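In practice, once $W_{\ell\ell'}(k,k')$ has been pre-computed, applying the master formula Eq.~\eqref{eq:P_W} amounts to a quadrature-weighted matrix product. The following is a minimal numerical sketch, not the actual pipeline: the grids, the toy spectra, and the narrow Gaussian placeholder windows are all illustrative assumptions standing in for the matrices built from the measured $Q_L(s)$ via Eq.~\eqref{eq:W}:

```python
import numpy as np

# Illustrative grids (NOT the actual BOSS binning)
k = np.linspace(0.01, 0.2, 40)     # observed wavenumbers [h/Mpc]
kp = np.linspace(0.005, 0.4, 400)  # integration grid k'
dkp = kp[1] - kp[0]
ells = (0, 2, 4)

# Toy theory multipoles P_{ell'}(k'); a real analysis uses the EFT prediction
P_theory = {0: 1.0e4 * np.exp(-kp / 0.1),
            2: 0.5e4 * np.exp(-kp / 0.1),
            4: 0.1e4 * np.exp(-kp / 0.1)}

# Placeholder window matrices W_{ell ell'}(k, k'): diagonal in multipole and
# narrow in (k - k'), standing in for the matrices pre-computed from Q_L(s)
W = {(l, lp): 10.0 * (l == lp) * np.exp(-(k[:, None] - kp[None, :])**2 / 2e-4)
     for l in ells for lp in ells}

def convolved_multipole(ell):
    """<P_ell(k)> = int dk' k'^2 sum_{ell'} W_{ell ell'}(k, k') P_{ell'}(k')."""
    integrand = sum(W[(ell, lp)] * (kp**2 * P_theory[lp])[None, :] for lp in ells)
    return integrand.sum(axis=1) * dkp  # simple Riemann sum over k'

P_obs = {l: convolved_multipole(l) for l in ells}
```

With this diagonal, narrow window the result is just a smoothed, rescaled version of the input multipoles; the real $W_{\ell\ell'}$ mixes multipoles and couples large-scale modes, which is why its overall normalization must match that of the power spectrum measurements.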
Since Eq.~\eqref{eq:mean_fkp_estimator} with $\xi(\s, \r_1) \equiv 1$ defines $\mathcal{Q}_L(k)$, it can be measured as the power spectrum $P^r_L(k)$ of random objects (whose distribution approaches a Poisson process) within the same survey geometry: \begin{equation}\label{eq:Q_L_k} \mathcal{Q}_L(k) \equiv \alpha \braket{\hat P^r_L(k)} \ , \end{equation} where $\alpha=N_g/N_r$ is the ratio of the number of data ``galaxy'' objects to the number of random objects. Such a catalog of random objects is already available to us, as it is also required for the estimation of the power spectrum. The key point is the following: $\mathcal{Q}_L(k)$ is normalized by the same normalization factor as $P_\ell(k)$, namely $N_P$. As such, in the limit of vanishing separation, $s\rightarrow 0$, the window function monopole does not go to unity, $Q_0(s) \neq 1$, but instead \begin{equation}\label{eq:Q_0} Q_0(s\rightarrow 0) \rightarrow \frac{1}{N_P}\int d^3 r_1 \bar n_w^2(\r_1) \ . \end{equation} Given that one does not know the value of the numerator in the equation above prior to making the measurement, $N_P$ can only be estimated \emph{approximately} in order to have $Q_0(s)$ approaching $1$ at vanishing separation $s\rightarrow 0$. It is in this sense that $N_P$ is chosen by the user. However, the normalization choice is not important as long as the window function measurements are consistently normalized with the power spectrum measurements. Given the measurement protocol sketched above, this is automatic if one is able to evaluate Eq.~\eqref{eq:Q_L_s_from_Q_L_k} accurately.~\footnote{At~\url{https://github.com/pierrexyz/fkpwin}, we provide a code written to perform the window function measurements, based on \code{nbodykit}. Let us note that we find that it is not straightforward to get a precise measurement of $\hat{\mathcal{Q}}_L(k)$, namely, the power spectrum of the random objects, over the \emph{whole} range of $k$ for which $\hat{\mathcal{Q}}_L(k)$ contributes significantly to the integral in Eq.~\eqref{eq:Q_L_s_from_Q_L_k}. 
Furthermore, the estimator in Eq.~\eqref{eq:Q_L_k} might have a non-negligible variance, given that only one catalog is used. We have nevertheless checked that letting the normalization of the window functions differ from that of the power spectrum by a few percent leads to tolerable shifts in the posteriors ($\lesssim 1\sigma/5$) inferred when fitting BOSS data. For future large-volume datasets, it would however be desirable to have a better numerical control over the measurements of $Q_L(s)$, such that the normalization consistency with $P_\ell(k)$ is achieved to sufficient accuracy given the increasing precision of the data. } In past BOSS full-shape analyses, e.g.~\cite{DAmico:2019fhj,Ivanov:2019pdj,Colas:2019ret,Ivanov:2020ril,DAmico:2020ods}, the window function normalization was instead inconsistently enforced to $Q_0^{\rm wrong}(0) \equiv 1$, while in reality $Q_0(0) \sim 0.9$ given the choice of $N_P$. Such an inconsistency of $\sim 0.9$ led to shifts in $A_s$ of around $- 1\sigma$, depending on the normalization choice. Let us list two choices for the normalization factor $N_P$: \begin{itemize} \item \textit{Choice 1}: $N_P = \alpha \sum_{\lbrace i \in \text{randoms}\rbrace} \bar n(\r_i) w^2_{\rm FKP}(\r_i)$.~\footnote{Naively, one might think that the sum over enough objects is a good approximation to the volume integral; actually, \textit{Choice 1} poorly estimates the integral in Eq.~\eqref{eq:Q_0} because, in the FKP estimator, $\bar n$ is measured from the grid for FFT with finite cell resolution, while in \textit{Choice 1} we are counting the objects instead. } This was the choice in Ref.~\cite{BOSS:2016psr}, whose measurements were used in e.g. Refs.~\cite{Ivanov:2019pdj,Ivanov:2020ril}. 
\item \textit{Choice 2}: $N_P = \mathcal{A} \int dr\, \bar n_w^2(r)$, where $\bar n_w(r)$ is inferred by counting galaxies and binning them in shells, and $\mathcal{A}$ is an associated estimated area.~\footnote{We thank Hector Gil-Mar\'in for private correspondence on this point.} This was the choice in Ref.~\cite{Gil-Marin:2015sqa}, whose measurements, $\mathcal{P}_\textsc{fkp}^\textsc{lz/cm}$, were used in e.g. Refs.~\cite{DAmico:2019fhj,Colas:2019ret,DAmico:2020ods}. $\mathcal{P}_\textsc{fkp}^\textsc{lz/cm}$, as defined in Table~\ref{tab:twopoint_summary}, is assigned window functions that are inconsistently normalized. \end{itemize} We stress again that those choices are not important as long as the same $N_P$ is used to normalize the window functions and the power spectrum measurements. As already mentioned in the main text, except for $\mathcal{P}_\textsc{fkp}^\textsc{lz/cm}$, which is used in this paper for illustration purposes, all power spectrum measurements obtained with the FKP estimator, namely $P_\textsc{fkp}^\textsc{lz/cm}$ and $P_\textsc{fkp}^{z_1/z_3}$, are consistently normalized with their window functions (see Table~\ref{tab:twopoint_summary} for more details on the measurements). We finish this section by noting that, in analyses using measurements obtained from the FKP estimator, but also from the other estimators, the posteriors may depend on the effective-redshift approximation used above. This suggests that, for each estimator, more work is needed to understand the accuracy of this approximation, along the lines of e.g.~\cite{Zhang:2021uyp} for the correlation function. \\ In Fig.~\ref{fig:inconsist_norm}, we show a comparison of the 1D posteriors from the full-shape analysis of BOSS power spectrum measurements obtained with the FKP estimator, using window functions with consistent or inconsistent normalization. 
The inconsistency leads to a lower amplitude $A_s$, or equivalently $\sigma_8$, as well as a higher $\Omega_m \sim f$, where $f$ is the logarithmic growth rate, through their anti-correlation. We find notable shifts in $\omega_{\rm cdm}$, $\ln(10^{10}A_s)$, $\Omega_m$ and $\sigma_8$ of 0.9$\sigma$, 1.1$\sigma$, 1.1$\sigma$ and 0.8$\sigma$, respectively. \section{Additional comparison between the PyBird and CLASS-PT likelihoods in EDE} \label{app:classpt_vs_pybird_EDE} In Figs.~\ref{fig:EDE_PyBird_vs_CLASSPT}, \ref{fig:EDE_Planck_PyBird_vs_CLASSPT}, and \ref{fig:EDE_ACT_noLens_PyBird_vs_CLASSPT}, we show the 2D posterior distributions reconstructed from BaseEFTBOSS, BaseTTTEEE+Lens+EFTBOSS, and BaseTT650TEEE+ACT+Lens+EFTBOSS, respectively, comparing the results from the \code{PyBird} and the \code{CLASS-PT} likelihoods.~\footnote{For this comparison, LOWZ SGC is not included in the \code{PyBird} likelihood. As expected, we have checked that the addition of this sky-cut does not change the posteriors for the corresponding analyses. } In addition, we recall that EFTBOSS corresponds to $P_\textsc{fkp}^\textsc{lz/cm} + \alpha^{z_1/z_3}_{rec}$ in the framework of the \code{PyBird} likelihood and to $P_\textsc{quad}^{z_1/z_3} + \beta^{z_1/z_3}_{rec}$ in the framework of the \code{CLASS-PT} likelihood (see Tab.~\ref{tab:twopoint_summary}). The most striking differences occur in the BaseEFTBOSS-alone case, for which \code{CLASS-PT} leads to much weaker constraints on $f_{\rm EDE}(z_c)$ and much larger error bars on $h$ and $\omega_{\rm cdm}$. The origin of these differences can be traced back to the discussion presented in our companion paper~\cite{Paper1}, namely to the choice of the power spectrum estimators, the BOSS post-reconstructed measurements used, the scale cut, the number of multipoles, and, more importantly, the choice of EFT parameter priors. 
Once {\Planck}TTTEEE or {\Planck}TT650TEEE+ACT data are included in the analysis, we find that the reconstructed posteriors are very similar between the two EFTBOSS implementations, and mostly driven by the CMB data. We conclude that the main results of this paper, drawn from the combination of CMB and LSS data, are unaffected by the choice of EFT implementation. However, parameter reconstructions based on EFTBOSS data alone may vary at the $1\sigma$ level. \section{$\chi^2$ per experiment} \label{app:chi2} In this appendix, we report the best-fit $\chi^2$ per experiment for both the $\Lambda$CDM and EDE models. Tabs.~\ref{tab:chi2_Planck_LCDM} and \ref{tab:chi2_Planck_EDE} present the runs including {\it Planck} data, Tab.~\ref{tab:chi2_ACT} the runs including ACT data, and Tab.~\ref{tab:chi2_FullPlanckACT} the combination of the full {\it Planck} and ACT data. Finally, Tab.~\ref{tab:chi2_PanPlus} presents runs including the PanPlus data. \begin{table*}[t!] \def\arraystretch{1.2} \scalebox{1}{ \begin{tabular}{|l|c|c|c|c|c|c|} \hline \multicolumn{7}{|c|}{$\Lambda$CDM} \\ \hline \hline {\emph{Planck}}~high$-\ell$ TTTEEE &2342.2 & 2345.0 &2342.2 & 2344.6&2342.2& 2345.2 \\ {\emph{Planck}}~low$-\ell$ TT & 23.4& 22.9 & 23.5&23.0 & 23.4& 22.8\\ {\emph{Planck}}~low$-\ell$ EE &396.3 & 397.2 & 396.1& 397.2 & 396.3 & 397.2 \\ {\emph{Planck}}~lensing &8.9 & 9.4 & 9.0 &9.4 & 9.0 & 9.4 \\ BOSS BAO low$-z$& 1.2& 1.9 &1.2 & 1.8 & 1.2 & 1.9\\ BOSS BAO DR12& 4.3& 3.4 &$-$ & $-$& $-$&$-$ \\ BOSS BAO/$f\sigma_8$ DR12&$-$ &$-$ &6.7 & 5.9& $-$&$-$ \\ EFTBOSS CMASS &$-$ & $-$ &$-$ &$-$ & 84.6&83.1 \\ EFTBOSS LOWZ&$-$ & $-$ & $-$&$-$ & 33.5& 33.7 \\ Pantheon &1027.2 & 1026.9 & 1027.2& 1026.9 & 1027.2 & 1026.9\\ SH0ES & $-$& 19.9& $-$& 20.4& $-$ &19.8 \\ \hline total $\chi^2_{\rm min}$ & 3803.6& 3826.6 &3805.7 & 3829.1 & 3917.4&3940.0 \\ \hline $Q_{\rm DMAP}$ & \multicolumn{2}{|c|}{4.8$\sigma$}&\multicolumn{2}{|c|}{4.8$\sigma$} &\multicolumn{2}{|c|}{4.8$\sigma$}\\ \hline 
\end{tabular}} \caption{Best-fit $\chi^2$ per experiment (and total) for $\Lambda{\rm CDM}$ when fit to different data combinations: BaseTTTEEE+Lens, BaseTTTEEE+Lens+$f\sigma_8$, BaseTTTEEE+Lens+EFTBOSS, with and without SH0ES. We also report the tension metric $Q_{\rm DMAP}\equiv \sqrt{\chi^2({\rm w/~SH0ES})-\chi^2({\rm w/o~SH0ES})}$. } \label{tab:chi2_Planck_LCDM} \end{table*} \begin{table*}[t!] \def\arraystretch{1.2} \scalebox{1}{ \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \multicolumn{7}{|c|}{EDE} \\ \hline \hline {\emph{Planck}}~high$-\ell$ TTTEEE & 2339.4& 2341.5 & 2339.1& 2340.9 & 2339.3& 2341.1\\ {\emph{Planck}}~low$-\ell$ TT & 21.8 & 20.4 & 22.0 & 20.6 & 21.1& 20.5 \\ {\emph{Planck}}~low$-\ell$ EE & 396.4 &396.8 & 396.1& 396.4& 396.1 & 396.9 \\ {\emph{Planck}}~lensing & 9.5& 10.0 &9.3 & 9.9 & 9.6 & 9.9\\ BOSS BAO low$-z$ & 1.6&1.8 & 1.4& 1.7& 1.4 & 1.9 \\ BOSS BAO DR12 & 3.7&3.5 & $-$& $-$ & $-$& $-$\\ BOSS BAO/$f\sigma_8$ DR12 &$-$ &$-$ &6.5& 7.0 &$-$ &$-$\\ EFTBOSS CMASS&$-$ &$-$ & $-$& $-$&84.1 & 83.3 \\ EFTBOSS LOWZ & $-$&$-$ &$-$ &$-$ & 34.0 & 34.4 \\ Pantheon & 1027.0& 1026.9 & 1027.0&1026.9 & 1027.0 & 1026.9\\ SH0ES & $-$&2.0 &$-$ & 3.2 & $-$ & 2.3 \\ \hline total $\chi^2_{\rm min}$ & 3799.2&3802.9 & 3801.8& 3806.1 & 3912.7 & 3917.3\\ $\Delta \chi^2_{\rm min}({\rm EDE}-\Lambda{\rm CDM})$ & -3.8 & -23.7 & -3.9&-23.0 & -4.7 & -22.7\\ Preference over $\Lambda$CDM & 1$\sigma$& 4.2$\sigma$& 1.1$\sigma$& 4.1$\sigma$ & 1.3$\sigma$ & 4.1$\sigma$ \\ \hline $Q_{\rm DMAP}$&\multicolumn{2}{|c|}{1.9$\sigma$} &\multicolumn{2}{|c|}{2.0$\sigma$}& \multicolumn{2}{|c|}{2.1$\sigma$}\\ \hline \end{tabular}} \caption{Best-fit $\chi^2$ per experiment (and total) for EDE when fit to different data combinations: BaseTTTEEE+Lens, BaseTTTEEE+Lens+$f\sigma_8$, BaseTTTEEE+Lens+EFTBOSS, with and without SH0ES. 
We also report the $\Delta \chi^2_{\rm min}\equiv\chi^2_{\rm min}({\rm EDE})-\chi^2_{\rm min}(\Lambda{\rm CDM})$ and the tension metric $Q_{\rm DMAP}\equiv \sqrt{\chi^2({\rm w/~SH0ES})-\chi^2({\rm w/o~SH0ES})}$. } \label{tab:chi2_Planck_EDE} \end{table*} \begin{table*}[t!] \def\arraystretch{1.2} \scalebox{1}{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{5}{|c|}{$\Lambda$CDM}& \multicolumn{5}{|c|}{EDE} \\ \hline \hline {\emph{Planck}}~high$-\ell$ TT650TEEE &1843.5 & 1842.6 & 1842.9 & 1842.8&1842.6 & 1837.5 & 1838.0 & 1836.9 & 1836.8 & 1837.7 \\ {\emph{Planck}}~low$-\ell$ TT & 21.5 & 21.7& 21.5& 21.7& 21.8 & 20.7& 20.9& 20.8& 20.9 & 21.2\\ {\emph{Planck}}~low$-\ell$ EE &395.7 &395.7 & 395.8 & 395.9 & $-$ & 395.8& 395.8& 395.8& 395.8 & 395.8 \\ {\emph{Planck}}~lensing & $-$ & $-$ & $-$ & 9.0 &9.0 & $-$ & $-$ & $-$& 10.2 &9.9 \\ ACT DR4& 293.8 &294.5 & 294.4 & 294.2&294.3 & 285.4&285.0& 285.9& 286.4 & 286.9\\ BOSS BAO low$-z$& 1.5& 1.4 & 1.6 & 1.5&1.4 &2.1 & 2.0& 2.4& 2.3 &1.9 \\ BOSS BAO DR12&3.7 &$-$ & $-$& $-$ & $-$ & 3.5&$-$ & $-$ & $-$ & $-$\\ BOSS BAO/$f\sigma_8$ DR12& $-$ & 6.1 & $-$& $-$ & $-$&$-$ & 7.2 & $-$ & $-$ & $-$\\ EFTBOSS CMASS & $-$ & $-$ & 83.4 & 83.6 & 84.9 & $-$ & $-$ & 84.5& 84.3 &84.3\\ EFTBOSS LOWZ &$-$ & $-$ & 33.7 &33.7& 33.7& $-$ & $-$ & 35.1&34.7 &34.4 \\ Pantheon&1026.8 & 1027.0 & 1027.0 & 1027.0& $-$& 1026.9 & 1026.9 & 1026.9& 1026.9 & $-$\\ Pantheon+ & $-$ & $-$& $-$&$-$ & 1411.8&$-$ &$-$ & $-$ & $-$ & 1413.0 \\ \hline total $\chi^2_{\rm min}$ &3586.5 & 3589.1& 3700.3 & 3709.5 &4094.3 & 3571.9 & 3575.8 & 3688.3 & 3698.4 & 4085.1\\ $\Delta \chi^2_{\rm min}({\rm EDE}-\Lambda{\rm CDM})$ & $-$ & $-$& $-$ & $-$ & $-$ & -14.6 & -13.3 & -12.0 & -11.1 & -9.2 \\ Preference over $\Lambda$CDM & $-$ & $-$& $-$ & $-$& $-$ & 3.1$\sigma$ & 2.9$\sigma$& 2.7$\sigma$& 2.5$\sigma$ & $2.2\sigma$ \\ \hline \end{tabular}} \caption{Best-fit $\chi^2$ per experiment (and total) for $\Lambda{\rm CDM}$ and EDE when fit to different data 
combinations: BaseTT650TEEE+ACT, BaseTT650TEEE+ACT+$f\sigma_8$, BaseTT650TEEE+ACT+EFTBOSS, BaseTT650TEEE+ACT+Lens+EFTBOSS and BaseTT650TEEE+ACT+Lens+EFTBOSS+PanPlus. We also report the $\Delta \chi^2_{\rm min}\equiv\chi^2_{\rm min}({\rm EDE})-\chi^2_{\rm min}(\Lambda{\rm CDM})$ and the corresponding preference over $\Lambda$CDM, computed assuming the $\Delta\chi^2$ follows a $\chi^2$-distribution with three degrees of freedom.} \label{tab:chi2_ACT} \end{table*} \begin{table*}[t!] \def\arraystretch{1.2} \scalebox{1}{ \begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{2}{|c|}{$\Lambda$CDM}& \multicolumn{2}{|c|}{EDE} \\ \hline \hline {\emph{Planck}}~high$-\ell$ TTTEEE & 2349.8 & 2352.0& 2346.2& 2347.2 \\ {\emph{Planck}}~low$-\ell$ TT & 22.4 & 22.0& 21.9&21.2 \\ {\emph{Planck}}~low$-\ell$ EE &396.2 & 396.8&396.1 & 396.4 \\ {\emph{Planck}}~lensing &8.9 & 8.9& 9.6&9.8 \\ ACT DR4 & 240.6& 241.0& 236.8& 236.2 \\ BOSS BAO low$-z$& 1.4&2.0 &1.7 & 2.2\\ EFTBOSS CMASS &84.1 &82.9 &84.2 & 84.2 \\ EFTBOSS LOWZ & 33.6 &33.8 & 34.2& 34.6 \\ Pantheon &1027.1 &1026.9 &1026.9 & 1026.9 \\ SH0ES &$-$ & 19.5& $-$ & 1.10\\ \hline total $\chi^2_{\rm min}$ &4164.0 &4185.9 & 4157.6 & 4159.8 \\ $\Delta \chi^2_{\rm min}({\rm EDE}-\Lambda{\rm CDM})$ & $-$ & $-$ &-6.4 &-26.1 \\ Preference over $\Lambda$CDM &$-$& $-$ & 1.7$\sigma$ & 4.4$\sigma$\\ \hline $Q_{\rm DMAP}$&\multicolumn{2}{|c|}{4.7$\sigma$} &\multicolumn{2}{|c|}{1.5$\sigma$}\\ \hline \end{tabular}} \caption{Best-fit $\chi^2$ per experiment (and total) for $\Lambda{\rm CDM}$ and EDE when fit to BaseTTTEEE+ACT+Lens+EFTBOSS, with and without SH0ES. We also report the $\Delta \chi^2_{\rm min}\equiv\chi^2_{\rm min}({\rm EDE})-\chi^2_{\rm min}(\Lambda{\rm CDM})$ and the tension metric $Q_{\rm DMAP}\equiv \sqrt{\chi^2({\rm w/~SH0ES})-\chi^2({\rm w/o~SH0ES})}$. } \label{tab:chi2_FullPlanckACT} \end{table*} \begin{table*}[t!] 
\def\arraystretch{1.2} \scalebox{1}{ \begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{2}{|c|}{$\Lambda$CDM}& \multicolumn{2}{|c|}{EDE} \\ \hline \hline {\emph{Planck}}~high$-\ell$ TTTEEE & 2346.18 & 2349.5& 2344.0& 2346.9 \\ {\emph{Planck}}~low$-\ell$ TT &23.0 & 22.4& 22.3& 21.0 \\ {\emph{Planck}}~low$-\ell$ EE & 396.1 & 397.7&396.3 & 396.3\\ {\emph{Planck}}~lensing & 8.8& 9.1&9.0 & 9.6\\ BOSS BAO low$-z$&1.1 & 2.1&1.3 & 1.8\\ EFTBOSS CMASS &85.2 & 82.9&85.0 & 85.1 \\ EFTBOSS LOWZ &33.6 & 33.8& 33.8& 34.6\\ Pantheon+ &1411.1 & $-$&1411.6 & $-$\\ Pantheon+SH0ES & $-$ &1321.9 & $-$& 1291.6\\ \hline total $\chi^2_{\rm min}$ & 4305.1&4219.3 &4303.2 & 4187.0 \\ $\Delta \chi^2_{\rm min}({\rm EDE}-\Lambda{\rm CDM})$ & $-$ & $-$ &-1.9 & -32.3\\ Preference over $\Lambda$CDM &$-$& $-$ & 0.5$\sigma$ & 5$\sigma$\\ \hline \end{tabular}} \caption{Best-fit $\chi^2$ per experiment (and total) for $\Lambda{\rm CDM}$ and EDE when fit to BaseTTTEEE+Lens+EFTBOSS+PanPlus, with and without SH0ES. We also report the $\Delta \chi^2_{\rm min}\equiv\chi^2_{\rm min}({\rm EDE})-\chi^2_{\rm min}(\Lambda{\rm CDM})$ and the corresponding preference over $\Lambda$CDM, computed assuming the $\Delta\chi^2$ follows a $\chi^2$-distribution with three degrees of freedom.} \label{tab:chi2_PanPlus} \end{table*} \newpage \bibliography{biblio}
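The $Q_{\rm DMAP}$ values and the preferences over $\Lambda$CDM quoted in the tables above follow directly from the tabulated $\chi^2$ entries. A minimal sketch (requires SciPy; the two-sided Gaussian conversion of the $p$-value is our assumption about the convention used, though it reproduces the quoted numbers):

```python
import math
from scipy.stats import chi2, norm

def preference_sigma(delta_chi2, dof=3):
    """Gaussian-equivalent significance of a chi^2 improvement,
    assuming Delta chi^2 follows a chi^2 distribution with `dof`
    degrees of freedom (3 extra EDE parameters)."""
    p = chi2.sf(delta_chi2, dof)   # survival probability
    return norm.isf(p / 2.0)       # two-sided Gaussian equivalent

def q_dmap(chi2_with_sh0es, chi2_without_sh0es):
    """Tension metric Q_DMAP = sqrt(chi^2(w/ SH0ES) - chi^2(w/o SH0ES))."""
    return math.sqrt(chi2_with_sh0es - chi2_without_sh0es)

# EDE vs LCDM, BaseTTTEEE+Lens+SH0ES: Delta chi^2 = -23.7 -> 4.2 sigma
print(round(preference_sigma(23.7), 1))
# LCDM Q_DMAP from the total chi^2 values in the first table -> 4.8 sigma
print(round(q_dmap(3826.6, 3803.6), 1))
```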
Title: On the Kinematics of Cold, Metal-enriched Galactic Fountain Flows in Nearby Star-forming Galaxies
Abstract: We use medium-resolution Keck/Echellette Spectrograph and Imager spectroscopy of bright quasars to study cool gas traced by CaII 3934,3969 and NaI 5891,5897 absorption in the interstellar/circumgalactic media of 21 foreground star-forming galaxies at redshifts 0.03 < z < 0.20 with stellar masses 7.4 < log M_*/M_sun < 10.6. The quasar-galaxy pairs were drawn from a unique sample of Sloan Digital Sky Survey quasar spectra with intervening nebular emission, and thus have exceptionally close impact parameters (R_perp < 13 kpc). The strength of this line emission implies that the galaxies' star formation rates (SFRs) span a broad range, with several lying well above the star-forming sequence. We use Voigt profile modeling to derive column densities and component velocities for each absorber, finding that column densities N(CaII) > 10^12.5 cm^-2 (N(NaI) > 10^12.0 cm^-2) occur with an incidence f_C(CaII) = 0.63^+0.10_-0.11 (f_C(NaI) = 0.57^+0.10_-0.11). We find no evidence for a dependence of f_C or the rest-frame equivalent widths W_r(CaII K) or W_r(NaI 5891) on R_perp or M_*. Instead, W_r(CaII K) is correlated with local SFR at >3sigma significance, suggesting that CaII traces star formation-driven outflows. While most of the absorbers have velocities within +/-50 km/s of the host redshift, their velocity widths (characterized by Delta v_90) are universally 30-177 km/s larger than that implied by tilted-ring modeling of the velocities of interstellar material. These kinematics must trace galactic fountain flows and demonstrate that they persist at R_perp > 5 kpc. Finally, we assess the relationship between dust reddening and W_r(CaII K) (W_r(NaI 5891)), finding that 33% (24%) of the absorbers are inconsistent with the best-fit Milky Way E(B-V)-W_r relations at >3sigma significance.
https://export.arxiv.org/pdf/2208.04973
\thispagestyle{plain} \newcommand{\btx}{\textsc{Bib}\TeX} \newcommand{\thestyle}{\texttt{\filename}} \begin{center}{\bfseries\Large Reference sheet for \thestyle\ usage}\\ \large(Describing version \fileversion\ from \filedate) \end{center} \begin{quote}\slshape For a more detailed description of the \thestyle\ package, \LaTeX\ the source file \thestyle\texttt{.dtx}. \end{quote} \head{Overview} The \thestyle\ package is a reimplementation of the \LaTeX\ |\cite| command, to work with both author--year and numerical citations. It is compatible with the standard bibliographic style files, such as \texttt{plain.bst}, as well as with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago}, \texttt{astron}, \texttt{authordate}, and of course \thestyle. \head{Loading} Load with |\usepackage[|\emph{options}|]{|\thestyle|}|. See list of \emph{options} at the end. \head{Replacement bibliography styles} I provide three new \texttt{.bst} files to replace the standard \LaTeX\ numerical ones: \begin{quote}\ttfamily plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst \end{quote} \head{Basic commands} The \thestyle\ package has two basic citation commands, |\citet| and |\citep| for \emph{textual} and \emph{parenthetical} citations, respectively. There also exist the starred versions |\citet*| and |\citep*| that print the full author list, and not just the abbreviated one. All of these may take one or two optional arguments to add some text before and after the citation. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. (1990)\\ |\citet[chap.~2]{jon90}| & Jones et al. 
(1990, chap.~2)\\[0.5ex] |\citep{jon90}| & (Jones et al., 1990)\\ |\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\ |\citep[see][]{jon90}| & (see Jones et al., 1990)\\ |\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex] |\citet*{jon90}| & Jones, Baker, and Williams (1990)\\ |\citep*{jon90}| & (Jones, Baker, and Williams, 1990) \end{tabular} \end{quote} \head{Multiple citations} Multiple citations may be made by including more than one citation key in the |\cite| command argument. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\ |\citep{jon90,jam91}| & (Jones et al., 1990; James et al. 1991)\\ |\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\ |\citep{jon90a,jon90b}| & (Jones et al., 1990a,b) \end{tabular} \end{quote} \head{Numerical mode} These examples are for author--year citation mode. In numerical mode, the results are different. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. [21]\\ |\citet[chap.~2]{jon90}| & Jones et al. [21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32] \end{tabular} \end{quote} \head{Suppressed parentheses} As an alternative form of citation, |\citealt| is the same as |\citet| but \emph{without parentheses}. Similarly, |\citealp| is |\citep| without parentheses. Multiple references, notes, and the starred variants also exist. 
\begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citealt{jon90}| & Jones et al.\ 1990\\ |\citealt*{jon90}| & Jones, Baker, and Williams 1990\\ |\citealp{jon90}| & Jones et al., 1990\\ |\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\ |\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\ |\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\ |\citetext{priv.\ comm.}| & (priv.\ comm.) \end{tabular} \end{quote} The |\citetext| command allows arbitrary text to be placed in the current citation parentheses. This may be used in combination with |\citealp|. \head{Partial citations} In author--year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa. This is provided with the extra commands \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citeauthor{jon90}| & Jones et al.\\ |\citeauthor*{jon90}| & Jones, Baker, and Williams\\ |\citeyear{jon90}| & 1990\\ |\citeyearpar{jon90}| & (1990) \end{tabular} \end{quote} \head{Forcing upper cased names} If the first author's name contains a \textsl{von} part, such as ``della Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the beginning of a sentence. One can force the first letter to be in upper case with the command |\Citet| instead. Other upper case commands also exist. \begin{quote} \begin{tabular}{rl@{\quad$\Rightarrow$\quad}l} when & |\citet{dRob98}| & della Robbia (1998) \\ then & |\Citet{dRob98}| & Della Robbia (1998) \\ & |\Citep{dRob98}| & (Della Robbia, 1998) \\ & |\Citealt{dRob98}| & Della Robbia 1998 \\ & |\Citealp{dRob98}| & Della Robbia, 1998 \\ & |\Citeauthor{dRob98}| & Della Robbia \end{tabular} \end{quote} These commands also exist in starred versions for full author names. \head{Citation aliasing} Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e. as Paper~I, Paper~II. 
Such aliases can be defined and used, textual and/or parenthetical with: \begin{quote} \begin{tabular}{lcl} |\defcitealias{jon90}{Paper~I}|\\ |\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\ |\citepalias{jon90}| & $\Rightarrow$ & (Paper~I) \end{tabular} \end{quote} These citation commands function much like |\citet| and |\citep|: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks. \head{Selecting citation style and punctuation} Use the command |\bibpunct| with one optional and 6 mandatory arguments: \begin{enumerate} \item the opening bracket symbol, default = ( \item the closing bracket symbol, default = ) \item the punctuation between multiple citations, default = ; \item the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author--year, default = author--year; \item the punctuation that comes between the author names and the year \item the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); \end{enumerate} The optional argument is the character preceding a post-note, default is a comma plus space. In redefining this character, one must include a space if one is wanted. Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of \begin{quote} |\citep{jon90,jon91,jam92}| \end{quote} into [Jones et al. 1990; 1991, James et al. 1992]. Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of \begin{quote} |\citep[and references therein]{jon90}| \end{quote} into (Jones et al. 1990; and references therein). \head{Other formatting options} Redefine |\bibsection| to the desired sectioning command for introducing the list of references. This is normally |\section*| or |\chapter*|. Define |\bibpreamble| to be any text that is to be printed after the heading but before the actual list of references. Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to the list of references. 
Define |\citenumfont| to be a font declaration or command like |\itshape| or |\textit|. Redefine |\bibnumfmt| as a command with an argument to format the numbers in the list of references. The default definition is |[#1]|. The indentation after the first line of each reference is given by |\bibhang|; change this with the |\setlength| command. The vertical spacing between references is set by |\bibsep|; change this with the |\setlength| command. \head{Automatic indexing of citations} If one wishes to have the citations entered in the \texttt{.idx} indexing file, it is only necessary to issue |\citeindextrue| at any point in the document. All following |\cite| commands, of all variations, then insert the corresponding entry to that file. With |\citeindexfalse|, these entries will no longer be made. \head{Use with \texttt{chapterbib} package} The \thestyle\ package is compatible with the \texttt{chapterbib} package which makes it possible to have several bibliographies in one document. The package makes use of the |\include| command, and each |\include|d file has its own bibliography. The order in which the \texttt{chapterbib} and \thestyle\ packages are loaded is unimportant. The \texttt{chapterbib} package provides an option \texttt{sectionbib} that puts the bibliography in a |\section*| instead of |\chapter*|, something that makes sense if there is a bibliography in each chapter. This option will not work when \thestyle\ is also loaded; instead, add the option to \thestyle. Every |\include|d file must contain its own |\bibliography| command where the bibliography is to appear. The database files listed as arguments to this command can be different in each file, of course. However, what is not so obvious, is that each file must also contain a |\bibliographystyle| command, \emph{preferably with the same style argument}. 
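The formatting hooks described above can be set in the document preamble; a minimal sketch (all values are illustrative, not defaults):

```latex
% Formatting hooks for the reference list (illustrative values)
\renewcommand{\bibsection}{\section*{References}}
\newcommand{\bibpreamble}{All works cited below are listed alphabetically.}
\newcommand{\bibfont}{\small}
\renewcommand{\bibnumfmt}[1]{(#1)}   % default is [#1]
\setlength{\bibhang}{1.5em}
\setlength{\bibsep}{4pt plus 1pt}
```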
\head{Sorting and compressing citations} Do not use the \texttt{cite} package with \thestyle; rather use one of the options \texttt{sort} or \texttt{sort\&compress}. These also work with author--year citations, making multiple citations appear in their order in the reference list. \head{Long author list on first citation} Use option \texttt{longnamesfirst} to have the first citation automatically give the full list of authors. Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|, given before the first citation. \head{Local configuration} Any local recoding or definitions can be put in \thestyle\texttt{.cfg} which is read in after the main package file. \head{Options that can be added to \texttt{\char`\\ usepackage}} \begin{description} \item[\ttfamily round] (default) for round parentheses; \item[\ttfamily square] for square brackets; \item[\ttfamily curly] for curly braces; \item[\ttfamily angle] for angle brackets; \item[\ttfamily colon] (default) to separate multiple citations with colons; \item[\ttfamily comma] to use commas as separators; \item[\ttfamily authoryear] (default) for author--year citations; \item[\ttfamily numbers] for numerical citations; \item[\ttfamily super] for superscripted numerical citations, as in \textsl{Nature}; \item[\ttfamily sort] orders multiple citations into the sequence in which they appear in the list of references; \item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple numerical citations are compressed if possible (as 3--6, 15); \item[\ttfamily longnamesfirst] makes the first citation of any reference the equivalent of the starred variant (full author list) and subsequent citations normal (abbreviated list); \item[\ttfamily sectionbib] redefines |\thebibliography| to issue |\section*| instead of |\chapter*|; valid only for classes with a |\chapter| command; to be used with the \texttt{chapterbib} package; \item[\ttfamily nonamebreak] keeps all the authors' names in a citation on one line; 
causes overfull hboxes but helps with some \texttt{hyperref} problems. \end{description}
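Putting the pieces together, a minimal author--year document using one of the replacement styles might look like this (the \texttt{refs.bib} file and the citation key are placeholders):

```latex
\documentclass{article}
% round parentheses, author--year mode, multiple citations sorted
\usepackage[round,authoryear,sort]{natbib}
\begin{document}
\citet{jon90} first showed this; see also \citep[chap.~2]{jon90}.
\bibliographystyle{plainnat}
\bibliography{refs}
\end{document}
```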
Title: Design of the new CHARA instrument SILMARIL: pushing the sensitivity of a 3-beam combiner in the H- and K-bands
Abstract: Optical interferometry is a powerful technique to achieve high angular resolution. However, its main issue is its lack of sensitivity, compared to other observation techniques. Efforts have been made in the previous decade to improve the sensitivity of optical interferometry, with instruments such as PIONIER and GRAVITY at VLTI, or MIRC-X and MYSTIC at CHARA. While those instruments pushed on sensitivity, their design focus was not the sensitivity but relative astrometric accuracy, imaging capability, or spectral resolution. Our goal is to build an instrument specifically designed to optimize for sensitivity. This meant focusing our design efforts on different parts of the instrument and investigating new technologies and techniques. First, we make use of the low-noise C-RED One camera using e-APD technology and provided by First Light Imaging, already used in the improvement of sensitivity in recent new instruments. We forego the use of single-mode fibers but still favor an image plane design that offers more sensitivity than a pupil plane layout. We also use a minimum number of optical elements to maximize the throughput of the design, using a long focal length cylindrical mirror. We chose to limit our design to 3 beams, to have the capability to obtain closure phases, but not dilute the incoming flux in more beam combinations. We also use in our design an edge filter to have the capability to observe H- and K-band at the same time. We use a low spectral resolution, allowing for group delay fringe tracking but maximizing the SNR of the fringes for each spectral channel. All these elements will lead to a typical limiting magnitude between 10 and 11 in both H- and K-bands.
https://export.arxiv.org/pdf/2208.12963
\keywords{e-APD, sensitivity, optical interferometry, near-infrared, K-band, H-band} \section{INTRODUCTION} \label{sec:intro} Interferometer arrays offer a gain of one to two orders of magnitude in spatial resolution over the largest single-aperture telescopes currently under development. The CHARA Array's longest baseline yields angular diameter measurements as small as 0.7 mas in the K-band, improving to 0.2 mas in the R-band. CHARA can resolve stellar disks all along the main sequence, from O- to M-types, whereas no main-sequence stars will be resolvable by the next generation of single-aperture telescopes. A boost in sensitivity will expand all of our observing programs, in particular those involving the faintest targets, such as AGN cores, YSOs, binary stars in different stages of evolution, faint dwarf stars, and exoplanet hosts. The CHARA Array's six 1-m aperture telescopes are arranged in a Y-shaped configuration, yielding 15 interferometric baselines from 34 to 331 meters and 10 independent closure phases. These are the longest OIR baselines yet implemented and permit resolution at the sub-milliarcsecond (mas) level. The facility's primary components and sub-systems are described more fully by ten Brummelaar et al.\ (2005)~\cite{2005ApJ...628..453T}. CHARA Classic (ten Brummelaar et al.\ 2005~\cite{2005ApJ...628..453T}) is a two-beam, J-, H- and K-band, open-air, beamsplitter-based system providing visibility amplitude measurements optimized for sensitivity in the NIR. We routinely observe science targets as faint as magnitude 7.5 in the H- and K-bands, and in good seeing we reach magnitude 8. CLIMB (ten Brummelaar et al.\ 2013~\cite{2013JAI.....240004T}) is an expansion of CLASSIC, able to combine three beams and obtain three visibility amplitudes and one closure phase. Since there are six beams in the Array, there are two independent sets of CLIMB optics, the second of which can be reconfigured as CLASSIC. 
The magnitude limit of CLIMB is not as good as that of CLASSIC, since the beams must be divided three ways instead of two, resulting in a loss of signal-to-noise ratio (SNR). However, two new beam combiners have recently been implemented at the CHARA Array, MIRC-X\cite{MIRCX} and MYSTIC\cite{MYSTIC,SetterholmSPIE2022}, reaching the same magnitude limit as CLASSIC/CLIMB but recombining all six telescopes. The usefulness of CLASSIC/CLIMB has therefore decreased. The most significant challenge in ground-based interferometry is sensitivity, so we propose to increase the faint limit of the CHARA Array's CLASSIC/CLIMB beam combiner by 2 magnitudes in the near infrared. This will be achieved by replacing the 20-year-old PICNIC detector with a modern detection system based on the SELEX MOVPE SAPHIRA\cite{gert2016,2018arXiv180508419G}. This upgrade will also add spectral resolution in both the H- and K-bands, together with simultaneous observations in both bands. We describe the different elements allowing SILMARIL to push for sensitivity in Section~\ref{sec:pushsens}. We then present the theoretical performance that SILMARIL should achieve in Section~\ref{sec:simulations}. Software issues are considered in Section~\ref{sec:soft}. Increasing the faint limit opens up a larger volume of parameter space for investigation and offers more opportunities to study rare objects not found in the solar neighborhood. A few of these new scientific opportunities are described in Section~\ref{sec:sci}. We offer conclusions in Section~\ref{sec:concl}. \section{Push for sensitivity}\label{sec:pushsens} The optical design of SILMARIL is focused on maximizing sensitivity, while making as few compromises as possible in the other aspects of the instrument. \subsection{e-APD Detector} The main element improving the sensitivity is the use of the new SAPHIRA detector\cite{gert2016,2018arXiv180508419G}, based on e-APD technology. 
This detector was built specifically for use in the Leonardo Adaptive Optics (AO) system and Fringe Tracker (FT) of the GRAVITY instrument at VLTI~\cite{GRAVITY}. It offers sub-electron readout noise, a fast frame rate, and a relatively modest dark current; these characteristics are exactly what is needed for interferometric instruments as well as AO systems. The SAPHIRA detector is now used by several new interferometric instruments, including the MIRC-X~\cite{MIRCX,2019A&A...625A..38L} and MYSTIC~\cite{MYSTIC,SetterholmSPIE2022} instruments at the CHARA Array, which use a ready-to-use camera employing the SAPHIRA detector, the C-RED One~\cite{CRED}, built by First Light Imaging. Because these cameras are already successful at the CHARA Array and we have the technical expertise to use them, we also chose to adopt this ready-to-use solution, which will make it easier to develop the necessary software, as explained in Section~\ref{sec:soft}. \subsection{3-Telescope Beam Combiner} The effect of the number of beams that an interferometric instrument recombines is still debated in the community. Some claim that the SNR is not impacted by the number of beams recombined; others claim that the more beams are recombined, the lower the SNR. From our experience with CLASSIC (a 2-telescope beam combiner) and CLIMB (a 3-telescope beam combiner), which use the same optical design and camera, we note that CLIMB has a lower sensitivity than CLASSIC. We conclude that, at least in this case, the more beams are combined, the less sensitivity is attained. Thus, for SILMARIL, we are conservative and limit the number of telescopes we combine to three, in order to limit the loss in sensitivity while still being able to measure one closure phase. \subsection{Image Plane Combiner} The original plan for the improvement of the sensitivity of CLASSIC/CLIMB was to reuse the original design, limiting the changes to the optics, and to install the new C-RED One camera. 
However, as we want to push as far as possible for sensitivity, we ran simulations to compare the performance of a 3-beam pupil plane design, such as CLIMB, with that of a 3-beam image plane design. The parameters adopted in the simulations are summarized in Table~\ref{tab:planesimparam}. The details of these simulations are the subject of another paper (Tuthill et al. in prep.). \begin{table}[ht] \begin{center} \caption{Model Beam Combiner Parameters} \label{tab:planesimparam} \begin{tabular}{|c|c|c|} \hline \rule[-1ex]{0pt}{3.5ex} Parameter & Pupil Plane Design & Image Plane Design \\ \hline \rule[-1ex]{0pt}{3.5ex} Readout Rate (Hz) & 750 & 83.3 \\ \hline \rule[-1ex]{0pt}{3.5ex} Sample Time (ms) & 1.33 & 12 \\ \hline \rule[-1ex]{0pt}{3.5ex} Photometry per pixel (N$_{ph}$) & 1 & 0.04 \\ \hline \rule[-1ex]{0pt}{3.5ex} Delay Range ($\mu$m) & $\pm$ 50 & $\pm$ 50 \\ \hline \rule[-1ex]{0pt}{3.5ex} Spectral Dispersion R & 1 & 30\\ \hline \rule[-1ex]{0pt}{3.5ex} Central Wavelength ($\mu$m) & 1.673 & 1.673 \\ \hline \rule[-1ex]{0pt}{3.5ex} Bandwidth Per Channel ($\mu$m) & 0.285 & 0.056 \\ \hline \rule[-1ex]{0pt}{3.5ex} Number of Spectral Channels & 1 & 5 \\ \hline \rule[-1ex]{0pt}{3.5ex} Number of pixels across fringes & 3 -- 6 -- 9 & 5 -- 10 -- 15\\ \hline \rule[-1ex]{0pt}{3.5ex} Pixels read per sample & 1 & 225 \\ \hline \rule[-1ex]{0pt}{3.5ex} Field of View & 3.78 $\mu$rad & Airy Disk \\ \hline \rule[-1ex]{0pt}{3.5ex} Integration Time (s) & 0.72 & 0.72 \\ \hline \rule[-1ex]{0pt}{3.5ex} Samples per integration & 538 & 60 \\ \hline \rule[-1ex]{0pt}{3.5ex} Total pixels per integration & 538 & 13500 \\ \hline \rule[-1ex]{0pt}{3.5ex} Camera Readout Noise (e$^-$) & 0.7 & 0.7 \\ \hline \rule[-1ex]{0pt}{3.5ex} Camera Dark Current (e$^-$p$^{-1}$s$^{-1}$) & 80 & 80 \\ \hline \rule[-1ex]{0pt}{3.5ex} Camera Excess Noise Factor & 1.25 & 1.25 \\ \hline \end{tabular} \end{center} \end{table} For each simulation, we proceed in four steps: \begin{itemize} \item The creation of a phase mask, 
representing the atmospheric turbulence \item The simulation of the instrument and the corresponding interferometric signal \item The addition of the detector noise to the signal \item The measurement of the SNR from the extraction of the fringe peak \end{itemize} The results of the simulations are presented in Figure~\ref{fig:ressimplane}. In this figure, we display the SNR as a function of the magnitude of the target in the H-band, for different parameters corresponding to the 3 different combinations of 2 of the 3 telescopes. For the image plane case, D represents the diameter of the beam (19 mm), and xD is the separation between the 2 combined beams, expressed in multiples of the beam diameter. We can see that the SNR of the image plane simulation is always better than that of the pupil plane. With an image plane design we should be able to gain an extra magnitude compared to a pupil plane design. We therefore chose to switch from the initial pupil plane design of CLIMB to a new image plane design, described in Section~\ref{sec:minidesign}. \subsection{Minimal Design}\label{sec:minidesign} The optical design of SILMARIL is inspired by the FOURIER instrument~\cite{2020SPIE11446E..0VM} at MROI. The concept is to keep the number of optics used to a minimum, to avoid the loss of flux that each optical element introduces. A schematic of the design is shown in Figure~\ref{fig:SILMARILdesign}. The first important element is the use of very long focal length (5.41 m) cylindrical mirrors (CL1) to form the image in the fringe axis (horizontal), and a much shorter focal length (0.35 m) second cylindrical lens (CL2) to form the image in the spectrally dispersed axis (vertical). The intent, based on the desire to optimize for faint-light operation, is to minimize the number of optics and reduce the background in the K-band as much as possible, while at the same time allowing a development path that enables us to test the fundamental optical design on the sky without using a second dewar. 
For ultimate performance, an upgrade including a second dewar with specialized cold stops is planned. Mirrors at the standard beam height send the beams to the long focal length cylindrical mirrors CL1 at a higher level. These cylindrical mirrors are arrayed in the standard 2-4-6 non-redundant beam pattern, and they send the beams to a common focal point in the horizontal axis. They are followed by a second cylindrical lens CL2, acting in the vertical axis and of shorter focal length, which compresses the image to a small number of pixels in the vertical axis. When used in concert with a vertical dispersive element, the instrument yields a modest spectral resolution (R$\sim$30), a layout usually termed channel spectrum fringes. This low spectral resolution allows for group delay tracking of the fringes. An extra fold mirror is necessary to fit the optics into the available space. This arrangement has minimal optics and produces an image that can be placed directly onto the C-RED One camera. At a later time, we intend to add a fore-dewar based largely on the MYSTIC~\cite{MYSTIC,SetterholmSPIE2022} design. One primary area of concern is contamination of the H-band by background counts from the K-band. This can be resolved by replacing the first filter in the camera (closest to the detector) with a custom ``edge filter'' that has an H-band coating only on the lower half. There are two problems with such an edge filter, both to do with how much K-band light reaches the H-band part of the detector. The first issue is that some of the background light will be diffracted by the edge of the filter and reach the longest wavelength channel of the H-band system. The Fresnel scale for diffraction is given by \begin{align} d_F = \sqrt{\frac{x \lambda}{\pi}} \end{align} where $x$ is the distance from the diffracting edge; the first filter in the camera is 2.4 mm away from the detector. For the K-band this gives a Fresnel scale of 40 $\mu$m.
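As a quick numerical cross-check of the Fresnel scale quoted above (an illustrative sketch; the representative K-band wavelength of 2.1 $\mu$m is our assumption):

```python
import math

# Fresnel scale d_F = sqrt(x * lambda / pi), as defined in the text.
# x = 2.4 mm is the filter-to-detector distance from the text;
# lam = 2.1e-6 m is an assumed representative K-band wavelength.
x = 2.4e-3           # m
lam = 2.1e-6         # m
d_F = math.sqrt(x * lam / math.pi)
print(f"Fresnel scale: {d_F * 1e6:.0f} um")  # ~40 um, as quoted in the text
```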
For the 24 $\mu$m pixel size and a gap of 6 pixels on the chip between H and K, we have 3.2 Fresnel scale units, resulting in much less than 1\% of the K-band background light diffracting into the longest wavelength channel of the H-band system, and much less in the other pixels. A more serious concern is the geometric contamination by K-band light in the converging beam, as shown in Figure~\ref{fig:KinH}. The K-band light will no longer reach the chip after a distance of \begin{align} \frac{d_{KE}}{f_{CL2}} \times D_B. \end{align} For the focal length of 350 mm and the filter-to-detector distance $d_{KE}=2.4$ mm, this is 130 $\mu$m, or about 5.5 pixels, while the gap between the H- and K-bands is 6 pixels. This could be improved either by increasing the focal length of the CL2 optic, thereby reducing the spectral resolution of the system, or by placing the filter surface much closer to the detector. In Figure~\ref{fig:edgefiltscheme}, left, we show a filter substrate design that would allow us to get within 0.9 mm of the chip. On the right we show the effect of the distance $d_{KE}$ on the geometry of the contaminated area. A design like this would reduce the area of contamination to 2.5 pixels. However, such a filter, placed closer to the detector, would increase the risk of damaging the detector if an accident occurred with the filter. Furthermore, this calculation assumes a perfectly sharp edge on the filter, which turns out to be very difficult to make. After numerous discussions with optical vendors, we have found that this custom shape filter with its sharp edge boundary (specified as $<$50 $\mu$m) is very difficult to acquire. We contacted 10 companies that we have either used before or that were recommended by First Light or other members of the consortium. Most put in a no-bid, while the two that did bid gave prices of \$18k and \$50k.
A third vendor told us that they cannot make the odd shape we are asking for, as it will not fit inside their coating chamber, but that they could make a filter of the standard shape for the camera with a transition zone at the filter edge as small as 100 $\mu$m for a price of \$5k. However, after studying all the requirements of the edge filter in more detail, it became apparent that this company could not meet them, and they decided not to bid. The amount of K-band light getting through then becomes an integral of the edge shape, assumed to be linear, against the beam shape, which is rectangular in this dimension. We show in Figure~\ref{fig:edgefiltscheme}, right, the resulting transition shape for a 100 $\mu$m edge boundary at various distances from the detector. At the default distance of 2.4 mm we have less than 10\% contamination by background light in the longest wavelength channel of the H-band. We conclude that the standard design, rather than a higher-risk specialized shape, together with a 100 $\mu$m specification on the boundary edge, will be sufficient for our needs. The final consideration is the shape of the cold input pupil. The beams are anamorphic, so rather than using the default f/20 circularly symmetric cold stop, we plan to use a rectangular or square shape. First Light has done a preliminary design of a rectangular cold stop. Ultimately, we plan to implement a second set of 3 beams that would fit in the current design, to obtain a 2$\times$3-telescope beam combiner making use of all six telescopes of the CHARA Array at the same time. A scheme of the final layout is presented in Figure~\ref{fig:finallayout}. \section{Performance Simulations} \label{sec:simulations} With the design parameters fixed, we can simulate the expected performance of the instrument. The SILMARIL performance simulation consists of two programs. The first simulates the effect of the atmospheric turbulence on the fringe phase seen by the detector.
This model is based on the one described in CHARA Technical Report~\#103\footnote{\href{https://www.chara.gsu.edu/astronomers/technical-reports}{https://www.chara.gsu.edu/astronomers/technical-reports}}. The parameters we varied for this first part of the simulation are: \begin{itemize} \item The $r_0$ parameter (r): this seeing parameter is a measure of the stability of the atmospheric turbulence. The higher it is, the better the seeing and the more stable the fringes. In this study we used 3 different values: $r_0 = 5$~cm, representing bad seeing; $r_0 = 10$~cm, representing average seeing; and $r_0 = 20$~cm, representing excellent seeing. \item The spectral resolution (R): this parameter determines the number of pixels in the spectral dimension. The advantage of a lower spectral resolution is that the light is dispersed over fewer pixels, giving a better signal to noise per pixel. The advantages of a higher spectral resolution are a larger gap between the H and K bands, making the transmission transition of the edge filter less critical, and a longer coherence length, which can make it easier to track the fringes. For this study, we fixed this parameter to R = 35, the value we initially assumed for the instrument. \item The number of pixels sampling the fringe dimension (n): this number depends on the number of pixels we want to use to sample one fringe and on the spectral channel sampled, as the size of a fringe depends on the wavelength probed. As we fixed the sampling of the fringes to 2.5 pixels/fringe, the parameter n goes from 43 to 50 pixels in the H-band and from 54 to 64 pixels in the K-band (for the previous K filter; the current K’ filter gives 57 to 65 pixels). For the H-band we used n = 43, 47, and 50. For the K-band, we used n = 54, 59, and 64.
\item The spectral bandwidth parameter (l): this parameter allows us to specify the spectral band we want to work in, via the central wavelength and the bandwidth. For this study, we used the default H-band and the previous specification of the K-band. Along with the spectral resolution parameter R, this parameter determines the number of pixels over which the light is dispersed. It also plays a role in the atmospheric turbulence, whose parameters change as a function of wavelength: the longer the wavelength, the more stable the atmospheric turbulence. \end{itemize} The second program simulates the detector itself, including the sample time and incoherent integration time, simulating its performance and the tracking of the fringes. In addition, the script computes the SNR of the fringes. To compute the signal, we take the maximum of the power spectrum at the position of a fringe (3 pixels wide), for each pair of telescopes. For the noise, we compute the RMS of all the pixels in the power spectrum that are not in the area of a fringe, excluding the low frequencies where there is a great deal of power due to atmospheric noise. The parameters we varied for this second part of the simulation are: \begin{itemize} \item The magnitude of the observed star (c): this parameter allows us to probe the limiting magnitude we can observe with SILMARIL. The model used for the expected number of photons is the same as that used for TR~\#103 and the original proposal. For this parameter, we use c = 7.0, 8.0, 9.0, 10.0, and 11.0. \item The number of frames we incoherently add (s): this is equivalent to changing the sample time of the detector and can increase the SNR of the fringes, increasing the sensitivity of the instrument. The first program performs one calculation of the atmosphere every millisecond. Adding these together allows us to simulate longer sample times.
However, if we average too much, the fringes start to blur and the SNR decreases. The better the seeing, the longer we can add frames incoherently before the fringes start to blur. For this parameter, we used s = 1, 3, 6, 9, 12, and 15. \item The number of power spectra we coherently add (p): as with the sample time, increasing the number of power spectrum frames we add can increase the SNR. But if we add too many, the atmospheric noise overwhelms the signal and reduces the SNR. \item The parameters of the camera (D): these include the readout noise (RON), the excess noise factor (ENF), and the background (BKG). For these simulations, we fixed the RON to 1.0, a conservative approximation for the C-RED One camera, and the ENF to 1.4, the characterized value for the C-RED One. For the background, we used BKG = [41, 129, 408, 4082] e/s/pixel, corresponding, respectively, to the C-RED One with a MYSTIC-type second dewar; no second dewar but the geometric mean of a NIRO-type improvement and the C-RED One; just the NIRO-type improvement; and just the C-RED One. \end{itemize} We ran simulations for all combinations of the parameter values discussed above. We then plot the SNR as a function of the stellar magnitude for each set of parameters. To compute the global SNR, we take the mean SNR over the frames for each pair of telescopes and then average over the 3 pairs. On each plot, a line marks SNR = 2, the limit above which we should be able to track the fringes while observing. If the SNR is above this line, we should be able to observe the target with the set of parameters used. Here we describe the results of simulator runs for the different parameters.
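The SNR extraction described above (peak of the power spectrum in a 3-pixel window at the fringe position, divided by the RMS of the off-fringe pixels with the low frequencies excluded) can be sketched as follows. This is an illustrative toy version, not the actual simulator code; the window sizes and the synthetic fringe are our assumptions:

```python
import numpy as np

def fringe_snr(frame, fringe_freq, peak_width=3, low_cut=4):
    """Toy version of the SNR estimate described in the text: the signal is
    the maximum of the power spectrum in a 3-pixel window at the fringe
    position; the noise is the RMS of the off-fringe pixels, excluding the
    low frequencies dominated by atmospheric power."""
    ps = np.abs(np.fft.rfft(frame - frame.mean())) ** 2
    half = peak_width // 2
    signal = ps[fringe_freq - half : fringe_freq + half + 1].max()
    mask = np.ones(ps.size, dtype=bool)
    mask[:low_cut] = False                                     # drop atmospheric low frequencies
    mask[fringe_freq - half : fringe_freq + half + 1] = False  # drop the fringe peak itself
    noise = np.sqrt(np.mean(ps[mask] ** 2))                    # RMS of the off-fringe power
    return signal / noise

# Synthetic frame: a fringe at spatial frequency 10 plus random noise.
rng = np.random.default_rng(1)
x = np.arange(128)
frame = 100 + 10 * np.cos(2 * np.pi * 10 * x / 128) + rng.normal(0, 1, 128)
snr = fringe_snr(frame, fringe_freq=10)
```

For this strong synthetic fringe, the estimator returns an SNR far above the tracking limit of 2.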
\subsection{Effect of the number of frames we add incoherently (s)} The number of frames we add incoherently allows us to improve the SNR of the fringes by integrating more flux. However, if we add too many frames, the fringes start to move on the detector (at a rate set by the seeing), and we blur the fringes, decreasing the signal we want to measure. In Fig.~\ref{fig:s}, we display the SNR as a function of the magnitude. The dots are for the H-band simulations and the crosses are for the K-band simulations. The different colors represent different background levels, with the legend values in e/s/pixel. The black line shows SNR = 2, the limit above which we should be able to track the fringes (phase tracking). The different plots are for different numbers of frames added incoherently (s = 1 and 15 frames); the trends are similar for the other values of s not shown here. As expected, Fig.~\ref{fig:s} shows that increasing the number of frames added incoherently has a big impact on the SNR, raising the limiting observable magnitude from 9 to 12. In Fig.~\ref{fig:s_mag}, we show the limiting magnitude as a function of the number of frames s, for different background levels and the two spectral bands. We first see that the limiting magnitude is the same in both spectral bands for each set of parameters. We also note that the limiting magnitude increases with s, as expected. For the highest background, we need s = 15 to reach a limiting magnitude of 10; otherwise, the limiting magnitude is 9, and even 8.0 for s = 1. For the intermediate background values, we reach a limiting magnitude of 11 from s = 6 for BKG = 129 e/s/pixel and from s = 12 for BKG = 408 e/s/pixel. For the lowest background, we reach a limiting magnitude of 12 for s = 12.
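The trade-off described above, where co-adding frames integrates more flux but atmospheric phase drift blurs the fringe, can be illustrated with a toy model. All numbers here are illustrative assumptions, not values from the SILMARIL simulator:

```python
import numpy as np

def coadd_contrast(n_frames, phase_step_rad, n_pix=64, seed=0):
    """Average n_frames of a unit-contrast fringe whose phase performs a
    random walk between frames, and return the contrast of the co-added
    fringe. Larger phase steps (worse seeing) blur the fringe more."""
    rng = np.random.default_rng(seed)
    x = np.arange(n_pix)
    phases = np.cumsum(rng.normal(0.0, phase_step_rad, n_frames))
    frames = [1 + np.cos(2 * np.pi * x / 8 + p) for p in phases]
    avg = np.mean(frames, axis=0)
    return (avg.max() - avg.min()) / (avg.max() + avg.min())

stable = coadd_contrast(15, phase_step_rad=0.0)   # frozen atmosphere: no blurring
blurred = coadd_contrast(15, phase_step_rad=1.0)  # large drift: fringe washed out
```

With no phase drift the co-added contrast stays at unity; with a large drift per frame the averaged fringe contrast drops, which is the blurring that limits how far s can usefully be increased.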
\subsection{Effect of the number of frames we add coherently (p)} The number of power spectra we add should allow us to improve the SNR of the fringes by integrating more signal in the power spectrum. However, if we add too many power spectra, the fringe signal starts to move on the detector (at a rate set by the seeing), and we blur the signal in the resulting power spectrum. In Fig.~\ref{fig:p}, we display the SNR as a function of the magnitude. The dots are for the H-band simulations and the crosses are for the K-band simulations. The different colors represent different background levels, with the legend values in e/s/pixel. The black line shows SNR = 2, the limit above which we should be able to track the fringes (phase tracking). The different plots are for different numbers of power spectra added (p = 1 and 15 frames); the trends are similar for the other values of p not shown here. For the lower magnitudes (7, 8, and 9), the SNR at a given background level improves with increasing p. However, the trend is reversed at the higher magnitudes (10, 11, and 12), the SNR decreasing with increasing p. In Fig.~\ref{fig:p_mag}, we show the limiting magnitude as a function of p, for different background levels and the two spectral bands. We first see that the limiting magnitude is the same in both spectral bands for each set of parameters. We also see that the limiting magnitude decreases when p increases, which is unexpected, and that it does not vary for $\text{p} > 1$. It appears, then, that for faint stars the best value is $\text{p} = 1$. For $\text{p} = 1$, the best limiting magnitude is 12, but only for the lowest background value. The two intermediate background levels have a limiting magnitude of 11. The highest background has a limiting magnitude of 10.
For $\text{p} > 1$, the two lowest background levels have the same limiting magnitude of 11. For BKG = 408 e/s/pixel, the limiting magnitude is then 10. Finally, for the highest background level, the limiting magnitude is 9. \subsection{Effect of the seeing ($r_0$) on the SNR} Changing the $r_0$ parameter allows us to see how the magnitude limit is affected by different seeing conditions. The implementation of the AO system on all telescopes should turn bad and normal seeing conditions into mostly good seeing conditions. In Fig.~\ref{fig:r0}, we display the SNR as a function of the magnitude; the dots are for the H-band simulations and the crosses are for the K-band simulations. The different colors represent different background levels, with the legend values in e/s/pixel. The black line shows SNR = 2, the limit above which we should be able to track the fringes (phase tracking). The different plots are for different values of $r_0$; the trend is similar for $r_0$ = 5 cm, not shown here. We can see in Fig.~\ref{fig:r0} that $r_0$ does not have much effect in the K-band, as expected since the turbulence is more stable in the K-band than in the H-band. In the H-band, we see an improvement as $r_0$ increases, but the effect is mostly noticeable at lower magnitudes; at high magnitudes (fainter targets) it is too small to change the limiting observable magnitude. With normal atmospheric conditions, most background conditions result in a limiting magnitude of 10, while magnitude 11 requires a lower background and is at the limit of being observable. In Fig.~\ref{fig:r0_mag}, we show the limiting magnitude as a function of $r_0$, for different background levels and the two different spectral bands.
We can see that the limiting magnitude increases with $r_0$, meaning that we can observe fainter stars in better atmospheric conditions, as expected. For bad conditions ($r_0$ = 5 cm), the limiting magnitude for both of the lower background levels is 11; for BKG = 408 e/s/pixel, the limiting magnitude is 10; and for the highest background level, it is 9. We reach a limiting magnitude of 12 in normal and good atmospheric conditions ($r_0$ = 10 cm and $r_0$ = 20 cm, respectively) for the lowest background level. For the two intermediate background levels, we reach a limiting magnitude of 11 in the same conditions. For the highest background level, we need good atmospheric conditions to reach a limiting magnitude of 10. \subsection{Effect of the number of pixels in the fringe dimension (n) on the SNR} This parameter allows us to see the effect of the number of pixels we use to probe the fringes in the fringe direction. It is helpful to inspect the effect of changing the number of pixels used to probe a single fringe, but it is not useful to vary the number of pixels needed to probe different wavelengths (fixed for now at 2.5 pixels/fringe). The more pixels we use to probe the fringes, the more accurately we can compute the fringe contrast, but the light must be spread over more pixels, losing SNR. \noindent In Fig.~\ref{fig:n}, we display the SNR as a function of the magnitude. Dots portray the H-band simulations while the crosses are for the K-band simulations. The different colors represent different background levels, with the legend values in e/s/pixel. The black line shows SNR = 2, the limit above which we should be able to track the fringes (phase tracking). The different plots are for different numbers of pixels probing the fringes; the trend is the same for the values of n not shown here.
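The SNR penalty of spreading a fixed flux over more pixels can be sketched with a simple photon-budget model (purely illustrative; the flux and per-pixel background numbers below are assumptions, not simulator values):

```python
import math

def fringe_snr(total_photons, n_pix, ron=0.7, bkg_per_pix=10.0):
    """Photon-noise SNR for a signal of total_photons spread over n_pix
    pixels, each pixel contributing read noise and background variance."""
    variance = total_photons + n_pix * (ron ** 2 + bkg_per_pix)
    return total_photons / math.sqrt(variance)

bright_43 = fringe_snr(1e5, 43)   # bright star: a few extra pixels barely matter
bright_50 = fringe_snr(1e5, 50)
faint_43 = fringe_snr(100, 43)    # faint star: extra pixels cost SNR
faint_50 = fringe_snr(100, 50)
```

In this toy budget the bright-star SNR is nearly unchanged between n = 43 and n = 50, while the faint-star SNR drops with n, consistent with the behavior described for the faint H-band cases below.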
In Fig.~\ref{fig:n}, the number of pixels used in the fringe direction does not seem to affect the SNR in the H-band for bright stars. However, the more pixels used to probe the K-band, the higher the SNR for those same bright stars. For faint stars, on the other hand, the H-band seems to be more affected than the K-band, with the SNR dropping slightly for higher numbers of pixels in the fringe direction. \subsection{Summary of the expected performance of SILMARIL} Under ideal conditions and with the lowest background level, we should be able to observe stars up to magnitude 12 by incoherently adding 12 frames. In most conditions, co-adding only 3 frames should allow us to reach magnitude 10 for most background conditions. This limiting magnitude corresponds to an improvement of about 2 to 4 magnitudes over CLIMB, in agreement with the estimates we computed without the simulations. The best parameters in normal to good conditions are p = 1 and s = 12 for faint stars. \section{Real-time software architecture and data reduction pipeline}\label{sec:soft} To minimize software development effort and to reuse as much working code as possible, SILMARIL adopts the MIRC-X software \cite{anugu2020} for both the real-time software and the data reduction pipeline, as there are several similarities between SILMARIL and MIRC-X. \begin{itemize} \item \textbf{Detector data acquisition:} The MIRC-X software can be used as-is for the detector data acquisition and for critical instrument security and health monitoring. Both instruments use (i) the same C-RED One camera, (ii) the same data acquisition hardware (Matrox frame grabber and fiber camera-link extender system), and (iii) the same ion pump vacuum and safety system.
The data acquisition software runs on a similar computer and grabs images from the C-RED One camera in real time, using camera-link cables with a dedicated Matrox frame grabber on a low-latency Linux operating system \cite{anugu2020}. \item \textbf{Fringe Acquisition and Group-delay Tracking:} MIRC-X uses six beams, while SILMARIL uses only three beams, optimized for sensitivity. We reuse the MIRC-X group delay fringe tracking engine \cite{anugu2020}. \item \textbf{Data reduction pipeline:} We plan to use similar data acquisition sequences. For the data reduction, we plan to adapt the MIRC-X data reduction pipeline \cite{anugu2020}, written in Python, for SILMARIL. The only major difference is the change to three beams. \end{itemize} \section{Science Cases}\label{sec:sci} The gain in sensitivity provided by SILMARIL will open numerous new science cases not yet achievable and will advance the science programs already ongoing at the Array. Here we present a few science cases for which SILMARIL will be used. \subsection{AGNs} Over the last few years, the study of AGNs has been going through a transformation. In the standard picture, which has been around for more than 30 years\cite{1985ApJ...297..621A}, an obscuring torus is invoked and thought to surround the accreting supermassive black hole at the center. The physical origin of this torus is not clear, but it is assumed to be more or less static, and importantly, it unifies the two major AGN categories: those with a face-on, polar, direct view of the nucleus, called Type 1, and those with an edge-on, equatorial, hidden nuclear view, called Type 2. Recent mid-IR interferometry has shown that a major part of the mid-IR emission, believed to come from the outer warm ($\sim$300 K) part of this putative dusty torus, has a polar-elongated morphology, rather than the expected equatorially elongated structure\cite{2012ApJ...755..149H,2013ApJ...771...87H,2014A&A...565A..71L}.
In addition, this polar-elongated dusty gas is in fact UV-optically thick, since the measured IR emissivity is a few tenths, consistent with directly illuminated UV-optically-thick gas. Furthermore, at the same spatial scale, ALMA finds a polar outflow~\cite{2016ApJ...829L...7G}, likely an inward extension of the 10--100 pc scale bipolar outflow directly resolved by HST\cite{2002AAS...200.0510C}. The new scenario is that the torus is an obscuring and outflowing structure, probably driven by radiation pressure on dust grains from the anisotropic, polar-strong radiation field of the central accretion disk. This dusty wind likely provides strong feedback to the host galaxy, resulting in the correlation between the bulge and central black hole masses in galaxies\cite{2000ApJ...539L...9F}. If this picture is correct, the wind must be launched at the innermost radius where dust grains barely survive, that is, in the dust sublimation region. Does this region show a substantial scale height, leading to a polar-elongated morphology at this innermost radius as well? Since the dust sublimation temperature is $\sim$1500 K, the region must be brightest at $\sim$2 $\mu$m, i.e., in the K-band. Up to now, the region has only been marginally resolved by interferometers with $\sim$100 m baselines\cite{2011A&A...536A..78K,2012A&A...541L...9W,2020A&A...635A..92G,2021ApJ...912...96L}. With the CHARA Array’s long baselines of up to 330 m, we will be able to map the elongation of the structure at a sub-mas, or 0.1 pc, scale in the near-IR for the first time. That will be a fundamental test of the new picture, and a major breakthrough if any elongation is detected at this critical scale. There is a closely related, but very different, aspect of this nuclear region study. A supermassive black hole is now believed to reside in essentially every galaxy.
Naturally, after collisions and mergers of these galaxies, a binary black hole would form at the center. Theoretical studies with numerical simulations have suggested that the binary separation quickly shortens by ejecting the surrounding stars, but could stall at the 0.1 pc scale, known as the ``final parsec problem''\cite{2005LRR.....8....8M}. This is consistent with recent Pulsar Timing Array constraints, where the low-frequency gravitational waves expected from supermassive black hole mergers are not being detected\cite{2015Sci...349.1522S}. A big question is: are these massive binaries really there? We simply do not have direct, spatially resolved constraints at this 0.1 pc scale. However, the CHARA Array can drastically change this situation. We might identify a binary structure directly at the center of an AGN for the first time. We have already demonstrated that we can detect fringes on the brightest Type 1 AGN, NGC 4151, and resolve its geometry on long 250 m baselines using CLASSIC (Kishimoto, Anderson, ten Brummelaar et al.~submitted). Using the blind observing technique, a sensitivity improvement of at least $\sim$1.5 magnitudes should allow us to observe a number of AGNs and help resolve several of these issues (Fig.~\ref{fig:AGNimp}). \subsection{Angular Diameter of Stars} Angular diameter measurements are used to determine empirical surface brightness relations for predicting radii based on photometric colors~\cite{2014AJ....147...47B,2018MNRAS.473.3608A, 2021A&A...652A..26S}. Combined with a precise Gaia parallax\cite{GAIADR2} and a measurement of the bolometric flux, the angular diameter can be used to derive the physical radius and effective temperature of a star. At the low-mass end of the main sequence, predictions from evolutionary models tend to overestimate the temperatures of stars by 3\% and underestimate their radii by 5\%~\cite{2012ApJ...757..112B}.
Comparing radii and temperatures with evolutionary models provides a way to measure the ages of nearby moving groups~\cite{2011ApJ...743..158S,2015ApJ...813...58J,2018ApJ...858...71S,2022AJ....164...34M} and exoplanet host stars~\cite{2009ApJ...701..154B}. Moreover, these fundamental parameters are used to refine the location of the habitable zone around exoplanet host stars~\cite{2011ApJ...729L..26V,2014MNRAS.438.2413V,2015ApJ...800..115T} and to infer the radius of transiting exoplanets based on the stellar diameter and eclipse timing~\cite{2011ApJ...740...49V}. With the successful launch of the TESS mission, the number of exoplanet hosts whose radii can be measured directly by CHARA will increase dramatically; there are $\sim$6500 stars in the TESS input catalog within the resolution and sensitivity limits of CLASSIC. Targets with solar-like oscillations can be used to calibrate asteroseismic scaling relations for measuring the masses and radii of stars~\cite{2012ApJ...760...32H}. The improvement in the sensitivity limit will not increase the number of TESS targets accessible, because stars with H $>$ 7 are too small to resolve even with the CHARA Array. However, improvements in precision and efficiency will have a dramatic effect on the quality and number of diameters measured. \subsection{Disks around Young Stars} The CHARA Array is unmatched for studying circumstellar disks around young stellar objects (YSOs) because it possesses the unique combination of long baselines ($>$200 m) and sensitive instrumentation in the infrared. CHARA was the first interferometer to confirm the detection of hot gas inside the dust sublimation radius~\cite{2008PhDT........11T,2008SPIE.7013E..0UT}. Most young stellar objects are faint (H, K $>$ 7) and require the acquisition of low-contrast fringes.
With the increased sensitivity of ESO’s Very Large Telescope Interferometer (VLTI), a large number of YSO disks have been observed in recent surveys conducted with VLTI/PIONIER~\cite{2017A&A...599A..85L} and VLTI/GRAVITY \cite{2019A&A...632A..53G}. However, VLTI is limited by the spatial resolution offered by its maximum baseline of 130 m. With nearly triple the resolution of VLTI, CHARA provides opportunities to study the inner structure of disks around YSOs. With the improved sensitivity brought by SILMARIL, a larger survey of the inner AU of protoplanetary disks will reveal the importance of gas emission and stellar winds, and shed light on puffed-up inner walls. It should also be possible to detect accreting protoplanets embedded in the disk~\cite{2015Natur.527..342S}. An improvement in sensitivity of 1--2 magnitudes with SILMARIL will increase the sample size by a factor of 2--3. \subsection{Winds from Massive Stars} The extreme luminosity of massive stars drives stellar winds that may carry away a large fraction of a star’s mass over its lifetime. The highest mass loss rates are found among the Luminous Blue Variables (LBVs), and Richardson et al. (2013)~\cite{2013ApJ...769..118R} used the CHARA Array to resolve the wind outflow in the nearest LBV, P Cygni. Two other LBVs, HD 168607 and V446 Scuti, will become accessible with the greater sensitivity, as well as dozens of distant Wolf-Rayet stars. Many of these objects experienced episodes of very large mass loss or systemic mass loss in a binary system, such as the pinwheel outflow in WR~104~\cite{2008ApJ...675..698T}. CHARA observations will help map these tracers of mass loss processes. \subsection{Interacting Binary Systems} The first instance of Roche lobe overflow in massive binaries may result in large-scale mass loss from the system and the creation of a circumbinary disk, such as the huge torus surrounding RY Scuti~\cite{2011MNRAS.418.1959S}.
The W Serpentis and FS CMa binaries probably represent systems in this intense stage of mass loss, and several dozen targets will become accessible with a more sensitive detector. Binaries that survive the transformation of the first evolving star into a neutron star or black hole remnant will form a massive X-ray binary (MXRB) system, and some dozen MXRBs will be accessible with better sensitivity, allowing us to explore their mass loss processes. For example, the GRAVITY Collaboration at VLTI recently resolved some of the inner structure of the MXRB SS 433, and CHARA observations would probe deeper into the central engine that launches the relativistic jets~\cite{2017A&A...602L..11G}. Massive stars are often found in multiple systems, and any wide companions of MXRBs discovered by CHARA would help establish the reflex orbits of the visible mass donor stars in these systems~\cite{2014PASA...31...16M}. CHARA observations would also probe mass transfer processes in wide symbiotic binaries~\cite{2007BaltA..16..104F} and end-of-life mass loss into circumbinary disks surrounding post-AGB stars~\cite{2018arXiv180900871V}. The gain in sensitivity with SILMARIL would open these targets to investigation with CHARA. \subsection{Young Binary Systems} Spatially resolving the orbits of double-lined spectroscopic binaries yields the component masses and the distance to the system. Many spectroscopic binaries in nearby star forming regions (Taurus, Orion, Ophiuchus) with ages $<$ 10 Myr are just beyond the reach of the current sensitivity limits of the CHARA Array. Figure~\ref{fig:specbin} shows a histogram of known spectroscopic binaries in nearby star forming regions accessible to the CHARA Array. At the current magnitude limit of the MIRC-X combiner in excellent seeing conditions (H $<$ 7.5), only the brightest binaries in most of these regions are resolvable.
To extend to lower-mass members, where evolutionary tracks are most discrepant at young ages\cite{2017ApJ...841...95S}, an improvement in sensitivity is required. Dynamical masses can be used to calibrate models of stellar evolution at young ages. Moreover, for a given set of tracks, dynamical masses across a range of spectral types provide a strong constraint for age dating of clusters, which impacts our understanding of the history and chronology of planet formation in these regions. \subsection{Transient Events} In the era of large time-domain surveys like LSST and the Zwicky Transient Facility, the CHARA Array can be used to follow up bright transient events and measure the spatial extent of the source. This includes the ability to resolve the angular expansion and development of asymmetries during the early stages of nova explosions~\cite{2014Natur.515..234S}. Another promising application would be to measure the image size and centroid displacement in gravitational microlensing events; this would break the degeneracy between the separation and mass ratio of a planet, brown dwarf, or black hole companion relative to the host lens star\cite{2016MNRAS.458.2074C,2019ApJ...871...70D}. \section{Conclusions}\label{sec:concl} Thanks to the new e-APD technology, optical long-baseline interferometry is able to attain better sensitivity. With a design that uses both proven concepts, such as a minimum number of optics, and new concepts, such as the edge filter, we can push for even more sensitivity with SILMARIL. The expected performance, once achieved with an external dewar, should bring the limiting magnitude to 11 in both the H- and K-bands in average atmospheric conditions, and to 12 in the best atmospheric conditions. This will open a number of new science cases not yet achievable at the CHARA Array and extend the range of already ongoing science. The implementation of SILMARIL is ongoing, with a first on-sky test scheduled for September 2022 using an engineering camera. 
The C-RED One camera is expected to be delivered around the end of 2022. Once the efficiency of the instrument has been demonstrated on sky, we will work on the implementation of the external dewar, which will push its performance further, and on the second set of three beams, to make use of all six telescopes of the CHARA Array. \appendix \acknowledgments % The SILMARIL project at the CHARA Array is supported by the National Science Foundation under Grant No. AST-1909858. Institutional support has been provided from the GSU College of Arts and Sciences and the GSU Office of the Vice President for Research and Economic Development. \bibliography{report} % \bibliographystyle{spiebib} %
Title: Moon-packing around an Earth-mass Planet
Abstract: All four giant planets in the Solar System host systems of multiple moons, whereas the terrestrial planets host at most two moons. The Earth can capture small asteroids as temporary satellites, which raises the question of how many moons could stably orbit the Earth, or an Earth-mass exoplanet. We perform a series of N-body simulations of closely-spaced, equal-mass moons in nested orbits around an Earth-mass planet orbiting a Sun-like star. The innermost moon begins near the host planet's Roche radius, and the system is packed until the outermost moon begins near the stability limit for single moons. The initial spacing of the moons follows an iterative scheme commonly used for studies of compact planetary systems around single stars. For 3-moon systems, we generate MEGNO maps to delineate periodic and chaotic regions and to identify the destabilizing MMRs. Our calculations show that the maximum number of moons that can maintain stable orbits in a tightly-packed environment depends on the assumed satellite mass (Ceres-, Pluto-, or Luna-mass). Through our N-body simulations, we find stable configurations for up to 7 $\pm$ 1 Ceres-mass, 4 $\pm$ 1 Pluto-mass, and 3 $\pm$ 1 Luna-mass moons. However, outward tidal migration will likely play a substantial role in the number of moons on stable orbits over the 10 Gyr stellar lifetime of a Sun-like star.
https://export.arxiv.org/pdf/2208.03604
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} planets and satellites: dynamical evolution and stability -- Earth -- Moon \end{keywords} \section{Introduction} \label{sec:intro} In our Solar System, most planets host multiple satellites. Notably, the giant planets in the outer Solar System host multiple-moon systems. The only rocky planets with natural satellites are Earth and Mars, each with $n_{moons} \le 2$. Given the discrepancy in the total number of moons between the giant and terrestrial planets, they likely experienced different formation mechanisms and orbital evolution processes. Several satellite formation theories have been proposed for both regular and irregular satellites around planets. The commonly accepted mechanism for regular satellite formation around giant planets is accretion from a circumplanetary disk. \citet{Canup2006} found a planet-disk mass ratio limit of $\sim 10^{-4}$, which is 2-3 orders of magnitude smaller than the mass fraction between the Earth and Moon. These results are consistent with simulations of disks with lower initial mass, which form satellites that are not massive enough to clear out their orbits, but massive enough to begin outward migration due to gravitational interaction with the disk \citep{Hyodo2015} and become locked in mean-motion resonances. Interactions between giant planets and circumplanetary disks heavily influence the processes of satellite formation, where the physical parameters that shape disk accretion and evolution have been constrained \citep[][and references therein]{Coradini2010}. Specifically, the pressure and temperature profiles in the circumplanetary nebulae shaped the chemical gradients in the disk. These chemical gradients set the composition of the satellitesimals, which represent the building blocks of the present regular satellites. 
Additionally, further studies support the formation of natural satellites around Jupiter and Saturn within the framework of a quasi-steady state system \citep{Batygin2020}. A large-scale meridional flow of gas inside the planetary Hill sphere develops in the later stages of planet formation, feeding a circumplanetary disk that expels gaseous material back into the parent nebula to maintain equilibrium in the system. Recent studies have attempted to explain the origin of the Galilean moons around Jupiter. \citet{Cilibrasi2021} studied less massive satellite systems through 3D radiative simulations and found that only $\sim 15\% $ of the resulting population is more massive than the Galilean satellites, indicating that tidal migration and resonant capture are uncommon. \citet{Madeira2021} reproduced the system of Galilean satellites in a gaseous circumplanetary disc around Jupiter. However, their satellites have moderately eccentric orbits ($\sim0.1$), unlike the real satellites today. They propose a pre-existing resonance between Callisto and Ganymede that was broken over time via divergent migration due to tidal planet-satellite interactions. These same effects further damped the orbital eccentricities of these satellites down to their current values ($\lesssim 0.01$). Satellites can also be captured into the gravitational field of a planet, resulting in large semimajor axes, eccentricities, and inclinations, and often in retrograde orbits. These irregular satellites offer important insight into the formation processes of regular satellites, which likely formed in prograde rotating accretion disks. Irregular satellites can be captured through dissociation of a planetesimal binary in the planet's gravity field \citep{Vokrouhlicky2008}; Triton, for example, was likely captured by this process. \cite{Neto2006} show that satellites can be captured in prograde orbits (e.g., Leda, Himalia, Lysithea, and Elara by Jupiter) as a gas giant grows within the planetesimal disk. 
Planet packing studies \citep{Smith2009,Quarles2018,Lissauer2021,Bartram2021} incorporate two or more Earth-mass planets orbiting a Sun-like star in low-eccentricity and low-inclination orbits. Two planets in circular orbits can be Hill stable if the fractional orbital separation is greater than 2.4($\mu_1$ + $\mu_2$)$^{1/3}$, where $\mu_1$, $\mu_2$ are the planet-Sun mass ratios \citep{Gladman1993}. Also, two small planets are stable if the initial semimajor axis difference ($\Delta$) exceeds 2$\sqrt{3}$ mutual Hill radii, whereas systems of more than two planets are stable for $\Delta$ $\gtrsim$ 10 \citep{Chambers1996}. Similar studies of two-planet systems (equal mass, coplanar, circular orbits) exhibited stable chaos beyond the $2\sqrt{3}R_H$ separation \citep{Marzari2014}. Closely spaced five-planet systems can have shorter lifetimes when the planetary orbits begin with a non-zero initial eccentricity ($e_p = 0.05$), in contrast to initially circular orbits \citep{Gratia2021}. Most $({\sim}72\%)$ closely packed five-planet systems with inclined orbits have a prompt collision after their first encounter, while a few $({\sim}1\%)$ survive up to $10^{7.5}$ orbits of the innermost planet \citep{Rice2018}. \cite{Funk2010} studied hypothetical ultra-compact systems of up to 10 planets with Neptune-like masses (17 $M_\oplus$) within 0.26 AU and showed that some systems were stable even with a perturbing gas giant between $0.3-0.5$ AU. The instability in such systems arises because the energy and angular momentum of the planets are not individually conserved under the perturbations of the additional planet(s). The stability time varies linearly with the initial orbital spacing; moreover, retrograde planets can be packed twice as closely together as prograde planets and have significantly longer stability times \citep{Smith2009}. On the other hand, planets on circumstellar orbits in binary systems require wider spacing than planets around single stars \citep{Quarles2018}. 
To further understand the dynamical stability of multiple moons, we follow the studies of planet packing in circumstellar orbits of binary star systems due to the similarities in orbital architectures (i.e., natural inner and outer boundaries). \citet{Domingos2006} derived a fitting formula for the stability limit of moons following previous studies of planet stability in binary star systems \citep{Rabl1988,Holman1999}. The fitting formula defines a critical semimajor axis $a_c$ in units of the planetary Hill radius $R_{H,p}$ of $\sim 0.5\ R_{H,p}$ or $\sim 0.9\ R_{H,p}$ for satellites around giant planets in prograde and retrograde orbits, respectively. Eccentric orbits of either the planet or moon reduce these estimates in a nearly linear manner. \cite{yagirl2020} refined the stability limit for a moon in a prograde orbit to a smaller fraction of the Hill radius (0.4$R_{H_p}$) through a series of N-body simulations that considered a wider range of initial mean anomaly for the satellite. \citet{Quarles2021} revisited the stability limit for retrograde orbits, where they showed a limit of $0.67 R_{H_p}$ and identified how the outer stability limit for a putative exomoon in the $\alpha$ Centauri AB system varies due to a forced planetary eccentricity. Earth has captured small bodies in temporary orbits, where these briefly captured rocks (quasi-satellites) either head into the atmosphere to become meteors, or orbit the Earth until they reach the escape velocity needed to leave Earth's sphere of influence. The recently detected asteroid 2020 CD$_3$, a quasi-satellite that remained in orbit for at least three years, is a prime example of this type of capture. \citet{Granvik2012} computed the natural Earth satellite capture probability from the near-Earth object (NEO) population as a function of a NEO's heliocentric orbital elements. This numerical study included 10 million virtual asteroids, of which only 18,000 were captured in Earth orbit. 
They found that the captured satellites make, on average, $\sim 3$ revolutions around Earth in 9 months. The temporary capture of quasi-satellites around Earth, together with the fact that giant planets naturally host multiple moons, prompts a sensible question: ``how many moons can stably orbit the Earth and how massive can they be?'' In this paper, we perform a series of N-body simulations of closely-spaced, equal-mass moons in nested orbits around an Earth-mass planet orbiting a Sun-like star to determine the maximum number of moons that could stably orbit the Earth, considering three different prototype masses (Ceres-, Pluto-, and Luna-mass). We use the term \emph{Luna} to identify a natural satellite that is similar in mass and radius to Earth's moon. The methodology of our numerical simulations is presented in Section \ref{section:theory}, including the design of the system architectures used to simulate the multiple moons (up to 9) in a Sun-Earth system. The results in Section \ref{section:results} consider Ceres-mass, Pluto-mass, and Luna-mass moons to identify the most stable orbital configurations and the maximum number of moons orbiting an Earth-mass planet. A summary of our results and a discussion of the broader context of multiple-moon systems are given in Sections \ref{section:results} and \ref{section:conclusions}. \section{Methodology} \label{section:theory} Earth and Mars are the only terrestrial planets with moons, where Earth hosts a single moon (the Moon or Luna) and Mars has two moons (Phobos and Deimos). Moon formation is a stochastic process, where the amount of material available largely dictates how many moons could form, but the goal of this work is to find the maximum number of moons that \emph{could} exist with respect to orbital stability constraints. 
\subsection{Numerical Simulations using REBOUND} We use the general N-body orbital evolution software \texttt{REBOUND} \citep{Rein2012} to examine the orbital stability of many equal-mass satellites orbiting an Earth-like planet, which in turn orbits a Sun-like star. \texttt{REBOUND} provides two algorithms (WHFast and IAS15) that are well-suited to evolve the hierarchical configuration of stars, planets, and moons. The accuracy of the numerical simulations using each algorithm is not substantially different when the initial timestep is set to 5\% of the innermost moon's initial orbital period and an 11th-order symplectic corrector is used \citep{Rein2015} to minimize the energy error for WHFast. Therefore, we use the WHFast integrator for the sake of numerical expediency. Each simulation is evaluated up to $10^7$ times the period of the innermost moon $P_1$. The timescale for significant orbital evolution due to tides is much longer, and thus we do not consider tidal effects in our N-body simulations. Instead, we use a secular tidal model to evaluate the extent of the moons' outward migration. An initial configuration is deemed stable when all the moons initially orbiting the host planet remain at the end of the simulation ($10^7$ $P_1$). The dynamical timescale for moon systems is very short, and our integration timescale greatly exceeds the secular timescale for the Sun's forced eccentricity ($<100$ years; \citealt{Andrade-Ines2017}). As a result, systems far from stability boundaries will remain stable for billion-year timescales. Indeed, our own Moon will eventually evolve onto an unstable orbit \citep{Sasaki2012}, but this timescale is longer than the main sequence lifetime of the Sun, which renders the issue of stability moot due to the possible engulfment of the Earth-Moon system. 
Unstable initial conditions are those that result in a close approach (within the Roche radius) to the host planet, collisions between neighboring moons, or a moon's apocenter extending beyond the outer stability limit measured in terms of the planet's Hill sphere ($Q_{sat} > 0.4R_{H,p}$; \citealt{yagirl2020}). Although collisions are possible in our simulations, they are rare; scattering events that transport a moon beyond the outer stability limit represent the vast majority of outcomes. An individual simulation is terminated once an instability occurs, which defines the simulation lifetime $t_{sim}$. We scale the simulation lifetime by the orbital period of the innermost moon $T_1$ to obtain the number of orbits completed by the innermost moon, $N_1 = t_{sim}/T_1$. In all of the simulations, the host planet begins on an orbit that is identical to the Sun-Earth system using the JPL Horizons lookup feature of \texttt{REBOUND}, so that $a_p \approx 0.999$ AU and $e_p \approx 0.0167$. As a result, the Sun perturbs each moon's orbit and induces a small forced eccentricity \citep{Andrade-Ines2017,Quarles2021}. \subsection{System Architecture and Formulation of Orbital Spacing} \label{sec:architecture} Although we neglect the long-term orbital effects of tides, tidal forces do place a lower limit on how close a smaller body (e.g., planet or moon) can orbit its parent body (e.g., star or planet). Interior to the Roche limit, the tidal force overcomes the orbiting body's self-gravity and disintegrates it. 
For a moon with a mass $m_{sat}$ and radius $r_{sat}$ orbiting a planet with mass $m_p$ and radius $r_p$, the Roche radius (via the fluid definition) is given as, \begin{equation} \label{eqn:Roche1} R_{Roche} \approx 2.44 ({m_p/m_{sat}})^{1/3}r_{sat}, \end{equation} or \begin{equation}\label{eqn:Roche2} R_{Roche} \approx 2.44 ({\rho_p/\rho_{sat}})^{1/3}r_p, \end{equation} which depends on the bulk densities of the planet $\rho_p$ and moon $\rho_{sat}$ through $m_p/m_{sat} = (\rho_{sat}/\rho_p)(r_p/r_{sat})^3 $. In the three-body problem, the Hill sphere (or radius) defines a region of space where a planet's gravity dominates over the host star's pull. To first approximation in the planetary eccentricity, the Hill radius for a moon is truncated at the host planet's pericenter by, \begin{equation} \label{eqn:Hill_radius} R_{H,p} = a_p(1-e_p)\left(\frac{m_p}{3M_\star}\right)^{1/3}, \end{equation} which depends on the planet's semimajor axis $a_p$, eccentricity $e_p$, mass $m_p$, and the host star's mass $M_\star$. In the case of large moons, Equation \ref{eqn:Hill_radius} requires modification by replacing the planet mass with $m_p^\prime$, the total mass of $i$ moons added to the planet mass (i.e., $m_p^\prime = m_p + i m_{sat}$). The Hill radius is a theoretical point of stability at an instant in time, where many numerical simulations have shown that the outer stability limit actually lies within about half of the Hill radius \citep{Domingos2006, yagirl2020}. Each of the moons begins on a circular, coplanar orbit around an Earth-like planet. The initial semimajor axis of each moon is determined by a unit-less spacing parameter $\beta$ for each simulation. A similar procedure has been used for the study of planet packing around single stars \citep{Chambers1996,Smith2009,Obertas2017} and in binary star systems \citep{Quarles2018}. 
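The boundary radii above can be evaluated directly. Below is a minimal Python sketch of Equations \ref{eqn:Roche2} and \ref{eqn:Hill_radius}; the Earth bulk density (5.51 g cm$^{-3}$) and the Earth-Sun mass ratio ($3.0035\times10^{-6}$) are assumed values not stated in the text, so the recovered $a_1 = 2R_{Roche}$ agrees with Table \ref{tab:Orb_Param} only to the percent level.

```python
import math

AU_KM = 1.495978707e8      # astronomical unit in km
R_EARTH_KM = 6371.0        # Earth volumetric mean radius (from the text)
RHO_EARTH = 5.51           # Earth bulk density, g/cm^3 (assumed)

def roche_radius_km(rho_p, rho_sat, r_p_km=R_EARTH_KM):
    """Fluid Roche radius: R_Roche ~ 2.44 (rho_p/rho_sat)^(1/3) r_p."""
    return 2.44 * (rho_p / rho_sat) ** (1.0 / 3.0) * r_p_km

def hill_radius_au(a_p=0.999, e_p=0.0167, mass_ratio=3.0035e-6):
    """Hill radius truncated at the planet's pericenter (mass_ratio = m_p/M_star)."""
    return a_p * (1.0 - e_p) * (mass_ratio / 3.0) ** (1.0 / 3.0)

# Innermost orbit a_1 = 2 R_Roche for a Ceres-density (2.162 g/cm^3) satellite:
a1_ceres_au = 2.0 * roche_radius_km(RHO_EARTH, 2.162) / AU_KM
print(f"a_1 (Ceres) = {a1_ceres_au:.6f} au")        # Table 1 lists 0.000288 au
print(f"0.4 R_H,p   = {0.4 * hill_radius_au():.6f} au")
```

The same call with the Pluto or Luna densities from Table \ref{tab:Orb_Param} reproduces the other two $a_1$ values.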
The spacing parameter $\beta$ is a normalized separation between two neighboring orbits and uses the \emph{mutual Hill radius} $R_{H,m}$ between adjacent moons for the normalization. The mutual Hill radius is calculated using the total mass that lies interior to the $i$-th moon, $\widetilde{M}_i = m_p + (i-1)m_{sat,i}$. The mutual Hill radius for two consecutive moons with masses $m_i$ and $m_{i+1}$ is defined by: \begin{equation}\label{eqn:Mutual_Hill} R_{H,m} = (a_i + a_{i+1})X \;\; {\rm and} \;\; X = \frac{1}{2}\left[\frac{m_i+m_{i+1}}{3\widetilde{M}_i}\right]^{1/3}, \end{equation} where a recurrence relation defines the semimajor axis of the $(i+1)$-th outer moon using the semimajor axis of the inner moon $a_i$, the input parameter $\beta$, and $X$ (from Equation \ref{eqn:Mutual_Hill}) as \begin{equation}\label{eqn:semi} a_{i+1} = a_i\left(\frac{1+\beta X}{1-\beta X}\right). \end{equation} The above equations can describe the mutual Hill radius for a wide range of orbital architectures. In our problem, the $m_i + m_{i+1}$ factor can be replaced with $2m_{sat}$ because we use equal-mass moons. The initial system setup includes the Sun, Earth, and a single moon with its semimajor axis $a_1$ at 2 $R_{Roche}$. The positions of the subsequent moons (up to 9) are then prescribed using Equation \ref{eqn:semi} for a chosen value of $\beta$. Following \cite{Smith2009} and \cite{Quarles2018}, the initial mean anomaly is set using multiples of the golden ratio through $2\pi i \phi$ radians = $360i\, \phi$ degrees, where the golden ratio is $\phi = (1 + \sqrt{5})/2$. Using the golden ratio in this context allows us to add the moons to the system so that no pair of moons is initialized at conjunction. This also helps to avoid mean motion resonances (MMRs) between moons, because the first- and second-order MMRs can reduce a system's lifetime \citep{Quarles2018}. 
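The setup described above can be sketched in a few lines of Python; the function names are illustrative, masses are expressed in units of the planet mass, and the equal-mass simplification $m_i + m_{i+1} = 2m_{sat}$ is used.

```python
import math

def nested_orbits(a1, beta, n_moons, m_sat, m_p=1.0):
    """Semimajor axes from the recurrence a_{i+1} = a_i (1 + beta*X)/(1 - beta*X),
    where X uses the interior mass M~_i = m_p + (i-1) m_sat."""
    axes = [a1]
    for i in range(1, n_moons):
        m_interior = m_p + (i - 1) * m_sat    # total mass interior to the i-th moon
        X = 0.5 * (2.0 * m_sat / (3.0 * m_interior)) ** (1.0 / 3.0)
        axes.append(axes[-1] * (1.0 + beta * X) / (1.0 - beta * X))
    return axes

def initial_mean_anomalies_deg(n_moons):
    """Golden-ratio phasing: moon i starts at 360*i*phi degrees (mod 360)."""
    phi = (1.0 + math.sqrt(5.0)) / 2.0
    return [(360.0 * i * phi) % 360.0 for i in range(1, n_moons + 1)]

# Example: 8 Ceres-mass moons (m_sat = 0.00015 M_earth) spaced with beta = 6
axes = nested_orbits(a1=0.000288, beta=6.0, n_moons=8, m_sat=0.00015)
print(["%.6f" % a for a in axes])
print(initial_mean_anomalies_deg(8))
```

With these numbers each orbit is $\approx 1.32\times$ wider than the previous one, so the spacing between adjacent moons grows outward, consistent with the architecture shown in Fig. \ref{fig:1}.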
Figure \ref{fig:1} illustrates the orbital architecture for 8 Ceres-mass moons, including their initial angular positions (in Fig. 1a). The color scheme for the moon index is used consistently throughout this paper. In this color scheme, the Earth is represented by cyan and the orbital dots (red, blue, green, ..., gray) denote each moon in order from inner to outer. The time variation of a particular moon follows the same color-code in later sections. The inner boundary set by Earth's Roche radius $R_{Roche}$ and the outer boundary set by the stability limit for a single moon ($0.4R_H$) are indicated by the dashed orange circles. The axes' units in Fig. \ref{fig:1} are converted to Hill radii to provide a logical representation of the Earth's Hill sphere. Since the mutual Hill radius of each successive pair of moons is larger, the orbital spacing increases by $\sim$1/3 from one orbit to the next going from the innermost to the outermost orbit. The displayed orbital spacing and the number of orbits between the stability boundaries are calculated assuming $\beta$ = 6. Figure \ref{fig:1}b is the projection of the Earth and the 2 innermost moons that are drawn within the gray box in Fig. \ref{fig:1}a. This schematic highlights geometrically how the spacing parameter $\beta$ separates two consecutive moons. The Hill radius of each moon individually ($R_{H1}$ and $R_{H2}$) and their mutual Hill radius ($R_{H,m}$) are shown scaled by the planet's radius $R_p$. The semimajor axis of each moon has a scale break so that all the bodies can fit on the page. The central body (brown) represents the total moon mass ($m_1$ + $m_2$) with the mutual Hill radius $R_{H,m}$. A similar setup is employed when we consider larger masses for the moons (Pluto-mass and Luna-mass). 
Note that the mutual Hill radius scales with the assumed mass of the moons, where increasing the moons from a Ceres-mass to a Pluto-mass also increases the mutual Hill radius by a factor of $(m_{Pluto}/m_{Ceres})^{1/3} \approx 2.4$. The mutual Hill radius between Luna-mass moons is $\sim4.3\times$ larger than between Ceres-mass moons. As a result, more massive moons will necessarily be limited to smaller values of $\beta$ so that the outermost moon does not exceed the outer stability limit. \subsection{Initial System Parameters} \label{sec:parameters} The number of moons that an Earth-like planet can host depends on the assumed satellite masses and their spacing, which modulate the gravitational interactions between the satellites. We use three categories (Ceres, Pluto, or Luna) as prototypes for different sized moons in terms of their mass and radius. These satellite prototypes are chosen because they represent the most massive object in the asteroid belt (Ceres) and the most massive object in the Kuiper belt (Pluto). The Moon (Luna) is used due to its large relative mass and size among the natural satellites. Phase-lag, or constant-Q, tidal models suggest that satellites more massive than those we consider can escape an Earth-like planet through outward tidal migration \citep[][see their Fig. 9]{Quarles2021}, and thus we limit our study to satellite-planet mass ratios $\lesssim 0.02$. Probing smaller masses runs into problems, where we would need to consider non-gravitational effects (e.g., the Yarkovsky effect or Poynting-Robertson drag). Our study is limited to large and massive objects so that such effects are negligible and can be ignored. The mean density differs among the three bodies, which results in differences in the Roche radius and hence the semimajor axis of the innermost satellite. Table \ref{tab:Orb_Param} provides the initial values used to define the Roche radius that scales the innermost satellite orbit. 
Starting the innermost satellite at the Roche radius could bias our results when that satellite's eccentricity evolves and its pericenter is lowered (i.e., $q_1<R_{Roche}$). We therefore begin the innermost satellite on an initially coplanar, circular orbit with a semimajor axis that is 2 times the Roche radius ($a_1 = 2R_{Roche}$). To define the first trial value $\beta_{min}$ in the orbital spacing of the satellites, we use the dynamical results from previous planet packing studies \citep[e.g.,][]{Gladman1993,Chambers1996,Smith2009,Obertas2017,Quarles2018} that show a minimum spacing ($\beta_{min}=2\sqrt{3}$), where smaller values are unstable (and chaotic) due to the overlap of first-order MMRs \citep{Wisdom1980}. For $\beta\gtrsim\beta_{min}$, there is an expected transition regime up to a critical value $\beta_{crit}$ that marks the start of a broad plateau of stable configurations \citep{Obertas2017,Lissauer2021}. The extent of the plateau is expected to change slightly beyond our integration timescale, especially near the MMRs and near the innermost and outermost stable values of $\beta$, due to stochastic encounters. However, this does not exclude the existence of stable conditions within such plateaus. \cite{Quarles2018} showed for binary systems that a maximum value $\beta_{max}$ signifies another transition regime, but from stable to unstable due to MMRs with an external perturber. We adapt the results from \cite{Quarles2018} (see their Eqn. 3) to calculate the maximum $\beta$ through the following: \begin{equation} \label{eqn:beta_max} \beta_{max} = \left(\frac{(a_N/a_1)^{\frac{1}{N-1}}-1}{(a_N/a_1)^{\frac{1}{N-1}}+1}\right) \left(\frac{12m_p}{m_i}\right)^{1/3}, \end{equation} which depends on the number of moons $N$, the innermost orbit $a_1$, the outermost orbit $a_N$, the planet's mass $m_p$, and each satellite's mass $m_i$. For satellite systems with small $N$, the value of $\beta_{max}$ can be large and thus, we only iterate up to $\beta = 10$. 
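Equation \ref{eqn:beta_max} is straightforward to evaluate against Table \ref{tab:beta_max}. A minimal sketch, assuming the outermost orbit sits at the single-moon stability limit, $a_N = 0.4R_{H,p} \approx 0.00393$ au (this value is inferred from the text rather than tabulated, so the comparison holds to $\sim$1\%):

```python
def beta_max(n_moons, a1, aN, m_sat, m_p=1.0):
    """Maximum spacing parameter (adapted from Quarles et al. 2018, Eqn. 3).
    Masses in units of the planet mass; a1, aN in any common unit."""
    r = (aN / a1) ** (1.0 / (n_moons - 1))
    return (r - 1.0) / (r + 1.0) * (12.0 * m_p / m_sat) ** (1.0 / 3.0)

# Ceres-mass moons: a_1 = 0.000288 au (Table 1), a_N = 0.4 R_H ~ 0.00393 au (assumed)
for n in (3, 5, 10):
    print(n, round(beta_max(n, 0.000288, 0.00393, 0.00015), 2))
```

The same function with the Pluto-mass inputs ($a_1 = 0.000299$ au, $m_{sat} = 0.0022$) recovers the second column of Table \ref{tab:beta_max}.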
As the number of moons $N$ increases, we can use Eqn. \ref{eqn:beta_max} to verify whether an instability transition occurs. Each of our satellite prototypes (Ceres, Pluto, or Luna) is varied in the initial $\beta$ starting from $2\sqrt{3}\approx 3.5$ up to 10 in steps of 0.01. The spacing parameter $\beta$ increases the semimajor axes of subsequent moons and the associated orbital periods. As a result, the ratio of orbital periods between a pair of moons can start as a near-integer ratio and form an MMR. Although we take steps to avoid MMRs through the initial phasing of the moons, some configurations can still enter into an MMR at least temporarily. To identify the expected locations of MMRs as a function of $\beta$ \citep{Obertas2017}, we use the semimajor axis ratio $a_{i+1}/a_i$ from Eqn. \ref{eqn:semi} and apply Kepler's 3rd law to get \begin{equation} \label{eqn:MMR} \frac{P_{i+1}}{P_i} = \left(\frac{1+\beta X}{1-\beta X}\right)^{3/2}, \end{equation} for an adjacent pair of moons. Equation \ref{eqn:MMR} provides the expected location of MMRs assuming that the Sun has a negligible influence on the moons, and it can be generalized to any pair of moons following the formalism for planet packing \citep{Obertas2017}. \begin{table} \centering \caption{Initial satellite parameters (mass, radius, and density) that define the innermost orbit $a_1$ in terms of $R_{Roche}$ (see Eqns. \ref{eqn:Roche1} and \ref{eqn:Roche2}) for our $N$-body simulations. The volumetric mean radius of each satellite type is used, where $r_p = 6371$ km for the Earth. The period is provided for easier comparison with other known satellite systems. 
\label{tab:Orb_Param}} \begin{tabular}{lcccccc} \noalign{\smallskip} \hline \noalign{\smallskip} Body & Mass & Radius & Density & $a_1$ & $a_1$ & Period\\ & (M$_\oplus$) & (km) & (g/cm$^3$) & (au) & (R$_\oplus$) & (days)\\ \noalign{\smallskip} \hline \noalign{\smallskip} Ceres & 0.00015 & 469.7 & 2.162 & 0.000288 & 6.75498 & 1.010\\ Pluto & 0.0022 & 1188 & 1.854 & 0.000299 & 7.01298 & 1.090\\ Luna & 0.0123 & 1737.4 & 3.344 & 0.000247 & 5.79333 & 0.807\\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \end{table} \begin{table} \centering \caption{Values of $\beta_{max}$ (see Eqn. \ref{eqn:beta_max}) when varying the number of moons $N$ using the innermost orbit $a_1$ and mass $m_{sat}$ for each moon type (see Tab. \ref{tab:Orb_Param}). The red text marks when $\beta_{max}<2\sqrt{3}$. \label{tab:beta_max} } \begin{tabular}{lccc} \noalign{\smallskip} \hline \noalign{\smallskip} $N$ & \multicolumn{3}{c}{$\beta_{\rm max}$}\\ & $m_{\rm Ceres}$ & $m_{\rm Pluto}$ & $m_{\rm Luna}$\\ \noalign{\smallskip} \hline \noalign{\smallskip} 3 & 24.85 & 10.04 & 5.96 \\ 4 & 17.76 & 7.17 & 4.29 \\ 5 & 13.68 & 5.51 & \textcolor{red}{3.31} \\ 6 & 11.08 & 4.46 & \textcolor{red}{2.69} \\ 7 & 9.30 & 3.75 & \textcolor{red}{2.26} \\ 8 & 8.00 & \textcolor{red}{3.22} & \textcolor{red}{1.94} \\ 9 & 7.02 & \textcolor{red}{2.83} & \textcolor{red}{1.71} \\ 10 & 6.25 & \textcolor{red}{2.52} & \textcolor{red}{1.52} \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \end{table} \section{Results and Analysis} \label{section:results} \subsection{Case Study 1: Ceres-mass Moons} \label{ceres} Using Ceres-mass moons, we perform numerical simulations varying the number of moons ($N=3-9$) and their orbital spacing through the spacing parameter ($2\sqrt{3}\le \beta \le 10$). Each of the simulations is integrated up to $10^7$ orbits of the innermost moon, which begins at twice the Roche radius. 
Planet packing studies using a single star \citep{Smith2009,Obertas2017} showed that stability is attained for 3-5 planets when $\beta \sim 7-10$, where the theoretical minimum stable value $\beta_{min}$ is $2\sqrt{3}$ from Hill stability \citep{Gladman1993}. Table \ref{tab:beta_max} provides estimates for the maximum spacing $\beta_{max}$ (see Eqn. \ref{eqn:beta_max}) for each moon prototype. If we assume a maximum spacing equal to the minimum value for stability in single star systems ($\beta \sim 7$), then we would estimate (using Tab. \ref{tab:beta_max}) a maximum of 9 Ceres-mass moons that \emph{could} be stable. We confirm this estimate by performing simulations with 10 moons and find that all such simulations are short-lived. Figure \ref{fig:1}a demonstrates how much of the parameter space is filled, where 9 Ceres-mass moons with $\beta = 6$ nearly reach the outer stability limit (orange dotted circle). In addition to the simulation lifetime (scaled by $T_1$), we track each satellite's maximum eccentricity ${\rm max}\:e_i$. For most of our simulations, each satellite begins on a circular orbit and thus, the minimum eccentricity is zero. Figure \ref{fig:2} shows the maximum eccentricity attained by each moon (Fig. \ref{fig:2}a) and the simulation lifetime (Fig. \ref{fig:2}b) with respect to the initial orbital spacing parameter $\beta$ for 3 Ceres-mass moons. Figures \ref{fig:2}c and \ref{fig:2}d are similar to \ref{fig:2}a and \ref{fig:2}b, but evaluate 5 moons. Similarly, Figs. \ref{fig:2}e and \ref{fig:2}f evaluate 8 moons. The evaluation of 9 moons is not shown here, as those systems were unstable in less than a million years. The maximum eccentricity is color-coded (top of the figure) and indicates the index of the moon, where $i=1$ and $i=8$ refer to the innermost and outermost moons, respectively. 
Since the Earth-mass host planet begins with a non-zero eccentricity ($e_p \approx 0.0167$), each satellite experiences a forced eccentricity, which arises in solving the secular quadrupole problem of the orbit-averaged disturbing function \citep{Andrade-Ines2017}. The magnitude of the forced eccentricity due to the Sun is typically small, but the eccentricity growth from moon-moon interactions can quickly drive up a moon's eccentricity enough ($e_i \sim 0.1$) for orbit crossings to occur. In Fig. \ref{fig:2}a, the maximum eccentricity of each moon $e_i$ is high due to the chaotic overlap of first order mean motion resonances (MMRs; \citealt{Deck2013}) for $2\sqrt{3}\le \beta \lesssim 6$. Once $\beta>6$, the gravitational perturbations from moon-moon interactions weaken and the moon pairs exit the chaotic zone. Beyond $\beta \sim 6$, the maximum eccentricity of each moon remains low ($\lesssim 0.1$), but non-zero. There are spikes in the maximum eccentricity (in Fig. \ref{fig:2}a) that correlate with dips in the lifetimes $(\log_{10}\,N_1)$ measured in Fig. \ref{fig:2}b. These anomalous features ($\beta = 7.2,\,7.4,\,7.6,\,\ldots$) correspond to first order MMRs between adjacent pairs of moons. The rise in maximum eccentricity at $\beta\sim 9.8$ corresponds to the 2:1 MMR, which is expected at $\beta \approx 9.78$ \citep{Obertas2017}. The stable plateau for three Ceres-mass moons continues until $\beta \sim 24$, where the apocenter of the outermost moon will extend beyond the outer stability limit. Considering systems with additional satellites modifies Figs. \ref{fig:2}a and \ref{fig:2}b by lowering $\beta_{\rm max}$ (see Table \ref{tab:beta_max}), and the limit on the number of moons is restricted by the minimum value, $2\sqrt{3}$. The maximum eccentricities for systems of five and eight Ceres-mass moons are given in Figs. \ref{fig:2}c and \ref{fig:2}e, respectively, while the lifetimes are shown in Figs. \ref{fig:2}d and \ref{fig:2}f, respectively. 
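The expected $\beta$ locations of the MMRs quoted above follow directly from Kepler's third law combined with the definition of $\beta$. A short sketch (an independent check, assuming equal-mass moons and a unit-mass host planet; not taken from the simulation code) recovers the $\beta \approx 9.78$ location of the 2:1 MMR and the $\beta \approx 7.26$ location of the 5:3 MMR for Ceres-mass moons:

```python
# Sketch: beta value of a p:q period ratio for two equal-mass moons.
# Kepler's third law gives a2/a1 = (p/q)**(2/3); combining with
#   beta = (a2 - a1) / (X * (a1 + a2) / 2),  X = (2 m / 3)**(1/3)
# (moon mass m in units of the host planet's mass), beta is
# independent of a1 and depends only on m and the period ratio.

def beta_of_period_ratio(m_moon, p, q):
    X = (2.0 * m_moon / 3.0) ** (1.0 / 3.0)
    ratio = (p / q) ** (2.0 / 3.0)          # semimajor axis ratio a2 / a1
    return (ratio - 1.0) / (X * (ratio + 1.0) / 2.0)

# Ceres-mass moons (0.00015 M_earth): 2:1 MMR near beta ~ 9.78,
#                                     5:3 MMR near beta ~ 7.26.
# Pluto-mass moons (0.0022 M_earth):  2:1 MMR near beta ~ 4.
```

The same relation shows why the MMR locations shift to lower $\beta$ for more massive moons: $X$ grows as $m^{1/3}$, so the 2:1 MMR that sits at $\beta \approx 9.78$ for Ceres-mass moons falls near $\beta \approx 4$ for Pluto-mass moons.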
These panels illustrate similar features to Figs. \ref{fig:2}a and \ref{fig:2}b, where the tail of stability is apparent at $\beta \approx 8$ (see the vertical dashed line in Fig. \ref{fig:2}f). For $6.6\lesssim \beta \lesssim 7.6$, a system of eight moons is on the border of stability as evidenced by the growth of eccentricity with $\beta$ for the outermost moon, and additional effects (e.g., tides) may prevent long-term stability in general. A dip appears at $\beta \sim 7.3$ in Fig. \ref{fig:2}f, which corresponds to the 5:3 MMR between adjacent moons. The width of the MMRs grows as moons are added to the system because there are more small perturbations possible that can push a moon into an MMR as the system evolves (i.e., evolution of a moon's semimajor axis or $\beta$). Tracking the maximum eccentricity of each moon's orbit shows which values of $\beta$ are likely to produce unstable systems. The timescale for $10^7$ orbits of the innermost moon is only $\sim20,000$ yr, where longer simulations are impractical due to the small timestep required for accurately evolving each system. The gravitational interactions between moons also occur on a short timescale, which further necessitates a small integration timestep. Despite these limitations, the maximum eccentricity provides a good heuristic for the likely long-term stability of moons. Although not shown here, we perform simulations with 4, 6, and 7 moons and confirm the apparent trends in stability. \subsection{Case Study 2: Pluto-mass Moons} \label{pluto} The mutual Hill radius is larger for Pluto-mass moons, which corresponds to a larger physical spacing between moons for a given value of the parameter $\beta$. The expected locations of the MMRs between moons move to lower values of $\beta$ by a factor of $\sim2.44$, i.e., $\left(m_{Pluto}/m_{Ceres}\right)^{1/3}$. With these expectations, we perform simulations of Pluto-mass moons following the same procedure from Sect. \ref{ceres}. 
Figure \ref{fig:3} illustrates how the maximum eccentricity of each moon and system lifetime varies with respect to the spacing parameter $\beta$ for a system of 3 (Figs. \ref{fig:3}a and \ref{fig:3}b) and 5 (Figs. \ref{fig:3}c and \ref{fig:3}d) moons. From Table \ref{tab:beta_max}, we expect a system of 3 moons to be stable up to $\beta \sim 10$, where Fig. \ref{fig:3}a shows the increasing maximum eccentricity attained by the outermost moon $(i=3)$ as $\beta$ increases. The minimum spacing appears at $\beta \sim 4.5$ because the 2:1 MMR is expected at $\beta \approx 4$ and planet packing studies have shown broad stability occurring beyond this MMR \citep{Smith2009,Obertas2017,Quarles2018,Lissauer2021}. The underlying physical cause is the wider separation of libration zones for the first order MMRs and the wider physical separation (i.e., less gravitational perturbations) between moons. The spike in Fig. \ref{fig:3}a and corresponding dip in \ref{fig:3}b at $\beta$=6.3 shows the location of the 3:1 MMR, where instabilities occur over longer timescales. For five Pluto-mass moons (in Figs. \ref{fig:3}c and \ref{fig:3}d), there are very narrow ranges $(4.8 \lesssim \beta \lesssim 5.1)$ for which the system can survive up to $10^7$ orbits of the innermost moon. As shown earlier for a system of three moons, the 2:1 MMR delineates a lower boundary and puts a limit on stability at $\beta \sim 4.5$, but the outermost moon's eccentricity grows to nearly 1 at $\beta \sim 5.5$, where the perturbations from the Sun are driving its eccentricity to such high values. From Table \ref{tab:beta_max}, the outermost moon begins beyond the outer stability limit \citep{yagirl2020} at $\beta \sim 5.5$ (yellow dashed line in Fig. \ref{fig:3}d). 
In between these boundary conditions (MMR overlap and perturbation from the Sun), a system of five Pluto-mass moons could be stable if the apsidal precession rate of the moons is similar enough to prevent orbital overlap or the moons remain out-of-phase in an MMR to avoid collision (e.g., the 3:2 resonance between Neptune and Pluto). \subsection{Case Study 3: Luna-mass Moons} Table \ref{tab:beta_max} shows that the number of Luna-mass moons between the Roche radius and the Hill radius is limited to 4 due to the requirement that $\beta_{min} = 2\sqrt{3}\approx 3.5$. Thus, we perform simulations considering only 3 and 4 Luna-mass moons following a similar procedure as in Sections \ref{ceres} and \ref{pluto}. The maximum eccentricity and system lifetime for three moons are shown in Figs. \ref{fig:4}a and \ref{fig:4}b, while Figs. \ref{fig:4}c and \ref{fig:4}d illustrate the same measures for four Luna-mass moons. Three moons can maintain stable orbits for a narrow range in spacing, $4< \beta < 6$. Adding a fourth moon (Figs. \ref{fig:4}c and \ref{fig:4}d) shows that the MMRs with the Sun encountered by the outermost moon and the overlap of MMRs between moons destabilize the entire system. \cite{Quarles2018} explored planet packing in $\alpha$ Centauri, where the secondary star significantly perturbs the planetary system, and showed that 3-planet systems can be long-lived with an appropriately chosen orbital spacing and initial eccentricity of the planets. The system architecture of planets orbiting one star of a stellar binary is similar to our system of moons orbiting an Earth-mass planet, where the forced eccentricity and boundaries of orbital stability limit the total number of satellites. Due to the similarity in structure, we arrive at similar conclusions. 
\subsection{Log-Linear fit of unstable lifetimes} \label{sec:log-linear} To compare the system lifetimes of different orbital architectures in planet packing, previous studies \citep{Chambers1996,Smith2009,Obertas2017,Quarles2018} employ a log-linear fit to the first transition region to stability $(\beta \sim4-8)$. In Figs. \ref{fig:2}, \ref{fig:3}, and \ref{fig:4}, we mark these approximately linear trends using cyan lines, where we measure the slope $b$ and y-intercept $c$. \cite{Quarles2018} showed that a constant $\beta$-shift to account for $\beta_{\rm min}=2\sqrt{3}$ is necessary to ensure a fair comparison between simulated systems. As a result, we fit the unstable data points in the transition region to a log-linear function of the form: \begin{equation} \label{eqn:log_linear} \log_{10}\,t = b^\prime\beta^\prime + c^\prime, \end{equation} where the prime ($\prime$) coordinates refer to fits made with a shift in the $x$-axis (i.e., $\beta^\prime = \beta-2\sqrt3$). Consequently, this shift in the x-axis minimizes the correlation between the slope ($b^\prime$) and the y-intercept ($c^\prime$). This fit function is similar to the ones used by \cite{Quarles2018} and \cite{Lissauer2021}, which allows us to make a consistent comparison of the coefficients with previous work. Table \ref{tab:best_fit_coeff} shows a general trend in the coefficients of the log-linear function for all moon-types, where fewer moons $(n)$ correspond to a steeper slope $(b^\prime)$ and a longer system lifetime $(c^\prime)$ at $\beta_{\rm min}$. The slope of the fit indicates how the system lifetime changes as a function of the orbital spacing parameter $\beta$. For example, a steeper slope conveys that an increase in $\beta$ significantly extends the system lifetime. For Ceres-mass moons, the slope only decreases by $\sim10\%$ as more moons are added. 
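The fit of Eqn. \ref{eqn:log_linear} can be sketched as an ordinary least-squares fit in the shifted coordinate. The sketch below is a generic illustration rather than the exact pipeline used here, and the sample points are synthetic, generated from the 3-moon Ceres-mass coefficients rather than taken from the simulations:

```python
# Sketch: log-linear fit log10(t) = b' * beta' + c', beta' = beta - 2*sqrt(3).
import math

def fit_log_linear(betas, lifetimes):
    """Return (b', c') from a least-squares fit of log10(t) vs. shifted beta."""
    xs = [b - 2.0 * math.sqrt(3.0) for b in betas]
    ys = [math.log10(t) for t in lifetimes]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return slope, intercept

# Illustrative (synthetic) points lying exactly on the 3-moon Ceres-mass
# relation log10(t) = 1.37 * beta' + 2.21 from the best-fit table:
betas = [4.0, 4.5, 5.0, 5.5, 6.0]
times = [10 ** (1.37 * (b - 2.0 * math.sqrt(3.0)) + 2.21) for b in betas]
b_fit, c_fit = fit_log_linear(betas, times)  # recovers (1.37, 2.21)
```

In this shifted coordinate system, $c^\prime$ directly measures the (logarithmic) lifetime at the Hill-stability boundary $\beta = 2\sqrt{3}$, which is why the shift decorrelates the slope and intercept.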
The slopes for Ceres-mass moons are more similar to values determined through packed three planet systems around a single star \citep{Lissauer2021} rather than within a binary like $\alpha$ Centauri AB \citep{Quarles2018}. This is likely due to the minimal forcing from the Sun, especially when the moons occupy a smaller portion of the host planet's Hill radius. Table \ref{tab:beta_max} suggests that more Ceres-mass moons could stably orbit the host planet, but the moons are perturbed and scattered by the neighboring moons (or the star) due to their lower inertia. Hence, the system of Ceres-mass moons is largely unstable for $\beta \le 6.5$. The decreasing slopes also indicate that the orbital spacing $\beta$ must be increased, for increasing $n$, in order to avoid orbital crossings and maintain stability. The other massive moons (Pluto-mass and Luna-mass) can absorb more internal (moon-moon) perturbation due to their increased inertia, or smaller changes to their angular momentum. Hence, the slopes for 3-moon cases are much steeper compared to the Ceres-mass case (6 times for Pluto-mass and 12 times for Luna-mass). In addition, pairs of more massive moons have a wider mutual Hill radius, which reduces the number of moons that can fit within the stability boundary. Therefore, the number of stable moons is $\le$8 for Ceres-mass, $\le$5 for Pluto-mass, and $\le$4 for Luna-mass moons. The decreasing slopes for Pluto-mass and Luna-mass moons reinforce our previous conclusion that $\beta$ must be increased, for increasing $n$, to avoid orbital crossings. \begin{table} \centering \caption{Coefficients for the log-linear fits (cyan colored lines in Figs. \ref{fig:2}-\ref{fig:4}) using $\log_{10}(t) = b^\prime \beta^\prime + c^\prime$ (Eqn. \ref{eqn:log_linear}). The primed values ($b^\prime$ and $c^\prime$) reflect the shift in $\beta$ by $2\sqrt3$. $^1$ A system of up to five Earth-mass planets orbiting $\alpha$ Cen B \citep{Quarles2018}. 
$^2$ A hypothetical system, where three Earth-like planets orbit a Sun-like star \citep{Lissauer2021}.} \label{tab:best_fit_coeff} \begin{tabular}{lccc} \noalign{\smallskip} \hline \noalign{\smallskip} Mass & n & Slope & y-intercept \\ & & ($b^\prime$) & ($c^\prime$) \\ \noalign{\smallskip} \hline \noalign{\smallskip} Ceres & 3 & 1.37 & 2.21 \\ & 4 & 1.32 & 1.83 \\ & 5 & 1.31 & 1.59 \\ & 6 & 1.25 & 1.56 \\ & 7 & 1.24 & 1.53 \\ & 8 & 1.24 & 1.51 \\ \hline Pluto & 3 & 6.45 & 0.50 \\ & 4 & 5.35 & 0.24 \\ & 5 & 5.01 & 0.20 \\ \hline Luna & 3 & 12.44 & 1.96 \\ & 4 & 6.4 & 1.88 \\ \hline Earth-mass$^1$ & 3 & 0.996 & 2.234 \\ & 5 & 0.742 & 2.084 \\ \hline Earth-mass$^2$ & 3 & 1.68 & 1.799 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \end{table} \subsection{Analysis of MMR using MEGNO maps} \label{mmr_megno} We explore the potential routes to instability in systems of multiple moons by using maps of the chaos indicator MEGNO (mean exponential growth of nearby orbits, $\langle Y \rangle$). The MEGNO criterion is generally used to distinguish between chaotic, periodic, and aperiodic orbits within a phase space. Analysis of many-body systems using MEGNO was originally developed by \cite{Cincotta1999,Cincotta2000,Cincotta2003} to identify potential instabilities due to resonance overlap more efficiently in a short numerical integration period. It is capable of detecting high-order resonances (for example, see \citealt{satyal2014} for a MEGNO map of a circumprimary planet displaying a 39:2 MMR in a binary system) due to its sensitivity to unstable orbits and is a global indicator of dynamical changes in any Hamiltonian system. In our case, we exploit this ability of MEGNO to reveal fine resonance structures in a phase space and to display the unstable and chaotic regions. 
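For reference, the MEGNO indicator and its time average can be written (following \citealt{Cincotta2000,Cincotta2003}) as
\[
Y(t) = \frac{2}{t}\int_{0}^{t}\frac{\dot{\delta}(s)}{\delta(s)}\, s\, {\rm d}s, \qquad \langle Y \rangle (t) = \frac{1}{t}\int_{0}^{t} Y(s)\, {\rm d}s,
\]
where $\delta(t)$ is the norm of the tangent (deviation) vector of the orbit. Quasi-periodic orbits yield $\langle Y \rangle \rightarrow 2$, while chaotic orbits give $\langle Y \rangle \approx \lambda t/2$, which grows with the maximum Lyapunov exponent $\lambda$.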
We limit the phase space to $e_o \le 0.3$ because, for higher $e_o$, it is mostly a chaotic region induced by the secular evolution of the eccentricities (see, for example, Fig. 8 of \citealt{Tamayo2021}). Figure \ref{fig:5}a displays MEGNO values (color-coded) calculated for the outermost moon in a system with an Earth-mass planet and three Ceres-mass moons (ignoring the Sun) with a wide variation in the initial $\beta$ spacing and eccentricity (of the outermost moon) in the $e_o$ - $\beta$ phase space. A MEGNO value of $\langle Y \rangle$ = 2 corresponds to periodic (and presumably stable) orbits, while $\langle Y \rangle$ = 6 indicates initial conditions that drive the system into chaos. Other quasi-periodic orbits can exist between these extremes (purple-orange). The outermost moon's initial eccentricity is varied from 0 to 0.3 and the spacing $\beta$ between all the moons ranges from 3.5 to 9.5. Most of the chaotic regions of the phase space (yellow) are unstable, where the outermost moon has a high probability of collision with the host planet or another one of the moons. The existence of the chaotic regions is largely due to MMR overlap or secular evolution of the eccentricity \citep[i.e.,][]{Wisdom1980,Mudryk2006,Quillen2011,Laskar2017,Hadden2018,Petit2020,Tamayo2021}. The stable orbits (black) begin when $\beta \gtrsim 6$, and for lower initial eccentricities. For one initial condition ($e_o$ = 0), the map complements the analysis presented in Sec. \ref{ceres}, Fig. \ref{fig:2}a, where the orbits are stable for the full integration period when a higher $\beta$ spacing is selected. Figures \ref{fig:5}b and \ref{fig:5}c both explore a similar phase space, but now including the Sun in the simulations. Multiple MMRs appear V-shaped in the map (because their libration width increases with increasing eccentricity), where the 8:5, 5:3, 7:4, and 9:5 are some of the prominent MMRs between moons and are marked (dashed green lines). 
Figure \ref{fig:5}b demonstrates how the solar perturbations change which of the initial conditions are affected by the moon-moon MMRs. As a result, there is an offset observed in the MEGNO phase-space when comparing Figs. \ref{fig:5}a and \ref{fig:5}b. For example, Fig. \ref{fig:5}a shows the 5:3 MMR located at $\beta$ = 7.2, but the MMR structure is located at $\beta$ = 7.4 in Fig. \ref{fig:5}b. This shift is accounted for by the secular perturbations on the moons from the Sun, which cause the orbital spacing of the moons to change over time. A moon can then evolve into and out of a nearby MMR. Figure \ref{fig:5}c illustrates the shifts in orbital spacing between the two inner moons through $\Delta\beta_{12}$, which signifies the maximum change in the orbital spacing. In and around the observed MMRs, the shift in $\Delta\beta$ is about 0.1, which accounts for the differences in the location of MMRs in Figs. \ref{fig:5}a and \ref{fig:5}b. Figure \ref{fig:5}c also shows that $\Delta\beta$ values larger than 0.3 indicate unstable orbits, while values less than 0.3 correspond to stable orbits. It is also apparent from the map that the orbits with $\Delta\beta$ less than 0.2 are not heavily influenced by any MMRs. A similar phase space is explored for larger moons, but focusing on three moon systems with both the host planet and the Sun included in Fig. \ref{fig:6}a (Pluto-mass moons) and \ref{fig:6}b (Luna-mass moons). Figure \ref{fig:6}a shows that the outermost Pluto-mass moon can start with a relatively large eccentricity as long as the orbital spacing is large enough $(\beta\gtrsim 6.5)$, while an equivalent system with Luna-mass moons is more limited in terms of its initial orbital spacing (Fig. \ref{fig:6}b). Both systems require larger exchanges of angular momentum to perturb their orbits, as compared to the Ceres-moon case (Fig. \ref{fig:5}b), which explains why a higher initial eccentricity can allow for stable, periodic orbits (black regions). 
Pluto-mass moons appear the most optimal, as nearly half of the parameter space allows for periodic orbits. However, the MMR at $\beta \sim 6.4$ may induce instabilities as a primordial moon system evolves outward due to tidal interactions with the host planet. Also, at this resonance, the moons' e$_{max}$ are observed to evolve towards 1 (Fig. \ref{fig:3}a). \subsection{Shifting the MMRs} \label{Orb_param} The inclusion (or exclusion) of the Sun and its effect on the dynamics of the moons are viewed in global phase space maps ($e_o$ vs. $\beta$) using the MEGNO criterion (Fig. \ref{fig:5}). For the case with 3 Ceres-mass moons, the 5:3 MMR is clearly observed at $\beta$ = 7.26 (without the Sun, Fig. \ref{fig:5}a) and slightly shifted higher at $\beta$ = 7.4 (with the Sun, Fig. \ref{fig:5}b). To visualize the time-series data of individual orbital elements we use initial conditions ($\beta=7.26$ and $e_o$ = 0.01) from Fig. \ref{fig:5} and simulate the system for 25 years (${\sim}9\times 10^3$ orbits of the innermost moon). Figure \ref{fig:7} shows the time-series evolution of the normalized orbital distance $d/R_H$, eccentricity $e$, orbital spacing $\beta$, and resonant angle $\phi_{5:3}$ for each moon in both systems (with and without including the Sun). Note that the two (inner and middle) moons begin on circular orbits, while the outer moon begins with an eccentricity $e_o$ = 0.01, to maintain consistency with Fig. \ref{fig:5}. The normalized distance ($d/R_H$) is used instead of semimajor axis to better illustrate the correlated changes in distance with eccentricity as it affects the instantaneous measure of the orbital spacing. In Fig. \ref{fig:5}b ($\beta=7.26$ and $e_o$ = 0.01 with the Sun included), the value of MEGNO suggests that the orbits are periodic. This is confirmed in the evolution of each moon's normalized distance (Fig. \ref{fig:7}a) and eccentricity (Fig. 
\ref{fig:7}b), where the colors refer to the inner (red), middle (blue) and outer (green) moons. The gravitational perturbation of the Sun forces a high frequency variation in the relative distance of each moon, which underlies the variation in the orbital spacing $\beta$ (Fig. \ref{fig:7}c). This forcing also prevents the 5:3 MMR resonant angle of the inner pair $(\phi_{12})$ or the outer pair $(\phi_{23})$ of moons from librating (in Fig. \ref{fig:7}d), even though the respective orbital spacing of each pair ($\beta_{12}$ in orange, or $\beta_{23}$ in cyan) crosses the expected MMR location $\beta = 7.26$. In contrast, Figs. \ref{fig:7}e-\ref{fig:7}h display a similar simulation, where the Sun is removed (i.e., ignoring its secular perturbation). In this case, the normalized distance and the eccentricity of each moon are chaotic, which is indicated by the corresponding MEGNO value from Fig. \ref{fig:5}a. The variation of each moon's eccentricity (in Fig. \ref{fig:7}f) can be $2-4$ times larger compared to Fig. \ref{fig:7}b due to the eccentricity excitation from the 5:3 MMR. The orbital spacing of the inner and outer moon pairs varies on much slower timescales in Fig. \ref{fig:7}g, which allows moon pairs to evolve together into the 5:3 MMR with an average $\beta \sim 7.26$. The switching between libration and circulation in Fig. \ref{fig:7}h confirms the chaotic nature of this initial condition and the connection with the 5:3 MMR. For an Earth-mass planet with a semimajor axis of 1 AU, the Sun clearly contributes to the dynamics of the planet and its moons. The inclusion of the Sun in a system is necessary for our analysis. However, the chaotic dynamics found when removing the Sun is still applicable, where the relevant initial orbital spacing $\beta$ must be shifted by $\sim0.15$ to $\beta \sim 7.4$. Alternatively, moon systems of long-period Earth-mass planets could undergo similar chaotic variations (e.g., Pluto-Charon and its four moons). 
\subsection{Mass Distribution and Formation Plausibility} \label{surface_density} Based on the number of moons that stably orbit the planet, we calculate the total mass distribution within the stability boundary for all three systems. The total mass of 3 Luna-mass, 5 Pluto-mass, and 8 Ceres-mass moons is equivalent to 1.11 $\times$ 10$^{-7}$ M$_\odot$, 3.30 $\times$ 10$^{-8}$ M$_\odot$, and 3.60 $\times$ 10$^{-9}$ M$_\odot$, respectively. Despite the small differences in the semimajor axis of the innermost moon $a_1$, the order of magnitude differences in mass (see Table \ref{tab:Orb_Param}) are preserved. Since the mass is distributed over roughly the same surface area between the inner (i.e., Roche limit) and outer stability boundaries, the surface mass density for Ceres ($\sigma_{Ceres}$) is approximately 7.2 $\times$ 10$^{-5}$ M$_\odot$/AU$^2$, equivalent to 637 g/cm$^2$. The surface mass densities for Pluto and Luna are 10$\sigma_{Ceres}$ and 30$\sigma_{Ceres}$, respectively. Our work concentrates on the maximum number of satellites, with different masses, that can stably orbit an Earth-like planet. Whether more than one satellite can form around an Earth-like planet is beyond the scope of this work. However, we can provide some assessment of the plausibility of formation based upon our results for the total mass surface density of the maximum number of moons. Moons can form around a giant planet (e.g., Jupiter and Saturn) from a gaseous circumplanetary disk during the last stages of planet formation, as has been shown for the Galilean moons \citep{pollack1989,Canup2006}. However, there is no known minimum planetary-mass threshold at which a circumplanetary disk can form and evolve into moons \citep{Ayliffe2009}. Large moons are expected to arise primarily due to giant impacts, where \cite{Nakajima2022} suggests that impact-induced large moons are more likely to form around rocky planets whose radius is smaller than 1.6 R$_\oplus$. 
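The surface-density estimate above can be reproduced with a short sketch. It assumes, as in the text, an inner edge at the Roche limit (half of $a_1$, since $a_1$ starts at twice the Roche radius), an outer stability limit of $0.40\,R_H$, and $R_H \approx 0.01$ au for an Earth-mass planet at 1 au around a Sun-like star:

```python
# Sketch of the surface-density estimate: total moon mass spread over the
# annulus between the Roche limit and the outer stability limit (~0.40 R_H).
import math

M_SUN_G = 1.989e33          # solar mass in grams
AU_CM = 1.496e13            # au in centimeters
M_EARTH_MSUN = 3.0034e-6    # Earth mass in solar masses

def surface_density_cgs(n_moons, m_moon_mearth, a1_au, r_hill_au=0.01):
    total_mass_msun = n_moons * m_moon_mearth * M_EARTH_MSUN
    r_in = a1_au / 2.0           # Roche limit (a1 begins at 2x Roche radius)
    r_out = 0.40 * r_hill_au     # outer stability limit
    area_au2 = math.pi * (r_out ** 2 - r_in ** 2)
    sigma_msun_au2 = total_mass_msun / area_au2
    return sigma_msun_au2 * M_SUN_G / AU_CM ** 2   # convert to g / cm^2

# Eight Ceres-mass moons: recovers a value close to the quoted 637 g/cm^2.
sigma_ceres = surface_density_cgs(8, 0.00015, 0.000288)
```

The Pluto- and Luna-mass cases follow by swapping in $(5,\ 0.0022,\ 0.000299)$ and $(3,\ 0.0123,\ 0.000247)$, respectively, which recovers the ${\sim}10\sigma_{Ceres}$ and ${\sim}30\sigma_{Ceres}$ ratios quoted above.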
Moreover, numerical models using smooth particle hydrodynamics (SPH) have shown that the mass surface density of moon-forming disks can reach $\sim 10^7$ g/cm$^2$ only at a few Earth radii and spreads substantially over the lifetime of the disk \citep{Nakajima2014,Nakajima2022}. Therefore, it is at least plausible that multiple moons could form around Earth-like planets. Further study or confirmation of the current exomoon candidates (e.g., Kepler 1625b-i, \citealt{Teachey2018}; Kepler 1708b-i, \citealt{Kipping2022}) could shed more light on these hypotheses. \subsection{Effects of Tides} \label{section:tides} The outward migration of satellites through tidal interactions modifies their potential lifetime \citep{Barnes2002, Sucerquia2019, Lainey2020}. In the Solar System, \citet{Charnoz2010} suggested that the population of small moons that orbit just outside Saturn's rings could have originated at the edge of the main rings and tidally migrated outward. To obtain a complete picture of the orbital stability of exomoons, it is necessary to consider the contribution of planetary and stellar tides. We apply a secular constant time lag (CTL) tidal model \citep{Hut1981, Leconte2010} and evaluate the migration timescales of moons assuming that moon formation readily occurs near the host planet's Roche limit. The lifetime of a moon system can be reduced as the outermost moon migrates toward the outer stability limit (i.e., $\sim 0.40 R_H$; \citealt{yagirl2020}), where this will depend on the mass of the satellite (or moon-planet mass ratio) and the assumed time lag $\Delta t$ for the tidal dissipation. The secular model calculates the changes to the orbital elements of both the host planet and its moon through the respective semimajor axes ($a_{\rm p}$ and $a_{\rm sat}$), eccentricities ($e_{\rm p}$ and $e_{\rm sat}$), and mean motions ($n_{\rm p}$ and $n_{\rm sat}$) averaged over an orbit. 
The model is scaled by the tidal Love number $k_2$ and the time lag $\Delta t$, where the latter is proportional to $(nQ)^{-1}$ in the constant phase lag (i.e., constant $Q$) tidal models \citep{Leconte2010, Piro2018}. We consider two scenarios for three moon systems: (a) keeping the satellite mass fixed at $3\ m_{Luna}$, while evaluating a range of constant time lag values, and (b) keeping the time lag fixed at $\Delta t = 100\ {\rm s}$, while evaluating a range in satellite mass (Ceres-, Pluto-, and Luna-mass). The constant time lag $\Delta t$ is varied between simulations from $10-600\ {\rm s}$ on a logarithmic scale. The moon-moon interactions are ignored since we are applying a secular model, where the total mass of the moon system is combined into the innermost moon. This represents a conservative estimate because the innermost moon would be migrating outward more slowly in reality, as it would have 1/3 of our prescribed mass. The Earth-mass host planet is assumed to begin with a rotation period of 5 hours, which is consistent with expectations from terrestrial planet formation \citep{Kokubo2007}, and we are interested in a $10^{10}$ year timescale (i.e., the main-sequence lifetime of a G dwarf). In our first scenario, shown in Figure \ref{fig:tides_secular}a, the mass of the satellite is $3\ m_{Luna}$ and the constant time lag is varied over a range that corresponds to a very low dissipation (10 s; red) up to a very high dissipation (600 s; lavender). In all cases, the satellite's outward migration stalls at $\sim 0.1\ R_H$ as the moon's orbital period synchronizes with the planet's rotation period. Assuming that the moons migrate outward together maintaining an orbital spacing of $\beta = 4$, the outermost moon would migrate beyond the outer stability limit (i.e., $a_3 \sim 0.55\ R_H$) and thereby reduce the number of stable moons by one. 
The effect of tidal migration is more dire for lower mass moons because they can migrate closer to the outer stability limit (Fig. \ref{fig:tides_secular}b) and require a larger orbital spacing (see Figs. \ref{fig:5}b and \ref{fig:6}a). This combination of circumstances will likely cause at least one moon to scatter and/or migrate past the outer stability limit. Therefore, outward tidal migration will likely reduce the number of moons orbiting an Earth-mass planet by at least one in three moon systems and likely more within moon systems of higher multiplicity. Further consideration of tides is beyond the scope of our work, where others could explore the effects of differential tides on outward migration, similar to models for Saturn's moons \citep{Crida2012,Cuk2016}. \section{Conclusions} \label{section:conclusions} Through $N$-body simulations, we investigate the potential for systems of $3-9$ moons orbiting an Earth-mass planet that itself orbits a Solar-mass star. The moons vary in mass, but are analogous to Ceres, Pluto, and our Moon (Luna). Systems of multiple moons are inherently constrained by the \emph{inner} Roche limit and the \emph{outer} stability limit, which can also be scaled by a planet's Hill radius. Scaling by the Hill radius allows our work to be generalized beyond an exact Earth-Sun analog for the primary bodies, because the Hill radius incorporates potential changes in the planetary semimajor axis, eccentricity, and mass, in addition to the stellar mass. Each moon begins on a circular and coplanar orbit, where the initial orbital phase is selected through the golden ratio following planet-packing studies \citep{Smith2009,Quarles2018,Lissauer2021}. We find using $N$-body simulations that $7\pm1$ Ceres-mass moons could stably orbit an Earth-mass planet at 1 AU from a Sun-like star. If the moons are more massive (Pluto- or Luna-mass), then the number of moons with stable orbits reduces to $4\pm1$ and $3\pm1$, respectively. 
Outward tidal migration will likely modify these estimates by at least one moon, where additional moons could be lost through scattering, collisions, or simply migrating beyond the outer stability limit. The orbital spacing between each moon is measured using a dimensionless parameter $\beta$, which is the distance between two neighboring moons divided by their mutual Hill radius. The maximum number of moons produces a minimum in the orbital spacing, where we find $\beta = 6,\ 4.5,\ \text{and}\ 3.5$ for Ceres-, Pluto-, and Luna-mass moons, respectively. The potential stability for these moon systems depends on their proximity to MMRs between adjacent pairs of moons. The locations of the MMRs are estimated using the chaos indicator MEGNO, which shows a shift of $\sim0.15$ in $\beta$ from the expected location due to perturbations on the moon system from the Sun. We show that a moon can behave chaotically (i.e., periodic switching between circulation and libration in the 5:3 resonant angle) when starting within the libration zone of the 5:3 MMR and ignoring the gravitational solar perturbations. Planet-packing studies have used a best-fit slope from a log-linear model to compare the changes in stability due to planet multiplicity \citep{Chambers1996,Smith2009,Pu2015,Obertas2017,Quarles2018}. We employ a similar technique, but in shifted coordinates so that the $y$-intercept occurs at $\beta = 2\sqrt{3}$ \citep[e.g.,][]{Quarles2018,Lissauer2021}. From these measurements, we find that Ceres-mass moons have a slope from $1.24-1.37$, which is inversely correlated with the number of moons. These slopes are less than the expected values for systems of three Earth-mass planets orbiting a Solar-mass star and greater than the more extreme case where three planets orbit $\alpha$ Centauri B within a stellar binary. 
Pluto- and Luna-mass moons have a much steeper slope because they have a larger mutual Hill radius, which drastically limits the potential orbital spacing between moons (see Table \ref{tab:beta_max}). We use the mass surface density within the stability boundary limits to determine whether systems of multiple moons are at least plausible. \cite{Nakajima2022} showed that the surface density of a moon-forming disk can reach $\sim 10^7$ g/cm$^2$, which is much higher than the mass surface density of three Luna-mass moons ($\sim 10^4$ g/cm$^2$). The mass surface density for five Pluto-mass moons is smaller than the Luna-mass case by a factor of 3, while it decreases by a factor of 30 for eight Ceres-mass moons. It appears plausible that multiple moons could form, but further study using SPH simulations would be necessary and is beyond the scope of our work. Our $N$-body simulations are mostly limited to only $10^7$ orbits of the innermost moon, which is ${\sim}3\times 10^4$ yr. The long-term evolution of moon systems will be determined by the outward migration of the moons due to tidal interactions with the host planet. We evaluate this possibility using a constant time lag (CTL) secular model \citep{Hut1981,Leconte2010,Quarles2021}, where outward tidal migration becomes significant on long timescales ($\sim 10^8-10^{10}$ yr). As a result, the number of moons that can stably orbit an Earth-mass planet is reduced by one. In the case of Luna-mass moons, only one moon is lost, while systems of less massive moons can have more significant losses due to the relative ease of scattering events and the final migration distance of the innermost moon. Detecting multiple moon systems orbiting other stars is currently out of reach, where there are only a couple of exomoon candidates from the photometric detection method \citep{Teachey2018,Kipping2022}. 
Recent observations from ALMA \citep{Benisty2021} suggest that a moon-forming disk exists around PDS 70c, which points to the potential for long-wave observations or direct imaging \citep{Vanderburg2018}. From this work, the dynamical stability of moon systems limits the existence of exomoons for Earth-analogs in their respective habitable zones; confirmation from future observations is needed. \section*{Acknowledgements} The authors appreciate the constructive comments and feedback from the referee. M.R.F. acknowledges support from the NRAO Gr\"{o}te Reber Fellowship and the Louis Stokes Alliance for Minority Participation Bridge Program at the University of Texas at Arlington. This research was supported in part through research cyberinfrastructure resources and services provided by the Partnership for an Advanced Computing Environment (PACE) at the Georgia Institute of Technology. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras} \bibliography{References} % \bsp % \label{lastpage}
Title: Synchrotron Polarization of Gamma-Ray Burst Afterglow Shocks with Hydrodynamic-scale Turbulent Magnetic Field
Abstract: Afterglows of gamma-ray bursts (GRBs) are emitted from expanding forward shocks, which are expected to have magnetic fields much stronger than the interstellar field, although the origin of the field is a long-standing problem. Two field amplification mechanisms, plasma kinetic instabilities and magnetohydrodynamic instabilities, have been discussed so far. The coherence length scales of the fields amplified by these two processes differ by 7-10 orders of magnitude, and polarimetric observations may distinguish them. We construct a semi-analytic model of the forward shock afterglow polarization under the assumption of hydrodynamic-scale turbulent magnetic field. We perform numerical calculations of synchrotron polarization for the isotropic turbulence and the zero viewing angle. We find that the polarization degrees are ~1-3% when the field coherence length scale in the fluid comoving frame is comparable to the thickness of the shocked regions. This range of polarization degree is comparable to that of the observed late-phase optical afterglows. Our model also shows that the radio polarization degrees are comparable to the optical ones on average but can be higher than the optical ones at some time intervals. The polarization angles are shown to vary randomly and continuously. These polarimetric properties are clearly different from the case of plasma kinetic instability. Simultaneous polarimetric observations of GRB afterglows at the radio and optical bands have recently started, which will help us constrain the magnetic field amplification mechanism.
https://export.arxiv.org/pdf/2208.09242
\title{Synchrotron Polarization of Gamma-Ray Burst Afterglow Shocks with Hydrodynamic-scale Turbulent Magnetic Field} \correspondingauthor{Asuka Kuwata} \email{a.kuwata@astr.tohoku.ac.jp} \author[0000-0002-6169-2720]{Asuka Kuwata} \affiliation{Astronomical Institute, Graduate School of Science, Tohoku University, Sendai 980-8578, Japan} \author[0000-0002-7114-6010]{Kenji Toma} \affiliation{Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai 980-8578, Japan} \affiliation{Astronomical Institute, Graduate School of Science, Tohoku University, Sendai 980-8578, Japan} \author[0000-0003-2579-7266]{Shigeo S. Kimura} \affiliation{Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai 980-8578, Japan} \affiliation{Astronomical Institute, Graduate School of Science, Tohoku University, Sendai 980-8578, Japan} \author[0000-0001-7952-2474]{Sara Tomita} \affiliation{Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai 980-8578, Japan} \affiliation{Astronomical Institute, Graduate School of Science, Tohoku University, Sendai 980-8578, Japan} \author[0000-0003-3383-2279]{Jiro Shimoda} \affiliation{Department of Physics, Graduate School of Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8602, Japan} \keywords{Gamma-ray bursts (629), Magnetic fields (994), Non-thermal radiation sources (1119), Jets (870)} \section{Introduction} \label{sec:intro} Gamma-ray bursts (GRBs) are intense flashes of gamma-rays from cosmological distances, and they are commonly understood as emission from relativistic jets launched after core collapse of massive stars or compact star mergers \citep{Hjorth2012, Abbott2017}. After the prompt gamma-ray emission, long-lasting broadband afterglows are usually observed. 
In the standard picture, the afterglows are non-thermal electron synchrotron emission from forward and reverse shocks formed by interaction of the relativistic jets with the ambient media \citep[][for reviews]{Meszaros2002, Piran2004, Kumar2015}. The reverse shock emission is bright in some bursts for $\sim 10^3\;$s at the optical band ($\sim 10^5\;$s at the radio band) \citep{Gao2015, Resmi2016}, while in other bursts or at later times the forward shock emission is dominant, which is the focus of this paper. At relativistic collisionless shocks, turbulent magnetic field is amplified and the wave-particle interaction gives rise to non-thermal electrons, but the detailed mechanisms of these processes are still elusive \citep[][for a review]{Sironi2013}. Understanding them will also help us extract more information on the properties of jets, progenitor systems, and their ambient media from observational data of afterglows. A widely discussed field amplification mechanism is Weibel instability at the shock, which amplifies magnetic field turbulence on the plasma skin depth scale \citep{Medvedev1999, 2005PhPl...12h0705K, Sironi2011, Ruyer2018, Takamoto2018, Lemoine2019}, although it is unclear whether this field can be maintained over a sufficiently long distance downstream to account for the observed synchrotron flux \citep{Gruzinov1999, Chang2008, Keshet2009, Tomita2016, Tomita2019, Asano2020, Huang2022}. Other instabilities that amplify turbulent field on hydrodynamic scales are also studied \citep{Sironi2007, InoueT2013, Mizuno2014, Duffell2014}. Polarization of the synchrotron emission has the potential to be a powerful probe of the magnetic field structure. For the late-phase forward shock afterglows, the linear polarization degree (PD) has been measured for many bursts at the optical band at a level of PD $\sim 1-3 \%$ \citep{Covino2004, Covino2016}. 
The distribution of magnetic field can be anisotropic, depending on the angle from the shock normal, and then the net PD can be at the observed level even for the plasma-scale turbulent field model \citep{Sari1999, Ghisellini1999}. In this model, the optical polarization angle (PA) may flip by 90 degrees, although the PA flips were clearly observed only in GRB 121024A and GRB 091018 \citep{Wiersema2014}.\footnote{Jets with angular structure and forward shocks with strong ordered magnetic field can have no PA flips \citep{Rossi2004, Granot2003}. The optical observation of GRB 020813 shows no PA flip \citep{Lazzati2004}.} This model also predicts low PD in the phase earlier than the jet break time (say, $\sim 1$ day). This prediction is disfavored by the detection of early-phase forward shock polarization, PD $\sim 10 \%$, in GRB 091208B with the Kanata Telescope \citep{Uehara2012}, although there might be a contribution from highly-polarized reverse shock emission \citep{Jordana-Mitjans2021, Mundell2013}. Recently, radio forward shock polarization in the optically thin regime for synchrotron self-absorption was first detected with ALMA in GRB 171205A \citep{Urata2019}. Simultaneous observations at the optical and radio bands will be a new observational test of the magnetic field structure. The radio PD is generally lower than the optical one in the plasma-scale turbulent field model \citep{ST2021, Birenbaum2021, Rossi2004}. In contrast to the plasma-scale turbulent field model, the hydrodynamic-scale turbulent field model has not been studied well. It has been expected that the PD and PA temporally change in a random manner with PD $\sim 70/\sqrt{N} \%$, where $N$ is the number of patches within which the magnetic field is considered to be coherent in the relativistically-beamed visible region \citep{Gruzinov1999}, but no numerical calculations have been shown so far. Frequency dependence of PD and PA has not been studied, either. 
In this paper, we build a semi-analytic model of hydrodynamic-scale turbulent field to predict its multi-band polarimetric signature. We show that the radio PD can be higher than the optical one, in contrast to the plasma-scale field model. This paper is organized as follows. In \Secref{sec:blast}, we introduce the dynamics and synchrotron emission from the expanding forward shocks. In \Secref{sec:large-B}, we construct a synchrotron polarization model with hydrodynamic-scale turbulent magnetic field. We analytically estimate the level of the polarization degree in \Secref{sec:analytical-estimate}, and we show the numerical calculation results of multi-wave band synchrotron polarization in \Secref{sec:results}. \Secref{sec:summary} is devoted to summary and discussion. \section{Forward shock dynamics and emission flux} \label{sec:blast} We consider GRB afterglows as synchrotron emission from relativistically and adiabatically expanding spherical forward shocks. We calculate their dynamics and emission fluxes by following the formulation of \cite{Granot1999a} with the thin-shell approximation (see Section \ref{subsec:flux}). We also take into account the collimation of outflows. Throughout this paper, the superscript prime denotes a physical value measured in the comoving frame (i.e., the rest frame of the fluid in the downstream of the shock wave). \subsection{Equal arrival time surface} \label{subsec:EATS} The radius of the shock front is denoted by $R=R(t)$, and its Lorentz factor $\Gamma$ scales as $R^{-3/2}$ in the case of adiabatic expansion \citep{BM1976}. The radius $R$ can be rewritten as a function of the photon arrival time at the observer, $T$. We use a spherical coordinate system centered on the shock wave and set our line of sight along the $z$-axis for convenience. 
For a photon emitted at time $t$ and position $(r,\mu)$ in the lab frame, where $\mu \equiv \cos \theta$, the arrival time $T$ is \begin{equation} \frac{T}{1+z} = t - \frac{r\mu}{c}, \label{eq:observer-time} \end{equation} where $z$ is the cosmological redshift and $c$ is the speed of light. We have chosen $T=0$ as the arrival time of a photon emitted from the origin at time $t=0$. Solving the equation of motion of the shock, $dR/cdt = \sqrt{1-1/\Gamma^2}$ with $\Gamma \propto R^{-3/2}$ and $\Gamma \gg 1$, we obtain \begin{equation} \label{eq:R-EATS} R \simeq \frac{c T/(1+z)}{1 - \mu + 1/(8\Gamma^2)}. \end{equation} This surface is called ``the equal arrival time surface (EATS)''. \Figref{fig:geometry} schematically shows the geometry of our system. We regard a part of the blast wave within the angle interval $2\theta_j$ as produced by a GRB jet, and exclude the other part. In this work, we focus on the case of the viewing angle $\theta_v=0$. \subsection{Emission flux} \label{subsec:flux} The energy flux density of synchrotron emission should be considered based on the EATS. \cite{Granot1999a} provided a general formula for the flux density of radiation from a spherically expanding system in the optically-thin limit with isotropic radiation in the comoving frame, \begin{align} F(\nu,T) = \frac{1+z}{4\pi d_L^2} \int_0^{2\pi} d\phi \int_{-1}^1 d\mu \int_0^\infty r^2 dr \frac{P'(\nu', \bm{r}, t)}{\gamma^2(1-\beta \mu)^2}, \label{eq:flux-general} \end{align} where $d_L$ is the luminosity distance to a GRB, $\gamma$ is the Lorentz factor of the fluid, $\beta = \sqrt{1-1/\gamma^2}$, $\nu' = \gamma \nu (1-\beta \mu)$, and $t$ is given by \Eqref{eq:observer-time}. The factor $1/\gamma^2(1-\beta\mu)^2$ represents the Doppler beaming effect, by which the bright region is concentrated to $\theta \lesssim \gamma^{-1}$. The emission from the region at $\theta > \gamma^{-1}$ is beamed away from the line of sight. 
We set the emission power $P'$ to be zero for $\cos \theta < \cos \theta_j$ (see \Figref{fig:geometry}). $P'$ depends on the number and energy densities in the shocked region, which rapidly decrease on the length scale of $\sim R/\Gamma^2$ in the downstream of the shock front. This structure of the shocked region with $\Gamma \gg 1$ is given by Eqs. (27)-(30) and (40)-(42) of \cite{BM1976}, and we call it ``BM structure''. In our model, instead of taking account of the BM structure, we approximate the bright region behind the shock front as a thin shell. The number density and internal energy density of the thin shell are then given by \begin{align} n' &= 4\gamma_f n, \label{eq:thinshell-n} \\ e' &= 4\gamma_f^2 n m_{\rm p} c^2, \label{eq:thinshell-e} \end{align} where $\gamma_f$ is the Lorentz factor of the fluid just behind the shock front, $n$ is the number density of the ambient medium, and $m_{\rm p}$ is the proton mass. We employ the thin-shell approximation as \begin{align} P'(\nu',\bm{r},t) = P'(\nu',R,t) &\times \Delta R' \times C \nonumber\\ &\times \delta (r'-R'(t')), \label{eq:power-thinshell} \end{align} where $\Delta R' = R(t)/16\gamma_f$ is the thickness of the shocked region in the comoving frame, and $C$ is the normalization constant to fit the approximated flux to the flux calculated with the BM structure. We introduce a self-similar variable $y$ \begin{equation} y \equiv \frac{R}{R_l}, \end{equation} where $R_l$ is the radius of the shock from which a photon on the line of sight reaches the detector at a time $T$. From \Eqref{eq:R-EATS}, $y$ depends on $T,r$ and $\mu$. We can also describe the condition of adiabatic expansion as \begin{equation} \gamma_f = \gamma_l y^{-3/2}, \end{equation} where $\gamma_l$ is the Lorentz factor of the fluid just behind the shock at $R = R_l$. 
From \Eqref{eq:observer-time} and $R_l = 16\gamma_l^2 c T/(1+z)$, we can express $r, \mu$ in terms of $y$ \citep{Granot1999a}, \begin{equation} \label{eq:r-mu} r = R = R_l y,\quad \mu \simeq 1-\frac{1 - y^4}{16\gamma_l^2 y}. \end{equation} Substituting \Eqref{eq:power-thinshell} and \Eqref{eq:r-mu} into \Eqref{eq:flux-general}, we obtain \begin{equation} F(\nu,T) = C \frac{64(1+z)R_l^3}{\pi d_L^2} \int_0^{2\pi} d\phi \int_{\frac{1}{32\gamma_l^2}}^1 dy \frac{(1+3y^4) y^{10} P'}{(1+7y^4)^3}. \label{eq:flux-thinshell} \end{equation} See \Secref{subsec:param} for the value of $C$. \subsection{Synchrotron power} \label{subsec:syn} To calculate $P'$ at each point, we assume that the energy densities of accelerated electrons $e'_e$ and magnetic field $e'_B$ are fixed fractions of the local internal energy; $e'_e = \epsilon_e e'$ and $e'_B = \epsilon_B e'$. We also assume that the electrons are accelerated by the forward shock and that they have a single power-law distribution function everywhere in the downstream of the shock: \begin{equation} N(\gamma'_e) = K {\gamma'_e}^{-p}\quad {\rm for}\ \gamma'_e \ge \gamma'_m, \end{equation} where $\gamma'_e$ is the electron Lorentz factor and $p$ is a constant ($p>2$). The normalization constant $K$ is determined as \begin{equation} K = (p-1) n' {\gamma'_m}^{p-1}, \end{equation} and the minimum Lorentz factor of electrons $\gamma'_m$ is determined as \begin{equation} \gamma'_m = \pr{\frac{p-2}{p-1}} \frac{\epsilon_e e'}{n' m_e c^2}, \end{equation} where $m_e$ is the electron mass. 
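As a consistency check (ours, not part of the original derivation), \Eqref{eq:r-mu} can be verified symbolically against the EATS relation \Eqref{eq:R-EATS}, using the Blandford--McKee relation $\Gamma^2 = 2\gamma_f^2$ between the shock and fluid Lorentz factors and $R_l = 16\gamma_l^2 cT/(1+z)$; a minimal sketch:

```python
import sympy as sp

# Work in units where cT = c*T/(1+z); gl is gamma_l (fluid Lorentz factor at R_l)
y, gl, cT = sp.symbols('y gamma_l cT', positive=True)

R_l = 16*gl**2*cT                     # line-of-sight radius of the EATS
mu = 1 - (1 - y**4)/(16*gl**2*y)      # mu(y) on the EATS
Gamma2 = 2*gl**2/y**3                 # shock Lorentz factor squared, Gamma^2 = 2 gamma_f^2

R_eats = cT/(1 - mu + 1/(8*Gamma2))   # right-hand side of the EATS relation
print(sp.simplify(R_eats - R_l*y))    # 0, i.e. r = R_l * y holds on the EATS
```

The terms $(1-\mu)$ and $1/(8\Gamma^2)$ combine to $1/(16\gamma_l^2 y)$, which is why the substitution closes exactly.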
\begin{deluxetable}{lrl} \tablecaption{Parameters in this work \label{table:ordered-param}} \centering \tablehead{ \colhead{parameter} & \colhead{symbol} & \colhead{value} } \startdata redshift & $z$ & $0.0$ \\ isotropic energy of blast wave & $E_{\rm iso}$ & $2.0\times 10^{52}\ {\rm erg}$ \\ upstream number density & $n$ & $1.0\ {\rm cm^{-3}}$ \\ power-law index of accelerated electrons & $p$ & $2.5$ \\ energy fraction of accelerated electrons & $\epsilon_e$ & $0.1$ \\ energy fraction of magnetic field & $\epsilon_B$ & $0.01$ \\ opening angle of jet & $\theta_j$ & $6.0\ {\rm deg.}$ \\ viewing angle of jet & $\theta_v$ & $0.0\ {\rm deg.}$ \\ \enddata \end{deluxetable} \cite{Granot1999a} gives approximate synchrotron power formulas, \begin{equation} P' = \begin{cases} P'_{\rm \nu',max} \pr{\frac{\nu'}{\nu'_m}}^{\frac{1}{3}} & {\rm for}\ \nu' < \nu'_m \\ P'_{\rm \nu',max} \pr{\frac{\nu'}{\nu'_m}}^{-\frac{p-1}{2}} & {\rm for}\ \nu' > \nu'_m \end{cases}, \label{eq:synpower-approx} \end{equation} where \begin{equation} P'_{\rm \nu',max} = 0.88 \frac{4(p-1)}{3p-1} \frac{n' P'_{\rm e,avg}}{\nu'_{\rm syn}(\langle \gamma'_e \rangle)}, \end{equation} where the factor of $0.88$ comes from fitting to the exact synchrotron spectrum, and $P'_{\rm e,avg}$ is the synchrotron power emitted by a single electron with the average Lorentz factor $\langle \gamma'_e \rangle \equiv \epsilon_e e'/(n' m_e c^2)$. This is given as \begin{equation} P'_{\rm e,avg} = \frac{4}{3} \sigma_T c \beta_e^2 \langle \gamma'_e \rangle^2 \epsilon_B e', \end{equation} where $\sigma_T$ is the Thomson cross section and $\beta_e = \sqrt{1-1/\langle \gamma'_e \rangle^2}$. The synchrotron peak frequency in the fluid frame is \begin{equation} \nu'_{\rm syn} (\gamma'_e) = \frac{3\gamma_e^{'2} q_e B'}{16 m_e c}, \end{equation} where $q_e$ is the electron charge and $B' = \sqrt{8\pi \epsilon_B e'}$ is the local magnetic field strength. 
$\nu'_m$ in \Eqref{eq:synpower-approx} is defined as $\nu'_m \equiv \nu'_{\rm syn} (\gamma'_m)$. \subsection{Parameter setting, light curves and brightness profile} \label{subsec:param} The Lorentz factor of the shock wave is given by \begin{equation} \Gamma = \frac{1}{2} \pr{\frac{17E_{\rm iso}}{16\pi n m_{\rm p} c^5 T^3/(1+z)^3}}^{1/8}, \end{equation} according to \cite{BM1976}. Then, we can obtain the observed synchrotron flux by using the above formulae, and we plot the light curves in \Figref{fig:light-curve}. For this calculation we have used the parameter values shown in Table \ref{table:ordered-param}, which are typical of long GRBs \citep[e.g.][]{Panaitescu2002}, except for the redshift. We assume nearby events with $z \sim 0$. We fix those parameter values for the flux calculation and vary the parameter of the turbulent magnetic field to study the properties of PD and PA (see \Secref{sec:large-B}). We consider that the sideways expansion of collimated relativistic blast waves is weak, as indicated by high-resolution hydrodynamic simulations \citep{Zhang2009, vanEerten2012}, and thus we set $\theta_j =$ const. for simplicity. As we mentioned in \Secref{subsec:EATS}, we focus on the case of $\theta_v = 0$. In \Figref{fig:light-curve}, the light-curve break at $T\simeq0.5~{\rm day}$ is due to the collimation of the outflow (i.e., the jet break). According to \Eqref{eq:synpower-approx}, the integrated flux spectrum has a broken power-law form, and its peak frequency is given by \begin{equation} \nu_T = \nu_m (y=1). \end{equation} We consider the frequencies of $10^{15}~{\rm Hz}$ (optical) and $100~{\rm GHz}$ (radio). In the time interval $T = 0.1-1\;$day, for which we calculate PD and PA, $\nu_T$ always lies between the radio and optical frequencies. We have confirmed that the synchrotron cooling and synchrotron self-absorption effects are not important for our parameter set, similarly to the case in \cite{ST2021}, i.e., $\nu_a < 100~{\rm GHz} < \nu_T < 10^{15}~{\rm Hz} < \nu_c$. 
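As a numerical illustration (our own sketch, not a result quoted from the text), the shock Lorentz factor for the parameter values of Table \ref{table:ordered-param} can be evaluated directly; it remains ultra-relativistic throughout the interval $T = 0.1-1\;$day:

```python
import numpy as np

# Table parameters in cgs units, z = 0
E_iso = 2.0e52      # isotropic energy [erg]
n = 1.0             # upstream number density [cm^-3]
m_p = 1.6726e-24    # proton mass [g]
c = 2.9979e10       # speed of light [cm/s]
day = 86400.0       # [s]

def Gamma(T):
    """Shock Lorentz factor of the adiabatic blast wave (Blandford-McKee)."""
    return 0.5*(17.0*E_iso/(16.0*np.pi*n*m_p*c**5*T**3))**(1.0/8.0)

print(Gamma(0.1*day))   # ~13 at T = 0.1 day
print(Gamma(1.0*day))   # Gamma decreases as T^(-3/8)
```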
The break of the radio light curve at $T\simeq 2\;$day is due to the crossing of $\nu_T$. The dashed lines in \Figref{fig:light-curve} represent the calculation results with BM structure. We find that the thin-shell approximation can mimic the emission flux from the shock with BM structure very well with $C = 0.80$. In \Figref{fig:flux-density-element}, we show the flux density element $dF/dyd\phi$, which is the integrand of \Eqref{eq:flux-thinshell}, at the optical and radio bands. At the optical band, the brightest part is around the middle part of the EATS, $y \sim 0.5-0.7$, while at the radio band, it is on the observer side of the EATS, $y \sim 0.8-1.0$. This frequency dependence of the brightness profile causes the difference in polarization between the optical and radio bands, as shown in \Secref{sec:analytical-estimate} and \ref{sec:results}. \section{Polarization from hydrodynamic-scale turbulent magnetic field} \label{sec:large-B} Synchrotron polarization depends on the magnetic field configuration at each local position. We consider turbulent magnetic field with a coherence length of hydrodynamic scale, which may be produced by magnetohydrodynamic instabilities such as the Richtmyer-Meshkov instability \citep{Sironi2007,InoueT2013, Mizuno2014}. In this case, the maximum coherence length of the magnetic field will be $\sim \Delta R$. Since the detailed structure of the magnetic field in the shocked region is not yet clear, we build a generic, semi-analytic model with parametrized field anisotropy under the assumption that the wavelength of the turbulent field is $\sim \Delta R$. In reality, the turbulent magnetic field will be amplified by the instabilities on a few eddy turnover times and cascade to smaller scales in the shocked region, forming a power spectrum of the magnetic energy \citep{Brandenburg2005, InoueT2011, Xu&Lazarian2016, Tomita2022}. 
We confirm that our conclusion on the synchrotron polarization does not change even in a case of Kolmogorov-type turbulence (see \appref{app:Kolmogorov-spectrum}). The distribution of magnetic field strength and coherence length in the shocked region will depend on the amplification time scale, but we ignore the amplification process for simplicity. \subsection{Turbulent magnetic field model} \label{subsec:B-field} We use a method based on the work of \cite{Giacalone1999} and its application to the polarization of supernova remnants \citep{Bykov2020}. To obtain the turbulent magnetic field that varies on the thin shell, we first derive the turbulent magnetic field on a 2D plane and then rotate it onto the spherical thin shell. We consider the summation of a large number of waves, with wavevector directions and phases given by uniform random numbers, on a 2D plane whose normal is along $\bm{\hat{z}}$: \begin{equation} \bm{B}(x,y) = \sum_{\rm n=1}^{N_m} \bm{\hat{b}}_n \exp{(i\bm{k}_n \cdot \bm{r}'_n + i\beta_n)}, \label{eq:B-field} \end{equation} where \begin{equation} \label{eq:B-field-direction} \bm{\hat{b}}_n = \sigma_{\perp} \cos \alpha_n \bm{\hat{y}}'_n + i\ \sigma_{\rm \|} \sin \alpha_n \bm{\hat{z}}, \end{equation} the transformation from $\bm{r} = (x,y,z)$ to $\bm{r}'_n = (x'_n,y'_n,z)$ is described by \begin{equation} \begin{pmatrix} x^\prime_n\\ y^\prime_n \end{pmatrix} = \begin{pmatrix} \cos \phi_n & \sin \phi_n \\ -\sin \phi_n & \cos \phi_n \\ \end{pmatrix} \begin{pmatrix} x\\ y \end{pmatrix}, \end{equation} and $\phi_n, \alpha_n, \beta_n$ are random numbers. We have two orthogonal directions $\bm{\hat{y}}'_n, \bm{\hat{z}}$ of the turbulent magnetic field, which are chosen orthogonal to the wavevector $\bm{k}_n \| \bm{\hat{x}}'_n$ in order to satisfy $\bm{\nabla} \cdot \bm{B} = 0$ in the lab frame. $\sigma^2_{\perp}$ and $\sigma^2_{\parallel}$ are the variances of the wave amplitude in the $\bm{\hat{y}}'_n$ and $\bm{\hat{z}}$ directions, respectively. 
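A minimal numerical sketch of this wave superposition (our own illustration, written in the comoving frame with $\Gamma = 1$ so that $\sigma_\perp^2 = 2\sigma_\|^2$, and with a reduced number of waves):

```python
import numpy as np

rng = np.random.default_rng(0)
N_m = 200                              # number of waves (3000 in the actual calculations)
k0 = 2*np.pi                           # |k_n| = k0 = const
sig_perp, sig_par = 1.0, 1/np.sqrt(2)  # sigma_perp^2 = 2 sigma_par^2 (isotropy, Gamma = 1)

phi = rng.uniform(0, 2*np.pi, N_m)     # wavevector directions on the 2D plane
alpha = rng.uniform(0, 2*np.pi, N_m)   # mixing angles between the two polarizations
beta = rng.uniform(0, 2*np.pi, N_m)    # phases

h = 1.0/63
x, yg = np.meshgrid(np.arange(64)*h, np.arange(64)*h)
Bx = np.zeros_like(x); By = np.zeros_like(x); Bz = np.zeros_like(x)

for p_n, a_n, b_n in zip(phi, alpha, beta):
    xp = np.cos(p_n)*x + np.sin(p_n)*yg      # x'_n coordinate, along k_n
    wave = np.exp(1j*(k0*xp + b_n))
    a_y = sig_perp*np.cos(a_n)               # amplitude along y'_n = (-sin phi_n, cos phi_n)
    Bx += np.real(-np.sin(p_n)*a_y*wave)
    By += np.real( np.cos(p_n)*a_y*wave)
    Bz += np.real(1j*sig_par*np.sin(a_n)*wave)

# b_n is orthogonal to k_n and B is independent of z, so div B = 0 wave by wave
divB = np.gradient(Bx, h, axis=1) + np.gradient(By, h, axis=0)
```

A new realization of the field is obtained by re-drawing $\phi_n, \alpha_n, \beta_n$.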
We assume that the turbulent magnetic field is isotropic in the comoving frame, \begin{equation*} \frac{2{\sigma'_\|}^2}{{\sigma'_\perp}^2} = 1, \end{equation*} so that we have $\sigma_\perp^2 = 2 \Gamma^2 \sigma_\|^2$. For simplicity, we set $|\bm{k}_n| = k_0 =$ const. Let $\lambda'_B$ denote the wavelength corresponding to $k_0$ in the comoving frame. We assume that $\lambda'_B$ is of the order of $\Delta R' \simeq R/16\gamma_f$, and write \begin{equation} \lambda'_{B} = f_B \frac{R}{16\gamma_f}, \label{eq:lambda-B} \end{equation} where $f_B$ is a parameter. (We also calculate the polarization for a Kolmogorov power spectrum in \appref{app:Kolmogorov-spectrum}.) Finally, we model the radial variation of the magnetic field on the expanding thin shell to calculate the polarization of emission from the EATS. Consider the radial length scale $\lambda_B^{\rm sh}$, which is the field wavelength measured in the rest frame of the shock. Then the timescale over which the shocked fluid advects from the shock front to the length scale $\lambda_B^{\rm sh}$ is \begin{equation} \delta t^{\rm sh}_B \simeq \frac{\lambda_B^{\rm sh}}{c/3} = 2\sqrt{2} \frac{\lambda'_B}{c}. \label{eq:delta-t-prime} \end{equation} On the same timescale, new turbulent eddies amplifying the field of scale $\sim \lambda_B^{\rm sh}$ will be created behind the shock front. Although the turbulent field changes moment by moment in reality, we assume that the turbulent field under our thin-shell approximation changes \textit{in a discrete manner} for simplicity: The field structure of the thin shell is set to be unchanged during the timescale $\delta t_B^{\rm sh}$ and reset with a new realization of the random numbers at every $\delta t_B^{\rm sh}$. 
The timescale $\delta t_B^{\rm sh}$ is measured in the lab frame as $\delta t_B = \Gamma \delta t^{\rm sh}_B \simeq \sqrt{2} \gamma_f \delta t^{\rm sh}_B$, for which the thin shell expands over the distance of \begin{equation} \delta R_B = c \delta t_B \simeq \frac{1}{4} f_B R. \label{eq:delta-R-b} \end{equation} The black solid lines in \Figref{fig:flux-density-element} represent the spheres with the spatial interval $\delta R_B$, which are set to have different configurations of turbulent magnetic field. The field configurations are unchanged between these spheres. \subsection{Synchrotron polarization} \label{subsec:polari} To calculate the polarization of synchrotron emission, we first derive the electric field $\bm{e}'$ of radiation in the comoving frame by $\bm{e}' = \bm{B}' \times \bm{n}'$ \citep[cf.][]{2003ApJ...597..998L,2009ApJ...698.1042T}. $\bm{n}'$ is the unit wavevector of observed radiation in the comoving frame, which is related to $\bm{\hat{n}} \parallel \bm{\hat{z}}$ as \begin{equation} \bm{\hat{n}}' = \frac{ \bm{\hat{n}} + \Gamma \bm{\beta} (\frac{\Gamma}{\Gamma + 1}\bm{\hat{n}} \cdot \bm{\beta} - 1) }{ \Gamma (1 - \bm{\hat{n}} \cdot \bm{\beta} ) }, \label{eq:n-prime} \end{equation} where $\bm{\beta} = \beta \bm{\hat{r}}$ (i.e., the spherical expansion). The magnetic field in the comoving frame is given by $\bm{B}_\parallel' = \bm{B}_\parallel$ and $\bm{B}_\perp' = \bm{B}_\perp/\Gamma$ under the ideal MHD approximation. Then we obtain the radiation electric field in the lab frame by Lorentz transformation, \begin{equation} \bm{e} = \Gamma \left[ \bm{e}' - \frac{\Gamma}{\Gamma+1} (\bm{e}' \cdot \bm{\beta}) \bm{\beta} - \bm{\beta}\times (\bm{n}' \times \bm{e}') \right]. \end{equation} The PA $\phi_p$ at each grid of EATS is given by \begin{equation} \phi_p = \arctan \pr{\frac{e_y}{e_x}}. 
\end{equation} We obtain the observed Stokes parameters $I_\nu, Q_\nu, U_\nu$, which are integrated over the EATS: \begin{align} &I_\nu = \int \frac{dF}{dyd\phi} dyd\phi, \label{eq:I-nu} \\ &Q_\nu = \int \Pi_0 \cos(2\phi_p) \frac{dF}{dyd\phi} dyd\phi, \label{eq:Q-nu} \\ &U_\nu = \int \Pi_0 \sin(2\phi_p) \frac{dF}{dyd\phi} dyd\phi, \label{eq:U-nu} \end{align} where $\Pi_0$ is the synchrotron PD in a uniform magnetic field, which has a frequency dependence \citep{Radipro,Melrose1980b}, \begin{equation} \label{eq:Pi-0} \Pi_0 = \begin{cases} 0.5 & (\nu < \nu_m) \\ \frac{p+1}{p+7/3} & (\nu \geq \nu_m) \end{cases}. \end{equation} Note that we focus on the spectral segments at $\nu_a < \nu < \nu_c$ (see Section \ref{subsec:param}). We obtain the net PD $\Pi_\nu$ and the net PA $\Phi_{p,\nu}$ using $I_\nu, Q_\nu, U_\nu$: \begin{align} &\Pi_\nu = \frac{\sqrt{Q_\nu^2 + U_\nu^2}}{I_\nu}, \label{eq:pi-tot} \\ &\Phi_{p,\nu} = \frac{1}{2} \tan^{-1}\pr{\frac{U_\nu}{Q_\nu}}. \label{eq:phi-tot} \end{align} We denote the net PDs at $10^{15}$ Hz and $100$ GHz by $\Pi_{\rm opt}$ and $\Pi_{\rm radio}$, respectively. \section{Analytical estimate of PD} \label{sec:analytical-estimate} We analytically estimate the level of PDs before showing our numerical results in \Secref{sec:results}. By this analysis we can understand the order of magnitude of PD at each band and its dependence on $f_B$. The average level of PD with the hydrodynamic-scale turbulent magnetic field can be estimated by \begin{equation} \Pi_\nu \sim \frac{\Pi_0}{\sqrt{N}}, \label{eq:PD-largescale} \end{equation} where $N (\gg 1)$ is the number of patches with coherent magnetic field in the visible area of the shocked region \citep{Gruzinov1999}, as introduced in \Secref{sec:intro}. 
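The $1/\sqrt{N}$ scaling can be illustrated with a toy Monte Carlo (our own check, assuming $N$ equal-brightness patches with independent, uniformly random polarization angles):

```python
import numpy as np

rng = np.random.default_rng(1)
Pi0 = 0.72        # local PD, (p+1)/(p+7/3) for p = 2.5 at nu >= nu_m
N = 1500          # number of coherent patches in the visible region
trials = 2000

phi_p = rng.uniform(0, np.pi, (trials, N))   # random PA of each patch
Q = (Pi0*np.cos(2*phi_p)).mean(axis=1)       # patch-averaged Stokes parameters
U = (Pi0*np.sin(2*phi_p)).mean(axis=1)
PD = np.sqrt(Q**2 + U**2)

print(PD.mean())   # of order Pi0/sqrt(N), i.e. ~2 per cent
```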
To estimate $N$ at the optical and radio bands, we divide the EATS into three parts: the dark region at both of the wavelengths ($y \lesssim 0.4$, Region 0), the bright region at optical ($y \sim 0.5-0.7$, Region 1), and the bright region at radio ($y \sim 0.8-1.0$, Region 2), as shown by the schematic pictures in \Figref{fig:patch}. The net PD is the brightness-weighted average of local PDs. Thus we may estimate the optical and radio PDs by counting $N$ of Region 1 ($N_1$) and Region 2 ($N_2$), respectively. The distance of a position on the EATS from the line of sight is $R_\perp \simeq (\sqrt{2}R_l/4\gamma_l)\sqrt{y-y^5}$, and its maximum value, reached at $y \simeq 0.67$, is $R_{\perp,\rm max} \simeq 0.26 R_l/\gamma_{l}$. Region 1 can be regarded as a cylindrical surface of the radius $R_{\perp,{\rm max}}$. On this cylindrical surface, the number of segments between the spheres with different configurations of turbulent magnetic field is \begin{equation} N_f \sim \frac{R}{\delta R_B} = 4 f_B^{-1}. \end{equation} The number of patches in each of the segments is \begin{equation} N_\phi = \frac{2\pi R_{\perp,\rm max}}{\lambda'_{B}/4} \simeq 380 f_B^{-1}, \end{equation} where we have assumed that the size of a patch is a quarter of the wavelength of the turbulent magnetic field, i.e., $\sim \lambda'_{B}/4$, and estimated $\lambda'_B$ at $y=0.6$. We obtain $N_1 = N_\phi N_f \sim 1500 f_B^{-2}$, and then we have the analytical estimate of the optical PD, \begin{equation} \Pi_{\rm a,opt} \sim \frac{\Pi_0(\nu \geq \nu_m)}{\sqrt{N_1}} \sim 2 f_B \;\%. \label{eq:PD-Region 1} \end{equation} Region 2 can be roughly regarded as a spherical surface of the radius $R_{\perp,{\rm max}}$. The number of patches in this surface $N_{\rm surface}$ is \begin{equation} N_{\rm surface} \simeq \frac{\pi R_{\perp,\rm max}^2}{(\lambda'_{B}/4)^2} \simeq 880 f_B^{-2}, \label{eq:N-surface-thinshell} \end{equation} where we have estimated $\lambda'_B$ at $y=1.0$. 
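These patch counts can be reproduced numerically (our own sketch; lengths in units of $R_l/\gamma_l$, with prefactors that differ slightly from the rounded values quoted in the text):

```python
import numpy as np

p, f_B = 2.5, 1.0

# distance of an EATS point from the line of sight, in units of R_l/gamma_l
yv = np.linspace(1e-3, 1.0, 200001)
R_perp = (np.sqrt(2)/4)*np.sqrt(yv - yv**5)
R_perp_max = R_perp.max()                   # ~0.26, reached at y ~ 0.67

lam_B = lambda y: f_B*y**2.5/16             # lambda'_B = f_B R/(16 gamma_f), gamma_f = gamma_l y^(-3/2)
patch = lambda y: lam_B(y)/4                # patch size ~ lambda'_B/4

# Region 1 (optical): cylindrical surface, lambda'_B evaluated at y = 0.6
N_f = 4/f_B
N_phi = 2*np.pi*R_perp_max/patch(0.6)       # ~380 f_B^-1
N_1 = N_phi*N_f                             # ~1500 f_B^-2
Pi_opt = 100*((p+1)/(p+7/3))/np.sqrt(N_1)   # ~2 f_B per cent

# Region 2 (radio): spherical surface, lambda'_B evaluated at y = 1.0
N_surf = np.pi*R_perp_max**2/patch(1.0)**2  # ~880 f_B^-2
Pi_radio = 100*0.5/np.sqrt(N_surf)          # ~2 f_B per cent
```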
We obtain $N_2 \sim N_{\rm surface}$, and then we have the analytical estimate of the radio PD, \begin{equation} \Pi_{\rm a,radio} \sim \frac{\Pi_0(\nu < \nu_m)}{\sqrt{N_2}} \sim 2 f_B\; \%. \label{eq:PD-Region 2} \end{equation} From \Eqref{eq:PD-Region 1} and (\ref{eq:PD-Region 2}), our analytical estimate shows that $\Pi_{\rm a,opt}$ and $\Pi_{\rm a,radio}$ are comparable and both proportional to $f_B$. \section{Numerical Calculation Results} \label{sec:results} In this section we show the numerical calculation results of synchrotron polarization from collimated blast waves as functions of $T$ and their dependence on $f_B$. \subsection{Numerical setup} For our numerical calculations, we set the number of waves to be $N_m = 3000$ in \Eqref{eq:B-field}. From analysis of emission from a plane of uniform brightness with various values of $N_m$, we found that the net PD converges to $\sim \Pi_0/\sqrt{N}$ for $N_m \gg N$. If $N_m$ is too small, a spurious PD induced by an artificial anisotropy of the turbulent field $(> \Pi_0/\sqrt{N})$ arises. We confirmed $N_m \gg N$ for our calculations, so that our resulting PDs are not affected by this numerical artifact. We set the number of grid points of the spherical thin shell on which the magnetic field is defined as $(60/f_B,~240/f_B)$ in the $(R,\phi)$ directions, and the number of spatial grid points for the flux and polarization calculations as $(256, 1024)$ in the $(y,\phi)$ directions. We confirmed the numerical convergence of PD and PA at every observed time with the above setups. \subsection{Temporal behaviors of PDs} \label{subsec:PD-curve} \Figref{fig:PDPA-curve} shows the calculated PD curves (the first panels) and PA curves (the second panels) for $f_B = 1.0$ at the frequencies $10^{15}$ Hz (optical) and $100$ GHz (radio) during $T = 0.1-1.0$ days. 
We show the calculation results for four cases, Realizations (a), (b), (c), and (d), for which the parameter sets are the same as listed in Table 1 but the realizations of the random numbers for the turbulent field creation are different. Interestingly, we obtain an optical PD at the observed level, $\Pi_{\rm opt} \simeq 1-3\%$, in the case of $f_B = 1.0$. The temporal variations of the PDs and PAs look random and continuous at both the optical and radio bands. This behavior differs from that in the plasma-scale turbulent field model, in which PDs have one or two peaks, and PAs remain constant or flip suddenly by $90\;$deg \citep{Sari1999,Ghisellini1999,Rossi2004}. We also find that in some realizations, time intervals in which the radio PD is higher than the optical PD are frequently seen (Realizations (a) and (b)), while in other realizations the opposite trend is seen (Realization (c)). A radio PD higher than the optical one is a feature distinct from the plasma-scale turbulent field model, in which the radio PD is always lower than the optical one \citep{ST2021,Rossi2004}. As mentioned in the sections above, the difference between the radio and optical polarizations comes from the frequency dependence of the brightness profile of the EATS shown in \Figref{fig:flux-density-element}. To demonstrate this, we numerically calculate the PDs at Regions 1 and 2 by \begin{equation} \Pi_{i, \nu} = \frac{\sqrt{Q^2_{i,\nu} + U^2_{i,\nu}}}{I_{i,\nu}} \quad (i=1,2), \label{eq:PD1_PD2} \end{equation} where the subscript $i$ denotes the region number and $I_{i,\nu}, Q_{i,\nu},$ and $U_{i,\nu}$ are the Stokes parameters calculated at each region (cf. \Eqref{eq:I-nu}-(\ref{eq:U-nu})). Here, we set the ranges of Regions 0, 1, and 2 as $y < 0.42$, $0.42 \le y < 0.76$, and $y \ge 0.76$, respectively. 
We have chosen $y = 0.76$ (and $y = 0.42$ for the optical band) as the point where the value of $dF/dy$ is 60\% of its maximum at each band (see also \Secref{subsec:validity-thin-shell}). We show these calculated PDs in the bottom panels of \Figref{fig:PDPA-curve}. We can see that $\Pi_{\rm radio} > \Pi_{\rm opt}$ when $\Pi_{2,{\rm radio}} > \Pi_{1,{\rm opt}}$, and vice versa. This means that the net PD reflects the PD of the brightest region at each band. It is remarkable in Realizations (a) and (b) that the temporal behaviors of $\Pi_{\rm radio}$ are in good agreement with those of $\Pi_{2,{\rm radio}}$, not with $\Pi_{1,{\rm radio}}$. \subsection{Statistical behavior of PDs} \label{subsec:stat-PD} We calculate the PDs for 300 realizations of the magnetic field turbulence and take the average and variance. We perform this calculation for $f_B=1.0$ and show the results at $T=0.2,\,0.4$ and $0.9\,$days in \Figref{fig:PD-time}. At $T=0.2,\,0.4\,{\rm days}~(<T_j)$, the realization-averaged PDs at the radio band $\langle \Pi_{\rm radio} \rangle$ (the orange circle marks) tend to be higher than those at the optical band $\langle \Pi_{\rm opt} \rangle$ (the purple square marks). On the other hand, at $T=0.9\; {\rm day}~(>T_{j})$, $\langle \Pi_{\rm radio} \rangle$ is comparable to $\langle \Pi_{\rm opt} \rangle$. This is because the part of the EATS that would be the brightest at the optical band is outside the jet and does not radiate (see \Figref{fig:flux-density-element}), and the brightness profile then becomes more like that at the radio band. In \Figref{fig:PD-time-region}, we show the realization-averaged PD at Region 1 at the optical band, $\langle \Pi_{\rm 1,opt} \rangle$, and that at Region 2 at the radio band, $\langle \Pi_{\rm 2,radio} \rangle$. 
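The realization-averaging statistics behind these curves can be illustrated with a toy model (ours, not the paper's full calculation): $N$ equally bright patches, each with an assumed intrinsic PD $\Pi_0$ and a random polarization angle, summed in Stokes space as in Eq.~(\ref{eq:PD1_PD2}).

```python
import math, random

def mean_net_pd(n_patches, pi0=0.7, n_real=300, seed=42):
    """Average net PD over realizations of random patch orientations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_real):
        q = u = 0.0
        for _ in range(n_patches):
            chi = rng.uniform(0.0, math.pi)    # patch polarization angle
            q += pi0 * math.cos(2.0 * chi)
            u += pi0 * math.sin(2.0 * chi)
        total += math.hypot(q, u) / n_patches  # I = n_patches (equal weights)
    return total / n_real

# the mean net PD follows the random-walk scaling ~ pi0 / sqrt(N)
print(mean_net_pd(100), mean_net_pd(400))
```

Quadrupling the patch count halves the mean net PD, which is the scaling used throughout the analytical estimates.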
While our rough estimate leads to $\Pi_{\rm a,radio} \sim \Pi_{\rm a,opt}$ (see \Secref{sec:analytical-estimate}), the numerical results show that $\langle \Pi_{\rm 2,radio} \rangle$ is slightly higher than $\langle \Pi_{\rm 1,opt} \rangle$. One can see that this trend results in the $\langle \Pi_{\rm radio} \rangle \gtrsim \langle \Pi_{\rm opt} \rangle$ shown in \Figref{fig:PD-time}. \Figref{fig:PD-time-region} also shows a time evolution similar to that in \Figref{fig:PD-time}: $\langle \Pi_{\rm 2,radio} \rangle$ is higher than $\langle \Pi_{\rm 1,opt} \rangle$ at $T < T_j$, whereas at $T = 0.9 \;{\rm day}~(> T_j)$, $\langle \Pi_{\rm 1,opt} \rangle$ increases slightly and becomes closer to $\langle \Pi_{\rm 2,radio} \rangle$. In \Figref{fig:fB-PD}, we show the PDs at $T = 0.4$ days for various values of $f_B = 0.25 - 4.0$. The PDs are roughly proportional to $f_B$. In \Figref{fig:PD-time}--\ref{fig:fB-PD}, we also show the lines of $\Pi_{\rm a,opt}$ and $\Pi_{\rm a,radio}$ (see \Secref{sec:analytical-estimate}), which explain well the order of magnitude of the PDs and their dependence on $f_B$, except for $f_B=4.0$. Note that at $f_B=4.0$ the analytical estimate may be invalid, because \Eqref{eq:PD-largescale} is an expression in the large-$N$ limit. \section{Summary and Discussion} \label{sec:summary} We have constructed a semi-analytic model of GRB forward shock afterglows with hydrodynamic-scale turbulent magnetic field and performed analytical and numerical estimates of the optical and radio polarizations. The numerical calculations assume a zero viewing angle and an isotropic turbulent magnetic field. Our analytical estimate, based on counting the number of coherent field patches in the brightest part of the EATS at each frequency band, indicates comparable optical and radio PDs of $\sim 2 f_B\,\%$. Interestingly, the observed level of $\sim 1-3\,\%$ for late-time optical GRB afterglows is obtained for $f_B \sim 1$. 
The numerical calculations are consistent with the analytical estimate, but show that the realization-averaged PDs at the radio band $\langle \Pi_{\rm radio} \rangle$ are slightly higher than those at the optical band $\langle \Pi_{\rm opt} \rangle$ at $T < T_j$. We also show numerically that the averaged radio PD at Region 2, $\langle \Pi_{2,\rm radio}\rangle$, is slightly higher than the optical PD at Region 1, $\langle \Pi_{1,\rm opt} \rangle$, at $T < T_j$, which causes $\langle \Pi_{\rm radio} \rangle \gtrsim \langle \Pi_{\rm opt} \rangle$, and that $\langle \Pi_{2,\rm radio}\rangle$ and $\langle \Pi_{1,\rm opt} \rangle$ exhibit time evolution similar to that of $\langle \Pi_{\rm radio} \rangle$ and $\langle \Pi_{\rm opt} \rangle$, respectively. Our numerical calculations also show that the PDs and PAs vary randomly and continuously at both bands. In some time intervals, the radio PD can be significantly higher than the optical PD. In contrast, in the plasma-scale turbulent magnetic field model, (i) the radio PD is always lower than the optical PD, (ii) PDs have one or two peaks while PAs are constant or have a sudden flip by $90^\circ$, and (iii) the difference in PA between the radio and optical bands is zero or $90^\circ$ \citep{Sari1999,Ghisellini1999,Rossi2004,ST2021}. Thus, more simultaneous polarimetric observations of GRB afterglows at the radio and optical bands would provide decisive tests of these two turbulent magnetic field models. \subsection{the validity of the thin-shell approximation} \label{subsec:validity-thin-shell} We have used the thin-shell approximation for the shocked region. Here we discuss its validity. If we use the BM structure for the fluid behind the shock front, the brightness profile changes from that of the thin-shell approximation, which could affect the frequency dependence of the PD. 
The flux density element of the BM structure is $dF/d\chi dy$ (after the integration over $\phi$), where $\chi$ is a self-similar variable corresponding to the distance behind the shock front \citep[cf. Eq. (9) of][]{Granot1999a}. For the thin-shell approximation, we have roughly counted the number of coherent patches $N$ in the region of the EATS with a high value of $dF/dy$, by which we have obtained the net PD consistent with the numerical results (see \Secref{sec:analytical-estimate}). For the BM structure, $N$ can be counted roughly by deriving the profile of $dF/dy$ after the integration over $\chi$, recalling that the magnetic field structure behind the shock front is assumed to be unchanged while the shock expands over the distance $\delta R_B$. We integrate $dF/d\chi dy$ in the range $\chi = 1-2$, because only this region is bright and mainly contributes to the net PD, and plot it in \Figref{fig:flux-density-element-BM}, where $dF/dy$ for the thin-shell approximation is also shown for comparison. At the optical band, as shown in the top panel of \Figref{fig:flux-density-element-BM}, the values of $dF/dy$ for the BM structure and for the thin-shell approximation are the same except at $y>0.9$, and they take the same maximum value at $y \sim 0.6$. Thus, we expect that our calculated $\Pi_{\rm opt}$ would not change if we calculated it with the BM structure. At the radio band, as shown in the bottom panel of \Figref{fig:flux-density-element-BM}, $dF/dy$ for the BM structure peaks at $y\sim 0.85$ while that with the thin-shell approximation peaks at $y=1.0$. We should then estimate the number of coherent patches $N$ for the BM structure. The brightest region for the BM structure at the radio band has a ring-like shape with inner radius $R_\perp(y=0.85)$ and outer radius $R_{\perp, \rm max}$. 
The number of patches in this ring, $N_{2,\rm BM}$, is \begin{equation} N_{2,\rm BM} = \frac{\pi(R_{\perp, \rm max}^2 - R_\perp^2(y=0.85))}{(\lambda'_B(y=0.85)/4)^2} \times 2 \simeq 920 f_B^{-2}, \end{equation} where the last factor of $2$ corresponds to $N_f$ introduced in \Secref{sec:analytical-estimate} and means that the region between $y=0.67-0.85$ includes at most two different configurations of the turbulent magnetic field. $N_{2,\rm BM}$ is comparable to that with the thin-shell approximation (see \Eqref{eq:N-surface-thinshell}). Therefore, if we calculated $\Pi_{\rm radio}$ with the BM structure, it would not change significantly from our results with the thin-shell approximation. In summary, our conclusions on $\Pi_{\rm radio}$ and $\Pi_{\rm opt}$ in this paper would not change even if we calculated the polarization with the BM structure. \subsection{future work} Our analysis treats only the case of zero viewing angle $(\theta_v = 0)$ and isotropic turbulent magnetic field $({\sigma'}^2_\parallel = {\sigma'}^2_\perp/2)$. In reality, many GRBs should be observed at finite values of $\theta_v$. The shock jump condition would make the magnetic field anisotropic in the downstream region even if it is isotropic in the upstream region \citep{Ma&Zhang2022}. The nonlinear dynamics of MHD turbulence also show an anisotropic magnetic energy spectrum \citep{Goldreich1995, Lazarian1999}. The cases of finite $\theta_v$ (particularly $0 < \theta_v < \theta_j$, for which the prompt emission can be bright) and/or an anisotropic turbulent field will exhibit features of PDs and PAs different from those found in this paper, which are interesting theoretically and observationally. The effects of an ordered field component and Faraday effects should also be examined \citep[cf.][]{Granot2003, Toma2008}. We will perform calculations for those cases at $T \gtrsim 1\;$day and compare the results with the observed data in separate papers. 
This will enable us to distinguish the hydrodynamic-scale turbulent field and the plasma-scale turbulent field in the regions behind GRB afterglow forward shocks. \vskip\baselineskip We thank S. Chon for helpful discussions. We also thank the anonymous referee for useful comments. Numerical calculations were performed on Draco, a computer cluster of FRIS. We utilized the Science Lounge of FRIS CoRE for discussions many times. This work is partly supported by JSPS Grants-in-Aid for Scientific Research No. 18H01245 (K.T.), No. 22K14028 (S.S.K.), by the Graduate Program on Physics for the Universe (GP-PU), Tohoku University (A.K.), and by the Tohoku Initiative for Fostering Global Researchers for Interdisciplinary Sciences (TI-FRIS) of MEXT's Strategic Professional Development Program for Young Researchers (S.S.K.). \appendix \renewcommand{\thetable}{\Alph{section}.\arabic{table}} \renewcommand{\thefigure}{\Alph{section}.\arabic{figure}} \section{polarization from Kolmogorov-type turbulent magnetic field} \label{app:Kolmogorov-spectrum} We have constructed the turbulent magnetic field model with wavelength $\lambda'_B = f_B \Delta R'$ for the shocked region of the forward shocks in \Secref{sec:large-B} and showed the calculation results of the synchrotron polarization in \Secref{sec:results}. Here we consider the distribution of magnetic field wavelengths. MHD simulations of the Richtmyer-Meshkov instability at relativistic shocks show magnetic field turbulence with a power spectrum somewhat flatter than the Kolmogorov one \citep{InoueT2011, Mizuno2014}. Observations of supernova remnants indicate Kolmogorov power spectra in the shocked regions \citep{Shimoda2018, Shimoda2022, Vishwakarma2020}. We assume an isotropic turbulent field (i.e., ${\sigma'}^2_\parallel = {\sigma'}^2_\perp/2$) with a Kolmogorov power spectrum as an example, just to demonstrate the effect of the field wavelength distribution on the synchrotron polarization. 
The Kolmogorov power spectrum is $P(k)dk \propto k^{-11/3} 4\pi k^2 dk$, i.e., $P(k)k\, d\ln k \propto k^{-2/3} d\ln k$. We set $\sigma_{\perp}^2, \sigma_{\parallel}^2 \propto P(k)k$ in \Eqref{eq:B-field-direction} for $k_{\rm min} \leq k \leq k_{\rm max}$, where $k_{\rm min} = 2\pi/ \lambda'_B$. We calculate the polarization for $x \equiv k_{\rm max}/k_{\min} = 2, 4,$ and $8$. For the calculation with $x=8$, we divided the $\ln k$-space into 8 grid cells and set $3000$ waves with random wave directions and phases for each cell. We then find that the obtained PDs are roughly proportional to a characteristic length, \begin{equation} L_c = \frac{\int_{k_{\rm min}}^{k_{\rm max}} \frac{2\pi}{k} P(k) dk}{\int_{k_{\rm min}}^{k_{\rm max}} P(k) dk} = \frac{4 \pi}{5}k_{\rm min}^{-1} \frac{x^{5/3}-1}{x(x^{2/3}-1)}. % \label{eq:coherent-length} \end{equation} $L_c(x)$ monotonically decreases and converges to $(4\pi/5) k_{\rm min}^{-1}$. Although the power spectrum will extend to the tiny dissipation scale (i.e., $x \gg 1$), which depends on parameters such as the particle mean free path, viscosity, resistivity, etc. \citep{Schekochihin2002}, we do not have to set such a large value of $x$. Since $L_c(x=8) \simeq L_c(x\gg 1)$, the calculation of the polarization for $x = 8$ is a good approximation to that for $x \gg 1$. We show the calculation results for $f_B = 4.0$ and $x=8$ in \Figref{fig:PD_kspectrum}. The average PD at each time is $\sim 4$ times lower than that calculated using only $k_{\rm min}$. This is still comparable to the observed PDs of late-time optical GRB afterglows. We see that $\langle \Pi_{\rm radio} \rangle \gtrsim \langle \Pi_{\rm opt} \rangle$ and that the PDs and PAs vary randomly and continuously in a manner similar to the calculation results with only $k_{\rm min}$. We also perform calculations for $f_B = 1.0$ and find the same behaviors as for $f_B = 4.0$. 
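The closed form of $L_c$ can be checked against its defining integrals. In the sketch below we take $P(k) \propto k^{-5/3}$ (i.e., the shell-integrated spectrum, which is our reading of $P(k)$ in the ratio) and $k_{\rm min}=1$:

```python
import math

def trapz(f, a, b, n=20000):
    """Simple trapezoidal quadrature."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def Lc_closed(x, k_min=1.0):
    """Closed-form characteristic length from the text."""
    return (4.0 * math.pi / 5.0) / k_min * (x**(5/3) - 1.0) / (x * (x**(2/3) - 1.0))

def Lc_numeric(x, k_min=1.0):
    """Direct evaluation of the defining ratio of integrals."""
    P = lambda k: k**(-5/3)   # shell-integrated Kolmogorov spectrum (assumed)
    num = trapz(lambda k: (2.0 * math.pi / k) * P(k), k_min, x * k_min)
    den = trapz(P, k_min, x * k_min)
    return num / den

for x in (2.0, 4.0, 8.0):
    print(x, Lc_closed(x), Lc_numeric(x))
```

The numbers confirm the monotonic decrease toward the $x \gg 1$ limit $(4\pi/5)k_{\rm min}^{-1}$, with $L_c(x=8)$ already within $\sim30\%$ of it.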
In summary, our conclusion in \Secref{sec:summary} does not change even if we consider the Kolmogorov-type magnetic power spectrum. \bibliography{ms}{} \bibliographystyle{aasjournal}
Title: Investigation of the broadband emission of the gamma-ray binary HESS J0632+057 using an intrabinary shock model
Abstract: We investigated a wealth of X-ray and gamma-ray spectral energy distribution (SED) and multi-band light curve (LC) data of the gamma-ray binary HESS J0632+057 using a phenomenological intrabinary shock (IBS) model. Our baseline model assumes that the IBS is formed by colliding winds from a putative pulsar and its Be companion, and particles accelerated in the IBS emit broadband radiation via synchrotron (SY) and inverse-Compton upscattering (ICS) processes. Adopting the latest orbital solution and system geometry (Tokayer et al. 2021), we reproduced the global X-ray and TeV LC features, two broad bumps at $\phi \sim 0.3$ and $\sim0.7$, with the SY and ICS model components. We found these TeV LC peaks originate from ICS emission caused by the enhanced seed photon density near periastron and superior conjunction or Doppler-beamed emission of bulk-accelerated particles in the IBS at inferior conjunction. While our IBS model successfully explained most of the observed SED and LC data, we found that phase-resolved SED data in the TeV band require an additional component associated with ICS emission from pre-shock particles (produced by the pulsar wind). This finding indicates a possibility of delineating the IBS emission components and determining the bulk Lorentz factors of the pulsar wind at certain orbital phases.
https://export.arxiv.org/pdf/2208.01189
\title{Investigation of the broadband emission of the gamma-ray binary HESS~J0632+057 using an intrabinary shock model} \correspondingauthor{Hongjun An} \email{hjan@cbnu.ac.kr} \author{Jinyoung Kim} \affiliation{Department of Astronomy and Space Science, Chungbuk National University, Cheongju, 28644, Republic of Korea} \author{Hongjun An} \affiliation{Department of Astronomy and Space Science, Chungbuk National University, Cheongju, 28644, Republic of Korea} \author{Kaya Mori} \affiliation{Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027, USA} \section{Introduction} \label{sec:intro} High-energy $\gamma$-ray surveys using ground-based imaging air Cherenkov telescopes (e.g., VERITAS, H.E.S.S., and MAGIC), along with X-ray telescopes, have uncovered a rare subclass of binary systems detected above $E\sim0.1$~TeV \citep[e.g.,][]{Corbet2012,Aharonian2006}. These so-called TeV $\gamma$-ray binaries (TGBs) harbor a compact object and a massive companion (O, B or Be star), with a wide range of orbital periods spanning from 3.9~days to $\sim50$~years. TGBs emit orbitally modulated broadband radiation from X-ray to gamma-ray energies \citep[e.g.,][]{Mirabel2012}. It is widely accepted that the very high-energy (VHE; $\ge$0.1\,TeV) emission from TGBs implies that particles are accelerated to GeV--TeV energies in these systems \citep[][]{Becker2017}. Among the $\le$10 TGBs discovered thus far, the compact object has been identified as a neutron star in only three systems: PSR~B1259$-$63 \citep[][]{Johnston1992}, PSR~J2032+4127 \citep[][]{Ho2017}, and LS~I~+61$^\circ$~303 \citep[][]{Weng2022}. Studies of TGBs have been carried out primarily by modeling their light curves (LCs) and broadband spectra in the X-ray and gamma-ray bands. The multi-wavelength spectral energy distributions (SEDs) are well characterized by two non-thermal components, in the low-energy ($\le$100\,MeV) and gamma-ray ($\sim$TeV) bands, which are mixed with thermal emission from the companion. 
It is thought that the low-energy non-thermal emission extending from the radio to the $\le$100\,MeV band is produced by synchrotron (SY) radiation of energetic electrons \citep[e.g.,][]{Tavani1994,Dubus2013}. The GeV emission may consist of SY and pulsar magnetospheric radiation, with some contribution from inverse-Compton scattering (ICS) of stellar thermal photons by low-energy electrons \citep[e.g., with a Lorentz factor of $\sim10^4$;][]{Zabalza2013,Dubus2015}. The VHE emission is believed to be produced by ICS of the stellar photons by high-energy electrons \citep[e.g.,][]{Dubus2006,Khangulyan2008,Chernyakova2020}. The X-ray and VHE emission, resulting from SY and ICS in the shocked region, respectively, shows a strong dependence on orbital phase due to various factors such as the intrabinary distance, anisotropic radiation processes, and relativistic Doppler boosting. These emission mechanisms have been employed primarily in two scenarios for TGBs: a microquasar and an intrabinary shock (IBS) scenario. In the microquasar scenario, the `unknown' compact object is assumed to be a black hole with bipolar and relativistic jets. Particles are accelerated to high energies in the jets, and orbital variation of the jet viewing angle generates the orbital modulation in the emission \citep[e.g.,][]{Bosch-Ramon2007,Marcote2015}. In the IBS scenario, the compact object is assumed to be a pulsar whose wind interacts with the companion's outflow. The wind-wind collision produces a contact discontinuity (CD) in the IBS region where pulsar wind particles are accelerated and emit broadband non-thermal radiation \citep[e.g.,][]{Dubus2006}. Orbital variation of the orientation of the IBS flow with respect to the observer's line of sight (LoS) causes the high-energy emission to modulate with the orbital period \citep[e.g.,][]{vandermerwe2020}. 
Given the three TGBs containing radio pulsars, combined with mass-function constraints, the compact object in the other TGBs is generally considered to be a neutron star \citep[e.g.,][]{Dubus2013}. The TeV source HESS~J0632+057 (J0632 hereafter) was identified as a TGB by the detection of its $\sim320$-day orbital modulation in the VHE band \citep[][]{Acciari2009}. Later, X-ray and GeV modulations on the orbital period ($P_{\rm orb}$) were detected \citep[][]{Aliu2014,Li2017}, confirming the VHE identification. An optical spectroscopic study identified the companion as a Be star \citep[HD~259440;][]{Aragona2010} with an equatorial disk. The compact object has not been identified yet, but Chandra imaging found a hint of extended emission around the source, which was interpreted as a signature of wind-wind interaction \citep[][]{Kargaltsev2022}. In this paper, we test the orbital solution suggested by \citet{Tokayer2021} (TAH21) and determine the IBS parameters of J0632 using the $\sim$GeV and VHE measurements. Our phenomenological model fit to the extensive X-ray and gamma-ray data puts some constraints on the magnetohydrodynamic (MHD) flows in the IBS that are useful for further MHD simulations of J0632 and other TGBs. We describe the IBS structure in Section~\ref{sec:sec2} and present the emission model components in Section~\ref{sec:sec3}. We then use the model to explain the broadband data of J0632 in Section~\ref{sec:sec4}. We discuss implications of the modeling in Section~\ref{sec:discussion} and present a summary in Section~\ref{sec:summary}. \section{orbital solutions for J0632}\label{sec2_0} The X-ray and VHE LCs of J0632 exhibit a similar shape characterized by broad bumps at orbital phases $\phi\approx0.25$ and 0.75, and a sharp spike at $\phi$=0.35 \citep[e.g.,][]{Tokayer2021, Adams2021}. These features could probe the emission mechanisms and help infer the properties of particle acceleration and flow in the binary system. 
However, the orbital solution of J0632 has not been well determined. While optical data have provided accurate orbital solutions for other TGBs, the situation for J0632 is unclear. Orbital solutions derived from radial velocity measurements using the H$\alpha$ line do not agree with each other. For the reference epoch of MJD~54857 used throughout this paper, the solution inferred from an optical study by \citet{Casares2012} suggests a highly eccentric orbit with an eccentricity of 0.83, periastron at $\phi=0.967$, and LoS at $\phi=0.961$. In contrast, \citet{Moritani2015} suggested a less eccentric orbit with an eccentricity of 0.64, periastron at $\phi=0.663$, and LoS at $\phi\approx 0.17$. Even considering that their radial velocity curves were folded on different periods, 321\,days and 313\,days in \citet{Casares2012} and \citet{Moritani2015}, respectively, the solutions differ substantially. X-ray LC data provide alternative orbital solutions to those obtained with optical data. Two such solutions are obtained by attributing the bumps in the X-ray LC to disk interactions \citep{Malyshev2019,Chen2022}. Most recently, \citet{Tokayer2021} derived an orbit of J0632 by modeling the most extensive X-ray LC data with an IBS model (Fig.~\ref{fig:fig1}). This latest orbital solution seems to be well justified since the model matches the X-ray LC data well and accounts for the enhanced hydrogen column density ($N_{\rm H}$) observed at some orbital phases, which \citet{Malyshev2019} and \citet{Tokayer2021} attributed to the pulsar-disk interaction. These three orbits inferred from modeling of X-ray LCs folded on $P_{\rm orb}\approx317$\,days are all similar to one another even though the emission models employed in those works are somewhat different. The suggested eccentricities are modest (0.4--0.5), and the phases of periastron and LoS are $\phi=$0.3--0.4 and 0.7--0.8, respectively. 
Considering that the three X-ray-inferred orbits broadly agree, this leaves three orbital solutions for J0632 which significantly disagree with one another -- two from optical data and one from X-ray studies. Because it is unclear which orbital solution is correct, it will be helpful to check whether the suggested orbits can explain the VHE data (LC and SED) using emission models, which has not been done for any of the suggested orbits. The optical orbits of \citet{Casares2012} and \citet{Moritani2015} are incompatible with a shock-emission scenario, as noted by \citet{Chen2022}. Thus they are inadequate for an IBS study of J0632. The X-ray orbits of \citet{Chen2022} and \citet{Malyshev2019} were constructed using an inclined disk model which employs a termination shock and its interaction with an inclined disk. This is essentially a one-zone model with a disk, since the shock region is assumed to be a point source \citep[][]{Chen2022}, whereas the IBS model of \citet[][TAH21 hereafter]{Tokayer2021} took into account the multi-zone emission from a cone-shaped IBS region. Because the IBS, unlike the one-zone termination shock, produces Doppler-boosted emission along the shock tail \citep[e.g.,][]{Dubus2013, An2017, vandermerwe2020}, the orbit of J0632 inferred by an IBS model (Fig.~\ref{fig:fig1}; TAH21) slightly differs from that obtained by the inclined disk model. Note that TAH21 also applied a one-zone shock model to the broadband SEDs of J0632, but they did not attempt to model the VHE LC data. Given that TGBs should form an extended IBS, as demonstrated by hydrodynamic simulations \citep[e.g.,][]{Bogovalov2008,Dubus2015,Bosch-Ramon2017}, it is necessary to consider a multi-zone shock case for modeling both the multi-wavelength SED and LC data in detail. Hence, we restrict our study to an IBS scenario and the orbit of TAH21; the distinct features in the X-ray and VHE LCs allow us to determine the IBS properties more accurately. 
\section{Structure of IBS}\label{sec:sec2} In our IBS model, a pulsar injects cold pulsar-wind (preshock; purple arrows in Fig.~\ref{fig:fig2}) electrons and magnetic field ($B$) into the IBS (blue line in Fig.~\ref{fig:fig2}), and the electrons are accelerated to very high energies in the shock \citep[e.g.,][]{Tavani1997,Dubus2015}. These particle injection and acceleration schemes have been widely used in previous models of TGBs \citep[e.g.,][]{Sierpowska-Bartosik2008,Dubus2015,An2017,Chernyakova2020,Xingxing2020}. In this work, we adopt an analytical approach for modeling the global features of the flows, although hydrodynamics (HD) simulations have shown more complexity and substructure in the particle flows \citep[e.g.,][]{Bosch-Ramon2015}. Below we describe the prescriptions, assumptions, and formulas for the IBS flows in our model. \subsection{Pulsar wind and stellar outflow}\label{sec:sec2_1} The 2D shape of an IBS is determined by the pressure balance of the pulsar and the stellar winds. The pulsar wind is thought to be composed of cold and relativistic plasma. MHD simulations suggest that the pulsar wind can be anisotropic, with higher flow velocities and particle densities in the equatorial plane \citep[e.g.,][]{Tchekhovskoy2016}. Massive stars emit isotropic outflows, and Be-type stars such as the companion star of J0632 may have strong equatorial outflows (decretion disks; e.g., Fig.~\ref{fig:fig1}), which are evidenced by an infrared (IR) excess \citep[e.g.,][]{Waters1984}. Besides, stellar outflows can be clumpy and highly variable, inducing large variability in the IBS shape and thus in the observed emission of TGBs \citep[e.g.,][]{Bosch-Ramon2013}. The shape of an IBS formed by anisotropic wind interaction is difficult to determine observationally given that the pulsar's spin axis and the anisotropic geometry of the pulsar and/or stellar winds are not known. 
In this work, we assume an isotropic geometry for both the pulsar and the stellar winds since an IBS formed by slightly anisotropic winds does not differ much from the isotropic case \citep[e.g.,][]{Kandel2019}; the most important shock tangent angle near the tail changes only slightly, which can be accounted for by a small change in the wind momentum flux ratio and/or the observer's inclination angle in our phenomenological model. Note, however, that the strong equatorial outflow (i.e., disk) of a Be-type companion can significantly alter the IBS shape at the pulsar crossing phases (e.g., phases 0.12 and 0.35 in Fig.~\ref{fig:fig1}). Under the isotropic wind assumption, our model does not properly account for the IBS-disk interaction at the disk-crossing phases. Moreover, emission produced by the interaction depends strongly on other parameters, such as density and heating, that are poorly known and need to be determined by simulations and observations of the interaction sites. We do not consider such interactions in this work, and hence our model does not naturally reproduce the X-ray and TeV spikes at $\phi\approx0.3$. Note, however, that TAH21 arbitrarily increased $B$ at the interaction phases to reproduce the X-ray spike. We adopt this prescription for the X-ray modeling, and thus our model phenomenologically matches the X-ray LC, but not the VHE LC (see Section~\ref{sec:sec4_2_1} for more details). \subsection{Shape of the IBS}\label{sec:sec2_2} The orbital variability of the high-energy emission observed in TGBs is thought to be induced by a change in the emission and viewing geometry of the IBS \citep[e.g.,][]{Romani2016}. In general, the IBS is assumed to have a paraboloid shape near the apex, but it is distorted significantly at large distances from the system \citep[e.g.,][]{Bosch-Ramon2017}. 
Because the high-energy emission in the X-ray to VHE band is mostly produced in the inner regions of the IBS \citep[e.g.,][]{Dubus2015,Bosch-Ramon2017}, the analytic formulas (Appendix~\ref{sec:appendix1}) presented in \citet{Canto1996} for the CD produced by isotropic wind-wind interaction are adequate for our modeling effort. The IBS shape is basically determined by the winds' momentum flux ratio: \begin{equation} \label{eq:beta} \beta=\frac{\dot E_{\rm SD}}{\dot M_{\rm w}v_{\rm w}c}, \end{equation} where $\dot E_{\rm SD}$ is the pulsar spin-down power, $\dot M_{\rm w}$ is the mass loss rate of the companion, $v_{\rm w}$ is the velocity of the companion's wind, and $c$ is the speed of light. The massive companion's wind is likely stronger than the pulsar's ($\beta<1$); thus the IBS likely forms around the pulsar in TGBs. A schematic view of the vertical cross sections of a TGB system and its IBS is depicted in Figure~\ref{fig:fig2}. As denoted in the figure, an emission zone at a distance $s$ (red solid line) along the IBS (blue solid line) from its apex is $r_{\rm s}$ away from the star at an angle $\theta_{\rm s}$ and $r_{\rm p}$ away from the pulsar at an angle $\theta_{\rm p}$. The blue dashed line shows the direction of the particle flow in the emission zone (polar angle $\theta_{\rm t}$), and the green line is an observer's LoS with an inclination $\theta_i$. The asymptotic tangent angle (blue arrow; half opening angle $\theta_{\rm cone}$) of the shock is determined by $\beta$ (Eq.~\ref{eq:coneangle}). These geometrical parameters for an IBS are computed using the equations given in Appendix~\ref{sec:appendix1} \citep[see][for more detail]{Canto1996}. 
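For intuition about how $\beta$ sets the IBS scale, the standard thin-shock standoff relation of \citet{Canto1996}, $r_{\rm apex} = d\,\sqrt{\beta}/(1+\sqrt{\beta})$ for separation $d$ (we assume this is the content of the apex formula in Appendix~\ref{sec:appendix1}), can be sketched as:

```python
import math

def apex_fraction(beta):
    """CD apex distance from the pulsar in units of the separation d,
    for the winds' momentum-flux ratio beta = pulsar/star
    (standard thin-shock result of Canto et al. 1996)."""
    return math.sqrt(beta) / (1.0 + math.sqrt(beta))

# weaker pulsar wind (beta < 1) -> the shock wraps closer around the pulsar
for beta in (0.01, 0.1, 1.0):
    print(beta, apex_fraction(beta))
```

For equal momentum fluxes ($\beta=1$) the apex sits midway between the two stars, while for $\beta \ll 1$ it closes in on the pulsar, as stated in the text.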
Both $r_{\rm s}$ and $r_{\rm p}$ vary orbitally because they are proportional to the orbital separation ($d_{\rm orb}$) between the pulsar and the companion (e.g., Eq.~\ref{eq:shocknose}): \begin{equation} \label{eq:dorb} d_{\rm orb}(\phi_0)=\frac{a(1-e^2)}{(1+e\mathrm{cos}\phi_0)}, \end{equation} where $a$ is the semi-major axis, $e$ is the eccentricity, and $\phi_0$ is the true anomaly. We assume that the emission zone extends to $s_{\rm max}=3$--$5\,d_{\rm orb}$ \citep[e.g.,][]{Dubus2015,Bosch-Ramon2017}. Note that the emission-zone size ($s_{\rm max}$) in our model is assumed to be a constant multiple of $d_{\rm orb}$, which changes with orbital phase in eccentric orbits (Eq.~\ref{eq:dorb}). However, the solid angle subtended by the IBS as seen from the pulsar (i.e., the pulsar's energy injection into the IBS) remains unchanged. \subsection{Particles in the pulsar-wind zone}\label{sec:sec2_3} The preshock pulsar wind is composed of cold electrons accelerated in the pulsar wind zone (purple arrows in Fig.~\ref{fig:fig2}). The exact location and physical processes of the particle acceleration are not well known, and hence the energy distribution of the particles is unclear. Some physical processes may produce narrow Maxwellian-like distributions \citep[e.g.,][]{Hoshino1992,Sironi2011} while others may produce a broad power-law distribution \citep[e.g.,][]{Jaroschek2008}. As such, various distributions have been assumed in previous TGB emission models: a broadened delta function \citep[e.g.,][]{Khangulyan2011}, a power law \citep[][]{Sierpowska-Bartosik2008}, or a Maxwellian distribution \citep[][]{Takata2017}. These distributions would result in slightly different shapes for the ICS SED of the preshock particles. 
In this work, we assume that the preshock particles are accelerated near the light cylinder $R_{\rm LC}\ll r_{\rm p}$ of the pulsar \citep[e.g.,][]{Aharonian2012}, flow isotropically, and follow a relativistic Maxwellian energy distribution \begin{equation} \label{eq:maxwellian} \frac{d\dot N_e^{\rm pre}}{d\gamma_e}=N_0 \frac{\gamma_e^2\beta_e}{\Theta K_2(1/\Theta)}e^{-\gamma_e/\Theta}, \end{equation} where $\beta_e$ is $\sqrt{1 - 1/\gamma_e^2}$, $K_2$ is the modified Bessel function of the second kind, and $\Theta$ is a temperature parameter which we adjust to have a Lorentz factor $\gamma_{\rm e,peak}^{\rm pre}\approx 10^6$ at the peak of the distribution \citep[e.g.,][]{Amato2021}. The number and energy of the particles injected into the preshock by the pulsar are given by \begin{equation} \label{eq:ndot} \dot N=\int \frac{d\dot N^{pre}_e}{d\gamma_e} d\gamma_e, \end{equation} and \begin{equation} \label{eq:number} \int \gamma_e m_e c^2 \frac{d\dot N^{pre}_e}{d\gamma_e} d\gamma_e = \eta \dot E_{\rm SD}, \end{equation} where $\eta$ is the particle conversion efficiency of the pulsar's spin-down power \citep[e.g.,][]{Gelfand2009, Uchiyama2009}. Then, the number of particles within a radial length $dr$ over the $4\pi$ solid angle in the pulsar wind zone is given by $\frac{dN}{dr}=\frac{\dot N}{c}$. At certain orbital phases, the preshock flow along the LoS may be open if it does not cross the IBS. In this case, we stop the flow at a distance $\approx 5d_{\rm orb}$ where a back shock is expected to form \citep[][]{Dubus2015}. We verified that extending the preshock flow to larger distances at these phases had no significant impact on the output emission. \subsection{Particles and their flows in IBS}\label{sec:sec2_4} The preshock particles are injected into the entire surface of the IBS and further accelerated there as in termination shocks of pulsar wind nebulae \citep[PWNe;][]{Kennel1984}. 
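The temperature parameter $\Theta$ in Eq.~(\ref{eq:maxwellian}) can be tied to the desired peak Lorentz factor numerically. For $\gamma_e\gg1$ one has $\beta_e\to1$ and the shape $\gamma_e^2 e^{-\gamma_e/\Theta}$ peaks at $\gamma_e\approx2\Theta$, so $\Theta\approx\gamma_{\rm e,peak}^{\rm pre}/2$; the sketch below (a grid search, with the normalization constant dropped since it does not move the peak) verifies this.

```python
import math

def log_maxwellian_shape(gamma, theta):
    """Log of the unnormalized relativistic Maxwellian, gamma^2 * beta_e * exp(-gamma/theta).
    Working in log space avoids overflow at large gamma."""
    beta_e = math.sqrt(1.0 - 1.0 / gamma**2)
    return 2.0 * math.log(gamma) + math.log(beta_e) - gamma / theta

def peak_lorentz_factor(theta, lo=1e4, hi=1e8, n=4000):
    """Locate the peak of the distribution on a log-spaced grid."""
    best_g, best_f = lo, -float('inf')
    for i in range(n + 1):
        g = lo * (hi / lo) ** (i / n)
        f = log_maxwellian_shape(g, theta)
        if f > best_f:
            best_g, best_f = g, f
    return best_g

THETA = 5e5                       # gives a peak near gamma ~ 1e6
g_peak = peak_lorentz_factor(THETA)
```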
The particles flow along the conic surface (blue in Fig.~\ref{fig:fig2}) towards the tail of the IBS and exit the emission zone at $s=s_{\rm max}$. As IBS and PWN physics share some common ground, we assume that the IBS electron distribution in the flow rest frame is isotropic and follows a broken power law \citep[e.g., as in PWN cases;][]{Sironi2011,Cerutti2020} between a lower ($\gamma_{e,\rm min}$) and an upper ($\gamma_{e,\rm max}$) bound with a break at $\gamma_b$ which is caused by particle cooling: \begin{eqnarray} \label{eq:bpl} \frac{dN_e}{d\gamma_e} = \begin{cases} N_1 \gamma_e^{-p_1}, & \gamma_{e,\rm min}\le \gamma_e<\gamma_b \\ N_1 \gamma_b^{-p_1}(\gamma_e/\gamma_b)^{-p_2}, & \gamma_b\le\gamma_e \le \gamma_{e,\rm max}. \\ \end{cases} \end{eqnarray} $p_1$ may be determined by particle acceleration modeling or more directly by X-ray observations. It is almost certain that $p_1$ varies over the orbit since gamma-ray binaries have shown orbital variations of their X-ray spectral index \citep[][]{Bosch-Ramon2005,Chernyakova2009,Corbet2012,An2015}. Because the origin of this variability is still uncertain and it does not have a significant impact on the LCs, we assume a constant $p_1$ over the orbit. In our model, the particle cooling is manifested as a spectral break of the stationary electron distribution (Eq.~\ref{eq:bpl}). In the case that $B$ is uniform over the IBS, a $p_2 - p_1=1$ break is expected. However, the degree of the spectral break ($p_2 - p_1$) may differ in IBSs because $B$ varies with $s$ and fresh particles are injected at every point on the IBS. The cooling timescale of the highest-energy electrons is of order $\le$100\,s (e.g., for $\gamma_{e,\rm max}\approx 10^8$ and $B\approx1$\,G), and a spectral break is then expected at $\gamma_b \sim 5\times 10^{6}$.
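The quoted cooling timescale follows from the standard synchrotron cooling time $t_{\rm SY}=6\pi m_e c/(\sigma_{\rm T}B^2\gamma_e)$, which for $\gamma_e\approx10^8$ and $B\approx1$\,G gives a few seconds, consistent with the $\le$100\,s estimate in the text. A minimal sketch:

```python
import math

M_E = 9.109e-28       # electron mass [g]
C = 2.998e10          # speed of light [cm/s]
SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]

def t_sync(gamma, B):
    """Synchrotron cooling time [s] for Lorentz factor gamma in a field of B gauss."""
    return 6.0 * math.pi * M_E * C / (SIGMA_T * B**2 * gamma)

t_max = t_sync(1e8, 1.0)   # highest-energy electrons, B ~ 1 G
```

Lower-energy electrons cool correspondingly more slowly ($t_{\rm SY}\propto1/\gamma_e$), which is why the break forms below $\gamma_{e,\rm max}$.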
For typical $B\approx 1$\,G in IBSs of TGBs, particles at the break would emit $\sim$MeV photons; $p_2$ and $\gamma_b$ cannot be inferred directly from data due to the lack of sensitive observations in the MeV band. For $\gamma_{e,\rm max}$, we assume that particles are accelerated to the radiation reaction limit so that the SY photon energy emitted by the highest-energy particles is $\sim 100$\,MeV (but see Section~\ref{sec:sec4_2_1}). The flow bulk Lorentz factor $\Gamma$ can vary along the IBS in a complex way as was seen in relativistic HD simulations \citep[e.g.,][]{Bogovalov2008,Dubus2015}, where a small fraction $\xi$ ($\ll 1$) of the particles in the flow is seen to be bulk-accelerated to high $\Gamma$. We follow \citet{An2017} and TAH21, and for simplicity assume two distinct populations of particles in the IBS, one with a small constant bulk Lorentz factor ($\Gamma\approx 1$; slow flow) and another with a linearly growing bulk Lorentz factor \begin{equation} \label{eq:bulkG} \Gamma(s) = 1 + \frac{s}{s_{\rm max}}(\Gamma_{\rm max} - 1) \end{equation} (fast flow). The flow speed ($v_{\rm flow}=c\sqrt{1-1/\Gamma^2}$) of the `fast flow' is determined by this equation, but that of the slow flow needs to be prescribed. In an ideal model for PWNe \citep[][]{Kennel1984}, the flow speed in the post-shock region is predicted to be $\approx c/3$, and we use a similar value for the slow flow (but see below). Adjacent flows with different velocities are subject to shear instabilities and thus may not remain well separated. Our phenomenological description of the two flows does not account for these instabilities, and we assume that the two flows are physically separated. A more accurate description of the flow requires relativistic HD simulations as noted above; the speed change from the `slow' to `fast' flow may be more continuous \citep[e.g.,][]{Bogovalov2008,Dubus2015}.
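The fast-flow prescription of Eq.~(\ref{eq:bulkG}) and the corresponding flow speed can be sketched as follows, using the $\Gamma_{\rm max}=5$ example value adopted later:

```python
import math

C = 2.998e10  # speed of light [cm/s]

def bulk_gamma(s, s_max, gamma_max):
    """Linearly growing bulk Lorentz factor of the fast flow (Eq. bulkG)."""
    return 1.0 + (s / s_max) * (gamma_max - 1.0)

def flow_speed(gamma):
    """v_flow = c * sqrt(1 - 1/Gamma^2)."""
    return C * math.sqrt(1.0 - 1.0 / gamma**2)

s_max, G_max = 4.0, 5.0   # s in units of d_orb; Gamma_max = 5 as an example
g_nose = bulk_gamma(0.0, s_max, G_max)    # Gamma = 1 at the apex
g_tail = bulk_gamma(s_max, s_max, G_max)  # Gamma = Gamma_max at the tail
```

The fast flow therefore starts from rest at the apex and reaches $v_{\rm flow}\approx0.98c$ at the tail for $\Gamma_{\rm max}=5$, which is where its beamed emission originates.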
The number of particles per unit length along the IBS (integrated over the azimuth angle $\phi_s$ of the shock cone) is given by the continuity condition \citep[e.g.,][]{Canto1996} as \begin{equation} \label{eq:IBSparticles} \frac{dN_e(s)}{ds} = \frac{\dot N [1 - \mathrm{cos}\theta_{\rm p}(s)]}{2v_{\rm flow}}, \end{equation} where $\dot N$ is the number of preshock particles injected by the pulsar given in Eq.~\ref{eq:ndot}. The number of particles in the IBS $N_e = \int \frac{dN_e(s)}{ds} ds$ is controlled by particle residence time $s_{\rm max}/v_{\rm flow}$. This determines the relative contribution of the preshock and the IBS particles to ICS (VHE) emission. As the pulsar's injection into the IBS is assumed to be the same at every orbital phase, we assume that $N_e$ in the IBS is constant over the orbit. This means that $v_{\rm flow}$ varies over the orbit; once this value is prescribed at a given phase, values at the other phases are determined to preserve $N_e$. Note that this assumption does not have a large impact on matching the observational data with the phenomenological model; using a different assumption (e.g., varying $N_e$ and constant $v_{\rm flow}$) can also reproduce the measurements with slightly different $\theta_i$ and $\Gamma_{\rm max}$. We split the IBS cone, seen in Figure~\ref{fig:fig2}, into $21\times361$ computational grids which are described in greater detail in Sections~\ref{sec:sec3_2}, \ref{sec:sec3_3}, and \ref{sec:sec4_2}. We apply the same particle distribution over the IBS zones (Eq.~\ref{eq:bpl}), but the number of particles (Eq.~\ref{eq:IBSparticles}), the fast flow $\Gamma$ (Eq.~\ref{eq:bulkG}), and the flow direction (Fig.~\ref{fig:fig2}) differ in each zone as they depend on $s$. \subsection{Magnetic field in IBS}\label{sec:sec_2_5} The magnetic field in an IBS is assumed to be supplied by the pulsar and randomly oriented. 
For a pulsar with a surface magnetic-field strength $B_{\rm s}$ and a spin period $P_{\rm s}$, $B(s)$ in an emission zone in the IBS is estimated to be \begin{equation} \label{eq:Bfield} B(s)=B_{\rm s}\left (\frac{2\pi R_{\rm NS}}{cP_s}\right )^3 \left( \frac{cP_{\rm s}}{2\pi r_{\rm p}(s)} \right )\equiv B_0\left ( \frac{r_0}{r_{\rm p}(s)} \right ), \end{equation} where $R_{\rm NS}$ is the radius of the neutron star and $r_0$ is the distance to a reference point. The pulsar continuously injects particles and magnetic flux, which are frozen together as they stream to the IBS, potentially creating complex $B$ structures within the IBS flow. Relativistic HD computations \citep[e.g.,][]{Bogovalov2012,Dubus2015} indeed show very complicated structures of $B$: nonlinear changes with $s$, variations over the thickness of the IBS, and most importantly a $B$ that decreases with increasing $s$. However, the simplified relation in Eq.~\ref{eq:Bfield} is an appropriate substitute for the complex MHD results as it captures the important inverse $B$--$s$ relationship. Prior analyses of IBSs of pulsar binaries and TGBs have used this approach \citep[e.g.,][]{Romani2016,Dubus2013}, and we adopt Eq.~\ref{eq:Bfield} in this work as well. In reality, deviations from the $B\propto 1/r_{\rm p}$ dependence could be present, but an investigation of such effects is beyond the scope of this paper, which attempts to account for the global IBS features and SED/LC data. The value of $B_0$ can vary substantially depending on the pulsar parameters and is often assumed to be $\approx 1$\,G in TGBs \citep[e.g.,][]{Dubus2013}. In our model, $B_0$ at the reference point (i.e., the shock nose at the inferior conjunction of the pulsar) is prescribed, and $B$ at the other positions in the IBS and at different orbital phases is computed using Eq.~\ref{eq:Bfield}.
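Eq.~(\ref{eq:Bfield}) can be evaluated for representative pulsar parameters; the surface field, spin period, and neutron-star radius below are illustrative values (the putative pulsar in J0632 is undetected), chosen only to show that fields of order 1\,G arise naturally at IBS distances.

```python
import math

C = 2.998e10  # speed of light [cm/s]

def b_ibs(r_p, b_surf=1e12, p_spin=0.1, r_ns=1e6):
    """B [G] at distance r_p [cm] from the pulsar (Eq. Bfield):
    dipole fall-off out to the light cylinder, then toroidal 1/r in the wind zone."""
    r_lc = C * p_spin / (2.0 * math.pi)    # light-cylinder radius [cm]
    b_lc = b_surf * (r_ns / r_lc) ** 3     # field at the light cylinder [G]
    return b_lc * (r_lc / r_p)

B_nose = b_ibs(6.5e12)   # near the shock nose for d_orb ~ 3.5e13 cm
```

For these parameters $B$ at the nose is of order 1\,G, and the $1/r_{\rm p}$ scaling means zones far down the tail see a correspondingly weaker field.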
\section{Emission in TGB systems}\label{sec:sec3} The observed emission of TGB systems spans the entire electromagnetic spectrum, from the radio to the TeV gamma-ray band. The radio-emission region is often observed to be elongated \citep[e.g., $>d_{\rm orb}$;][]{Ribo2008,Moldon2011} and so the radio emission is thought to be produced by particles that escaped from the IBS, which we do not model. The equatorial disk of a Be companion emits in the IR to optical band, and the massive companion (blackbody) radiates primarily in the optical band. These IR and optical photons provide the preshock and IBS particles with seeds for ICS. Below, we describe in detail the SY and ICS emission of the preshock and IBS particles, which is relevant to the X-ray and VHE bands. IR and optical photons from the disk and the companion play an important role for the VHE emission via ICS. \subsection{Stellar emissions}\label{sec:sec3_1} Seed photons for ICS are primarily produced in the atmosphere and the disk of the companion star. The atmospheric emission is presumably isotropic blackbody radiation with absorption/emission lines and is observed as a prominent optical bump in SEDs of TGBs. This emission can be well characterized by the temperature $T_{*}$ and radius $R_{*}$ of the star that can be measured with spectroscopic observations. Although there are a number of absorption/emission lines in the spectrum of a star, these narrow features will be washed out in the ICS processes by electrons with featureless, non-thermal energy distributions. Hence, a blackbody spectrum, scaled to the distance to an emission zone ($r_{\rm s}$; Fig.~\ref{fig:fig2}), is a good approximation for ICS seed photons.
A blackbody spectrum of J0632 computed with $T_*=30000$\,K and $R_*=6.6R_{\odot}$ \citep[][]{Aragona2010} is presented in red in Figure~\ref{fig:fig3} along with observed J0632 IR-to-optical flux densities.\footnote{http://vizier.unistra.fr/vizier/sed/} Disk emission of a Be star is produced by free-free and free-bound processes in the disk \citep[e.g.,][]{Waters1984,Carciofi2006} and is observed as an IR excess in the spectrum \citep[e.g.,][]{Klement2017}. While the observed disk spectra appear simple (e.g., broad IR bump; Fig.~\ref{fig:fig3}), computing the emission spectrum is highly complicated because it is not uniform over the surface of the disk due to changes of the disk density and thickness with distance from the star. Moreover, the disk emission is anisotropic \citep[e.g.,][]{Carciofi2006} because of varying optical depth depending on the disk viewing angle ($\theta_{\rm d}$). The disk emission itself is usually much weaker than the stellar atmospheric one \citep[e.g., Fig.~\ref{fig:fig3}; see also][]{Klement2017} and therefore EC emission off of disk seed photons does not contribute much to VHE emission in TGBs \citep[e.g.,][see also Section~\ref{sec:sec3_3}]{vanSoelen2012}. Furthermore, a detailed shape of the disk spectrum is blurred in the ICS process by the broad electron distributions. Hence, it is well justified to use an approximate continuum model for the disk emission (e.g., multi-temperature blackbody), but we investigate a more complex disk-emission model below. Although a complete disk model (considering radiation transfer and radiative equilibrium) requires computations of the disk structure and its emission for arbitrary density and velocity distributions \citep[e.g.,][]{Carciofi2006}, the disk emission can be simplified for our purpose with an assumption of power laws for radial distributions of density, scale height, and temperature. 
In a cylindrical coordinate system for the disk, \citet{Carciofi2006} assumed $\rho(r)=\rho_0( R_*/r )^{n_d}\mathrm{exp}\left (-z^2/2H^2 \right )$, $H(r)=H_0 (r/R_*)^{\beta_d}$, and $T_{\rm disk}(r)=T_d(r/R_*)^{-s_d}$, for the density, scale height, and temperature, respectively, and presented analytic formulas for computation of optical depths ($\tau(r)$) and emission spectra of pole-on disks. $n_d$ and $\beta_{\rm d}$ are typically $\sim$3 and $\sim$1.5, but they vary from source to source \citep[][]{Klement2017}. We followed the procedure described in the Appendix of \citet{Carciofi2006} for computations of optical depths and the disk spectra. Note that these computations require Gaunt factors for which \citet{Carciofi2006} used a long-wavelength approximation. We instead used approximations from \citet{Waters1984}, which are more accurate within our frequency range of $10^{12}$--$10^{16}$\,Hz. For an inclined disk as compared to a pole-on disk, the depth of the emitting medium (disk) increases by a factor of $1/\mathrm{cos}\theta_{\rm d}$, and so we assume that the optical depth of the disk varies as $1/\mathrm{cos}\theta_{\rm d}$ (see \citet{vanSoelen2012} for a more accurate treatment of the inclined disk case). This simplified emission model with the power-law prescriptions suffices to compute the disk spectrum in our model. The disk parameters for J0632 are not known, and thus we adopt their typical ranges \citep[][]{Carciofi2006, Klement2017} to match the observed data of J0632 (Fig.~\ref{fig:fig3}). The observed spectrum of an isothermal ($s_d=0$) hydrogen disk extending to 60$R_*$ with a central density $\rho_0=10^{-11}\rm \ g\ cm^{-3}$, disk temperature $T_d=0.7T_*$, $n_d=2.45$, and $\beta_{\rm d}=1.5$ is presented in Figure~\ref{fig:fig3}. 
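The power-law disk prescriptions above can be collected into a small helper; the parameters below are those quoted in the text for J0632 ($\rho_0=10^{-11}\rm\,g\,cm^{-3}$, $n_d=2.45$, $\beta_d=1.5$, $s_d=0$, $T_d=0.7T_*$), while the scale height at the stellar surface, $H_0$, is an illustrative assumption since it is not specified here.

```python
import math

R_STAR = 6.6 * 6.957e10   # stellar radius of J0632 [cm]

def disk_profiles(r, z, rho0=1e-11, n_d=2.45, beta_d=1.5, s_d=0.0,
                  T_d=0.7 * 30000.0, H0=0.03 * R_STAR):
    """Density, scale height, and temperature of the Be disk at (r, z),
    following the Carciofi & Bjorkman power-law prescriptions.
    H0 (the scale height at r = R*) is an illustrative value."""
    H = H0 * (r / R_STAR) ** beta_d
    rho = rho0 * (R_STAR / r) ** n_d * math.exp(-z**2 / (2.0 * H**2))
    T = T_d * (r / R_STAR) ** (-s_d)
    return rho, H, T

rho_in, H_in, T_in = disk_profiles(R_STAR, 0.0)
rho_out, H_out, T_out = disk_profiles(4.0 * R_STAR, 0.0)
```

For $s_d=0$ the disk is isothermal, the mid-plane density falls as $r^{-2.45}$, and the disk flares as $r^{1.5}$, consistent with the profiles assumed above.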
The disk emission amplitude parameters (e.g., $\rho_0$) used to match the observed IR-to-optical measurements depend on the assumed disk-viewing angle ($i_{\rm d}$) between the LoS and the surface normal vector of the disk. For the measured IR-to-optical flux densities (Fig.~\ref{fig:fig3}), the `intrinsic' disk emission would be inferred to be stronger for a larger $i_{\rm d}$; then the disk will provide the preshock and IBS with a larger amount of ICS seeds. Because it was suggested that the disk of J0632 is seen nearly edge-on \citep[e.g.,][TAH21]{Aragona2010}, we assume a large $i_{\rm d}$ of 85$^\circ$ to test a limiting case. We verified our simple disk emission model by comparing the modeling results with those of \citet{Carciofi2006} and \citet{vanSoelen2012}. Firstly, our model for the spectrum of the Be disk in J0632 (Fig.~\ref{fig:fig3}) is similar to that in another TGB PSR~B1259$-$63 \citep[][]{vanSoelen2012}. Secondly, we found that the surface brightness of the disk at high frequencies ($10^{14}$\,Hz) is significantly higher in the inner region ($r\lesssim 2R_*$) than the outer regions, which is consistent with a previous result \citep[e.g., Fig.~13 of][]{vanSoelen2012}. Thirdly, emission computed by our model for a moderately inclined disk does not deviate much from that for a pole-on disk, similar to previous results \citep[][]{Carciofi2006,vanSoelen2012}. These checks verify that our simplified disk-emission model captures the main emission features of Be disks. \subsection{Synchrotron radiation}\label{sec:sec3_2} With particle distributions and $B$ in IBS being prescribed (Sections~\ref{sec:sec2_3}--\ref{sec:sec_2_5}), a SY SED is computed using the formulas given in Appendix~\ref{sec:appendix3} \citep[see][for more detail]{Finke2008}. We construct an IBS surface at each of 100 orbital phases and divide it into $21\times 361$ (axial and azimuthal) emission zones.
The particle distribution is computed in 300 energy ($\gamma_e$) bins and the resulting SED is computed in 200 frequency bins. These binnings were chosen to achieve sufficient accuracy to fit the observation data while maintaining reasonable computation time. As an example, we consider a circular orbit with orbital radius $a=3.5\times 10^{13}$\,cm, periastron at $\phi_{\rm perias}=0.3$ and the inferior conjunction (IFC) of the pulsar at $\phi_{\rm IFC}=0.75$. For the IBS, we used $\beta=0.07$, $p_1=2.3$, $p_2=2.6$, $\gamma_{e,\rm min}=1.5\times 10^5$, $\gamma_{\rm b}=5\times 10^6$, $s_{\rm max}=4d_{\rm orb}$, $\xi=0.1$, $B_0$=0.28\,G, $\Gamma_{\rm max}=5$, and $\theta_i=50^\circ$. Using these baseline values, we compute parameters in the IBS flow (see Fig.~\ref{fig:fig2}), e.g., $N_e$ (Eq.~\ref{eq:IBSparticles}), $\Gamma$ and $\delta_{\rm D}$ (Eqs.~\ref{eq:bulkG} and \ref{eq:Doppler}), $B$ (Eq.~\ref{eq:Bfield}). Then the SY emission SED (Eq.~\ref{eq:sysed}) in each zone is computed neglecting the SY emission of the `cold' preshock electrons. Figure~\ref{fig:fig4} shows an orbitally-averaged SED. The computed SED shows the expected spectral features at energies of $10^2$\,eV, $10^4$\,eV, and $10^8$\,eV, which correspond to $\gamma_{e,\rm min}$, $\gamma_{\rm b}$, and $\gamma_{e,\rm max}$, respectively. Even though the fast flow is assumed to be only a small fraction ($\xi=0.1$) of the slow flow, the fast-flow emission is highly boosted near the IFC and is noticeably strong in the phase-averaged SED. The shapes of the SY SEDs of the two flows are similar, but that of the fast flow is shifted to higher energies (blue in Fig.~\ref{fig:fig4}) because of Doppler beaming by the bulk motion (Eq.~\ref{eq:Doppler}). The SY LC of IBS flows is influenced by the orbital and the IBS parameters. Orbital variation of the `slow' flow is induced by varying $r_{\rm p}$ in an eccentric orbit as $B\propto 1/r_{\rm p}\propto 1/d_{\rm orb}$ (Eqs.~\ref{eq:Bfield} and \ref{eq:shocknose}).
Hence, LCs of the slow flow (dashed lines in Fig.~\ref{fig:fig5}a) have a peak at periastron $\phi_{\rm perias}=0.3$. Because SY flux ($F_{\rm SY}$) is proportional to $B^{(p_1 + 1)/2}$ \citep[][]{Dermer1995,An2017}, we anticipate the orbital variation to be $\propto 1/d_{\rm orb}^{(p_1 + 1)/2}$. For eccentricity $e$, the min-max ratio in the LC of the slow flow (e.g., dashed lines in Fig.~\ref{fig:fig5} a) would be $\approx \left ( \frac{1 + e}{1 - e} \right )^{(p_1 + 1)/2}$ (e.g., Eq.~\ref{eq:dorb}). Emission of the `fast' flow arises mostly from the tail ($s\approx s_{\rm max}$) of the shock where $\Gamma$ and the particle density are largest (Eqs.~\ref{eq:bulkG} and \ref{eq:IBSparticles}). Because IBS particles flow along a cone-shape surface (blue in Fig.~\ref{fig:fig2}) and the emission of the fast flow is highly beamed in the flow direction, the sky emission pattern of the fast flow will be ring-like in the shock tail direction \citep[e.g.,][]{Romani2016}. Due to different viewing angles of the shock tail, the Doppler factor will vary with the orbital phase. Since $F_{\rm SY}\propto \delta_{\rm D}^{(5 + p_1)/2}$ \citep[e.g.,][]{Dermer1995,An2017}, the observed flux of the fast flow will be largest near the IFC where the flow direction aligns well with the LoS (e.g., Fig.~\ref{fig:fig2} left). Note that $B$ ($\propto 1/d_{\rm orb}$) also varies with $\phi$ and affects the amplitude of the variation, but this effect is small compared to the $\delta_{\rm D}$ effect for the fast-flow emission. Hence, the fast-flow peak occurs at the IFC. Figure~\ref{fig:fig5} shows the SY LC's dependencies on the orbital and IBS parameters. Panel (a) shows an effect of the eccentricity ($e$). For a circular orbit ($e=0$), a modulation of the slow-flow emission is very weak (i.e., weak beaming due to low $v_{\rm flow}$); it was ignored in this example for clarity. The fast-flow emission modulation is large even in the $e=0$ case.
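The expected min-max ratio of the slow-flow SY LC follows directly from the scalings above and is easy to evaluate; the $e$ and $p_1$ values below are near the values adopted for J0632.

```python
def slow_flow_minmax_ratio(e, p1):
    """Approximate peak-to-trough ratio of the slow-flow SY light curve,
    ((1+e)/(1-e))**((p1+1)/2), from F_SY ~ B^((p1+1)/2) and B ~ 1/d_orb."""
    return ((1.0 + e) / (1.0 - e)) ** ((p1 + 1.0) / 2.0)

ratio = slow_flow_minmax_ratio(0.45, 2.3)   # ~5 for these values
```

For $e=0.45$ and $p_1=2.3$ the slow-flow flux should vary by a factor of about 5 over the orbit, while a circular orbit ($e=0$) gives no modulation from this effect.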
In an eccentric orbit, the phase separation between periastron and the IFC also affects the shape of the SY LC (e.g., Fig.~\ref{fig:fig5} b with an assumed $e$=0.5). The double peaks of the fast-flow emission (solid line) may have substantially different amplitudes and widths; one closer to the periastron is higher and sharper because of larger $B$ and rapid orbital motion of the pulsar. The effects of $\theta_i$, $\beta$, $\Gamma_{\rm max}$, and $s_{\rm max}$ are presented in Figure~\ref{fig:fig5} (c)--(f), respectively. For a given IBS opening angle ($\theta_{\rm cone}$), $\theta_i$ determines the viewing angle ($\theta_{\rm V}$) and thereby $\delta_{\rm D}$ (Eq.~\ref{eq:Doppler}) for the emission. For $\theta_i=0$, circular orbits do not cause a modulation of the slow or the fast flow. As $\theta_i$ grows, the LoS becomes closer to the shock tangent (i.e., emission ring) near the IFC, and the modulation induced by orbitally varying $\theta_{\rm V}$ should be observed. For a sufficiently large $\theta_i$ (i.e., $\theta_i>\pi/2-\theta_{\rm cone}$; Fig.~\ref{fig:fig2}), the LoS crosses the emission ring twice near IFC, and the fast-flow emission bump in the LC splits into two peaks (red vs. purple in Fig.~\ref{fig:fig5} c); the separation between them (LoS crossings of the emission ring) increases with $\theta_i$ and is $2\theta_{\rm cone}$ for $\theta_i=90^\circ$. Effects of $\beta$ (Fig.~\ref{fig:fig5} d) are similar to those of $\theta_i$ since $\beta$ determines $\theta_{\rm cone}$ (Eq.~\ref{eq:coneangle}) and thus $\theta_{\rm V}$. However, separation of the two peaks in the fast-flow LC depends more sensitively on $\beta$ than $\theta_i$. This is an obvious geometrical effect of the former ($\beta$) directly changing the emission-ring size $\theta_{\rm cone}$. 
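The viewing-angle effects above are controlled by the Doppler factor; a minimal sketch of $\delta_{\rm D}=1/[\Gamma(1-\beta\cos\theta_{\rm V})]$ and the resulting SY boost $\propto\delta_{\rm D}^{(5+p_1)/2}$ illustrates why the fast-flow emission peaks when the LoS aligns with the flow near the IFC. $\Gamma_{\rm max}=5$ and $p_1=2.3$ are the example values used in Section~\ref{sec:sec3_2}.

```python
import math

def doppler_factor(gamma, theta_v):
    """delta_D = 1 / (Gamma * (1 - beta * cos(theta_v))), theta_v in radians."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta_v)))

def sy_boost(gamma, theta_v, p1=2.3):
    """Relative SY flux boost, F ~ delta_D**((5 + p1)/2)."""
    return doppler_factor(gamma, theta_v) ** ((5.0 + p1) / 2.0)

G_MAX = 5.0
d_aligned = doppler_factor(G_MAX, 0.0)            # LoS along the flow
d_side    = doppler_factor(G_MAX, math.pi / 2.0)  # LoS perpendicular to it
```

For $\Gamma=5$ the aligned Doppler factor is $\approx2\Gamma$ while the perpendicular one is $1/\Gamma$, so the beamed flux changes by several orders of magnitude as $\theta_{\rm V}$ sweeps through the orbit.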
Note also that $r_{\rm p}$ (see Fig.~\ref{fig:fig2}) is determined by $\beta$ (e.g., Eq.~\ref{eq:shocknose}); for a smaller $\beta$, $r_{\rm p}$ is smaller and thus $B$ ($\propto 1/r_{\rm p}$) in the IBS is stronger, increasing the SY flux of the slow flow as well (dashed lines in Fig.~\ref{fig:fig5} d). The bulk Lorentz factor of the fast flow near the shock tail (i.e., $\Gamma_{\rm max}$) controls the width of each peak (i.e., $1/\Gamma$ beaming) as well as emission strength of the fast flow (Fig.~\ref{fig:fig5} e). The emission strengths of the slow- and the fast-flow particles change depending on the length of IBS ($s_{\rm max}$) even if the particle number does not change (Fig.~\ref{fig:fig5} f). This is because of the decrease in $B$ at large distances from the pulsar ($r_{\rm p}$); the total IBS emission is reduced with increasing $s_{\rm max}$. This effect is more pronounced for the fast-flow emission because it mostly arises from the shock tail. Thus the peak-amplitude ratio of the fast- to the slow-flow emissions moderately decreases with increasing $s_{\rm max}$ as is noticeable in Fig.~\ref{fig:fig5} (f). \subsection{ICS Emission}\label{sec:sec3_3} VHE emission can be produced by ICS processes: the synchrotron self-Compton (SSC) process in the IBS and the external Compton (EC) process of the preshock and IBS particles. However, the electron density in TGBs is too low to produce significant SSC emission flux in the IBS. Hence, the SSC process is often ignored for TGBs and we therefore only consider the EC process for VHE emission. We assume a head-on collision for the EC scattering which is appropriate for ultra-relativistic particles ($\gamma_e\gg1$) in the preshock and IBS, and calculate their emission SEDs using formulas given in Appendix~\ref{sec:appendix3} \citep[see][for more detail]{Dermer2009}. For the stellar blackbody seeds, we assume that the star is a point source. 
Therefore, both the incidence azimuth and polar angles $\phi_*$ and $\theta_*$ ($\mu_*\equiv \mathrm{cos}\theta_*$) of the seed photons (see Eq.~\ref{eq:ecsed}) into a scattering emission zone, as well as the distance ($r_{\rm s}$) from the stellar surface to the scattering zone in the IBS or preshock, are the same over the surface of the companion (see Fig.~\ref{fig:figdisk}). This is a reasonable approximation for the companion of J0632 since $R_* \ll d_{\rm orb}$. In contrast, the size of the disk can be comparable to the orbit, and thus the incident azimuth and polar angles $\phi_{\rm d}$ and $\theta_{\rm d}$ ($\mu_{\rm d}\equiv\mathrm{cos}\theta_{\rm d}$) of the disk seeds into the scattering zone vary over the disk surface (Fig.~\ref{fig:figdisk}), as does the distance from a surface element of the disk to the scattering zone ($r_{\rm d}$). Additionally, the disk emission is anisotropic and varies radially (Section~\ref{sec:sec3_1}). These variations require computationally demanding calculations, since the EC computation is carried out in each of $21\times 361$ IBS zones, 500 preshock zones, and 100 orbital phases, in addition to 300 energy ($\gamma_e$) bins. Hence, we simplified EC computations by assuming the disk is a point source with $\phi_{\rm d}=\phi_*$, $\mu_{\rm d}=\mu_*$, and $r_{\rm d}=r_{\rm s}$ over the disk surface. Note, however, that the EC scattering angles and the distance from the disk to a scattering zone still vary over the IBS surface because the location and the flow direction of each zone change (Figs.~\ref{fig:fig2} and \ref{fig:figdisk}). We verified that EC spectra computed with this assumption were not significantly different from those obtained with a full integration over the disk surface at a few phases. Further note that the disk emission is weaker than the companion's blackbody emission (see Fig.~\ref{fig:fig3}), and therefore contribution of the disk seeds to the EC SED is small (Fig.~\ref{fig:fig6}).
With the aforementioned assumptions, we compute the EC emission. In each emission zone, we compute the incidence angles and spectral energy densities ($u_*$) of the stellar and disk emissions, calculate the EC emission (Eq.~\ref{eq:ecsed}) in the zone, and integrate over the IBS surface to generate the EC SED at a phase. Figure~\ref{fig:fig6} shows orbitally-averaged EC SEDs constructed using the same parameters as those used for Figure~\ref{fig:fig4}. Note that the EC SEDs also reflect the distributions of emitting particles and seed photons; broader distributions result in a more extended EC SED. Hence, the EC SED for the disk photons (Fig.~\ref{fig:fig6} middle) is slightly broader than that for the stellar photons (Fig.~\ref{fig:fig6} left). Notice that any sharp features in the disk-seed spectrum (Fig.~\ref{fig:fig3}) are not apparent in the EC SED as they are blurred by the broad electron distributions. The EC emission of IBS particles varies orbitally due to changes in the seed photon density, the scattering angle $\psi$ (the angle between incoming and scattered photons), and Doppler beaming of the emission. The seed photon density varies as $\propto 1/r_{\rm s}^2$ over the orbit, and so the orbital modulation of the EC emission from the slow flow is similar to that of the SY emission (dashed curves in Fig.~\ref{fig:fig5}). As in the SY case, the EC emission of the fast flow is strongest at IFC where the flow direction is well aligned with LoS thereby leading to the strongest Doppler beaming. However, the EC LC varies in a more complex way because of changes in the scattering geometry (e.g., scattering angle $\psi$) over the orbit which is most favorable near superior conjunction of the pulsar (SUPC; $\psi\approx \pi$). Dependencies of the EC LCs on the orbital and IBS parameters are similar to those of the SY LC. We use the same parameters as were used for Figure~\ref{fig:fig5} and compute EC LCs. The results are displayed in Figure~\ref{fig:fig7}. 
In the EC case, the slow flow emission exhibits strong modulation even in circular orbits due to variation of the scattering geometry (red dashed line in Fig.~\ref{fig:fig7} a). In addition, the seed photon density for ICS in eccentric orbits is highest at periastron due to small $r_{\rm s}$, and thus emission of the slow flow is further enhanced at that phase (dashed lines in Fig.~\ref{fig:fig7}a). The peaks in the fast-flow EC LCs appear sharper than the corresponding peaks in the SY LCs (Fig.~\ref{fig:fig5}). This is due to the strong dependence of EC emission on $\delta_D$ \citep[$\delta_D^{3+p_1}$;][]{Dermer1995} with a small contribution from the $\psi$ effect. The dependences of the SY and EC emission strengths on $\beta$ are opposite to each other (Fig.~\ref{fig:fig5}d vs. Fig.~\ref{fig:fig7}d), as a smaller $\beta$ pushes the IBS closer to the pulsar (higher $B$) but farther from the companion (lower $u_*$). The $s_{\rm max}$ dependence of the EC emission (Fig.~\ref{fig:fig7} f) is similar to that of the SY emission, but the ratio of the fast flow to slow flow fluxes at the IFC drops with $s_{\rm max}$ faster in the EC regime than in the SY regime. This is produced by a combination of $r_{\rm s}$ and $\psi$ changes. Consequently, the IBS size ($s_{\rm max}$) can be estimated by comparing slow-to-fast-emission ratios of the SY and the EC emissions. \subsection{$\gamma$-$\gamma$ absorption}\label{sec:sec3_4} VHE emission is absorbed by soft photons (stellar blackbody and disk emission) through the $\gamma$-$\gamma$ pair production process. In each of the IBS and preshock emission zones, we compute the $\gamma$-$\gamma$ optical depth $\tau_{\gamma\gamma}$ along the LoS using the scattering cross section given in \citet{Gould1967}. The VHE emission in each zone (Section~\ref{sec:sec3_3}) is then reduced by a factor of $e^{-\tau_{\gamma\gamma}}$ appropriate for the zone.
If the orbit is tight, a large companion may block part of the IBS or preshock as was seen in pulsar binaries \citep[e.g.,][]{Corbet2022}; this is not a concern for J0632. Examples of $\gamma$-$\gamma$ absorption by the blackbody and disk emission (in Figure~\ref{fig:fig3}) for parameters appropriate for J0632 (e.g., Table~\ref{ta:ta1}) are presented in Figure~\ref{fig:fig8}. As expected, the maximum absorption occurs at SUPC for gamma-ray photons with $E\approx \frac{(m_e c^2)^2}{h\nu_{\rm seed}} \approx 10^{11}$\,eV \citep[][]{Gould1967}. Note that the emission zones are spread over the extended IBS and thus the effect of the absorption is not very large in the spectrum integrated over the IBS even though the absorption of the emission at the shock nose (nearest to the companion) would be somewhat stronger. Secondary electrons produced by the $\gamma$-$\gamma$ interaction may be significant if $\tau_{\gamma\gamma}$ is large \citep[e.g., $\gg1$;][]{Bednarek2013,Dubus2013} but we do not consider them in this work since $\tau_{\gamma\gamma}$ in J0632 does not appear to be very large. \section{Modeling the LCs and SEDs of J0632}\label{sec:sec4} \subsection{Broadband SED and multi-band LCs of J0632}\label{sec:sec4_1} We compiled broadband data of J0632 from published papers. Its X-ray LCs and spectra have been measured by Swift-XRT and NuSTAR (TAH21), and a $\sim$GeV SED \citep[][]{Li2017} was measured by the Fermi large area telescope \citep[LAT;][]{Atwood2009}. Note that the Fermi-LAT flux may include emission of a putative pulsar which we do not model, and thus we regard the $\sim$GeV flux measurements as upper limits. In the VHE band, VERITAS and H.E.S.S. data presented in \citet{Adams2021} are used to construct the LCs and SEDs. For the X-ray SEDs, we plot those measured with Swift in the 0.3--10\,keV band and with NuSTAR in the 3--20\,keV band.
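The characteristic energy of maximum $\gamma$-$\gamma$ absorption, $E\approx(m_ec^2)^2/(h\nu_{\rm seed})$, can be checked with a one-line estimate. Taking seed photons near the Wien peak of the 30000\,K stellar blackbody ($h\nu\approx2.82\,kT\approx7$\,eV, an illustrative choice) gives a few $\times10^{10}$\,eV, of the order of the $10^{11}$\,eV quoted above; softer optical-IR seeds shift the maximum to correspondingly higher energies.

```python
M_E_C2 = 5.11e5     # electron rest energy [eV]
K_B = 8.617e-5      # Boltzmann constant [eV/K]

def gg_peak_energy(e_seed_ev):
    """Gamma-ray energy [eV] most affected by gamma-gamma absorption
    on seed photons of energy e_seed_ev, E ~ (m_e c^2)^2 / E_seed."""
    return M_E_C2**2 / e_seed_ev

# Seed photons near the Wien peak of the 30000 K stellar blackbody
e_seed = 2.82 * K_B * 30000.0    # ~7 eV
e_gg = gg_peak_energy(e_seed)
```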
Since the X-ray spectrum of J0632 is variable, we take three representative power-law models (i.e., three orbital phases) in each of the Swift and the NuSTAR bands \citep[Fig.~\ref{fig:fig9};][TAH21]{Aliu2014}. For the VHE SEDs, we display six representative power-law fits reported by \citet{Adams2021}. The shapes of the multi-band LCs differ slightly depending on the $P_{\rm orb}$ used to fold the data \citep[310--320\,days;][]{Casares2012,Moritani2015,Maier2019,Adams2021}. We use folded LCs (Fig.~\ref{fig:fig10}) produced with the most recent orbital period \citep[$P_{\rm orb}=317.3$\,days;][]{Adams2021}. Note that the X-ray LC (Fig.~\ref{fig:fig10}, top) was constructed with average 0.3--10\,keV Swift-measured fluxes \citep[first cycle and left ordinate;][]{Adams2021} and average count rates (second cycle and right ordinate; TAH21); the latter were normalized to have a maximum of 1. The fluxes in the VHE LC \citep[Fig.~\ref{fig:fig10} bottom; taken from][]{Adams2021} were obtained by assuming a photon index of $\Gamma_{\rm VHE}=2.6$. We base our broadband modeling of J0632 on the results of the previous X-ray study (TAH21), and use our model to explain the multi-band LCs and broadband SED of J0632. \subsection{Application of the IBS model to J0632}\label{sec:sec4_2} We apply our IBS emission model (Sections~\ref{sec:sec2} and \ref{sec:sec3}) to the observed SEDs and LCs of J0632. We compute the IBS emission in 21 and 361 segments along $s$ and $\phi_s$, respectively, and the preshock EC emission in 500 radial segments. Particle spectra are constructed using 800 energy bins, and each of the resulting SY and EC emissions is computed in 300 frequency bins. $\gamma$-$\gamma$ absorption is applied to emission in each segment separately out to $\sim10^4d_{\rm orb}$ along the LoS, and LCs are generated by integrating the computed SEDs in the energy ranges relevant to the X-ray (0.3--10\,keV) and VHE data ($>$350\,GeV).
The results are displayed in Figures~\ref{fig:fig9} and \ref{fig:fig10}, and the model parameters are presented in Table~\ref{ta:ta1}. \begin{table}[t] \vspace{-0.0in} \begin{center} \caption{Parameters for the IBS model in Figures~\ref{fig:fig9} and \ref{fig:fig10}} \label{ta:ta1} \vspace{-0.05in} \scriptsize{ \begin{tabular}{lcc} \hline\hline Parameter & Symbol & Value \\ \hline Semi-major axis & $a$ & $3.5\times10^{13}$\,cm \\ Eccentricity & $e$ & 0.45 \\ Inclination & $\theta_i$ & 48$^\circ$ \\ Periastron phase & $\phi_0$ & 0.26 \\ Pulsar inferior conjunction & $\phi_{\rm IFC}$ & 0.75 \\ \hline Winds' momentum flux ratio & $\beta$ & 0.045 \\ Speed of slow flow at IFC & $v_{\rm flow}$ & $0.3c$ \\ Length of the IBS & $s_{\rm max}$ & $4d_{\rm orb}$ \\ Max. bulk Lorentz factor & $\Gamma_{\rm max}$ & 7 \\ Fraction of fast flow & $\xi$ & 0.05 \\ Magnetic-field strength$^{\rm a}$ & $B_0$ & 0.25\,G \\ \hline Pulsar's power injection & $\eta \dot E_{\rm SD}$ & $2\times10^{34}\rm \ erg\ s^{-1}$ \\ Bulk Lorentz factor of preshock & $\gamma_{\rm e,peak}^{\rm pre}$ & $1.3\times 10^6$ \\ Min. electron Lorentz factor & $\gamma_{e,\rm min}$ & $1.5\times10^5$ \\ Max. 
electron Lorentz factor & $\gamma_{e,\rm max}$ & $\sim10^8$ \\ Lorentz factor at the break & $\gamma_b$ & $5\times10^6$ \\ Low-energy spectral index & $p_1$ & 2.3 \\ High-energy spectral index & $p_2$ & 2.6 \\ \hline Radius of the companion & $R_*$ & 6.6$R_\odot$ \\ Temperature of the companion & $T_*$ & 30000\,K \\ Disk size & $R_{\rm d}$ & 60$R_*$ \\ Disk temperature & $T_{\rm d}$ & 0.7$T_*$ \\ \hline \end{tabular}} \end{center} \vspace{-0.5 mm} \footnotesize{ $^{\rm a}$ Magnetic-field strength at the apex ($s=0$) of the IBS at IFC.} \end{table} \subsubsection{Phase-averaged SED and LCs}\label{sec:sec4_2_1} The model reasonably describes the phase-averaged SED and the X-ray/VHE LCs of J0632 (Figs.~\ref{fig:fig9} and \ref{fig:fig10}), except for the spike at $\phi\approx0.35$ in the VHE LC, because we do not model the VHE emission at the disk crossing, which would require a detailed understanding of how the pulsar and the disk interact. Notice that the model has a small bump at $\sim$TeV, which is produced by the preshock EC emission. This component has not been commonly included in previous basic IBS models \citep[e.g.,][]{Dubus2015,An2017}, but $\sim$TeV bumps seen in phase-resolved SEDs of J0632 (Fig.~\ref{fig:fig11}) hint at a possibility of such preshock emission. By jointly modeling the X-ray and VHE data, we are able to infer the magnetic-field strength $B_0$ (Eq.~\ref{eq:Bfield}) and parameters for the particle spectrum (e.g., $\gamma_{e,\rm min}$ and $\gamma_{e,\rm max}$; Eq.~\ref{eq:bpl}) in the IBS since the spectral shapes of the IBS emission depend strongly on these parameters. The magnetic-field strength $B_0$ at the shock nose at IFC is inferred to be $B_0\approx 0.25$\,G from the relative flux between the SY and the EC emissions, e.g., $\frac{F_{\rm SY}}{F_{\rm EC}} \approx \frac{u_{\rm B}}{u_*}$ in the Thomson regime for EC.
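The relevant energy densities and band edges can be checked numerically. A minimal Python sketch using the Table~\ref{ta:ta1} values; the shock-to-star distance is taken to be $\sim a$ (an assumption, since it varies along the eccentric orbit) and KN suppression is ignored, so this is only an order-of-magnitude guide:

```python
import math

# Order-of-magnitude check of the energy densities entering F_SY/F_EC ~ u_B/u_*,
# and of the characteristic SY/EC photon energies for gamma_e,min.
SIGMA_SB = 5.670e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
C_CM = 2.998e10       # speed of light [cm/s]
R_SUN = 6.957e10      # solar radius [cm]
H_EV = 4.136e-15      # Planck constant [eV s]

B0, T_star = 0.25, 3.0e4     # apex field [G] and companion temperature [K]
R_star = 6.6 * R_SUN         # companion radius [cm]
r_s = 3.5e13                 # shock-to-star distance ~ semi-major axis a (assumption)

u_B = B0**2 / (8.0 * math.pi)                       # magnetic energy density [erg/cm^3]
L_star = 4.0 * math.pi * R_star**2 * SIGMA_SB * T_star**4
u_star = L_star / (4.0 * math.pi * r_s**2 * C_CM)   # stellar photon energy density

gamma_min = 1.5e5
hnu_sy = 4.0e6 * H_EV * B0 * gamma_min**2  # characteristic SY photon energy [eV]
hnu_ec = gamma_min**2 * 2.6                # Thomson upscatter of ~2.6 eV seeds [eV]

print(f"u_B ~ {u_B:.1e}, u_* ~ {u_star:.1e} erg/cm^3")
print(f"h nu_SY(gamma_min) ~ {hnu_sy:.0f} eV (below the 0.3 keV band edge)")
print(f"h nu_EC(gamma_min) ~ {hnu_ec:.1e} eV (below the <100 GeV LAT limits)")
```

The printed photon energies illustrate why $\gamma_{\rm e,min}\approx10^5$ keeps both the SY and EC cutoffs below the observed bands; the actual $F_{\rm SY}/F_{\rm EC}$ ratio also depends on KN suppression and the orbital phase.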
Given this $B_0$ value, $\gamma_{\rm e,min}$, $\gamma_{\rm e,max}$, and $\gamma_{\rm b}$ are inferred from the observed features in the SEDs since electrons with an energy $\gamma_e m_e c^2$ emit SY photons with an energy \begin{equation} \label{eq:syncnu} h\nu_{\rm SY}\approx 4\times 10^6 h\left( \frac{B}{1\rm G} \right) \gamma_e^2\rm \ eV, \end{equation} where $h$ is the Planck constant, and upscatter seed photons with an energy $E_{\rm seed}$ to \begin{equation} \label{eq:ecnu} h\nu_{\rm EC}\approx \gamma_e^2 \left( \frac{E_{\rm seed}}{1\rm eV}\right) \rm \ eV. \end{equation} $\gamma_{\rm e, min}$ is estimated to be $\approx10^5$ so that the low-energy cutoff in the model SED is below the 0.3\,keV start of the X-ray band and the low-energy EC SED is below the LAT upper limits at $<$100\,GeV energies (Fig.~\ref{fig:fig9}). $\gamma_{\rm e, max}$ is computed to be $\approx 10^{8}$ from the radiation reaction limit as mentioned in Section~\ref{sec:sec2_4}. An upper bound for $\gamma_{\rm e, max}$ can be estimated since a larger value (e.g., $\ge 4\times 10^{8}$) would overpredict the $<$GeV upper limits. Constraints on the lower bound for $\gamma_{\rm e,max}$ are poor because the VHE SED is little affected by $\gamma_{e,\rm max}$ due to the KN effect. The observed $\sim$10\,TeV photons imply that $\gamma_{e,\rm max}>10^{6}$. It is difficult to estimate $\gamma_{\rm b}$ without a measurement in the MeV band. Our SY LC model for J0632 (Figure~\ref{fig:fig10}, top) matches the detected X-ray peak at $\phi\approx0.35$ and produces an LC very similar to that of TAH21, who phenomenologically modeled the disk crossings as an enhancement of $B$ in the IBS. With this model, we ignore two complex phenomena, strong SY cooling and disk interactions, that impact the EC emission. In reality, SY cooling of particles would be boosted by the enhanced $B$.
However, as our model does not include particle cooling, we are able to leverage the enhanced $B$ to fit the SY emission without altering the EC emission in the VHE band. Other processes such as additional seeds from disk heating may provide favorable conditions for EC emission \citep{Chen2019}. As noted in Section~\ref{sec:sec2_1}, we also ignore these complexities in our model. The EC LC model (Figure~\ref{fig:fig10} bottom) reproduces two bumps: one at $\phi\approx0.25$ produced by the slow flow owing to the high seed density and favorable ICS geometry at periastron and SUPC, and the other at $\phi\approx 0.75$ due to the Doppler-boosted emission of the fast flow. Note that there is a small notch at $\phi=0.75$ in the EC LC but not in the SY LC. This is due to the unfavorable ICS scattering geometry at IFC ($\psi\approx 0$; see Section~\ref{sec:sec3_3}). Although the notch cannot be identified in the data shown in the bottom panel of Figure~\ref{fig:fig10}, a smoothed LC presented in Figure~16 of \citet[][]{Adams2021} seems to exhibit a notched morphology. We also display contributions of the stellar (red, green) and disk (blue, purple) EC by the IBS and by the preshock particles in the bottom panel of Figure~\ref{fig:fig10}. Unlike the EC emission of the IBS particles, the preshock EC exhibits only one bump at $\phi$=0.25 (near periastron and SUPC). Hence the preshock emission controls the relative amplitude of the EC LC at $\phi$=0.25 and $\phi$=0.75; using more preshock particles (e.g., larger $\eta \dot E_{\rm SD}$; Eq.~\ref{eq:number}) and/or placing the emission zone (preshock acceleration region; Section~\ref{sec:sec2_3}) farther away from the pulsar would make the $\phi$=0.25 bump more pronounced. As noted in Section~\ref{sec:sec3_3}, the EC off the disk seeds is weaker than that off the stellar seeds.
Although the distinction between the shapes of the stellar-EC and the disk-EC emissions is subtle, the ratio of the emission peaks of the slow ($\phi=0.25$) and the fast flow ($\phi=0.75$) particles is smaller for the stellar EC (red). This is because the stellar EC at SUPC suffers from the KN effect, whereas the IR seeds of the disk alleviate that effect. At IFC, the small scattering angle $\psi$ and Doppler de-boosting push the onset of the KN suppression to much higher $\gamma_e$, where little or no scattering electrons are available, making the KN effect observationally unimportant for both the stellar and disk seeds. The joint modeling of the X-ray and VHE LCs allows estimates of the IBS size $s_{\rm max}$ ($\approx 4d_{\rm orb}$), $\Gamma_{\rm max}$ ($\approx 7$), and $\xi$ ($\approx 0.05$) because the ratio of the peak fluxes of the slow- and the fast-flow emissions generated by the SY and EC processes ($F^{\rm slow, max}_{\rm SY}/F^{\rm fast, max}_{\rm SY}$ and $F^{\rm slow, max}_{\rm EC}/F^{\rm fast, max}_{\rm EC}$) and the widths of the LC bumps at $\phi\approx 0.75$ depend differently on these parameters (Figs.~\ref{fig:fig5} and \ref{fig:fig7}). Note that the ratios $F^{\rm slow, max}_{\rm SY}/F^{\rm fast, max}_{\rm SY}$ and $F^{\rm slow, max}_{\rm EC}/F^{\rm fast, max}_{\rm EC}$ are controlled by $\xi$, and differences between them are determined by $s_{\rm max}$ and $\Gamma_{\rm max}$ (panels e and f in Figs.~\ref{fig:fig5} and \ref{fig:fig7}). The latter also controls the widths of the LC peaks at IFC. \subsubsection{Phase-resolved VHE SEDs}\label{sec:sec4_2_2} As the model reproduces the observed phase-averaged SED and multi-band LCs reasonably well, we check whether the model also reproduces the phase-resolved VHE SEDs (Fig.~\ref{fig:fig11}).
Because the observed SED in a phase interval was constructed by combining observations from different epochs with different exposures spread over the phase interval \citep[][]{Adams2021}, we display model calculations at 20 phases within the interval (denoted in the panel) for comparison. The VERITAS- and H.E.S.S.-measured phase-resolved SEDs are reproduced well by our model. Specifically, the modulation of the spectral hardness of the VHE spectra is mostly consistent with the model; the hard VHE spectra in the phase interval 0.6--0.8 (Fig.~\ref{fig:fig11}) are explained by the enhanced contribution of the Doppler-boosted emission of the fast flow. Some noticeable discrepancies between the data and the model are likely due to strong orbit-to-orbit variations of the VHE flux, as noted by \citet{Adams2021}, and uneven phase coverage of the VHE measurements. It is intriguing to note that the observed SEDs exhibit a bump at $\sim$TeV in the phase intervals 0.2--0.4 and 0.8--0.2 (Fig.~\ref{fig:fig11}). This bump was difficult to replicate with basic IBS models, which consider IBS particles only and hence produce a smooth SED. The preshock EC emission (Section~\ref{sec:sec3_3} and Fig.~\ref{fig:fig9}) can naturally reproduce the SED feature, which possibly suggests that the preshock emission is an important contributor to the VHE emission in J0632. \section{Discussion\label{sec:discussion}} In the previous sections, by adopting the latest orbital solution and binary system geometry suggested by a recent study (TAH21), we demonstrated that both the LC and SED data of J0632 from the X-ray to the VHE band can be well explained with our IBS model. Compared to the previous investigations based on X-ray data only, we further constrained the IBS parameters by simultaneously fitting the X-ray and VHE data (Table~\ref{ta:ta1}).
Still, we note that some of the parameters are degenerate, making error estimation difficult, and thus the derived parameters may not represent a unique solution. In this section, we discuss several key IBS properties and observationally important parameters. \subsection{VHE emission} We found that most of the VHE flux in J0632 arises from EC of the stellar seeds by IBS electrons and that the exact shape of the seed spectrum (e.g., features in the disk spectrum in Fig.~\ref{fig:fig3}) does not alter the EC spectrum significantly as long as the IR-to-optical SED model matches the observed flux. For example, a phenomenological multi-temperature blackbody model for the disk emission also results in a similar EC SED. Notice that the disk-EC SED is slightly broader than the stellar-EC SED (Section~\ref{sec:sec3_3} and Fig.~\ref{fig:fig6}). In particular, the former is less affected by the KN effect, which is severe at $h\nu_{\rm EC}\ge \frac{(m_e c^2)^2}{E_{\rm seed}}$, hence making the SED model spectrally harder if the intrinsic energy density of the disk emission is much higher. The observationally inferred $u_{\rm *}$ of the disk depends on the assumed disk inclination $i_{\rm d}$ with respect to the LoS. We used an extreme inclination angle of $i_{\rm d}=85^\circ$ for modeling the disk emission (Section~\ref{sec:sec3_1}); if this value were smaller, the intrinsic disk emission would be lower, thus affecting the total observed VHE emission less significantly. The `total' disk + star seed flux inferred from the optical data (Fig.~\ref{fig:fig3}) is an important factor that determines the amplitude of the VHE emission. The inferred optical seed density $u_{\rm *}$ in the emission zones depends on the assumed distance to the source (here, $d$=1.4\,kpc) and the orbital size quantified by the semi-major axis $a$.
Hence, these parameters $d$ and $a$ have large impacts on our IBS parameter determination because one of the most important parameters, $B_0$, is related to $u_{\rm *}$ (Section~\ref{sec:sec4_2_1}). Our estimates of $B_0$, and thus of $\gamma_{\rm e,min}$ and $\gamma_{\rm e,max}$, are sensitive to the value of $u_{\rm *}$; see Eqs.~\ref{eq:syncnu} and \ref{eq:ecnu}. Therefore, accurate measurements of $d$ and $a$ as well as precise characterizations of the X-ray and VHE emission will lead to a better understanding of the particle properties and thus of particle acceleration in the IBS. \subsection{Particle flow in the IBS region} We were able to infer $s_{\rm max}\approx 4d_{\rm orb}$, $\Gamma_{\rm max}\approx 7$, and $\xi\approx 0.05$ by modeling the X-ray and VHE LCs. Since the quality of the VHE LCs is rather poor, our estimates are subject to large uncertainties. Moreover, the model parameters are covariant with one another, and our simplified prescriptions, such as the IBS shape and the $B$ structure of the IBS flows, may not be very accurate. These may add further systematic uncertainties. Although the reported parameter values (Table~\ref{ta:ta1}) need to be taken with some caution, we point out that the inferred parameters are in accord with the recent HD simulations supporting the general properties of IBS flows in TGBs: high-energy emission arises from the inner region of the IBS flow, and a small fraction of the particles are bulk accelerated in the flow. \subsection{A TeV bump in the SED: EC by preshock particles} The most interesting emission feature in J0632 is the $\sim$TeV SED bump seen in some orbital phases, which we ascribed to EC emission of the preshock particles with $\gamma^{\rm pre}_{\rm e,peak}\sim 10^6$. It has been widely believed that the pulsar wind is dominated by Poynting flux near the magnetosphere and that the magnetic energy should be converted into particle energy in the pulsar wind zone between the light cylinder and the termination shock.
The location, physical processes, and energetics of such an energy conversion have not been well understood theoretically \citep[e.g.,][]{Coroniti1990,Cerutti2020}. However, \citet{Aharonian2012} modeled the pulsed VHE emission of the Crab pulsar and suggested that the Poynting flux of the pulsar wind should be converted into kinetic energy of particles in a narrow region at $\approx$30$R_{\rm LC}$, and that the accelerated particles have a Lorentz factor of $\gamma^{\rm pre}_{\rm e,peak}\sim 10^6$. The $\sim$TeV bump in J0632's SEDs can be explained by such preshock emission, as we demonstrated in Section~\ref{sec:sec4_2_2}. Note that similar SED features have been seen in the VHE low state of the TGB PSR~J2032+4127 \citep[e.g.,][]{VeritasJ20322018}. The relatively sharp $\sim$TeV bumps imply that the energy distribution of the preshock is narrow, like the Maxwellian distribution we assumed (Eq.~\ref{eq:maxwellian}). Other narrow distributions, such as a broadened delta function or a narrow power law with a peak at $\gamma^{\rm pre}_{\rm e,peak}\sim10^6$, would predict slightly different SED shapes for the bumps but may explain them equally well. Currently, these distributions are indiscernible because the measurement uncertainties are large. Precise characterization of the bumps with deep VHE observations may help to constrain the preshock particle distribution, possibly providing a hint about acceleration mechanisms \citep[e.g.,][]{Hoshino1992,Jaroschek2008,Sironi2011} in the pulsar wind zone. In our model, the SED amplitude, emission frequency (Fig.~\ref{fig:fig11}), and the LC shape (Fig.~\ref{fig:fig10} bottom) of the preshock EC depend sensitively on the conversion location and particle energy; if the conversion takes place at a larger distance from the pulsar, the model predicts weaker preshock emission and sharper LC features (in particular, a sharper peak near periastron is expected).
Thus, the SED features in TGBs can provide a sensitive probe of the energy conversion process in pulsar winds. While the current measurements of the SED and LC of J0632 are somewhat constraining, more accurate measurements with deeper VHE observations \citep[e.g., by CTA;][]{CTA2011} in the near future will enable us to determine the fundamental parameters for energy conversion in pulsar winds. \subsection{A sharp spike in the VHE LC: a signature of disk-pulsar interaction? } Since the large spike observed at $\phi = 0.35$ in the VHE LC was not accounted for in our baseline IBS model, we suspect that this unique feature may be caused by a disk crossing of the pulsar. Note that the circumstellar disk material was suggested to be the origin of the higher $N_{\rm H}$ at the disk interaction phases \citep[TAH21;][]{Malyshev2019}. If this is true, the disk interaction may leave observable signatures in the multi-band emission at the `interaction' phases. \citet[][]{HESS2020} reported an increase in the VHE flux near the disk crossing phase for another TGB, PSR~B1259$-$63, and \citet{Chen2019} attributed the enhanced VHE flux to disk heating by the IBS. Similarly, we find that the VHE spike in the LC can be reproduced if we assume that the IBS is immersed in a $T_{\rm BB}\approx 950$\,K blackbody field of a heated disk. Note, however, that the existence of such a $T_{\rm BB}\approx 950$\,K blackbody field is not physically justified within our model, since the model does not include the complex dynamical effects and microphysics of the interaction, such as disk heating. We defer further investigations to a future work. \subsection{Comparison with other IBS models: double-bump features in the LCs} A difference between the IBS model (TAH21) and the similar inclined disk model for J0632 \citep[][]{Chen2022} is that the latter assumed a one-zone shock geometry whereas the former used an extended cone-shaped IBS with two particle flows. This difference has a significant impact on the LC modeling.
The one-zone shock model requires a peculiar ``inclined disk'' geometry to account for the two bumps in the LC \citep{Chen2022}. In contrast, the IBS cone with two particle flows can naturally reproduce two bumps, and disk crossings can add two more (bumps or peaks). Overall, the IBS model was able to accommodate the complex X-ray LC of J0632 with two bumps and one peak (TAH21). The most distinctive feature of IBS models, as compared to one-zone inclined disk models, is a double-peak structure around IFC in the X-ray/VHE LCs of highly inclined sources \citep[Figs.~\ref{fig:fig5} and \ref{fig:fig7}; see also][]{vandermerwe2020}. Such double-peak features are often observed in the X-ray LCs of redback pulsar binaries and are regarded as a signature of bulk-accelerated particles in the IBS \citep[e.g.,][]{Kandel2019}. Intriguingly, a hint of double-peak X-ray/VHE LCs was also found in J0632 \citep[Figs.~15 and 16 of][]{Adams2021}, suggesting that the flux at IFC is indeed produced by the IBS cone. While our SY model based on the parameters derived in TAH21 does not reproduce the X-ray double peaks at $\phi=0.75$, the IBS model can account for the double-peak LC with a larger inclination (e.g., Fig.~\ref{fig:fig5}c); in this case, the model-predicted dip at $\phi=0.75$ in the EC LC (Fig.~\ref{fig:fig10} bottom) might be deeper. This prediction can be confirmed with more sensitive X-ray and VHE observations and can distinguish between the disk interaction case \citep[][]{Chen2022} and the IBS cone emission. \subsection{The origin of short-term variability} Broadband emission of J0632 is known to exhibit strong orbit-to-orbit variability \citep[TAH21;][]{Adams2021}. In the IBS scenario, such short-term variability is likely due to non-uniform clumpy stellar winds that can drive a varying momentum flux ratio $\beta$.
In this case, the shock opening angle $\theta_{\rm cone}$ \citep[Eq.~\ref{eq:coneangle}; see also][]{Bogovalov2008} and the distances of the emission zones from the pulsar $r_{\rm p}$ and the star $r_{\rm s}$ are also expected to vary, affecting the IBS emission. For example, a stronger stellar outflow will push the IBS closer to the pulsar, making $B$ larger by reducing $r_{\rm p}$ and $u_{\rm *}$ smaller by increasing $r_{\rm s}$. This will generally enhance the SY emission but reduce the EC emission, as seen in panel d of Figs.~\ref{fig:fig5} and \ref{fig:fig7}. At the IFC, however, the situation is more complicated because the Doppler factor $\delta_{\rm D}$, which is also determined by $\beta$ through a change in $\theta_{\rm cone}$, comes into play. Generally, stronger variability is expected at the IFC if the LoS is close to the shock tangent, as it is in J0632, because the observed flux depends strongly on $\delta_{\rm D}$ \citep[e.g.,][]{An2017}. Contemporaneous observations of multi-band variability in the optical, X-ray, and VHE bands will provide a useful diagnostic to test the IBS scenario. \section{Summary\label{sec:summary}} We showed that our phenomenological IBS model provides a good fit to the multi-band SED and LC data of J0632, suggesting that the interaction between the winds of the pulsar and the companion drives the observed broadband emission. Below we summarize our results and conclusions. \begin{itemize} \item We constructed an IBS emission model employing physical processes and emission components appropriate for TGBs, applied the model to J0632 with an orbit inferred from X-ray LC modeling (TAH21), and found that the model and the orbit could explain not only the X-ray LC but also the VHE LCs and SEDs of the source. \item The observed SEDs of J0632 show a bump at $\sim$TeV in some phase intervals.
This feature is likely due to ICS emission by preshock particles, implying that the bulk Lorentz factor of the preshock is $\gamma^{\rm pre}_{\rm e,peak}\sim 10^6$. In turn, accurate characterizations of the bump, both observationally and theoretically, will help us to understand the energy conversion processes in pulsar winds. \item Our VHE LC model for J0632 predicts a double-peak structure around IFC. This is a natural consequence of the IBS model, in contrast to inclined disk models \citep[e.g.,][]{Chen2022}. \end{itemize} While the model captures the main features of the broadband SED and LCs of J0632, some of the parameters, such as $\gamma_b$, are not well constrained. Further, the parameters are degenerate, and thus it was difficult to determine a unique solution set. A comparison with MHD simulations as well as future contemporaneous observations in the optical, X-ray, and VHE bands will allow us to determine these parameters better and to break the degeneracy. Further constraints on the IBS properties can be set by detecting a spectral break caused by particle cooling. The break is expected at $\sim$MeV energies, where future missions \citep[e.g., COSI;][]{Tomsick2019} can add valuable data to boost our understanding of TGBs. It is well known that less powerful IBSs are formed in the so-called `black widow' and `redback' pulsar binaries \citep[e.g.,][]{Romani2016,Wadiasingh2017}. The common signatures expected from IBSs formed in pulsar binaries are double-peak X-ray LCs (for large $\theta_i$) and orbital modulations in the $\sim$GeV band \citep[e.g.,][]{An2018,Corbet2022}. Hence, a variety of IBS models, including ours presented in this paper, has been applied to pulsar binaries in circular orbits with a very low-mass companion \citep[e.g.,][]{vandermerwe2020}.
However, it is yet unclear whether a (universal) IBS model can eventually account for the diverse observational properties of both the pulsar binaries and the TGBs by simply employing different geometries and energetics; e.g., the X-ray, GeV, and VHE phase variations are observed to be diverse in these sources \citep[e.g.,][]{Corbet2012,Corbet2016,An2020}. While the pulsar binary and TGB systems certainly share some common emission properties (e.g., orbital modulation), the systems differ fundamentally in their properties (e.g., companion, orbit, and energetics). A larger and more comprehensive multi-wavelength study of these exotic pulsar binaries and TGBs, as demonstrated for J0632 in this paper, will give deeper insights into IBS and pulsar physics. \acknowledgments We thank Melania Nynka for a helpful review of the paper. We thank the anonymous referee for the careful reading of the paper and insightful comments. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT \& Future Planning (NRF-2022R1F1A1063468). \bigskip \vspace{5mm} \bigskip \bibliographystyle{apj} \bibliography{ms} \begin{appendix} The shape of an IBS formed by the interaction of the isotropic winds of two stars was analytically calculated by \citet{Canto1996}, and computations of the SEDs of the SY and ICS emissions are well described in \citet{Finke2008} and \citet{Dermer2009}. Here we present the formulas that we used for the IBS emission model for reference. \section{Formulas for computation of the IBS shape} \label{sec:appendix1} The interaction of two isotropic winds forms a contact discontinuity (CD) whose shape can be computed analytically using pressure balance equations \citep[][]{Canto1996}.
For a winds' momentum flux ratio $\beta$ (Eq.~\ref{eq:beta}) and the geometry depicted in Figure~\ref{fig:fig2}, the locus of the CD in a vertical plane (to the orbital plane) containing the pulsar and the star is given by \begin{equation} \label{eq:CDrad} r_{\rm p} = d_{\rm orb} \mathrm{sin}\theta_{\rm s} \mathrm{csc}(\theta_{\rm p} + \theta_{\rm s}), \end{equation} and \begin{equation} \label{eq:CD} \theta_{\rm s} \mathrm{cot}\theta_{\rm s} = 1 + \beta(\theta_{\rm p}\mathrm{cot}\theta_{\rm p} - 1) \end{equation} \citep[Eqs.~23 and 24 of][]{Canto1996}. The distance to the apex of the CD ($\theta_{\rm s}=\theta_{\rm p}=0$; shock nose) from the pulsar and the asymptotic angle of the IBS ($\theta_{\rm cone}$; half opening angle of the IBS cone) are given by \begin{equation} \label{eq:shocknose} r_0 = d_{\rm orb}\frac{\sqrt{\beta}}{1+\sqrt{\beta}} \end{equation} and \begin{equation} \label{eq:coneangle} \mathrm{tan}\theta_{\rm cone} - \theta_{\rm cone} = \frac{\pi \beta}{1-\beta}, \end{equation} respectively \citep[Eqs.~27 and 28 of][]{Canto1996}. $\theta_{\rm cone}$ increases monotonically with increasing $\beta$. \section{Formulas for computation of emission SEDs} \label{sec:appendix3} Suppose the particle flow has bulk motion with a Lorentz factor $\Gamma$ with respect to an observer, and that the particles in the flow rest frame move randomly (isotropically) with an energy distribution $\frac{dN_e'(\gamma')}{d\gamma'}$, where $\gamma'$ is the Lorentz factor of the particle's random motion in the flow rest frame (primed quantities are defined in the flow rest frame).
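The CD geometry of Appendix~\ref{sec:appendix1} (Eqs.~\ref{eq:shocknose} and \ref{eq:coneangle}) is straightforward to evaluate numerically. A minimal Python sketch, assuming the $\beta=0.045$ fitted in Table~\ref{ta:ta1} (the transcendental equation for $\theta_{\rm cone}$ is solved by bisection, which is safe because $\tan\theta-\theta$ is monotonic on $(0,\pi/2)$):

```python
import math

def cone_half_angle(beta):
    """Solve tan(theta) - theta = pi*beta/(1-beta) by bisection on (0, pi/2)."""
    rhs = math.pi * beta / (1.0 - beta)
    lo, hi = 1e-9, math.pi / 2 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.tan(mid) - mid < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def apex_distance(beta, d_orb=1.0):
    """Stand-off distance r0 of the CD apex from the pulsar."""
    return d_orb * math.sqrt(beta) / (1.0 + math.sqrt(beta))

beta = 0.045  # winds' momentum flux ratio (Table 1)
print(f"theta_cone ~ {math.degrees(cone_half_angle(beta)):.1f} deg")  # ~40 deg
print(f"r0 ~ {apex_distance(beta):.2f} d_orb")                        # ~0.18 d_orb
```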
In randomly oriented $B$, the SY SED of the particles can be computed using the formula given in Eq.~(18) of \citet{Finke2008}; \begin{equation} \label{eq:sysed} f_{\rm SY}(\epsilon)=\frac{\sqrt{3}\delta_{\rm D}^4\epsilon'q_e^3B}{4\pi hd^2}\int_1^\infty d\gamma' \frac{dN_e'(\gamma')}{d\gamma'}R(x), \end{equation} and \begin{equation} \label{eq:Rx} R(x) = \frac{x}{2}\int_0^{\pi} d\theta \mathrm{sin}\theta \int_{x/\mathrm{sin}\theta}^{\infty} dt K_{5/3}(t) {\rm \ with} \ x=\frac{4\pi \epsilon' m_e^2 c^3}{3q_e B h\gamma'^2}, \end{equation} where $\epsilon=h\nu/m_ec^2$ and $\epsilon'=h\nu'/m_e c^2$ are dimensionless energies of the observed (observer frame) and the emitted (flow-rest frame) photons, respectively, $m_e$ is the mass of an electron, $q_e$ is the electron charge, $h$ is the Planck constant, $d$ is the distance between the emission zone and the observer, $K_{5/3}$ is a modified Bessel function, and $\delta_{\rm D}$ is a Doppler beaming factor determined by the flow viewing angle $\theta_{\rm V}$ (angle between flow tangent and observer) and the bulk Lorentz factor $\Gamma$: \begin{equation} \label{eq:Doppler} \delta_{\rm D}=\frac{1}{\Gamma(1 - \sqrt{1 - 1/\Gamma^2}\mathrm{cos}\theta_{\rm V})}. \end{equation} Energetic particles in the preshock and IBS can upscatter ambient low-energy photons emitted by the star (blackbody) and the disk via the ICS process (i.e., EC). The scattering cross section depends on the scattering geometry \citep[e.g.,][]{Dubus2013} and can be simplified with a head-on approximation if the electrons are highly relativistic \citep[i.e., head-on collision;][]{Dermer2009}. 
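Since the SY flux of Eq.~\ref{eq:sysed} scales as $\delta_{\rm D}^4$, the Doppler factor of Eq.~\ref{eq:Doppler} largely decides where in the orbit the fast flow is visible. A quick numerical check, assuming $\Gamma_{\rm max}=7$ as in Table~\ref{ta:ta1}:

```python
import math

# Doppler beaming factor delta_D for bulk Lorentz factor Gamma and
# viewing angle theta_V (angle between the flow tangent and the LoS).
def doppler_factor(gamma_bulk, theta_v_deg):
    beta = math.sqrt(1.0 - 1.0 / gamma_bulk**2)
    return 1.0 / (gamma_bulk * (1.0 - beta * math.cos(math.radians(theta_v_deg))))

print(doppler_factor(7.0, 0.0))    # ~2*Gamma ~ 14 when the LoS is along the flow
print(doppler_factor(7.0, 90.0))   # 1/Gamma ~ 0.14 when viewed perpendicular
```

The $\sim\!10^2$ swing in $\delta_{\rm D}$ between the two orientations, raised to the fourth power for SY (and to $3+p_1$ for EC), is what makes the fast-flow peaks near IFC so sharp.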
Because the electrons in the IBS and preshock of TGBs are highly relativistic (i.e., $\gamma_{\rm e}\gg1$; Sections~\ref{sec:sec2_3}--\ref{sec:sec2_4}), we use the head-on approximation and calculate EC SEDs using Eq.~(34) of \citet{Dermer2009}: \begin{equation} \label{eq:ecsed} f_{\rm EC}(\epsilon_s)=\frac{3c\sigma_{\rm T}}{32\pi d^2}\epsilon_{\rm s}^2 \delta_{\rm D}^3\int_0^{2\pi}d\phi_* \int_{-1}^1 d\mu_* \int_0^{\epsilon_{*,\rm hi}} d\epsilon_* \frac{u_*(\epsilon_*,\Omega_*)}{\epsilon_*^2} \int_{\gamma_{\rm low}}^{\infty} d\gamma \frac{dN'_e(\gamma / \delta_{\rm D}) }{d\gamma}\frac{\Xi}{\gamma^2}, \end{equation} where $\sigma_{\rm T}$ is the Thomson scattering cross section, $\epsilon_*$ and $\epsilon_s$ are dimensionless energies ($\frac{h\nu}{m_e c^2}$) of the incident and scattered photons, $\phi_*$ and $\mu_*$ specify the direction of the incident photon entering the emission zone, $u_*(\epsilon_*, \Omega_*)$ is the energy density of the seed photons in the emission zone, and $\Xi$ is defined as \begin{equation} \label{eq:Xi} \Xi \equiv y + y^{-1} - \frac{2\epsilon_s}{\gamma\bar \epsilon y} +\left (\frac{\epsilon_s}{\gamma\bar \epsilon y} \right )^2 {\rm\ with\ } y\equiv 1-\frac{\epsilon_s}{\gamma}, \end{equation} where $\bar \epsilon$ is the invariant collision energy: \begin{equation} \label{eq:ebar} \bar \epsilon \equiv \gamma \epsilon_*(1-\sqrt{1-1/\gamma^2}\mathrm{cos}\psi) \end{equation} with $\psi$ being the scattering angle of the incident photon. The integration limits in Eq.~\ref{eq:ecsed} are determined by the scattering kinematics and are \begin{equation} \label{eq:gammalow} \gamma_{\rm low} = \frac{\epsilon_s}{2}\left[ 1 + \sqrt{1 + \frac{2}{\epsilon_* \epsilon_s(1-\mathrm{cos}\psi)}} \right ], \end{equation} and \begin{equation} \label{eq:ehi} \epsilon_{*,\rm hi}= \frac{2\epsilon_s}{1-\mathrm{cos}\psi}. \end{equation} \end{appendix}
Title: Simons Observatory: Broadband Metamaterial Anti-Reflection Cuttings for Large Aperture Alumina Optics
Abstract: We present the design, fabrication, and measured performance of metamaterial Anti-Reflection Cuttings (ARCs) for large-format alumina filters operating over more than an octave of bandwidth to be deployed on the Simons Observatory (SO). The ARC consists of sub-wavelength features diced into the optic's surface using a custom dicing saw with near-micron accuracy. The designs achieve percent-level control over reflections at angles of incidence up to 20$^\circ$. The ARCs were demonstrated on four 42 cm diameter filters covering the 75-170 GHz band and a 50 mm diameter prototype covering the 200-300 GHz band. The reflection and transmission of these samples were measured using a broadband coherent source that covers frequencies from 20 GHz to 1.2 THz. These measurements demonstrate percent-level control over reflectance across the targeted pass-bands and a rapid reduction in transmission as the wavelength approaches the length scale of the metamaterial structure where scattering dominates the optical response. The latter behavior enables the use of the metamaterial ARC as a scattering filter in this limit.
https://export.arxiv.org/pdf/2208.02292
\section{Introduction} \label{sec:Introduction} Astronomical observations at millimeter and sub-millimeter wavelengths are key to understanding aspects of the universe ranging from the conditions in its first instants to the star formation history and the details of stellar births in our galaxy. Large-format detector arrays have revolutionized observations at these wavelengths, with state-of-the-art focal planes containing tens of thousands of broadband detectors. These focal planes have driven the need for large-aperture optical elements that can operate at cryogenic temperatures. Alumina is a ceramic material that can be machined into large-diameter optics for millimeter-wave bands, and its use as an IR-rejecting filter is critical to the function of next-generation instruments. For the optics considered in this publication, we sourced our low-dielectric-loss alumina from NTK Ceratek in Japan \cite{Ceratek_JP}. High-purity ($>99\%$) sintered alumina exhibits low dielectric absorption at millimeter wavelengths, but has relatively high dielectric losses in the sub-millimeter \cite{Alford_LowLoss,Breeze_LowLoss,Afsar_LowLoss}. This property, combined with strong reststrahlen bands at infrared wavelengths, makes alumina an ideal filter material: it rejects infrared radiation either by scattering it out of the optical path or by absorbing it and conducting the heat efficiently to the cryogenic system without significant heating. However, alumina's high index of refraction ($n=\sqrt{\epsilon_r}=3.14$) is approximately constant in the microwave \cite{Inoue_thermalandopticalproperties,Yukithesis, Lamb_OpticalProperties} and causes 26\% of incident light to be reflected per surface in the absence of optical coatings. These reflections not only reduce the amount of transmitted power that reaches the detectors, but can also cause unwanted optical effects such as degraded image quality and reduced polarization purity.
Therefore, reliable, high-quality anti-reflection (AR) coatings are needed. Coatings can be realized by adding materials with carefully selected dielectric properties and thicknesses to the surface of interest. In the simplest realization, a quarter-wavelength-thick layer of material with index $\sqrt{n}$ is added to the surface of a material with index $n$. This quarter-wave coating can perfectly cancel reflections at a particular wavelength and provide acceptable performance over a moderate bandwidth. Broader-band performance is possible with multiple-layer coatings, or by extending these to the continuum limit to realize a gradient-index coating \cite{AR_Coating_Review}. In this work we focus on two-layer metamaterial cuttings that function as a homogeneous two-layer coating, which can achieve percent-level control of reflectance over an octave of bandwidth when applied to alumina. Enormous efforts have gone into developing multi-layer AR coatings for alumina. These include laminated plastics \cite{SPT-3G_AR}, laminated epoxy \cite{EpoxyAR,Bicep_Plastic_Lam_AR,Nitta_EpoxyARandLaserAR}, laminates of commercial materials \cite{Mullite_AR}, and laser-machined metamaterial cuttings \cite{Laser_AR,Nitta_EpoxyARandLaserAR,Matsumura_LaserAR}. The laminates require identifying materials with the required indices of refraction, controlling changes in the thickness and index throughout the lamination process, and controlling cryomechanical delamination. The latter poses a critical risk for applications that require high reliability. Recent results on laser machining show promise, with coatings demonstrated on deployed optics, though the technique has yet to control reflections at the percent level across an octave of bandwidth. We present a new approach whereby we use dicing saw blades to cut metamaterial structures, composed of sub-wavelength stepped pyramids, into the alumina surface.
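The quarter-wavelength argument above is easy to verify with the standard characteristic-matrix method for layered media. The following sketch (our own code, not SO software; $n=3.14$ is the alumina index quoted in the introduction) reproduces both the 26\% per-surface reflection of bare alumina and the vanishing reflection of an ideal $\sqrt{n}$ quarter-wave layer at its design frequency:

```python
import numpy as np

C_M_S = 2.998e8  # speed of light [m/s]

def reflectance(freq_ghz, n_layers, d_mm, n_sub, n0=1.0):
    """Normal-incidence power reflectance of a dielectric layer stack on a
    semi-infinite substrate, via the characteristic (transfer) matrix."""
    out = []
    for f in np.atleast_1d(freq_ghz):
        k0 = 2.0 * np.pi * f * 1e9 / C_M_S        # vacuum wavenumber [1/m]
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_mm):
            delta = k0 * n * d * 1e-3             # phase thickness of layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, Cc = M @ np.array([1.0, n_sub])
        r = (n0 * B - Cc) / (n0 * B + Cc)
        out.append(abs(r) ** 2)
    return np.array(out)

n_al = 3.14
print(reflectance(150.0, [], [], n_al))           # bare alumina -> ~0.267

n_qw = np.sqrt(n_al)                              # ideal single-layer index
d_qw = (C_M_S / 150e9) * 1e3 / (4.0 * n_qw)       # quarter-wave depth [mm]
print(reflectance(150.0, [n_qw], [d_qw], n_al))   # -> ~0 at 150 GHz
```

The same machinery extends to the two-layer case by passing two entries in `n_layers` and `d_mm`, which is the homogeneous limit the metamaterial cuttings emulate.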
This approach is based on our group's previous work on silicon ARCs \cite{Datta_Si_AR,Coughlin_si_AR,Golec_si_AR}, which are now deployed on AdvACTPol \cite{Thornton_ACT_Overview}, CLASS \cite{CLASS_Design_Overview}, TolTEC \cite{TolTEC_Overview}, and the Simons Observatory \cite{SO_Instrument_Overview}; expanding this technology to alumina substrates allows us to take advantage of the IR-rejecting material properties inherent in alumina that are not present in silicon. We demonstrate this technique on the alumina filters for the Simons Observatory (SO), a CMB observatory that is currently being deployed in the Atacama Desert in Chile. The organization of this paper is as follows: Section 2 contains a brief description of the designs of the ARCs for two dichroic observing bands centered at 90/150 GHz (the Mid-Frequency bands or MF) and 220/280 GHz (the Ultra High-Frequency bands or UHF). The fabrication procedure with the dicing saw is described in Section 3, and metrology of the machined coatings follows in Section 4. We present measurements of the reflection and transmission of the MF and UHF ARCs in Section 5, along with a description of the apparatus used to make those measurements. We conclude with a discussion of the performance of the ARCs and the production of these optical elements for SO and future experiments \cite{CMBS4_sciencebook,CMBS4_techbook}. \section{Design} \label{sec:Design} The design of the ARC is nearly identical in geometry to the diced metamaterial ARCs that have been fabricated in silicon lenses and deployed on current CMB experiments \cite{Datta_Si_AR,Coughlin_si_AR,Golec_si_AR}. The ARC consists of nested cuts of fixed thickness and depth periodically diced into the optic's surface, as shown in Figure \ref{fig:design}.
If the periodicity, or pitch (P), is sufficiently small compared to the wavelength of the incident radiation, then the sub-wavelength metamaterial features do not scatter and behave like a traditional homogeneous layered AR coating \cite{Datta_Si_AR, pitch_criteria}. The ARC design is modelled in the finite-element analysis software HFSS by Ansys \cite{Ansys_HFSS}. The initial design parameters are based on an existing two-layer metamaterial ARC design for silicon optics, but scaled to the alumina index and the 75-170 GHz MF band. The pitch is then fixed so that the coating will not scatter in the observing band at angles of incidence up to $20^\circ$, the most extreme angle at which rays reflect off the alumina filters in the SO optical systems. The remaining design parameters, the widths (kerfs) and depths of the nested cuts, are then optimized to minimize reflection across the observing bands. The UHF design was numerically optimized for the 200-300 GHz band. In this case, a simple scaling of the ARC design was not possible due to blade limitations with regard to the thinner of the nested cuts. To overcome this issue, the kerf of the thinner cut (K2) was fixed to the smallest achievable kerf in alumina, 50 microns, and the optimization process adjusted the remaining three free parameters. In both the MF and UHF cases the optimal designs result in percent-level control of reflections across the bands, which satisfies the design requirements for SO. The initial optimal designs were prototyped, and the measured profile of the cuts revealed that dicing-blade tool wear resulted in rounded rather than flat bottoms, as shown in Figure \ref{fig:Microscope_Pictures}. This rounded geometry was incorporated into the HFSS simulation and the ARC designs were re-optimized. The resulting final design parameters for the MF and UHF ARCs are summarized in Table \ref{tab:DesignParams}.
\begin{table}[htbp] \centering \caption{Parameters of the two AR coating designs} \begin{tabular}{ccc} \hline & MF & UHF \\ \hline Pitch (P) & 0.522 mm & 0.295 mm \\ Kerf 1 (K1) & 0.220 mm & 0.160 mm \\ Depth 1 (D1) & 0.425 mm & 0.250 mm \\ Kerf 2 (K2) & 0.070 mm & 0.050 mm \\ Depth 2 (D2) & 0.289 mm & 0.138 mm \\ \hline \end{tabular} \label{tab:DesignParams} \end{table} After the optimal designs were chosen, the tolerance of the performance to errors in cut depth was analyzed. The dicing blade thicknesses are controlled to a three-micron tolerance, which has little to no effect on the ARC performance. Depth errors dominate the tolerance budget of the machining process. The ARCs were simulated at normal incidence with combinations of depth errors of $\pm20$ microns in steps of 10 microns for each effective layer. The results of this analysis, along with the nominal ARC performance, are shown in Figure \ref{fig:tolerance}. Note that the simulations were performed on a single-sided ARC model, so the results represent the reflection per surface. All simulations that deviate from the nominal design are below 2\% reflection in the observing bands, which is the design goal for SO, and so we fix 20 microns as the requirement for depth control during fabrication of the ARCs. \section{Fabrication} The metamaterial ARCs were fabricated using a custom-built multi-axis dicing saw system built by our team for SO at Fermi National Accelerator Laboratory. This system (shown in Figure \ref{fig:dicingsaw}) consists of three dicing saw spindles mounted on independent z-axes (100 mm of travel) on a common y-axis ($\pm600$ mm of travel). A ruby-tipped metrology probe with sub-micron accuracy is mounted on the fourth z-axis. This probe is used to map the surface of the mounted optics to be cut. The filters to be machined are secured on a chuck mounted on a rotary stage with $360^\circ$ rotation on top of an x-axis ($\pm275$ mm of travel).
These axes allow the complete fabrication of an ARC on one side of an up to 600 mm diameter optic without the need to dismount the optic. The dicing procedure for the alumina ARC is similar to the process used for silicon \cite{Datta_Si_AR}, but because alumina is considerably harder than silicon, extra steps must be taken to ensure the cuts maintain the correct depth. The procedure starts with mounting an alumina filter and using the metrology probe to map its surface. These data are fit to a flat-plane model with Fourier corrections to account for deformations due to how the filter is clamped. The residuals of this fitting procedure are normally less than five microns. The cut trajectories are then generated for each dicing spindle. To confirm the absolute depth of the cuts, test touches are then performed with the dicing saw blades on the filter surface. These serve two purposes: the first is to confirm that the fit of the model to the metrology data is accurate; the second is to correct for any z-offset between the model surface and the actual filter, which can arise from slight differences in the blade diameters. A set of test touches is additionally performed on a test silicon wafer mounted to the same aluminum chuck as the alumina filter, which serves as a relative reference for blade wear. After every 25 lines diced on the filter, the system automatically makes a test touch on the silicon wafer. The test touches are chosen to be approximately 50 microns deep. The length of a test touch represents a chord across the 78 mm diameter circle defined by our round blades. Measurement of this chord allows a micron-accurate determination of the depth of the cuts. This measurement is used to determine the blade wear (diameter loss) as a function of millimeters of alumina diced. Controlling for tool wear of the dicing blades is crucial for the success of these coatings. Different blades are used to realize the two different kerfs in both the MF and UHF ARCs.
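The test-touch depth determination is simple circle geometry: a blade of radius $R$ plunged to depth $d$ leaves a surface chord of length $2\sqrt{2Rd - d^2}$. A short sketch (helper names are ours), assuming the 78 mm blade diameter quoted above:

```python
import math

BLADE_DIAMETER_MM = 78.0  # round dicing blades, per the text

def chord_from_depth(depth_mm, diameter_mm=BLADE_DIAMETER_MM):
    """Surface chord length left by a circular blade plunged to a given depth."""
    r = diameter_mm / 2.0
    return 2.0 * math.sqrt(2.0 * r * depth_mm - depth_mm ** 2)

def depth_from_chord(chord_mm, diameter_mm=BLADE_DIAMETER_MM):
    """Invert the chord measurement to recover the cut depth."""
    r = diameter_mm / 2.0
    return r - math.sqrt(r ** 2 - (chord_mm / 2.0) ** 2)

L = chord_from_depth(0.050)  # nominal 50-micron test touch -> ~3.95 mm chord
```

Because the chord grows roughly as $\sqrt{d}$, a 1 micron change in depth shifts the chord of a 50 micron test touch by about 40 microns, which is what makes a micron-accurate depth determination from a length measurement possible.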
These blades differ not only in thickness, but also in how the zinc composite material they are made of is bonded and in their blade exposure, which leads to multiple blade-wear coefficients to track. Moreover, the blade wear also depends on which orientation of the ARC is being diced. Since the second set of cuts (rotated $90^\circ$ relative to the first set) dices through previously diced material, the blades need to remove roughly half the alumina and thus wear more slowly. For reference, for the MF ARC we have found that the blades wear on average at most 6 microns per meter of alumina diced for the thick blade and 20 microns per meter for the thin blade. One 42 cm diameter MF filter requires roughly 260 meters of total length diced. With the maximum blade exposure that does not compromise cut quality, the dicing process requires roughly two thick and 12 thin blades to completely fabricate a filter. To streamline the fabrication process, we mount one of the thick blades on one spindle and thin blades on the remaining two spindles. This allows for longer cutting periods without blade changes. Every time a blade is changed, the test touches on the silicon reference wafer calibrate the depth the new blades are cutting and allow new wear coefficients to be inferred. Using the procedures outlined in this section, four full-scale SO Large Aperture Telescope (LAT) MF filters were fabricated, as well as a two-inch diameter UHF prototype. The ultimate production rate achieved for the MF filters (one of which is shown in Figure \ref{fig:MF_Filter}) was 15 days per 42 cm diameter filter. We believe that a higher rate can be achieved if the inspection of the test touches on the silicon reference wafer is automated. If a microscope is added to the system, this test-touch process could be done remotely or automated completely. This would allow for nearly continuous operation of the saw system and could cut the production time to less than a week per filter.
\section{Metrology} Throughout the development of the ARCs, measurements of the coating need to be made to confirm that the cuts are being diced to specification. We use several techniques to make these measurements, including microscopy and photography. Microscopy was performed on the smaller ARC prototypes, since the larger filters do not fit in the available microscope. The microscope is also used during the fabrication process to inspect cuts diced into a test silicon wafer to ensure the dicing blades are not worn to the point that the profile is compromised. Images taken with the microscope of the MF and UHF ARCs are shown in Figure \ref{fig:Microscope_Pictures}; one can clearly see that the profile is similar to the fiducial stepped-pyramid design, but with rounded bottoms due to tool wear. The microscope images also confirm that the kerfs of the cuts are to specification. The differences between the measured depths and the nominal values are within the 20-micron machining tolerance, which confirms that the machining procedure outlined in the previous section achieves the design specifications for the ARCs. We use a 5:1 macro lens on a digital camera to verify that there are no large-scale variations in the ARCs and to inspect any defects that may appear. The resulting images from the camera serve as the quality assurance step in the fabrication procedure of the filters. The photos shown on the right side of Figure \ref{fig:MF_Filter} confirm that there is essentially no chipping or other defects that can be associated with the dicing procedure. Additionally, no defects appeared after thermally cycling the filter between room temperature and 80 Kelvin, the operating temperature of these filters in the SO LAT cryostat. Due to the robustness of alumina, the defect rate is extremely low, with fewer than a dozen broken ARC pillars across a filter that contains over 500,000 pillars in total.
\section{Reflection and Transmission} The reflection and transmission of the ARC filters and prototypes were measured using an ultra-broadband (20-1200 GHz) source and detector coupled to the sample using parabolic mirrors. The measurements presented here are made at ambient room temperature. The operating temperatures of the alumina filters in the SO optical systems are 80 K and below. While the index of refraction will not change appreciably from room temperature to those temperatures, the dielectric losses will decrease by as much as a factor of four \cite{Inoue_thermalandopticalproperties}. This will not affect the reflection performance presented here, but it will result in improved in-band transmission. The cryogenic dielectric loss was accounted for in the design of the IR blocking filters \cite{ningfeng_LATR_thermal}. \subsection{Transmitter and Receiver} The ultra-broadband coherent source used to measure reflection and transmission uses two distributed feedback (DFB) lasers with indium gallium arsenide transmitter and receiver photomixers purchased from Toptica Photonics. The two DFB diode lasers send near-infrared light to the emitter photomixer. The difference frequency of these two tuned lasers dictates the terahertz frequency generated in the photomixer. The two diode lasers are temperature controlled automatically by the accompanying digital controller. The emitter photomixer, i.e. the transmitter, outputs a beam with a divergence angle of 12 to 15 degrees from a 25 mm diameter encapsulating silicon lens. The transmitter (also referred to as the source) generates a continuous-wave (cw) terahertz signal. This photomixer is a metal-semiconductor-metal structure with interdigitated electrodes and a log-spiral antenna. The electrodes produce a photocurrent which oscillates at the difference frequency. This photocurrent is then output as the desired terahertz signal by the log-spiral antenna surrounding the photomixer.
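The photomixing scheme can be illustrated with the small-detuning relation $\Delta\lambda \approx \lambda^2 \Delta f / c$ between the wavelength separation of the two lasers and the generated difference frequency. In the sketch below, the 1550 nm centre wavelength is our assumption for illustration (the text does not state the operating wavelength):

```python
C = 2.998e8  # speed of light [m/s]

def laser_detuning_nm(f_diff_ghz, lam_nm=1550.0):
    """Wavelength separation between the two DFB lasers that yields a given
    difference frequency (small-detuning approximation,
    delta_lambda ~ lambda^2 * delta_f / c).  The 1550 nm centre wavelength
    is an assumed value, not taken from the text."""
    return (lam_nm * 1e-9) ** 2 * (f_diff_ghz * 1e9) / C * 1e9

laser_detuning_nm(100.0)    # ~0.8 nm detuning for 100 GHz
laser_detuning_nm(1200.0)   # ~9.6 nm at the top of the 20-1200 GHz sweep
```

The sub-nanometre detunings involved are why the diode temperatures must be actively controlled to tune the terahertz output.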
The receiver photomixer, also encased in a silicon lens, is illuminated by both the terahertz wave and the laser beat. The photocurrent induced in the receiver photomixer is proportional to the amplitude of the signal's electric field \cite{terascan_manual}. A fiber stretcher extension, which adjusts the optical path length difference between the transmitter and receiver, is used to modulate the phase at kHz frequencies. The output of the receiver is then demodulated to obtain the amplitude and phase of the electric field. The minimum frequency resolution is 1 MHz when using the fiber stretcher, and the source can sweep from 20-1200 GHz in 0.2 GHz steps in approximately thirteen minutes. \subsection{Reflection} Figure \ref{fig:refl_setup} shows the setup for measuring reflection. Reflection measurements were made by mounting the sample on an adjustable mount that precisely aligns the sample to couple optimally between the source and receiver. The mount is made of aluminum and covered in an Eccosorb HR-10 layer to mitigate unwanted reflections. The setup can support standing waves that reflect through the full optical path between the transmitter and receiver. These are reduced by two carbon-loaded polyethylene flat sheets, typically 0.5 mm thick (not pictured in Figure \ref{fig:refl_setup}), placed in the optical path just after the transmitter and receiver to act as attenuators \cite{black_poly}. These plates are oriented at $45^\circ$ relative to the beam propagation direction so that reflected light exits the system. Calibration is done by measuring the reflected signal from a polished aluminum plate. The reflection from the sample is then measured with the surface of the ARC placed in the same optical plane as the aluminum plate. The ratio of the measurement to the calibration is squared to find the fractional reflected power. While the phase measurements are stable at the degree level, these data are not considered here.
The reflection was measured from 60-180 GHz for the MF alumina filters, and from 190-330 GHz for the UHF filter. The step size was set to 0.1 GHz with an integration time of 3.02 ms per frequency bin. The wedged MF filters are roughly 8 mm thick (the thickness varies between 5 and 11 mm) and the UHF filter prototype is 5 mm thick, so the 0.1 GHz frequency spacing of the reflection measurements is adequate to resolve fringing within the samples, which should be on the order of several GHz. The measurements considered here were carried out at an angle of incidence of 14 degrees for the first MF filter and at 10 degrees for the other three MF filters and the UHF filter. The reflection measurements of four SO MF LAT filters and the prototype UHF filter are shown in Figure \ref{fig:allreflplot}. The four MF filters all have mean reflections less than 2\% in the observing bands, which meets the SO design requirements for the alumina filters. Due to the wedge design of the filters, the reflection performance of the four filters is not identical, since they were measured at different locations on the filter with varying thicknesses. However, all the filters produced are in line with HFSS simulations of the ARC. The UHF prototype also has a mean reflection less than 2\% across the UHF bands, but performs slightly worse than predicted by the design HFSS simulation. Measurements of the ARC dimensions with a microscope revealed that the cuts were too deep by as much as 20 microns in places, which caused poor performance in the lower frequency band. This error was confirmed post-fabrication by investigating the variability of the test touches made on the silicon reference wafer during ARC fabrication. Even with this error, the prototype achieves less than 2\% reflectance across this band. We anticipate improved UHF ARC performance when these filters enter production for SO, since greater care will be taken to control the cut depths during fabrication.
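The adequacy of the 0.1 GHz step can be checked against the expected free spectral range of a plane-parallel slab, $\Delta f = c/(2nd)$ at normal incidence. A small sketch (our own helper) using the alumina index from the introduction:

```python
C_MM_GHZ = 299.792458  # speed of light in mm*GHz
N_ALUMINA = 3.14       # alumina index quoted in the introduction

def fringe_spacing_ghz(thickness_mm, n=N_ALUMINA):
    """Free spectral range of Fabry-Perot fringes in a plane-parallel slab
    at normal incidence: delta_f = c / (2 n d)."""
    return C_MM_GHZ / (2.0 * n * thickness_mm)

fringe_spacing_ghz(8.0)  # ~8 mm thick MF filter  -> ~6 GHz fringes
fringe_spacing_ghz(5.0)  # 5 mm UHF prototype     -> ~9.5 GHz fringes
```

Both spacings are tens of measurement bins wide, so the 0.1 GHz sampling comfortably resolves the fringing.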
\subsection{Transmission} The transmission was measured with the beam from the source aligned to point directly to the parabolic mirror coupling into the receiver. Calibration was performed by dividing the power received with the samples in place by the power received when the transmitter-to-receiver path is unblocked. These measurements were carried out from 50 GHz to 1.2 THz in 0.1 GHz steps. The results are shown in Figure \ref{fig:tmplot} (see \cite{Yukithesis} for the transmission spectra of alumina without an ARC). The results for the MF and UHF coatings place a lower limit on in-band transmission of >80\%. The ability to assess in-band transmission is limited by imperfect control of alignment and standing waves in the system. Given the reflectance measurements, and separate measurements of the loss tangent of alumina \cite{Inoue_thermalandopticalproperties}, we interpret this as consistent with 97\% transmission in-band. Both the MF and UHF coatings show a dramatic decrease in transmission above their respective bands. This apparent attenuation is more than an order of magnitude larger than expected from the loss tangent of the alumina material. This was confirmed by measuring the transmission of an uncoated alumina plate, which showed no such attenuation at high frequency. We interpret this above-band attenuation as the onset of scattering in these coatings, which is anticipated as the wavelength of the incident light becomes comparable to the pitch of the ARC. In this wavelength regime, the ARC grating no longer satisfies the ``quasi-static limit'' condition where only the zeroth-order diffraction mode propagates, and instead multi-mode propagation occurs \cite{Lalanne_EMT, Grann_EMT, Kikuta_EMT, pitch_criteria}.
While a rigorous theoretical model that encompasses this behavior is beyond the scope of this paper, we can test our interpretation by scaling the frequency axis relative to the metamaterial breakdown frequency, i.e. the frequency at which the ARC grating enters the diffractive or multi-mode limit. This is given by \cite{pitch_criteria} \begin{align} f_0 = \frac{c}{P(n_\text{alumina}+n_\text{vacuum} \sin \theta_i)} \end{align} where $P$ is the pitch of the ARC defined in Section \hyperref[sec:Design]{2}, $n_\text{alumina}$ and $n_\text{vacuum}$ are the optical indices of alumina and vacuum respectively, $\theta_i$ is the angle of incidence, and $c$ is the speed of light. The diffractive-limit threshold frequency as stated here depends on the pitch and is therefore a scale-dependent property of the ARC. For the two filter designs presented in this work, the diffractive-limit threshold frequencies are $f_\text{0, MF} = 184 \text{ GHz}$ and $f_\text{0, UHF} = 326 \text{ GHz}$ for the MF and UHF filters respectively. We plot the transmission after scaling, $f_\text{scaled} = f/f_0$, in the right panel of Figure \ref{fig:tmplot}. The qualitative agreement of these two measurements supports our interpretation, with transmission falling by approximately 10 dB per octave after entering the diffractive limit. \section{Conclusion} We have developed metamaterial ARCs for alumina filters for the SO MF and UHF bands. These ARCs achieve percent-level control of reflections across up to an octave of bandwidth, the best performance across the widest bandwidth for any alumina ARC technology to date. Four 420 mm diameter alumina filters for the SO LAT were fabricated with the MF ARC, with an ultimate production rate of 15 days per filter. The reflection performance of these filters was measured and agrees with simulations.
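Plugging the design pitches into the threshold formula above makes the scaling concrete. A sketch (our own helper; with $n_\text{alumina}=3.14$ at normal incidence this yields $\approx 183$ and $\approx 324$ GHz, within about 1\% of the 184 and 326 GHz quoted above, the small offset presumably reflecting a slightly different index or angle in the original calculation):

```python
import math

C_MM_GHZ = 299.792458   # speed of light in mm*GHz
N_ALUMINA = 3.14        # alumina index quoted in the introduction

def diffraction_onset_ghz(pitch_mm, theta_i_deg=0.0, n=N_ALUMINA):
    """f0 = c / (P * (n_alumina + n_vacuum * sin(theta_i))): the frequency
    above which the metamaterial grating supports diffracted orders."""
    return C_MM_GHZ / (pitch_mm * (n + math.sin(math.radians(theta_i_deg))))

f0_mf = diffraction_onset_ghz(0.522)    # MF pitch  -> ~183 GHz
f0_uhf = diffraction_onset_ghz(0.295)   # UHF pitch -> ~324 GHz
```

Note that $f_0$ scales inversely with pitch, which is why the finer UHF cut pattern pushes the scattering onset above its observing band.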
Additionally, a two-inch diameter prototype of the UHF ARC was fabricated, and its reflection performance was measured to have percent-level reflections across the UHF band. The transmission measurements indicate that these filters cause significant scattering above the target bands. If future work were to characterize the scattering kernel, this behavior could be exploited to help define the in-band filtering for future experiments. Since this scattering is out-of-band, it is not a liability in the current use cases. This was demonstrated with end-to-end measurements of a system including these filters, which will be presented in a future publication \cite{Grace_holopaper,Sierra_LATRtpaper}. In addition to the future work of deploying the alumina UHF ARC on a full-scale filter, the MF ARC will be diced onto two alumina filters for the SO Small Aperture Telescope (SAT). The largest of these has a diameter of 550 mm, which will be a further test for the alumina metamaterial ARC technology. We do not anticipate any production obstacles from the larger diameter, aside from a marginally longer production time. After those filters are finished, the final alumina ARC task for SO concerns the remaining dichroic band, the 20/40 GHz band (Low-Frequency or LF). An ARC design has been made for this final band, but details such as the dicing blade types and cutting procedure remain to be determined. The production rate and reliability of metamaterial ARCs for alumina optics make them a compelling technology for future millimeter and sub-millimeter experiments. Both the laser-ablated metamaterials fabricated for other experiments and the diced ARCs presented here have mechanical advantages that make them desirable for the cryogenic applications of alumina filters compared with the laminate AR coatings discussed in Section \ref{sec:Introduction}.
However, the diced metamaterials have been demonstrated across a wider bandwidth, on larger-format optical filters, and at a much larger production scale than both laser-machined ARCs and laminate coatings. These advantages show that diced metamaterial ARCs for alumina filters are a mature technology ready for deployment on current-generation experiments like SO, and deserve serious consideration as a baseline technology for future large-scale observatories like CMB-S4. While the work presented here is limited to flat surfaces, extending this technology to curved lenses is straightforward, since the same machine that produced the ARCs for the alumina filters has also produced metamaterial ARCs for curved silicon lenses. Robust ARCs are critical to the present and future generations of millimeter-wave experiments. This work demonstrates that diced metamaterial ARCs provide a combination of good performance, extreme robustness, and a manageable production rate. \begin{backmatter} \bmsection{Funding} This work was funded by the Simons Foundation (Award \#457687, B.K.). JEG is supported by a NASA Space Technology Research Fellowship (80NSSC21K0411). SS is supported by a National Science Foundation Graduate Research Fellowship under Grant No. DGE 1746045. \bmsection{Acknowledgments} This document was prepared by The Simons Observatory using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. \bmsection{Disclosures} The authors declare no conflicts of interest. \bmsection{Data availability} Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
\end{backmatter} \bibliography{sample}
Title: Orbital dynamics and histories of satellite galaxies around Milky Way-mass galaxies in the FIRE simulations
Abstract: The orbits of satellite galaxies encode rich information about their histories. We investigate the orbital dynamics and histories of satellite galaxies around Milky Way (MW)-mass host galaxies using the FIRE-2 cosmological simulations, which, as previous works have shown, produce satellite mass functions and spatial distributions that broadly agree with observations. We first examine trends in orbital dynamics at z = 0, including total velocity, specific angular momentum, and specific total energy: the time of infall into the MW-mass halo primarily determines these orbital properties. We then examine orbital histories, focusing on the lookback time of first infall into a host halo and pericenter distances, times, and counts. Roughly 37 per cent of galaxies with Mstar < 10^7 Msun were `pre-processed' as a satellite in a lower-mass group, typically ~2.7 Gyr before falling into the MW-mass halo. Half of all satellites at z = 0 experienced multiple pericenters about their MW-mass host. Remarkably, for most (67 per cent) of these satellites, their most recent pericenter was not their minimum pericenter: the minimum typically was ~40 per cent smaller and occurred ~6 Gyr earlier. These satellites with growing pericenters appear to have multiple origins: for about half, their specific angular momentum gradually increased over time, while for the other half, most rapidly increased near their first apocenter, suggesting that a combination of a time-dependent MW-mass halo potential and dynamical perturbations in the outer halo caused these satellites' pericenters to grow. Our results highlight the limitations of idealized, static orbit modeling, especially for pericenter histories.
https://export.arxiv.org/pdf/2208.05977
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} galaxies : kinematics and dynamics -- galaxies : Local Group -- methods : numerical \end{keywords} \section{Introduction} % \label{sec:intro} % The satellite galaxies around the Milky Way (MW) and M31 in the Local Group (LG) are unique systems in that we can measure both resolved stellar populations and full orbits to derive orbital histories. Key answerable questions regarding satellite formation histories include: What are their orbits today and what were their orbital histories? When did they first become satellites? How many were satellites of a lower-mass group, like the Large Magellanic Cloud \citep[LMC;][]{Kallivayalil18, Patel20, Pardy20}, and when did they become satellites of such groups? How close have they orbited to the MW and M31? Understanding the answers to these questions within the LG will also improve our understanding of satellite evolution in systems beyond the LG. Thanks to numerous studies \citep[for example][]{Kallivayalil13, Kallivayalil18, Fritz18_segue, GaiaDR2, Patel20} and HST treasury programs (for example, GO-14734, PI Kallivayalil; GO-15902, PI Weisz), we now have or soon will have 3D velocity information for the majority of the satellites in the LG, with continued higher precision \citep[for example, see updated values between][and references therein]{McConnachie12, Fritz18}. With the advent of new data coming from studies such as the Satellites Around Galactic Analogs (SAGA) survey \citep{SAGA_I, SAGA_II}, which focus on measuring properties of satellites of MW-mass galaxies beyond the LG, understanding general orbital/infall histories of satellites is imperative. Given this rich data, combined with observationally informed estimates of the MW's gravitational potential \citep[for example][]{BlandHawthorn16, McMillan17, Li20, Deason21, CorreaMagnus22}, one can constrain the orbital histories of satellites. 
However, a key challenge is understanding limitations and biases from assuming a static, non-evolving potential \citep[for example][]{Klypin99, DSouza22}. One of the most important events in a satellite galaxy's history is when it first orbited within the MW-mass halo, or any more massive halo. After falling into a more massive host halo, a satellite galaxy temporarily can orbit outside of the host's virial radius (a `splashback' satellite); typically such satellites remain gravitationally bound and orbit back within the host's virial radius \citep[for example][]{Ludlow09, Wetzel14}. As a satellite orbits within the host halo, the host's hot gas can quench the satellite's star formation through ram-pressure stripping \citep[for example][]{Gunn72, vandenBosch08, Fillingham19, RodriguezWimberly19, Samuel22}. Not only can a lower-mass galaxy fall into a MW-mass halo, but it can become a satellite of another intermediate-mass galaxy before falling into the MW-mass halo, called `pre-processing' \citep[for example][]{DOnghia08, Li08, Wetzel15, Deason15, Kallivayalil18, Jahn19, Patel20}. This process can suppress and even quench star formation in the low-mass galaxy before it falls into its MW-mass halo \citep[for example][]{Jahn21,Samuel22}. If satellites fell into a MW-mass halo via a group, they should have similar orbits, at least for one or a few orbital timescales, with broadly similar orbital angular momenta and energy \citep[for example][]{Sales11, Deason15, Sales17, Pardy20}. Some theoretical studies suggest that no current satellites of the MW were satellites during the epoch of reionization at $z \gtrsim 6$ \citep[for example][]{Wetzel15, RodriguezWimberly19}; thus, if a satellite quenched at $z \gtrsim 6$, the host environment could not have quenched it, such that the effects of the host environment and cosmic reionization are, in principle, separable.
Many works have studied infall histories and orbital properties of simulated satellite galaxies of MW-mass halos \citep[for example][]{Slater13, Wetzel15, Li20_infall, Bakels21, DSouza21, Robles21, Ogiya21}, sometimes with the intent of deriving properties of observed satellites of the MW \citep[for example][]{Rocha12, Fillingham19, Miyoshi20, RodriguezWimberly21}. However, many such previous works used dark matter only (DMO) simulations, and the inclusion of baryonic physics is critical to model accurately the satellite population \citep[see][]{Brooks14, Bullock17, Sales22}. One important process that affects the orbital evolution of satellites is dynamical friction. As satellites orbit within a host halo, they induce an over-density of dark matter behind them, which causes a drag force called dynamical friction that slows their orbits and can lead them to merge with the host galaxy \citep{Chandrasekhar43, Ostriker75}. Dynamical friction is more efficient when the satellite and host galaxy or halo are of similar mass \citep[for example][]{BoylanKolchin08, Jiang08, Wetzel10}. Upon accretion, massive satellites can also induce global perturbations within the larger MW-mass galaxy, which also can affect the orbits of less-massive satellites \citep[for example][]{Weinberg86, Weinberg89, Colpi99, Tamfal21}. Furthermore, because the dark matter in the satellite galaxy is tidally stripped as it orbits throughout the host halo, this stripped material further can slow the satellite \citep{Miller20}. One way to parameterize the efficiency of dynamical friction is via the time from first infall it takes a satellite to merge into its host galaxy/halo. 
For example, \citet{Wetzel10} approximated this merging time as \begin{equation} \label{eq:dyn} t_{\rm merge} \approx C_{\rm dyn} \frac{M_{\rm host}/M_{\rm sat}}{\ln(1+M_{\rm host}/M_{\rm sat})} t_{\rm Hubble} \end{equation} where $M_{\rm host}$ is the total mass of the host halo, $M_{\rm sat}$ is the total mass of the smaller satellite halo, and $t_{\rm Hubble} = H^{-1}(z)$ is the inverse of the Hubble rate. They found that $C_{\rm dyn} \approx 0.2 - 0.3$ agrees well with the results of a large-volume cosmological DMO simulation. This implies that, for a satellite to merge within a Hubble time (roughly the age of the Universe), the host-to-satellite halo mass ratio must be smaller than about $20:1$. Because dynamical friction robs (primarily higher-mass) satellites of their orbital energy, one might expect satellites to shrink monotonically in orbital radius over time, until the satellite merges/disrupts. Many studies implement idealized simulations (both with and without baryonic physics) that incorporate the effects of dynamical friction, mass loss, and ram-pressure stripping \citep[for example][]{Weinberg86, Taylor01, Penarrubia02, Penarrubia05, Amorisco17, Jiang21}, but testing this assumption in a cosmological setting is imperative. Additionally, because the LMC has satellites of its own \citep[for example][]{DOnghia08, Deason15, Kallivayalil18}, many studies have investigated satellite spatial distributions and their dynamics with an additional LMC-like contribution to the host potential to test the dynamical effects of a massive satellite on nearby lower-mass galaxies \citep[for example][]{GaravitoCamargo19, Patel20, Samuel21, Vasiliev21, DSouza22, Pace22}. In this paper, we examine the orbital histories of satellite galaxies using baryonic simulations that match general observed properties of satellites of MW-mass galaxies.
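For intuition, Equation~\ref{eq:dyn} can be evaluated directly. The following sketch assumes an illustrative $C_{\rm dyn} = 0.25$ and a $z = 0$ Hubble time of $14 \Gyr$; these inputs are for demonstration only, not fitted values:

```python
import numpy as np

def t_merge(m_host, m_sat, t_hubble_gyr, c_dyn=0.25):
    """Dynamical-friction merging time of equation (1), in Gyr.

    m_host, m_sat : total halo masses of host and satellite (any
    consistent units); c_dyn ~ 0.2 - 0.3 (Wetzel 2010).
    """
    mu = m_host / m_sat  # host-to-satellite halo mass ratio
    return c_dyn * mu / np.log(1.0 + mu) * t_hubble_gyr

# The time-scale grows almost linearly with the mass ratio, so dynamical
# friction drives only relatively massive satellites to merge quickly.
t_10 = t_merge(1e12, 1e11, 14.0)   # 10:1 ratio
t_100 = t_merge(1e12, 1e10, 14.0)  # 100:1 ratio
```

For a 10:1 mass ratio the merging time is of order the Hubble time, while a 100:1 satellite survives for many Hubble times, consistent with the threshold quoted above.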
Specifically, we use the FIRE-2 cosmological zoom-in simulations, which form realistic MW-mass galaxies \citep[for example][]{GarrisonKimmel18, Hopkins18, Sanderson20, Bellardini21} and populations of satellite galaxies around them \citep[][]{Wetzel16, GarrisonKimmel19a, Samuel20, Samuel22}. The two main topics that we explore are: (1) The relation of orbital properties of satellite galaxies at $z = 0$ to their orbital histories, including lookback times of infall, distances from the MW-mass host, and stellar masses. These relations not only help characterize the orbits of satellites, but their orbital histories also provide insight, or caution, into approximating history-based properties, such as infall time or pericentre information, based on their present-day properties, such as distance and stellar mass. (2) Testing a common expectation that the orbits of satellite galaxies shrink over time, that is, that a satellite's most recent pericentric distance is the minimum that it has experienced. In Santistevan et al., in prep, we will compare directly the orbits of satellites in cosmological simulations of MW-mass galaxies to orbits from integration in a static, idealized MW-mass halo \citep[see also ][and references therein]{Vasiliev21, DSouza22}. \section{Methods} % \label{sec:methods} % \subsection{FIRE-2 Simulations} % \label{sec:sims} % \begin{table*} \centering \begin{threeparttable} \caption{ Properties at $z = 0$ of the 13 MW/M31-mass galaxies in the FIRE-2 simulation suite that we analyze, ordered by decreasing stellar mass. Simulations with `m12' names are isolated galaxies from the Latte suite, while the others are from the `ELVIS on FIRE' suite of LG-like paired hosts. 
Columns: host name; $M_{\rm star,90}$ is the host's stellar mass within $R_{\rm star,90}$, the disk radius enclosing 90 per cent of the stellar mass within 20 kpc; $M_{\rm 200m}$ is the halo total mass; $R_{\rm 200m}$ is the halo radius; $N_{\rm satellite}$ is the number of satellite galaxies at $z=0$ with $\Mstar>3\times10^4\Msun$ that ever orbited within $R_{\rm 200m}$; $M^{\rm sat}_{\rm star,max}$ is the stellar mass of the most massive satellite at $z = 0$; $M^{\rm sat}_{\rm halo,peak,max}$ is the peak halo mass of the most massive satellite; and the reference that introduced each simulation. In Remus and Juliet, the satellite with the largest stellar mass is not the same as the satellite with the largest subhalo mass. } \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \hline Name & $M_{\rm star,90}$ & $M_{\rm 200m}$ & $R_{\rm 200m}$ & $N_{\rm satellite}$ & $M^{\rm sat}_{\rm star,max}$ & $M^{\rm sat}_{\rm halo,peak,max}$ & Reference \\ & [$10^{10} \Msun$] & [$10^{12} \Msun$] & [kpc] & & [$10^8 \Msun$] & [$10^{10} \Msun$] & \\ \hline m12m & 10.0 & 1.6 & 371 & 44 & 4.68 & 3.78 & A \\ Romulus & 8.0 & 2.1 & 406 & 52 & 2.34 & 2.68 & B \\ m12b & 7.3 & 1.4 & 358 & 32 & 0.58 & 1.21 & C \\ m12f & 6.9 & 1.7 & 380 & 43 & 1.61 & 2.01 & D \\ Thelma & 6.3 & 1.4 & 358 & 33 & 0.33 & 1.78 & C \\ Romeo & 5.9 & 1.3 & 341 & 33 & 1.92 & 3.34 & C \\ m12i & 5.5 & 1.2 & 336 & 26 & 1.24 & 2.40 & E \\ m12c & 5.1 & 1.4 & 351 & 40 & 15.1 & 16.7 & C \\ m12w & 4.8 & 1.1 & 319 & 38 & 7.79 & 4.88 & F \\ Remus & 4.0 & 1.2 & 339 & 34 & 0.50$^*$ & 2.07$^*$ & B \\ Juliet & 3.3 & 1.1 & 321 & 38 & 2.51$^*$ & 3.16$^*$ & C \\ Louise & 2.3 & 1.2 & 333 & 34 & 2.50 & 3.71 & C \\ m12r & 1.5 & 1.1 & 321 & 27 & 28.4 & 13.7 & F \\ \hline \hline \end{tabular} \label{tab:hosts} \begin{tablenotes} \item \textit{Note:} Simulation introduced in: A: \citet{Hopkins18}, B: \citet{GarrisonKimmel19b}, C: \citet{GarrisonKimmel19a}, D: \citet{GarrisonKimmel17}, E: \citet{Wetzel16}, F: \citet{Samuel20}.
\end{tablenotes} \end{threeparttable} \end{table*} We use the cosmological zoom-in baryonic simulations of both isolated MW/M31-mass galaxies and LG-like pairs from the Feedback In Realistic Environments (FIRE) project\footnote{See the FIRE project web site: http://fire.northwestern.edu} \citep{Hopkins18}. We ran these simulations using the hydrodynamic plus $N$-body code \textsc{Gizmo} \citep{Hopkins15}, with the mesh-free finite-mass (MFM) hydrodynamics method \citep{Hopkins15}. We used the FIRE-2 physics model \citep{Hopkins18} that includes several radiative heating and cooling processes such as Compton scattering, Bremsstrahlung emission, photoionization and recombination, photoelectric, metal-line, molecular, fine-structure, dust-collisional, and cosmic-ray heating across temperatures $10-10^{10}\K$, including the spatially uniform and redshift-dependent cosmic ultraviolet (UV) background from \cite{FaucherGiguere09}, for which HI reionization occurs at $z_{\rm reion} \approx 10$. Star formation occurs in gas that is self-gravitating, Jeans unstable, molecular \citep[following][]{Krumholz11}, and dense ($n_{\rm H} > 1000$ cm$^{-3}$). Star particles represent single stellar populations, assuming a \cite{Kroupa01} initial mass function, and they evolve along stellar population models from \textsc{STARBURST99 v7.0} \citep{Leitherer99}, inheriting masses and elemental abundances from their progenitor gas cells. FIRE-2 simulations also include the following stellar feedback processes: core-collapse and Ia supernovae, stellar winds, and radiation pressure. We used the code \textsc{MUSIC} \citep{Hahn11} to generate the cosmological zoom-in initial conditions at $z \approx 99$ % within periodic cosmological boxes of length $70.4 - 172 \Mpc$, sufficiently large to avoid unrealistic periodic gravity effects on individual MW-mass halos. For each simulation, we save 600 snapshots with $20 - 25 \Myr$ spacing down to $z=0$, assuming a flat $\Lambda$CDM cosmology. 
Consistent with \citet{Planck18}, we used cosmological parameters in the following ranges: $h = 0.68 - 0.71$, $\sigma_{\rm 8} = 0.801 - 0.82$, $n_{\rm s} = 0.961 - 0.97$, $\Omega_{\Lambda} = 0.69 - 0.734$, $\Omega_{\rm m} = 0.266 - 0.31$, and $\Omega_{\rm b} = 0.0449 - 0.048$. Our galaxy sample consists of the 12 MW/M31-mass galaxies in \citet{Santistevan20}, as well as one additional galaxy, `m12r', first introduced in \citet{Samuel20}. These are from the Latte suite of isolated MW/M31-mass galaxies introduced in \citet{Wetzel16} and the `ELVIS on FIRE' suite of LG-like MW+M31 pairs, introduced in \citet{GarrisonKimmel19a}. Table~\ref{tab:hosts} lists their stellar mass, $M_{\rm star,90}$, halo mass, $M_{\rm 200m}$, and radius $R_{\rm 200m}$ at $z = 0$, where the latter two are defined using 200 times the mean matter density at $z = 0$. The Latte suite consists of halos with $M_{\rm 200m} = 1 - 2 \times 10^{12} \Msun$ at $z = 0$ with no other similar-mass halos within $5 \times R_{\rm 200m}$. We also chose m12r and m12w to have LMC-mass satellite analogs near $z \approx 0$ \citep{Samuel20}. The initial masses of star particles and gas cells are $7100 \Msun$, but the average star particle mass at $z = 0$ is $\approx 5000 \Msun$ from stellar mass loss. Within the zoom-in region, the mass of DM particles is $3.5 \times 10^4 \Msun$. The gravitational softening lengths are 4 and 40 pc (Plummer equivalent), co-moving at $z > 9$ and physical thereafter, for star and DM particles, respectively. Gas cells use adaptive force softening, equal to their hydrodynamic smoothing, down to 1 pc.
Each pair of halos in the `ELVIS on FIRE' suite of LG-like pairs was chosen based on their individual mass ($M_{\rm 200m} = 1 - 3 \times 10^{12} \Msun$) and combined masses (total LG mass between $2 - 5 \times 10^{12} \Msun$), as well as their current separation ($600 - 1000\kpc$) and radial velocities at $z = 0$ ($\rm \upsilon_{rad} < 0$), and isolated environment (no other massive halos within $2.8 \Mpc$ of either host center). The mass resolution is $\approx 2 \times$ better in the `ELVIS on FIRE' suite, with initial baryonic particle masses $3500 - 4000 \Msun$. For all results in this paper, we investigated possible differences between the isolated and LG-like MW-mass galaxies, which also partially tests resolution convergence, and we find negligible differences between the two samples. These 13 host galaxies reflect formation histories of general MW/M31-mass (or LG-like) galaxies within our selection criteria and exhibit observational properties broadly similar to the MW and M31, including: realistic stellar halos \citep{Bonaca17, Sanderson18}, dynamics of metal-poor stars from early galaxy mergers \citep{Santistevan21}, satellite galaxy stellar masses and internal velocity dispersions \citep{Wetzel16, GarrisonKimmel19b}, radial and 3-D spatial distributions \citep{Samuel20, Samuel21}, and star-formation histories and quiescent fractions \citep{GarrisonKimmel19b, Samuel22}. \subsection{Halo/Galaxy Catalogs and Merger Trees} % \label{sec:rockstar} % We use the \textsc{ROCKSTAR} 6-D halo finder \citep{Behroozi13a} to generate (sub)halo catalogs using only DM particles at each of the 600 snapshots, and \textsc{CONSISTENT-TREES} \citep{Behroozi13b} to generate merger trees. As a consequence of the large zoom-in volume for each host, there is no low-resolution DM particle contamination in any of the (sub)halos that we analyze. \citet{Samuel20} describes our star particle assignment to (sub)halos in post-processing; we briefly review it here. 
We first select star particles within $d < 0.8 R_{\rm halo}$ (out to a maximum distance of $30 \kpc$) with velocities $\upsilon < 2 V_{\rm circ,max}$ of the (sub)halo's center-of-mass (COM) velocity. We then keep only the star particles within $d < 1.5 R_{\rm star,90}$ (the radius enclosing 90 per cent of the stellar mass) of the (then) current member stellar population's COM and halo center position. We further kinematically select the star particles with velocities $\upsilon < 2 \sigma_{\rm vel,star}$ (the velocity dispersion of current member star particles) of the COM velocity of member star particles. Finally, we iterate on these spatial and kinematic criteria, which guarantees that the COM of the galaxy and (sub)halo are consistent with one another, until the (sub)halo's stellar mass converges to within 1 per cent. We use two publicly available analysis packages: \textsc{HaloAnalysis}\footnote{\url{https://bitbucket.org/awetzel/halo\_analysis}} \citep{HaloAnalysis} for assigning star particles to halos and for reading and analyzing halo catalogs/trees, and \textsc{GizmoAnalysis}\footnote{\url{https://bitbucket.org/awetzel/gizmo\_analysis}} \citep{GizmoAnalysis} for reading and analyzing particles from Gizmo snapshots. \subsection{Selection of Satellites} \label{sec:selection} We include all luminous satellite galaxies at $z = 0$ with $\Mstar > 3\times10^4\Msun$ that have crossed within their MW-mass host halo's $R_{\rm 200m}(z)$. This lower limit on stellar mass corresponds to $\approx 6$ star particles, the limit for reasonably resolving the total stellar mass \citep{Hopkins18}. Our sample includes `splashback' satellites that are currently beyond the host's $R_{\rm 200m}$, which are typically still gravitationally bound to the host but simply near apocentre \citep[for example][]{Wetzel14}. 
As Table~\ref{tab:hosts} shows, the number of surviving luminous satellites at $z = 0$, including this splashback population, per host ranges from 26 to 52, and our sample totals 473 satellites. Both \citet{GarrisonKimmel14} and \citet{Wetzel15} showed that in the ELVIS DMO simulation suite the average number of subhalos that would typically host galaxies with $\Mstar\gtrsim10^5\Msun$ is $\sim31-45$. However, because stellar feedback and baryonic physics can affect galaxy formation, \citet{Wetzel16} and \citet{GarrisonKimmel19a} showed that the number of satellites above this mass range decreases to $\sim13-15$. More recently, \citet{Samuel20} showed that the radial distributions of these satellites are consistent with the MW and M31 out to $300\kpc$. The MW and M31 have 13 and 27 satellites, respectively, and the MW-mass hosts in our simulations bracket these values with 11 to 27 satellites. Unless otherwise stated, in our analysis we refer to luminous satellites, i.e. satellites containing stars, as simply `satellites'. In computing host-averaged results below, to avoid biasing our results to the hosts with larger satellite populations, we oversample the satellites so that each host contributes a nearly equal fraction of satellites to the total. Specifically, we multiply the number of satellites in the MW-mass host with the largest population (Romulus, with 52 satellites) by 10, which results in an oversampled population of 520. Then, for each of the other MW-mass hosts, we divide 520 by the number of their satellites and obtain the nearest integer multiplicative factor, $m$, that we apply to each host's satellite population, $N_{\rm sat}$ (see Table~\ref{tab:hosts}), so that each host contains $\approx 500 - 530$ satellites, that is, the oversampled satellite populations are within 5 per cent of one another. Thus, when plotting properties, such as pericentre distances, we count each satellite in a given host $m$ times for each property in the figures.
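The oversampling scheme above reduces to computing one integer factor per host; a minimal sketch, with the per-host satellite counts taken from Table~\ref{tab:hosts}:

```python
def oversampling_factors(n_sat, base_factor=10):
    """Integer multiplicative factor m for each host, chosen so that every
    host contributes a nearly equal number of satellites to stacked
    distributions. n_sat maps host name -> number of z = 0 satellites."""
    # Anchor the target on the host with the largest satellite population
    target = base_factor * max(n_sat.values())  # 10 x 52 = 520 for Romulus
    return {host: round(target / n) for host, n in n_sat.items()}

# Satellite counts per host from Table 1
n_sat = {'m12m': 44, 'Romulus': 52, 'm12b': 32, 'm12f': 43, 'Thelma': 33,
         'Romeo': 33, 'm12i': 26, 'm12c': 40, 'm12w': 38, 'Remus': 34,
         'Juliet': 38, 'Louise': 34, 'm12r': 27}
m = oversampling_factors(n_sat)
```

Each host then contributes $m \times N_{\rm sat}$ satellites, within 5 per cent of the 520 anchored on Romulus.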
Figure~\ref{fig:mstar_v_mhalo} shows the relation between stellar mass and halo mass (SMHM) for our satellites. We show stellar mass \textit{at $z = 0$} versus \textit{peak} (sub)halo mass throughout each galaxy's history. The blue dot-dashed line shows the median SMHM relation of non-satellites in the same simulations, which we define as low-mass galaxies that never crossed the virial radius of the MW-mass host and that currently orbit beyond $1\Mpc$ at $z = 0$. The formation histories of non-satellites differ from satellites because they form in less dense regions of the Universe. This, along with the UV heating of gas in isolated low-mass galaxies, may explain their slightly smaller $\Mstar$; however, a deeper investigation is outside the scope of this paper. The grey dotted line at $\Mstar = 3\times10^4\Msun$ shows our lower limit in stellar mass. To compare to other SMHM relations from the literature, we also show extrapolations from \citet{Moster13, GarrisonKimmel17_smhm, Behroozi20} in the dashed red, green, and black lines, respectively. We also include the median $\Mstar$, 68th percentile, and full distribution of values from \citet{Fitts17} for higher-resolution (initial baryon masses of $500\Msun$) isolated low-mass galaxies, shown as the pink horizontal line and the dark and light pink shaded regions, respectively. Finally, we include four higher-resolution, isolated low-mass galaxies from \citet{Wheeler19} (named m09, m10q, m10v, and m10vB in their work), with initial baryon masses of $30\Msun$ (magenta stars). The SMHM relation in our sample broadly agrees with these (extrapolated) semi-empirical estimates and values of isolated low-mass galaxies; for a more detailed discussion about the SMHM relation in FIRE-2, see \citet{Hopkins18, Wheeler19}. The low-mass end of our SMHM relation in Figure~\ref{fig:mstar_v_mhalo} flattens at $\Mstar\lesssim10^5\Msun$, purely because of our stellar mass selection of $\Mstar>3\times10^4\Msun$.
We note that the galaxy with the smallest $\Mstar$ from \citet{Wheeler19} is beyond the resolution limit in our sample, and the second smallest $\Mstar$ galaxy would be only marginally resolved. However, with better resolution, the SMHM relation in Figure~\ref{fig:mstar_v_mhalo} would likely follow a similar trend as the extrapolations, and the lowest-$\Mstar$ isolated galaxy would likely lie at the lower end of the full distribution scatter. Thus, while our sample is complete in stellar mass, it is complete in halo mass only for $\Mhp\gtrsim3\times10^9\Msun$. We find only minor differences in our results if selecting satellites via $\Mhp$ instead (see Appendix~\ref{app:mhalo}). We checked the SMHM relation in Figure~\ref{fig:mstar_v_mhalo} instead using the \textit{peak} stellar mass throughout a galaxy's history, $M_{\rm star,peak}$. The two relations are similar, but the SMHM relation with $M_{\rm star,peak}$ has a $\approx 5$ per cent higher normalization, on average, because of stellar mass loss after infall. \subsection{Numerical Disruption} % \label{sec:disrupt} % Many previous studies \citep[for example][]{vandenBosch18_I, vandenBosch18_II, Bovy20} noted that without proper mass and spatial resolution, satellite subhalos can suffer premature artificial numerical disruption. Thus, sufficient resolution and implementation of the relevant physics are necessary to model accurately the evolution of satellite galaxies. As a partial test of this, we investigated differences between the isolated and LG-like satellite populations, which have a $\approx 2 \times$ resolution difference, and we saw no strong differences in our results.
\citet{Samuel20} also tested resolution convergence of the satellite radial profiles using the FIRE-2 simulations with dark matter particle masses of $m_{\rm dm} = 3.5 \times10^4\Msun$ and $m_{\rm dm} = 2.8 \times10^5\Msun$ and found generally consistent results between the two, because there are enough dark matter particles ($ \gtrsim 2 \times 10^4$) in the lowest-mass luminous subhalos ($\Mhp \gtrsim 10^8 \Msun$) to prevent numerical disruption, and many more than that in our typical (more massive) luminous subhalos, thus satisfying criteria such as those in \citet{vandenBosch18_II}. Also, our DM particle mass resolution is $m_{\rm dm} = 2 - 3.5\times10^4\Msun$ and DM force softening length is $40$ pc, significantly better than previous work like \citet{Wetzel15}, who used the ELVIS DMO simulations with $m_{\rm dm} = 1.9 \times 10^5 \Msun$ and $140$ pc but found results broadly consistent with ours. Nonetheless, any simulations like ours necessarily operate at finite resolution, which inevitably leads to some degree of numerical disruption; the reader should bear this in mind. \subsection{Calculating pericentre} % \label{sec:pipeline} % We calculate pericentres by tracking the main progenitor back in time using the merger trees, which store each satellite's galactocentric distance at each of the 600 snapshots. We first ensure that the satellite is within the MW-mass host halo at a given snapshot. Then, we check if the galactocentric distance reaches a minimum within $\pm 20$ snapshots, corresponding to a time window of $\approx 1 \Gyr$. Given the $\approx 25 \Myr$ time spacing between snapshots, we fit a cubic spline to the distance and time arrays across this interval. We then find the minimum in the spline-interpolated distance and record the corresponding spline-interpolated time.
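The spline refinement step can be sketched as follows, here using scipy's `CubicSpline` on a synthetic orbit rather than the actual merger-tree distance arrays; the $\pm 20$-snapshot window and $25 \Myr$ cadence match the text, while the orbit itself is illustrative:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def refine_pericentre(t, d, i_min, window=20):
    """Refine a discrete minimum of the distance array d(t) at index i_min
    by fitting a cubic spline within +/- `window` snapshots and locating
    the minimum of the interpolant. Returns (t_peri, d_peri)."""
    lo, hi = max(0, i_min - window), min(len(t), i_min + window + 1)
    spline = CubicSpline(t[lo:hi], d[lo:hi])
    # Candidate extrema are roots of the first derivative; include the
    # window edges in case the minimum sits on the boundary.
    candidates = spline.derivative().roots(extrapolate=False)
    candidates = np.append(candidates, [t[lo], t[hi - 1]])
    t_peri = candidates[np.argmin(spline(candidates))]
    return float(t_peri), float(spline(t_peri))

# Synthetic orbit sampled every 25 Myr, with a pericentre of 45 kpc at
# t = 6.01 Gyr (a time that falls between two snapshots)
t = np.arange(0.0, 13.8, 0.025)
d = 45.0 + 150.0 * np.sin(np.pi * (t - 6.01) / 9.0) ** 2
t_peri, d_peri = refine_pericentre(t, d, int(np.argmin(d)))
```

The refined pericentre recovers the sub-snapshot timing and distance that the raw $25 \Myr$ sampling alone would miss.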
We checked how our results differ if varying the window to $\pm$ 4, 8, and 10 snapshots by visually inspecting each satellite's orbit history and conclude that a window of $\pm 20$ snapshots reduces nearly all `false' pericentres, that is, instances in which the criteria above are met because of numerical noise in the orbit or a short-lived perturbation. Because the center of mass of the MW-mass galaxy does not perfectly coincide with the center of mass of its DM host halo, we also checked how our results vary between using distances with respect to the center of the galaxy versus the halo: we find no meaningful difference. We additionally checked how our results vary when using distances with respect to the center of the satellite galaxy versus the center of the satellite (sub)halo, again finding no significant differences. \subsection{Calculating Gravitational Potential and Total Energy} \label{sec:potential} We also explore trends with a satellite's specific orbital total energy, $E$ at $z = 0$, defined as the sum of the kinetic and potential energies. Our simulations store the value of the gravitational potential for all particles at each snapshot, so to calculate the potential for each satellite at $z = 0$, we select all star, gas, and DM particles within $\pm 5 \kpc$ of the satellite (sub)halo's virial radius, to limit biasing from the satellite's self-potential, and we compute the mean potential across these particles. Given that some satellites are in LG-like environments, we normalize $E$ at the MW-mass halo radius, such that $E(d = R_{\rm 200m}) = 0$, that is, the sum of the host potential at $R_{\rm 200m}$ and the kinetic energy of a circular orbit at $R_{\rm 200m}$ is 0. \section{Results} % \label{sec:results} % Throughout the paper, we present results for all satellite galaxies at $z = 0$, based on their stellar mass ($\Mstar > 3 \times 10^4 \Msun$), across all of our MW-mass hosts. 
Although \citet{Santistevan20} noted that MW-mass halos/galaxies in LG-like pairs formed $\sim 1.6 \Gyr$ earlier than those that are isolated, we compared satellites based on isolated versus LG-like environment and find negligible differences in any properties that we investigate. This agrees with the lack of environmental dependence found in \citet{Wetzel15}, who investigated the infall histories of satellite subhalos in the ELVIS dark matter-only simulations. Appendix~\ref{app:mhalo} examines how our results change by selecting satellites by their peak halo mass. In summary, selecting via halo mass yields qualitatively similar results, but given the scatter in the SMHM relation, the trends with halo mass are smoother and any features are sharper. Although the dark matter (sub)halo mass is more dynamically relevant to the orbits of satellite galaxies, we present our results with a stellar-mass-selected sample, because stellar mass is observationally easier to measure than halo mass. Finally, Appendix~\ref{app:dmo} compares our results for baryonic simulations against DMO simulations of the same systems. In summary, the lack of a MW-mass galaxy in the DMO simulations allows satellites to survive longer, orbit closer to the center of the halo, and complete more orbits. We examine trends guided by which orbital properties are relevant to different phenomena. As we will show, specific angular momentum and specific total energy provide insight into when a satellite fell into the MW-mass halo. We also explore trends with satellite mass, in part to understand where dynamical friction becomes important: from Equation~\ref{eq:dyn}, for the MW-mass halos with $M_{\rm 200m}(z = 0) \approx 10^{12}\Msun$ (and lower $M_{\rm 200m}$ at earlier times), we expect dynamical friction to significantly affect satellites with halo masses $\gtrsim 3 \times 10^{10}\Msun$, or $\Mstar \gtrsim 10^8 \Msun$.
We also focus on infall times, to understand how long satellites have been orbiting in the host halo environment, and we explore the incidence of pre-processing in a lower-mass group prior to MW-mass infall. We also examine properties of orbital pericentre, given that satellites typically feel the strongest gravitational tidal force and the strongest ram pressure at pericentre. In this paper, we present trends for the simulated satellite populations only; however, in the future we plan to investigate differences between the simulations and results obtained from idealized orbit-modeling methods. Ultimately we will provide a framework to derive similar orbital properties from satellites in the MW and M31 using the satellite populations in the simulations, and compare the results. Thus, we leave direct observational comparisons for future work. \subsection{Orbital properties today} \label{sec:dynamics} We first investigate the instantaneous orbital properties of satellites at $z = 0$, including approximate integrals of motion like angular momentum and energy. Figure~\ref{fig:dynamics} shows total velocity, specific orbital total energy, $E$, and specific angular momentum, $\ell$, as a function of lookback time of infall into the MW-mass halo, $\MWinfall$, galactocentric distance, $d$, and stellar mass, $\Mstar$. The top of each column shows distributions of $\MWinfall$, $d$, and $\Mstar$. In particular, the distribution of $\MWinfall$ is relatively flat, with a modest peak $\approx 9 \Gyr$ ago. For reference, all panels versus $\MWinfall$ (left) include a vertical shaded region to represent the free-fall time at $R_{\rm 200m}$, $t_{\rm ff} = \sqrt{ 3 \pi / \left(32 G \rho_{\rm 200m}\right) }$, where $t_{\rm ff} = 2.8 - 3 \Gyr$ across our hosts. In all panels versus $d$ (middle), the vertical shaded region shows the range of $R_{\rm 200m}$ for the MW-mass halos, as Table~\ref{tab:hosts} lists.
Similarly, all panels versus total velocity and $\ell$ show horizontal shaded bands that represent the range of $V_{\rm 200m}$ and $L_{\rm 200m}$ across hosts, where $V_{\rm 200m} = \sqrt{GM_{\rm 200m}/R_{\rm 200m}}$ is the velocity of a circular orbit at the virial radius, and $L_{\rm 200m} = V_{\rm 200m}\times R_{\rm 200m}$ is the specific angular momentum of that orbit. Because the satellites fell in at different times and orbit at various distances with unique stellar masses, we do not expect them to have values equal to $V_{\rm 200m}$ or $L_{\rm 200m}$, so we provide these shaded regions as a reference only. \subsubsection{Total velocity} % We first present trends in the total velocity, corresponding to the top row in Figure~\ref{fig:dynamics}. Considering the trends in total velocity with infall time (top left), satellites that fell in $< 1 \Gyr$ ago have not yet experienced a pericentre, so they show an increase in total velocity with time since infall from $0.5 - 1.5 \Gyr$ ago, because these satellites are near pericentre. The total velocity then decreases with increasing time since infall, because those satellites are now near apocentre. We see a similar, but weaker, peak in total velocity $\sim 6 \Gyr$ ago, from a marginally phase-coherent population near pericentre, but after a few orbits, satellites become out of phase. Satellites that fell in $\lesssim 3 \Gyr$ ago typically have total velocities of $150 - 250 \kmsi$, while earlier infalling satellites are only orbiting at $100 - 185 \kmsi$. Comparing these infall times to $t_{\rm ff}$ in the vertical shaded bands, satellites with $\MWinfall < t_{\rm ff}$ often have larger velocities, and again are likely to be on first infall, because they have not had enough time to reach pericentre. For reference, we also compare the satellite total velocities to the host's virial velocity, $V_{\rm 200m}$ (horizontal gray shaded band). 
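For concreteness, the reference quantities $V_{\rm 200m}$, $L_{\rm 200m}$, and $t_{\rm ff}$ follow directly from a host's $M_{\rm 200m}$ and $R_{\rm 200m}$. A sketch, using round numbers for a typical host from Table~\ref{tab:hosts} (the unit constants here are standard conversions, not values from the simulation code):

```python
import numpy as np

G = 4.30091e-6               # gravitational constant [kpc (km/s)^2 / Msun]
KPC_PER_KMS_IN_GYR = 0.9778  # 1 kpc / (km/s) expressed in Gyr

def virial_references(m200m, r200m):
    """V200m [km/s], L200m [kpc km/s], and free-fall time t_ff [Gyr]
    for a halo of mass m200m [Msun] and radius r200m [kpc]."""
    v200m = np.sqrt(G * m200m / r200m)            # circular velocity at R200m
    l200m = v200m * r200m                         # specific angular momentum
    rho = m200m / (4.0 / 3.0 * np.pi * r200m**3)  # mean density within R200m
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho)) * KPC_PER_KMS_IN_GYR
    return v200m, l200m, t_ff

# A typical host: M200m = 1.4e12 Msun, R200m = 350 kpc (cf. Table 1)
v200m, l200m, t_ff = virial_references(1.4e12, 350.0)
```

This gives $V_{\rm 200m} \approx 130 \kmsi$, $L_{\rm 200m} \approx 4.6 \times 10^4 \kpc\kmsi$, and $t_{\rm ff} \approx 2.9 \Gyr$, consistent with the $2.8 - 3 \Gyr$ range quoted above.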
Satellites that fell in $\MWinfall \approx 3 - 11 \Gyr$ ago typically have total velocities comparable to the virial velocity, but the full scatter extends to both much larger and smaller values. Next, the median velocity decreases with increasing $d$ (top middle), from as high as $260\kmsi$ for the closest satellites to $80\kmsi$ for satellites near $R_{\rm 200m}$. The shaded region of the 68th percentile follows the median trend and the width is roughly constant at $\approx 95 \kmsi$ across all distances. At $R_{\rm 200m}$, the total velocities of satellites are typically lower than $V_{\rm 200m}$ both because they are not on perfectly circular orbits and because the population at large $d$ is likely biased to lower values from the splashback population, which typically has negligible velocities. The median is nearly constant with $\Mstar$ (top right), ranging from $120 - 160 \kmsi$ for $\Mstar \approx 10^{4.75-8.25} \Msun$, decreasing to $70 - 100 \kmsi$ at higher mass, likely because sufficiently massive satellites experience significant dynamical friction that slows their orbit. Across all stellar masses, the average total velocity is $\approx 135 \kmsi$ and the typical range of the 68th percentile is $90 - 205 \kmsi$. The median for all satellites with $\Mstar \lesssim 10^{8.5}\Msun$ is consistent with $V_{\rm 200m}$. \subsubsection{Specific total energy} % Next, the middle row in Figure~\ref{fig:dynamics} shows trends in the specific orbital energy, $E$. Given that some satellites are in LG-like environments, which complicates computing the specific total energy beyond $R_{\rm 200m}$, we normalize $E$ at the MW-mass halo's $R_{\rm 200m}$ so $E(d = R_{\rm 200m}) = 0$. Thus, satellites with $E > 0$ are the splashback population with apocentres beyond $R_{\rm 200m}$ and are essentially all bound to the host halo.
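The normalization $E(d = R_{\rm 200m}) = 0$ amounts to measuring the potential relative to its value at the virial radius. A schematic version (the point-mass potential here is a crude, illustrative stand-in; the simulations provide the potential from the full mass distribution):

```python
import math

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def specific_orbital_energy(v_tot, d, phi, r_200m):
    """Specific orbital energy normalized so that E(d = R_200m) = 0.

    v_tot  : total velocity [km/s]
    d      : galactocentric distance [kpc]
    phi    : callable giving the potential [(km/s)^2] at a radius;
             any spherical approximation works for illustration.
    r_200m : host virial radius [kpc]
    Returns E in (km/s)^2.
    """
    return 0.5 * v_tot**2 + phi(d) - phi(r_200m)

# Illustrative point-mass potential (hypothetical mass, not a fit to our hosts):
def phi_point_mass(r, m=1.2e12):
    return -G * m / r
```

With this normalization, $E > 0$ identifies satellites that can travel beyond $R_{\rm 200m}$, i.e. the splashback population.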
The middle left panel shows that earlier infalling satellites are on more bound orbits, with the median $E$ decreasing from $\approx 0$ to $-3.9 \times 10^4\kmsis$. This reflects the growth of the MW-mass halo over time. $E$ increases with $d$ from the host, so satellites are more bound at smaller distances. Thus, the median $E$ is not constant with $d$, but this does not imply that specific energy is not being conserved over an orbit. As we explore below, $\MWinfall$ correlates with $d$, such that satellites at smaller $d$ fell in earlier (though with large scatter; see Figure~\ref{fig:infall_dz0}). This is largely because they fell in when the host $R_{\rm 200m}$ was smaller, so they necessarily orbit at smaller $d$. This then leads to the correlation of $E$ with $d$ across the population at $z = 0$ (Figure~\ref{fig:dynamics}, center panel). Similar to the trends in total velocity, $E$ does not strongly depend on $\Mstar$, except at $\Mstar\gtrsim10^8\Msun$, where satellites experience significant dynamical friction, causing their orbits to become more bound, despite the fact that higher-mass satellites fell in more recently, as we show below. \subsubsection{Specific angular momentum} % Last, we present trends in specific angular momentum in the bottom row of Figure~\ref{fig:dynamics}. The median specific angular momentum, $\ell$, decreases with time since infall from $\approx 3$ to $1 \times 10^4 \kpc\kmsi$ (bottom left panel). Between $\MWinfall \approx 2 - 6 \Gyr$ ago, $\ell$ is nearly constant at $\approx 2 \times 10^4 \kpc\kmsi$, which, as we will show in Figure~\ref{fig:peri_dn}, corresponds to satellites that completed $1 - 2$ pericentres. The median $\ell$ is much higher for satellites that are to the left of the $t_{\rm ff}$ band, consistent with the higher velocities of these satellites.
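Throughout, $\ell$ is the magnitude of the host-centric specific angular momentum, $|\vec{r} \times \vec{v}|$. A minimal sketch of computing $\ell$ and its fractional change between infall and $z = 0$ (illustrative inputs, not our pipeline code):

```python
import numpy as np

def specific_ang_momentum(pos, vel):
    """|r x v| for a host-centric position [kpc] and velocity [km/s] vector.

    Returns ell in kpc km/s.
    """
    return float(np.linalg.norm(np.cross(pos, vel)))

def fractional_ell_change(pos_now, vel_now, pos_infall, vel_infall):
    """Fractional change (ell(z=0) - ell_infall) / ell_infall."""
    ell_now = specific_ang_momentum(pos_now, vel_now)
    ell_infall = specific_ang_momentum(pos_infall, vel_infall)
    return (ell_now - ell_infall) / ell_infall
```

For example, a satellite at 100 kpc moving tangentially at $150\kmsi$ has $\ell = 1.5 \times 10^4 \kpc\kmsi$, comparable to the median values quoted above.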
Similar to the top row, we define the reference host halo angular momentum, $L_{\rm 200m} = V_{\rm 200m} \times R_{\rm 200m}$, and we show the range of values across hosts in the gray horizontal band. As expected, essentially all satellites have $\ell < L_{\rm 200m}$, and most have $\ell$ significantly lower. Generally, $\ell$ and its scatter increase with increasing $d$, as expected given that $\ell = \upsilon_{\rm tan} d$, where $\upsilon_{\rm tan}$ is the satellite's tangential velocity. Because the MW-mass halo grew over time, satellites fell into their MW-mass halo on larger orbits at later times, which explains the increasing $\ell$ with more recent infall times. The median $\ell$ is nearly constant with stellar mass at $\Mstar<10^{8.25}\Msun$, but as with velocity, it decreases for higher-mass satellites. Although we find little dependence of median $d$ on $\Mstar$ (not shown), satellites with $\Mstar > 10^8 \Msun$ today exist at $\lesssim 250 \kpc$, whereas there are lower-mass satellites out to $\approx 800 \kpc$. Thus, because dynamical friction likely drove higher-mass satellites to smaller velocities and distances, they have shorter orbital lifetimes and smaller $\ell$ today. Across the full mass range, the mean of the median is $1.6\times10^4\kpc\kmsi$, and the mean range of the 68th percentile is $0.9-2.5\times10^4\kpc\kmsi$. To investigate how $\ell$ changed over time, we measure the fractional difference in $\ell$ from first infall into the MW-mass halo to the present day, that is, $(\ell(z=0) - \ell_{\rm infall,MW}) / \ell_{\rm infall,MW}$. Figure~\ref{fig:ell_evo} shows this evolution versus $\MWinfall$ (left) and $\Mstar$ (right). The median $\ell$ did not significantly change, \textit{on average}, for satellites that fell in $\lesssim 10 \Gyr$ ago.
However, earlier-infalling satellites systematically \textit{increased} their angular momenta, by up to a factor of 2 on average, so angular momentum is not conserved for long periods in a dynamic cosmological halo environment, in part because the halo potential evolves on the same timescale as the orbit. We also stress that the typical $1-\sigma$ width of the distribution is $\approx 40$ per cent, so this represents the typical amount of dynamical scattering that a satellite experienced. Thus, the ensemble satellite population that fell in $\lesssim 10 \Gyr$ ago did not change in $\ell$ much, but any \textit{individual} satellite's angular momentum changed by up to $\approx 40$ per cent, on average. Figure~\ref{fig:ell_evo} (right) shows that the median fractional difference in $\ell$ is minimal for satellites with $\Mstar \lesssim 10^8\Msun$, but again, the $1-\sigma$ width of the distribution is $30 - 50$ per cent. Higher-mass satellites experienced a stronger reduction in $\ell$, by roughly 70 per cent, likely from the stronger dynamical friction they felt. Figure~\ref{fig:dynamics} thus highlights the strong dependence of total velocity, $E$, and $\ell$ on the lookback time of infall and present-day galactocentric distance, and a lack of dependence on mass, except at $\Mstar\gtrsim10^8\Msun$. Our results in the middle column especially highlight the large distribution of these properties when selecting satellites at a given distance, and thus, we caution against interpreting the median values alone. Figure~\ref{fig:ell_evo} highlights how earlier-infalling and higher-mass satellites experienced larger changes in their specific angular momenta over time, as we explore below. \subsection{Orbital histories} % \label{sec:orbits} % In all but the most ideal conditions, the orbits of satellites change over time.
Mechanisms such as the time-dependent (growing) MW-mass host potential, triaxiality of the host halo, dynamical friction, and satellite-satellite interactions can perturb satellite orbits. Because higher-mass satellites experience stronger dynamical friction, and because the MW-mass galaxy and host halo grew over time, a common expectation is that satellite orbits shrink over time, such that their most recent pericentre generally should be their smallest \citep[for example][]{Weinberg86, Taylor01, Amorisco17}. We now explore these expectations and examine trends for the infall times and pericentres in the orbital histories of satellites at $z = 0$. We investigate ensemble trends to help characterize and compare satellite populations in galaxies such as the MW, M31, and those in the SAGA survey. One should interpret these results for ensembles only, and not necessarily for individual satellites and their orbits, which we will explore further in future work. Figure~\ref{fig:times_mstar} (top) shows the lookback times when satellites fell into the MW-mass halo, $\MWinfall$ (orange) or into \textit{any} more massive halo, $\anyinfall$ (black), as a function of satellite stellar mass, $\Mstar$. By `any more massive halo', we specifically mean any halo that is more massive than a given satellite at the same time, which could either be a MW-mass host galaxy’s halo or the halo of a massive central galaxy. We also show a horizontal dotted line to represent when the epoch of reionization ended \citep[for example][]{Robertson21}. Galaxies form hierarchically, so early infalling galaxies, either into the MW-mass halo or any more massive halo, were less massive. Additionally, higher-mass satellites experienced stronger dynamical friction, which caused their orbits to lose $\ell$ and merge with the MW-mass host more quickly. 
Infall lookback time into any more massive halo decreases from $\anyinfall \approx 9 \Gyr$ ago for satellites with $\Mstar \approx 10^{4.75} \Msun$ to $\anyinfall \approx 2 \Gyr$ ago for $\Mstar \approx 10^{9.25}\Msun$. At $\Mstar \lesssim 10^7 \Msun$, satellites typically fell into another (more massive) halo before they fell into the MW-mass halo. Above this mass, the limited range in mass between the satellite and the MW-mass halo does not leave room for an intermediate-mass host halo. Both the 68th percentile and full distribution for each infall metric span similar ranges; in particular, the 68th percentile ranges from $4 - 11 \Gyr$ for satellites at $\Mstar < 10^7 \Msun$ and $1 - 9.5 \Gyr$ at higher mass. The earliest infalling satellites that survive to $z = 0$ fell into a more massive halo around $\anyinfall \approx 13.2 \Gyr$ ago ($z \approx 8.6$) and into the MW-mass halo at $\MWinfall \approx 12.5 \Gyr$ ago ($z \approx 4.7$). Furthermore, $\approx 2 / 3$ of the satellites that fell into their MW-mass halo within the first $3 \Gyr$ belong to the LG-like paired hosts, presumably because of the earlier assembly of halos in LG-like environments \citep[see][]{Santistevan20}. Similar to the analysis of dark matter-only simulations in \citet{Wetzel15}, no surviving satellites at $z = 0$ were within their MW-mass halo before the end of the epoch of reionization at $z \gtrsim 6$ \citep{Robertson21}. In their analysis, less than 4 per cent of satellites were members of a more massive halo as early as $\approx 13.2 \Gyr$ ago ($z \approx 8.6$), near when reionization was $\approx 50$ per cent complete \citep[see for example][]{FaucherGiguere20}, compared to $<1$ per cent of the satellites in our sample. Thus, \textit{reionization was finished before surviving satellites first became satellites of a more massive halo.} For a detailed study on satellite quenching after infall in the FIRE simulations, see \citet{Samuel22}.
Roughly 37 per cent of satellites with $\Mstar<10^7\Msun$ were pre-processed before falling into their MW-mass hosts. Using the ELVIS DMO simulation suite of 48 MW-mass hosts, \citet{Wetzel15} found that pre-processed subhalos typically fell into halos with $M_{\rm halo} = 10^{10-12}\Msun$, with a median mass of $10^{11}\Msun$. From Figure~\ref{fig:mstar_v_mhalo}, this corresponds to host stellar masses of $\sim10^{7-9}\Msun$. \citet{Wetzel15} also determined that $\sim30$ groups contributed all pre-processed satellites, for a typical MW-mass system, with each group contributing $2-5$ satellites. In our sample, satellites that were pre-processed typically fell into halos with $M_{\rm halo} = 10^{9.3-11}\Msun$ before falling into the MW-mass host halo, with a median mass at infall of $M_{\rm halo} = 10^{10.2}\Msun$. The stellar masses of galaxies hosted within these more massive halos ranged from $\Mstar\sim10^{6.4-8.9}\Msun$, with a median stellar mass of $\Mstar=10^{7.9}\Msun$. Thus, our results are broadly consistent with \citet{Wetzel15}, though we find a slightly lower typical host halo mass. Figure~\ref{fig:times_mstar} (bottom) shows the lookback times to pericentres about the MW-mass host, for satellites that have experienced at least one pericentre. Some satellites have orbited their MW-mass host multiple times, so we show the lookback time to two pericentres: the most recent pericentre, $\tperirec$ (green), and the minimum-distance pericentre, $\tperimin$ (purple). The mass dependence of $\tperirec$ is weak. However, lower-mass satellites experienced earlier $\tperimin$ than higher-mass satellites, again because lower-mass satellites fell in earlier, when the MW-mass halo was smaller, so their overall orbit and pericentre distance were smaller than for higher-mass satellites that fell in at later times on larger orbits.
Given that dynamical friction tends to shrink the orbits of satellites over time, particularly those with $\Mstar \gtrsim 10^8 \Msun$ or $\Mhp \gtrsim 3 \times 10^{10} \Msun$, one might assume that, for satellites that have experienced multiple pericentres, the minimum pericentre should be equal to the most recent. However, for satellites with $\Mstar \lesssim 3 \times 10^7 \Msun$, the minimum pericentre occurred $1 - 3 \Gyr$ earlier than the most recent pericentre, and the 68th percentile in $\tperimin$ spans a much larger range in lookback time, $0.25 - 9 \Gyr$, than $\tperirec$, $0 - 5 \Gyr$. At $\Mstar \gtrsim 3 \times 10^7 \Msun$, where the typical number of pericentres experienced is only about 1 (see below), and where dynamical friction is more efficient, the two are more comparable. Thus, the naive expectation that low-mass satellite orbits remain relatively unchanged because they do not experience strong dynamical friction cannot explain the trends in Figure~\ref{fig:times_mstar}. Furthermore, although the medians of $\tperimin$ and $\tperirec$ are comparable, the 68th percentile range (and full distribution) shows that differences between these pericentre metrics exist for satellites with $\Mstar \gtrsim 3 \times 10^7 \Msun$ as well, implying that even the orbits of massive satellites can increase over time. The differences between the most recent and minimum pericentres imply that some mechanism increases the pericentre distances of (especially lower-mass) satellites over time, as we explore below. Figure~\ref{fig:infall_dz0} shows trends in $\MWinfall$ (orange) and $\anyinfall$ (black) with present-day distance from the host, $d$. Satellites currently at closer distances typically fell into another halo earlier than more distant satellites.
The closest low-mass satellites fell into any more massive halo roughly $\anyinfall \approx 10.5 \Gyr$ ago, and this median time since infall decreases to $4.8 \Gyr$ ago for satellites at $d \gtrsim R_{\rm 200m}$. Comparing this to the time since infall into their MW-mass halo, the median $\MWinfall$ is roughly $0.5 - 2 \Gyr$ later across all distances. The ranges of both the 68th percentile and full distribution generally span similar lookback times, with $\anyinfall$ offset to earlier times. The trend of more recent infall times at larger $d$ arises because, at earlier times, the MW-mass halos were smaller and satellites fell in on smaller orbits. Again, one should not interpret our results for individual satellites but rather for populations of satellites. Focusing on the full distribution, which extends across the full range in $d$, galaxies that fell into their MW-mass hosts between $\MWinfall \approx 3 - 9 \Gyr$ ago currently orbit at all distances between $25 - 400 \kpc$. Thus, although the median shows a clear trend with $d$, the range in the 68th percentile is $\gtrsim 2 \Gyr$, which limits the ability to use the present-day distance of a given satellite to infer its time since infall. Because of the mass dependence of satellite infall times in Figure~\ref{fig:times_mstar}, we checked for possible mass dependence of infall time with $d$ by splitting the sample into lower-mass ($\Mstar < 10^7 \Msun$) and higher-mass satellites. The difference between $\anyinfall$ and $\MWinfall$ exists at all satellite distances in the low-mass sample, and because the stellar mass function is steep, we saw nearly identical results to Figure~\ref{fig:infall_dz0}. However, the higher-mass sample showed little to no difference between the two metrics, with times since infall ranging from $3.5 - 7 \Gyr$ ago, because there were not many other more massive halos for higher-mass satellites to fall into before the MW halo.
Figure~\ref{fig:peri_dn} shows trends in the number of pericentric passages, $\Nperi$, about the MW-mass host (top row) and various pericentric distance metrics (bottom row), versus the time since infall into the MW-mass halo, $\MWinfall$ (left), present-day distance from the MW-mass host, $d$ (middle), and satellite stellar mass, $\Mstar$ (right). Again, we include vertical gray shaded regions that represent the free-fall time at $R_{\rm 200m}$, $t_{\rm ff}$ (left column), and the MW-mass halo $R_{\rm 200m}$ (middle), as reference values. When presenting trends in $\Nperi$, we include all satellites, including those that have not yet experienced a pericentric passage, but for trends in pericentre distance we only include satellites that have experienced at least one pericentre. Satellites that have not yet reached first pericentre comprise $\approx 7$ per cent of all satellites. The top left panel shows the expected trend of more pericentres for earlier $\MWinfall$. The mean $\Nperi$ is 0 for recently infalling satellites, and it rises to $\Nperi \approx 1$ and is flat across $\MWinfall \approx 2.5 - 5.5 \Gyr$ ago, because this time interval is comparable to an orbital timescale. $\Nperi$ then rises rapidly with $\MWinfall$, reaching nearly 9 for the earliest-infalling satellites. We compared these trends for lower-mass versus higher-mass satellites and find no significant differences. Figure~\ref{fig:peri_dn} (top middle) shows the dependence of $\Nperi$ on $d$. Because we find significant differences in $\Nperi$ with $\Mstar$ (top right), we split the sample into $\Mstar < 10^7 \Msun$ (solid) and $\Mstar > 10^7 \Msun$ (dashed). We choose this mass selection given that the lower-mass satellites experience a mean $\Nperi \geq 2$, while the higher-mass satellites have a mean of $\Nperi \approx 1$.
Lower-mass satellites generally experienced more pericentres than higher-mass satellites at a given $d$, and the mean number decreases with $d$ for lower-mass satellites from $\Nperi \approx 5$ to 1 for those near $R_{\rm 200m}$ (gray shaded region). Conversely, we do not find dependence on $d$ for higher-mass satellites, with a mean value of $\Nperi \approx 1-2$ at all $d$, likely because of their more recent infall and the increased importance of dynamical friction on them. Finally, $\Nperi$ declines weakly with $\Mstar$, with a mean $\Nperi \approx 2.5$ at $\Mstar = 10^{4.75} \Msun$ to $\Nperi \approx 1$ at $\Mstar > 10^9 \Msun$. Lower-mass satellites experienced more pericentres, because they fell in earlier (see Figure~\ref{fig:times_mstar} top), and also because higher-mass satellites took longer to form and felt stronger dynamical friction that caused them to merge into their MW-mass host on shorter timescales. Over the full sample, the largest number of pericentres experienced is $\Nperi = 4$ at $\Mstar > 10^7 \Msun$, and $\Nperi = 10$ for lower-mass satellites. The bottom row shows trends for both pericentre metrics: the pericentre with the minimum distance, $\dperimin$ (purple), and the most recent pericentre, $\dperirec$ (green). In the idealized scenario we outline at the beginning of this subsection, an orbiting satellite's pericentre will remain unchanged or shrink over time because of dynamical friction. However, the panels above show that this often is not true; early infalling satellites can have larger subsequent pericentres. The bottom left panel shows the median $\dperimin$ (purple) and $\dperirec$ (green). Both $\dperimin$ and $\dperirec$ are nearly identical for satellites that fell in $\MWinfall < 6 \Gyr$ ago, with median values ranging from $60 - 100 \kpc$. 
For earlier-infalling satellites, both distance trends slowly decrease from $\approx 100 \kpc$ to $20 - 35 \kpc$, where the most recent pericentre was roughly $5 - 20 \kpc$ larger than the minimum. Thus, for sufficiently early-infalling satellites, which spent longer in the evolving MW-mass host halo, the orbits grew slightly over time, contrary to the expectation of orbits that remain unchanged or shrink from dynamical friction. We also investigated how the first pericentre a satellite experienced depends on infall time and find qualitatively similar results to the other two pericentre metrics. Earlier-infalling satellites had smaller first pericentres than later-infalling satellites, and the first pericentres were smaller than the most recent ones. The first pericentre was also the minimum a satellite ever experienced for a majority ($\approx 72$ per cent) of satellites with $\Nperi \geq 2$. The only noticeable differences between the first pericentres and $\dperimin$ occurred for galaxies that fell in $\gtrsim 6 \Gyr$ ago. The bottom middle panel shows pericentre distance trends versus current $d$. As expected, both pericentre metrics increase with $d$, from $20 - 30 \kpc$ for satellites that are currently closer, to $70 - 80 \kpc$ for satellites near $R_{\rm 200m}$ (gray shaded region). Satellites within $d \lesssim 225 \kpc$ often had recent pericentres that were larger than $\dperimin$ by nearly $10 - 20 \kpc$, so the orbits of these satellites grew. The pericentre metrics of both lower-mass and higher-mass satellites increase with $d$, but because lower-mass satellites fell in earlier than higher-mass satellites and completed more orbits, they largely drive the differences between $\dperimin$ and $\dperirec$ (and $\dperimin$ and $d_{\rm peri,first}$) in the bottom left panel.
We again highlight that the full distributions in both $\dperimin$ and $\dperirec$ span a wide range at a given $d$, so even though the median trend increases with $d$, one should not directly apply our results to an individual satellite. Finally, the bottom right panel shows that lower-mass satellites typically had smaller recent and minimum pericentres. However, at $\Mstar \gtrsim 10^{8.25} \Msun$, the median pericentre distances decrease, likely driven by the onset of efficient dynamical friction. Lower-mass satellites have smaller pericentre distances because they fell in earlier, when the MW-mass halo was smaller and less massive. The typical recent/minimum pericentre distance is $40 - 60 \kpc$ for satellites with $\Mstar \lesssim 10^7 \Msun$, $60 - 100 \kpc$ for satellites with $\Mstar \approx 10^{7-8.25} \Msun$, and $\lesssim 100 \kpc$ for higher-mass satellites. Because the mass of the host can determine the orbits of the satellites, we investigated potential differences between satellites in higher-mass and lower-mass host halos. At pericentre, satellites are deep in the potential near the galaxy; therefore, the stellar mass of the central galaxy could also correlate with our pericentre metrics. Specifically, we divided the sample in two by selecting the 6 MW-mass hosts with higher $\Mstar$ and the 7 hosts with lower $\Mstar$ (see Table~\ref{tab:hosts}) and examined their pericentre distances versus $\MWinfall$ and satellite $\Mstar$. We find no differences between the two samples versus $\Mstar$. Versus $\MWinfall$, the satellites in higher-mass host halos had slightly larger $\dperimin$ and $\dperirec$, though the difference is minimal. The results in Figures~\ref{fig:times_mstar}-\ref{fig:peri_dn} suggest a different evolution than expected for some satellites.
Lower-mass satellites fell into their MW-mass hosts earlier, when the halo was smaller and less massive, so they complete more orbits than higher-mass satellites in this evolving potential and orbit at smaller distances. Interestingly, the orbits of these lower-mass satellites can increase over time, presumably through the evolving global potential or interactions with other galaxies, which opposes the common expectation of shrinking orbits. However, given the 68th percentile ranges and the full distribution of the pericentre properties, differences between the most recent and minimum pericentres exist at \textit{all} satellite masses, and not solely at low mass. \subsection{Satellites with growing pericentres} % \label{sec:torqued} % As we showed in Figures~\ref{fig:times_mstar}-\ref{fig:peri_dn}, the most recent pericentre that a satellite experienced is often not the minimum in terms of distance. We now investigate these cases in more detail and refer to satellites with $\dperimin < \dperirec$ as having `growing pericentres'. Satellites with growing pericentres make up 31 per cent of all satellites (ranging from $23 - 46$ per cent for a given host). Moreover, growing pericentres comprise the \textit{majority} (67 per cent across all hosts, ranging from $50 - 86$ per cent for a given host) of all satellites with $\Nperi \geq 2$. In other words, \textit{for satellites with two or more pericentres, typically their most recent pericentre was not their closest encounter with their MW-mass host galaxy}. Figure~\ref{fig:fraction} highlights this, showing the fraction of satellites with growing pericentres versus pericentre number. For satellites with $\Nperi \geq 2$, the growing pericentre population represents $> 50$ per cent of the total sample at any $\Nperi$, and in some cases, they represent the \textit{entire} population at a given $\Nperi$. 
This fraction broadly increases with $\Nperi$, at least up to $\Nperi = 6$, where it represents all satellites, though the fraction fluctuates for $\Nperi$ above that. Although we cannot directly check whether or not this is a temporary occurrence, we compared the most recent pericentre distance to the maximum pericentre a satellite experienced. For satellites that experienced more than 3 pericentres, we found that roughly 30 per cent of them experienced their maximum pericentre sometime between the minimum and most recent. Of these satellites, the fractional difference between their most recent and maximum pericentre distances, that is, $(d_{\rm peri,recent}-d_{\rm peri,max})/d_{\rm peri,max}$, ranged from 1 to 54 per cent, with a median value of 17 per cent and a 68th percentile range of $8 - 38$ per cent. Thus, because this scenario happens in the minority of satellites, and because the median fractional difference is small, we argue that it is not merely a temporary occurrence. Furthermore, from the top right panel of Figure~\ref{fig:peri_dn}, satellites with more than 3 pericentres are generally lower-mass satellites, which do not strongly feel the effects of dynamical friction. We confirmed that this population of satellites with growing pericentres is not sensitive to the choice for the center of the MW-mass host in computing satellite distances. Specifically, we examined these trends using the center of the host dark-matter halo (instead of the center of the stars in the host galaxy, as is our default). This results in only 7 additional satellites whose minimum and most recent pericentres differ by more than 5 per cent, which represents only $\approx 4$ per cent of all satellites with $\Nperi \geq 2$. To quantify further the significance of this population, Figure~\ref{fig:torqued_hist} shows the probability distributions of the difference between key properties at the minimum and most recent pericentres for all satellites with growing pericentres.
The left panel shows the fractional difference between the two pericentre distances, $(\dperimin - \dperirec) / \dperirec$. As the black point shows, the median fractional difference is $-37$ per cent, with a 68th percentile range of $-15$ to $-65$ per cent. Figure~\ref{fig:torqued_hist} (middle) shows the fractional difference in specific angular momentum between the minimum and most recent pericentres. Nearly all satellites with growing pericentres ($> 95$ per cent) experienced an increase in $\ell$ between the two pericentres; we do not show in Figure~\ref{fig:torqued_hist} the small per cent with decreased $\ell$. The median fractional difference in $\ell$ is $-29$ per cent, with a range in the 68th percentile of $-10$ to $-60$ per cent. Finally, Figure~\ref{fig:torqued_hist} (right) shows the difference between the lookback times of the minimum and most recent pericentres. These satellites have a wide range of time differences, with a median of $\approx 6.3 \Gyr$ and 68th percentile range of $3.5 - 8.5 \Gyr$. These are slightly longer than the typical orbital periods of these satellites, $2 - 5 \Gyr$; as Figure~\ref{fig:torqued_orbits} shows, the minimum and most recent pericentres do not always occur in succession. To provide more context, Figure~\ref{fig:torqued_orbits} shows the orbits (host distance versus time) for four representative satellites with growing pericentres (top row), along with their specific angular momentum (bottom row), labeled A-D from left to right. We chose these four satellites at random to span the full range of fractional pericentre distance differences. The legends show the values of the minimum and most recent pericentres, along with the fractional differences between them; these four satellites range from $12 - 93$ per cent. The arrows indicate when these pericentres occurred. For reference, we also show the MW-mass halo's $R_{\rm 200m}(t)$ (grey).
All four satellites experienced $\dperimin$ immediately after first infall, and the first pericentre was the minimum for 72 per cent of all satellites with growing pericentres. Furthermore, as Figure~\ref{fig:torqued_orbits} suggests, $71$ per cent of satellites with growing pericentres experienced a splashback phase of orbiting beyond $R_{\rm 200m,host}$ after their first pericentre. For comparison, among the population with $N_{\rm peri} > 1$ but $\dperimin = \dperirec$, only $57$ per cent experienced a splashback phase. This suggests that orbiting beyond $R_{\rm 200m,host}$ is associated with a growing pericentre, at least in some cases. As Figure~\ref{fig:torqued_orbits} (bottom row) suggests, nearly all satellites whose pericentres grew also increased their specific angular momentum. By visually inspecting the histories of the full population, we find that this occurs in two broad ways: (1) steady, gradual increase in $\ell$ over time, which accounts for 45 per cent of all growing pericentres, and (2) rapid growth in $\ell$ near a pericentre or apocentre, which accounts for 53 per cent of the satellites. The remaining 2 per cent of satellites are the rare cases in which the pericentres increased from the minimum to the most recent, but the angular momentum decreased. The fractional change in $\ell$ for these satellites is generally small, $\lesssim6$ per cent. However, some of these satellites show clear signs of interactions with other galaxies, and fell in early ($\gtrsim 8.5 \Gyr$ ago). For satellites in category (1), a time-dependent and/or triaxial host halo potential likely plays an important role, especially given that satellites with growing pericentres typically fell in early, when the shape of the host halo potential was changing more rapidly \citep[for example][Baptista et al, in prep]{Santistevan20, Gurvich22}. Satellite C in Figure~\ref{fig:torqued_orbits} shows a relatively gradual increase in $\ell$ over time.
We defer a more detailed investigation to future work. For the growing pericentres in category (2), $4/5$ of the satellites experienced a rapid increase in $\ell$ near either a single apocentre, or some combination of them, with the first apocentre being the most common. This is especially apparent in satellites B and D in Figure~\ref{fig:torqued_orbits}. The other satellites showed rapid increases in $\ell$ involving a pericentre that was not the minimum pericentre, much like satellite A. Because the fraction of splashback orbits is higher for satellites with growing pericentres compared to the remaining population with multiple pericentres, this suggests that perturbations at $d \gtrsim R_{\rm 200m,host}$ may play a key role in causing this population. This behavior is apparent in satellites B and D of Figure~\ref{fig:torqued_orbits}, where large spikes in $\ell$ occur near apocentres, some of which are beyond $R_{\rm 200m,host}$. These rapid increases in $\ell$ are caused by rapid increases in the tangential velocities, typically of order $\delta \upsilon \approx 30 \kmsi$. Mergers of other satellites with the MW-mass host can also significantly alter the global potential, perturbing satellite orbits. We investigated correlations of both $\tperimin$ and $\tperirec$ with the lookback times of mergers with stellar mass ratios of $\gtrsim 1:100$, and did not find a clear correlation between these times. We also investigated correlations of these pericentre metrics with various metrics of host formation times, including the lookback times when the host galaxy formed 90 per cent of its stellar mass \citep[see][for a table of values]{Gandhi22}, when the host formed 10 per cent of its halo mass, and when the host galaxy's growth transitioned from being dominated by mergers to in-situ formation \citep{Santistevan20}. We find no significant correlations with these formation metrics.
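The growing-pericentre classification used throughout this subsection ($\dperimin < \dperirec$) reduces to comparing the minimum and most recent local minima of a satellite's host-distance history. A schematic version (assuming a well-sampled distance time series; our analysis uses the full snapshot time sampling):

```python
import numpy as np

def pericentre_distances(d, t):
    """Local minima of a host-distance time series d(t), with t increasing.

    Returns arrays of pericentre distances and times. A real analysis
    would require finer time sampling and smoothing; this is a sketch.
    """
    d, t = np.asarray(d, dtype=float), np.asarray(t, dtype=float)
    # An interior point is a pericentre if it is below both neighbors.
    is_min = (d[1:-1] < d[:-2]) & (d[1:-1] < d[2:])
    idx = np.where(is_min)[0] + 1
    return d[idx], t[idx]

def has_growing_pericentre(d, t):
    """True if the most recent pericentre is farther than the minimum one."""
    d_peri, _ = pericentre_distances(d, t)
    if len(d_peri) < 2:
        return False  # needs at least two pericentres to compare
    return d_peri.min() < d_peri[-1]
```

For example, a distance history with successive pericentres at 50, 90, and 120 kpc classifies as growing, whereas one whose latest pericentre is also its minimum does not.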
To investigate whether satellites with growing pericentres have biased orbits, both throughout their history and today, Figure~\ref{fig:torqued_mstar} shows several orbital properties versus stellar mass, for satellites with growing pericentres, all satellites, and all satellites with $N_{\rm peri} > 1$ but with shrinking pericentres, that is, with $\dperimin = \dperirec$. The top left panel shows the lookback time of infall into the MW-mass halo. Compared to the total sample, as expected, both sub-samples with $\Nperi > 1$ fell in typically $\gtrsim 1 - 2 \Gyr$ earlier. However, among the population with $\Nperi > 1$, we find no significant differences between those with growing versus shrinking pericentres, so infall time does not correlate with having a growing pericentre. Figure~\ref{fig:torqued_mstar} (bottom left) shows the minimum pericentre distances, $\dperimin$. Although all three samples show similar behaviour to Figure~\ref{fig:peri_dn}, with increasing pericentre distance with increasing $\Mstar$, the growing pericentre population is biased to the smallest $\dperimin$. The shrinking pericentre sub-sample is generally consistent with the total sample, with typical values spanning $\dperimin \approx 35 - 90 \kpc$, while $\dperimin$ for growing pericentre satellites is $10 - 25 \kpc$ smaller, ranging from $\dperimin \approx 25 - 35 \kpc$. Thus, satellites with growing pericentres orbited closer to the host galaxy. Again, $\approx 30$ per cent of satellites with growing pericentres experienced rapid increases in $\ell$ during their first apocentre or slightly after $\dperimin$. Likely, other important factors contribute to the larger differences between the pericentre metrics, such as the evolving MW-mass host potential, gravitational interactions with other satellite galaxies, and mergers, as Figure~\ref{fig:torqued_orbits} hints at; we plan to explore this in future work.
Figure~\ref{fig:torqued_mstar} (top right) shows the specific angular momentum at $z = 0$. Because the growing and shrinking pericentre sub-samples fell into their MW-mass halo earlier, we expect them to have smaller $\ell$. However, the growing-pericentre sub-sample has modestly higher $\ell$ at $z = 0$ at most masses, again reflecting that these satellites have scattered to larger $\ell$ by today. Figure~\ref{fig:torqued_mstar} (bottom right) shows the specific orbital total energy, $E$. Consistent with their earlier infall, both sub-samples with $\Nperi > 1$ are on more bound orbits today than the total population, at least at $\Mstar < 10^{7.25} \Msun$. Any systematic differences between the growing and shrinking pericentres are modest, given the scatter, so we conclude that there are no clear differences in specific orbital total energy at $z = 0$. A satellite galaxy can undergo significant mass stripping when it orbits throughout the MW-mass host halo, especially when it is deepest in the host's potential at pericentre, and this drastic loss in the satellite's subhalo mass subsequently can affect its orbit. Thus, to better understand the origin of satellites with growing pericentres, including the timescales over which the orbits changed and potential dynamical perturbations near $\dperimin$, we compared the specific angular momentum and DM subhalo mass $200 \Myr$ before and after the minimum pericentre. Near $\dperimin$, $\ell$ changed by a much smaller amount ($10 - 20$ per cent) than the change in $\ell$ from the minimum to most recent pericentres ($\approx 40$ per cent). The fractional mass lost near $\dperimin$ was also minimal ($\lesssim 7$ per cent). Thus, in general, the orbital perturbations did not occur just near $\dperimin$, as also apparent in Figure~\ref{fig:torqued_orbits}.
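The before-and-after comparison of $\ell$ around the minimum pericentre amounts to evaluating the host-centric specific angular momentum at two snapshots and taking a fractional difference. A minimal sketch, in which the function name and the coordinate values are purely illustrative (not taken from the simulation pipeline):

```python
import numpy as np

def specific_angular_momentum(pos, vel):
    """Magnitude of the host-centric specific angular momentum, |r x v|.

    pos : satellite position relative to the host [kpc]
    vel : satellite velocity relative to the host [km/s]
    """
    return float(np.linalg.norm(np.cross(pos, vel)))

# Hypothetical host-centric coordinates 200 Myr before and after a
# minimum pericentre (illustrative values, not simulation data).
ell_before = specific_angular_momentum([40.0, 10.0, 5.0], [-80.0, 120.0, 30.0])
ell_after = specific_angular_momentum([35.0, -20.0, 8.0], [90.0, 110.0, -25.0])

# Fractional change in ell across the pericentric passage.
frac_change = (ell_after - ell_before) / ell_before
```

Applying this at snapshots bracketing $\dperimin$, versus at the minimum and most recent pericentres, separates a localized perturbation from a change accumulated over the full orbit.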
Finally, we investigated the other orbital properties we presented in the previous figures (total velocity, pericentre lookback times, and the recent pericentre distances) for these sub-samples but find no compelling differences. We find no mass dependence to the fractional differences in pericentre distances or times in Figure~\ref{fig:torqued_hist}, so satellites with growing pericentres exist similarly across the range of masses. Thus, even though higher-mass satellites experience stronger dynamical friction and have smaller orbital lifetimes, we find no mass dependence to whether or not a satellite has an orbit with growing pericentres. We find no strong correlation of the fractional distance or time metrics with either $\dperimin$, $\dperirec$, or the lookback times that these occurred. Unsurprisingly, the fractional difference in pericentre distance increased slightly with earlier $\MWinfall$, given that satellites that orbited for a longer amount of time had more time to experience changes in their orbits. In summary, of all satellites that experienced $\Nperi \geq 2$, the majority (67 per cent) experienced a growing pericentre. The most recent pericentre distance is typically $\approx 37$ per cent higher than the minimum experienced, which occurred $\sim 6 \Gyr$ earlier. Interestingly, about half (45 per cent) of growing pericentres experienced a gradual increase in $\ell$, presumably from a time-dependent and/or triaxial MW-mass host potential, and about half (53 per cent) experienced rapid growth in $\ell$ following either their first or minimum pericentres, during their first apocentres, or during multiple pericentre or apocentre events, which suggests a perturbation by another galaxy. Satellites with growing pericentres are more likely to have been splashback satellites, further suggesting perturbations at large distances.
Furthermore, because we measure the orbits of these satellites relative to the MW-mass host galaxy, another effect may be perturbations to the center of mass of the host galaxy from mergers or massive satellites. Given these complexities and likely multiple causes for the origin of satellites with growing pericentres, we defer a more detailed investigation to future work. \section{Summary \& Discussion} % \label{sec:sumndisc} % \subsection{Summary of Results} % \label{sec:sum} % We investigated the orbital dynamics and histories of 473 satellite galaxies with $\Mstar > 3 \times 10^4 \Msun$ around 13 MW-mass galaxies from the FIRE-2 suite of cosmological simulations. Surprisingly, and in contrast to many (semi)analytical models of satellite evolution, most satellites that experienced multiple orbits experienced an increase in orbital pericentre and specific angular momentum, likely from interactions with the MW-mass host or other satellites. This highlights that satellite orbits do not always shrink and that angular momentum is not always conserved throughout a satellite's orbital history. In summary, the topics that we presented in the Introduction and our corresponding results are: \renewcommand{\labelenumi}{\arabic{enumi}} \renewcommand{\labelenumii}{\arabic{enumi}.\arabic{enumii}} \begin{enumerate} \item \textit{The relation of orbital properties of satellite galaxies at $z = 0$ to their orbital histories, including lookback times of infall, distances from the MW-mass host, and stellar masses.} \begin{itemize} \item Satellites that fell in earlier have lower orbital energies and specific angular momenta, \textit{though with significant scatter}, because satellites that fell in earlier necessarily had to be on smaller orbits to be captured by the MW halo, and the MW-mass host potential continued to grow over time (Figure~\ref{fig:dynamics}).
\item Satellites closer to the host generally orbit with higher velocities, smaller specific angular momenta, and have more bound orbits, \textit{though with significant scatter} (Figure~\ref{fig:dynamics}). Total velocity, specific angular momentum, and specific orbital energy do not correlate with $\Mstar$ except at $\Mstar \gtrsim 10^8 \Msun$, where dynamical friction is more efficient (Figure~\ref{fig:dynamics}). \item \textit{Specific angular momentum, $\ell$, often is not conserved, even approximately, throughout a satellite's orbital history} (Figure~\ref{fig:ell_evo}). In particular, earlier-infalling satellites \textit{increased} in $\ell$ since infall. More expectedly, higher-mass satellites decrease in $\ell$, likely because of dynamical friction. The range of fractional changes in $\ell$ at smaller $\Mstar$ and later infall extends to $\gtrsim 50$ per cent. That said, the average $\ell$ across the full satellite population remains statistically unchanged since infall. \item Many lower-mass satellites were pre-processed before becoming a satellite of the MW-mass host. At $\Mstar < 10^7 \Msun$, 37 per cent fell into another more massive halo \textit{before} falling into the MW-mass halo, typically $\approx 2.7 \Gyr$ before (Figures~\ref{fig:times_mstar}, \ref{fig:infall_dz0}, \ref{fig:times_mhalo}). \item \textit{No surviving satellites were within the MW-mass halo during the epoch of reionization} ($z \gtrsim 6$), and less than 4 per cent were satellites of any host halo during this time, similar to \citet{Wetzel15} (Figures~\ref{fig:times_mstar} and \ref{fig:times_mhalo}). Surviving satellites at $z = 0$ fell into the MW-mass halo as early as $12.5 \Gyr$ ago, and into any host halo as early as $13.2 \Gyr$ ago. \item Satellites at a given distance today experienced a large range of infall times into the MW-mass halo.
Thus, one cannot infer a precise infall time based on a satellite's present-day distance alone, and the use of total velocity, specific angular momentum, or specific orbital energy alone is similarly limited (Figures~\ref{fig:dynamics}, \ref{fig:infall_dz0}). \end{itemize} \textit{\item Testing a common expectation that the orbits of satellite galaxies shrink over time, that is, that a satellite's most recent pericentric distance is the minimum that it has experienced.} \begin{itemize} \item Most satellites at $z = 0$ with $\Mstar \lesssim 10^7 \Msun$ experienced more than one pericentre, while more massive satellites experienced only one (Figure~\ref{fig:peri_dn}), because of their later infall and dynamical friction. \item \textit{Contrary to the expectation that satellite orbits tend to shrink over time, most satellites that experienced 2 or more pericentres have grown in pericentre distance.} Of all satellites with $\Nperi \geq 2$, 67 per cent experienced a growing pericentre. This represents 31 per cent of all satellites. \item Typically, the minimum pericentre was $37$ per cent smaller than the most recent one, because the specific angular momentum increased by $30$ per cent (Figure~\ref{fig:torqued_hist}). This minimum pericentre typically occurred $\sim 6 \Gyr$ before the most recent one (Figure~\ref{fig:torqued_hist}). \item Satellites with growing pericentres orbited closer to the host ($\dperimin = 24 - 35 \kpc$) than those with shrinking pericentres. \item Perturbations at large distances likely contribute to these changes in satellite orbits, given the high fraction (71 per cent) of growing pericentres that were once a splashback satellite. However, we find no single dynamical origin: 53 per cent of satellites with growing pericentres experienced a large increase in $\ell$ during one or more apocentres, while 45 per cent experienced a gradual, steady increase in $\ell$.
This suggests that the growth of the MW-mass host halo over time may help slowly torque the satellites to larger orbits, such that their subsequent pericentres increase. We leave a more detailed investigation of this to future work. \end{itemize} \end{enumerate} \subsection{Inferring Infall Times from Present-day Properties} % \label{sec:infall} % We presented various trends of present-day properties, such as total velocity, specific angular momentum, $\ell$, specific energy, $E$, and distance from the host galaxy, $d$, with the lookback time of satellite infall, $\MWinfall$. The median trends in these present-day properties often correlate with $\MWinfall$. However, we stress that the distribution of infall times at fixed property spans a large range, limiting the ability to use a property like present-day distance to infer the infall time of a single satellite. For example, in Figure~\ref{fig:dynamics}, while the median specific energy decreases with increasing $\MWinfall$, for a satellite with a specific energy of $E = -1 \times 10^4 \kmsis$, the 68 per cent range in $\MWinfall$ is $1.5 - 10.5 \Gyr$ ago. Similarly, although the median specific angular momentum decreases with increasing $\MWinfall$, a satellite with $\ell = 2 \times 10^4 \kpc\kmsi$ fell in $1 - 9 \Gyr$ ago. Figure~\ref{fig:infall_dz0} shows that, for a satellite at $100 \kpc$ today, $\MWinfall \approx 5.5 - 10.5 \Gyr$, and for a satellite near the host virial radius, $d \approx 300 \kpc$, it experienced $\MWinfall \approx 2 - 8 \Gyr$ ago. Furthermore, across Figures~\ref{fig:dynamics} and \ref{fig:infall_dz0}, at a given satellite total velocity, $\ell$, $E$, or $d$, the \textit{full} distribution of infall times spans $\approx 13 \Gyr$, nearly the age of the Universe.
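The quoted 68 per cent ranges are percentile spreads of infall time within a narrow bin of a present-day property. A minimal sketch of that computation on a mock population; the correlation slope, scatter, and sample here are illustrative assumptions, not values fit to the simulations:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical mock population: specific orbital energy E (in units of
# 10^4 km^2 s^-2) and infall lookback time t_infall [Gyr]; a loose
# correlation with large scatter, purely for illustration.
E = rng.uniform(-3.0, 0.0, size=n)
t_infall = np.clip(-3.0 * E + rng.normal(0.0, 2.5, size=n), 0.0, 13.0)

# Select satellites near a fixed present-day energy, e.g. E ~ -1 x 10^4 km^2 s^-2.
in_bin = np.abs(E - (-1.0)) < 0.1

# 68 per cent (16th-84th percentile) range of infall times at fixed E.
t_lo, t_hi = np.percentile(t_infall[in_bin], [16, 84])
```

Even when the median trend is monotonic, the width of `t_hi - t_lo` in such a bin is what limits inferring an individual satellite's infall time from a single property.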
Thus, while these figures show trends in the median for a population of satellite galaxies, we caution that using any one of these present-day properties for a single satellite will not precisely determine its infall time into the MW-mass halo. In future work, we will explore how precisely one can infer infall time using full 6D phase-space information, including knowledge about the host potential. \subsection{Comparison to Previous Work} % \label{sec:sim_comp} % First, we re-emphasize that these FIRE-2 simulations broadly reflect the observed population of satellites in the LG. \citet{Wetzel16} and \citet{GarrisonKimmel19a} showed that their satellite stellar mass functions and internal velocity dispersions (dark-matter densities) broadly agree with the MW and M31. \citet{Samuel20} showed that their radial distance distributions broadly agree with the MW and M31, and with MW-mass galaxies from the SAGA survey. Furthermore, \citet{Samuel21} showed that, although uncommon, spatially or kinematically coherent planes of satellites exist in these simulations, similar to what is observed in the MW and M31. These benchmarks are important for motivating our analysis of their satellite orbits and histories. Our results agree with \citet{Wetzel15}, who examined similar trends of satellite infall against both $\Mstar$ and $d$, using the ELVIS suite of cosmological zoom-in DMO simulations, with abundance matching to assign stellar mass to subhalos across $\Mstar = 10^{3-9} \Msun$. The mass and spatial resolution in these simulations were $1.9 \times 10^5\Msun$ and $140$ pc, respectively, $\approx 5 \times$ and $\approx 3 \times$ larger than in our baryonic simulations. They found that satellites typically first fell into any more massive halo $\approx 6.5-10 \Gyr$ ago and into the MW-mass halo between $\approx 5-7.5 \Gyr$ ago.
These times since infall are consistent with the top panel of Figure~\ref{fig:times_mstar} (and the results in Figure~\ref{fig:times_mhalo}) for lower-mass satellites, but the lookback times of infall for satellites at $\Mstar \gtrsim 10^7 \Msun$ in our results are more recent than in \citet{Wetzel15}: $\lesssim 6 \Gyr$ ago for both infall metrics. As we show in Appendix~\ref{app:dmo}, the addition of baryonic physics, especially the additional gravitational potential of the MW-mass galaxy, causes stronger tidal mass stripping and disruption, and because the higher-mass satellites additionally feel stronger dynamical friction, we see only a few higher-mass satellites that happen to survive in our baryonic simulations. As a function of distance from the MW-mass host, \citet{Wetzel15} also found that satellites experienced first infall (into any more massive halo) $\approx 4 - 11 \Gyr$ ago and infall into their MW-mass halo $\approx 3-9 \Gyr$ ago, consistent with our results in Figure~\ref{fig:infall_dz0}. \citet{Rocha12} used the Via Lactea II DMO simulation and found a strong correlation between satellite orbital energy and infall time \citep[see also][]{Fillingham19, DSouza22}. The authors suggest that satellites that are deeper in the gravitational potential at $z = 0$ often fell in earlier than satellites farther out and have more negative orbital energies. We find qualitatively consistent values and dependencies of infall time with $d$ as in \citet{Rocha12}: satellites presently closer to the MW-mass galaxy fell in earlier and are on more bound orbits. \citet{Bakels21} used one of the $N$-body Genesis simulations, with mass and spatial resolution of $7.8 \times 10^6\Msun$ and $1.1 \kpc$, respectively ($\approx 200 \times$ and $\approx 30 \times$ larger than the resolution in our simulations), to study the infall histories of satellites.
They analyzed the orbits of (sub)halos of 2309 hosts with $M_{\rm 200c} \geq 0.67 \times 10^{12} \Msun$ and found that roughly 22 per cent of all subhalos are on first infall, much larger than the $\approx 7-8$ per cent in our sample. Furthermore, \citet{Bakels21} found that roughly 60 per cent of the splashback population of halos have yet to reach their first apocentre, and the majority ($\approx 86$ per cent) have only reached pericentre once, indicating that this population of satellites is on long-period orbits. Given the wide range in MW-mass $R_{\rm 200m}$, if we select satellites that are currently beyond $300 - 400 \kpc$ to represent the splashback population, we find comparable results: over 95 per cent of the satellites have only experienced one pericentre so far, and the remaining 5 per cent have only experienced two pericentres. For the subhalos that have experienced pericentre at least once and are currently inside of the host's virial radius, \citet{Bakels21} found that about half of them have only experienced one pericentre, and $\approx 30$ per cent have not yet reached apocentre. Our result in the bottom left panel of Figure~\ref{fig:peri_dn} is generally consistent with this, with a median pericentre number of $1-2$ for all satellites in our sample. However, of satellites that experienced at least one pericentre, we find that $\gtrsim 2/3$ of them completed more than one, which suggests that the satellites in our simulations completed more orbits. Finally, \citet{Bakels21} noted that roughly 95 per cent of the surviving subhalos were accreted since $z = 1.37$ ($9.1 \Gyr$ ago), where we generally see earlier infall: 95 per cent of our satellites fell into their MW-mass host since $z = 2.2$ ($10.7 \Gyr$ ago). Thus, the satellites that survive to $z = 0$ in our simulations fell in earlier, resulting in the larger fraction that have completed more orbits.
The differences in the first infall fractions, and the accretion times of satellite galaxies, between our results and \citet{Bakels21} are likely because of the differences in resolution between the FIRE-2 and Genesis simulations. Because the Genesis simulations have DM particle masses $\approx200\times$ larger, they necessarily resolve only more massive satellites. \citet{Fattahi20} used the cosmological baryonic zoom-in simulations of MW-mass galaxies from the Auriga project to investigate the $\MWinfall$ for surviving and destroyed low-mass galaxies, and their effect on the growth of the stellar halo. They also found that surviving satellites fell into their MW-mass halos more recently than the destroyed satellites, similar to the results in \citet{Panithanpaisal21} and \citet{Shipp22}, who used the same 13 FIRE-2 simulations in our analysis to investigate stellar stream progenitors, their orbits, and their detectability. \citet{DSouza21} also found similar results in their DMO satellite analysis. The analysis by \citet{Fattahi20} shows similar results to the top panel in our Figure~\ref{fig:times_mstar}, with more massive satellites falling in more recently. At $\Mstar = 10^6 \Msun$ and $\Mstar = 10^9 \Msun$, the authors report average infall lookback times of $\MWinfall = 7.8 \Gyr$ and $\MWinfall = 3.8 \Gyr$. Our results are broadly consistent, though shifted to more recent times, where satellites at $\Mstar = 10^6 \Msun$ and $\Mstar = 10^9\Msun$ fell into their MW-mass halo with mean infall lookback times of $\MWinfall \approx 6.5 \Gyr$ and $\MWinfall \approx 2.9 \Gyr$. We have only 3 satellites at $\Mstar \sim 10^9 \Msun$, a smaller sample than in \citet{Fattahi20}. The differences in infall times between satellites in FIRE-2 and Auriga may arise from differences in the stellar mass - halo mass relation at low masses \citep{Auriga, Hopkins18}.
Furthermore, the way in which each analysis averages satellite properties over the hosts may contribute to the differences in infall time, given that host galaxies with larger satellite populations will skew the results. \citet{Wetzel15} concluded that, in the ELVIS suite of DMO simulations, no present-day satellites were within the MW-mass halo's virial radius during the epoch of reionization at $z \gtrsim 6$. This implies that, for any satellites whose star formation quenched during that time, the MW environment was not the driving factor, so the effects of the MW halo environment and cosmic reionization are separable, in principle. We similarly conclude that no satellites at $z = 0$ were within their MW-mass halo virial radius during reionization. Although our resolution is still finite, the trend in Figure~\ref{fig:times_mstar} is relatively flat with mass, with no indication that it significantly increases for lower-mass satellites. Also, as we show in Appendix~\ref{app:mhalo}, we find similar infall trends in subhalos down to $\Mhp = 10^8 \Msun$, which would host ultra-faint galaxies (Figure~\ref{fig:times_mhalo}). Recently, \citet{Sand22} proposed that the ultra-faint galaxy Tucana B, whose nearest neighbor is $\approx 500 \kpc$ away, was likely quenched in an isolated environment by reionization. It has an old ($\approx 13.5 \Gyr$), metal-poor ($\rm [Fe/H] \approx -2.5$) stellar population, and no recent star formation. Thus, because of its distance to any other massive galaxy, old stellar population, and lack of star formation, \citet{Sand22} argued that Tucana B is an excellent candidate for a galaxy quenched by reionization. However, our results, and those in \citet{Wetzel15}, imply that no present-day satellites were within a MW-mass halo during reionization. Thus, selecting isolated galaxies \textit{today} does not necessarily make them cleaner probes of the effects of reionization.
Rather, satellites around MW-mass galaxies today provide similarly good candidates to study these effects. Using the ELVIS DMO simulations, \citet{Wetzel15} showed that many satellite galaxies first were pre-processed, for $0.5 - 3.5 \Gyr$, before falling into their MW-mass halo; $\approx 30$ per cent of satellites with $\Mstar = 10^{3-4} \Msun$ were members of another group during their infall into a MW-mass halo, and this fraction decreases to $\approx 10$ per cent at $\Mstar = 10^{8-9} \Msun$. Any time before their infall into the MW-mass halo, $\approx 60$ per cent of low-mass satellites were members of another more massive group, falling to $\approx 30$ per cent for high-mass satellites. Over our full sample of satellites, nearly 35 per cent fell into another more massive halo before falling into the MW-mass host, consistent with \citet{Wetzel15}. The fraction of pre-processed satellites in our results is also comparable to the DMO-based results from \citet{Li08}, who reported that $\approx 1/3$ of subhalos were pre-processed, though they selected subhalos down to $M_{\rm halo} \gtrsim 3 \times 10^6 \Msun$ to probe subhalos that may not host luminous galaxies. \citet{Bakels21} report that nearly half of all subhalos with $M_{\rm sub,acc} / M_{\rm host,200m} \sim 10^{-3}$ were pre-processed, and this fraction decreases with increasing subhalo mass. When specifically analyzing subhalos \textit{on first infall}, \citet{Bakels21} showed that as many as 40 per cent of subhalos were pre-processed prior to falling into their MW-mass halo, and this fraction increases for more massive host halos. More recently, \citet{DSouza21} used the ELVIS DMO simulations to study the times since infall of subhalos with $M_{\rm halo,peak} > 10^9 \Msun$ and how they were influenced by a massive merger ($>1:10$).
The distribution of times since infall for their surviving subhalos ranges over $0 - 12 \Gyr$, and the satellite $\MWinfall$ are peaked toward more recent values compared to the splashback population, which were accreted earlier. The full range of times since infall in our Figure~\ref{fig:times_mstar} (and Figures~\ref{fig:times_mhalo} and \ref{fig:dmo}) is consistent with the distribution in \citet{DSouza21}. Although \citet{DSouza21} did not specifically focus on the first infall of subhalos into other more massive satellites/subhalos, they investigated group infall of satellites and showed that the distribution of time since infall clusters with the timing of the massive merger (and is slightly clustered with lower-mass mergers, $>1:15$), with many subhalos becoming satellites of the massive merger $< 2.5 \Gyr$ before it first crossed the MW-mass host radius. \citet{Bakels21} showed that after first infall, subhalos generally lose orbital energy and reach apocentres that are $\approx 0.8 \times$ their turn-around radius, $r_{\rm ta}$, and all subsequent apocentres are typically comparable in distance. On the extreme ends, some subhalos gained or lost orbital energy and thus reached larger or smaller subsequent apocentres, respectively, analogous to our satellites with growing pericentres. Regarding the subhalos whose first apocentres deviate strongly from $r_{\rm ta}$, \citet{Bakels21} found that $\approx 2/3$ of the satellites with first apocentres $\gtrsim 3 r_{\rm ta}$, and $\approx 80$ per cent of the satellites that only reached $\lesssim 1/4 r_{\rm ta}$, were pre-processed. Roughly $1/3$ of the satellites with growing pericentres in our sample were pre-processed before falling into the MW, but they may also have orbited outside of the more massive halo before falling into the MW halo. Thus, it is unlikely that pre-processing is the only driving factor in the origin and orbital evolution of satellites with growing pericentres.
Both \citet{Panithanpaisal21} and \citet{Shipp22} used the same 13 MW-mass galaxies in the FIRE-2 simulations that we use here to investigate stellar stream properties. Stellar streams form via disrupted low-mass galaxies or star clusters; because they stretch throughout the halo before they completely disrupt, they retain information about their initial orbits. \citet{Shipp22} find that systems with smaller pericentres are more likely to form streams, and that the distribution of pericentres in the simulated streams is slightly smaller than that of the dwarf galaxies in our work. Furthermore, the authors suggest that not only are there differences in the orbital properties of present-day satellites and stellar streams, but the orbits of streams with fully or partially disrupted progenitors differ as well, highlighting the complex evolution of low-mass stellar systems. Finally, \citet{DSouza22} explored uncertainties associated with orbit modeling using the ELVIS DMO simulations. They suggested that using simple parametric models for the MW-mass host (and recently accreted LMC-like galaxy) results in errors that are comparable to the 30 per cent uncertainty in the halo mass of the MW. They also extensively studied the errors associated with modeling the potential of a recently accreted LMC-like galaxy, the initial conditions of the satellites, and the mass evolution of the MW-mass halo, and they show that each comes with errors comparable to or less than the uncertainties in using simple parametric potentials. Consistent with works like \citet{DSouza22}, our results highlight complications and limitations with idealized orbit modeling in a static, non-cosmological MW halo potential; most importantly, our results refute any expectation that the orbits of satellite galaxies always, or even generally, shrink over time.
In Santistevan et al., in prep., we will use our simulations to pursue orbit modeling of individual satellite histories to compare with idealized orbit modeling in a static host potential. \section*{Acknowledgements} % We greatly appreciate discussions with Nora Shipp and Pratik Gandhi throughout the development of this paper. We are also thankful for interesting discussion with Andrey Kravtsov on the stellar mass - halo mass relation. IBS received support from NASA, through FINESST grant 80NSSC21K1845. AW received support from: NSF via CAREER award AST-2045928 and grant AST-2107772; NASA ATP grant 80NSSC20K0513; and HST grants GO-14734, AR-15809, GO-15902, GO-16273 from STScI. JS was supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2102729. RES gratefully acknowledges support from NASA grant 19-ATP19-0068, from the Research Corporation through the Scialog Fellows program on Time Domain Astronomy, from NSF grant AST-2007232, and from HST-AR-15809 from the Space Telescope Science Institute (STScI), which is operated by AURA, Inc., under NASA contract NAS5-26555. We ran simulations using: XSEDE, supported by NSF grant ACI-1548562; Blue Waters, supported by the NSF; Frontera allocations AST21010 and AST20016, supported by the NSF and TACC; Pleiades, via the NASA HEC program through the NAS Division at Ames Research Center. \section*{Data Availability} % The python code that we used to analyze these data is available at \url{https://bitbucket.org/isantis/orbit\_analysis}, which uses the publicly available packages \url{https://bitbucket.org/awetzel/gizmo\_analysis}, \url{https://bitbucket.org/awetzel/halo\_analysis}, and \url{https://bitbucket.org/awetzel/utilities}. The FIRE-2 simulations are publicly available \citep{Wetzel22} at \url{http://flathub.flatironinstitute.org/fire}. Additional FIRE simulation data is available at \url{https://fire.northwestern.edu/data}. 
A public version of the GIZMO code is available at \url{http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html}. Finally, data values in each figure are available at \url{https://ibsantistevan.wixsite.com/mysite/publications}. \bibliographystyle{mnras} \bibliography{paper} \appendix \section{Trends with Peak Halo Mass} % \label{app:mhalo} % In Section~\ref{sec:results}, we investigated trends of satellite orbital dynamics and histories as a function of satellite $\Mstar$, which is the mass most directly observable. However, from Figure~\ref{fig:mstar_v_mhalo}, the DM (sub)halo mass of a satellite is $10^2 - 10^4 \times$ larger, so it is the one most important for dynamics. Here we investigate the same trends but as a function of a satellite's peak halo mass, $\Mhp$. We select \textit{all} subhalos with $\Mhp > 10^8 \Msun$, which includes both luminous and dark subhalos (with no stars). Thus, given the extrapolated abundance matching relations of \citet{Moster13, GarrisonKimmel17_smhm, Behroozi20} in comparison to our stellar mass selected sample in Figure~\ref{fig:mstar_v_mhalo}, this includes lower-mass subhalos that likely would host ultra-faint galaxies whose stellar masses our baryonic simulations do not natively resolve. For reference, the fraction of satellites in each mass bin that are luminous in our simulations is: 1 per cent for $\Mhp=10^{8-8.5}\Msun$, 14 per cent for $\Mhp=10^{8.5-9}\Msun$, 60 per cent for $\Mhp=10^{9-9.5}\Msun$, 92 per cent for $\Mhp=10^{9.5-10}\Msun$, and 100 per cent above $\Mhp>10^{10}\Msun$. Compared to our stellar mass selection, this increases our satellite sample size by a factor of $\approx 8.5$. Because (sub)halo mass is the most relevant dynamically, but a satellite with a given $\Mhp$ hosts a range of stellar masses given the scatter in the SMHM relation, trends with $\Mstar$ tend to be noisier. 
Figures~\ref{fig:mhalo_corr_dyn}, \ref{fig:times_mhalo}, and \ref{fig:nperi_vs_mhalo} all show qualitatively similar trends to those in Section~\ref{sec:results}. In particular, the trends at low halo mass in Figure~\ref{fig:mhalo_corr_dyn} show relatively flat dependence, and at $\Mhp\gtrsim10^{10.5}\Msun$, we see a more pronounced decline given the stronger dynamical friction at these masses. Trends with the lookback times of both infall metrics and the pericentre lookback times all qualitatively show similar results and offsets in Figure~\ref{fig:times_mhalo}, and the number of pericentric passages agrees with the stellar mass selection, though it is shifted to slightly smaller $\Nperi \approx 2$ for the smallest subhalos, compared to $\Nperi \approx 2.5$ at our lowest stellar masses. We do not show trends of pericentre distance, given the lack of a strong dependence on $\Mhp$, but we compare $\dperimin$ for satellites in baryonic versus DMO simulations in Appendix~\ref{app:dmo}. In summary, the trends using this halo-mass selected sample are qualitatively similar to the results presented throughout Section~\ref{sec:results}. Furthermore, our results here imply similar trends for ultra-faint galaxies, where no halo capable of hosting an ultra-faint galaxy, $\Mhp \approx 10^8\Msun$, was a satellite of the MW-mass host halo progenitor during the epoch of reionization, $z\gtrsim6$. Similar to the results in our stellar-mass selected sample, we also find that $<1$ per cent of the satellites in this halo-selected sample were members of a more massive halo during reionization. 
\section{Baryonic versus dark-matter-only simulations} % \label{app:dmo} % Here we compare our results from our FIRE-2 baryonic simulations against satellites in dark-matter-only (DMO) simulations of the same halos, to understand the effects of baryons and to contextualize previous results, given that many previous works investigated satellite orbits and infall histories in DMO simulations \citep[for example][]{Wetzel15, Bakels21, DSouza21, Robles21, Ogiya21}, which, among other things, do not model the potential from a central galaxy. Furthermore, stellar feedback in more massive satellites can reduce their inner dark-matter densities, making them more susceptible to tidal disruption \citep[for example][]{Bullock17}. Because DMO simulations include neither the tidal forces from a central galaxy nor this reduction of the dark-matter cusps within subhalos, tidal disruption can be much stronger in baryonic simulations. Recent studies also have used DMO simulations with an embedded disk-like potential \citep[for example][]{Kelley19, RodriguezWimberly19, Fillingham19, Robles21}. We compare only simulations that have DMO counterparts at all snapshots, which comprises the 7 MW-mass hosts in isolated environments (names beginning with `m12') in Table~\ref{tab:hosts}. As in Appendix~\ref{app:mhalo}, for all simulations we select all satellites with $\Mhp > 10^8 \Msun$, which includes both luminous and dark satellites in the baryonic simulations. In the DMO simulations, we re-normalize $\Mhp$ to account for the loss of baryons by multiplying by $1 - f_{\rm b}$, where $f_{\rm b} = \Omega_{\rm baryon} / \Omega_{\rm matter}$ is the cosmic baryon fraction. The total number of satellites in the DMO simulations is $\approx 1.6 \times$ higher.
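The $(1 - f_{\rm b})$ renormalization above can be sketched in a few lines; the density parameters below are assumed Planck-like values for illustration, not necessarily the exact cosmology of these simulations.

```python
# Sketch of the DMO (sub)halo mass renormalization described above.
# OMEGA_BARYON and OMEGA_MATTER are assumed Planck-like values.
OMEGA_BARYON = 0.048
OMEGA_MATTER = 0.31

def renormalize_dmo_mass(m_halo_dmo):
    """Scale a DMO (sub)halo mass by (1 - f_b), since a DMO run treats
    the cosmic baryon budget as collisionless dark matter."""
    f_b = OMEGA_BARYON / OMEGA_MATTER
    return m_halo_dmo * (1.0 - f_b)

print(renormalize_dmo_mass(1e10))  # ~8.45e9, in the same mass units
```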
Figure~\ref{fig:dmo} shows the lookback times of `first' infall into any other more massive halo, $\anyinfall$ (top left), specific angular momentum, $\ell$ (top right), the smallest pericentre experienced, $\dperimin$ (bottom left), and the number of pericentric passages about the MW-mass host, $\Nperi$ (bottom right), for satellites in the baryonic (red) and DMO (black) simulations. Solid lines show the median and the dark and light shaded regions show the 68th percentile and full distribution, respectively. Satellites in the DMO simulations do not feel the gravity of a central galaxy, so they experience weaker tidal stripping and disruption, even if they fell in early or orbit closer to the center of the halo. Thus, the (surviving) satellites in the DMO simulations generally fell in $0.5 - 3 \Gyr$ earlier than in the baryonic simulations. As a result of the surviving population falling in earlier, satellites in DMO simulations also orbit at smaller distances; they were able to orbit closer to the center of the MW-mass halo without becoming tidally disrupted, as the bottom left panel shows. Furthermore, surviving satellites have lower $\ell$ in DMO simulations, given that satellites with smaller $\ell$ in the baryonic simulations are likely to be tidally disrupted \citep[for example][]{GarrisonKimmel17}. Finally, because satellites in DMO simulations fell in earlier and orbit at smaller distances, they completed more pericentric passages (bottom right panel). We also see a small increase in $\Nperi$ with $\Mhp$, likely because higher-mass satellites in DMO simulations in particular can survive longer than in the presence of a central galaxy. Our results agree with \citet{GarrisonKimmel17}, who compared subhalo populations between DMO and FIRE-2 baryonic simulations using 2 of the same systems that we analyze (m12i and m12f). 
They also tested a DMO simulation with an analytic galaxy potential embedded within the host halo, finding good agreement with the baryonic simulations, which implies that the most important effect in the baryonic simulations is the additional gravitational effect of the MW-mass galaxy. They showed that the number of subhalos in the different types of simulations converges for subhalos that orbit farther from the center of the MW-mass halo. Thus, differences between DMO and baryonic simulations are largest for subhalos that orbit closer to the center of the host, because these subhalos are preferentially disrupted in the baryonic simulations, resulting in a satellite population with a larger fraction on more tangential orbits, with higher specific angular momentum. \bsp % \label{lastpage}
Title: Broadband X-ray Spectroscopy and Estimation of Spin of the Galactic Black Hole Candidate GRS 1758-258
Abstract: We present the results of a broadband (0.5-78 keV) X-ray spectral study of the persistent Galactic black hole X-ray binary GRS 1758-258 observed simultaneously by Swift and NuSTAR. Fitting with an absorbed power-law model revealed a broad Fe line and reflection hump in the spectrum. We used different flavours of the relativistic reflection model for the spectral analysis. All models indicate the spin of the black hole in GRS 1758-258 is >0.92. The source was in the low hard state during the observation, with the hot electron temperature of the corona estimated to be kT$_e$ ~ 140 keV. The black hole is found to be accreting at ~1.5 % of the Eddington limit during the observation, assuming the black hole mass of 10 $M_{\odot}$ and distance of 8 kpc.
\newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \newcommand{\eps}{erg s$^{-1}$} \newcommand{\ecs}{erg cm$^{-2}$ s$^{-1}$} \newcommand{\pcm}{cm$^{-2}$} \newcommand{\M}{$M_{\odot}$} \newcommand{\phc}{ph cm$^{-2}$ s$^{-1}$} \newcommand{\kms}{km s$^{-1}$} \newcommand{\nh}{$N_{\rm H}$} \newcommand{\gps}{g s$^{-1}$} \newcommand{\gpc}{g cm$^{-3}$} \newcommand{\ecps}{erg cm s$^{-1}$~} \newcommand{\source}{GRS~1758--258} \shorttitle{GRS 1758--258} \shortauthors{Jana et al.} \graphicspath{{./}{figures/}} \begin{document} \title{Broadband X-ray Spectroscopy and Estimation of Spin of the Galactic Black Hole Candidate GRS~1758--258} \author[0000-0001-7500-5752]{Arghajit Jana} \affiliation{Institute of Astronomy, National Tsing Hua University, Hsinchu, 30013, Taiwan} \author[0000-0002-5617-3117]{Hsiang-Kuang Chang} \affiliation{Institute of Astronomy, National Tsing Hua University, Hsinchu, 30013, Taiwan} \author[0000-0003-3932-6705]{Arka Chatterjee} \affiliation{Department of Physics and Astronomy, University of Manitoba, Manitoba, R3T 2N2, Canada} \author[0000-0003-2865-4666]{Sachindra Naik} \affiliation{Astronomy \& Astrophysics Division, Physical Research Laboratory, Navrangpura, Ahmedabad, 380009, India} \author[0000-0001-6189-7665]{Samar Safi-Harb} \affiliation{Department of Physics and Astronomy, University of Manitoba, Manitoba, R3T 2N2, Canada} \keywords{Accretion(14); Low mass X-ray binary stars(939); Black hole physics (159); Astrophysical black holes (98)} \section{Introduction} \label{sec:intro} Black hole X-ray binaries (BHXRBs) are powered by the accretion process, in which matter from the companion star gets accreted onto the central black hole (BH). In this process, gravitational energy is converted into radiation emitted across the electromagnetic spectrum. Depending on its X-ray activity, a BHXRB can be classified as either a transient or a persistent source \citep{Tetarenko2016}.
A transient source spends most of the time in a quiescent state with very low X-ray luminosity ($L_{\rm X} <10^{32}$ \eps) and occasionally undergoes an outburst phase, during which the X-ray luminosity increases to $L_{\rm X} > 10^{35}$ \eps. On the other hand, a persistent source is always active in X-rays, with $L_{\rm X} \sim 10^{35-37}$ \eps \citep[e.g.,][]{Tetarenko2016}. A black hole is characterized by its mass ($M_{\rm BH}$) and spin ($a^*$). Estimating the BH spin is harder than estimating the BH mass. Various direct methods exist to estimate the BH mass, such as radial velocity measurements of the secondary, or dips and eclipses in the light curves \citep[e.g.,][]{Kreidberg2012,Torres2019,Jana2022}. The BH mass can also be estimated from spectral modelling and timing analysis \citep[e.g.,][]{Kubota1998,Shaposhnikov2007,Shaposhnikov2009,AJ2016,AJ2020b,AJ2021c,DC2016}. The BH spin can be measured from X-ray spectroscopy via two approaches: the continuum fitting method \citep[CF; e.g.,][]{Zhang1997,McClintock2006,Steiner2014}, and the study of the Fe line through reflection spectroscopy \citep[e.g.,][]{Fabian1989,Miller2012,Reynolds2020}. Both methods require measuring the inner edge of the accretion disk, which extends down to the innermost stable circular orbit (ISCO). It has also been suggested that the black hole spin can affect the polarization state of the X-ray emission \citep[e.g.,][]{Schnittman2010,Dovciak2011}. In the CF method, the inner radius of the accretion disk is measured by fitting the thermal disk continuum with a general relativistic model \citep[e.g.,][]{Gierlinski2001,McClintock2006,Shafee2006}. This approach requires prior knowledge of the BH mass, distance and inclination angle. In contrast, no knowledge of the BH mass or distance is required to measure the spin with reflection spectroscopy.
In this method, the reflection of the coronal emission at the inner disk is studied. The essential features of the reflection spectrum are fluorescent Fe emission between $6.4-6.97$~keV and a reflection hump around $15-40$~keV. The accretion disk is optically thick and geometrically thin and extends down to the ISCO \citep{SS73}. Due to relativistic effects (Doppler shifts and gravitational redshift), the profile of the Fe line originating from the inner accretion disk is blurred. As the ISCO depends on the spin, the study of the blurred spectra allows us to estimate the spin of the BH \citep{Bardeen1972,Novikov1973}. BH spins have been estimated using the CF method, reflection spectroscopy, or both. The CF method has been used to estimate the spin for LMC~X--1 \citep{Mudambi2020}, LMC~X--3 \citep{Bhuvana2021}, and MAXI~J1820+070 \citep{Zhao2021}. Reflection spectroscopy has been used to estimate the spin for several BHs, e.g., MAXI~J1535--571 \citep{Miller2018}, XTE~J1908--094 \citep{Draghis2021}, Cygnus X--1 \citep{Tomsick2018}, MAXI~J1631--479 \citep{Xu2020}, and GX~339--4 \citep{Garcia2019}. Both reflection spectroscopy and the CF method have been used to measure the spin in a few BHs, e.g., GRS~1716--249 \citep{Tao2019}, LMC~X--3 \citep{AJ2021d}, and GX~339--4 \citep{Parker2016}. The majority of BHs are found to have prograde spin, i.e., the accretion flow rotates in the same direction as the BH. Only a few BHs are found to have retrograde spin, e.g., MAXI~J1659--152 \citep{Rout2020}, Swift~J1910.2--0546 \citep{Reis2013}, and GS~1124--683 \citep{Morningstar2014}. Measured BH spins span a wide range, and although the spin has been measured for a substantial number of BHs, the spins of many others remain unknown. GRS~1758--258 is a black hole X-ray binary located in the close vicinity of the Galactic centre. It was discovered by {\it GRANAT}/SIGMA in 1990 \citep{Syunyaev1991} and is one of the few persistent BHXRBs in our Galaxy.
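Since both methods hinge on the spin dependence of the ISCO, it is useful to recall the \citet{Bardeen1972} expression for the ISCO radius. A minimal numerical sketch (prograde orbits for $a > 0$):

```python
import math

def r_isco(a):
    """Bardeen et al. (1972) ISCO radius in gravitational radii (GM/c^2)
    for a BH with dimensionless spin a (prograde for a >= 0)."""
    z1 = 1.0 + (1.0 - a * a) ** (1.0 / 3.0) * (
        (1.0 + a) ** (1.0 / 3.0) + (1.0 - a) ** (1.0 / 3.0)
    )
    z2 = math.sqrt(3.0 * a * a + z1 * z1)
    sign = 1.0 if a >= 0.0 else -1.0
    return 3.0 + z2 - sign * math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

print(r_isco(0.0))  # 6.0 R_g for a Schwarzschild BH
print(r_isco(1.0))  # 1.0 R_g for a maximal Kerr BH
```

The steep decrease of $r_{\rm ISCO}$ with spin (6 $R_g$ at $a^*=0$ down to 1 $R_g$ at $a^*=1$) is what makes the relativistically blurred Fe line sensitive to $a^*$.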
The source has been observed at multiple wavelengths over the years \citep[e.g.,][]{Rodriguez1992,Mereghetti1994,Mereghetti1997,Smith2001,Keck2001,Pottschmidt2006,Lin2000,Luque2014}. It is considered to be a black hole based on its spectral and timing properties \citep{Sidoli2002}. GRS~1758--258 is predominantly found in the low hard state \citep[LHS;][]{Soria2011}. From RXTE monitoring, \citet{Smith2002} reported a state transition to the soft state in 2001, in which the $3-25$~keV flux decreased by over an order of magnitude. GRS~1758--258 also shows two extended radio lobes, which makes the system a microquasar \citep{Rodriguez1992}. For a long time, the companion star was not identified due to the dense stellar population in the field. Recently, a spectroscopic study suggested that the companion is likely an A-type main-sequence star \citep{Marti2016}. The orbital period of GRS~1758--258 is reported to be $18.45\pm0.10$~days \citep{Smith2002}. GRS~1758--258 is the least studied of the three persistent Galactic black hole binaries, and the mass and spin of its black hole are not yet known. In this paper, we aim to estimate the spin of the BH in GRS~1758--258 from broadband X-ray spectroscopy. The paper is organized as follows. We present the observations and data reduction techniques in \S2. In \S3, we present the analysis and results. Finally, we discuss our findings in \S4. \begin{table*} \centering \caption{Log of Observations of GRS~1758--258} \begin{tabular}{lccccc} \hline Instrument & Date (UT) & Obs ID & Exposure (ks) & Count s$^{-1}$\\ \hline NuSTAR & 2018-09-28 & 30401030002 & 42 & $17.39\pm0.02$\\ Swift/XRT & 2018-09-28 & 00088767001 & 1.7 & $ 7.88\pm0.07$\\ \hline \end{tabular} \label{tab:log} \end{table*} \section{Observations and Data Reduction} NuSTAR observed \source~ on September 28, 2018 for a total exposure of 42~ks (see Table~\ref{tab:log}).
NuSTAR is a hard X-ray focusing telescope consisting of two identical modules, FPMA and FPMB \citep{Harrison2013}. The raw data were reprocessed with the NuSTAR Data Analysis Software ({\tt NuSTARDAS}, version 1.4.1). Cleaned event files were generated and calibrated using the standard filtering criteria in the {\tt nupipeline} task and the latest calibration files available in the NuSTAR calibration database (CALDB)\footnote{\url{http://heasarc.gsfc.nasa.gov/FTP/caldb/data/nustar/fpm/}}. The source and background products were extracted by considering circular regions with radii of 60 arcsec and 90 arcsec, centred on the source coordinates and away from the source, respectively. The spectra and light curves were extracted using the {\tt nuproducts} task. We re-binned the spectra to 30 counts per bin using the {\tt grppha} task. Swift/XRT observed \source~ simultaneously with NuSTAR for an exposure of 1.7 ks in windowed timing (WT) mode. The XRT spectrum did not suffer from photon pile-up; in general, pile-up occurs in WT mode only if the count rate exceeds 100 counts s$^{-1}$ \citep{Romano2006}. The $0.5-10$~keV spectrum was generated using the standard online tools\footnote{\url{http://www.swift.ac.uk/user_objects/}} provided by the UK {\it Swift} Science Data Centre \citep{Evans2009}. For the present study, we used the simultaneous Swift/XRT and NuSTAR observations in the $0.5-78$~keV energy range. \section{Analysis and Result} \label{sec:analysis} The top panel of Figure~\ref{fig:lc} shows the $3-78$~keV light curve of \source~ from the NuSTAR observation; the bottom panel shows the variation of the hardness ratio (HR). We define the HR as the ratio of the count rate in the $6-30$~keV band to that in the $3-6$~keV band. We did not observe any significant variation in the count rate or HR during the observation period. Hence, we carried out the spectral analysis using the time-averaged spectrum.
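As a concrete illustration of the hardness-ratio definition above (a sketch only; the photon energies below are made up, not the actual NuSTAR events):

```python
# HR = (6-30 keV count rate) / (3-6 keV count rate), per the definition
# in the text. For a single time bin, counts stand in for count rates.

def hardness_ratio(energies_kev):
    """Compute HR from a list of photon energies (keV) in one time bin."""
    soft = sum(1 for e in energies_kev if 3.0 <= e < 6.0)
    hard = sum(1 for e in energies_kev if 6.0 <= e <= 30.0)
    return hard / soft if soft > 0 else float("nan")

# Three soft (3-6 keV) and three hard (6-30 keV) photons give HR = 1:
print(hardness_ratio([3.5, 4.2, 5.1, 7.0, 12.4, 25.0]))  # -> 1.0
```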
The spectral analysis was carried out with {\tt HEASARC}'s spectral analysis package {\tt XSPEC} version 12.8.2 \citep{Arnaud1996}. We used the \textsc{tbabs} model for the interstellar absorption, with the \textsc{wilm} abundances \citep{Wilms2000} and the cross-sections of \citet{Verner1996}. Generally, the X-ray spectra of BHXRBs can be approximated by multi-colour disk blackbody (MCD) and power-law components. Additionally, reprocessed emission may be observed: an Fe K$\alpha$ line at $\sim 6.4$~keV and a reflection hump at $\sim 15-40$~keV. We started our spectral analysis with the Swift/XRT+NuSTAR data in the $0.5-78$~keV energy range. As the MCD component may not be present in the LHS, we first attempted to fit the data with an absorbed power-law model with an exponential high-energy cutoff. This model did not give an acceptable fit, leaving clear signatures of an Fe K line and a reflection hump in the $5-8$~keV and $15-40$~keV energy ranges, respectively. Figure~\ref{fig:nu-spec} shows the $3-78$~keV NuSTAR spectrum in the top panel; for clarity, we show only the NuSTAR spectrum. The bottom panel of Figure~\ref{fig:nu-spec} shows the residuals in terms of the data/model ratio. To improve the fit, we added a Gaussian line to account for the Fe K line in the $5-8$~keV energy range. Although the fit improved, with $\Delta \chi^2 =91$ for 3 degrees of freedom (dof), it was still unacceptable, as the reflection hump remained clearly visible in the residuals at energies above $\sim 10$ keV. To probe the reprocessed emission, we employed the relativistic reflection model \textsc{Relxill} \citep{Garcia2013,Garcia2014,Dauser2014,Dauser2016} for further spectral analysis. We applied different flavours of the \textsc{Relxill} model, with different assumptions, to the $0.5-78$~keV XRT+NuSTAR spectra.
We used different members of the \textsc{Relxill} family of models, namely \textsc{Relxill}, \textsc{RelxillCp}, \textsc{RelxillD}, \textsc{RelxillLp} and \textsc{RelxillLpCp}, for the spectral analysis. Figure~\ref{fig:spec} shows the \textsc{RelxillLp} model fitted to the XRT+NuSTAR spectrum in the $0.5-78$~keV energy range in the top panel. The dot-dashed magenta and dashed green lines represent the primary emission and reprocessed emission, respectively. The corresponding residuals, in terms of the data/model ratio, are shown in the bottom panel. Figure~\ref{fig:del} shows the residuals obtained from the different models in different panels; the green and blue points represent the XRT and NuSTAR data, respectively, and the corresponding reduced $\chi^2$ is quoted in the inset of each panel. \subsection{Relxill} The \textsc{Relxill} model uses a cutoff power-law as the incident primary emission. In this model, the reflection strength is measured in terms of the reflection fraction ($R_{\rm refl}$), defined as the ratio of the reflected emission to the direct emission reaching the observer. A broken power-law emissivity profile is assumed in the \textsc{Relxill} model, with $E(r) \sim r^{-q_{\rm in}}$ for $r < R_{\rm br}$ and $E(r) \sim r^{-q_{\rm out}}$ for $r > R_{\rm br}$, where $E(r)$, $q_{\rm in}$, $q_{\rm out}$ and $R_{\rm br}$ are the emissivity, inner emissivity index, outer emissivity index and break radius, respectively. We used the \textsc{Relxill} model under two assumptions: first, keeping the inner and outer emissivity indices fixed at the default value, i.e. $q_{\rm in} = q_{\rm out} = 3$ (hereafter Relxill-1); and second, allowing the inner and outer emissivity indices to vary freely (hereafter Relxill-2). During our analysis, we fixed the outer disk radius at $R_{\rm out} = 1000~R_{\rm g}$. Both models gave a good fit, with $\chi^2$/dof = 1718/1618 and 1691/1617 for Relxill-1 and Relxill-2, respectively, with \textsc{Relxill-2} giving the better fit.
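The broken power-law emissivity profile can be sketched numerically; the default parameter values below are the Relxill-2 best-fit numbers quoted in this work, used purely for illustration.

```python
def emissivity(r, q_in=6.78, q_out=2.04, r_br=4.8):
    """Broken power-law disk emissivity E(r) (arbitrary normalization):
    r^-q_in inside the break radius r_br, r^-q_out outside.
    Radii are in gravitational radii; defaults follow the Relxill-2 fit."""
    q = q_in if r < r_br else q_out
    return r ** (-q)

# The profile falls off much faster inside the break radius:
inner_drop = emissivity(2.0) / emissivity(4.0)    # steep, index 6.78
outer_drop = emissivity(10.0) / emissivity(20.0)  # shallow, index 2.04
```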
Both models indicated a rapidly spinning black hole with spin parameter $a^* > 0.94$. The accretion disk is found to extend almost down to the ISCO, as we obtained $R_{\rm in} = 1.14^{+0.04}_{-0.03}$ $R_{\rm ISCO}$ and $1.13^{+0.02}_{-0.04}$ $R_{\rm ISCO}$ from Relxill-1 and Relxill-2, respectively. We could not constrain the cutoff energy ($E_{\rm cut}$) from these two models, as it pegged at the model's upper limit of $1000$~keV. The reflection is found to be weak, with $R_{\rm refl} \sim 0.23$ and $\sim 0.21$ from the \textsc{Relxill-1} and \textsc{Relxill-2} models, respectively. With \textsc{Relxill-2}, we obtained a steep inner emissivity index ($q_{\rm in}=6.78^{+0.05}_{-0.07}$) and a flat outer emissivity index ($q_{\rm out}=2.04^{+0.09}_{-0.06}$), with a break radius of $R_{\rm br}=4.8^{+1.5}_{-0.8}$ $R_g$. We also included a disk component in our spectral analysis to check whether a disk is present. The thermal disk component was modelled with \textsc{diskbb} \citep{Makishima1986}. We obtained an inner disk temperature of $kT_{\rm in}=0.18\pm0.12$~keV; the other spectral parameters remained the same. The addition of the disk component did not improve the fit for either the \textsc{Relxill-1} or the \textsc{Relxill-2} model. We checked whether the disk component is required with the {\tt FTOOLS} task \textsc{ftest}, which returned a probability of $\approx 1$, indicating that the disk component was not statistically required. Hence, we did not include the disk component in further analysis. \subsection{RelxillCp} We next applied the \textsc{RelxillCp} model. \textsc{RelxillCp} has an advantage over \textsc{Relxill}, as it directly estimates the coronal properties, namely the hot electron plasma temperature ($kT_{\rm e}$).
In \textsc{RelxillCp}, the primary incident spectrum is computed with the \textsc{nthcomp} model \citep{Z96,Zycki1999}, replacing the \textsc{cutoffpl} used in \textsc{Relxill}. The analysis with \textsc{RelxillCp} returned a good fit, with $\chi^2=1691$ for 1617 dof. The spin and inner disk radius were found to be $a^* = 0.97^{+0.01}_{-0.02}$ and $R_{\rm in} = 1.13^{+0.03}_{-0.03}$ $R_{\rm ISCO}$, respectively. We also obtained the temperature of the Compton corona, $kT_e = 134^{+82}_{-29}$~keV. \subsection{High Density Model: RelxillD} The \textsc{Relxill} and \textsc{RelxillCp} models assume a fixed accretion disk density of $n=10^{15}$ cm$^{-3}$, while \textsc{RelxillD} allows the density to vary. In this model, the incident primary emission is a cutoff power-law with the cutoff energy fixed at 300~keV. The spectral analysis with the \textsc{RelxillD} model returned a high spin of $a^* > 0.98$. We obtained a steeper inner emissivity index with the \textsc{RelxillD} model, $q_{\rm in}=7.09^{+0.07}_{-0.11}$, compared to the \textsc{Relxill} and \textsc{RelxillCp} models. The inclination angle of the inner disk is found to be higher with this high-density model, $i=37^{+2}_{-3}$ degrees. We obtained an upper limit on the disk density of $n<2\times10^{15}$ cm$^{-3}$. The detailed spectral analysis results are tabulated in Table~\ref{tab:my_label}. \subsection{Lamppost Geometry: RelxillLp and RelxillLpCp} In the \textsc{Relxill} model, no particular geometry of the corona is assumed. In the lamp post model, the corona is assumed to be a point source located above the BH \citep{Garcia2010,Dauser2016}. The \textsc{RelxillLp} and \textsc{RelxillLpCp} flavours of the \textsc{Relxill} family assume the lamp post geometry. The incident primary emission is either \textsc{cutoffpl} (\textsc{RelxillLp}) or \textsc{nthcomp} (\textsc{RelxillLpCp}). The height of the corona ($h$) is an input parameter in these models.
We obtained a good fit with both the \textsc{RelxillLp} and \textsc{RelxillLpCp} models, with $\chi^2=1715$ and $\chi^2=1726$ for 1619 dof, respectively. The coronal height is obtained to be $h=3.4^{+1.1}_{-0.7}$ $R_g$ and $h=3.7^{+0.9}_{-0.6}$ $R_g$ from the \textsc{RelxillLp} and \textsc{RelxillLpCp} models, respectively. The spin of the BH is obtained to be $a^*>0.92$ from the analysis with these models. The reflection is found to be stronger than with the other models, with $R_{\rm refl}\sim 0.46$. Table~\ref{tab:my_label} shows the detailed spectral analysis results for the \textsc{RelxillLp} and \textsc{RelxillLpCp} models. \subsection{Error Estimation} To calculate the uncertainties of the best-fitting spectral parameters, we ran a Markov Chain Monte Carlo (MCMC) analysis in {\tt XSPEC}\footnote{\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node43.html}}. The chains were run with eight walkers for a total of $10^6$ steps using the Goodman-Weare algorithm. We discarded the first 10000 steps of the chains, assumed to be the `burn-in' phase. The uncertainties were estimated with the `error' command at the 1.6$\sigma$ confidence level. Throughout the paper, we quote errors at the 1.6$\sigma$ level (90\% confidence), unless mentioned otherwise. Figure~\ref{fig:mcmc} shows the posterior distributions of the spectral parameters and the errors obtained with the \textsc{RelxillLp} model; the errors in Figure~\ref{fig:mcmc} are quoted at the 1$\sigma$ level. Models such as \textsc{relxill}, \textsc{relxillCp} and \textsc{relxillD} do not assume any coronal geometry, while \textsc{relxillLp} assumes the lamp-post geometry. As \textsc{relxillLp} provides a more physical picture of the coronal geometry than the other models, we used it for the MCMC analysis.
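The error estimation described above amounts to reading percentiles off the posterior chain. A self-contained sketch with a synthetic Gaussian chain (the numbers are illustrative, not the actual \textsc{RelxillLp} chain):

```python
import random

random.seed(1)
# Synthetic posterior chain for one parameter (e.g. the spin a*),
# standing in for the 10^6-step XSPEC chain; purely illustrative.
chain = [random.gauss(0.95, 0.01) for _ in range(110_000)]
samples = sorted(chain[10_000:])  # discard the burn-in steps

def percentile(sorted_vals, p):
    """Value below which a fraction p of the sorted samples lie."""
    idx = min(int(p * len(sorted_vals)), len(sorted_vals) - 1)
    return sorted_vals[idx]

median = percentile(samples, 0.50)
lo, hi = percentile(samples, 0.05), percentile(samples, 0.95)
# [lo, hi] is the 90 per cent (1.6-sigma) credible interval
```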
\begin{table*} \centering \caption{Spectral Analysis results} \label{tab:my_label} \begin{tabular}{lccccccc} \hline & CUTOFF & Relxill-1 & Relxill-2 & RelxillCp & RelxillD & RelxillLp & RelxillLpCp \\ \hline $N_{\rm H}$ ($10^{22}$ \pcm)& $0.86^{+0.18}_{-0.18}$ & $2.55^{+0.01}_{-0.01}$ & $2.56^{+0.02}_{-0.01}$ & $2.54^{+0.02}_{-0.02}$ & $2.53^{+0.02}_{-0.01}$ & $2.53^{+0.03}_{-0.05}$ & $2.52^{+0.02}_{-0.02}$\\ \\ $\Gamma$ & $1.56^{+0.02}_{-0.01}$ & $1.54^{+0.01}_{-0.01}$ & $1.53^{+0.02}_{-0.02}$ & $1.56^{+0.02}_{-0.02}$ & $1.54^{+0.02}_{-0.01}$ & $1.57^{+0.01}_{-0.02}$ & $1.53^{+0.02}_{-0.01}$\\ \\ $E_{\rm cut}$/$kT_{e}$ (keV) & $500^{+u}_{-28}$ & $>946$ & $>925$ & $134^{+82}_{-29}$ & $300^{\dagger}$ & $>911$ & $146^{+68}_{-32}$ \\ \\ $q_{\rm in}$ & -- & $3^{f}$ & $6.78^{+0.05}_{-0.07}$ & $6.51^{+0.06}_{-0.05}$ & $7.09^{+0.07}_{-0.11}$ & $-$ & $-$ \\ \\ $q_{\rm out}$ & -- & $3^f$ & $2.04^{+0.09}_{-0.06}$ & $1.99^{+0.08}_{-0.07}$ & $2.10^{+0.04}_{-0.06}$ & -- & --\\ \\ $R_{\rm br}$/$h$ ($R_g$) & -- & $10.8^{+2.5}_{-1.4}$ & $4.8^{+1.5}_{-0.8}$ & $5.3^{+1.7}_{-2.2}$ & $4.5^{+1.6}_{-0.8}$ & $3.4^{+0.6}_{-0.4}$ & $3.7^{+0.9}_{-0.6}$ \\ \\ $R_{\rm in}$ ($R_{\rm ISCO}$) & -- & $1.14^{+0.04}_{-0.03}$ & $1.13^{+0.02}_{-0.04}$ & $1.13^{+0.03}_{-0.03}$ & $1.15^{+0.04}_{-0.03}$ & $<1.04$ & $<1.04$\\ \\ $a^*$ & -- & $0.97^{+0.02}_{-0.01}$ & $0.96^{+0.02}_{-0.02}$ & $0.97^{+0.01}_{-0.02}$ & $>0.98$ & $0.97^{+0.02}_{-0.05}$ & $0.95^{+0.01}_{-0.03}$\\ \\ $i$ (degree) & -- & $31^{+2}_{-2}$ & $32^{+2}_{-4}$ & $29^{+2}_{-3}$ & $37^{+2}_{-3}$ & $27^{+2}_{-1}$ & $28^{+2}_{-2}$\\ \\ $\log(n)$ (log cm$^{-3}$) & -- & $15^{\dagger}$ & $15^{\dagger}$ & $15^{\dagger}$ & $<15.3$ & $15^{\dagger}$ & $15^{\dagger}$ \\ \\ $\log \xi$ (log \ecps)& -- & $3.68^{+0.03}_{-0.07}$ & $3.61^{+0.10}_{-0.12}$ & $3.71^{+0.14}_{-0.11}$ & $3.90^{+0.07}_{-0.15}$ & $3.62^{+0.10}_{-0.12}$ & $3.72^{+0.10}_{-0.13}$ \\ \\ $A_{\rm Fe}$ ($A_{\sun}$)& -- & $2.75^{+0.07}_{-0.13}$ & $3.16^{+0.19}_{-0.13}$ &
$3.16^{+0.10}_{-0.14}$ & $2.99^{+0.11}_{-0.15}$ & $3.17^{+0.11}_{-0.13}$ & $3.28^{+0.12}_{-0.11}$ \\ \\ $R_{\rm refl}$ & -- & $0.31^{+0.02}_{-0.04}$ & $0.23^{+0.02}_{-0.01}$ & $0.21^{+0.04}_{-0.03}$ & $0.16^{+0.02}_{-0.01}$ & $0.48^{+0.03}_{-0.04}$ & $0.46^{+0.03}_{-0.02}$\\ \\ $N_{\rm PL}$/$N_{\rm rel}$ ($10^{-3}$ \phc) & $0.12^{+0.01}_{-0.01}$ & $5.02^{+0.05}_{-0.03}$ & $4.75^{+0.06}_{-0.05}$ & $4.68^{+0.05}_{-0.07}$ & $4.61^{+0.03}_{-0.04}$ & $22.7^{+0.7}_{-0.5}$ & $22.7^{+0.5}_{-0.8}$\\ \\ $\chi^2$/dof & 1484/1165 & 1718/1618 & 1691/1617 & 1691/1617 & 1712/1617 & 1715/1619 & 1726/1619\\ \\ $\chi^2_{\rm red}$ & 1.273 & 1.062 & 1.045 & 1.047 & 1.059 & 1.059 & 1.066 \\ \\ $F_{\rm 2-10~keV}$ & $5.49^{+0.04}_{-0.03}$ & $5.69^{+0.04}_{-0.05}$ & $5.71^{+0.03}_{-0.04}$ & $5.70^{+0.02}_{-0.03}$ & $5.64^{+0.06}_{-0.05}$ & $5.68^{+0.03}_{-0.04}$ & $5.68^{+0.04}_{-0.05}$ \\ \\ \hline \end{tabular} \leftline{$^{\dagger}$ indicates a value fixed in the model. $^f$ indicates a value fixed during the analysis.} \leftline{The $F_{\rm 2-10~keV}$ is in units of $10^{-10}$~ergs cm$^{-2}$ s$^{-1}$.} \leftline{Errors are quoted at 1.6$\sigma$.} \end{table*} \section{Discussion and Concluding Remarks} We studied the spectral properties of \source~ using data obtained by the NuSTAR and Swift observatories in the $0.5-78$~keV energy range. We used various spectral models to understand the inner accretion flow during the observation period. Figure~\ref{fig:mcmc} shows the degeneracies between several parameters. The spin parameter ($a^*$) is observed to be correlated with the reflection fraction ($R_{\rm refl}$) and anti-correlated with the inner disk radius ($R_{\rm in}$). These trends are expected, as a high spin brings the disk closer to the BH and makes the reprocessed emission stronger. We also observed that the photon index ($\Gamma$) is anti-correlated with the ionization parameter ($\xi$).
This indicates that as the photon index decreases, the ionization increases, i.e. harder X-rays are more effective at ionizing the disk. In our analysis, the hydrogen column density was obtained to be $N_{\rm H} \sim 2.5 \times 10^{22}$ \pcm. This is higher than the previously reported value: \citet{Soria2011} found $N_{\rm H} \sim 1.5 \times 10^{22}$ \pcm. The discrepancy arises from the different abundances assumed during the spectral analysis: \citet{Soria2011} assumed the \textsc{angr} abundances \citep{Anders1989}, while we assumed the \textsc{wilm} abundances. We verified this by repeating the spectral fits with the \textsc{angr} abundances, which yielded $N_{\rm H} \sim 1.5 \times 10^{22}$ \pcm. The unabsorbed flux was obtained to be $F_{\rm 2-10~keV} \sim 5.7 \times 10^{-10}$ \ecs~ in the $2-10$~keV energy range. The bolometric flux ($F_{\rm bol}$; in the 0.1--500 keV energy band) is estimated to be $F_{\rm bol} \sim 2.6 \times 10^{-9}$ \ecs. From this, we calculated the bolometric luminosity as $L_{\rm bol} \sim 2\times 10^{37}$ \eps, assuming a source distance of 8~kpc \citep{Soria2011}. Taking the mass of the BH in \source~ as 10 $M_{\odot}$ \citep{Soria2011}, the Eddington ratio is $L/L_{\rm Edd} \sim 0.015$, i.e. $1.5\%$ of the Eddington limit during the NuSTAR observation. GRS~1758--258 is predominantly observed in a similar accretion state, with X-ray luminosity $L_{\rm X} \sim 0.01-0.03$ $L_{\rm Edd}$ \citep{Soria2011}. The spectral analysis indicated that the source was observed in the LHS. The spectrum was found to be hard, with a photon index of $\Gamma \sim 1.53-1.57$ from the different spectral models. We could not estimate the cutoff energy, as it was not constrained. The spectral fits with \textsc{RelxillCp} gave a coronal temperature of $kT_{\rm e} = 134^{+82}_{-29}$~keV.
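The luminosity arithmetic above can be reproduced directly; the constants are standard values, and the distance and BH mass are the assumed values quoted in the text.

```python
import math

KPC_TO_CM = 3.086e21        # cm per kpc
L_EDD_PER_MSUN = 1.26e38    # Eddington luminosity per solar mass, erg/s

f_bol = 2.6e-9              # bolometric flux, erg/cm^2/s (0.1-500 keV)
d = 8.0 * KPC_TO_CM         # assumed distance of 8 kpc, in cm
m_bh = 10.0                 # assumed BH mass in solar masses

l_bol = 4.0 * math.pi * d**2 * f_bol         # ~2e37 erg/s
edd_ratio = l_bol / (m_bh * L_EDD_PER_MSUN)  # ~0.016, i.e. ~1.5 per cent
```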
Assuming the lamp-post geometry, we obtained a corona temperature of $kT_{\rm e}=146^{+68}_{-32}$~keV. For completeness, we calculated the optical depth ($\tau$) of the corona using $\tau=\sqrt{\frac{9}{4}+\frac{m_{\rm e}c^2}{kT_{\rm e}} \frac{3}{(\Gamma-1)(\Gamma+2)}}-\frac{3}{2}$ \citep{Z96}. The optical depth is obtained to be $\tau = 1.33^{+0.31}_{-0.54}$ for the \textsc{RelxillCp} model. The lamp-post geometry gave $\tau=1.30^{+0.30}_{-0.37}$, consistent with the \textsc{RelxillCp} value. The observed coronal parameters are consistent with those of other BHs in the LHS \citep{Yan2020}. The height of the corona is obtained to be $h\sim 3.4$ $R_g$ and $h\sim 3.7$ $R_g$ from the lamp-post models \textsc{RelxillLp} and \textsc{RelxillLpCp}, respectively. We did not detect any signature of the accretion disk in our analysis of the $0.5-78$~keV Swift/XRT+NuSTAR spectra. We tested this by adding a disk component (\textsc{diskbb} in {\tt XSPEC}), but the F-test rejected its inclusion. This indicates that either the disk temperature was very low, or the disk normalization was low with a higher disk temperature. As the source was observed in the LHS, where a low disk temperature is expected \citep[e.g.,][]{RM06}, a low-temperature disk is the most probable reason for the non-detection of the thermal emission. We started the spectral analysis by fixing $q_{\rm in}$ and $q_{\rm out}$ at 3 (\textsc{Relxill-1}). In the \textsc{Relxill-2}, \textsc{RelxillCp} and \textsc{RelxillD} models, we allowed $q_{\rm in}$ and $q_{\rm out}$ to vary freely. The outer emissivity index ($q_{\rm out}$) is found to be $\sim 2$ for all three models, while the inner emissivity index is steeper, $q_{\rm in} \sim 6.5-7.2$. A steep inner emissivity profile is expected, as the illuminating photons will be strongly beamed and focused onto the inner accretion flow.
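Plugging the \textsc{RelxillCp} best-fit values into the optical-depth formula above gives a quick numerical check:

```python
import math

def coronal_optical_depth(gamma, kte_kev, me_c2_kev=511.0):
    """Optical depth of a thermal Comptonizing corona (Zdziarski et al.
    1996), from the photon index and the electron temperature in keV."""
    term = (me_c2_kev / kte_kev) * 3.0 / ((gamma - 1.0) * (gamma + 2.0))
    return math.sqrt(2.25 + term) - 1.5

tau = coronal_optical_depth(1.56, 134.0)  # ~1.33, as quoted in the text
```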
The different flavours and configurations of the \textsc{Relxill} model indicated that the accretion disk extends almost up to the ISCO. All models constrain the inner edge of the disk to $R_{\rm in} < 1.2$ $R_{\rm g}$. The lamp-post models (\textsc{RelxillLp} and \textsc{RelxillLpCp}) indicated that the disk extends even closer to the BH, with $R_{\rm in}<1.04$ $R_g$. The inclination angle of the inner accretion flow is obtained to be $i \sim 26^{\circ{}}-37^{\circ{}}$ from the different models. The high-density \textsc{RelxillD} model indicated a higher inclination, with $i = 37^{+2}_{-3}$ degrees, while the lamp-post models indicated a lower inclination, with $i\sim 28^{\circ{}}$. The reflection was found to be weak, with reflection fractions ($R_{\rm refl}$) of $R_{\rm refl} \sim 0.2$ and $\sim 0.16$ from the \textsc{Relxill-2} and \textsc{RelxillD} models, respectively. The different geometry assumed in the lamp-post models yields a stronger reflection, with $R_{\rm refl}\sim 0.46$. The spectral fits with the \textsc{Relxill} and \textsc{RelxillCp} models indicated an iron abundance of $A_{\rm Fe} \sim 2.7-3.2$ $A_{\sun}$. The high-density \textsc{RelxillD} model indicated a similar iron abundance, with $A_{\rm Fe} \sim 3$ $A_{\sun}$. The \textsc{RelxillLp} and \textsc{RelxillLpCp} models also indicated similar iron abundances, with $A_{\rm Fe} = 3.17^{+0.11}_{-0.13}$ $A_{\sun}$ and $A_{\rm Fe}=3.28^{+0.12}_{-0.11}$ $A_{\sun}$, respectively. It is often observed that fitting with a low-density disk model gives a high iron abundance \citep[e.g.,][]{Tomsick2018}. Various studies of Cygnus X--1 with a constant low-density model yield iron abundances of $A_{\rm Fe}>9$ $A_{\sun}$ \citep[e.g.,][]{Walton2016,Basak2017}. \citet{Tomsick2018} showed that the same spectra can be fitted with $A_{\rm Fe}=1$ $A_{\sun}$ if the density of the disk is $n\sim 4\times10^{20}$ cm$^{-3}$. 
In the \textsc{Relxill}, \textsc{RelxillCp}, \textsc{RelxillLp} and \textsc{RelxillLpCp} models, the disk density is fixed at $n=10^{15}$~cm$^{-3}$, while the density is a free parameter in the \textsc{RelxillD} model. The spectral analysis of \source~ with the \textsc{RelxillD} model returned a disk density of $n < 2\times10^{15}$~cm$^{-3}$. We re-analyzed the data keeping $A_{\rm Fe}$ fixed at 1. The fit became significantly worse, with $\Delta \chi^2>300$ for 1 dof and $n \sim 10^{18}$~cm$^{-3}$. Hence, a high-density solution is not required for \source. The spin of the BH in \source~ is estimated to be very high, with $a^* > 0.93$. \textsc{Relxill-1}, \textsc{Relxill-2} and \textsc{RelxillCp} indicated the spin of the BH to be $a^* = 0.97^{+0.02}_{-0.01}$, $0.96^{+0.02}_{-0.02}$ and $0.97^{+0.01}_{-0.02}$, respectively. The \textsc{RelxillD} model indicated an even higher spin, with $a^* > 0.98$. We obtained the spin of the BH as $a^* = 0.97^{+0.02}_{-0.05}$ and $0.95^{+0.01}_{-0.03}$ from the \textsc{RelxillLp} and \textsc{RelxillLpCp} models, respectively. We also checked whether a low-spin solution is possible for \source. We fixed the spin at 0 and re-analyzed the $0.5-78$~keV XRT+NuSTAR data with all the models. All models returned a significantly worse fit, with $\Delta \chi^2>60$ for 1 dof. Hence, a high-spin solution is preferred for \source. In our analysis, we obtained a good fit from all reflection-based models, with $\chi^2_{\rm red}\sim 1.04-1.07$. Statistically, \textsc{Relxill-2} and \textsc{RelxillCp} returned a better fit than the other variants of the model, with $\Delta \chi^2 \sim 20$. Nonetheless, our analysis shows that GRS~1758--258 hosts a rapidly spinning BH, with $a^*>0.92$. 
In the future, high-resolution spectroscopy missions, such as \textit{XRISM} \citep{Tashiro18}, \textit{ATHENA} \citep{Nandra2013}, \textit{AXIS} \citep{Mushotzky2018}, \textit{Colibr\'i} \citep{Heyl2019, Caiazzo2019}, and \textit{HEX-P} \citep{Madsen2018}, are expected to constrain or reconfirm the black hole spin of \source~more accurately. \begin{acknowledgments} We sincerely thank the anonymous reviewer for constructive suggestions which improved the manuscript significantly. AJ and HK acknowledge the support of grants from the Ministry of Science and Technology of Taiwan with the grant numbers MOST 110-2811-M-007-500 and MOST 111-2811-M-007-002. HK acknowledges the support of the grant from the Ministry of Science and Technology of Taiwan with the grant number MOST 110-2112-M-007-020. AC and SSH are supported by the Canadian Space Agency (CSA) and the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Discovery Grants and the Canada Research Chairs programs. This research has made use of the {\it NuSTAR} Data Analysis Software ({\tt NuSTARDAS}) jointly developed by the ASI Space Science Data Center (ASSDC, Italy) and the California Institute of Technology (Caltech, USA). This work made use of XRT data supplied by the UK Swift Science Data Centre at the University of Leicester, UK. \end{acknowledgments} \vspace{5mm} \facilities{Swift, NuSTAR.} \software{\textsc{xspec}; \textsc{nustardas}; \textsc{python}; \textsc{corner.py}; \textsc{astropy}; \textsc{scipy}; \textsc{matplotlib}; \textsc{grace}.} \bibliography{bhxrb}{} \bibliographystyle{aasjournal}
Title: Identifying glitches near gravitational-wave events using the Q-transform
Abstract: We present a computational method to identify glitches in gravitational-wave data that occur nearby gravitational-wave signals. We flag any excess in the data surrounding a signal and compute the probability of such an excess occurring in Gaussian noise. We validate that the probabilities reported by this tool are self-consistent in colored Gaussian noise as well as data that contains a gravitational-wave event after subtracting the signal using the best-fit template. Furthermore, we compare our glitch identification results for events from LIGO-Virgo's third observing run against the list of events that required glitch mitigation. Finally, we discuss how the precise, automated information about the data quality surrounding gravitational-wave events this tool provides can be used to improve astrophysical analyses of these events.
https://export.arxiv.org/pdf/2208.12338
\title[Identifying glitches using the Q-transform]{Identifying glitches near gravitational-wave events using the Q-transform} \author{Leah Vazsonyi and Derek Davis} \address{LIGO, California Institute of Technology, Pasadena, CA 91125, USA} \ead{lvazsony@caltech.edu, dedavis@caltech.edu} \date{today} \section{Introduction} Since the initial gravitational-wave detection GW150914, gravitational-wave interferometers have provided a new method of observing the universe through gravitational waves, having successfully observed a plethora of compact binary coalescence (CBC) signals so far~\cite{TheLIGOScientific:2014jea, VIRGO:2014yos, GW150914_paper, GWTC-1, GWTC-2, GWTC-2.1, GWTC-3}. These signals allow us to observe astrophysical phenomena through an entirely new spectrum beyond electromagnetic waves, providing the opportunity for ``multi-messenger astronomy,'' wherein we observe events using both gravitational and electromagnetic waves~\cite{GBM:2017lvd}. Gravitational-wave interferometer data contains numerous noise artifacts, called ``glitches''~\cite{LIGO:2021ppb,Virgo:2022fxr,KAGRA:2020agh}. These glitches can be caused by a variety of sources such as earthquakes~\cite{Schwartz:2020pso,Figura:2022btt}, lightning strikes~\cite{thunder_LIGO,Washimi:2021ogz}, or nearby human activity~\cite{AdvLIGO:2021oxw, Virgo:2022ypn}. Instrumental issues in the detectors themselves may also cause glitches~\cite{Accadia:2010zzb,Soni:2020rbu}. If any of these glitches excite the detector at similar frequencies as binary coalescences, they may interfere with any true signal detected around that same time and bias analyses~\cite{Pankow:2018qpo, Powell:2018csz, Macas:2022afm, Payne:2022spz, Davis:glitch_sub}. Not only must we be able to sufficiently distinguish real gravitational-wave events from glitches, but also to understand the glitches well enough to recover portions of the gravitational-wave signal that may overlap with the glitch. 
Doing so will enable us to extract the gravitational-wave signal successfully, even when a loud glitch occurs nearby. For this reason, a variety of methods have been developed to subtract or otherwise mitigate glitches before analyzing gravitational-wave strain data~\cite{Cornish:2014kda,Wei:2019zlc,Zackay:2019kkv,Torres-Forne:2020eax,Cornish:2020dwh,Chatziioannou:2021ezd, Hourihane:2022doe,Merritt:2021xwh}. Following the third LIGO-Virgo observing run (O3)~\cite{GWTC-2, GWTC-2.1, GWTC-3}, improvements to the detector and data analysis tools allow us to be more sensitive to the power measured by the detector. This improved sensitivity means that the gravitational-wave signals will be louder and potentially clearer, but we expect some noise artifacts to be louder as well~\cite{LIGO:2021ppb,Virgo:2022fxr,KAGRA:2020agh}. Thus, some previous glitch issues may be exacerbated in future observing runs. In order to prevent glitches from biasing analyses of gravitational-wave signals, our first step is to identify whether there is any excess power that needs to be mitigated. Generically, this excess power may occur due to a glitch or a signal, and it is sometimes not immediately clear which is causing the power. Although methods exist to identify glitches~\cite{Robinet:2020lbf, Macleod:2021, kleinewelle}, none is currently able to quickly assign a significance to the presence of a glitch. It is also common to identify glitches by visually inspecting images, which is a highly subjective manner of classification. Instead, it would be beneficial to statistically determine whether or not the noise is Gaussian; true glitches are caused by specific phenomena rather than the usual background in our observing runs, which follows a normal distribution. A statistical test also provides a more objective measure and definition of a glitch threshold. 
In this work, we introduce a new method to identify glitches using the Q-transform~\cite{Chatterji_thesis,Chatterji:2004qg} by rapidly computing a p-value, which measures the significance of any excess power in the data. Under the assumption that the usual noise background is Gaussian, we can categorize any excess power that does not follow this expected distribution as a glitch and assign it a p-value. We validate this computation by testing its performance on (1) simulated gravitational-wave signals injected on a Gaussian background, (2) simulated gravitational-wave signals injected near a glitch, and (3) real gravitational-wave signals from O3~\cite{GWTC-2.1, GWTC-3}, some of which occur near a glitch. In each case, we also perform a significance test wherein we subtract the gravitational-wave signal from the data using a template generated by parameters recovered from that signal. Our paper will proceed as follows: Section \S \ref{sec:background} discusses methods to identify glitches, describing the mathematical background for computing p-values applicable to both gravitational-wave signals and glitches. This section includes a discussion of how we address clusters of tiles from the same glitch and how we account for the presence of an astrophysical signal in the data. Sections \S \ref{validation:gauss} and \S \ref{results:gauss} discuss the first set of validation tests, in which we use a catalog of injections and recoveries to mimic gravitational-wave signal detections. We perform this test on a Gaussian background to check that we are able to recover real signals effectively under ideal conditions. Sections \S \ref{validation:glitch} and \S \ref{results:glitch} discuss the second set of validation tests, which involve injecting a gravitational-wave signal near a loud glitch. 
In this case, we ensure that the data quality flag accurately identifies both the signal and the glitch, and that the subtraction can recover the signal. Sections \S \ref{validation:real} and \S \ref{results:real} present our third validation test, in which we use real signals from O3 and subtract the waveforms using the official recovery parameters for each event. This test is effectively a robustness check of our first test, but with a noisier background and no knowledge of the ``correct'' parameters of the true signal. Finally, section \S \ref{conclusion} presents our concluding remarks, describes what we have learned from these tests, and discusses potential directions for future work. \section{Glitch Identification Methods}\label{sec:background} \subsection{Identifying Loud Tiles} After measuring the gravitational-wave strain in the detector, we need some way of distinguishing excess power in the data corresponding to glitches from the expected fluctuations of Gaussian noise. To do so, we analyze our data in the time-frequency domain, creating spectrograms generated using the Q-transform. The continuous Q-transform is given by~\cite{Chatterji_thesis,Chatterji:2004qg} \begin{equation} x(\tau, f) = \int_{-\infty}^{+\infty} x(t) w(t-\tau, f) e^{-2\pi i f t} dt \text{ ,} \end{equation} where $w(t-\tau, f)$ is a window centered at a given time $\tau$. This window depends on the quality factor Q, defined as the frequency over the bandwidth, i.e. \begin{equation} Q = \frac{f}{\delta f} \text{ .} \end{equation} However, gravitational-wave data is measured in discrete time samples, and hence we use a discretized version of the Q-transform. This modified discretized transform is given by \begin{equation} x[m, k] = \sum_{n = 0}^{N-1} x[n] w[n-m, k] e^{-2\pi i n k / N} \text{ ,} \end{equation} creating a map of tiles, each of which corresponds to a specific time and frequency, where its value of interest is the energy at that tile. 
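The discretized transform above can be sketched directly in NumPy. The Gaussian window below is an illustrative stand-in for the Q-dependent window used in practice (e.g.\ by \texttt{gwpy}), chosen only to show how the energy of a single time-frequency tile is computed:

```python
import numpy as np

def q_tile_energy(x, m, k, sigma):
    """Energy of one time-frequency tile of the discretized Q-transform:
    x[m, k] = sum_n x[n] w[n - m, k] exp(-2*pi*i*n*k/N).

    A Gaussian window of width `sigma` samples stands in for the
    Q-dependent window (illustrative choice, not the production window).
    """
    N = len(x)
    n = np.arange(N)
    w = np.exp(-0.5 * ((n - m) / sigma) ** 2)        # window centred at time m
    coeff = np.sum(x * w * np.exp(-2j * np.pi * n * k / N))
    return np.abs(coeff) ** 2                         # tile energy
```

Scanning over $m$ and $k$ (and over Q, via the window width) builds up the full map of tiles; a sinusoid at DFT bin $k_0$ produces a tile energy at $k = k_0$ that is many orders of magnitude above neighbouring off-resonance tiles.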
The set of tiles produced by this method is referred to as a ``Q-gram.'' In this work, we rely on the Q-transform and Q-gram implemented in \texttt{gwpy}~\cite{Macleod:2021}. We note here that it is common to compute the discretized Q-transform for multiple values of Q within a given range and to choose the value of Q whose Q-gram contains the largest individual tile energy. We find later that this choice affects the distribution of our p-values, so we subsequently fix Q. We therefore use the Q-transform to take the strain data and compute the energy observed as a function of time and frequency. If there is a specific tile with more energy than expected from the Gaussian background, we refer to this as a ``loud tile.'' These tiles could arise due to a gravitational-wave signal or a glitch. An example of a Q-gram and a spectrogram is shown in Fig.~\ref{fig:spectrogram}. This figure shows data from around both a gravitational-wave signal and a glitch. Although the Q-gram and spectrogram visualizations are similar, the spectrogram interpolates the Q-gram data in order to estimate the energy at all time-frequency points. We use the measured energies of the individual tiles in a Q-gram throughout this work. \spectroplot \subsection{Computing the P-value} In order to determine whether the data contains a glitch or not, we assign a significance to each tile in our spectrogram. By the central limit theorem, we expect the wavelet coefficients to converge to a Gaussian distribution; the tile energies, being squared magnitudes of these coefficients, should therefore follow an exponential distribution given by~\cite{Chatterji_thesis,Chatterji:2004qg} \begin{equation}\label{eq:probval} P(\epsilon > \epsilon[m, k]) \propto \mathrm{exp}(-\epsilon[m, k]/\mu_k) \text{ ,} \end{equation} where $k$ refers to the specific frequency and $\mu_k$ is the mean tile energy at the given frequency. The $m$ value is related to the time of the event from discretizing the continuous Q-transform. 
We use this expected distribution to assign a probability, and hence significance, to each tile. This distribution is shown in Fig.~\ref{fig:distfig}. We fit the distribution of energies as suggested by Eq.~\ref{eq:probval}, allowing for some deviation from the theoretical prediction and ensuring that the fit is correctly normalized, according to \begin{equation}\label{eq:fit} P(\epsilon) = A e^{-\epsilon t} \text{ ,} \end{equation} where $\epsilon$ is the energy of a given tile and $A$ and $t$ are our fit parameters. \distfig To calculate the significance, we compute the p-value for the null hypothesis that the distribution of energies of the tiles in the Q-gram is consistent with the distribution expected from Gaussian noise. To start, we find the expected number of tiles above the energy of a given tile, based on the size of the Q-gram and the fitted exponential model. We compute this by multiplying the size of the Q-gram by the integral of the probability that the energy is greater than or equal to the magnitude of that tile. This probability is given by \begin{equation} \label{probofe} \begin{split} \tilde{P}(\epsilon > \epsilon_0) & = \int _{\epsilon_0} ^\infty P(\epsilon ') d \epsilon ' \\ & = \int _{\epsilon_0} ^\infty \frac{A e^{-\epsilon ' t}}{N} d \epsilon ' \\ & = \frac{A e^{-\epsilon_0 t}}{tN} \text{ .} \end{split} \end{equation} Here, $\epsilon_0$ refers to some given threshold energy, $A$ and $t$ are our fit parameters, and $N$ is the number of tiles in the Q-gram. $\tilde{P}(\epsilon > \epsilon_0)$ is the probability that our Q-gram contains a tile above the threshold energy, and $P(\epsilon)$ is the normalized probability of a given energy given in Eqs.~\ref{eq:probval} and \ref{eq:fit}. After computing the probability for each tile, we compute the probability of the Q-gram as a whole by modelling the number of loud tiles as a Poisson distribution whose mean is the expected number of tiles. 
Assuming that the measured tile energies in Gaussian noise are a Poisson process, the probability of observing at least one tile above a fixed energy is simply based on the rate of tiles above that energy and the total number of trials. We use the probability computed according to Eq.~\ref{probofe} as the rate and the number of tiles in the Q-gram as the number of trials. Finally, we have that the probability of observing $\epsilon_0$ as the loudest tile in our Q-gram is based on the rate of occurrence, $\lambda$, and the length of data considered, $\tau$: \begin{equation} \begin{split} P(\mathrm{Q\hbox{-}gram } \max{\epsilon} = \epsilon_0) &= 1 - e^{-\lambda \tau} \\ &= 1 - \mathrm{exp}\left[ -\left( N\frac{A}{tN}e^{-\epsilon_0 t}\right) \left( N \right) \right] \\ &= 1-\mathrm{exp}\left[ -\frac{AN}{t}e^{-\epsilon_0 t} \right] \text{ .} \end{split} \end{equation} To find the glitch times, we first choose a p-value threshold and solve for the tile energy that has this probability of being observed. Any tiles with energies larger than this threshold energy are assumed to be due to non-Gaussian features in the data, i.e., glitches. Any times corresponding to those tiles are flagged as ``glitch times.'' \subsection{Identifying Clusters} The computation has so far considered each tile independently; in reality, if there are a number of significant tiles in a region, there may be other tiles that are part of the same glitch but are not statistically significant on their own. Thus, we include a clustering feature that includes these nearby loud tiles in the same data segment. To identify clusters, we first identify all tiles that meet some ``global'' p-value threshold. We then repeat the same analysis in a small time window around each of these tiles using a lower p-value threshold. For example, we can use 0.01 as the global p-value threshold but then use 0.2 as the second, lower threshold for clustering. 
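A minimal sketch of this two-threshold clustering is given below; the tile structure, function name, and thresholds are illustrative and do not reflect the tool's actual interface:

```python
def cluster_loud_tiles(tiles, p_global, p_local, window):
    """Sketch of two-threshold clustering (hypothetical helper).

    `tiles` is a list of (time, p_value) pairs.  Tiles with a p-value
    below `p_global` seed clusters; tiles below the looser `p_local`
    threshold within `window` seconds of any seed join the same segment.
    """
    seeds = [t for t, p in tiles if p < p_global]
    flagged = set()
    for seed in seeds:
        for t, p in tiles:
            # looser threshold, but only near an already-significant tile
            if p < p_local and abs(t - seed) <= window:
                flagged.add(t)
    return sorted(flagged)
```

With a global threshold of 0.01 and a clustering threshold of 0.2, a marginal tile half a second from a highly significant one is swept into the flagged segment, while an equally marginal tile far from any seed is ignored.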
All tiles that meet this new threshold are included in the list of statistically significant tiles. The entire section of tiles is then categorized as a ``segment'' and flagged as a glitch. \subsection{Subtracting the Signal} Since the test is generically sensitive to excess power, it would identify both glitches and astrophysical signals as loud tiles. To ensure that any flagged tiles are non-astrophysical, it is possible to first subtract the astrophysical waveform and then analyze the residual. To do this, we first need a reasonable estimate of the signal parameters from a different source. In practice, this test would be utilized after an event is first identified by a search algorithm, which analyzes the data with hundreds of thousands of templates to identify the best match. We can then use these best-match parameters to subtract the signal. Even with the best-fit parameters, care must be taken to ensure that the signal is precisely subtracted. In order to account for different signal processing methods or conventions between the analysis that identifies the signal parameters and our analysis, we use matched filtering to identify the time of the signal. We assume that the peak of the matched-filter signal-to-noise ratio (SNR) time series is the time of the signal we are trying to subtract. However, this introduces a different potential concern, as it is possible that the presence of a glitch in the data could bias the peak of the SNR time series. If we only assume that the time of the signal is during our analysis segment, it is possible that the peak of the SNR time series is actually a glitch that is not near the actual signal. We can address this by only considering peaks in a small time window around our initial estimate of the time of the signal. This still does not entirely remove the bias, as a glitch directly overlapping a signal could still bias the recovered time of arrival of the signal. 
However, in this case, the likely outcome is that we would still identify non-Gaussian features in the data, which could then be further investigated. To demonstrate the importance of this timing information, we perform tests with and without enforcing that the peak of the SNR time series is within a small time window of our initial estimate. \section{Validation Tests} \subsection{Gaussian Tests}\label{validation:gauss} Our initial set of tests represents an idealized form of recovering a gravitational-wave signal. We generate colored Gaussian noise with \texttt{Bilby}~\cite{Ashton:2018jfp} with the same power spectral density as a representative stretch of O3 data from LIGO Livingston. Using this simulated time series, we inject a gravitational-wave signal, which we then seek to recover. Unless otherwise specified, we consider 8~s of data around the time of interest for all tests. To approximate the process of recovering an astrophysical signal, we use the results of an injection campaign with \texttt{PyCBC}~\cite{pycbc_release, Usman:2015kfa}. For our simulated signal tests, we inject a waveform with the template parameters that were used to inject the signal in the \texttt{PyCBC} campaign, but then attempt to subtract the signal using the parameters that \texttt{PyCBC} recovered from the injection. Furthermore, we only consider injections that were identified by \texttt{PyCBC} with a false alarm rate of less than $1\,\text{yr}^{-1}$. This process ensures that we are not assuming knowledge of the waveform parameters, which one does not have for astrophysical signals, and mirrors what would occur in practice for astrophysical signals. In subtracting the signal, we use a matched filter to find the peak SNR to determine the correct time of the subtraction. Throughout this process, we generate three time series, each of which we analyze separately. 
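The timing safeguard described above, restricting the matched-filter peak search to a window around the search pipeline's estimated event time, can be sketched as follows; the function name and window size are illustrative, not the pipeline's actual implementation:

```python
import numpy as np

def constrained_peak_time(times, snr, t_est, half_window):
    """Return the SNR time-series peak restricted to |t - t_est| <= half_window.

    Sketch of the safeguard described in the text: searching the whole
    segment for the global peak can lock onto a loud nearby glitch, so
    we only accept peaks close to the estimated signal time.
    """
    mask = np.abs(times - t_est) <= half_window
    masked_snr = np.where(mask, np.abs(snr), 0.0)   # zero outside the window
    return times[np.argmax(masked_snr)]
```

With a glitch peak of SNR 30 two seconds away from a signal peak of SNR 12, the global maximum sits on the glitch, but the constrained search still returns the signal time.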
First, there is a section of data that should simply approximate a Gaussian background, where we expect the p-values to be distributed as expected, i.e., 50\% have a p-value of 0.5 or lower, 90\% have a p-value of 0.9 or lower, etc. We also expect no data quality flags or significant regions of data. Next, we have the same portion of data but with a gravitational-wave signal injected, where we anticipate a very low p-value and a data quality flag across the duration of the waveform. The injected signal region should be considered significant. If the first two time series are well-behaved (i.e., the background is Gaussian and the injection was successful without any artifacts), the final data portion will show how well we recovered the gravitational-wave signal. For a perfect recovery, we expect the time series to resemble the first portion, containing only a Gaussian background. However, any mismatches between the gravitational-wave parameters of the injected and recovered signals will yield excess power in the data, so we expect some tiles to be louder and hence to potentially be flagged or have significant regions. Such regions should be within the duration of the injected waveform, and their significance should ideally be somewhat lower than in our injected time series. Thus, the distribution of p-values for many such tests should be close to the expected distribution, although one might anticipate a larger number of events at low p-value compared to the Gaussian data, indicating imperfect recovery of the signal by \texttt{PyCBC}. \subsection{Glitches}\label{validation:glitch} In our next set of tests, we work towards testing how well our computation performs at its primary objective: distinguishing a gravitational-wave signal from a glitch when both are present in the data. 
To test this, we take a set of glitch times classified by the Gravity Spy algorithm~\cite{Zevin:2016qwy, gravity_spy_dataset} from the second LIGO-Virgo observing run (O2) as our data and run a p-value computation and a data quality assessment. We then perform the same set of injections and subtractions described in section \S \ref{validation:gauss} on these glitches and repeat the p-value computation and data quality assessment at each stage. We randomize the time of the injection to occur within a small window of the glitch; clearly, injecting the signal directly on top of the glitch would make it difficult or impossible to recover, but we are interested in testing the ability to discriminate events from glitches when they occur close together. In this case, we expect the p-values to be extremely small, and the data quality flag should be present at the time of the glitch for all three time series. If the subtraction works correctly, we expect the data quality flag to be present for the injected signal but no longer be present once the subtraction has been performed. A comparison of the subtraction during these tests to the Gaussian tests described in section \S \ref{validation:gauss} determines whether or not the glitch is interfering with our ability to recover the signal. \subsection{Real Event Tests}\label{validation:real} For our final tests, we analyze observed gravitational-wave events from O3 and subtract a waveform corresponding to the reported source parameters. We consider all events with astrophysical probability greater than 0.5 that are reported in GWTC-2.1~\cite{GWTC-2.1} and GWTC-3~\cite{GWTC-3}, which we collectively refer to as the GWTC. Such tests mimic the injection and subtraction stages described in sections \S \ref{validation:gauss} and \S \ref{validation:glitch}, but use real data and hence are not as idealized. For example, the background is likely to be noisier and not as well-behaved as our re-sampled time series. 
Furthermore, the actual signals do not necessarily correspond perfectly to a template waveform, as was the case for the injected signals. We also expect that there will be glitches in the data, so this section may contain some subtracted time series with low p-values for this reason. \section{Results} \subsection{Simulated Gaussian Tests}\label{results:gauss} Since we expect the re-sampled data to be Gaussian and to behave as described in section \S \ref{sec:background}, the p-values for our simulated Gaussian data prior to injections or subtractions should be distributed as theoretically expected, i.e. a uniform distribution. Likewise, the distribution of p-values for injections should show an excess of low p-values and the subtracted results should be close to a uniform distribution. For the injected signals, we expect the p-values to be extremely low, except in the case of weak signals. Thus, we expect the line to fall above that of the Gaussian tests, with a large excess of counts near zero. If all signals are loud, this should approach a horizontal line. For the subtraction time series, we expect the line to fall between the Gaussian background and the injected signals. The better the subtraction, the closer the line should be to the Gaussian background line. The distributions of p-values that we measure for these tests are shown in Fig.~\ref{fig:gaussppplot}. We find that how Q is chosen for producing Q-grams does introduce a potential bias to our results; the left panel of Fig.~\ref{fig:gaussppplot} shows results with a fixed Q, while the right panel shows results when we test a range of Q values and choose the Q-gram with the highest individual tile energy. We quantify the level of agreement between our measured p-value distribution and the expected, uniform distribution using the Kolmogorov–Smirnov (KS) test~\cite{kolmogorov1933sulla}. 
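As a toy version of this consistency check, one can simulate many Q-grams of exponentially distributed tile energies and verify that the loudest-tile p-value from the closed-form expression is approximately uniform. This sketch uses the maximum-likelihood exponential fit $t = 1/\langle\epsilon\rangle$ (with $A = t$, so that the closed form reduces to $1 - \exp[-N e^{-t\epsilon_{\max}}]$); the Q-gram size and trial count are arbitrary illustrative choices:

```python
import numpy as np

# Empirical check (sketch) that the loudest-tile p-value
# p = 1 - exp(-N * exp(-t * eps_max)) is uniformly distributed
# when the tile energies really are exponential (Gaussian noise).
rng = np.random.default_rng(1)
pvals = []
for _ in range(2000):
    eps = rng.exponential(1.0, size=500)   # tile energies of one Q-gram
    t = 1.0 / eps.mean()                   # exponential-fit decay constant
    pvals.append(1.0 - np.exp(-len(eps) * np.exp(-t * eps.max())))
pvals = np.array(pvals)
frac_below_half = float((pvals < 0.5).mean())   # expect ~0.5 if uniform
```

Injecting a single very loud tile into any of these simulated Q-grams drives its p-value to essentially zero, which is the behaviour the glitch flag relies on.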
For tests with the Gaussian background data, we find that fixing Q results in a KS test p-value of \red{0.21} compared to \red{0.06} when allowing Q to vary. For this reason, we focus on fixed-Q results in later sections. We also find that our results after subtracting the injected signal show larger deviations from the expected distribution than simply Gaussian noise, mostly at low p-values. Investigating cases where the p-value after subtraction was below 0.01, we find that these cases are primarily for high-SNR injections with parameters that are not well-captured by the aligned spin templates in the PyCBC template bank. The higher mismatch between the injected template and template used to subtract the signal resulted in residual excess power in the data after subtraction. \gaussppplot \subsection{Glitch Runs}\label{results:glitch} As described in \S \ref{validation:glitch}, we expect the p-values for the initial data, the injection, and subsequent subtraction to be extremely low since a loud glitch will be present in all of these. Therefore, the p-values are not informative as to whether the tests behave correctly. Instead, we must check each of our runs to ensure that all three time series correctly flag the glitch. Furthermore, we expect a data quality flag on the injected signal, and for good recoveries (i.e., those that were successful in our simulated Gaussian tests), we expect the flag to no longer be present in the subtracted data. We find that the code always successfully returns a p-value less than 0.01, correctly flagging the glitch in all 300 cases. The injected data also always includes an appropriate data quality flag over the duration of the injected signal. Since the SNR of all glitches is higher than 10, the peak in the matched filter SNR time series is often the time of the glitch rather than the injected signal. 
To ensure that we are correctly subtracting the signal, we only look for peaks in the SNR time series within 0.1\,s of the estimated signal time (as reported by \texttt{PyCBC}). When this is done, we are able to correctly subtract the signal unless there is an almost perfect alignment of the signal and glitch. An example of data before and after subtraction is shown on the left side of Fig.~\ref{fig:glitch}. If we instead considered the global peak in the SNR time series, we would often mistakenly attempt to subtract the glitch. However, if the SNR of the injected signal were sufficiently loud, we would still be able to subtract the signal, as is shown on the right side of Fig.~\ref{fig:glitch}. As expected, restricting the SNR time-series peak search to a short time interval around the injected signal yields much better recovery rates than considering the global peak; we found that it was possible to subtract the signal perfectly and remove the data quality flag, even when the injection overlaps with the glitch. \glitchfig \subsection{Real Signals}\label{results:real} As a final validation test, we analyze real gravitational-wave signals from O3 as described in section \S \ref{validation:real}. We analyze all detectors operating at the time of each event independently. Rather than use the same time window for all tests, as was done in previous sections, we use different time windows for each event, corresponding to the windows used in the GWTC to estimate the event source properties. These time windows ranged from 4 seconds to 128 seconds. In total, we analyze \red{209} different time series from \red{79} events. The complete set of results, along with additional discussion, can be found in \ref{app:results}. We repeat the same p-p tests as before, as shown in Fig.~\ref{fig:realppplot}. This figure includes the results using the data before subtraction and the residual after subtraction. 
As data around these events is known to contain glitches, we also attempt to remove events that contain glitches. To do this, we use a p-value threshold of 0.01 to flag glitches; we then plot the distribution of results with these events excluded. \red{33} of the 209 total time series were flagged with this threshold. Visual inspection of the spectrograms from flagged time series confirmed that a glitch was visible in all cases. To account for any events that would be falsely flagged as glitches, when plotting the measured p-value distribution, we assume that 1\% of the results are still below 0.01. After removing these events, we find that the distribution of p-values is consistent with Gaussian noise at a \red{K-S test p-value of 0.037}. Although this may indicate some remaining excess power that is not Gaussian, it is not possible to identify any additional specific events that are not consistent with Gaussian noise. As an additional comparison, we instead only remove events that required glitch subtraction in the GWTC from our results. This requirement removes \red{18} time series from our results. The distribution of p-values after removing these events is also shown in Fig.~\ref{fig:realppplot}. We find that even after removing these events, we still have an excess of events with p-value less than 0.01, suggesting that there are still significant non-Gaussian features in the remaining time series that were not subtracted. As the metrics used to decide if glitch mitigation was required in these analyses are different from simply identifying if a glitch is present~\cite{Davis:glitch_sub}, it is possible that some glitches in the data were not deemed to require subtraction. This may explain why these cases were not mitigated in the GWTC analyses. Additional visual inspection of time series was completed for cases where this algorithm disagreed with the list of glitches subtracted from the GWTC.
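The exclusion step above is a simple threshold cut on the per-time-series p-values; a minimal sketch (the listed values are hypothetical):

```python
def exclude_glitches(p_values, threshold=0.01):
    """Split per-time-series p-values into clean and glitch-flagged
    sets, using the same 0.01 threshold as in the text."""
    flagged = [p for p in p_values if p < threshold]
    clean = [p for p in p_values if p >= threshold]
    return clean, flagged

# Hypothetical example: 2 of 5 time series are flagged as glitches.
clean, flagged = exclude_glitches([0.45, 0.002, 0.30, 0.0005, 0.88])
assert len(flagged) == 2 and len(clean) == 3
```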
We investigated every time series with a p-value below 0.01 that did not use glitch mitigation, as well as every time series with a p-value above 0.01 that did. In all cases where this algorithm identifies a glitch, excess power was visible in the produced spectrogram. Many of these flagged glitches were short in duration at high frequency and did not overlap the signal. Conversely, no excess power was visible in cases where the algorithm did not identify a glitch. In the majority of these cases, the glitch that was subtracted lay outside the time-frequency region considered by this tool. Although these checks did identify multiple cases where excess power visible in the data was not subtracted before analysis, we do not have evidence that these glitches biased estimates of event source properties. Additional comparisons between our results and those of the GWTC can be found in Appendix~\ref{app:results}. \realppplot \section{Conclusion}\label{conclusion} In this work, we have presented and tested a method for identifying glitches that may occur near gravitational-wave signals. The method rapidly computes whether a statistically significant glitch occurred within a section of data, flags noisy regions, and then subtracts the gravitational-wave signal to effectively separate the signal and the glitch. We have shown that this tool's significance computation and data quality flagging function correctly for a Gaussian background, a gravitational-wave signal, and a loud glitch. We found that the measured p-values for all cases are distributed as expected and minimally biased: Gaussian data follows the expected distribution, and all loud events (including both gravitational-wave signals and glitches) yield a small p-value.
Matching a template waveform and subsequently subtracting it from the gravitational-wave data leaves a p-value somewhat higher than in the Gaussian background case due to template mismatches with the actual signal present in the data. Our tests injecting a signal near a glitch and subsequently subtracting a template waveform showed significantly more success when there is separate knowledge of the time of arrival of the gravitational-wave signal. Still, it is often possible to clearly identify glitches even without such additional information. One of the main potential use cases is to automate glitch detection in cases where this is currently done by visual inspection of spectrograms. We compared the glitch identification results based on this algorithm against the list of O3 events that required glitch mitigation~\cite{GWTC-2.1, GWTC-3}. In general, we find agreement between these two methods of glitch identification, but this algorithm identifies multiple additional glitches in the data surrounding events that were not mitigated. It is important to note that not all glitches near events necessarily require mitigation, which may explain some of the differences we identify. As this algorithm is not able to evaluate the potential impact of a glitch on an astrophysical analysis, additional methods would be needed to answer this question. This algorithm requires only seconds to run, making it potentially useful to aid in investigations of gravitational-wave events that are detected in low latency. Glitches that overlap gravitational-wave signals may impact the estimate of parameters that are important to multi-messenger astronomy, such as the distance and sky position of the source~\cite{Macas:2022afm}. The faster such a bias is detected, the lower the chance that resources are wasted following up incorrect information and the sooner unbiased results can be released.
An important limitation of this method is that signal subtraction is only possible when the parameters of the event are well known. In the case of CBC signals, we have shown that it is generally sufficient to use the template parameters from the search that identified the signal. For signals that are detected by unmodelled searches~\cite{Klimenko:2015ypf, Lynch:2015yin}, separating astrophysical excess power from instrumental excess power is more difficult. If information such as the time of arrival, duration, and bandwidth of a signal is known, it is possible to exclude that time-frequency region before calculating this statistic. However, it is not possible for this method to differentiate an unmodelled astrophysical signal from a glitch that it directly overlaps. We hope that this tool can be used in future observing runs to increase the speed, accuracy, and automation of glitch identification. As the rate of gravitational-wave detection increases, we expect that this method, and similar statistical tools, will be an important component of adapting gravitational-wave analysis methods to handle large numbers of events. By relying on these types of statistical methods, personpower that was previously spent on visual inspection of the data can be diverted to other applications. Furthermore, the information about identified glitches that this tool generates can be used to further automate additional gravitational-wave analyses, such as glitch subtraction, which are important components of arriving at accurate astrophysical conclusions. \section{Acknowledgements} The authors would like to thank the LIGO-Virgo-KAGRA detector characterization groups for their input and suggestions during the development of this work. We thank Siddharth Soni for his comments during internal review of this manuscript. LV and DD are supported by the NSF as a part of the LIGO Laboratory.
This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. The construction and operation of KAGRA are funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT), and Japan Society for the Promotion of Science (JSPS), National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan. This material is based upon work supported by NSF’s LIGO Laboratory which is a major facility fully funded by the National Science Foundation. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation, and operates under cooperative agreement PHY-1764464. Advanced LIGO was built under award PHY-0823459. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. This work carries LIGO document number P2200223. 
\appendix \section{Complete GWTC results}\label{app:results} In this appendix, we present complete details of our results for GWTC events~\cite{GWTC-2.1, GWTC-3}. The measured p-values and analysis configurations for each event can be found in Tables~\ref{tab:glitch_tab_gwtc2} and \ref{tab:glitch_tab_gwtc3}. To compare our results with those in the GWTC, we first identify glitches using a p-value threshold. If the measured p-value is less than 0.01, we consider this a glitch. We find that \red{33} of the analyzed time series meet this threshold. Conversely, only \red{18} time series used glitch mitigation in the GWTC analyses. In total, we identify 23 differences between our analysis and the GWTC. We find 19 glitches that are not listed in the GWTC, while we do not find 4 of the glitches listed in the GWTC. It is possible that these glitches were identified as a part of the GWTC analyses but not flagged for glitch mitigation, as additional metrics were used to decide whether glitch mitigation was required~\cite{Davis:glitch_sub}. As mentioned in the main text, many of the glitches identified by this algorithm are short in duration and high in frequency, and likely did not require mitigation. However, a significant fraction of the glitches we identify are clearly visible in a spectrogram of the data. Of the 4 cases where we do not identify a glitch, 3 were due to the frequency limits used in our analysis. If we had lowered the low-frequency limit to 10\,Hz, these glitches would have been identified by our analysis as significant. The level of agreement between our results and those from the GWTC does appear to be detector-dependent. We find 99\% agreement (72/73) with the results for LIGO Hanford, 87\% agreement (66/76) with the results for LIGO Livingston, and 80\% agreement (48/60) with the results for Virgo.
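As a bookkeeping check, the per-detector agreement counts quoted above combine to the totals given earlier in this appendix (all counts taken from the text):

```python
# (agreeing, total) time series per detector, as quoted in the text
per_detector = {"H1": (72, 73), "L1": (66, 76), "V1": (48, 60)}

agree = sum(a for a, _ in per_detector.values())
total = sum(t for _, t in per_detector.values())

assert total == 209         # 209 analyzed time series in total
assert total - agree == 23  # 23 differences with the GWTC analyses
assert 19 + 4 == 23         # 19 extra glitches found, 4 GWTC glitches missed
```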
If we only consider cases where this algorithm identifies a glitch, the agreement rate is 80\% (4/5) for LIGO Hanford, 60\% (9/15) for LIGO Livingston, and 8\% (1/13) for Virgo. The reasons for these detector-dependent rates may be due to a combination of factors, including the rate of glitches in each detector, the types of glitches present in each detector, and the SNR of signals in each detector. \begin{table}[p] \footnotesize \begin{tabularx}{\linewidth}{l l r r r X l} Event name & Window & H1 p-value & L1 p-value & V1 p-value & GWTC result & Agree?\\ \hline GW190403\_051519 & 4\,s & 0.950 & 0.950 & 0.946 & - & \cmark \\ GW190408\_181802 & 8\,s & 0.516 & 0.131 & 0.804 & - & \cmark \\ GW190412 & 8\,s & 0.355 & 0.036 & 0.589 & - & \cmark \\ GW190413\_052954 & 4\,s & 0.011 & 0.895 & 0.665 & - & \cmark \\ GW190413\_134308 & 4\,s & 0.540 & \textbf{$<$ 0.001} & 0.970 & L1 glitch & \cmark \\ GW190421\_213856 & 4\,s & 0.172 & 0.176 & - & - & \cmark \\ GW190425 & 128\,s & - & \textbf{$<$ 0.001} & \textbf{$<$ 0.001} & L1 glitch & \xcmark \\ GW190426\_190642 & 4\,s & 0.707 & 0.169 & 0.744 & - & \cmark \\ GW190503\_185404 & 4\,s & 0.359 & \textbf{0.002} & \textbf{0.001} & L1 glitch & \xcmark \\ GW190512\_180714 & 8\,s & 0.660 & 0.455 & 0.198 & - & \cmark \\ GW190513\_205428 & 4\,s & 0.908 & 0.165 & 0.335 & L1 glitch & \xmark \\ GW190514\_065416 & 4\,s & 0.011 & 0.948 & - & L1 glitch & \xmark \\ GW190517\_055101 & 4\,s & 0.599 & 0.155 & 0.137 & - & \cmark \\ GW190519\_153544 & 4\,s & 0.475 & 0.862 & 0.562 & - & \cmark \\ GW190521 & 4\,s & 0.343 & 0.130 & 0.353 & - & \cmark \\ GW190521\_074359 & 4\,s & 0.207 & \textbf{0.009} & - & - & \xmark \\ GW190527\_092055 & 8\,s & 0.997 & 0.958 & - & - & \cmark \\ GW190602\_175927 & 4\,s & 0.174 & \textbf{$<$ 0.001} & 0.684 & - & \xmark \\ GW190620\_030421 & 4\,s & - & 0.499 & 0.989 & - & \cmark \\ GW190630\_185205 & 4\,s & - & 0.418 & 0.391 & - & \cmark \\ GW190701\_203306 & 4\,s & 0.772 & \textbf{0.004} & 0.348 & L1 glitch & \cmark 
\\ GW190706\_222641 & 4\,s & 0.025 & 0.203 & 0.269 & - & \cmark \\ GW190707\_093326 & 16\,s & 0.045 & 0.439 & - & - & \cmark \\ GW190708\_232457 & 8\,s & - & 0.647 & 0.171 & - & \cmark \\ GW190719\_215514 & 4\,s & 1.000 & 0.720 & - & - & \cmark \\ GW190720\_000836 & 16\,s & 0.021 & 0.020 & 0.045 & - & \cmark \\ GW190725\_174728 & 16\,s & 0.085 & 0.739 & 0.150 & - & \cmark \\ GW190727\_060333 & 4\,s & 0.701 & 0.864 & 0.311 & L1 glitch & \xmark \\ GW190728\_064510 & 16\,s & 0.418 & 0.463 & 0.488 & - & \cmark \\ GW190731\_140936 & 4\,s & 0.722 & 0.245 & - & - & \cmark \\ GW190803\_022701 & 4\,s & 0.046 & 0.624 & 0.844 & - & \cmark \\ GW190805\_211137 & 4\,s & 0.468 & 0.302 & 0.970 & - & \cmark \\ GW190814 & 32\,s & \textbf{0.003} & \textbf{$<$ 0.001} & 0.067 & L1 glitch & \xcmark \\ GW190828\_063405 & 4\,s & 0.227 & 0.755 & \textbf{0.004} & - & \xmark \\ GW190828\_065509 & 8\,s & 0.089 & 0.672 & \textbf{0.008} & - & \xmark \\ GW190910\_112807 & 4\,s & - & 0.918 & 0.661 & - & \cmark \\ GW190915\_235702 & 8\,s & 0.320 & 0.822 & 0.984 & - & \cmark \\ GW190916\_200658 & 8\,s & 0.727 & 0.645 & 0.877 & - & \cmark \\ GW190917\_114630 & 64\,s & 0.146 & 0.103 & 0.117 & - & \cmark \\ GW190924\_021846 & 32\,s & 0.989 & \textbf{$<$ 0.001} & \textbf{0.009} & L1 glitch & \xcmark \\ GW190925\_232845 & 8\,s & 0.342 & - & \textbf{$<$ 0.001} & - & \xmark \\ GW190926\_050336 & 4\,s & 0.986 & 0.684 & 0.142 & - & \cmark \\ GW190929\_012149 & 4\,s & 0.637 & 0.059 & 0.224 & - & \cmark \\ GW190930\_133541 & 16\,s & 0.316 & 0.025 & - & - & \cmark \\ \end{tabularx} \caption{\footnotesize The results of our residual test for all events in GWTC-2.1. The measured p-value is listed separately for each detector; cases where a detector was not operating are denoted as $-$. All results with a p-value less than 0.01 are considered impacted by glitches. 
For each event, a \cmark means we find the same glitches as GWTC-2.1, an \xmark means our list of glitches completely disagrees with GWTC-2.1, and an \xcmark indicates partial agreement. We generally find agreement between the glitches found by this method and those listed in GWTC-2.1, but multiple differences are noted.} \label{tab:glitch_tab_gwtc2} \end{table} \begin{table}[p] \footnotesize \begin{tabularx}{\linewidth}{l l r r r X l} Event name & Window & H1 p-value & L1 p-value & V1 p-value & GWTC result & Agree?\\ \hline GW191103\_012549 & 16\,s & 0.615 & 0.772 & - & - & \cmark \\ GW191105\_143521 & 16\,s & 0.098 & \textbf{0.004} & \textbf{$<$ 0.001} & V1 glitch & \xcmark \\ GW191109\_010717 & 4\,s & \textbf{$<$ 0.001} & 0.158 & - & H1 and L1 glitch & \xcmark \\ GW191113\_071753 & 64\,s & \textbf{$<$ 0.001} & 0.166 & \textbf{$<$ 0.001} & H1 glitch & \xcmark \\ GW191126\_115259 & 64\,s & 0.661 & \textbf{$<$ 0.001} & - & - & \xmark \\ GW191127\_050227 & 8\,s & \textbf{$<$ 0.001} & 0.500 & 0.765 & H1 glitch & \cmark \\ GW191129\_134029 & 16\,s & 0.010 & 0.878 & - & - & \xmark \\ GW191204\_110529 & 8\,s & 0.670 & 0.025 & - & - & \cmark \\ GW191204\_171526 & 8\,s & 0.986 & 0.483 & - & - & \cmark \\ GW191215\_223052 & 8\,s & 0.021 & 0.492 & \textbf{$<$ 0.001} & - & \xmark \\ GW191216\_213338 & 16\,s & 0.601 & - & 0.025 & - & \cmark \\ GW191219\_163120 & 32\,s & \textbf{$<$ 0.001} & \textbf{$<$ 0.001} & \textbf{0.006} & H1 and L1 glitch & \xcmark \\ GW191222\_033537 & 8\,s & 0.721 & 0.136 & - & - & \cmark \\ GW191230\_180458 & 4\,s & 0.299 & 0.241 & 0.350 & - & \cmark \\ GW200112\_155838 & 4\,s & - & 0.475 & \textbf{$<$ 0.001} & - & \xmark \\ GW200115\_042309 & 64\,s & 0.466 & \textbf{$<$ 0.001} & 0.567 & L1 glitch & \cmark \\ GW200128\_022011 & 4\,s & 0.954 & 0.721 & - & - & \cmark \\ GW200129\_065458 & 8\,s & 0.159 & \textbf{$<$ 0.001} & \textbf{$<$ 0.001} & L1 glitch & \xcmark \\ GW200202\_154313 & 16\,s & 0.358 & 0.500 & 0.010 & - & \cmark \\ GW200208\_130117 & 4\,s
& 0.963 & 0.951 & 0.022 & - & \cmark \\ GW200208\_222617 & 8\,s & 0.362 & 0.832 & 0.926 & - & \cmark \\ GW200209\_085452 & 4\,s & 0.567 & 0.315 & 0.962 & - & \cmark \\ GW200210\_092254 & 16\,s & 0.319 & 0.356 & 0.098 & - & \cmark \\ GW200216\_220804 & 4\,s & 0.265 & \textbf{$<$ 0.001} & 0.822 & - & \xmark \\ GW200219\_094415 & 4\,s & 0.118 & 0.668 & 0.023 & - & \cmark \\ GW200220\_061928 & 4\,s & 0.195 & 0.279 & 0.410 & - & \cmark \\ GW200220\_124850 & 4\,s & 0.894 & 0.627 & - & - & \cmark \\ GW200224\_222234 & 4\,s & 0.267 & \textbf{$<$ 0.001} & 0.881 & - & \xmark \\ GW200225\_060421 & 8\,s & 0.413 & 0.169 & - & - & \cmark \\ GW200302\_015811 & 8\,s & 0.027 & - & 0.969 & - & \cmark \\ GW200306\_093714 & 16\,s & 0.014 & 0.395 & - & - & \cmark \\ GW200308\_173609 & 16\,s & 0.693 & 0.983 & 0.158 & - & \cmark \\ GW200311\_115853 & 4\,s & 0.027 & 0.547 & 0.199 & - & \cmark \\ GW200316\_215756 & 16\,s & 0.194 & 0.194 & \textbf{0.006} & - & \xmark \\ GW200322\_091133 & 16\,s & 0.292 & 0.898 & 0.166 & - & \cmark \\ \end{tabularx} \caption{\footnotesize The results of our residual test for all events in GWTC-3. The measured p-value is listed separately for each detector; cases where a detector was not operating are denoted as $-$. All results with a p-value less than 0.01 are considered impacted by glitches. For each event, a \cmark means we find the same glitches as GWTC-3, an \xmark means our list of glitches completely disagrees with GWTC-3, and an \xcmark indicates partial agreement. We generally find agreement between the glitches found by this method and those listed in GWTC-3, but multiple differences are noted.} \label{tab:glitch_tab_gwtc3} \end{table} \section*{References} \bibliography{main}
\title{PRODIGE - Envelope to disk with NOEMA \thanks{Based on observations carried out under project number L19MB with the IRAM NOEMA Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain)}} \subtitle{I. \commentstere{A 3000 au} streamer feeding a Class I protostar} \author{M. T. Valdivia-Mena \inst{1} \and J. E. Pineda\inst{1} \and D. M. Segura-Cox\inst{2}\thanks{Astronomy and Astrophysics Postdoctoral Fellow} \and P. Caselli\inst{1} \and R. Neri\inst{3} \and A. L\'opez-Sepulcre\inst{3,4} \and N. Cunningham\inst{4} \and L. Bouscasse\inst{3} \and D. Semenov\inst{5} \and Th. Henning\inst{5} \and V. Pi\'etu\inst{3} \and E. Chapillon\inst{6,3} \and A. Dutrey\inst{6} \and A. Fuente\inst{7} \and S. Guilloteau\inst{6} \and T. H. Hsieh\inst{1} \and I. Jim\'enez-Serra\inst{8} \and S. Marino\inst{5} \and M. J. Maureira\inst{1} \and G. V. Smirnov-Pinchukov\inst{5} \and M. Tafalla\inst{7} \and B. Zhao\inst{9} } \institute{Max-Planck-Institut f\"ur extraterrestrische Physik, Giessenbachstrasse 1, D-85748 Garching, Germany\\ \email{mvaldivi@mpe.mpg.de} \and Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Austin, TX 78712, USA \and Institut de Radioastronomie Millim\'{e}trique (IRAM), 300 rue de la Piscine, F-38406, Saint-Martin d'H\`{e}res, France \and IPAG, Universit\'{e} Grenoble Alpes, CNRS, F-38000 Grenoble, France \and Max-Planck-Institut f\"{u}r Astronomie, K\"{o}nigstuhl 17, D-69117 Heidelberg, Germany \and Laboratoire d'Astrophysique de Bordeaux, Universit\'{e} de Bordeaux, CNRS, B18N, All\'{e}e Geoffroy Saint-Hilaire, F-33615 Pessac, France \and Observatorio Astron\'{o}mico Nacional (IGN), Alfonso XII 3, 28014, Madrid, Spain \and Centro de Astrobiolog\'{i}a (CSIC-INTA), Ctra. 
Ajalvir km 4, Torrej\'{o}n de Ardoz, E-28850, Madrid, Spain \and Department of Physics and Astronomy, McMaster University, Hamilton, ON L8S 4E8, Canada } \date{\today} \abstract {In the past few years, there has been a rise in the detection of streamers, asymmetric flows of material directed toward the protostellar disk with material from outside the star's natal core. It is unclear how they affect the process of mass accretion, in particular beyond the Class 0 phase.} {We investigate the gas kinematics around Per-emb-50, a Class I source in the crowded star-forming region NGC 1333. Our goal is to study how the mass infall proceeds from envelope to disk scales in this source.} {We use new NOEMA 1.3 mm observations, including C$^{18}$O, H$_2$CO and SO, in the context of the PRODIGE MPG - IRAM program, to probe the core and envelope structures toward Per-emb-50.} {We discover a streamer delivering material toward Per-emb-50 in H$_2$CO and C$^{18}$O emission. The streamer’s emission can be well described by the analytic solutions for an infalling parcel of gas along a streamline with conserved angular momentum, both in the image plane and along the line of sight velocities. The streamer has a mean infall rate of $1.3 \times 10^{-6}$ \Msun yr$^{-1}$, $5-10$ times higher than the current accretion rate of the protostar. SO and SO$_2$ emission reveal asymmetric infall motions in the inner envelope, additional to the streamer around Per-emb-50. Furthermore, the presence of SO$_2$ could mark the impact zone of the infalling material. } {The streamer delivers sufficient mass to sustain the protostellar accretion rate and might produce an accretion burst, which would explain the protostar’s high luminosity with respect to other Class I sources. 
Our results highlight the importance of late infall for protostellar evolution: streamers might provide a significant amount of mass for stellar accretion after the Class 0 phase.} \keywords{ISM: kinematics and dynamics -- ISM: individual objects: Per-emb-50 -- ISM: structure -- stars: protostars -- stars: formation} \section{Introduction} The classical picture of star formation allows us to understand the collapse of a dense, individual core through simple physical assumptions, but does not fully explain the current observations of protostars and protoplanetary disks. In general, the classical models consist of a dense, mostly isolated core inside a molecular cloud which undergoes axisymmetric collapse and, due to the conservation of angular momentum, flattens and creates a disk around the central protostar \citep[e.g.,][]{Shu1977corecollapse, Terebey1984rotation}. The first limitation of the classical models is that they depend on two assumptions: the spherical symmetry of the core collapse and its lack of interaction with material outside the protostar's natal core. In reality, molecular clouds are asymmetric at all scales \commentstere{ \citep{Andre2014PPVIFilaments, Pineda2022}}, from parsec-sized filaments \citep[e.g., ][]{HacarTafalla2011Taurus, Andre2010GBSHerschel} to asymmetric envelopes around protostars \citep{Tobin2010Class0Envelopes}. Numerical simulations of molecular clouds that follow the collapse of several cores, including turbulence and magnetic fields, can reproduce these observed filaments and asymmetric structures \citep[e.g., ][]{Lebreuilly2021MFieldDiskForm,Kuznetsova2019, Kuffmeier2017infalltodisks,Padoan2014Infallsim}.
A second problem with the standard model of inside-out, axisymmetric collapse of an isolated core is that it predicts a constant mass accretion rate of $\sim 10^{-5}$ \Msun yr$^{-1}$ \citep{Stahler1980ProtoEvolutionI}, whereas the observed bolometric luminosities of embedded protostars imply accretion rates that are 10 to 100 times lower than this value \citep{Kenyon1990lumproblem, Evans2009C2Dlifetime}. This is known as the ``luminosity problem''. Proposed solutions to this problem include an initial strong accretion phase followed by an accretion rate decay \citep{Padoan2014Infallsim}, and strong bursts of accretion during the protostellar phase \citep{Kuffmeier2018EpisodicAcc, Zhao2018_3Ddec_mag_field, Vorobyov2015DiskGI}. These solutions show that \commentstere{the} accretion process is asymmetric both in space and time, which is incompatible with fully axisymmetric collapse. Therefore, even if the simple symmetric model allows for a comprehension of isolated sources, it does not capture all the phenomena that affect the star formation process. Recently, numerical simulations have shown that the local environment surrounding the protostar has a deep impact on its evolution \citep{Hennebelle2020diskformationsims,Kuffmeier2018EpisodicAcc,Kuffmeier2017infalltodisks, Padoan2014Infallsim}. In particular, simulations focusing on star and disk formation repeatedly find asymmetric flows toward the disk \citep[e.g.,][]{Wurster2019, Kuznetsova2019,Kuffmeier2019-Bridge,Kuffmeier2017infalltodisks}. These long, thin inflows, called streamers, can deliver mass from outside the natal core, increasing the mass available to the protostar \citep{Pelkonen2021massbeyondcore}, and might have effects on the structure of protoplanetary disks \citep{Kuffmeier2017infalltodisks}. All these simulations show that the collapse from core to protostar is more complex than axisymmetric inside-out collapse.
In the last few years, observations have started to find streamers from envelope to disk scales \commentstere{ \citep[see][and references within]{Pineda2022}}. Streamers are found from the highly embedded Class 0 phase \citep{Pineda2020,LeGouellec2019PolarizedClass0} through the less embedded Class I \citep[Segura-Cox et al. in prep.,][]{Chou2016DiskandFilconnectionL1455}, all the way to Class II sources \citep[e.g.,][]{Ginski2021,Garufi2021arXiv-accretionDGTauHLTau,Alves2020,Akiyama2019,Yen2019HLTau,Tang2012ABAurLateAccretion}. They have also been found feeding not only single protostars but also protostellar binaries, funneling material both toward the inner circumstellar disks \citep{Phuong2020GGTauSpirals,Alves2019PretzelAccretion, Dutrey2014GGTau} and to the binary system as a whole \citep{Pineda2020}. These structures are observed in a diversity of molecules, such as $^{12}$CO \citep{Alves2020} and HC$_3$N \citep{Pineda2020}, and also in scattered light \citep{Ginski2021,Akiyama2019}. The first streamer to be characterized using only free-fall motion, confirming that it is infalling toward the protostar, is located toward the Class 0 source Per-emb-2 \citep{Pineda2020}. This streamer transports material from outside the dense core ($> 10\,000$ au) into the protoplanetary disk and protostar system. The infall rate of this streamer, which describes how much mass is deposited into disk-forming scales, is comparable to the accretion rate toward the protostar, implying that the streamer could change the future protostellar accretion rate by funneling extra material. This streamer was discovered with a carbon-chain species, HC$_3$N, which best traces the less chemically evolved material, in contrast to the more evolved protostellar core seen in N$_2$H$^+$ \citep{BerginTafalla2007coldcloudsreview}.
These objects prove that the environment influences the star's development and support the results from simulations that state that the mass available to the protostar could come from further away than the natal core \citep{Pelkonen2021massbeyondcore}. Even though asymmetric infall is a ubiquitous feature in numerical simulations, to the best of our knowledge, only a few streamers have been found and their infall properties quantified, using either average estimates of infalling material and/or free-fall motion models toward the disk and protostar system \citep[e.g.,][]{Ginski2021,Pineda2020,Alves2019PretzelAccretion}. This is where the MPG - IRAM observing program ``PROtostars \& DIsks: Global Evolution'' (PRODIGE, CO-PIs: P. Caselli, Th. Henning) comes in: this program is designed as a coherent study of the physical and chemical properties of young protostellar systems, targeting 32 Class 0/I sources and 8 Class II T Tauri protoplanetary disks. One of its goals is to search for material flowing into Class 0 and I sources and to investigate the mass budget during these phases. PRODIGE observations are carried out with the IRAM NOrthern Extended Millimetre Array (NOEMA), located at the Plateau de Bure in the French Alps. This program takes advantage of the PolyFix correlator to make an unbiased survey of molecular lines, thus allowing for the search for streamers in multiple chemical tracers. In this paper, we present new NOEMA 1.3 mm ($\approx220$ GHz) observations from the PRODIGE survey of 5 molecules (H$_2$CO, C$^{18}$O, $^{12}$CO, SO and SO$_2$) toward the Class I protostar Per-emb-50. Our aim is to characterize the core kinematics around this embedded protostar, from approximately 300 au out to 3000 au from the source, to investigate how the mass infall proceeds from envelope to disk scales. The paper is organized as follows. Section \ref{sec:observations} describes the NOEMA observations, data reduction and imaging procedures.
Section \ref{sec:results} shows the observed structures in each molecular tracer and how we separate the different kinematic components. We discuss how the structures found might affect the evolution of the protostar and protostellar disk in Per-emb-50, and how they fit in the general star formation paradigm, in Sect. \ref{sec:discussion}. We summarize our results in Sect. \ref{sec:conclusions}. \section{Observations and data reduction\label{sec:observations}} \subsection{Per-emb-50} Per-emb-50 is an embedded Class I protostar, according to its Spectral Energy Distribution (SED) in the near- and mid-infrared \citep{Evans2009C2Dlifetime, Enoch2009}. It is located in the active star-forming region NGC 1333, at a distance of 293 pc \citep{Ortiz-Leon2018,Zucker2018distance}, in the Perseus Giant Molecular Cloud. This protostar is $\sim10$ times brighter than other Class I sources in the vicinity \citep{Dunham2015gouldbeltcatalog,Enoch2009}, and its protostellar accretion rate is estimated at $(1.3-2.8) \times 10^{-7}$ \Msun yr$^{-1}$, also around 10 times larger than that of other Class I sources \citep{Fiorellino2021-mdot}. It has a clear outflow observed in $^{12}$CO\,(2--1) emission with an east-west orientation \citep{Stephens2019}. VLA 8 mm continuum analysis shows a large dust disk in Per-emb-50, with a characteristic radius between $27-32$ au (where there is a significant drop in the dust flux profile) and a dust mass around $0.28-0.58$ \Msun \citep{Segura-Cox2016}. Radiative transfer models applied to millimeter observations suggest that grain growth has proceeded within the envelope, producing grains with sizes $\sim100$ $\mu$m \citep{Agurto-Gangas2019}. Properties of the protostar and its disk taken from the literature are summarized in Table \ref{tab:peremb50}. \begin{table}[htb] \caption{Properties of Per-emb-50 from the literature.
} \centering \begin{tabular}{ccc} \hline\hline Property & Value & Reference \\ \hline RA (J2000) & 03:29:07.76 & 1 \\ Dec (J2000) & +31:21:57.2 & 1 \\ $M_{*}$ (\Msun) & $1.5-1.9$ & 2 \\ $M_{disk}$ (\Msun) & $0.28 - 0.58$ & 3 \\ $R_{c}$ (au) $^{*}$ & $27 - 32$ & 3 \\ $i_{disk}$ ($\deg$) & 67 & 3 \\ PA$_{disk}$ ($\deg$) & 170 & 3 \\ $d$ (pc) $^{**}$ & $293 \pm 22$ & 4 \\ \hline \end{tabular} \tablefoot{ \tablefoottext{*}{This is a characteristic radius at which there is a significant drop in the dust flux exponential profile, a proxy for the disk radius.} \tablefoottext{**}{This distance corresponds to the distance to NGC 1333.}\tablefoottext{1}{\cite{Enoch2009}} \tablefoottext{2}{\cite{Fiorellino2021-mdot}} \tablefoottext{3}{\cite{Segura-Cox2016}} \tablefoottext{4}{\cite{Ortiz-Leon2018, Zucker2018distance}} \label{tab:peremb50}} \end{table} \subsection{NOEMA observations} The observations were obtained with NOEMA as part of the MPG-IRAM Observing Program PRODIGE (Project ID L19MB). In this program, we use the Band 3 receiver and the new PolyFix correlator, tuned with a local-oscillator (LO) frequency of 226.5 GHz. PolyFix provides a full 16 GHz of bandwidth at coarse spectral resolution (2 MHz channel width) and is divided into four units (LSB Outer, LSB Inner, USB Inner and USB Outer). Simultaneously, we place 39 high spectral resolution windows with 62.5 kHz channel resolution within the coarse resolution 16 GHz bandwidth. Observations of Per-emb-50 were conducted in two separate periods for each antenna configuration. The C configuration data were observed on 2019 December 29 and 2021 January 5. The D configuration observations were taken on 2020 August 6 and on September 7 and 8. The maximum recoverable scale (MRS) for our data is 16.9\arcsec at 220 GHz, approximately 5000 au at the distance of Per-emb-50. \begin{table*}[htbp] \centering \caption{\label{tab:cubeprops}Properties of the molecular line observations from NOEMA.
} \begin{tabular}{ccccccc} \hline\hline Molecule & Transition & $E_{up}$ & Frequency$^{*}$ & $\theta_{maj}\times \theta_{min}$ (PA) & rms & $\Delta V_{\mathrm{chan}}$ \\ & & (K) &(GHz) & ($\arcsec \times \arcsec$, $^{\circ}$) & (mJy beam$^{-1}$) & (\kms)\\ \hline SO & 5$_5$ -- 4$_4$ & 44.1 & 215.2206530 & $1.25\times0.73$ (21.48) & 13.01 & 0.08 \\ H$_2$CO & 3$_{0,3}$ -- 2$_{0,2}$ & 21.0 & 218.2221920 & $1.24\times0.72$ (21.43) & 11.97 & 0.08 \\ SO$_2$ & 11$_{1,11}$ -- 10$_{0,10}$ & 60.4 & 221.9652196 & $1.24\times0.72$ (20.89) & 11.54 & 0.08 \\ C$^{18}$O & 2 -- 1 & 15.8 & 219.5603541 & $1.24\times0.71$ (20.87) & 13.94 & 0.08\\ $^{12}$CO & 2 -- 1 & 5.5 & 230.5380000 & $1.15\times0.67$ (21.20) & 7.43 & 2.60 \\ \hline \end{tabular} \tablefoot{ \tablefoottext{*}{Taken from the Cologne Database for Molecular Spectroscopy \citep{Endres2016CDMS}.} } \end{table*} We calibrate the data using the standard observatory pipeline in the GILDAS (Grenoble Image and Line Data Analysis Software) package CLIC (Continuum and Line Interferometer Calibration). We use 3C84 and 3C454.3 as bandpass calibrators. Phase and amplitude calibration sources are 0333+321 and 0322+222, and observations of these calibrators were taken every 20 min. LKHA101 and MWC349 are used as flux calibrators. The uncertainty in flux density is 10\%. The continuum is bright enough to allow for self-calibration. Self-calibration is performed iteratively, with solution intervals of 300 s, 135 s, and 45 s, only for the continuum image used in this work; the line data are not self-calibrated. The resulting continuum image, shown in Appendix \ref{sec:cont}, is produced from the LSB Inner continuum window and has an rms noise of 0.2 m\Jyb. Continuum subtraction and data imaging are done with the GILDAS package \texttt{mapping} using the \texttt{uv\_baseline} and \texttt{clean} tasks.
All line cubes are imaged using natural weighting to minimize the rms, while the continuum maps are imaged with robust $=1$ to improve the angular resolution. We image the continuum-subtracted cubes using the standard CLEAN algorithm and a manual mask for each channel. Once we converge to a final mask, we perform a final CLEAN down to the rms level using the multiscale CLEAN algorithm implemented in \texttt{mapping}. This reduces imaging artifacts (mainly negative emission bowls), thus improving the general image quality around bright sources. Towards Per-emb-50, we detect $^{12}$CO\,(2--1), C$^{18}$O\,(2--1), H$_2$CO\,(3$_{0,3}$--2$_{0,2}$), SO\,($5_5$--$4_4$) and SO$_2$\,(11$_{1,11}$ -- 10$_{0,10}$) line emission. The $^{12}$CO\,(2--1) line is located in the coarse resolution bandwidth, whereas the rest of the lines are inside the high resolution windows. The final line cubes have a beam FWHM $\theta$ of approximately 1.2\arcsec, a primary beam FWHM of 22\arcsec at 220 GHz, a field of view (FoV) of 45.8\arcsec diameter and a channel spacing of 0.08 \kms (2.6 \kms for $^{12}$CO). The effective spectral resolution is approximately 1.7 times the channel spacing. The average rms is around 13 m\Jyb or around 400 mK. The resulting properties of each line cube are reported in Table \ref{tab:cubeprops}. \section{Results\label{sec:results}} \subsection{Streamer in Per-emb-50 \label{sec:asym-inflows}} The integrated intensity images for H$_2$CO\,(3$_{0,3}$--2$_{0,2}$) and SO\,($5_5$--$4_4$) are shown in Figure \ref{fig:images}. The former unveils a large streamer to the southwest of the central star, whereas the latter shows extended emission surrounding the protostar. We refer to H$_2$CO\,(3$_{0,3}$--2$_{0,2}$) as H$_2$CO and SO\,($5_5$--$4_4$) as SO from now on.
The integrated intensity maps are calculated between 5.5 and 9.5 \kms in the case of H$_2$CO and between -1 and 14 \kms for SO, which are the velocity ranges where all emission over $3\sigma$ in each channel (see Table \ref{tab:cubeprops} for $\sigma$ values) is present for each molecule. The streamer stretches from the location of the protostar to the edge of the primary beam in the southwest direction, with a total length of approximately 3000 au (22\arcsec) and a width of approximately 300 au (1\arcsec). As the width of the streamer is barely resolved, this width is considered an upper limit. Also, as the streamer reaches up to the primary beam FWHM, it is possible that it extends further, so the length is a lower limit as well. The peak integrated intensity in H$_2$CO has a signal-to-noise ratio (S/N) of 11. The streamer is detected with an $\mathrm{S/N}\geq 6$ along its 3000 au length. This streamer is spatially unrelated to the outflow of Per-emb-50, since the emission does not spatially overlap with the outflow. Figure \ref{fig:images} Left shows the outflow observed in $^{12}$CO\,(2--1) emission, from the wide-band setup of our NOEMA observations. $^{12}$CO is integrated from $\mathrm{V}_{\mathrm{LSR}}=-4$ to 4 \kms for the blueshifted emission and from 11 to 20 \kms for the redshifted emission. The outflow, previously observed by \cite{Stephens2019}, is in the east-west direction, whereas the H$_2$CO streamer extends in a northeast-southwest direction. Outside the primary beam and to the southwest of Per-emb-50, there is also enhanced H$_2$CO and SO emission. It is difficult to characterize the nature of this structure because it is outside the primary beam, even after primary beam correction: emission in this region might be contaminated by emission from outside our field of view, leaking through the side-lobes of the antenna response pattern.
In Section \ref{sec:discussion-streamlowlims}, we discuss the possibility that this emission is an extension of molecular emission further away from the protostar. \subsection{Streamer kinematics\label{sec:H2COgausfit}} We fit a Gaussian to the H$_2$CO line emission without primary beam correction using \texttt{pyspeckit} (see Appendix \ref{ap:gaussfit} for details). The central velocity $V_{\mathrm{LSR}}$ and velocity dispersion $\sigma_{\mathrm{v}}$ of the Gaussians that best fit the spectrum at each pixel with $\mathrm{S/N}>4$ are shown in Figure \ref{fig:H2COfit}. The H$_2$CO line emission is characterized by mostly blueshifted emission with respect to Per-emb-50's $V_{\mathrm{LSR}}$ (7.5 \kms, see Section \ref{sec:protostellarmass}). The velocity of the streamer further away from the protostar consists of mostly constant blueshifted velocities (with $V_{\mathrm{LSR}}\approx7.2$ \kms) and a low velocity dispersion of $\sigma_{\mathrm{v}}=0.1-0.2$ \kms. Closer to the protostar, between positions 2 and 3 marked in Fig.~\ref{fig:H2COfit} and shifted to the west with respect to the general direction of the streamer, there is a sudden increase in velocities, from 7.2 to 7.5 \kms. We refer to this region as the ``kink'' from now on, as it is a bend in the overall shape of the emission and an abrupt break in the velocity distribution. It is improbable that the sudden redshift in velocities is caused by the outflow, as the outflow's west side consists of blueshifted emission, whereas the kink in the streamer is redshifted with respect to the rest of the streamer's velocities. The kink is followed by a reversal back to blueshifted velocities approaching the protostar, in the inner 1000 au. There is a steep velocity ($V_{\mathrm{LSR}}$) gradient, from 7.1 to 6.5 \kms in $\sim 750$ au, and the velocity dispersion ($\sigma_{\mathrm{v}}$) increases from 0.4 to 0.7 \kms in the same region.
This gradient suggests that the gas follows infall motions dominated by the central gravitational force of the protostar, disk and inner envelope. \subsection{Protostellar mass and velocity\label{sec:protostellarmass}} The integrated intensity image of C$^{18}$O\,(2 -- 1) is shown in Fig.~\ref{fig:keplerC18O} Left. We refer to C$^{18}$O\,(2 -- 1) as C$^{18}$O from now on, unless otherwise stated. The C$^{18}$O observations show the most extended emission of our NOEMA observations and cover a similar velocity range to SO, between -1 and 14 \kms. The C$^{18}$O emission closest to the protostar allows us to determine the protostar's velocity and mass. We produce the position-velocity (PV) diagram for C$^{18}$O along the major axis of the disk in Per-emb-50 found by \cite{Segura-Cox2016} (170$^{\circ}$ counter-clockwise from North, see Fig.~\ref{fig:keplerC18O}). We use the astropy package \texttt{pvextractor} \citep{pvextractor} to obtain the PV diagram along a path centered on the protostar spanning a total length of 2400 au. We average all emission within a 1\arcsec width along the path. The resulting PV diagram in Fig.~\ref{fig:keplerC18O} Right is consistent with rotation, with increasing velocity toward the protostar, suggesting that the C$^{18}$O emission might be tracing Keplerian rotation. Our observations of C$^{18}$O allow us to constrain the mass of the protostar. We obtain a central protostellar velocity $V_{\mathrm{LSR}}$ of 7.5 \kms, and a central protostellar mass $M_{*}=1.7\pm 0.2$ \Msun from the C$^{18}$O PV diagram. For this, we first manually determine the velocity that minimizes the asymmetries in the PV diagram. This results in $V_{\mathrm{LSR}}=7.5$ \kms, marked with a horizontal dotted line in Fig.~\ref{fig:keplerC18O}.
Afterwards, we compare the PV diagram with the Keplerian rotation curves produced by the masses previously estimated for Per-emb-50 by \cite{Fiorellino2021-mdot} using IR spectroscopy: they obtained a range of $0.5-0.7$ \Msun for a star located on the birthline in the HR diagram (using the \cite{Palla1993birthmodel} model) and $1.5-1.9$ \Msun for a 1 Myr old protostar. The Keplerian rotation curves are weighted according to the inclination angle as $\mathrm{v}=\mathrm{v}_{\mathrm{kep}}\sin(i)$, where $i=67^{\circ}$ \citep[see Table \ref{tab:peremb50},][]{Segura-Cox2016}, with $i=0^{\circ}$ corresponding to a face-on disk. The Keplerian rotation curves for a 1\,Myr old central protostar agree well with the 3$\sigma$ contours ($\sigma=14$ m\Jyb, see Table \ref{tab:cubeprops}) of the C$^{18}$O PV diagram. We use the average of the 1 Myr mass upper and lower limits, 1.7 \Msun, and their difference as the uncertainty ($\pm0.2$ \Msun). \subsection{Streamline model\label{sec:streaminemodel}} We model the kinematics of the streamer observed in H$_2$CO emission to confirm that the observed velocity gradient is consistent with infall motion, using the analytic solution for material falling from a rotating, finite-sized cloud toward a central object, dominated by the gravitational force of the latter. We use the analytic solutions of \cite{Mendoza2009}, previously used by \cite{Pineda2020} on the Per-emb-2 streamer. The model returns the position $\Vec{x_i}=(x,y,z)_i$ in au and velocity $\Vec{V_i}=(\mathrm{v}_x,\mathrm{v}_y,\mathrm{v}_z)_i$ in \kms (in Cartesian coordinates) of a parcel of mass along a streamline, where the z axis is defined along the angular momentum vector of the disk and the x-y plane is the disk plane.
The model's input is the initial position and radial velocity of the parcel of mass within the cloud in spherical coordinates (initial radial distance $r_0$, position angle $\vartheta_0$ with respect to the z axis, inclination angle $\varphi_0$, which marks the initial angle within the disk plane, and radial velocity $\mathrm{v}_{r,0}$) and the initial angular velocity of the cloud $\Omega_0$. We also apply two rotations, due to the inclination angle $i$ and position angle $PA$ of the disk, to obtain the position and velocity with respect to the observer's point of view from the disk's reference system. The streamline model requires as input the central mass that dominates the gravity of the system, which in this case is the sum of the masses of the protostar, disk and envelope, $M_{tot}=M_{*}+M_{env}+M_{disk}$. We use $M_{*}=1.7\pm0.2$ \Msun (see Sect. \ref{sec:protostellarmass}) and $M_{disk} = 0.58$ \Msun, the upper limit calculated in \cite{Segura-Cox2016}. For the envelope mass we use an upper limit of 0.39 \Msun and a lower limit of 0.18 \Msun, obtained using the Bolocam 1.1 mm image from \cite{Enoch2006}, taking the emission of Per-emb-50 with the disk component removed (see Appendix \ref{ap:envelopemass} for details). We manually vary the initial position ($r_0$, $\vartheta_0$ and $\varphi_0$), velocity $\mathrm{v}_{r,0}$ and angles $i$ and $PA$ to find the best-fitting parameters. We first assume that the streamer's rotation direction, given by $i$ and $PA$, is the same as the dust disk $i$ and $PA$ from \cite{Segura-Cox2016} (see Table \ref{tab:peremb50}). The inclination angle $i$ obtained from the dust disk is degenerate in 3-dimensional space (the disk can be inclined at $67^{\circ}$ or $-67^{\circ}$). We use the rotation direction given by the C$^{18}$O PV diagram in Sect.
\ref{sec:protostellarmass} and the outflow direction (see Fig.~\ref{fig:images} Left) to determine that the angular velocity vector of the disk $\Vec{\omega}$ points toward the west (in the direction of the blueshifted outflow component) and is inclined toward the observer, thus $i=-67^{\circ}$. Then, we attempt to find analytic solutions with other $i$ and $PA$ values. The $i$ and $PA$ from \cite{Segura-Cox2016}, together with our disambiguation, give the only rotation direction for which we could find a solution for the velocity profile of the streamer. Table \ref{tab:paramsstream} lists the parameters of the analytic solutions for an infalling mass that best reproduce the H$_2$CO line profiles in the image plane and the structure of the velocity along the line of sight. Figure \ref{fig:streamerH2CO} shows the projected trajectory of the streamline model with the best parameters over the central velocity of the Gaussian fit to the H$_2$CO emission, both in the image plane (Left panel) and over the kernel density estimate (KDE) of the velocity and projected distance in the data (Right panel). We use the KDE implementation in the python package \texttt{SciPy} \citep{2020SciPy-NMeth} over the central velocities obtained in Sect. \ref{sec:H2COgausfit}. The streamline model is able to reproduce the general shape of the KDE and the acceleration toward the protostar in the inner 1000 au. The model is not able to reproduce the slight discontinuity seen in the KDE at $\sim1700$ au, which is related to the kink feature (see Sect. \ref{sec:H2COgausfit}). The difference between using the upper and lower limits of the envelope mass is negligible in both the image and velocity planes (red and blue curves in Fig.~\ref{fig:streamerH2CO}).
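As an illustration of the KDE step, a two-dimensional kernel density estimate over per-pixel (projected distance, central velocity) pairs can be built with SciPy's \texttt{gaussian\_kde}; the values below are synthetic stand-ins for the actual Gaussian-fit results, chosen only to mimic a slow velocity decrease toward the protostar:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical per-pixel fit results: projected distance [au] and fitted
# central velocity [km/s], mimicking a drift from ~7.2 km/s at large
# radii toward ~6.5 km/s near the protostar (synthetic, not the data)
dist = rng.uniform(200.0, 3000.0, 500)
v_lsr = 7.2 - 0.7 * np.exp(-dist / 700.0) + rng.normal(0.0, 0.05, 500)

# 2D KDE in the (projected distance, velocity) plane, as used for the
# right panel of the streamline figure
kde = gaussian_kde(np.vstack([dist, v_lsr]))
density = kde(np.array([[1500.0], [7.1]]))[0]  # density at one point
```

The streamline model trajectory can then be overplotted on a grid evaluation of this density to compare model and data in the same plane.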
\begin{table}[ht] \centering \caption{\label{tab:paramsstream}Parameters of the streamline model that best reproduce the H$_2$CO observations.} \begin{tabular}{lll} \hline\hline Parameter & Unit & Value \\ \hline $\vartheta_0$ & deg & 61.5 \\ $\varphi_0$ & deg & 28.0 \\ $r_0$ & au & 3330 \\ $v_{r,0}$ & \kms & 1.25 \\ $\Omega_0$ & s$^{-1}$ & $4.53 \times 10^{-13}$ \\ $i$ & deg & -67 \\ P.A. & deg & 170 \\ \hline \end{tabular} \end{table} The centrifugal radius $r_c$ \citep[called $r_u$ in ][]{Mendoza2009} given by the parameters in Table \ref{tab:paramsstream} is between $r_c=238-258$ au, using the upper or lower limit for the envelope mass, respectively, both of which are within the beam size. This radius is the inner limit down to which the streamer can be modeled as free-falling matter with constant angular momentum, so we interpret this radius as approximately where the streamer deposits its material. This implies that the streamer deposits its mass at a distance of about 150 au from the gas disk's edge, which we estimate has a radius of approximately 90 au using the SO line emission obtained in this work (see Sect. \ref{sec:SOdecomposition}). We do not use the streamline model solutions for distances smaller than $r_c$, as the model does not include motions within the gas and dust disk. \subsection{Streamer mass\label{sec:streamermass}} We calculate the streamer's mass and infall rate using the primary beam corrected C$^{18}$O emission in the area where the streamer is detected in H$_2$CO emission, as we can convert C$^{18}$O emission to a gas mass with simple assumptions. We use the primary beam corrected emission because we use the intensity of the C$^{18}$O line, which we obtain by multiplying the map by the primary beam response, whereas in Sect. \ref{sec:H2COgausfit} we only need the central velocity and velocity dispersion of each spectrum to characterize the streamer's kinematics. C$^{18}$O emission is the most extended of all the molecular transitions used in this work, as it traces not only the gas in the streamer, but also the extended gas in the inner envelope and the filament in which the protostar is embedded, which has a larger extension than the FoV. Nevertheless, the streamer is easily detected in C$^{18}$O, with an $\mathrm{S/N}\approx 10$ at the streamer's tail. The C$^{18}$O emission shows a similar structure to the H$_2$CO map. We cannot characterize the C$^{18}$O extended emission and kinematics outside of the streamer as we lack zero-spacing data, and we see some negative bowls in the image (see the black areas in Fig.~\ref{fig:keplerC18O} Right), indicating missing flux from larger scales. Therefore, for this work we use C$^{18}$O emission to describe the protostar and streamer's mass only. C$^{18}$O shows a similar central velocity to H$_2$CO at the streamer's tail but a different velocity distribution at the position of the protostar, as shown in Fig.~\ref{fig:C18Ofit}. We use the same procedure as for H$_2$CO to obtain the best Gaussian fit to the spectrum of each pixel with $\mathrm{S/N}>4$, described in Appendix \ref{ap:gaussfit}. Where the emission is coincident with H$_2$CO, C$^{18}$O is well described by one Gaussian component that shares the same $V_{\mathrm{LSR}}$ and $\sigma_{\mathrm{v}}$ as the H$_2$CO emission (compare Fig.~\ref{fig:H2COfit} with Fig.~\ref{fig:C18Ofit}). The kink in velocities observed in the middle of the streamer is also observed in C$^{18}$O. Surrounding the protostar, outside of the area traced by the continuum, we observe blueshifted emission toward the northwest and redshifted emission toward the east.
This emission probably traces a mixture of inner envelope and disk rotation and the inner section of the outflow, as it follows the same east-west direction as the $^{12}$CO outflow detected by \cite{Stephens2019}. Therefore, it is safe to use C$^{18}$O emission within the region used to characterize the streamer's kinematics (black polygon in Fig.~\ref{fig:H2COfit} and Fig.~\ref{fig:C18Ofit}) to determine the streamer's mass. We obtain a mass lower limit for the streamer within the region drawn in Figures \ref{fig:H2COfit} and \ref{fig:C18Ofit}. We detail the reasons why this is a lower limit in Sect. \ref{sec:discussion-streamlowlims}. We calculate the mass within the streamer assuming that C$^{18}$O is optically thin, under local thermodynamic equilibrium (LTE), and that the streamer has a constant temperature $T_{ex}$. We use the values in the vicinity of Per-emb-50 from \cite{Friesen2017} and \cite{Dhabal2019}, which are between 10 and 20 K, hence we assume $T_{ex}=15\pm 5$ K. First, we obtain the column density of the C$^{18}$O molecule, $N(\mathrm{C}^{18}\mathrm{O})$, using the primary beam corrected C$^{18}$O image. We explain the details of this procedure in Appendix \ref{ap:columndens}. The C$^{18}$O column density is around $2.8\times10^{15}$ cm$^{-2}$ within 1000 au of the protostar, then it falls to $\approx 8.0\times10^{14}$ cm$^{-2}$, and in the outer 1500 au it reaches up to $3.6\times10^{15}$ cm$^{-2}$. Afterwards, we transform $N(\mathrm{C}^{18}\mathrm{O})$ to molecular hydrogen column density $N(\mathrm{H}_2)$ using $N(\mathrm{H}_2) = X_{\mathrm{C}^{18}\mathrm{O}} N(\mathrm{C}^{18}\mathrm{O})$. We use the canonical ratio $X_{\mathrm{C}^{18}\mathrm{O}}=5.9 \times 10^{6}$ \citep{Frerking1982}.
Finally, we obtain the gas mass in the streamer using: \begin{equation} M_{\mathrm{streamer}} = M_{gas} = \mu m_{H} d^2\, \delta x\, \delta y \sum N_{\mathrm{H}_2}, \label{eq:mh2mass} \end{equation} where $\sum N_{\mathrm{H}_2}$ is the sum of $N_{\mathrm{H}_2}$ in the streamer in cm$^{-2}$, $d$ is the distance to the protostar in cm, $\delta x\, \delta y$ is the size of the pixels in radians, $\mu=2.7$ is the mean molecular weight of the gas, considering the contribution from H$_2$, He and heavy elements, and $m_{H}$ is the H atom mass. We use $d = 293 \pm 22$ pc, the distance to NGC 1333 \citep[][see Table \ref{tab:peremb50}]{Ortiz-Leon2018}. We obtain a lower limit for the total mass of the streamer of $M_{\mathrm{streamer}} = 1.2 \times 10^{-2}$ M$_{\odot}$, with an uncertainty of 15\% due to uncertainties in the flux calibration and in the distance to NGC 1333 (see Table \ref{tab:peremb50}). \subsection{Streamer infall rate\label{sec:streamerinfallrate}} We calculate the mean infall rate and the infall rate along the streamer using the mass obtained in Sect. \ref{sec:streamermass}, and compare it to the protostellar accretion rate. We differentiate between the infall rate $\Dot{M}_{in}$, which is the rate at which mass is deposited from the envelope onto the disk scales, and the accretion rate $\Dot{M}_{acc}$, which is the rate at which the protostar is accreting mass. The free-fall timescale of the streamer, assuming the classic free-fall time equation, \begin{equation} t_{\mathrm{ff}} = \frac{\pi}{2}\sqrt{\frac{R^3}{2\,GM_{tot}}},\label{eq:freefall} \end{equation} is $21.3\pm0.8$ kyr for an envelope mass of 0.18 \Msun ($M_{tot}=2.47$ \Msun), and $20.5\pm0.7$ kyr for $M_{env}=0.39$ \Msun ($M_{tot}=2.68$ \Msun). In Equation \ref{eq:freefall}, $M_{tot}$ is the total mass within a distance $R=r_0=3300$ au from the protostar (obtained from the streamline model in Sect. \ref{sec:streaminemodel}), and $G$ is the gravitational constant.
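As a numerical cross-check of Equations \ref{eq:mh2mass} and \ref{eq:freefall}, the following sketch evaluates both in cgs units. The column density map here is a hypothetical uniform patch (the real map comes from the procedure in Appendix \ref{ap:columndens}), and the free-fall time uses the classic uniform-sphere prefactor $\pi/(2\sqrt{2})$, which reproduces the quoted 21.3 kyr for $M_{tot}=2.47$ \Msun:

```python
import numpy as np

# Physical constants (cgs)
G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
M_H = 1.6726e-24   # hydrogen atom mass [g]
MSUN = 1.989e33    # solar mass [g]
AU = 1.496e13      # astronomical unit [cm]
PC = 3.0857e18     # parsec [cm]
YR = 3.156e7       # year [s]

def streamer_mass(n_h2_map, pixel_arcsec, d_pc, mu=2.7):
    """Gas mass [Msun] from an H2 column density map [cm^-2] (Eq. 1)."""
    pix_rad = pixel_arcsec / 206265.0        # pixel size in radians
    pix_area = (d_pc * PC * pix_rad) ** 2    # physical pixel area [cm^2]
    return mu * M_H * pix_area * np.nansum(n_h2_map) / MSUN

def t_freefall(R_au, M_msun):
    """Classic free-fall time [yr] from rest at radius R (Eq. 2)."""
    R = R_au * AU
    M = M_msun * MSUN
    return (np.pi / 2.0) * np.sqrt(R**3 / (2.0 * G * M)) / YR

# Hypothetical 10x10 pixel patch with uniform N(H2) = 1e21 cm^-2,
# 0.5" pixels at the 293 pc distance of NGC 1333
toy_map = np.full((10, 10), 1e21)
m = streamer_mass(toy_map, 0.5, 293.0)

# Free-fall time for M_tot = 2.47 Msun within R = 3300 au (~21.3 kyr)
t_ff = t_freefall(3300.0, 2.47)
```

Applying `streamer_mass` to the actual $N(\mathrm{H}_2)$ map over the streamer polygon is what yields the $1.2 \times 10^{-2}$ M$_{\odot}$ lower limit quoted above.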
We divide the total mass by the free-fall timescale to obtain an average $\langle \Dot{M}_{in}\rangle$ between $(5.4-5.6)\times10^{-7}$ \Msun yr$^{-1}$. The upper limit is plotted as a dotted line in Fig.~\ref{fig:massaccretion}. Since we constrain the streamer's kinematics (see Sect. \ref{sec:streaminemodel}) and its column density at each position, we can also derive the infall rate at every position along the streamer. We first calculate the free-fall timescale $t_{\mathrm{ff}, \mathrm{model}}$ and average infall rate $\langle \Dot{M}_{in, \mathrm{model}}\rangle$ using the analytic solutions from Sect. \ref{sec:streaminemodel}, to compare them to the classical free-fall estimates. For this, we calculate the travel time along the streamer by using the streamer's trajectory and velocities from the streamline model, from $r_0=3300$ au to the centrifugal radius, which we assume is the landing point (we use $r_c=238$ au). We obtain a total free-fall time of 8.7 kyr for $M_{env}=0.18$ \Msun and 8.6 kyr for $M_{env}=0.39$ \Msun, less than half of the timescales calculated previously, because the classic free-fall timescale (Equation \ref{eq:freefall}) does not consider that the streamline already has an initial velocity toward the protostar at $R$. The resulting infall rate is $\langle \Dot{M}_{in, \mathrm{model}}\rangle = 1.3\times 10^{-6}$ \Msun yr$^{-1}$ for both envelope masses. The average $\langle \Dot{M}_{in, \mathrm{model}}\rangle$ from the streamline model is plotted as a dashed line in Fig.~\ref{fig:massaccretion}. The mass and average infall rates found for the streamer are summarized in Table \ref{tab:streamermass}. \begin{table} \caption{\label{tab:streamermass}Global properties of the streamer found in Sect.
\ref{sec:streamermass} and \ref{sec:streamerinfallrate}.} \centering \begin{tabular}{lll} \hline\hline Property & Unit & Value \\ \hline $M_{\mathrm{streamer}}$ & \Msun & $(1.2\pm0.2) \times 10^{-2}$ \\ $t_{\mathrm{ff}}$ & kyr & 20.5 -- 21.3 \\ $t_{\mathrm{ff},\mathrm{model}}$ & kyr & 8.6 -- 8.7 \\ $\langle \Dot{M}_{in}\rangle$ & \Msun yr$^{-1}$ & $(5.4-5.6)\times10^{-7}$ \\ $\langle \Dot{M}_{in, \mathrm{model}}\rangle$ & \Msun yr$^{-1}$ & $1.3\times 10^{-6}$ \\ \hline \end{tabular} \end{table} We also study how the infall rate changes along the streamer, to determine whether there are significant variations within it. Figure \ref{fig:C18Ofit} Left shows that the molecular emission is clumpy on scales of the beam size, which suggests that there might be small-scale variations along the streamer. We separate the streamer into radial bins and obtain the mean 3-dimensional distance to the protostar $r_{bin}$, the total mass $M_{bin}$, the time taken to traverse the bin $\Delta t_{bin}$ and the infall rate $\Dot{M}_{bin}$ in each bin. The bins are 360 au wide (the major axis FWHM of the beam) and consist of all pixels that are within a given range of projected distances $[r, r+360]$ au from Per-emb-50. We sample every 120 au (1/3 of the major axis of the beam) from 200 au to 3300 au from the protostar, in projected distance. The resulting mass, crossing time and infall rate for each bin are shown in Fig.~\ref{fig:massaccretion}. We calculate $r_{bin}$ as the distance of the streamline model point that is closest to the center of mass of the bin in the image plane. We use $N(\mathrm{C}^{18}\mathrm{O})$ to find the center of mass within each bin and then find the point in the streamline model closest to it. Then, the distance $r_{bin}$ is the 3-dimensional distance between that point and the protostar.
We express this distance as the free-fall timescale from $r_{bin}$ using: \begin{equation} t_{bin} = -\int_{r_{bin}}^{0} \frac{dr'}{\sqrt{\mathrm{v}_{r,0}^2 + 2GM_{tot} \Big(\frac{1}{r'} - \frac{1}{r_0}\Big)}}, \label{eq:integralffwithv0} \end{equation} where $\mathrm{v}_{r,0}$ is the initial velocity (1.25 \kms) at $r_0$ (3300 au) from the streamline model, directed toward the protostar. The integral is computed numerically using the \texttt{integrate} module of the python package \texttt{SciPy}. The difference between the solution of Equation \ref{eq:integralffwithv0} and the free-fall timescale given by the streamline model is less than 20 yr, which is negligible for the timescales we are working with. We compute the infall rate of the streamer using the mass within each bin $M_{bin}$ and the time it takes to cross the bin $\Delta t_{bin}$. $M_{bin}$ is calculated using Equation \ref{eq:mh2mass}, adding $N_{\mathrm{H}_2}$ over all pixels that belong to the bin. We then calculate $\Delta t_{bin}$ in the same way as the total free-fall timescale, but adding up the time obtained from the trajectory and velocities within the bin only. The infall rate for each bin is therefore $\Dot{M}_{bin} = M_{bin}/\Delta t_{bin}$. The infall rate along the streamer is consistently larger than or equal to the accretion rate estimated for Per-emb-50, independent of the variations along the streamer. Figure \ref{fig:massaccretion} shows the resulting $M_{bin}$, $\Delta t_{bin}$ and $\Dot{M}_{bin}$ with respect to the distance to the protostar $r_{bin}$, and compares the infall rates $\Dot{M}_{bin}$ with the accretion rates $\Dot{M}_{acc}$ for Per-emb-50 estimated by \cite{Fiorellino2021-mdot}.
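A minimal sketch of the numerical evaluation of Equation \ref{eq:integralffwithv0} with \texttt{scipy.integrate.quad}, using the streamline-model values quoted above for the $M_{env}=0.18$ \Msun case (this is an illustrative reimplementation, not the original analysis script):

```python
import numpy as np
from scipy.integrate import quad

G = 6.674e-8                       # [cm^3 g^-1 s^-2]
MSUN, AU, YR = 1.989e33, 1.496e13, 3.156e7

# Streamline-model values from the text (M_env = 0.18 Msun case)
M_TOT = 2.47 * MSUN                # central mass [g]
R0 = 3300.0 * AU                   # initial radius [cm]
V_R0 = 1.25e5                      # initial radial velocity [cm/s]

def inv_speed(r):
    """1/v(r') for a parcel falling from r0 with initial speed v_r0."""
    return 1.0 / np.sqrt(V_R0**2 + 2.0 * G * M_TOT * (1.0 / r - 1.0 / R0))

# Travel time from r0 down to the protostar (Eq. 3); the integrand
# vanishes as r' -> 0, so we start just above zero to avoid a 1/0
t_total, _ = quad(inv_speed, 1e-6 * R0, R0, limit=200)
t_total_kyr = t_total / YR / 1e3   # close to the ~8.7 kyr quoted above
```

Replacing the upper limit `R0` with the 3-dimensional distance of a bin gives the corresponding $t_{bin}$, and integrating only across a bin's radial extent gives its crossing time $\Delta t_{bin}$.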
The average $\Dot{M}_{bin}$, which corresponds to $\langle \Dot{M}_{in, \mathrm{model}}\rangle$ estimated using the streamline model, is 5 to 10 times larger than the $\Dot{M}_{acc}$ estimated for a 1 Myr protostar ($(1.3-2.8)\times10^{-7}$ \Msun yr$^{-1}$), and just above the upper limit for the $\Dot{M}_{acc}$ of Per-emb-50 assuming it is located on the birthline of the \cite{Palla1993birthmodel} model ($1.2\times10^{-6}$ \Msun yr$^{-1}$). The protostellar mass calculated in Sect. \ref{sec:protostellarmass} is consistent with a 1\,Myr protostar, so the accretion rate is likely the former, resulting in $\langle \Dot{M}_{in}\rangle / \Dot{M}_{acc}=5-10$. Therefore, the streamer is feeding more than enough mass to sustain the accretion rate of the protostar, and according to our total free-fall time, we can expect a similar infall rate for at least the next 8.7 kyr. The mass per bin varies from $6\times10^{-4}$ to $2\times10^{-3}$ \Msun from bin to bin. This variation drives the fluctuations observed in the infall rates, which are within a factor of 3, with minima located at $\sim$1000 and $\sim$2000 au. Nevertheless, these variations are small and the streamer shows a consistently high infall rate along its full length, reflected in $\langle \Dot{M}_{in}\rangle$. The fluctuations are present on spatial scales larger than 300 au, so they are not affected by the resolution limit. The mass variations might arise because the streamer is clumpy, with structure on scales smaller than our 300 au resolution. On the other hand, the MRS of the data is 16.9\arcsec, but the data are already less sensitive to extended emission before reaching that scale, at around 4\arcsec, so the apparent minima in the infall curve of Fig.~\ref{fig:massaccretion} might be explained by a decreased sensitivity to extended sources. \subsection{Asymmetries in SO and SO$_2$ emission\label{sec:SOpvdiag}} Figure \ref{fig:images} Right shows the SO integrated emission obtained with NOEMA.
Unlike H$_2$CO and C$^{18}$O, SO emission in Per-emb-50 is brightest at about 150 au south of the protostar, and extends out to around 1000 au from it (see also Fig.~\ref{fig:SOspectra}). The southern part of the SO emission overlaps with the brightest H$_2$CO emission. SO also presents emission at $\gtrapprox$ 3000 au from the protostar, but since this emission lies outside the primary beam, we will not describe it further in this work. SO is known to be a tracer of cold, dense gas \citep[e.g.,][]{Swade1989L134n_chem, HacarTafalla2011Taurus} and it is released from dust grains by sufficient heating, for example, by accretion shocks around the centrifugal barrier \citep[e.g.,][]{Sakai2014Nature, vanGelder2021SO}. SO is found in young, embedded sources, but not in T Tauri disks \citep{Guilloteau2016ChemOfDisks, Dutrey2011SInTTauri}, suggesting an increasing S depletion with disk age. This hints that SO traces the dense inner envelope and gas disk around the protostar. We use SO$_2$\,(11$_{1,11}$ -- 10$_{0,10}$) emission to aid in the interpretation of the SO emission. The SO$_2$ integrated intensity image is shown in Fig.~\ref{fig:SO2withspectra}, together with selected spectra. SO$_2$ emission is compact and peaks to the south of Per-emb-50, close to where the H$_2$CO emission ends. Its peak is at the same location as the SO peak, but its emission is $\sim5$ times weaker than SO. The SO$_2$ molecule is a known shock tracer, as it can trace warm areas in accretion shocks \citep{vanGelder2021SO}, in particular at the disk-envelope surface \citep[e.g., ][]{ArturDeLaVillarmois2019ClassIOph}. This suggests that the SO$_2$ emission in the south of Per-emb-50 might be tracing shocked material, probably due to either the streamer impact zone or another phenomenon. We generate the PV diagrams of SO and SO$_2$ line emission along the same cut used for C$^{18}$O in Sect. \ref{sec:protostellarmass} to investigate which kinematics these molecular lines trace.
The resulting PV diagrams are shown in Fig.~\ref{fig:pvSO-SO2}. The shapes of both PV diagrams differ from the C$^{18}$O PV diagram (see Fig.~\ref{fig:C18Ofit}), indicating that these molecules trace different kinematic components. SO has a skewed, diamond-shaped emission, with both blueshifted and redshifted components in the north and south parts of the cut, which suggests a mixture of infall and rotation motions, whereas C$^{18}$O has a bowtie shape consistent with motion dominated almost entirely by Keplerian rotation. Additionally, the brightest SO emission comes from redshifted velocities both toward the north and south of Per-emb-50, whereas blueshifted emission comes almost fully from the northern side of the inner envelope. Unlike SO, SO$_2$ emission is only present around the peak, with no recognizable characteristic shape, and shows barely any emission above $3\sigma$ at blueshifted velocities. Both diagrams peak at the same position, within the inner 300 au from the protostar toward the southeast, and in velocity, at approximately 9.5 \kms. The shape of the emission of these two molecules suggests that they follow motions which are asymmetric in the north-south direction. We fit the ``toy model'' for infall and rotation motion from \cite{Sakai2014Nature} to the SO PV diagram to investigate whether the diamond shape is consistent with the rotation and infall kinematics of a flattened inner envelope. The free parameters in this model are the centrifugal radius of the material in the envelope $r_{c, \mathrm{env}}$ (not to be confused with the centrifugal radius of the streamer $r_c$) and the mass of the central object $M_{tot}$. The best fit curves from this toy model are plotted in red and blue for the redshifted and blueshifted sides, respectively, overlaid on top of the SO PV diagram in Fig.~\ref{fig:pvSO-SO2}.
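For a ballistic rotating-infalling envelope of this kind, the velocity components follow from conservation of energy and angular momentum: $v_\mathrm{rot}=j/r$ and $v_\mathrm{fall}=\sqrt{2GM_{tot}/r-j^2/r^2}$, with $j=\sqrt{GM_{tot}\,r_{c,\mathrm{env}}}$, so that infall stalls at the centrifugal barrier $r_{c,\mathrm{env}}/2$. The sketch below (an illustration only, not the fitting code used in this work) evaluates these expressions with the best-fit values found for the redshifted side, $M_{tot}=4$ \Msun and $r_{c,\mathrm{env}}=130$ au:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
MSUN = 1.989e30      # kg
AU = 1.496e11        # m

def toy_model_velocities(r_au, m_tot_msun, r_c_au):
    """Rotation and infall velocities (km/s) of a ballistic
    rotating-infalling envelope (Sakai et al. 2014-style toy model).
    The specific angular momentum j is set so that rotation
    balances gravity at the centrifugal radius r_c."""
    gm = G * m_tot_msun * MSUN
    r = r_au * AU
    j = math.sqrt(gm * r_c_au * AU)        # specific angular momentum
    v_rot = j / r
    v_fall2 = 2.0 * gm / r - (j / r) ** 2  # >= 0 outside the barrier at r_c/2
    v_fall = math.sqrt(max(v_fall2, 0.0))
    return v_rot / 1e3, v_fall / 1e3

# Illustrative values: redshifted-side fit, M_tot = 4 Msun, r_c,env = 130 au
for r_au in (500, 260, 130, 65):
    v_rot, v_fall = toy_model_velocities(r_au, 4.0, 130.0)
    print(f"r = {r_au:4d} au: v_rot = {v_rot:4.1f} km/s, v_fall = {v_fall:4.1f} km/s")
```

At the centrifugal barrier ($r_{c,\mathrm{env}}/2=65$ au) the infall velocity vanishes and all kinetic energy is rotational, which is what produces the high-velocity peak of the diamond-shaped PV diagram.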
The model must be divided into two parts to reproduce the shape of the diagram: the redshifted and blueshifted sides are best fitted with different sets of parameters. The redshifted side is best fitted with a toy model with $M_{tot, \mathrm{r}}=4$ \Msun and $r_{c, \mathrm{env}, \mathrm{r}}=130$ au, whereas for the blueshifted side $M_{tot, \mathrm{b}}=2.9$ \Msun and $r_{c, \mathrm{env}, \mathrm{b}}=100$ au. Therefore, SO molecular emission is tracing asymmetric kinematics in the inner envelope consistent with infall and rotation, where the redshifted emission (which is brighter) follows a different motion than the blueshifted side. The fact that the masses $M_{tot, \mathrm{r}}$ and $M_{tot, \mathrm{b}}$ are higher than the protostellar mass we determined kinematically (1.7 \Msun, see Sect. \ref{sec:protostellarmass}), plus the fact that they differ from each other, suggests that the model does not capture all the kinematic phenomena in the envelope. \commentstere{These results lead us to investigate the SO emission in more detail.} \subsection{Gaussian components of SO emission\label{sec:SOdecomposition}} The complex shape of the SO PV diagram, the strong peak at redshifted velocities, and the fact that it can only be fitted with the \cite{Sakai2014Nature} toy model using two different sets of parameters for the redshifted and blueshifted parts, suggest that there are at least three components being traced: rotation, infall, and a strong, redshifted component. We separate the different kinematic components through Gaussian spectral fitting of SO to study them separately. We fit one, two, and three Gaussians to the SO spectrum of each pixel with $\mathrm{S/N}>4$, using the same method as for the H$_2$CO and C$^{18}$O emission, described in Appendix \ref{ap:gaussfit}. Figure \ref{fig:SOspectra} shows four spectra in different regions with their respective best fit curves. Most of the SO spectra require two Gaussians, or in some cases three, to be reproduced.
For each pixel, we evaluate how much improvement we obtain by adding a second and third Gaussian using the Akaike Information Criterion (AIC, see Appendix \ref{ap:gaussfit} for details). With the decomposed spectra, we investigate the separate physical components of SO emission that can be described using each Gaussian. We find four signature components in SO emission: one consistent with inner envelope rotation, a compact feature around the protostar with a large velocity dispersion ($\sigma_{\mathrm{v}} > 2$ \kms), a third component consistent with the streamer's kinematics from Sect. \ref{sec:asym-inflows}, and a fourth component completely redshifted with respect to Per-emb-50. We separate the components using the following steps: \begin{enumerate} \item All Gaussian curves with a velocity dispersion $\sigma_{\mathrm{v}} > 2$ \kms correspond to the broad feature, which is consistent with \textit{marginally-resolved disk rotation}. \item Then, all Gaussians with $\sigma_{\mathrm{v}} < 2$ \kms and a central velocity $V_{LSR}> 8.1$ \kms correspond to the strong, \textit{redshifted component}. \item The remaining Gaussians with $V_{LSR}< 7.2$ \kms that are located at a lower declination than $+31^{\circ}21'57.6''$ are consistent with the \textit{streamer}. \item All remaining pixels contain only one Gaussian curve, whose central velocity map is consistent with \textit{inner envelope rotation}. \end{enumerate} The central velocities $V_{\mathrm{LSR}}$ of the four separated components are shown in Fig.~\ref{fig:SOcomponents}. We show the best fit parameters (central velocity and dispersion) for each component in Appendix \ref{ap:SOdecomp-results}. Figure \ref{fig:SOspectra} shows the components in selected SO spectra. The inner envelope rotation component contributes to the diamond shape shown in the PV diagram in Fig.~\ref{fig:pvSO-SO2}, with the blueshifted emission on the northern side and the redshifted emission on the southern side.
This rotation component is resolved in our observations, extending a factor of $\sim 2$ farther in radius than the continuum emission (see Fig.~\ref{fig:SOcomponents} Top Left), so it does not correspond to disk rotation, and it has the same rotation direction shown in our C$^{18}$O data (see Fig.~\ref{fig:keplerC18O}). Within the continuum emission contour, the SO spectra present emission fitted with Gaussians with blueshifted and redshifted velocities with respect to Per-emb-50 and with $\sigma_{\mathrm{v}} > 2$ \kms (see Fig.~\ref{fig:SOcomponents} Top Right and Fig.~\ref{fig:aux-SOsigmav} Top Right). The observed gradient in its central velocities is consistent with rotation kinematics, with the same rotation direction suggested by the C$^{18}$O PV diagram (see Fig.~\ref{fig:keplerC18O}) and the inner envelope rotation. However, as this component only emits within the continuum emission, we assume this gas belongs to the gas disk only, unlike C$^{18}$O, which also traces the flattened inner envelope rotation. Using the stellar mass obtained in Sect. \ref{sec:protostellarmass} and the velocity dispersion from this SO component, we estimate the radius of this compact component assuming it traces Keplerian rotation and that at the disk edge the Keplerian velocity is $v_k\sim \sigma_{\mathrm{v}}\approx 4$ \kms. This estimate returns a disk radius of approximately 90 au. Therefore, this component is consistent with a gas disk around the protostar. Toward the south of Per-emb-50, one of the fitted Gaussian components is consistent with the streamer structure found in H$_2$CO, both in position in the sky and in velocity (compare Fig.~\ref{fig:H2COfit} and Fig.~\ref{fig:SOcomponents} Bottom Left).
This component is clearly separated from all other components in the south, as it is blueshifted with respect to the protostar's $V_{\mathrm{LSR}}$, whereas the other nearby component (inner envelope rotation) is redshifted (see Fig.\,\ref{fig:SOcomponents} and Fig.\,\ref{fig:SOspectra} Left). The SO spectra of this component show the same central velocity as H$_2$CO (see Spectra d in Fig.~\ref{fig:SO2withspectra}) and the same acceleration toward blueshifted velocities found in the H$_2$CO Gaussian fitting (see Fig.~\ref{fig:SOcomponents} Bottom Left). SO traces only the inner 1000 au of the streamer, likely tracing its denser regions. The fourth component found through the Gaussian decomposition is strongly redshifted with respect to the protostar (see Fig.~\ref{fig:SOcomponents} Bottom Right). This component has a larger velocity close to the center of the continuum emission (around 9.5 \kms), which decreases radially (to approximately 8.0 \kms). Its radial velocity gradient is not consistent with the direction of the outflow nor with that of the streamer. We propose that this component might be tracing another asymmetric infall, located along the line of sight. This infall is asymmetric, as we do not see a strongly blueshifted counterpart ($V_{LSR}< 7$ \kms) covering a similar area, as expected for an axisymmetric infall. The only strongly blueshifted component is very thin and located in the same area as the streamer. With the present observations, we do not have enough spatial resolution to characterize this infall further. \section{Discussion\label{sec:discussion}} \subsection{Why are mass and infall rate lower limits?\label{sec:discussion-streamlowlims}} The estimated mass of the streamer (see Sect. \ref{sec:streamermass}) is a lower limit because of observational limits in our data and the assumptions made in the mass calculation. We estimate the length of the streamer as 3300 au, using H$_2$CO emission and the streamline model.
This is possibly not the full length of the streamer, for three reasons. First, the H$_2$CO emission is cut off by the primary beam of the NOEMA observations ($22\arcsec$), and our observations are not sensitive to emission beyond this radius, even if it is strong. Second, there is strong offset emission toward the southwest of Per-emb-50, located just outside the primary beam at $\sim 3000$ au, seen in all of the molecular tracers used in this work (see Figs. \ref{fig:images} and \ref{fig:C18Ofit}). Moreover, there is significant C$^{18}$O emission observed in the SMA MASSES program \citep{Stephens2019} in the same location as the H$_2$CO streamer, which extends to a bright emission feature located beyond the streamer's observed extent in this work. Third, the streamline model requires an initial velocity $\mathrm{v}_{r,0}=1.25$ \kms in the direction of the protostar to fit the outer 1500 au of the streamer (see Table \ref{tab:paramsstream}). This initial velocity might indicate that the streamer starts farther away and was already infalling by the time it reached $r_0$. Another observational limitation is the lack of zero-spacing data. C$^{18}$O emission is extended: the observations have no sensitivity to scales larger than the MRS (22\arcsec), and they already start losing sensitivity to scales larger than 4\arcsec due to the coverage in u-v space. Therefore, we are not certain whether the clumpiness observed in C$^{18}$O is real or influenced by missing flux due to the lack of zero-spacings. The main assumptions that we use in the streamer's mass calculation are, first, a fixed ratio between column densities suitable for undepleted gas, $X_{\mathrm{C}^{18}\mathrm{O}} = 5.9\times10^{6}$ \citep{Frerking1982}, and second, a constant excitation temperature $T_{ex}$. Most likely, $X_{\mathrm{C}^{18}\mathrm{O}}$ is not constant along the streamer.
Within the dense core, it is probable that there is larger C$^{18}$O depletion onto grains due to the increase in density \citep[see ][and references within]{BerginTafalla2007coldcloudsreview}. Where C$^{18}$O is depleted, $X_{\mathrm{C}^{18}\mathrm{O}}$ should be higher to estimate the mass correctly. Also, this conversion factor is calibrated using the Taurus molecular clouds, and might differ in Perseus: \cite{Pineda2008PerseusCO} show that there is variation in the conversion factors of the C$^{18}$O(1--0) line in different regions of Perseus. Second, a constant $T_{ex}$ along the streamline is unlikely: the temperature might be higher closer to the protostar due to thermal heating. This is also suggested by the presence of SO$_2$ emission toward the south of Per-emb-50. Unfortunately, we do not have a good estimate of the gas temperature in the vicinity of Per-emb-50. NH$_3$ is a commonly used chemical thermometer, combining the (1,1) and (2,2) inversion transitions, both observed in NGC 1333 with the GBT \citep{Friesen2017}. Although the NH$_3$(1,1) line is present in Per-emb-50, the NH$_3$(2,2) line is too faint to be detected around the protostar and provide a gas temperature estimate. Higher spatial resolution observations of both NH$_3$ lines do not detect emission in this region \citep{Dhabal2019}. Instead, we use the values in the vicinity of Per-emb-50 from \cite{Friesen2017} and \cite{Dhabal2019}, which are between 10 and 20 K. The variance in $T_{ex}$ contributes less than 5\% of the total uncertainty, and therefore it does not dominate the uncertainties. Given that the mass and mass infall rates we report are lower limits, the general results of this paper are strengthened: the streamer delivers more than enough mass toward the protostellar disk to sustain its high accretion rate in comparison with its neighbors (see Sect. \ref{sec:streamermass} and Sect. \ref{sec:discussion-outbursts}).
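The headline comparison can be reproduced with a back-of-the-envelope check, using only the lower-limit mass and the streamline free-fall timescale quoted in this work (a sketch; no new data are involved):

```python
# Consistency check of the reported mean infall rate (order-of-magnitude sketch).
# Inputs are the lower-limit values quoted in this work.
m_streamer = 1.2e-2          # Msun, lower limit on the streamer mass
t_ff = 9.0e3                 # yr, free-fall timescale from the streamline model
mdot_in = m_streamer / t_ff  # mean infall rate, Msun/yr

mdot_acc = (1.3e-7, 2.8e-7)  # Msun/yr, accretion rate for a 1 Myr protostar
ratios = [mdot_in / m for m in mdot_acc]

print(f"<Mdot_in> ~ {mdot_in:.1e} Msun/yr")
print(f"Mdot_in / Mdot_acc ~ {min(ratios):.1f} - {max(ratios):.1f}")
```

This recovers the mean infall rate of $\approx 1.3\times10^{-6}$ \Msun yr$^{-1}$ and the infall-to-accretion ratio of roughly 5--10 quoted above; since the mass is a lower limit, so is the ratio.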
If the streamer masses or infall rates are actually higher, the streamer can deliver even more mass than what we report here. \subsection{Classical free-fall time versus streamline model\label{sec:discussion-fftime}} For the first time, we calculate the infall timescales along a streamer using the streamline model based on the analytical solution from \cite{Mendoza2009}. We show that in Per-emb-50, where the streamline model requires an initial radial velocity, the infall rate is underestimated by at least a factor of 2 when calculated with the classic -- and initially static -- free-fall timescale. The factor by which the timescale is underestimated depends on the initial velocity of the streamer: if the streamer has an initial velocity at the starting radius $r_0$, it will take less time to reach the protostellar disk than if it started from rest. The streamline model allows us to estimate this initial radial velocity. This highlights the importance of using a streamline model to calculate the timescales involved in infall. The calculation of the initial radial velocity (and consequently, the infall rate) relies on a streamer model that has good constraints both spatially in the image plane and kinematically in the velocity along the line of sight. If the streamer is fully contained along the line of sight, the velocity is correctly characterized, but we have no information about the length of the streamer. On the other hand, if the streamer moves completely within the plane of the sky, there is information about the length and path of the streamer, but the velocity cannot be characterized. Fortunately, in the case of Per-emb-50, the streamer is mostly contained in the plane of the sky, with a small inclination at the start of the streamline (approximately 10$^{\circ}$ according to the streamline model in Sect.
\ref{sec:streaminemodel}), and it becomes more inclined with respect to the line of sight where we see the acceleration closer to the disk. This allows us to sample both the distance (up to the primary beam edge) and the velocity, and therefore constrain the initial radial velocity. \subsection{Streamer is landing within disk scales \label{sec:discussion-impactzone}} Our results indicate that mass is infalling to disk scales (corresponding to distances of $\sim 100-200$ au), both in the case of the streamer and of the asymmetric infall seen in the redshifted component of the SO emission (see Sect. \ref{sec:SOdecomposition}). We can model the streamer down to $\approx 250$ au from the edge of the gas disk (see Sect. \ref{sec:streaminemodel}), and the toy model in Sect. \ref{sec:SOpvdiag} has a centrifugal radius between 100 and 130 au, similar to the 90 au of the gas disk. It is possible that SO$_2$ traces the impact zone where gas is infalling, either that of the streamer or that of the redshifted SO component. The H$_2$CO and SO emission tracing the streamer end within a beam size of the location of the SO$_2$ peak emission, located at $\sim 150$ au (see Fig.\,\ref{fig:SO2withspectra}). This is compatible with the centrifugal radius of $\approx 250$ au obtained for the streamline model (see Sect. \ref{sec:streaminemodel}), as the emission is seen in projected distance whereas $r_c$ is a 3-dimensional distance. According to the streamline model, the impact velocity component along the line of sight at the assumed impact location ($r_c$) is $1.7$ \kms. The FWHM of the SO$_2$ emission spectra at the location of the streamer's end is similar to the estimated impact velocity, suggesting that the impact of the streamer is responsible for the SO$_2$ velocity dispersion.
However, SO$_2$ peaks at the same velocity as the strong, redshifted component that could be attributed to another asymmetric infall, and at the peak location both have the same shape (see Fig.\,\ref{fig:SO2withspectra} Top Right). Therefore, it is unclear which infalling feature most influences the SO$_2$ emission. One interesting result is that the centrifugal radius of the streamer $r_c$ ($\sim 250$ au, see Sect. \ref{sec:streaminemodel}) is about twice the centrifugal radii obtained for the rotating-infalling envelope, $r_{c, \mathrm{env}, \mathrm{r}}=130$ au and $r_{c, \mathrm{env}, \mathrm{b}}=100$ au (see Sect. \ref{sec:SOpvdiag}). This suggests that the streamer and envelope have different origins, and that the streamer might come from outside the dense core. The streamer component seen in the SO emission might indicate the entrance of the streamer into the inner envelope, where the latter is flattened and has a rotating and infalling motion of its own (represented by the redshifted component in Sect. \ref{sec:SOdecomposition}). For the streamer material to reach the centrifugal radius of the inner envelope, which is slightly larger than the gas disk radius (90 au, see Sect. \ref{sec:SOdecomposition}), and eventually the gas disk itself, it must lose angular momentum, for example, through magnetic braking \citep{Mestel-Spitzer1956MagBraking, Mouschovias1980MagBraking, Basu-Mouschovias1994MagBraking}. Loss of angular momentum of material coming from $>10\,000$ au has been observed for Class 0 sources by \cite{Pineda2019SpAngMomCores} down to $\sim 1000$ au, becoming low enough to generate a rotationally supported disk on scales $<100$ au. Future high resolution observations can clarify the interaction between the streamer and the inner envelope for Class I sources.
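To give a feel for how much angular momentum must be shed in the scenario above, one can compare the ballistic specific angular momentum $j=\sqrt{GMr_c}$ implied by each centrifugal radius. The sketch below assumes, for illustration only, that the same central mass applies to the streamer and the envelope, in which case the ratio reduces to $\sqrt{r_{c,\mathrm{env}}/r_c}$:

```python
import math

# Centrifugal radii quoted in this work (au)
r_c_streamer = 250.0
r_c_env = {"redshifted": 130.0, "blueshifted": 100.0}

# For a fixed central mass, j = sqrt(G M r_c) implies
# j_env / j_streamer = sqrt(r_c_env / r_c_streamer).
for side, r_env in r_c_env.items():
    frac_lost = 1.0 - math.sqrt(r_env / r_c_streamer)
    print(f"{side} side: streamer material must shed ~{frac_lost:.0%} of its j")
```

Under these (simplified) assumptions, roughly a third of the specific angular momentum would have to be removed, e.g., by magnetic braking, for the streamer material to join the rotating-infalling envelope.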
\subsection{Relation between streamers and accretion outbursts \label{sec:discussion-outbursts}} \commentstere{The presence of streamers with a high infall rate, like the one found toward Per-emb-50, is linked to accretion variability and luminosity outbursts. Simulations of turbulent molecular clouds suggest that infall from larger scales regulates the accretion toward the protostar, even in phases later than Class 0 \citep{Padoan2014Infallsim, Kuffmeier2018EpisodicAcc}. In the case presented in this work, the relation between the streamer and a (current or future) accretion burst is supported by the high accretion rate and luminosity of Per-emb-50 in comparison with other Class I protostars, as well as by other asymmetric structures found toward current (and past) outbursting sources. } \commentstere{The streamer feeding Per-emb-50 might explain the high accretion rate of this protostar in comparison to other Class I sources in NGC 1333. Its $\Dot{M}_{acc}$ is $\sim$10$\times$ higher than for other Class I sources in NGC 1333 \citep{Fiorellino2021-mdot}, and the infall rate provided by the streamer is 5--10$\times$ larger than $\Dot{M}_{acc}$ (see Sect. \ref{sec:streamerinfallrate}), more than enough to replenish the mass consumed by accretion. The luminosity \citep[between 10 and 25 \Lsun,][]{Enoch2009, Dunham2015gouldbeltcatalog} and accretion rate are consistent with those of Class I sources undergoing an accretion burst \citep{Hsieh2019-ALMA_Outbursts}. However, Per-emb-50's envelope mass is similar to those around other Class I objects \citep[at 2.2 \Msun,][]{Enoch2009, Agurto-Gangas2019}, and the streamer might be the key ingredient to sustain Per-emb-50's high accretion rate. } \commentstere{It is also possible that we are seeing the protostar at the onset of an accretion burst, as it is significantly brighter than other Class I protostars, or the streamer might produce one in the future.
Since the streamer's infall rate is 5--10$\times$ larger than the current accretion rate, it is possible that within the next 9000 yr the accretion rate will grow by up to an order of magnitude. This shows that streamers might provide a significant amount of mass for stellar accretion, and suggests that intense accretion events can take place during the Class I phase. Moreover, if more streamers toward Class I protostars are found and their masses characterized \citep[e.g., in this work and ][]{Yen2014L1489Infall}, this would also suggest that the main accretion phase of the protostar might extend beyond the Class 0 phase. } \commentstere{Recent observations toward young stellar objects find a correlation between accretion bursts and infall from larger scales. Asymmetric structures of 1000 au length have been associated with some FU Ori protostars \citep{Liu2016FUOriandInfall}. Other protostars with a known accretion burst in the past, such as Per-emb-2 \citep{Pineda2020} and V883 Ori \citep{White2019V883Ori}, also have streamers with an infall rate higher than their accretion rate. For these sources, it is suggested that the large-scale infall regulates the episodic accretion. This might be the case for Per-emb-50 as well: we propose that mass is delivered to the protostellar disk, triggering a disk instability \citep[like a gravitational instability, as suggested by][]{Kuffmeier2018EpisodicAcc,Hennebelle2017SpiralsAccretion}; the mass is then transported through the disk and afterwards accreted by the protostar in a burst. This idea is supported by the disk's mass in comparison to other disks: Per-emb-50's dust disk has a mass between 0.28 and 0.58 \Msun, around twice the mass seen in other Class I disks \citep{Segura-Cox2016, SeguraCox2018VANDAM}, which suggests that this disk might be accumulating mass coming from the streamer.
Additionally, even if we are currently unable to resolve this disk, gravitational instabilities produced by infalling material have been suggested to account for the spiral structures found in the disks of other protostars, for instance, in IRAS16293--2422 B \citep[a Class 0 source,][]{Zamponi2021iras16293}, HH 111 VLA 1 \citep[a Class I source,][]{Lee2020InfallProdSpirals} and Elias 2-27 \citep[a Class II protostar,][]{Paneque-Carreno2021spiralsdisk}. Higher resolution observations of the gas disk around Per-emb-50 are required to study these possible instabilities.} \subsection{Where does the streamer come from?\label{sec:discussion-streamerorigin}} The streamer possibly connects to larger scale structures such as filaments and fibers. Within molecular clouds, simulations suggest that up to 50\% of the final protostellar mass comes from beyond the natal core \citep{Pelkonen2021massbeyondcore}, and observations of other protostars show that gas can flow from beyond the protostar's natal core, connecting the protostar with other structures \citep[e.g., ][]{Chou2016DiskandFilconnectionL1455}. Our data, together with the observed environment in which Per-emb-50 lives, suggest that this might be the case for this protostar as well. First, as discussed previously (see Sect. \ref{sec:discussion-streamlowlims}), the H$_2$CO and C$^{18}$O emission are truncated by the NOEMA primary beam, and there is significant C$^{18}$O (2 -- 1) emission observed in the SMA MASSES program \citep{Stephens2019}, located at the position of the offset emission outside the primary beam, directly in line with the streamer. Moreover, the MASSES emission is also cut short at its primary beam \citep[48\arcsec,][]{Stephens2019}. The gas reservoir seen in the MASSES C$^{18}$O observations might be funneled by the streamer or be part of it, implying that streamers might connect with larger structures in their natal molecular clump.
Zooming out, NGC 1333 consists of a complex association of filaments, revealed in dense gas observations \citep{Chen2020fibers,Dhabal2019,Friesen2017}. At larger scales, the streamer points directly toward the crossing of two dense gas filaments observed in NH$_3$ \citep[filament b in][]{Chen2020fibers} and toward a bright extended emission source observed in C$^{18}$O (see Fig.~\ref{fig:C18Olargescale}), located between Per-emb-50 and Per-emb-54. If the streamer continues outside the primary beam, it may connect both protostars, as observed for the protostar L1544-IRS1 and the starless core HRF40 by \cite{Chou2016DiskandFilconnectionL1455}. There are currently no observations at intermediate resolution ($6-10\arcsec$) with an appropriate tracer in NGC 1333 that connect the large scale clumps and filaments surrounding Per-emb-50 to the core. Studies of filaments and fibers such as those of \cite{Chen2020fibers} and \cite{Dhabal2019} show an intricate connection between filaments and cores, but they are not sensitive enough to detect emission close to Per-emb-50, and the C$^{18}$O(1--0) data have too coarse a resolution \citep[46\arcsec beam,][]{Hatchell2005Perseus}. Nevertheless, the general direction of the streamer suggests that it is connected to the larger scale filaments. \subsection{Asymmetries in SO and SO$_2$ emission} The SO and SO$_2$ emission (see Fig.~\ref{fig:pvSO-SO2}) are asymmetric: both are brighter toward the south and at redshifted velocities. SO shows that the kinematics in this source are complex and include both asymmetric infall and rotation, which is most evident in the Gaussian decomposition (see Sect. \ref{sec:SOdecomposition}). These asymmetries show that the inner envelope of Per-emb-50 is not infalling monolithically, unlike the classical picture of core collapse \citep{Terebey1984rotation,Shu1977corecollapse}.
Through the Gaussian decomposition, we find that the redshifted component that dominates the SO emission is centered around the protostar and has a central velocity of approximately 9.5 \kms (2 \kms redshifted with respect to Per-emb-50, see Sect. \ref{sec:SOdecomposition}). We interpret this emission as another asymmetric infall completely contained along the line of sight. Given the velocity gradients seen in Fig.~\ref{fig:SOcomponents} Bottom Right, this component might not be a streamer but rather a wider asymmetric infall, comprising one side of the envelope located between the observer and the protostar. Finding a possible second infall feature in Per-emb-50 shows that the envelope infall kinematics are complex and reaffirms the idea that mass accretion does not proceed in an inside-out, axisymmetric fashion. The asymmetries might be related to the environment in which Per-emb-50 is located, close to the intersection of two filaments in NGC 1333 \citep{Chen2020fibers} and close to several other protostars \citep{Enoch2009, Dunham2015gouldbeltcatalog}. \subsection{Comparison with other streamers\label{sec:discussion-comparison}} Streamers, defined as long ($\gtrsim 1000$ au) and asymmetric accretion flows toward disk-forming scales \citep[$\lesssim 300$ au, as in ][]{Pineda2020}, are a relatively new phenomenon which is proving relevant in star formation, with new discoveries both in gas tracers \citep[][Segura-Cox et al. in prep]{Alves2020} and in dust \citep{Ginski2021}. Per-emb-50's streamer is the first Class I protostellar streamer to be characterized using a free-falling model. We illustrate the streamer and its relation to the various components found surrounding Per-emb-50 in Fig.~\ref{fig:diagramPer50}. Per-emb-50's structure and kinematics are similar to other asymmetric features found in protostars in Perseus and other molecular clouds.
The observed streamer size in this work is within the range of other observed streamers (between 1000 and 10\,000 au), such as those toward [BHB2007]11 \citep{Alves2020}, Per-emb-2 \citep{Pineda2020} and SU Aur \citep{Ginski2021,Akiyama2019}. Similar infalling structures have been found at smaller scales (between roughly 200 and 1000 au), within the inner envelopes of single systems \citep[e.g.,][]{Garufi2021arXiv-accretionDGTauHLTau,Tang2012ABAurLateAccretion} and within the circumbinary disks and inner envelopes of binary systems \citep[e.g.,][]{Phuong2020GGTauSpirals,Takakuwa2017L1551NEBinarySpirals,Dutrey2014GGTau, Takakuwa2014L1551NEBinaryAngularMom}. The streamer in this work also shows a velocity gradient and a curved appearance in the image plane, like many of the streamers mentioned above \citep[e.g.,][]{Pineda2020,Akiyama2019}. We note that the infalling structures at smaller scales (200 -- 1000 au) might be of a different nature, possibly driven by the tidal forces of the binary systems instead of pure free-fall. However, these structures also play a role in feeding the circumstellar disks. Our work uses the same analytical solution as \cite{Pineda2020} used for Per-emb-2, a Class 0 protostellar close binary ($<20$ au) and the first streamer for which a mass and infall rate were obtained, but extends the method to include the analysis of infall rates along the streamer. In Per-emb-2, the streamer's kinematics are consistent with a model with $\mathrm{v}_{r,0}=0$, so using the free-fall timescale does not severely underestimate the infall rate. Per-emb-50's mean streamer infall rate $\langle \Dot{M}_{in}\rangle_{\mathrm{Per50}} = 1.3\times 10^{-6}$ \Msun yr$^{-1}$ is similar to the infall rate in Per-emb-2, $\Dot{M}_{in, \mathrm{Per2}}\approx 10^{-6}$ \Msun yr$^{-1}$ \citep{Pineda2020}. While the infall rate is similar in both sources, the mean ratio $\Dot{M}_{in}/\Dot{M}_{acc}$ is higher for Per-emb-50 (5--10, in contrast with 1.4 for Per-emb-2).
Nevertheless, both are $>1$, even assuming the highest accretion rate possible for Per-emb-50, $(0.6-1.2)\times10^{-6}$ \Msun yr$^{-1}$ (blue area in Fig.~\ref{fig:massaccretion}). Per-emb-50 is unique in that it is the first streamer to definitively show, through the use of a free-fall model, that the infall rate can sustain the accretion rate. This implies that streamers can contribute significant amounts of mass in phases later than Class 0, suggesting that important accretion events can happen in the Class I phase and, in some cases, in Class II sources \citep[as suggested by Garufi et al. 2021 subm., ][]{Alves2020, Tang2012ABAurLateAccretion}. It is still uncertain whether the lack of streamers found in observations is due to an observational bias, or whether streamers are uncommon in the majority of star forming systems. If streamers live for as long as the estimated free-fall timescale of Per-emb-50 ($t_{ff}\sim9000$ yr), and the protostar has only one streamer in its life, there is a chance between 2\% and 30\% of observing one during the Class I phase: the lower limit is obtained by dividing $t_{ff}$ by the estimated Class I phase duration \citep[0.44 Myr, ][]{Evans2009C2Dlifetime}, and the upper limit by dividing $t_{ff}$ by itself plus the time between accretion bursts, estimated to occur once every few 10\,000 yr \citep[][]{Frimann2017-MASSES_Outburst, Jorgensen2015accbursts}. This is just an order of magnitude estimate, as the time between bursts is uncertain and has a wide range of values in different protostars \citep[from a few 1000 to a few 10\,000 yr, e.g.,][]{Hsieh2018burstsinVELLOs, Frimann2017-MASSES_Outburst, Jorgensen2015accbursts}, and previous works show this time might increase from Class 0 to Class I protostars \citep{Hsieh2019-ALMA_Outbursts, Audard2014OutburstsReview}. Nevertheless, asymmetric infall features are seen throughout complete simulations of star formation within a molecular cloud \citep{Kuznetsova2019, Kuffmeier2018EpisodicAcc}.
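The 2\%--30\% range quoted above follows from simple duty-cycle arithmetic; a sketch, taking ``a few 10\,000 yr'' between bursts as $2\times10^{4}$ yr (an assumed representative value), is:

```python
t_ff = 9.0e3             # yr, streamer free-fall timescale
t_class1 = 0.44e6        # yr, estimated Class I duration (Evans et al. 2009)
t_burst = 2.0e4          # yr, assumed "few 10,000 yr" interval between bursts

p_low = t_ff / t_class1           # one streamer over the whole Class I phase
p_high = t_ff / (t_ff + t_burst)  # streamer tied to each burst cycle

print(f"chance of catching a streamer: {p_low:.0%} - {p_high:.0%}")
```

Varying the assumed burst interval within the quoted range (a few $10^{3}$ to a few $10^{4}$ yr) shifts the upper limit accordingly, which is why this remains an order-of-magnitude estimate.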
As streamers are a newly emerging phenomenon in observations, it is unclear which are the best molecules to trace them. Per-emb-50 shows the first streamer characterized with H$_2$CO emission, whereas other streamers are observed in $^{12}$CO \citep[e.g.,][]{Alves2020}, HC$_3$N \citep[e.g.,][]{Pineda2020} and HCO$^+$ \citep[e.g.,][]{Yen2019HLTau}. Upcoming NOEMA observations from the PRODIGE project can uncover more asymmetric infalls and streamers around Class 0/I sources and, in the future, we might be able to make a statistical study of streamers in protostars and investigate which molecules are the best tracers of this phenomenon. \section{Conclusions\label{sec:conclusions}} In this work, we present new NOEMA observations of H$_2$CO, C$^{18}$O, $^{12}$CO, SO and SO$_2$ molecular lines toward Per-emb-50, a Class I source in NGC 1333. We use these observations to characterize the kinematics from envelope to disk scales around the protostar. An illustration of our main findings is shown in Fig.~\ref{fig:diagramPer50}. The main results are summarized as follows: \begin{enumerate} \item We find a streamer depositing material close to the edge of the gas disk around Per-emb-50. It presents a roughly constant line-of-sight velocity in H$_2$CO emission from roughly 1500 to 3000 au from the protostar. There is acceleration toward more blueshifted velocities closer to the protostar, up to around 1000 au. \item The analytical solutions for infalling gas along a streamline can reproduce the observed kinematics of the H$_2$CO emission. An initial velocity of 1.25 \kms at the initial position 3330 au away from the protostar is required to replicate the observed velocity along the line of sight. Taking the initial velocity into account, the free-fall timescale of the streamer is $\sim9000$ yr. \item The streamer is delivering more than enough mass to sustain the protostellar accretion rate. 
We estimate a lower limit to the streamer's mass of $1.2\times10^{-2}$ \Msun, from which we obtain a mean infall rate of $1.3\times 10^{-6}$ \Msun yr$^{-1}$, with variations of a factor of 3 along the streamer. The infall rate is consistently about 5 to 10 times larger than the estimated accretion rate of the protostar. This means that the streamer can deliver enough mass to sustain the high accretion rate of this protostar for at least the next 9000 yr. \item We find signatures of asymmetry in SO and SO$_2$ emission. The PV diagram of SO shows a diamond shape consistent with rotation and infall motions, but there is an asymmetry between the redshifted and blueshifted velocities. Through Gaussian decomposition, we find that SO traces mostly the inner envelope rotation and a redshifted asymmetric infall located along the line of sight. SO also traces the inner 1000 au of the streamer. SO$_2$ emission hints at an impact zone toward the south of Per-emb-50, which is consistent with both the estimated landing site of the streamer and the peak of the redshifted asymmetric infall. \end{enumerate} The description of the envelope around Per-emb-50 and each of its distinct kinematic components is limited by the resolution and primary beam of our observations, together with the lack of zero-spacing data. We emphasize that the streamer might extend farther than the 3000 au characterized in this work, as it is traced in C$^{18}$O emission outside of the primary beam of our observations, pointing toward the crossing of two dense gas filaments; consequently, the mass is a lower limit. Further observations with single-dish antennas will allow us to obtain the total flux (and therefore, mass) along the streamer and confirm its mass fluctuations. Intermediate-resolution observations ($\approx6\arcsec$) that cover an area larger than the NOEMA primary beam will allow us to investigate the connection of this streamer to the larger filament. 
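The mean infall rate in item 3 is simply the streamer mass lower limit divided by the free-fall timescale; a minimal check of the arithmetic:

```python
M_STREAMER = 1.2e-2   # Msun, lower limit to the streamer mass
T_FF = 9_000          # yr, free-fall timescale along the streamline

mdot_in = M_STREAMER / T_FF   # mean infall rate, Msun / yr
print(f"mean infall rate ~ {mdot_in:.1e} Msun/yr")
# prints: mean infall rate ~ 1.3e-06 Msun/yr
```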
Higher spatial resolution data of more than one SO and SO$_2$ molecular transition will help determine the precise landing site of the streamer and allow us to characterize the redshifted infall better. Observations of other transitions of the same molecules observed in this work will allow us to derive physical parameters (volume density and temperature) of the streamer and its landing site. The presence of the streamer and the redshifted SO component highlight the importance of asymmetric infall for the growth and development of protostars at all evolutionary stages. The high infall rate of this source and the presence of streamers in Class I and II sources suggest that important accretion events can occur after the Class 0 phase. \begin{acknowledgements} The authors would like to thank the anonymous referee for their detailed suggestions, which helped improve the final version of this paper. The authors also thank the IRAM staff at the NOEMA observatory for their invaluable work making these observations possible. M.T.V. would like to thank L. Testi for his valuable discussions and comments. M.T.V., J.E.P., P.C. and D. S.-C. acknowledge the support by the Max Planck Society. D. S.-C. is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2102405. A.F. thanks the Spanish MICINN for funding support from PID2019-106235GB-I00. D.S. acknowledges financial support by the Deutsche Forschungsgemeinschaft through SPP 1833: ``Building a Habitable Earth'' (SE 1962/6-1). I.J.-S. has received partial support from the Spanish State Research Agency (AEI) through project number PID2019-105552RB-C41. N.C. acknowledges funding from the European Research Council (ERC) via the ERC Synergy Grant \textsl{ECOGAL} (grant 855130). M.T. acknowledges support from project PID2019-108765GB-I00 funded by MCIN/ AEI /10.13039/501100011033. 
This work is based on observations carried out under project number L19MB with the IRAM NOEMA Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). This research made use of Astropy,\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \citep{astropy:2013, astropy:2018}. \end{acknowledgements} \bibliographystyle{aa} \bibliography{main} \begin{appendix} \section{Continuum at 220 GHz \label{sec:cont}} Figure \ref{fig:cont} shows the continuum image at 1.3 mm (220 GHz) resulting from the LI continuum window of our dataset. The noise level of this image is 0.2 m\Jyb. \section{Gaussian components fitting \label{ap:gaussfit}} We fit a single Gaussian component to all the spectra in the H$_2$CO(3$_{0,3}$--2$_{0,2}$) and the C$^{18}$O(2--1) cubes, using the Python \texttt{pyspeckit} library \citep{ginsburg2011}. We leave out of the analysis all spectra with a peak signal-to-noise ratio lower than 4. After fitting, we select for further analysis the fitted spectra that meet all of the following requirements: \begin{itemize} \item the parameter uncertainties are all smaller than 50\%, \item the Gaussian component has a central velocity within the observed emission velocity range (between 5.5 and 9.5 \kms for H$_2$CO and C$^{18}$O), and \item the fitted amplitude has $\mathrm{S/N}>4$. \end{itemize} The results of the fit for H$_2$CO(3$_{0,3}$--2$_{0,2}$) are shown in Fig.~\ref{fig:H2COfit} and for C$^{18}$O(2--1) in Fig.~\ref{fig:C18Ofit}. We fit one, two and three Gaussian components to the SO spectra near the protostar using the same criteria as above, similar to the multifit approach of \cite{Sokolov2019}. After fitting, we keep the pixels where, for each Gaussian, all of the above criteria are met, except that the central velocity range changes from $5.5$--$9.5$ \kms to $-1.0$--$14.0$ \kms. 
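The selection criteria above can be expressed as a single filter applied to each fitted component. This is a sketch with hypothetical argument names; the actual fitting is done with \texttt{pyspeckit}, and we take "uncertainties smaller than 50\%" to mean relative to each parameter value.

```python
def keep_component(amp, amp_err, vcen, vcen_err, sigma, sigma_err,
                   rms, vmin=5.5, vmax=9.5):
    """Return True if a fitted Gaussian passes all three criteria:
    every parameter uncertainty below 50% of its value, central velocity
    inside the observed emission range, and amplitude S/N > 4."""
    errors_ok = all(err < 0.5 * abs(val)
                    for val, err in ((amp, amp_err),
                                     (vcen, vcen_err),
                                     (sigma, sigma_err)))
    return errors_ok and vmin <= vcen <= vmax and amp / rms > 4.0
```

For the SO multi-component fits, the same filter would be called with `vmin=-1.0, vmax=14.0` for each Gaussian separately.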
We use the Akaike Information Criterion (AIC) to decide whether one, two or three Gaussian components best reproduce each SO spectrum. This criterion determines which model minimizes information loss: \begin{equation} AIC = 2k + \chi^2 + C, \end{equation} where $k$ is related to the number of free parameters of the model (see below), $\chi^2$ is the classical chi-squared statistic, and $C$ is a constant defined by the number of independent channels and the uncertainties \citep{Choudhury2020}. Each Gaussian component has three free parameters, so $k=3g$, where $g$ is the number of Gaussian components in each model. Since we assume that each channel in the spectra has a constant normal error, corresponding to the rms of the SO cube, and we use the same data to test the three models, $C$ is the same for all models and does not play a role in choosing the best model, so we set $C=0$. The fit with the lowest AIC value is the preferred one for each spectrum. We then evaluate the probability that the model with the minimum information loss is a considerable improvement over the other two models for each spectrum. The difference between the minimum AIC value, $AIC_{min}$ (which comes from the ``best'' model), and the AIC value of model $i$, $AIC_{i}$, sets the probability that model $i$ minimizes the information loss as well as the best model: \begin{equation} P \propto \exp \Big(\frac{AIC_{min}-AIC_{i}}{2}\Big). \end{equation} For SO($5_5-4_4$), all of the fitted spectra have less than a 5\% probability that the competing models minimize the information loss better than the model with the minimum $AIC$. This means that, for those spectra that are well fitted by three Gaussians, the improvement over two Gaussians is significant. The same can be said for the improvement in those spectra that are best fitted with two Gaussians instead of only one. 
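The model-selection logic above can be sketched as follows, with $k=3g$ and $C=0$ as in the text (the $\chi^2$ values in the usage example are hypothetical):

```python
import math

def aic(chi2, n_gauss):
    """AIC with C = 0; each Gaussian contributes 3 free parameters (k = 3g)."""
    return 2 * (3 * n_gauss) + chi2

def select_model(chi2_by_ngauss):
    """Pick the number of Gaussians that minimizes the AIC, and return the
    relative probability that each competing model minimizes the
    information loss as well as the best one."""
    aics = {g: aic(c, g) for g, c in chi2_by_ngauss.items()}
    best = min(aics, key=aics.get)
    rel_prob = {g: math.exp((aics[best] - a) / 2.0) for g, a in aics.items()}
    return best, rel_prob

# Hypothetical chi-squared values for 1-, 2- and 3-Gaussian fits of one pixel:
best, probs = select_model({1: 100.0, 2: 50.0, 3: 48.0})
# best == 2; a competitor with probs[g] < 0.05 is significantly worse.
```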
Therefore, we conclude that each spectrum is well described by one, two or three Gaussian components, depending on each case. Figure \ref{fig:SOspectra} shows four spectra fitted with either one, two or three Gaussians. \section{Envelope mass calculation\label{ap:envelopemass}} We obtained the envelope mass upper and lower limits using the flux in our continuum obtained with NOEMA (see Appendix \ref{sec:cont}) and the Bolocam 1.1 mm image from \cite{Enoch2006}. First, we obtain the flux in the Bolocam 1.1 mm continuum within a beam-sized aperture ($\theta_{FWHM}=31\arcsec$) centered at the location of Per-emb-50, $F_{\text{Bolocam}} = 324 \pm 46$ mJy, together with the peak value within this aperture, $I_{\text{Bolocam}} = 573 \pm 55$ m\Jyb. Then, we obtain the total flux and peak value within the primary beam of the continuum obtained with NOEMA ($22\arcsec$), $F_{\text{NOEMA}} = 89 \pm 2$ mJy and $I_{\text{NOEMA}} = 72.9 \pm 1$ m\Jyb, respectively. We assume that the NOEMA continuum contains disk emission only, as it does not contain zero-spacing information, whereas the Bolocam 1.1 mm image includes emission from both the disk and the envelope. We subtract the flux in the NOEMA continuum from the flux obtained with Bolocam, thus obtaining the flux of the envelope only, $\Delta S_{1\text{mm}} = F_{\text{Bolocam}} - F_{\text{NOEMA}}$, and use Equation 4 of \cite{Enoch2009} to calculate the envelope mass: \begin{equation} M_{env} = \frac{D^2 \Delta S_{1\text{mm}}}{B_{1\text{mm}}(T_D)\kappa_{1\text{mm}}}. \end{equation} We assume that the continuum at 1 mm consists of optically thin emission and use $\kappa_{1mm}=0.0114$ cm$^{2}$ g$^{-1}$ and $T_D=15$ K as stated in \cite{Enoch2009}, and a distance $D=293$ pc \citep{Ortiz-Leon2018}. Using the flux difference we obtain an envelope mass of 0.18 \Msun, and with the peak difference, 0.39 \Msun. 
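For reference, a sketch of the optically thin dust-mass estimate in cgs units. The exact numerical result depends on the opacity and frequency conventions adopted in Enoch et al. (2009), so the output of this sketch is indicative only and need not reproduce the quoted masses exactly.

```python
import math

# Physical constants in cgs units
H_PLANCK = 6.626e-27   # erg s
K_B = 1.381e-16        # erg / K
C_LIGHT = 2.998e10     # cm / s
PC_CM = 3.086e18       # cm per parsec
MSUN_G = 1.989e33      # g per solar mass

def planck(nu_hz, t_k):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return (2.0 * H_PLANCK * nu_hz**3 / C_LIGHT**2
            / math.expm1(H_PLANCK * nu_hz / (K_B * t_k)))

def envelope_mass_msun(delta_s_mjy, d_pc=293.0, t_dust=15.0,
                       kappa_cm2_g=0.0114, wavelength_cm=0.11):
    """M_env = D^2 * Delta S / (B_nu(T_D) * kappa), optically thin dust."""
    nu = C_LIGHT / wavelength_cm
    d_cm = d_pc * PC_CM
    ds_cgs = delta_s_mjy * 1e-26   # mJy -> erg s^-1 cm^-2 Hz^-1
    return d_cm**2 * ds_cgs / (planck(nu, t_dust) * kappa_cm2_g) / MSUN_G

# Flux difference between Bolocam and NOEMA: 324 - 89 = 235 mJy
mass = envelope_mass_msun(235.0)
```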
\section{Determination of column density \label{ap:columndens}} We first obtain the integrated intensity map of the primary-beam-corrected C$^{18}$O(2--1) emission in the spatial region where the streamer is defined for the streamline model (see Fig.~\ref{fig:C18Ofit}). We integrate the map between 5.5 and 9.5 \kms. This velocity range covers the spectral emission of the streamer in C$^{18}$O(2--1) completely. Then, we calculate the total column density of the C$^{18}$O molecule using equation 80 of \cite{Mangum2015} in each pixel of the integrated intensity map. We use a line strength $S=\frac{J^2}{J(2J+1)} = 2/5$, the dipole moment of the C$^{18}$O molecule $\mu=0.11079$ Debye $=1.1079\times10^{-19}$ esu cm, the rotor rotation constant for C$^{18}$O $B_0 = 54891.420$ MHz, the upper state energy of the C$^{18}$O(2--1) transition $E_u=15.81$ K, and the degeneracy of the C$^{18}$O(2--1) transition $g_J = 2J+1 = 5$. We assume a beam filling factor $f=1$, as the emission is resolved. The resulting equation for $N(\mathrm{C}^{18}\mathrm{O})$ in cm$^{-2}$, with $T_{ex}$ in K and $\int T_R\, dv$ in K \kms, is \begin{multline} N(\mathrm{C}^{18}\mathrm{O}) = 1.63 \times 10^{15} \frac{Q_{rot}(B_0, T_{ex})}{5} \frac{\exp(\frac{15.81}{T_{ex}})}{\exp(\frac{10.54}{T_{ex}})-1} \\ \times \frac{\int T_R\, dv}{J_{\nu}(T_{ex})-J_{\nu}(T_{bg})}, \end{multline} where \begin{equation} Q_{rot} = \frac{k_B T_{ex}}{h B_0} + \frac{1}{3} \end{equation} is the first order Taylor approximation of the partition function of a rigid-rotor diatomic molecule, and \begin{equation} J_{\nu}(T) = \frac{\frac{h\nu}{k_B }}{\exp(\frac{h\nu}{k_B T})-1} \end{equation} is the Rayleigh-Jeans equivalent temperature in K. We use $T_{bg}=2.7$ K and $\nu=219.560$ GHz (the frequency of the C$^{18}$O(2--1) line). We use a constant $T_{ex}=15\pm5$ K. 
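The column-density expression can be evaluated per pixel as follows; this is a direct transcription of the equations above, taking the integrated intensity in K \kms as input.

```python
import math

K_B = 1.381e-16   # erg / K, Boltzmann constant
H = 6.626e-27     # erg s, Planck constant
B0 = 54891.420e6  # Hz, C18O rigid-rotor rotation constant
NU = 219.560e9    # Hz, frequency of the C18O(2-1) line
T_BG = 2.7        # K, background temperature

def j_nu(t_k):
    """Rayleigh-Jeans equivalent temperature in K."""
    x = H * NU / K_B          # h nu / k_B ~ 10.54 K
    return x / math.expm1(x / t_k)

def q_rot(t_ex):
    """First-order partition function of a rigid-rotor diatomic molecule."""
    return K_B * t_ex / (H * B0) + 1.0 / 3.0

def n_c18o(int_tr_dv, t_ex=15.0):
    """Total C18O column density in cm^-2 for an integrated
    intensity int_tr_dv in K km/s."""
    return (1.63e15 * q_rot(t_ex) / 5.0
            * math.exp(15.81 / t_ex) / math.expm1(10.54 / t_ex)
            * int_tr_dv / (j_nu(t_ex) - j_nu(T_BG)))
```

At $T_{ex}=15$ K this gives a conversion of a few $\times10^{14}$ cm$^{-2}$ per K \kms, linear in the integrated intensity.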
\section{SO$_2$ spectra and image \label{sec:SO2}} Figure \ref{fig:SO2withspectra} shows the integrated intensity map of SO$_2$(11$_{1,11}$ -- 10$_{0,10}$) between 5 and 12 \kms and, to the left and right, spectra of SO, SO$_2$ and H$_2$CO at the same selected positions as in Fig.~\ref{fig:SOspectra}. \section{SO decomposition \label{ap:SOdecomp-results}} Figure \ref{fig:aux-SOsigmav} shows the velocity dispersion $\sigma_{\mathrm{v}}$ of each kinematic element found in Sect. \ref{sec:SOdecomposition} (see Fig.~\ref{fig:SOcomponents}) through the Gaussian fitting described in Appendix \ref{ap:gaussfit}. Note that all images have different color scales. \end{appendix}
Title: Nuclear mass predictions with machine learning reaching the accuracy required by $r$-process studies
Abstract: Nuclear masses are predicted with Bayesian neural networks by learning the mass surface of even-even nuclei and the correlation energies to their neighbouring nuclei. By keeping the known physics in various sophisticated mass models and performing a delicate design of the neural networks, the proposed Bayesian machine learning (BML) mass model achieves an accuracy of $84$~keV, which crosses the accuracy threshold of $100$~keV in the experimentally known region. It is also demonstrated that the corresponding uncertainties of the mass predictions are properly evaluated, while the uncertainties increase by about $50$~keV per step along the isotopic chains towards the unknown region. The shell structures in the known region are well described and several important features in the unknown region are predicted, such as the new magic numbers around $N = 40$, the robustness of the $N = 82$ shell, the quenching of the $N = 126$ shell, and the smooth separation energies around $N = 104$.
https://export.arxiv.org/pdf/2208.04783
\begin{CJK*}{UTF8}{gbsn} \title{Nuclear mass predictions with machine learning reaching the accuracy required by $r$-process studies} \author{Z. M. Niu$^{1}$}\email{zmniu@ahu.edu.cn} \author{H. Z. Liang$^{2,3}$}\email{haozhao.liang@phys.s.u-tokyo.ac.jp} \affiliation{$^1$School of Physics and Optoelectronic Engineering, Anhui University, Hefei 230601, China} \affiliation{$^2$Department of Physics, Graduate School of Science, The University of Tokyo, Tokyo 113-0033, Japan} \affiliation{$^3$RIKEN Nishina Center, Wako 351-0198, Japan} \date{\today} \textit{Introduction}---The origin of heavy elements in the Universe is an important but unanswered fundamental question of science~\cite{Haseltine2002Discover}. The rapid neutron-capture process ($r$-process) is responsible for producing about half of the elements heavier than iron~\cite{Burbidge1957RMP}. During the past decades, the $r$-process studies have made substantial progress from both nuclear physics and astrophysics sides~\cite{Arnould2007PRp, Cowan2021RMP}. However, the $r$-process astrophysical sources and their specific conditions remain mysteries, and the identification of the most important $r$-process site also remains a hot topic~\cite{Smartt2017Nature, Watson2019Nature, Siegel2019Nature}. The $r$-process studies necessitate the joint efforts of nuclear physicists and astrophysicists~\cite{Kajino2019PPNP}. From the nuclear side, nuclear mass is a crucial input~\cite{Martin2016PRL}, which determines the $r$-process path, and hence relates the main $r$-process abundance peaks at $A=130$ and $195$ to the nuclear shell closures at $N=82$ and $126$, respectively. Nuclear mass also determines the reaction energies of $\beta$ decay and neutron capture in the $r$ process, so it is one important source of theoretical uncertainties of $\beta$-decay half-lives and neutron-capture rates~\cite{Li2019SCPMA, Ma2019PRC}. 
Although measurements of nuclear masses have made great progress in recent years, especially for the nuclei on the $r$-process path around $N=82$~\cite{Wang2021CPC}, the $r$-process path near $N=126$ or above is still unreachable for the present, or even the next-generation, radioactive ion beam facilities. Therefore, accurate nuclear mass predictions are essential to understand the mysteries in the $r$ process. Due to the difficulties in the quantum many-body problem and the complexity of the nuclear force, accurate nuclear mass prediction is a very challenging theoretical task. Even in the experimentally known region, the accuracies of nuclear mass predictions are generally around $500$ keV~\cite{Lunney2003RMP}, which is much poorer than the accuracy of $100$ keV required by $r$-process studies~\cite{Mumpower2016PPNP}. The greater difficulty lies in the extrapolation: the deviations between different mass models can reach tens of MeV when they are extrapolated to the unknown neutron-drip line. Therefore, accurate nuclear mass prediction has become one of the bottlenecks in $r$-process studies. In particular, one of the hot topics in $r$-process studies during the past decades is the origin of the rare-earth peak, which has been claimed to be associated with the $N\approx 104$ kink in the separation energies~\cite{Surman1997PRL} or the doubly asymmetric fission fragment distributions in the $A\approx 278$ region~\cite{Goriely2013PRL}. If one can construct accurate enough mass predictions for the $r$-path nuclei leading to the rare-earth peak, one can confirm whether there is a kink in the separation energies near $N=104$, which would be an essential step toward understanding the origin of the rare-earth peak. Regarding the above key open questions, we recall that the famous Bethe-Weizs\"{a}cker (BW) formula is the first nuclear mass model, in which the nucleus is assumed to be a charged liquid drop~\cite{Weizsacker1935ZP, Bethe1936RMP}. 
It achieves an accuracy of about $3$~MeV, while large deviations from the experimental data are found for nuclei near the magic numbers. These large deviations can be reduced by including microscopic correction energies, yielding nuclear mass predictions with accuracies of about $300$--$500$~keV. This kind of mass model is usually called a ``macroscopic-microscopic'' model; examples include the finite-range droplet model (FRDM)~\cite{Moller2012PRL} and the Weizs\"{a}cker-Skyrme (WS) model~\cite{Wang2014PLB}. However, the microscopic correction energies are generally extracted from the single-particle levels of phenomenological mean fields, which are generally independent of the macroscopic part. Such an inconsistency between the macroscopic and microscopic parts would affect the model reliability. The microscopic mass models based on nuclear density functional theory are usually believed to have better extrapolation abilities, e.g., the relativistic mean-field model~\cite{Geng2005PTP, Xia2018ADNDT} and the nonrelativistic Hartree-Fock-Bogoliubov (HFB) model with the Skyrme~\cite{Goriely2009PRLa} or Gogny~\cite{Goriely2009PRLb} force. Their present accuracies are, however, generally lower than those of the macroscopic-microscopic models. To further improve the accuracy of nuclear models, machine learning techniques have attracted much attention during the past years. In particular, the Bayesian version of machine learning is expected to be able to provide the corresponding theoretical uncertainties~\cite{Utama2016PRC}. For nuclear mass predictions with Bayesian neural networks (BNN), we pointed out that the performance of BNN can be improved by enriching the network inputs with information of physics~\cite{Niu2018PLB}, such as the pairing and shell effects. Neufcourt \emph{et al}.~\cite{Neufcourt2018PRC} agreed with this idea in their study of two-neutron separation energies. 
Since then, nuclear structure with machine learning techniques has become a hot frontier, for example, in the studies of the neutron-drip line in the Ca region~\cite{Neufcourt2019PRL}, incomplete fission yields~\cite{Wang2019PRL}, and low-lying excitation spectra~\cite{Lasseri2020PRL}. From the above studies, see also a recent review~\cite{Bedaque2021EPJA} and the references therein, one can conclude that the accuracy and the extrapolation capability of studies with machine learning techniques crucially depend on the delicate design of the neural network, taking into account as much physics as possible. In this Letter, we propose a nuclear mass model with Bayesian machine learning and pay special attention to the design of the structure, outputs, and inputs of the neural networks. We will first demonstrate the accuracy of mass prediction as well as the capability of extrapolation of the present model with a theory-to-theory validation. We will then show that the present BML mass model achieves an accuracy of $84$~keV with respect to the experimental data in AME2016~\cite{Wang2017CPC} and also discuss the shell structures in the experimentally known and unknown regions, which are crucial for the $r$-process studies. \textit{Designs of BNN}---In the present study, we adopt the general scheme of BNN~\cite{Neal1996Book}. BNN can avoid the over-fitting problem automatically by using hyper priors. It can also quantify the uncertainties in predictions, since all model parameters are described with probability distributions. For the present design of the network structure, we keep in mind that the physics (e.g., the ground-state spin and parity) of odd-$A$ and odd-odd nuclei is much more complicated than that of even-even nuclei. Thus, the predictive power, especially the extrapolation capability, would be substantially affected if we directly trained the neural network on the whole nuclear mass surface. 
A much more effective strategy is to train the neural network on the smoother mass surface of even-even nuclei, together with training on the separation energies that relate them to their neighbouring odd-$A$ and odd-odd nuclei. As a result, there are in total $9$ different BNNs to cover the mass predictions for the whole nuclear chart; see Fig.~\textcolor[rgb]{0.00,0.00,1.00}{1}\iffalse\ref{Fig:Diag}\fi~in the Supplemental Materials~\cite{SuppMat} and the corresponding descriptions. For the design of the network outputs, in our previous study~\cite{Niu2018SciB}, we showed quantitatively that the performance of machine learning is very limited if crucial information of physics is missing. The discrepancy between the experimental data and the predictions of a given model, $\delta M = M^{\rm exp} - M^{\rm model}$, is usually taken as the output, i.e., the learning target \cite{Utama2016PRC, Niu2018PLB}, which can effectively include the known physics of the given model. To make the best use of the established nuclear mass models, we employ the macroscopic model BW2~\cite{Kirson2008NPA}, the macroscopic-microscopic models KTUY~\cite{Koura2005PTP}, FRDM12~\cite{Moller2012PRL}, and WS4~\cite{Wang2014PLB}, the microscopic models RMF~\cite{Geng2005PTP} and HFB-31~\cite{Goriely2016PRC}, and other high-precision global mass models, Bhagwat~\cite{Bhagwat2014PRC} and DZ28~\cite{Duflo1995PRC}. These mass models have taken into account the physics important to the description of nuclear mass from different aspects. For the design of the network inputs, in addition to $Z$ and $N$, we further introduce $E^{\rm model}_{\rm mic} \equiv M^{\rm model} - E_{\rm mac}$, or the counterparts for the separation energies, as an input. This quantity is completely missing in the macroscopic mass models, while it is related to the effective mass of the nucleon in the microscopic mass models. 
It can be seen that a perfect mass model, one that reproduces all the experimental data, holds a perfect correlation between the input and output, $E^{\rm model}_{\rm mic} = E_{\rm mic}$, independent of $Z$ and $N$. In such a way, the systematic overestimation or underestimation of $E^{\rm model}_{\rm mic}$ by a given model can be corrected by BNN in an efficient way. In principle, $E_{\rm mac}$ can be taken as any smooth function of $Z$ and $N$ on the nuclear mass surface. Here, it is taken from the macroscopic part of FRDM12. Based on each mass model $i$, we can get its corresponding BNN mass prediction $M_i$ with the error $\sigma_i$. To describe the systematic error of mass prediction, the weighted mean $M$ and standard deviation $\sigma_{M}$ of $M_i$ are taken as the final mass prediction: \begin{eqnarray}\label{Eq:Maver} M = \frac{\sum_{i=1}^m \omega_i M_i}{\sum_{i=1}^m \omega_i}, \quad \sigma_{M} = \frac{1}{\sqrt{\sum_{i=1}^m \omega_i}}, \end{eqnarray} where $\omega_i=1/\sigma_i^2$ and $m$ is the number of mass models. Since some sources of error may not be taken into account, the error $\sigma_{M}$ is further corrected by a factor $\chi_\nu$, which accounts for the deviations between the mass predictions $M$ and the experimental data: \begin{eqnarray} \chi_\nu^2 = \frac1n \sum_{Z,N\geqslant 8} \frac{1}{\sigma_{M(Z,N)}^2} \left[M(Z,N) - M_{\rm exp}(Z,N)\right]^2, \end{eqnarray} where $n$ is the number of nuclei in the learning set. Here $\chi_\nu=2.1$ for the experimental data in AME2016. For simplicity, the Bayesian machine learning model described above will be denoted BML hereafter. Before ending this part, we perform a theory-to-theory validation with the above designs of BNN, to demonstrate the accuracy of prediction and the capability of extrapolation. In such a benchmark calculation, the nuclear masses of FRDM12 are used as the pseudo experimental data (i.e., the target values). 
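The combination of the individual BNN predictions above is a standard inverse-variance weighting; a minimal sketch with toy numbers:

```python
import math

def combine(masses, sigmas):
    """Weighted mean of individual BNN mass predictions M_i with
    weights w_i = 1/sigma_i^2, and its standard deviation."""
    w = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(w)
    m = sum(wi * mi for wi, mi in zip(w, masses)) / wsum
    return m, 1.0 / math.sqrt(wsum)

def chi_nu(m_pred, sigma_m, m_exp):
    """Error-inflation factor chi_nu from the learning-set residuals;
    the final uncertainty is chi_nu * sigma_M."""
    terms = [((mp - me) / s) ** 2
             for mp, s, me in zip(m_pred, sigma_m, m_exp)]
    return math.sqrt(sum(terms) / len(terms))
```

A more precise model (smaller $\sigma_i$) pulls the weighted mean toward its prediction, and the combined uncertainty is always smaller than the smallest individual $\sigma_i$.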
Meanwhile, the other 7 mass models---BW2, KTUY, WS4, RMF, HFB-31, Bhagwat, and DZ28---are regarded as our present knowledge and used as the inputs of BNN. To simulate the present experimentally known region, the learning set is limited to those nuclei listed in AME2016, and all nuclei outside AME2016 are used to test the extrapolation capability. As a result, the mass prediction accuracy in the learning region reaches $93$~keV. It is also found that the mass prediction uncertainties increase by about $50$~keV per step along the isotopic chains towards the unknown region, which agrees with the standard deviations between the mass predictions and the corresponding FRDM12 values. For details, see Fig.~\textcolor[rgb]{0.00,0.00,1.00}{2}\iffalse~\ref{Fig:Extrap}\fi~in the Supplemental Materials~\cite{SuppMat} and the corresponding discussion. \textit{Results and Discussion}---Using the high-precision experimental data in AME2016 as the learning set, we construct the mass predictions of the present BML model. The root-mean-square (rms) deviations of $M$ and various separation or decay energies with respect to the experimental data for the learning set are given in Fig.~\ref{Fig:sigrms}, together with the corresponding rms deviations of several other mass models. It is clear that the BML model achieves a very high accuracy of mass prediction, which is, to our knowledge, the best accuracy among global mass predictions and for the first time crosses the accuracy threshold of $100$~keV in the known region. Furthermore, the BML model also achieves high accuracies for various separation and decay energies, which are at least about $3$ times better than those of the other mass models shown. 
Even compared with the previous machine learning model WS4+BNN-I4~\cite{Niu2018PLB}, whose corresponding rms values are $184$, $208$, $216$, $213$, $227$, and $255$~keV for mass, $S_n$, $S_p$, $S_{2n}$, $S_{2p}$, and $Q_{\beta}$, respectively, the present BML model achieves much smaller rms values, i.e., $84$, $78$, $83$, $105$, $111$, and $99$~keV, respectively. This indicates that the BML model describes excellently not only the mass surface globally but also its local details, including its derivatives in different directions on the nuclear chart. The microscopic correction energies $E_{\rm mic}$ can reveal the shell effects in nuclear properties. Therefore, we show $E_{\rm mic}^{\rm BML} = E^{\rm BML} - E_{\rm mac}$ of BML in Fig.~\ref{Fig:Emic}. It is clear that the shell structures in the known region are well reproduced. When extrapolated to the unknown region, even to the drip lines, there are several remarkable structure features, which are hardly achieved by other learning approaches, such as the radial basis function approach~\cite{Niu2013PRCb, Niu2016PRC, Niu2019PRCb}. Apart from the traditional magic numbers, the new magic numbers around $N=40$ in the light-nuclei region and those around $Z=120$ in the superheavy-nuclei region are also predicted by BML. It is well known that the $N=82$ and $N=126$ shells are crucial for the $r$-process properties; e.g., they are responsible for the main peaks of the solar $r$-abundance at $A=130$ and $A=195$, respectively. From Fig.~\ref{Fig:Emic}, it is found that the $N=82$ shell remains robust even going to the neutron-drip line, which is supported by recent experimental studies~\cite{Watanabe2013PRL}. However, we predict that the $N=126$ shell will first quench and then strengthen again as it approaches the proton magic number $Z=50$ when going to the neutron-drip line. The $N=126$ shell also quenches when going to the proton-drip line, even just away from the known region. 
The two-proton (two-neutron) gaps $\delta_{2p}$ ($\delta_{2n}$) are also important signatures of nuclear magic numbers, taking local maxima at proton (neutron) magic numbers. In Fig.~\ref{Fig:d2nd2p}, which shows $\delta_{2p}$ and $\delta_{2n}$ of BML, the traditional magic numbers are well exhibited. The BML model predicts a neutron magic number at $N=184$, although its $\delta_{2n}$ is not as strong as those of the traditional magic numbers. It should be pointed out that the larger $\delta_{2n}$ at $N\thickapprox 200$ is not necessarily a signature of a magic number; it mainly originates from the lack of mass predictions for nuclei with $N>200$ in the KTUY model. To show the details of the present BML mass model and illustrate explicitly its extrapolation capability, the mass differences between $M_{\rm th}$ from various mass models and $M_{\rm exp}$ in AME2016 are shown in Fig.~\ref{Fig:MdCrNd}, taking the Cr and Nd isotopes as examples. In these two isotopic chains, there are several experimental data on both the neutron-rich and proton-rich sides with accuracies worse than $100$~keV, shown as the white regions in Fig.~\ref{Fig:MdCrNd}; we did not include those data in the learning set. It is clear that in the BML learning areas, the shaded regions in Fig.~\ref{Fig:MdCrNd}, the BML mass predictions are in excellent agreement with the experimental data, with an accuracy around $100$~keV, apart from the region around $^{154}$Nd. It is also seen that in the extrapolation areas, on both the neutron-rich and proton-rich sides, the BML mass predictions agree with the experimental data within the experimental and theoretical uncertainties. Remarkably, this agreement still holds when the mass predictions are extrapolated from $^{58}$Cr to $^{70}$Cr, i.e., by $12$ more neutrons. 
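The two-neutron gap diagnostic used above follows the standard definitions; this sketch spells them out explicitly (they are not written out in the text) for a hypothetical mass-excess table in MeV keyed by $(Z, N)$:

```python
M_NEUTRON = 8.071  # MeV, neutron mass excess

def s2n(mass_excess, z, n):
    """Two-neutron separation energy,
    S_2n(Z, N) = M(Z, N-2) + 2*Delta_n - M(Z, N), in MeV,
    with M the mass excess and Delta_n the neutron mass excess."""
    return mass_excess[(z, n - 2)] + 2.0 * M_NEUTRON - mass_excess[(z, n)]

def delta_2n(mass_excess, z, n):
    """Two-neutron shell gap, delta_2n(Z, N) = S_2n(Z, N) - S_2n(Z, N+2);
    it takes a local maximum at a neutron magic number N."""
    return s2n(mass_excess, z, n) - s2n(mass_excess, z, n + 2)
```

A sharp drop of $S_{2n}$ just beyond a magic neutron number thus shows up as a peak in $\delta_{2n}$ at that number, which is why missing predictions beyond $N=200$ in one input model can mimic such a peak.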
In both the learning and extrapolation areas, the performance of BML is much better than those of HFB-31 and FRDM12, and even better than the previous machine learning results of WS4+BNN-I4. In the Cr and Nd regions, there are a number of new experimental data~\cite{Mougeot2018PRL, Orford2018PRL} obtained after AME2016, which are shown with blue solid symbols in Fig.~\ref{Fig:MdCrNd}. The comparison between the new data and the BML mass predictions again shows excellent agreement, not only in the mass values but also in the systematics of the mass surface. For example, the BML mass prediction for $^{154}$Nd is consistent with the new datum, rather than with that in AME2016. As a further step, to show the influence of new experimental data, new BML mass predictions are made by including these post-AME2016 data~\cite{Mougeot2018PRL, Orford2018PRL, Vilen2018PRL, Vilen2019PRC, Vilen2020PRC, Canete2020PRC, Welker2017PRL, Manea2020PRL, Leistenschneider2018PRL, Reiter2020PRC, Ito2018PRL, Michimasa2018PRL, Izzo2018PRC, Ong2018PRC, Valverde2018PRL, Puentes2020PRC, Althubiti2017PRC, Ascher2019PRC, Nesterenko2017JPG, Brodeur2017PRC, Reiter2018PRC, Babcock2018PRC, Hartley2018PRL, Zhang2018PRC, Xu2019PRCR, Xu2019PRC, Andres2020EPJA, Mougeot2020PRC} in the learning set; these predictions are denoted NewBML for simplicity. The corresponding results are also shown in Fig.~\ref{Fig:MdCrNd} with dark-green shaded bands. It is found that, if the new data are included in the learning set, the theoretical uncertainties near the new data are reduced to about half of their original values. For the important issue related to the origin of the rare-earth peak and the possible kinks in the separation energies near $N = 104$, we show in Fig.~\ref{Fig:S2nInfNewExp} the two-neutron separation energies $S_{2n}$ for the $Z = 60$--$65$ isotopes. 
While the BML mass predictions agree well with both AME2016 and the new data in this region, the new data can further substantially reduce the theoretical uncertainties of $S_{2n}$ for the neighbouring nuclei. As a result, the $S_{2n}$ predictions around $N=104$ by NewBML tend to be smooth, rather than kinked. In other words, it is more likely that the origin of the rare-earth peak is the doubly asymmetric fission-fragment distributions in the $A\approx 278$ region~\cite{Goriely2013PRL}. More experimental data in the coming years will further test this conclusion. Taking the new nuclei with $Z, N \geqslant 8$ first appearing in the latest database AME2020~\cite{Wang2021CPC} as the testing set, the rms deviation of the BML model with respect to those new data with experimental uncertainties smaller than $100$~keV is $170$~keV. This indicates that good accuracy is also achieved by the BML model for these new data, which were not included in the training. In contrast, the corresponding rms deviations are $245$, $691$, and $718$~keV for the WS4+BNN-I4~\cite{Niu2018PLB}, FRDM2012~\cite{Moller2012PRL}, and HFB-31~\cite{Goriely2016PRC} models, respectively. Finally, all experimental masses with $Z, N \geqslant 8$ and uncertainties smaller than $100$~keV in the latest database AME2020~\cite{Wang2021CPC} are employed to train the BML model; the resulting mass predictions are given in the Supplemental Material~\cite{SuppMat}. \textit{Summary}---High-precision mass predictions are made with Bayesian neural networks by learning the mass surface of even-even nuclei and the correlation energies to their neighbouring nuclei. The known physics in various mass models is kept to achieve good predictive capability. With this strategy, the proposed BML mass model describes well not only the mass surface globally but also its local details, including its derivatives in different directions on the nuclear chart. 
As a result, BML achieves high accuracy for both nuclear masses and the various separation and decay energies. The accuracy of the BML mass predictions reaches $84$~keV, which crosses the accuracy threshold of $100$~keV in the known region. The uncertainties of the BML mass predictions are also reasonably evaluated: they increase by about $50$~keV for each step along an isotopic chain beyond the known region, and the new experimental data obtained after AME2016 are precisely predicted by BML within the experimental and theoretical uncertainties. While the shell structures in the known region are well described, we also predict several important features in the unknown region, such as the new magic numbers around $N = 40$, the robustness of the $N = 82$ shell, the quenching of the $N = 126$ shell, and the smooth separation energies around $N = 104$, all of which are crucial for quantitative $r$-process calculations. With the present design of the BML mass model, future experimental mass data as well as future advanced nuclear mass models can be taken into account with the same strategy, so that the nuclear mass predictions towards the unknown region can be improved systematically and continuously. \section*{Acknowledgements} We are grateful to Professor Shan-Gui Zhou and Professor Yi-Fei Niu for fruitful discussions. This work was partly supported by the National Natural Science Foundation of China under Grants No.~11875070 and No.~11935001, the Anhui project (Z010118169), the JSPS Grant-in-Aid for Early-Career Scientists under Grant No.~18K13549, the JSPS Grant-in-Aid for Scientific Research (S) under Grant No.~20H05648, the RIKEN iTHEMS program, and the RIKEN Pioneering Project: Evolution of Matter in the Universe. The authors acknowledge the High-performance Computing Platform of Anhui University for providing computing resources. \end{CJK*}
Title: Spatially-resolved properties of early-type group-dominant galaxies with MUSE: gas content, ionisation mechanisms and metallicity gradients
Abstract: With the goal of a thorough investigation of the ionised gas and its origin in early-type group-dominant galaxies, we present archival MUSE data for 18 galaxies from the Complete Local-Volume Groups Sample (CLoGS). These data allowed us to study the spatially-resolved warm gas properties, including the morphology of the ionised gas, EW(H$\alpha$) and kinematics, as well as the gas-phase metallicity (12 + log(O/H)) of these systems. In order to distinguish between different ionisation mechanisms, we used the emission-line ratios [O III]/H$\beta$ and [N II]/H$\alpha$ in the BPT diagrams and EW(H$\alpha$). We find that the ionisation sources in our sample have variable impacts at different radii: central regions are more influenced by low-luminosity AGN, while extended regions of LINER-like emission are ionised by other mechanisms, with pAGB photoionisation likely contributing significantly. We classified our sample into three H$\alpha$+[N II] emission morphology types. We calculate the gas-phase metallicity assuming several methods and ionisation sources. In general, 12 + log(O/H) decreases with radius from the centre for all galaxies, independently of nebular morphology type, indicating a metallicity gradient in the abundance profiles. Interestingly, the more extended filamentary structures and all extranuclear star-forming regions present shallow metallicity gradients. Within the uncertainties, these extended structures can be considered chemically homogeneous. We suggest that the group-dominant galaxies in our sample likely acquired their cold gas in the past as a consequence of one or more mechanisms, e.g. gas-cloud or satellite mergers/accretion and/or cooling flows that contribute to the growth of the ionised gas structures.
https://export.arxiv.org/pdf/2208.14115
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} galaxies: abundances -- galaxies: groups: general -- galaxies: ISM -- galaxies: elliptical and lenticular, cD, S0 -- galaxies: nuclei \end{keywords} \section{Introduction}\label{sec:Intro} Most galaxies are found in groups and clusters. The distinction between them is made by the number of their members: clusters have $\gtrsim$50 member galaxies and are more massive than groups. Bright galaxies at the centres of galaxy groups (hereafter BGGs) or group-dominant galaxies typically have stellar masses of M$_{*}$ $\sim$10$^{10}$--10$^{13}$ M$_{\odot}$ \citep[e.g.,][]{Gozaliasl2016,OSullivan2018,Kolokythas2022}, with some BGGs also displaying X-ray emission \cite[e.g.,][]{OSullivan2017} and containing cold H\,{\sc i} and/or molecular gas in the range $\sim$10$^{8-10}$ M$_{\odot}$ \citep[e.g.,][]{OSullivan2015,OSullivan2018}. A significant fraction \citep[$\sim$25\%;][]{Gozaliasl2016} of BGGs lie on the main sequence of star-forming (late-type) galaxies at z $\lesssim$ 0.4, and this fraction increases with redshift. However, most BGGs are early-type galaxies (ETGs), i.e., elliptical and/or lenticular (S0) galaxies. Early-type BGGs \citep[e.g.,][]{Kolokythas2022}, and many other ETGs in the literature \citep[e.g.,][]{Shapiro2010,Fang2012,Gomes2016}, contain warm ionised gas and ongoing low-level star formation (SF) \citep[SFR$_{FUV} \sim$0.01--0.4 M$_{\odot}$ yr$^{-1}$;][]{Kolokythas2022}, while in late-type BGGs the SF can reach $\sim$10 M$_{\odot}$ yr$^{-1}$ \citep{OlivaAltamirano2014,Gozaliasl2016}. The same activity is also found in a significant fraction of Brightest Cluster Galaxies \citep[BCGs; e.g.,][]{Bildfell2008,Pipino2009,McDonald2010,Donahue2011,McDonald2011,Werner2014,Loubser2016}. 
ETGs \citep[e.g.,][]{Phillips1986,TrinchieridiSeregoAlighieri1991,Annibali2010}, including some early-type BGGs\footnote{See the list of galaxies in this paper.}, also display low-ionisation nuclear emission-line regions \cite[LINERs,][]{Heckman1980}. The spectra of LINERs exhibit strong low-ionisation optical emission lines such as [O \,{\sc iii}]$\lambda$5007, [O \,{\sc ii}]$\lambda\lambda$3726/29, [N \,{\sc ii}]$\lambda$6584, [S \,{\sc ii}]$\lambda\lambda$6717,6731 and hydrogen Balmer lines (H$\alpha$, H$\beta$, etc.), commonly with [N \,{\sc ii}]$\lambda$6584 > H$\alpha$ emission. Several explanations for LINER emission in galaxies have been proposed, including shock-ionisation \citep[e.g.,][]{Heckman1980,DopitaSutherland1995,DopitaSutherland1996,Allen2008}, photoionisation by ongoing low-level SF \citep[e.g.,][]{Schawinski2007,Fogarty2015,Shapiro2010}, hot evolved post-asymptotic giant branch (pAGB) stars \citep[e.g.,][]{Binette1994,Stasinska2008,Annibali2010,Sarzi2010,CidFernandes2010,CidFernandes2011}, photoionisation by the "cooling" X--ray--emitting gas \citep{VoitDonahue1990,DonahueVoit1991, Ferland1994,Voit1994,Polles2021}, thermal conduction from the hot gas \citep{Sparks1989} and ionisation by low-luminosity active galactic nuclei (AGN) \citep[e.g.,][]{FerlandNetzer1983,Barth1998,Ho1999}. Observational evidence suggests that more than one mechanism is likely to be at work in LINERs \cite[e.g.,][in BCGs]{Edwards2007,Fogarty2015}. For instance, the contribution of AGN to the photoionisation of LINERs may be restricted to the nuclear region \citep[e.g.,][]{Sarzi2010}, while pAGB stars are likely responsible for the extended gas emission \citep[e.g.,][]{Binette1994,Sarzi2010,Gomes2016}. 
Extended low-ionisation emission line regions \citep[LIERs;][]{Belfiore2016} or LINER-like extranuclear diffuse ionised gas (DIG) emission in ETGs has been detected and studied \citep[e.g.,][]{Sarzi2006,Gomes2016} as part of the SAURON \citep{Bacon2001} and Calar Alto Legacy Integral Field Area \citep[CALIFA,][]{Sanchez2012} surveys. These studies suggest that low-luminosity AGN likely contribute to LINER emission in a less than dominant way \citep{YanBlanton2012}. Extended and DIG filaments are observed in BGGs/BCGs \citep[e.g.,][]{Heckman1989,Crawford1999,OSullivan2012,Tremblay2018,Olivares2019}. These ionised gas filaments seem to be co-spatial with large quantities of H\,{\sc i} and molecular gas, traced by CO, and soft X--ray emission features \citep{Werner2014}. \cite{McDonald2010} found that the extended warm gas in cluster galaxies, in general, is spatially coincident with cooling intracluster medium (ICM) flows. \cite{McDonald2011,McDonald2012} rule out collisional ionisation by cosmic rays, thermal conduction, photoionisation by ICM X-rays and AGN as strong contributors to the ionisation of the warm gas in both the nuclei and filaments of cool-core clusters. They argue that the data are adequately described by a composite model of slow shocks and SF. According to \cite{McDonald2011}, the H$\alpha$ filaments in BGGs/BCGs are more strongly correlated with the cooling properties of the ICM than with the radio properties of the BCGs. The molecular gas in the filaments is located along old radio jets, lobes and/or below X-ray cavities, suggesting that AGN activity supplies a regular input of energy to those systems \citep[][]{Russell2016,OSullivan2021}. Generally, the dominant ionisation mechanism, as well as its location (i.e., nuclear and/or extended regions), is not clear, and different processes may be at work during the evolution of each type of system. 
The hot (10$^7$ K) gas-phase metallicity is another key property for studying the effect of galactic environments and AGN/SF feedback in groups and clusters \citep[for a review see][]{Gastaldello2021}. The X-ray emitting intra-group medium (IGrM) shows several emission lines typical of elements synthesised by stars/supernovae (SNe) and deposited in the IGrM/ICM via galactic winds from the group/cluster members \citep[e.g.,][]{Liang2016}, ram-pressure stripping/tidal interactions \citep[][]{TaylorBabul2001,McCarthy2008} and other processes \citep[for a review see][]{SchindlerDiaferio2008}. Several studies \cite[e.g.,][and references therein]{Mernier2017,Gastaldello2021} show that the average abundance profiles of Fe and other elements (e.g., O, Si, Ar) in the ICM/IGrM increase towards the core of cool-core clusters/groups, up to values of $\sim$1.0 Z$_{\odot}$, and decrease at large radii. This peaked distribution is associated with the release of metals from the galaxy members via mechanical feedback from AGN/SF activity, gas turbulence \citep[][]{Rennehan2019} or infalling galaxies through ram-pressure stripping across the evolution of the systems. However, some studies \cite[e.g.,][]{Werner2006} report rather flat O and Mg profiles, and non-cool-core clusters and groups do not exhibit a clear Fe abundance gradient in their cores. Mechanisms such as merger-induced sloshing motions \cite[][and references therein]{OSullivan2014} can also transport the released metals from the galaxies into the IGrM/ICM. On the other hand, the spatial distribution of abundances in the warm (10$^4$ K) gas phase of the interstellar medium (ISM) of the galaxies is also important for understanding the formation history of these systems. Studies of local star-forming galaxies have shown a negative metallicity gradient, i.e., the metallicity decreases with increasing galactocentric radius \cite[e.g.,][among many others]{MartinRoy1992,vanZee1998}. 
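The negative gradient just described is typically quantified by a linear fit of 12 + log(O/H) against galactocentric radius. A minimal sketch, with made-up abundances chosen for illustration only:

```python
import numpy as np

def metallicity_gradient(radius_kpc, oh_abundance):
    """Slope (dex/kpc) and central intercept of the linear fit
    12 + log(O/H) = slope * r + intercept; a negative slope is a
    negative (outward-declining) metallicity gradient."""
    slope, intercept = np.polyfit(radius_kpc, oh_abundance, 1)
    return slope, intercept

# Hypothetical abundances declining outwards (illustration only):
r = np.array([0.5, 1.0, 2.0, 3.0, 4.0])        # galactocentric radius, kpc
oh = np.array([8.70, 8.66, 8.58, 8.50, 8.42])  # 12 + log(O/H)
slope, intercept = metallicity_gradient(r, oh)
print(round(slope, 3))  # → -0.08 dex/kpc
```

In practice the radius is often normalised to the effective radius so that gradients of galaxies with different sizes can be compared directly.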
However, this radial distribution can present deviations from this simple negative gradient, namely a steep decrease of the oxygen abundance (12 + log(O/H)) in the nuclear regions and a flattening of the gradient in the outer parts \cite[see][and references therein]{SanchezMenguiano2018,Sanchez2020}. Several mechanisms have been proposed to explain these features, such as radial migration, infall of gas or satellite accretion. In early-type group-dominant galaxies, the presence of these features and the origin of their gas are poorly understood \cite[e.g.,][]{Annibali2010}. The cold gas in group and cluster ETGs potentially originates from two main sources: internal production \citep[a large fraction of stellar mass is believed to be released in the form of cold gas,][]{Davis2011} and external accretion, i.e., mergers, stripping of gas from satellites, and gas from the IGrM/ICM \citep[e.g.,][]{Jung2022}. A kinematical decoupling between gas and stars, as evidence for an external gas origin, was found in many ETGs in the SAURON \citep{Sarzi2006,Sarzi2010} and CALIFA \citep{Gomes2016} surveys. CLoGS \citep{OSullivan2017} is an optically-selected sample of 53 groups at distances of less than 80 Mpc and is the first statistically complete sample of nearby galaxy groups studied with an extensive multi-wavelength dataset, including X-ray \citep[Chandra and XMM-Newton;][]{OSullivan2017}, radio \citep[GMRT and VLA;][]{Kolokythas2018, Kolokythas2019} and sub-mm \citep[IRAM-30m and APEX;][]{OSullivan2018} observations. In addition, we have analysed spatially-resolved long-slit spectroscopy from the 10m Hobby-Eberly Telescope \citep[][]{vandenBosch2015} for a sub-sample of 23 of the CLoGS dominant central galaxies \citep[see][]{Loubser2018}. Archival \textit{GALEX} FUV and \textit{WISE} photometry for the group-dominant galaxies is presented in \cite{Kolokythas2022}. 
In this paper, we use archival optical ESO Very Large Telescope (VLT) Integral Field Unit (IFU) Multi-Unit Spectroscopic Explorer (MUSE) observations of a sample of 18 early-type group-dominant CLoGS galaxies. We aim to investigate the relation between the properties (i.e., emission line ratios, chemical abundances, etc.) and structure of each galaxy's ISM in order to constrain the ionisation processes, the origin of its gas and its chemical abundance distribution. In related papers, using the same MUSE dataset, \cite{Olivares2022} presents a detailed analysis of the ionised gas kinematics, while in \cite{Loubser2022} we study the stellar kinematics of the fast and slow rotators in this sample. The paper is organised as follows: Section \ref{sec:data} contains the technical details regarding the observations, data reduction, spectral fitting and measurement of line fluxes; Section \ref{sec:results} describes our results on the structure and physical properties of the ionised gas; Section \ref{sec:discussion} discusses the results. Finally, in Section \ref{sec:conclusions} we summarise our conclusions. In the following analysis we adopt as solar abundances Z$_{\odot}$ = 0.0142, X$_{\odot}$ = 0.7154, and [12 + log(O/H)]$_{\odot}$ = 8.69 from \cite{Asplund2009}. \section{The data and emission line measurements}\label{sec:data} \subsection{Observations and data reduction}\label{sec:observations} Our sample consists of 18 group-dominant galaxies selected from the CLoGS nearby groups sample \citep{OSullivan2017}. Each galaxy has archival ESO VLT/MUSE \citep{Bacon2010} data. Four of our targets were previously observed in the CALIFA IFU survey (NGC 677, NGC 924, NGC 1060 and NGC 7619). The observations (program ID 097.A-0366(A)) were made with the MUSE spectrograph, which used the Nasmyth B focus on Yepun, the 8.2m VLT UT4. 
The observations were made in the wide field mode over a 1$'\times$1$'$ Field of View (FoV) sampled by a system of 24 spectrographs with a spectral coverage between $\sim$4800 and 9300 $\rm \AA$, a spectral resolution of $\sim$2.6 $\rm \AA$, spectral sampling of 1.25 $\rm \AA$ and spatial sampling of 0.2$''$ per spaxel, leading to a total of 90,000 spectra per exposure. For each galaxy three exposures were taken. These observations were dithered with a few-arcsecond dither pattern in order to cover a large portion of each galaxy's ISM. Offset sky observations were used for sky subtraction. In Table \ref{table:1} we show the general parameters for the sample members and the observation log. The data were reduced using the MUSE pipeline \citep[v2.6.2,][]{Weilbacher2020}, with steps including: bias correction, wavelength calibration, construction of datacubes from the individual spectra from the detectors, correction of the spectra to the heliocentric frame, sky subtraction, and merging of the individual exposures to form a combined datacube with a total of $\sim$200,000 spectra for each galaxy. Finally, the data cubes were corrected for redshift and galactic extinction using the reddening function given by \cite{Cardelli1989}. In Figure \ref{fig:figures_spectra} we show the nuclear spectra of each member of our sample. More details are given in Section \ref{sec:spec_fitting}. \begin{table*} \hspace*{-0.7cm} \begin{minipage}{190mm} \caption{General parameters of our sample and observing log. Col. (1) gives the galaxy name, Col. (2) the morphology, Cols. (3) and (4) give their coordinates, Col. (5) presents the galactic extinction A$_{V}$ (mag) \citep{SchlaflyFinkbeiner2011} and Cols. (6)-(7) give the distance and redshift, respectively. D (Local Group) and z are from NED, assuming H$_0$ = 73.0 km s$^{-1}$ Mpc$^{-1}$. In Col. 
(8) we give X-ray morphology (extent of the gas halo) and core type as cool-core/non-cool-core (CC/NCC) based on their temperature profiles \citep{OSullivan2017} for systems where thermal emission was detected: group-like (GRP, extent >65 kpc), galaxy-like (gal, extent $\sim$10-65 kpc) and point-like (pnt, unresolved, extent smaller than the XMM-Newton PSF). Radio morphology \citep{Kolokythas2018}: point-like (pnt, radio source with sizes $\lesssim$11 kpc), diffuse emission (diffuse, with no clear jet or lobe structure), small-scale jets (jet, <20 kpc jets confined within the stellar body of the host galaxy) and large-scale jets (JET, >20 kpc jets extending beyond the host galaxy and into the IGrM). Finally, Cols. (9)-(11) show the observation date, exposure time and mean airmass during the observations.} \label{table:1} \centering \begin{tabular}{l c c c c c c c c c c c c} \hline Name & Morph. & RA & DEC & A$_{V}$ & D & z & X--ray / Radio & Date & Exp. time & Mean & \\ & & (J2000) & (J2000) & (mag) & (Mpc) & & morphology & of observation & (s) & airmass & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11)\\ \hline NGC 193 & E &00h39m18.6s & +03d19m52s & 0.062 & 62.7 & 0.014723 & GRP (CC) / JET & 2016-06-08 & 3$\times$900 & 1.524\\ NGC 410 & E &01h10m58.9s & +33d09m07s & 0.161 & 75.8 & 0.017659 & GRP (CC) / pnt & 2016-08-16 & 3$\times$900 & 1.937\\ NGC 584 & E &01h31m20.7s & -06d52m05s & 0.116 & 25.9 & 0.006011 & \dots / pnt & 2016-07-01 & 3$\times$900 & 1.243\\ NGC 677 & E &01h49m14.0s & +13d03m19s & 0.243 & 71.9 & 0.017012 & GRP (CC) / diffuse & 2016-07-20 & 3$\times$900 & 1.344\\ NGC 777 & E &02h00m14.9s & +31d25m46s & 0.128 & 71.4 & 0.016728 & GRP (NCC) / pnt & 2016-08-16 & 3$\times$900 & 1.823\\ NGC 924 & S0 &02h26m46.8s & +20d29m51s & 0.467 & 63.1 & 0.014880 & \dots / pnt & 2016-08-14 & 3$\times$900 & 1.651\\ NGC 940 & S0 &02h29m27.5s & +31d38m27s & 0.246 & 72.6 & 0.017075 & pnt / pnt & 2016-08-21 & 3$\times$900 & 1.814\\ NGC 978 & E/S0 
&02h34m47.6s & +32d50m37s & 0.253 & 67.3 & 0.015794 & gal (NCC) / pnt & 2016-08-16 & 3$\times$900 & 1.866\\ NGC 1060 & E/S0 &02h43m15.0s & +32d25m30s & 0.532 & 73.4 & 0.017312 & GRP (CC) / jet & 2016-08-22 & 3$\times$900 & 1.845\\ NGC 1453 & E &03h46m27.2s & -03d58m08s & 0.289 & 52.9 & 0.012962 & GRP (NCC) / pnt & 2016-07-22 & 3$\times$900 & 1.358\\ NGC 1587 & E &04h30m39.9s & +00d39m42s & 0.197 & 50.0 & 0.012322 & GRP (NCC) / diffuse & 2016-08-18 & 3$\times$900 & 1.360\\ NGC 4008 & E &11h58m17.0s & +28d11m33s & 0.064 & 49.0 & 0.012075 & gal (NCC) / pnt & 2016-05-13 & 3$\times$900 & 1.668\\ NGC 4169 & S0 &12h12m18.8s & +29d10m46s & 0.058 & 51.4 & 0.012622 & pnt / pnt & 2016-05-19 & 3$\times$900 & 1.728\\ NGC 4261 & E &12h19m23.2s & +05d49m31s & 0.049 & 28.4 & 0.007378 & GRP (CC) / JET & 2016-04-17 & 3$\times$880& 1.311\\ ESO0507-G025 & E/S0 &12h51m31.8s & -26d27m07s & 0.245 & 41.1 & 0.010788 &\dots / diffuse & 2016-04-12 & 3$\times$900 & 1.107\\ NGC 5846 & E &15h06m29.3s & +01d36m20s & 0.153 & 23.1 & 0.005711 & GRP (CC) / jet & 2016-05-16 & 3$\times$880& 1.579\\ NGC 6658 & S0 &18h33m55.6s & +22d53m18s & 0.339 & 61.6 & 0.014243 & pnt / \dots & 2016-04-23 & 3$\times$870& 1.557\\ NGC 7619 & E &23h20m14.5s & +08d12m22s & 0.224 & 54.6 & 0.012549 & GRP (CC) / pnt & 2016-05-23 & 3$\times$870& 1.444\\ \hline \end{tabular} \end{minipage} \end{table*} \subsection{Spectral fitting and emission-line measurement}\label{sec:spec_fitting} We fitted and removed the stellar component within each spectrum (bin or spaxel), in order to obtain pure emission line spectra using the spectral synthesis code Fitting Analysis using Differential evolution Optimization \citep[FADO,][]{GomesPapaderos2017} version V1.B. Simple stellar population (SSP) templates from the \cite{BruzualCharlot2003} libraries were used with metallicities between Z=0.004 and Z=0.05 and stellar ages between 1 Myr and 15 Gyr, for a Chabrier initial mass function \citep[IMF,][]{Chabrier2003}. 
FADO self-consistently reproduces the nebular characteristics of a galaxy, including the hydrogen Balmer-line luminosities, equivalent widths and nebular continuum. An important advantage of FADO is that its convergence scheme employs genetic differential evolution optimization. This results in improvements with respect to the uniqueness of spectral fits and the overall efficiency of the convergence schemes. Artificial intelligence is used to eliminate redundancies in the SSP base libraries, which increases the computational efficiency. The fit was performed in the 4800-7000 $\rm \AA$ spectral range assuming the \cite{Cardelli1989} extinction law. The data cubes were tessellated using the Voronoi binning method \citep{CappellariCopin2003} to achieve an adequate signal to noise (S/N; we chose S/N$\sim$30-50) per bin for the continuum between 6000 and 6200 $\rm \AA$. The Voronoi binning method would lead to the dilution of the emission lines within the bins, so that the shape of extended and diffuse emission-line structures would be lost \citep[see][for a similar approach]{ErrozFerrer2019}. Therefore, the best-fit stellar continuum or synthetic Spectral Energy Distribution (SED) was subtracted from the observed spaxels, with S/N $>$ 3 in the continuum, to obtain pure emission-line data cubes. Among its results, FADO provides the mass contributions of individual stellar populations, mass- and luminosity-weighted stellar ages and metallicities, emission-line fluxes, equivalent widths (EWs) and FWHMs, together with estimates of their uncertainties. In Figure \ref{fig:Spectrum_example_fit} we show an example to illustrate the spectral modelling results with FADO from a nuclear 3" aperture for the galaxy ESO0507-G025. In order to test the accuracy of these results, we carried out a single-Gaussian fit to each emission line. 
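A single-Gaussian check of this kind can be sketched as follows. This is an illustration only, not FADO: `fit_line` is a hypothetical helper, the wavelength grid mimics the MUSE 1.25 Å sampling, and the line parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(lam, amp, mu, sigma):
    """Gaussian line profile over wavelength lam (Angstrom)."""
    return amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def fit_line(lam, flux, guess_mu):
    """Single-Gaussian fit to a continuum-subtracted emission line;
    returns centre, width and integrated flux (amp * sigma * sqrt(2*pi))."""
    p0 = (flux.max(), guess_mu, 2.0)  # initial amplitude, centre, width (A)
    popt, _ = curve_fit(gaussian, lam, flux, p0=p0)
    amp, mu, sigma = popt
    sigma = abs(sigma)  # the profile is symmetric in the sign of sigma
    return mu, sigma, amp * sigma * np.sqrt(2.0 * np.pi)

# Synthetic [N II]-like line on a MUSE-like 1.25 A grid (invented values):
lam = np.arange(6550.0, 6620.0, 1.25)
flux = gaussian(lam, 5.0, 6584.0, 2.0)
mu, sigma, line_flux = fit_line(lam, flux, 6584.0)
```

On real spectra one would also propagate the per-spaxel noise into the fit and reject fits below the 3$\sigma$ flux threshold used for the maps.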
Our fit results were in good agreement with those from FADO; consequently, for the analysis in this study we created the following emission-line maps using FADO: H$\alpha$, H$\beta$, [O \,{\sc iii}]$\lambda$5007, [N \,{\sc ii}]$\lambda$6584 and [S \,{\sc ii}] $\lambda\lambda$6717, 6731. We note that, when deriving the maps, we only use spaxels with emission fluxes $>$3$\sigma$ above the background. We did not detect significant broad or asymmetric emission-line profiles, so multiple-component decomposition of the emission lines was not necessary: each emission line is well described by a single Gaussian profile. More details about the morphologies of these maps are given in Section \ref{subsec:emission}. Following a similar procedure to \cite{Parkash2019}, we extracted for each galaxy a spectrum from a circular region at the galaxy continuum centre with a diameter of 3" in order to mimic the SDSS spectral fibre aperture. From here on we refer to this circular region as the nuclear region and the spectra from this region as the nuclear spectra (see Figure \ref{fig:figures_spectra}). The nuclear spectra of the galaxies are characterised by the presence of strong [N \,{\sc ii}]$\lambda$6584 emission with [N \,{\sc ii}]$\lambda$6584/H$\alpha>$ 1, which is consistent with low-ionisation nuclear emission-line regions. Additionally, we extracted an integrated spectrum for each galaxy within an aperture which encompasses the extended emission in Figure \ref{fig:figures_NB} (see Section \ref{subsubsec:Emission_line_morphology}) with [N \,{\sc ii}]$\lambda$6584 emission > 3$\sigma$ per pixel. In summary, we find 15/18 objects with intense nuclear ionised-gas emission (LINER-like) with [N \,{\sc ii}]$\lambda$6584 $>$ H$\alpha$. \cite{Pagotto2021} found that the strongest and most spatially extended emission in their sample of massive ETGs in the densest clusters of galaxies came from the [N \,{\sc ii}]$\lambda$6584 lines. We find the same result for our sample. 
In Table \ref{table:2} we present for each galaxy the 3" nuclear and integrated observed F(H$\alpha$) flux, F(H$\alpha$)/F(H$\beta$) multiplied by a factor of 100, F([N \,{\sc ii}]$\lambda$6584) flux and EW(H$\alpha$), respectively. We note that, using the spectral synthesis code FADO \citep{GomesPapaderos2017} and the \cite{BruzualCharlot2003} stellar spectral library, we resolved weak [O \,{\sc iii}]$\lambda$5007, H$\beta$, H$\alpha$ and [N \,{\sc ii}]$\lambda$6584 emission in the nuclear region of NGC 6658. However, \cite{Olivares2022}, using pPXF \citep{Cappellari2017} and the Indo-U.S. Coud\'e Feed stellar library \citep{Valdes2004}, find no emission lines in the spectrum of this galaxy using the same MUSE dataset. The results for this galaxy are therefore uncertain, especially given that no clear emission-line peaks are observed in the integrated spectrum. \begin{table*} \centering \begin{minipage}{170mm} \caption{H$\alpha$+[N \,{\sc ii}] morphology, observed emission line fluxes and EW(H$\alpha$) for our sample galaxies from a simulated 3" diameter and an integrated (int) aperture. 
The radial size of the integrated aperture is shown in parentheses to the right of F(H$\alpha$).} \label{table:2} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline & H$\alpha$+[N \,{\sc ii}] & \multicolumn{2}{c}{F(H$\alpha$)} & \multicolumn{2}{c}{(F(H$\alpha$)/F(H$\beta$))$\times$100} & \multicolumn{2}{c}{F([N \,{\sc ii}]$\lambda$6584)} & \multicolumn{2}{c}{EW(H$\alpha$)}\\ & morphology &\multicolumn{2}{c}{($\times$10$^{-14}$ erg cm$^{-2}$ s$^{-1}$)} & \multicolumn{2}{c}{ } & \multicolumn{2}{c}{($\times$10$^{-14}$ erg cm$^{-2}$ s$^{-1}$)} & \multicolumn{2}{c}{($\AA$)} \\ & & \scriptsize{3"} & \scriptsize{int} & \scriptsize{3"} & \scriptsize{int} & \scriptsize{3"} & \scriptsize{int} & \scriptsize{3"} & \scriptsize{int}\\ \hline NGC 193 & i & 0.43$\pm$0.02 & 1.64$\pm$0.11 (6.0$"$) & 245.16$\pm$0.31 & 134.39$\pm$0.25 & 0.59$\pm$0.06 & 1.70$\pm$0.22 & 2.71 & 1.65\\ NGC 410 & i0 & 0.52$\pm$0.01 & 0.83$\pm$0.02 (4.5$"$) & 146.81$\pm$0.11 & 112.64$\pm$0.14 & 0.52$\pm$0.04 & 0.87$\pm$0.08 & 1.63 & 0.69\\ NGC 584 & i & 0.66$\pm$0.07 & 5.83$\pm$0.67 (21.0$"$) & 205.26$\pm$0.69 & 131.12$\pm$0.88 & 0.90$\pm$0.08 & 4.1$\pm$0.7 & 0.77 & 0.53\\ NGC 677 & i & 0.63$\pm$0.03 & 4.51$\pm$0.45 (7.5$"$) & 322.66$\pm$0.36 & 313.84$\pm$0.60 & 0.62$\pm$0.07 & 3.87$\pm$0.60 & 3.43 & 3.13 \\ NGC 777 & i0 & 0.10$\pm$0.01 & \dots & 25.55$\pm$0.09 & \dots & 0.19$\pm$0.01 & \dots & 0.28 & \dots \\ NGC 924 & ii & 0.83$\pm$0.06 & 3.83$\pm$0.30 (24$"$) & 257.83$\pm$0.66 & 246.32$\pm$1.24 & 0.96$\pm$0.06 & 3.48$\pm$0.25 & 2.24 & 1.60 \\ NGC 940 & ii & 0.52$\pm$0.04 & 10.01$\pm$0.24 (13.5$"$) & 436.02$\pm$1.12 & 287.04$\pm$0.31 & 1.14$\pm$0.10 & 6.49$\pm$0.77 & 2.22 & 5.06 \\ NGC 978 & i0 & 0.33$\pm$0.02 & 0.81$\pm$0.08 (4.5$"$) & 148.83$\pm$0.19 & 104.83$\pm$0.21 & 0.28$\pm$0.02 & 0.70$\pm$0.08 & 1.23 & 0.81 \\ NGC 1060 & i & 0.37$\pm$0.05 & 1.37$\pm$0.23 (6.0$"$) & 202.02$\pm$1.96 & 129.84$\pm$0.73 & 0.46$\pm$0.04 & 1.18$\pm$0.19 & 0.86 & 0.63 \\ NGC 1453 & i & 1.35$\pm$0.06 & 8.14$\pm$0.79 
(13.5$"$) & 188.92$\pm$0.21 & 91.13$\pm$0.19 & 2.07$\pm$0.08 & 9.43$\pm$0.82 & 2.60 & 2.00 \\ NGC 1587 & i & 0.63$\pm$0.04 & 5.14$\pm$0.18 (15$"$) & 227.91$\pm$0.40 & 161.05$\pm$0.33 & 0.75$\pm$0.04 & 4.13$\pm$0.22 & 1.50 & 1.41 \\ NGC 4008 & i0 & 0.16$\pm$0.02 & \dots & \dots & \dots & 0.14$\pm$0.02 & \dots & 0.46 & \dots \\ NGC 4169 & ii & 0.99$\pm$0.05 & 4.67$\pm$0.28 (10.5$"$) & 325.06$\pm$0.57 & 235.69$\pm$0.76 & 1.71$\pm$0.06 & 6.63$\pm$0.36 & 2.24 & 1.35 \\ NGC 4261 & i0 & 4.31$\pm$0.58 & \dots & 450.88$\pm$0.78 & \dots & 9.44$\pm$0.64 & \dots & 2.61 & \dots\\ ESO0507-G025 & ii & 2.59$\pm$0.14 & 20.37$\pm$1.34 (27.0$"$) & 363.09$\pm$0.32 & 291.93$\pm$0.29 & 2.9$\pm$0.18 & 13.68$\pm$1.95 & 5.93 & 3.65\\ NGC 5846 & i & 0.74$\pm$0.07 & 11.33$\pm$1.27(18$"$) & 189.27$\pm$0.35 & 139.11$\pm$0.30 & 0.98$\pm$0.09 &11.46$\pm$1.44 & 2.10 & 1.49\\ NGC 6658 & i0 & 0.15$\pm$0.02 & \dots & 242.79$\pm$1.89 & \dots & 0.23$\pm$0.02 & \dots & 0.47 & \dots\\ NGC 7619 & i0 & 0.19$\pm$0.05 & \dots & 36.63$\pm$0.13 & \dots & 0.47$\pm$0.04 & \dots & 0.31 & \dots\\ \hline \end{tabular} \end{minipage} \end{table*} Finally, we explored the data cubes of our target BGGs (see appendix \ref{emission_galaxies_apen}) in order to find additional emission line galaxies in the FoV. We found two additional objects in the FoV of NGC 677 and one in the FoVs of NGC 777, NGC 924 and NGC 1453. These detections are not physically associated (0.1 $\lesssim$ z $\lesssim$ 0.5) with the main galaxies. In Figure \ref{fig:emission_galaxies} we show the position of these objects in the FoV of the galaxies, while in Table \ref{table:emission_galaxies} we summarize their main properties. 
\section{Results}\label{sec:results} \subsection{Ionised gas structure: H$\alpha$ and [N \,{\sc ii}]$\lambda$6584 emission line maps and extinction correction}\label{subsec:emission} \subsubsection{Emission line morphology}\label{subsubsec:Emission_line_morphology} To assist understanding of the physical conditions and detection of the extended and diffuse line emitting gas we created continuum-subtracted H$\alpha$+[N \,{\sc ii}]$\lambda\lambda$6548,6584 (H$\alpha$+[N \,{\sc ii}]) emission line images by subtracting the continuum SED models, as explained in Section \ref{sec:spec_fitting}, and then summing the flux in the data cubes between 6528 $\rm \AA$ and 6604 $\rm \AA$. In Figures \ref{fig:figures_NB} and \ref{fig:figures_NB_apend} (Appendix \ref{HaNII_maps_apend}) we show the maps resulting from this procedure for each galaxy. We also show, in these figures, the continuum-subtracted MUSE spectra from the nuclear 3" aperture covering the wavelength range from 6400 $\AA$ to 6800 $\AA$ as an inset panel within each image. The DIG and extended filamentary line emitting H$\alpha$+[N \,{\sc ii}] gas reaches values of $\sim$10$^{-19}$ (e.g., NGC 4008) to $\sim$10$^{-17}$ (e.g., NGC 5846) erg cm$^{-2}$ s$^{-1}$ per pixel. We used the continuum-subtracted H$\alpha$+[N \,{\sc ii}] images to classify the galaxies in our sample. To identify the morphology of the line emitting regions and determine how much these lines correlate with the overall morphology and properties of our sample galaxies, we divided the galaxies into three morphological groups or types: \textit{type i0} - strong or diffuse nuclear emission (i.e. 
within a $\sim$1.5" radius from the galaxy centre) with (or without) unextended ($\lesssim$1 kpc) filamentary structures connected to the nuclear region, \textit{type i} - strong or diffuse nuclear emission with extended (several kpc) filamentary structures beyond the nuclear region and \textit{type ii} - i0 or i plus extranuclear H\,{\sc ii} regions (well-defined or in distorted ring-like structures). Using this simple classification scheme we find that 7/18 (NGC 410, NGC 777, NGC 978, NGC 4008, NGC 4261, NGC 6658 and NGC 7619) are type i0, another 7/18 are type i (NGC 193, NGC 584, NGC 677, NGC 1060, NGC 1453, NGC 1587 and NGC 5846) and 4/18 galaxies are type ii (NGC 924, NGC 940, NGC 4169 and ESO0507-G025). \subsubsection{Intrinsic reddening} We computed the extinction corrections with the colour excess E(B-V) from the gas using the Balmer decrement H$\alpha$/H$\beta$=2.86 assuming case B recombination \citep[][n$_e$=100 cm$^{-3}$ and T$_e$=10$^4$ K]{OsterbrockFerland2006}. The E(B-V)$_{gas}$ is calculated as follows \begin{equation} E\left(B-V\right)_{gas} = \frac{2.5}{\kappa\left(H\beta\right)-\kappa\left(H\alpha\right)} \log \left(\frac{\left(H\alpha \diagup H\beta\right)_{o}}{2.86}\right), \label{eq:extinction} \end{equation} \noindent where $\kappa$(H$\alpha$) and $\kappa$(H$\beta$) are the extinction values from the \cite{Cardelli1989} curve with R$_V$ = 3.1. (H$\alpha$/H$\beta$)$_{o}$ is the observed ratio between H$\alpha$ and H$\beta$. Therefore, we corrected the emission lines for extinction using I$_{\lambda}$=F$_{\lambda,o}$ 10$^{0.4 E\left(B-V\right)_{gas} \kappa\left(\lambda\right)}$. The reddening parameters were set to 0.0 for unrealistic values of (H$\alpha$/H$\beta$)$_{o}$ < 2.86 and when H$\beta$ was not detected. The assumption of (H$\alpha$/H$\beta$)$_{o}$ = 2.86 instead of 3.1 in spectra with AGN(-like) features does not significantly affect the results. 
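The reddening procedure above can be sketched as follows. This is a minimal illustration, not the paper's pipeline; the numerical $\kappa$(H$\alpha$) $\approx$ 2.53 and $\kappa$(H$\beta$) $\approx$ 3.61 values for the \cite{Cardelli1989} curve with R$_V$ = 3.1 are assumed here, since the text evaluates the curve directly.

```python
import math

# Assumed Cardelli et al. (1989) extinction-curve values for R_V = 3.1;
# the paper evaluates kappa(lambda) from the curve itself.
KAPPA_HALPHA = 2.53
KAPPA_HBETA = 3.61
BALMER_CASE_B = 2.86  # intrinsic Halpha/Hbeta for case B recombination


def ebv_gas(f_halpha_obs, f_hbeta_obs):
    """E(B-V)_gas from the Balmer decrement (eq. for E(B-V)_gas in the text).

    Returns 0.0 for unrealistic ratios, i.e. (Halpha/Hbeta)_o < 2.86 or
    an undetected Hbeta, as adopted in the text.
    """
    if f_hbeta_obs <= 0.0:
        return 0.0
    ratio = f_halpha_obs / f_hbeta_obs
    if ratio < BALMER_CASE_B:
        return 0.0
    return (2.5 / (KAPPA_HBETA - KAPPA_HALPHA)) * math.log10(ratio / BALMER_CASE_B)


def deredden(flux_obs, kappa_lambda, ebv):
    """I_lambda = F_lambda,o * 10^(0.4 * E(B-V)_gas * kappa(lambda))."""
    return flux_obs * 10.0 ** (0.4 * ebv * kappa_lambda)
```

For an observed decrement of 3.5, for instance, this yields E(B-V)$_{gas} \approx 0.2$ mag, within the 0.13--0.46 mag range quoted below.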
Galaxies with (H$\alpha$/H$\beta$)$_{o}$ > 2.86 have low-reddening E(B-V)$_{gas}$ values from 0.13 to 0.46 mag. This result is not surprising, as low Balmer decrements are also found in other ETGs \citep[e.g.,][]{Annibali2010} and in the nuclei and nebular filaments of cool-core clusters \citep{McDonald2012}. On the other hand, $\sim$72\% (13/18) and $\sim$83\% (15/18) of our galaxies have unrealistic nuclear and integrated (H$\alpha$/H$\beta$)$_{o}$ values, respectively. This indicates that the derived H$\beta$ fluxes are larger than expected for their corresponding H$\alpha$ fluxes. For galaxies with (H$\alpha$/H$\beta$)$_{o}$ > 2.86 the E(B-V)$_{nebular}$ is similar to or slightly lower than the E(B-V)$_{stellar}$, while for galaxies with (H$\alpha$/H$\beta$)$_{o}$ < 2.86 the E(B-V)$_{stellar}$ is higher. This might produce an under-subtraction of the observed stellar populations. It is worth noting that unrealistic extinctions are also obtained with other SSP codes \citep[e.g.,][]{Annibali2010,Herpich2018} in ETGs. However, uncertainties in the reddening correction produced by the stellar fitting will not significantly affect the emission line ratios or the properties obtained from them in our case. Therefore, large errors in H$\alpha$/H$\beta$ are due to the uncertainty in the H$\beta$ emission. \subsection{[O \,{\sc iii}]$\lambda$5007/H$\beta$, [S \,{\sc ii}]$\lambda\lambda$6717,6731/H$\alpha$ and [N \,{\sc ii}]$\lambda$6584/H$\alpha$ emission line ratio maps and BPT diagrams}\label{subsec:emission_ratios} Using the information derived in the previous sections it is possible to distinguish between different ionisation mechanisms using BPT \citep[][]{Baldwin1981} diagrams. To do this, we used the following emission line ratios: [O\,{\sc iii}] $\lambda$5007/H$\beta$ ([O \,{\sc iii}]/H$\beta$) and [N\,{\sc ii}] $\lambda$6584/H$\alpha$ ([N\,{\sc ii}]/H$\alpha$). 
The [O\,{\sc iii}] $\lambda$5007/H$\beta$ emission line ratio is an excitation indicator and provides information about the available fraction of hard ionising photons. On the other hand, [N\,{\sc ii}] $\lambda$6584 is a low-ionisation emission line tracer, which is usually weak in pure H\,{\sc ii} regions. Therefore, the emission line ratios [N\,{\sc ii}]/H$\alpha$ and [O \,{\sc iii}]/H$\beta$ can effectively discriminate between photoionisation under physical conditions that are typical for H\,{\sc ii} regions and other excitation mechanisms (e.g., AGN or shocks) which are likely present in our sample. In Figure \ref{fig:example_figure} we show an example of the emission line ratio maps log([N \,{\sc ii}]/H$\alpha$) and log([O \,{\sc iii}]/H$\beta$), the log([N \,{\sc ii}]/H$\alpha$) vs log([O \,{\sc iii}]/H$\beta$) BPT diagram and the 2D BPT map for the galaxy ESO0507-G025. In this figure, we separate star-forming and AGN-ionised regions (i.e., Seyferts and LINERs) with blue demarcation lines from models by \cite{Kauffmann2003} (dotted line), \cite{Kewley2001} (solid line) and \cite{Schawinski2007} (dashed line). We colour-coded the data points located in the different areas of the BPT diagram, i.e., AGN/LINERs in red, composite in orange and the star-forming H\,{\sc ii} dominated area in green. In Appendix \ref{Maps_sample} we show the emission line ratio maps and the BPT diagrams/maps for our entire sample. The BPT maps show that most of our data points/spaxels in the nuclear regions (see Figure \ref{fig:example_figure}) are dominated by AGN/LINERs while the SF becomes more important in the extended regions. We note that, in most cases, we obtain a very small or nonexistent number of data points ($\sim$17\% of the spaxels in ESO0507-G025) in the H\,{\sc ii} area of the BPT diagrams (see figures in Appendix \ref{Maps_sample}). 
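The spaxel classification described above can be sketched as a simple function of the two line ratios. This is an illustrative sketch, not the paper's code: the \cite{Kauffmann2003} and \cite{Kewley2001} curves and the \cite{Schawinski2007} Seyfert/LINER division are written here in their commonly quoted functional forms, which the text does not reproduce explicitly.

```python
def bpt_class(log_n2ha, log_o3hb):
    """Classify a spaxel in the [N II] BPT diagram.

    Assumed standard demarcations:
      Kauffmann et al. (2003): y = 0.61/(x - 0.05) + 1.30  (x < 0.05)
      Kewley et al. (2001):    y = 0.61/(x - 0.47) + 1.19  (x < 0.47)
      Schawinski et al. (2007) Seyfert/LINER line: y = 1.05*x + 0.45
    where x = log([N II]/Ha) and y = log([O III]/Hb).
    """
    below_kauffmann = (log_n2ha < 0.05 and
                       log_o3hb < 0.61 / (log_n2ha - 0.05) + 1.30)
    below_kewley = (log_n2ha < 0.47 and
                    log_o3hb < 0.61 / (log_n2ha - 0.47) + 1.19)
    if below_kauffmann:
        return "HII"            # star-forming dominated
    if below_kewley:
        return "composite"      # mixed SF/AGN contribution
    # above the Kewley maximum-starburst line: AGN branch
    if log_o3hb > 1.05 * log_n2ha + 0.45:
        return "Seyfert"
    return "LINER"
```

Applied per spaxel, this reproduces the colour coding used in the BPT maps (H\,{\sc ii} in green, composite in orange, AGN/LINERs in red).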
Most spaxels lie in the AGN/LINERs region of the diagrams ($\sim$71\% of the spaxels on average), while only in two cases (NGC 978 and ESO0507-G025) are most spaxels in the composite region with 62\% and 55\% of the spaxels, respectively. In Table \ref{table:2b} we present for each galaxy the emission line ratios log([O \,{\sc iii}]/H$\beta$), log([N \,{\sc ii}]/H$\alpha$) and log([S \,{\sc ii}]/H$\alpha$) from the 3" and integrated apertures. Figure \ref{fig:figures_BPT_int} shows the BPT diagram [O \,{\sc iii}]/H$\beta$ versus [N\,{\sc ii}]/H$\alpha$ of the nuclear regions (small squares) for galaxies where [O\,{\sc iii}] $\lambda$5007 and H$\beta$ are detected. In the figure, we compare these with values obtained from the integrated apertures (circles). We use the same colour code to identify the galaxies in this diagram throughout the paper. In most cases, the nuclear values lie in the LINER region of the BPT diagram, while the integrated ones are in the composite region. The log([O \,{\sc iii}]/H$\beta$) for most galaxies decreases significantly when we consider the larger integrated aperture, which changes their positions in the BPT diagram. In the case of ESO0507-G025, NGC 924 and NGC 5846, the log([O \,{\sc iii}]/H$\beta$) increases and the log([N \,{\sc ii}]/H$\alpha$) ratio decreases. Figure \ref{fig:figures_BPT_int} clearly shows that the ionisation structure of our sample galaxies cannot be correctly assessed from small fixed apertures (e.g., SDSS aperture) and the ionisation sources and structure of the warm gas may be different in different parts of the galaxies (see Appendix \ref{Maps_sample}). In Sections \ref{subsec:morphology_aper} and \ref{subsec:morphology_2DBPT} we explore the effect of using different apertures on the observed properties of the ISM and the ionisation mechanisms likely present in our sample galaxies. 
\begin{table*} \centering \hspace*{-1.4cm} \begin{minipage}{165mm} \caption{Main properties of our sample galaxies from the simulated SDSS spectral fibre diameter (3") and integrated (int) apertures. SFR(H$\alpha$) is calculated from the L(H$\alpha$) in the largest available aperture (int, or 3" where no integrated aperture is defined).} \label{table:2b} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline & \multicolumn{2}{c}{log ([O \,{\sc iii}]/H$\beta$)} & \multicolumn{2}{c}{log ([N \,{\sc ii}]/H$\alpha$)} & \multicolumn{2}{c}{log ([S \,{\sc ii}]/H$\alpha$)} & \multicolumn{2}{c}{L(H$\alpha$)} & SFR(H$\alpha$)\\ & & & & & & &\multicolumn{2}{c}{($\times$10$^{39}$ erg s$^{-1}$)} & (M$_{\odot}$ yr$^{-1}$) \\ & 3" & int & 3" & int & 3" & int & 3" & int & \\ \hline NGC 193 & -0.27$\pm$0.22 & -0.49$\pm$0.40 & 0.14$\pm$0.16 & 0.02$\pm$0.20 & -0.02$\pm$0.15 & -0.14$\pm$0.22 & 2.01$\pm$0.12 & 7.70$\pm$0.52 & 0.0354$\pm$0.0024\\ NGC 410 & \dots & \dots & 0.01$\pm$0.11 & 0.02$\pm$0.12 & -0.33$\pm$0.15 & -0.13$\pm$0.17 & 3.55$\pm$0.08 & 5.72$\pm$0.13 & 0.0263$\pm$0.0006\\ NGC 584 & -0.04$\pm$0.40 & -0.17$\pm$1.03 & 0.13$\pm$0.19 & -0.15$\pm$0.29 & -0.16$\pm$0.34 & -0.44$\pm$0.75 & 0.53$\pm$0.06 & 4.68$\pm$0.54 & 0.0215$\pm$0.0025\\ NGC 677 & -0.06$\pm$0.22 & -0.15$\pm$0.30 & -0.01$\pm$0.16 & -0.07$\pm$0.23 & -0.03$\pm$0.08 & -0.09$\pm$0.15 & 5.22$\pm$0.24 & 34.71$\pm$3.50 & 0.1597$\pm$0.0161\\ NGC 777 & \dots & \dots & 0.28$\pm$0.22 & \dots & 0.30$\pm$0.28 & \dots & 0.61$\pm$0.09 & \dots & 0.0028$\pm$0.0004\\ NGC 924 & 0.05$\pm$0.37 & 0.14$\pm$0.60 & 0.06$\pm$0.14 & -0.04$\pm$0.15 & -0.10$\pm$0.20 & -0.12$\pm$0.27 & 3.97$\pm$0.28 & 18.24$\pm$1.41 & 0.0839$\pm$0.0065\\ NGC 940 & 0.19$\pm$0.43 & -0.55$\pm$0.36 & 0.34$\pm$0.17 & -0.19$\pm$0.14 & 0.02$\pm$0.25 & -0.45$\pm$0.20 & 8.84$\pm$0.75 & 63.67$\pm$1.50 & 0.2929$\pm$0.0069\\ NGC 978 & -0.09$\pm$0.17 & -0.15$\pm$0.22 & -0.08$\pm$0.14 & -0.07$\pm$0.22 & -0.18$\pm$0.21 & -0.11$\pm$0.42 & 1.81$\pm$0.21 & 4.40$\pm$0.49 & 0.0203$\pm$0.0023\\ NGC 1060 & \dots & \dots & 0.10$\pm$0.23 
& -0.06$\pm$0.33 & 0.09$\pm$0.44 & 0.18$\pm$0.57 & 2.38$\pm$0.33 & 8.80$\pm$1.51 & 0.0405$\pm$0.0069\\ NGC 1453 & -0.27$\pm$0.26 & -0.52$\pm$0.37 & 0.19$\pm$0.08 & 0.06$\pm$0.18 & 0.09$\pm$0.08 & -0.12$\pm$0.17 & 4.52$\pm$0.19 & 27.23$\pm$2.64 & 0.1253$\pm$0.0121\\ NGC 1587 & 0.27$\pm$0.16 & -0.05$\pm$0.34 & 0.07$\pm$0.11 & -0.09$\pm$0.09 & 0.03$\pm$0.16 & -0.11$\pm$0.23 & 1.88$\pm$0.11 & 15.37$\pm$0.52 & 0.0707$\pm$0.0024\\ NGC 4008 & \dots & \dots & -0.07$\pm$0.25 & \dots & 0.09$\pm$0.41 & \dots & 0.47$\pm$0.06 & \dots & 0.0022$\pm$0.0003\\ NGC 4169 & -0.05$\pm$0.32 & -0.16$\pm$0.51 & 0.24$\pm$0.08 & 0.15$\pm$0.12 & 0.10$\pm$0.16 & 0.10$\pm$0.15 & 4.24$\pm$0.19 & 14.75$\pm$0.90 & 0.0679$\pm$0.0041\\ NGC 4261 & 0.25$\pm$0.48 & \dots & 0.34$\pm$0.20 & \dots & -0.07$\pm$0.29 & \dots & 12.08$\pm$1.62 & \dots & 0.0556$\pm$0.0074\\ ESO0507-G025 & -0.08$\pm$0.35 & 0.02$\pm$0.34 & 0.05$\pm$0.12 & -0.17$\pm$0.21 & 0.07$\pm$0.15 & -0.16$\pm$0.26 & 9.17$\pm$0.52 & 42.56$\pm$2.96 & 0.1958$\pm$0.0136\\ NGC 5846 & -0.40$\pm$0.19 & -0.24$\pm$0.41 & 0.12$\pm$0.18 & 0.01$\pm$0.24 & -0.12$\pm$0.21 & -0.26$\pm$0.43 & 0.47$\pm$0.04 & 7.23$\pm$0.81 & 0.0333$\pm$0.0037\\ NGC 6658 & 0.54$\pm$0.79 & \dots & 0.19$\pm$0.19 & \dots & \dots & \dots & 0.65$\pm$0.08 & \dots & 0.0030$\pm$0.0004\\ NGC 7619 & \dots & \dots & 0.40$\pm$0.36 & \dots & \dots & \dots & 0.66$\pm$0.17 & \dots & 0.0030$\pm$0.0008 \\ \hline \end{tabular} \end{minipage} \end{table*} In summary, we find that all galaxies in our sample, with [O\,{\sc iii}] $\lambda$5007 and H$\beta$ detections, have a dominant AGN/LINER nuclear region. Extended LINER-like regions are observed in most galaxies with filamentary structures (type i galaxies) and ring-like structures and/or extranuclear H\,{\sc ii} regions (type ii galaxies). Most spaxels in these regions fall in the BPT area of mixed contribution (composite) from SF and/or AGN. 
\subsection{Velocity field maps}\label{sub:velocity} We obtained the radial velocity V([N\,{\sc ii}]) and velocity dispersion $\sigma$([N\,{\sc ii}]) maps by fitting a single Gaussian to the [N\,{\sc ii}]$\lambda$6584 emission line profiles. The velocity dispersion $\sigma$ (or FWHM=2.35$\sigma$) was obtained as $\sigma^2 = \sigma^2_{obs} - \sigma^2_{inst}$, where $\sigma_{inst}$ is the instrumental dispersion, see Section \ref{sec:observations}. A detailed kinematical analysis of the emitting gas is beyond the scope of this paper; however, in Figure \ref{fig:ESO0507_velocity} and Appendix \ref{appendix_NII_velocity_maps} we show the velocity field maps for the type ii galaxies ESO0507-G025 and NGC 924, NGC 940 and NGC 4169, respectively. Radial velocity and velocity dispersion maps for our full sample are shown in \cite{Olivares2022}. In order to include the kinematical information from the extended and filamentary structures we considered spaxels with S/N $\gtrsim$ 2. In Figure \ref{fig:ESO0507_velocity} we see an example of these velocity fields for the galaxy ESO0507-G025. Clearly, this galaxy has two independent velocity structures or rotating discs, while the velocity dispersion map in the nuclear region shows a bicone-shaped structure whose velocity dispersion increases with distance from the centre. The gas in these bicone structures is located in the AGN/LINERs region of the BPT diagram (see Figures \ref{fig:example_figure} and \ref{fig:figures_BPT_int}) suggesting an outflow oriented perpendicular to the line of sight. Our velocity fields (V and $\sigma$) are in agreement with those obtained independently by \cite{Olivares2022}. In general, the radial velocity fields in our sample galaxies (see Appendix \ref{appendix_NII_velocity_maps} and \citealt{Olivares2022}) show rotation or gradients but with relatively small velocity ranges of around $\pm$200-300 km s$^{-1}$. These variations are similar in both the compact and extended gas emission regions. 
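The instrumental correction applied to the fitted line widths above is a one-line operation; a minimal sketch (the clipping of unresolved lines to zero is our assumption, not stated in the text):

```python
import math

FWHM_TO_SIGMA = 2.35  # FWHM = 2.35 * sigma for a Gaussian profile


def intrinsic_sigma(sigma_obs, sigma_inst):
    """sigma^2 = sigma_obs^2 - sigma_inst^2 (both in km/s).

    Returns 0.0 when the line is unresolved (sigma_obs <= sigma_inst);
    this floor is an assumption for illustration.
    """
    if sigma_obs <= sigma_inst:
        return 0.0
    return math.sqrt(sigma_obs**2 - sigma_inst**2)
```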
We see the highest $\sigma$([N \,{\sc ii}]) values, in most cases, at positions close to the continuum flux maximum with values of $\sim$150 to $\sim$300 km s$^{-1}$, while extended regions or filaments show lower values. In general $\sigma$([N\,{\sc ii}]) values in the filaments vary little across the galaxies. This trend was also reported by \cite{McDonald2012} for a sample of galaxies in galaxy clusters. They found that the most extended optical filaments in their sample likely originated from ICM cooling and were experiencing only minor turbulence. \subsection{[S \,{\sc ii}]$\lambda$6716 / [S \,{\sc ii}]$\lambda$6731 ratio}\label{subsec:density} The [S \,{\sc ii}]$\lambda$6716 / [S \,{\sc ii}]$\lambda$6731 intensity ratio was used to determine the electron density n$_e$([S \,{\sc ii}]). We computed the values of n$_e$([S \,{\sc ii}]) using the IRAF STSDAS temden task assuming T$_e$([O \,{\sc iii}])=10$^4$ K. We set n$_e$([S \,{\sc ii}]) to $\sim$1 cm$^{-3}$ for unrealistic (saturated) values of [S \,{\sc ii}]$\lambda$6716 / [S \,{\sc ii}]$\lambda$6731 > 1.43, and to $\sim$52452 cm$^{-3}$ for [S \,{\sc ii}]$\lambda$6716 / [S \,{\sc ii}]$\lambda$6731 < 0.46. In Table \ref{table:properties_density} we show the electron density for the nuclear (3" aperture) regions for our sample. We note that the [S \,{\sc ii}]$\lambda$6716 / [S \,{\sc ii}]$\lambda$6731 ratio is greater than 1 for 12/18 galaxies, which indicates a predominantly low-density regime ($\sim$100 cm$^{-3}$). Only 6/18 galaxies show values $\gtrsim$1000 cm$^{-3}$ in the nuclear regions. In Figure \ref{fig:density} (appendix \ref{appendix_SII_maps}) we show the [S \,{\sc ii}]$\lambda$6716 / [S \,{\sc ii}]$\lambda$6731 ratio maps for our sample galaxies. \begin{table} \hspace*{-0.7cm} \begin{minipage}{95mm} \centering \caption{Electron density for the nuclear (3" aperture) regions of our sample of group-dominant galaxies. 
We show the unrealistic (saturated) values of [S \,{\sc ii}]$\lambda$6716 / [S \,{\sc ii}]$\lambda$6731 by $\checkmark$ symbols.} \label{table:properties_density} \begin{tabular}{l c c c} \hline Name & [S \,{\sc ii}]$\lambda$6716 / [S \,{\sc ii}]$\lambda$6731 & Saturated & n$_e$([S \,{\sc ii}])\\ & & values & (cm$^{-3}$) \\ \hline NGC 193 & 0.92$\pm$0.16 & & 839.3$\pm$397.4\\ NGC 410 & 1.85$\pm$0.23 & $\checkmark$ & $\lesssim$1.0\\ NGC 584 & 1.46$\pm$0.70 & $\checkmark$ & $\lesssim$1.0\\ NGC 677 & 1.29$\pm$0.09 & & 131.3$\pm$98.0\\ NGC 777 & 1.11$\pm$0.30 & & 377.5$\pm$653.8\\ NGC 924 & 1.38$\pm$0.36 & & 43.7$\pm$257.3\\ NGC 940 & 1.06$\pm$0.34 & & 471.4$\pm$992.9\\ NGC 978 & 1.23$\pm$0.37 & & 200.8$\pm$436.0\\ NGC 1060 & 0.66$\pm$0.37 & & 2788.0$\pm$1126.3\\ NGC 1453 & 1.29$\pm$0.10 & & 131.3$\pm$109.2\\ NGC 1587 & 1.21$\pm$0.24 & & 226.4$\pm$229.5\\ NGC 4008 & 0.69$\pm$0.39 & & 2352.9$\pm$960.4\\ NGC 4169 & 1.07$\pm$0.25 & & 451.4$\pm$584.7\\ NGC 4261 & 1.12$\pm$0.36 & & 360.4$\pm$649.3\\ ESO0507-G025 & 0.46$\pm$0.09 & & 48893.1$\pm$21200.4\\ NGC 5846 & 1.06$\pm$0.24 & & 471.4$\pm$574.6\\ NGC 6658 & 0.29$\pm$0.12 & $\checkmark$ & $\gtrsim$52452.0\\ NGC 7619 & 0.74$\pm$0.11 & & 1823.7$\pm$1122.5\\ \hline \end{tabular} \end{minipage} \end{table} Interestingly, the ionisation cones observed in ESO0507-G025 (see the emission line ratio maps [O \,{\sc iii}]/H$\beta$ and [N \,{\sc ii}]/H$\alpha$ in Figure \ref{fig:example_figure} and velocity fields in Figure \ref{fig:ESO0507_velocity}) together with the high electron density and the shape of the spatially resolved [S \,{\sc ii}]$\lambda$6716 / [S \,{\sc ii}]$\lambda$6731 ratio map indicate that the central region of this galaxy is almost certainly ionised by an AGN \citep[e.g.,][]{Kakkad2018}. This clear pattern is not observed in our other galaxies. 
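The saturation handling described for the [S\,{\sc ii}] diagnostic can be sketched as a small helper; the function and its `None` sentinel are illustrative, with the density solve itself left to a diagnostic code (the text uses the IRAF temden task):

```python
def sii_density_limits(ratio):
    """Apply the saturation limits used in the text for the
    [S II]6716/6731 intensity ratio.

    ratio > 1.43 -> low-density limit, n_e ~ 1 cm^-3
    ratio < 0.46 -> high-density limit, n_e ~ 52452 cm^-3
    otherwise    -> None: solve with a diagnostic code (temden or similar)
    assuming T_e = 1e4 K, as in the text.
    """
    if ratio > 1.43:
        return 1.0        # cm^-3
    if ratio < 0.46:
        return 52452.0    # cm^-3
    return None
```

For example, NGC 410 (ratio 1.85) lands on the low-density limit, NGC 6658 (ratio 0.29) on the high-density limit, matching the checkmarked rows of the table.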
In general, the presence of high electron density in the nuclear regions and extended enhanced regions with densities $\gtrsim$1000 cm$^{-3}$ could indicate outflowing ISM likely driven by expanding hot gas heated by SNe, SF stellar winds, AGN activity and/or the collision/interaction of galaxies \citep{Westmoquette2012}. We will discuss in detail in Section \ref{sec:discussion} the results obtained in this section and their implications for the evolutionary stages of the gas in our sample galaxies. \subsection{Star-formation rate}\label{sub:sfr} Following common practice, the current star-formation rates (SFRs) were estimated from the integrated H$\alpha$ luminosity L(H$\alpha$), assuming the \cite{Kennicutt1998} formula, for a solar metallicity, after correction for a \cite{Chabrier2003} initial mass function, i.e., SFR(H$\alpha$) (M$_{\odot}$ yr$^{-1}$) = 4.6 $\times$ 10$^{-42}$ $\times$ L(H$\alpha$) (erg s$^{-1}$) \citep{Parkash2019}. However, it is unlikely in our case that all H$\alpha$ emission results from SF, since the BPT diagnostics indicate a LINER or composite nature for most spaxels in our emission line maps. Therefore, the derived SFRs are likely to be overestimates and are included here for comparison with other studies. Table \ref{table:2b} shows the resultant SFRs, obtained from the integrated apertures of the galaxies. We note that our average SFR is $\sim$16\% lower than that found by \cite{Kolokythas2022} using GALEX FUV fluxes as a SF indicator, while FIR SFRs \citep{OSullivan2015} for some of our galaxies (NGC 777, NGC 940, NGC 1060 and NGC 5846) are one or two orders of magnitude higher than our SFR(H$\alpha$). Contamination of the FIR luminosity as a SF tracer cannot be ruled out. Most galaxies in both samples are AGN-dominated at FIR wavelengths and their derived FIR SFRs are greater than expected from SF alone. However, in the case of NGC 940, \cite{OSullivan2015} do not confirm the presence of a radio AGN. 
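The SFR conversion quoted above is a direct scaling of L(H$\alpha$); a one-line sketch:

```python
def sfr_halpha(l_halpha_erg_s):
    """SFR in Msun/yr from L(Halpha) in erg/s, using the
    Kennicutt (1998) calibration corrected to a Chabrier IMF
    as quoted in the text: SFR = 4.6e-42 * L(Halpha)."""
    return 4.6e-42 * l_halpha_erg_s
```

As a check against Table \ref{table:2b}, the integrated L(H$\alpha$) of NGC 677 (34.71 $\times$ 10$^{39}$ erg s$^{-1}$) returns SFR $\approx$ 0.16 M$_{\odot}$ yr$^{-1}$, matching the tabulated 0.1597.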
In summary, we find that in this and the aforementioned studies, the SFRs in type i and type ii group-dominant galaxies are, on average, higher than in type i0 galaxies. \section{Discussion}\label{sec:discussion} \subsection{Aperture effects}\label{subsec:morphology_aper} As shown in Section \ref{subsec:emission_ratios}, aperture selection and 2D mapping can impact conclusions about the dominant mechanism ionising the gas in our sample. This was demonstrated in Figure \ref{fig:figures_BPT_int}, where we show the [O \,{\sc iii}]/H$\beta$ vs. [N \,{\sc ii}]/H$\alpha$ diagnostic diagram for the nuclear and integrated apertures. The displacement of the data points in this diagram when the aperture is increased indicates how our interpretation of the ionising sources can change depending on aperture size. The limitations of using SDSS spectra for nearby galaxies have also been highlighted recently using IFU spectroscopy \citep[see][and references therein]{Gomes2016}. In our case, we can see that in the innermost parts of the galaxies the gas is likely dominated by shocks and/or AGN activity, given that the nuclear gas emission line ratios lie in the LINER region of the BPT diagram; however, at larger apertures photoionisation by OB stars likely becomes increasingly important. In Figure \ref{fig:figures_radial} we show the effects of using different apertures on the observed properties of our sample. In panel a) we show the L(H$\alpha$) surface density $\Sigma$(H$\alpha$), calculated as L(H$\alpha$)/area, in panel b) the EW(H$\alpha$), and in panel c) the log([N \,{\sc ii}]/H$\alpha$) emission line ratio. In this figure, the apertures were selected to increase in steps of 1.5" in radius. The $\Sigma$(H$\alpha$) in panel a) of Figure \ref{fig:figures_radial} decreases with radius in all cases and shows the presence of extended H$\alpha$ emission beyond the SDSS aperture. 
\cite{CidFernandes2011} used the EW(H$\alpha$) as an alternative method (explained below in Section \ref{subsec:morphology_2DBPT}) to the BPT diagrams (emission-line classification). Using this method, \cite{Gomes2016} argue that an evolved pAGB stellar background is sufficient to photoionise the diffuse gas in ETGs and explain the observed EW(H$\alpha$) in the range 0.5--2.4 $\rm \AA$. Most EW(H$\alpha$) values in panel b), irrespective of aperture, lie within the area (grey region in the panel) consistent with pure pAGB photoionisation \citep{CidFernandes2011,Gomes2016}. We find that one type ii galaxy (ESO0507-G025) and three type i galaxies (NGC 193, NGC 677 and NGC 1453) show EW(H$\alpha$) $>$2.4 $\AA$ at their centres. These results indicate that pAGBs alone do not explain the observed values in these regions. The EW(H$\alpha$) values of the remaining galaxies are consistent with pure pAGB emission. The [N \,{\sc ii}]/H$\alpha$ found in the nuclear regions (SDSS aperture; see panel c) and at large radii is, in most cases, consistent with LINER emission since log([N \,{\sc ii}]/H$\alpha$) $\gtrsim$ 0.0. We observe that the log([N \,{\sc ii}]/H$\alpha$) declines with increasing aperture size in almost all cases, although the radial profiles differ significantly. \subsection{Ionisation mechanisms}\label{subsec:morphology_2DBPT} In this section we examine the mechanisms responsible for the ionisation which produces the optical emission lines in our sample. For this, we use photoionisation models from the literature, which incorporate the expected physical conditions in our sample galaxies. In Figure \ref{fig:figures_BPT_EWHa_Shocks} we show the BPT diagram for the nuclear emission for our sample galaxies. The fill colour of the data points indicates the EW(H$\alpha$) from the colour scale and the edge colour of the circles identifies the galaxies from the legend. 
Superimposed on the plot are shock models from \cite{Allen2008} calculated with the MAPPINGS III shock and photoionisation code. These shock models have solar metallicity, densities from n = 0.01 cm$^{-3}$ to 1000 cm$^{-3}$, velocities from v = 100 km s$^{-1}$ to 1000 km s$^{-1}$, and magnetic field B = 1 $\mu$G. The models for v = 100 km s$^{-1}$ (purple), 200 km s$^{-1}$ (red), 300 km s$^{-1}$ (green), 500 km s$^{-1}$ (orange) and 1000 km s$^{-1}$ (blue) are shown as coloured lines. The nuclear regions in our sample lie in the velocity range from $>$200 km s$^{-1}$ to $\gtrsim$1000 km s$^{-1}$ and at densities between $\sim$0.1 cm$^{-3}$ and $<$100 cm$^{-3}$. \cite{Annibali2010} compared the [S \,{\sc ii}] densities obtained for a sample of ETGs/LINERs with ionised gas with the \cite{Allen2008} models. The majority of their galaxies had n$_e \gtrsim$ 100 cm$^{-3}$ for an assumed T = 10,000 K. However, as in our case (see Table \ref{table:properties_density}), a significant fraction of galaxies have saturated [S \,{\sc ii}] ratios around 1.45, which are consistent with pre-shock densities (n$_e\approx$ 0.01 to $\lesssim$100 cm$^{-3}$). These pre-shock regions are observed in some regions of Figure \ref{fig:density}. Together with the increasing [S \,{\sc ii}]$\lambda$6716 / [S \,{\sc ii}]$\lambda$6731 ratios, this could indicate that AGN/SF-driven shocks are ionising the gas, mainly in the nuclear regions. This would be consistent with the \cite{Molina2018} conclusion that the line emission of LINERs with low-luminosity AGN is predominantly powered by shocked gas due to jets or outflows \citep[see also][]{Edwards2007}. In Figure \ref{fig:figures_BPT_int_krabbe} (upper panels) we show the BPT diagrams, as in Figure \ref{fig:figures_BPT_EWHa_Shocks}, but including two different photoionisation model grids from \cite{Krabbe2021}. Their models consider the main (LINER) properties observed in our sample of group-dominant galaxies for two distinct SEDs. 
One model assumes pAGB stars (upper left panel) with different T$_{eff}$ as the ionising source and the other considers an AGN SED with a multicomponent continuum (upper right panel). The models were created using the CLOUDY code version 17.0 \citep{Ferland2017} for metallicities Z/Z$_{\odot}$ = 0.2, 0.5, 0.75, 1.0, 2.0 and 3.0 and ionisation parameters U\footnote{The ratio of the ionizing photon density to the particle density.} in the range of -4.0 $\le$ log(U) $\le$ -0.5 in steps of 0.5 dex. Their assumed density of n$_e$ = 500 cm$^{-3}$ is typical for BCGs \citep[e.g.,][]{Ciocan2021} and is compatible with our values from Section \ref{subsec:density}. In both cases, most galaxies in our sample lie in the region between Z/Z$_{\odot}$ = 0.75 (12 + log(O/H) = 8.56) and Z/Z$_{\odot}$ = 1 (12 + log(O/H) = 8.69). However, NGC 4261 and NGC 940 have values compatible with models at 2-3 Z$_{\odot}$ (12 + log(O/H) = 8.99 - 9.17), assuming AGN activity, while for the pAGB models the values for NGC 4261, NGC 940, NGC 4169 and NGC 6658 are compatible with models at $>$1.0 Z$_{\odot}$ metallicities. These results indicate that the nuclear regions in our sample are consistent, within the uncertainties, with metallicities around or slightly above solar, independent of the ionising sources. In Section \ref{subsec:abundances} we estimate the gas phase metallicity for each nuclear region in our sample. In Figure \ref{fig:figures_BPT_int_krabbe} (bottom panel) we show the BPT diagram overlaid with the predicted emission line ratios from CLOUDY models obtained by \cite{Polles2021}, which include photoionisation by X-ray emission. 
We show, in this figure, models for three values of metallicity Z/Z$_{\odot}$ = 0.3, 0.65 and 1 and several values of X-ray emission log(G$_x$) from 2.8 to 1.4 in steps of 0.2 dex with the turbulent velocity (produced by e.g., AGN jets, turbulent mixing between the hot and cold phases and the collisions between filaments) fixed to 10 km s$^{-1}$ (dotted colored lines and dotted black lines). In the same panel, we added three models for the aforementioned metallicities and turbulent velocities v$_{tur}$ = 30, 10, 2 and 0 km s$^{-1}$ (solid colored lines and small dotted black lines). The X-ray radiation field G$_{x}$ is fixed to 100. In general, these grids of models can reproduce the observed values without an excess in X-ray luminosity, even if the extinction A$_V$ increases \citep[see figures E1 and E2 in][]{Polles2021}. In the case of NGC 5846, the observed emission line ratios (maroon open circle) are reproduced by models at very low (or no) turbulent velocities. Also, we find no sign of a clear rotational pattern (disturbed velocity field) in the FoV of the galaxy \citep[see the v$_r$ map in][]{Olivares2022}, together with extended filaments of low velocity dispersion. This is in agreement with a cooling-flow scenario, where the cool gas may have cooled in (or close to) the centre of the group \citep{Temi2018,Jung2022}. In addition, in this galaxy there is a good correlation between the CO cloud positions, detected by \cite{Temi2018}, and the warm gas emission. \cite{CidFernandes2011} introduced the WHAN diagram which uses the EW(H$\alpha$) (or W$_{H\alpha}$) in order to discriminate low-ionisation AGN from galaxies that are ionised by evolved pAGB stars. 
This diagram identifies five classes of galaxies: 1) pure star-forming galaxies: log([N \,{\sc ii}]/H$\alpha$) $<$-0.4 and EW(H$\alpha$) $>$3 $\AA$, 2) strong AGN emission: log([N \,{\sc ii}]/H$\alpha$) $>$-0.4 and EW(H$\alpha$) $>$6 $\AA$, 3) weak AGN: log([N \,{\sc ii}]/H$\alpha$) $>$-0.4 and EW(H$\alpha$) between 3 and 6 $\AA$, 4) retired galaxies: EW(H$\alpha$) $<$3 $\AA$ and 5) passive galaxies: EW(H$\alpha$) and EW([N \,{\sc ii}]) $<$0.5 $\AA$. According to this classification scheme most of our sample (see Figure \ref{fig:figures_BPT_WHAM}) can be classified as retired galaxies likely ionised by pAGB stars, independent of their position on the BPT diagram (see the EW(H$\alpha$) colour scale values in Figures \ref{fig:figures_BPT_EWHa_Shocks} and \ref{fig:figures_BPT_int_krabbe}) and H$\alpha$+[N \,{\sc ii}] emission morphology. Only in ESO0507-G025 and NGC 677 do we see clear indications of AGN activity in the nuclear region from the WHAN diagram, with the EW(H$\alpha$) $\sim$3--6 $\AA$ indicating a weak AGN. However, some galaxies in our sample have clear signatures of AGN activity as reported in studies at other wavelengths. For example, NGC 4261 is the brightest galaxy in the NGC 4261 group. Hubble Space Telescope (HST) WFPC2 observations \citep{Jaffe1993} and our MUSE data reveal a bright nuclear optical source surrounded by a disc of gas and dust, while radio observations identified two jets perpendicular to the disc \citep{BirkinshawDavies1985,Scheneider2006,Kolokythas2015}. Recent kinematical studies using ALMA CO \citep{Boizelle2021} find a dynamical mass for the central supermassive black hole (BH) in NGC 4261 of 1.67$\times$10$^9$ M$_{\odot}$. Thus, the observed LINER properties in this galaxy may be explained by an obscured low-luminosity AGN \citep{Zezas2005}. 
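The five WHAN classes listed above translate directly into a decision rule; a minimal sketch (the class labels and the optional EW([N\,{\sc ii}]) argument are our conventions for illustration):

```python
def whan_class(log_n2ha, ew_ha, ew_n2=None):
    """WHAN classification (Cid Fernandes et al. 2011), as listed in the text.

    log_n2ha : log([N II]6584/Halpha)
    ew_ha    : EW(Halpha) in Angstrom
    ew_n2    : EW([N II]) in Angstrom; only needed to flag passive galaxies.
    """
    # 5) passive: both EW(Halpha) and EW([N II]) below 0.5 A
    if ew_n2 is not None and ew_ha < 0.5 and ew_n2 < 0.5:
        return "passive"
    # 4) retired: EW(Halpha) < 3 A, line emission consistent with pAGB stars
    if ew_ha < 3.0:
        return "retired"
    # 1) pure star-forming: low [N II]/Halpha with appreciable EW
    if log_n2ha < -0.4:
        return "star-forming"
    # 2)/3) AGN branch, split by EW(Halpha) at 6 A
    return "strong AGN" if ew_ha > 6.0 else "weak AGN"
```

With the nuclear EW(H$\alpha$) values of Table \ref{table:2} this places most of the sample in the retired class, and ESO0507-G025 (EW $\approx$ 5.9 \AA) in the weak-AGN class, as described above.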
In addition, 17/18 galaxies in our sample show detected radio continuum emission \citep{Kolokythas2018}; 4/18 show small ($\lesssim$20 kpc; NGC 1060 and NGC 5846) and large-scale ($>$20 kpc; NGC 193 and NGC 4261) jets, 3/18 show diffuse emission (NGC 677, NGC 1587 and ESO0507-G025) and 10/18 a point-like or unresolved point source radio continuum morphology ($\lesssim$11 kpc; NGC 410, NGC 584, NGC 777, NGC 924, NGC 940, NGC 978, NGC 1453, NGC 4008, NGC 4169 and NGC 7619). The results in this section indicate that EW(H$\alpha$), log [N \,{\sc ii}]/H$\alpha$ and the BPT diagrams, alone, cannot distinguish between dominant ionising sources producing the central and extended LINER-like emission in our sample. However, the overall log [N \,{\sc ii}]/H$\alpha$ (Figure \ref{fig:figures_BPT_int}) shows that this emission line ratio decreases as the extent of the emission line region increases, indicating that \textit{the ionisation sources are having different impacts at different radii}. The same pattern of decreasing values with distance is observed for $\sigma$([N \,{\sc ii}]), the 12 + log(O/H) abundances and, in most cases, the EW(H$\alpha$). Central regions are almost certainly influenced by low-luminosity AGN, while the extended regions are ionised by other mechanisms (SF and/or cooling-flow shocks and pAGBs). The EW(H$\alpha$) in the outer parts is in the range of 0.5-2.4 $\AA$ (see Figure \ref{fig:figures_radial}), thus indicating that pAGBs are likely contributing significantly to the ionization of these regions. Finally, our interpretations are based on the comparison with models. Factors like Ly$_{c}$ photon escape and dilution of nuclear EWs \citep{Papaderos2013} have not been considered. Consequently, the addition of these processes may constitute an important element in the understanding of ETGs with extended optical LINER-like emission. 
\subsection{Gas-phase abundances 12 + log(O/H) and metallicity gradients}\label{subsec:abundances} Warm gas-phase abundances in our sample are difficult to obtain at optical wavelengths due to the uncertainty about the ionisation mechanism (see Section \ref{sec:Intro} and previous results in Section \ref{subsec:morphology_2DBPT}). The wavelength range covered by MUSE does not allow the detection of [OII]$\lambda$3727 and [OIII]$\lambda$4363. Therefore, the 12 + log(O/H) abundance in each nuclear region was derived using linear interpolation between the models (AGN, pAGBs and X-ray emission) and their measured emission line ratios (log([O\,{\sc iii}]/H$\beta$) and log([N \,{\sc ii}]/H$\alpha$)). Additionally, we used the following calibrators: i) the AGN calibration (AGN N2 calibrator) proposed by \cite{Carvalho2020}: \begin{equation} 12 + \log\left(\frac{O}{H}\right) = 12 + \log\left(\frac{Z}{Z_{\odot}} \times 10^{\log\left(\frac{O}{H}\right)_{\odot}}\right), \end{equation} \noindent where Z/Z$_{\odot}$ = 4.01$^{N2}$ - 0.07 for 0.3 < (Z/Z$_{\odot}$) < 2.0, and N2 = log([N \,{\sc ii}]$\lambda$6584/H$\alpha$). Although this calibration was developed for Seyfert 2 nuclei, it has been used to derive the O/H abundance in narrow-line AGN regions and LINERs \citep[e.g.,][]{Krabbe2021,doNascimento2022}. ii) Most of our extended regions lie to the right of the empirical \cite{Kauffmann2003} boundary line on the BPT diagrams in Figures \ref{fig:example_figure} and \ref{fig:BPT_maps_1} (Appendix \ref{Maps_sample}). The O3N2 ratio was suggested by \cite{Kumari2019} as a metallicity tracer of DIG and LI(N)ERs (EW(H$\alpha$) < 3 $\rm \AA$), since these regions are likely ionised by pAGB stars, star-forming clusters and weak AGN. 
Therefore, the abundances in these regions can be estimated by: \begin{equation} 12 + \log\left(\frac{O}{H}\right) = 7.673 + 0.22 \times \sqrt{25.25 - 9.072 \times O3N2} + 0.127 \times O3, \label{eq:oxygen_O_Kumari} \end{equation} with O3N2 = log([O \,{\sc iii}]$\lambda$5007/H$\beta \times$ H$\alpha$/[N \,{\sc ii}]$\lambda$6584) and O3 = log([O \,{\sc iii}]$\lambda$5007/H$\beta$). \noindent Finally, iii) we compare those values with the ones inferred from the N2 diagnostic calibrated by \cite{Marino2013} (star-forming H\,{\sc ii} calibrator), obtained using empirically calibrated direct abundance data (T$_e$-based measurements) from H\,{\sc ii} regions in the CALIFA survey. The \cite{Marino2013} calibration is defined as: \begin{equation} 12 + \log\left(\frac{O}{H}\right) = 8.743 + 0.462 \times N2. \label{eq:oxygen_N2_M13} \end{equation} \noindent We note that the emission line ratios used in these relations are not strongly affected by reddening. \noindent In Table \ref{table:abundances} we show the oxygen abundance for the nuclear 3" region of each galaxy using the aforementioned methods, and their distributions are shown in Figure \ref{fig:figures_Histogram_OH}. In the figure and table we see that the 12 + log(O/H) values derived from the AGN model and the AGN N2 calibrator are in agreement within the uncertainties (of $\pm$0.1 dex) with those inferred from the H\,{\sc ii} calibration in $\sim$62\% (8/13) and $\sim$89\% (16/18) of the galaxies, respectively. These values drop to $\sim$38\% (5/13) and 0\% (0/13) when compared to the pAGB and X-ray emission models, respectively. However, 100\% (13/13) of the nuclear DIG/LI(N)ERs abundances are in agreement, at the 1$\sigma$ level, with the H\,{\sc ii} values. We note that nuclear metallicities obtained from the pAGB and X-ray emission models are, in most cases, lower than those obtained from the H\,{\sc ii}-based method. 
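The three calibrations above map directly measurable line ratios onto 12 + log(O/H). A compact sketch (function names are ours; the Carvalho conversion to 12 + log(O/H) assumes a solar abundance 12 + log(O/H)$_{\odot}$ = 8.69, which is our assumption, and the validity ranges quoted in the original papers still apply):

```python
import math

def oh_carvalho2020(n2, oh_sun=8.69):
    """AGN N2 calibrator (Carvalho et al. 2020):
    Z/Zsun = 4.01**N2 - 0.07, N2 = log([N II]6584/Halpha),
    valid for 0.3 < Z/Zsun < 2.0. The solar 12 + log(O/H) = 8.69
    used to convert to an absolute abundance is an assumption here."""
    z_zsun = 4.01 ** n2 - 0.07
    return oh_sun + math.log10(z_zsun)

def oh_kumari2019(o3n2, o3):
    """DIG/LI(N)ER O3N2 calibrator (Kumari et al. 2019), with
    O3N2 = log([O III]5007/Hbeta x Halpha/[N II]6584) and
    O3 = log([O III]5007/Hbeta)."""
    return 7.673 + 0.22 * math.sqrt(25.25 - 9.072 * o3n2) + 0.127 * o3

def oh_marino2013(n2):
    """Star-forming H II N2 calibrator (Marino et al. 2013)."""
    return 8.743 + 0.462 * n2
```

For example, a spaxel with N2 = 0 (equal [N\,{\sc ii}] and H$\alpha$ fluxes) gives 12 + log(O/H) = 8.743 with the H\,{\sc ii} calibrator, close to the nuclear values listed in Table \ref{table:abundances}.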
Two of our galaxies are included in the \cite{Annibali2010} sample of ETGs with ionised gas: NGC 1453 and NGC 5846. Those authors found 12 + log(O/H) = 8.55$\pm$0.19 and 8.84$\pm$0.17, using the calibration of \cite{Kobulnicky1999}, for NGC 1453 and NGC 5846, respectively. Our measurements are in reasonable agreement with theirs within the errors, considering the different apertures and the intrinsic differences between the calibrations. \begin{table*} \centering \begin{minipage}{152mm} \caption{Oxygen abundance determinations for the nuclear region (3" aperture) assuming pure H\,{\sc ii} regions (Marino et al. 2013; H\,{\sc ii}), AGN models (Krabbe et al. 2021), the AGN N2-based calibrator (Carvalho et al. 2020), the DIG/LI(N)ERs O3N2 calibrator (Kumari et al. 2019; DL), and pAGB (Krabbe et al. 2021) and X-ray (Polles et al. 2021) emission models, respectively.} \label{table:abundances} \begin{tabular}{l c c c c c c} \hline & 12 + log(O/H)$_{HII}$ & \multicolumn{2}{c}{12 + log(O/H)$_{AGN}$} & 12 + log(O/H)$_{DL}$ & 12 + log(O/H)$_{pAGB}$ & 12 + log(O/H)$_{X-ray}$ \\ & N2 & model & N2 & O3N2 & model & model\\ \hline NGC 193 & 8.81$\pm$0.09 & 8.76$\pm$0.16 & 8.75$\pm$0.10 & 8.82$\pm$0.10 & 8.63$\pm$0.16 & 8.68$\pm$0.10\\ NGC 410 & 8.75$\pm$0.05 & \dots & 8.66$\pm$0.07 & \dots & \dots & \dots\\ NGC 584 & 8.80$\pm$0.10 & 8.71$\pm$0.16 & 8.74$\pm$0.11 & 8.81$\pm$0.16 & 8.60$\pm$0.17 & 8.67$\pm$0.15\\ NGC 677 & 8.74$\pm$0.08 & 8.56$\pm$0.08 & 8.65$\pm$0.10 & 8.78$\pm$0.10 & 8.55$\pm$0.05 & 8.48$\pm$0.27\\ NGC 777 & 8.87$\pm$0.13 & \dots & 8.84$\pm$0.13 & \dots & \dots & \dots\\ NGC 924 & 8.77$\pm$0.07 & 8.57$\pm$0.11 & 8.70$\pm$0.08 & 8.79$\pm$0.15 & 8.55$\pm$0.12 & 8.60$\pm$0.18\\ NGC 940 & 8.90$\pm$0.11 & 8.90$\pm$0.10 & 8.88$\pm$0.10 & 8.83$\pm$0.17 & 8.92$\pm$0.17 & >8.69\\ NGC 978 & 8.71$\pm$0.07 & 8.56$\pm$0.01 & 8.64$\pm$0.08 & 8.77$\pm$0.08 & 8.55$\pm$0.03 & 8.33$\pm$0.25\\ NGC 1060 & 8.79$\pm$0.12 & \dots & 8.72$\pm$0.14 & \dots & \dots & \dots\\ NGC 1453 & 8.83$\pm$0.06 & 8.83$\pm$0.06 & 8.78$\pm$0.05 & 8.83$\pm$0.10 & 8.74$\pm$0.14 & >8.69\\ NGC 1587 & 8.78$\pm$0.06 & 8.59$\pm$0.11 & 8.70$\pm$0.07 & 8.77$\pm$0.08 & 8.54$\pm$0.10 & 8.60$\pm$0.13\\ NGC 4008 & 8.71$\pm$0.12 & \dots & 8.61$\pm$0.15 & \dots & \dots & \dots\\ NGC 4169 & 8.85$\pm$0.06 & 8.85$\pm$0.04 & 8.81$\pm$0.05 & 8.83$\pm$0.12 & 8.81$\pm$0.10 & >8.69\\ NGC 4261 & 8.90$\pm$0.13 & 8.90$\pm$0.13 & 8.88$\pm$0.12 & 8.83$\pm$0.19 & 8.91$\pm$0.26 & >8.69\\ ESO0507-G025 & 8.76$\pm$0.06 & 8.57$\pm$0.08 & 8.69$\pm$0.07 & 8.79$\pm$0.14 & 8.55$\pm$0.07 & 8.59$\pm$0.16\\ NGC 5846 & 8.80$\pm$0.10 & 8.75$\pm$0.18 & 8.74$\pm$0.11 & 8.83$\pm$0.09 & 8.59$\pm$0.16 & 8.67$\pm$0.14\\ NGC 6658 & 8.83$\pm$0.11 & 8.81$\pm$0.28 & 8.78$\pm$0.11 & 8.78$\pm$0.31 & 8.81$\pm$0.32 & >8.69\\ NGC 7619 & 8.93$\pm$0.20 & \dots & 8.91$\pm$0.22 & \dots & \dots & \dots\\ \hline \end{tabular} \end{minipage} \end{table*} It is important to bear in mind that the N2 metallicity diagnostics are known to depend on the ionisation parameter and the nitrogen-to-oxygen ratio of the gas, given that metallicities increase with N-enrichment. Unfortunately, we are unable to explore the extended metallicity distribution using methods that do not depend on these parameters, given that only the [N \,{\sc ii}]$\lambda$6584 and H$\alpha$ emission lines were detected in most of our sample galaxies. On the other hand, the O3N2 method gives nuclear abundances that agree, within the uncertainties, with the N2-based abundances. Therefore, we used the N2 and O3N2 indicators, in galaxies with extended [N \,{\sc ii}]$\lambda$6584 emission, as a way of obtaining the spatially resolved morphologies of the ionised gas, which can be translated into metallicities in a relative rather than absolute sense. 
In Section \ref{subsec:emission_ratios} we showed that most spaxels in our spatially resolved BPT maps lie in the AGN and composite areas of the diagrams. Therefore, we calculated the pixel-by-pixel 12 + log(O/H) abundance in those regions by adopting the \cite{Carvalho2020} (AGN N2; green dots) and \cite{Kumari2019} (DIG/LI(N)ERs O3N2; blue dots) calibrators, while values from the N2 calibration by \cite{Marino2013} (H\,{\sc ii}; black dots) are included for comparison. In Figure \ref{fig:figures_OH_profiles_pp} we show, for each galaxy in our sample, the 12 + log(O/H) abundances as a function of the radius from the galaxy's centre. The dots are the median values within circular bins of 1.5" radius, except for the first bin, which has a radius of 0.5". The error bars in this figure denote the 1$\sigma$ distribution of the H\,{\sc ii} 12 + log(O/H) per bin. In Figure \ref{fig:figures_OH_profiles_pp} we see that the calibrators predict comparable metallicities, but with a small offset ($\lesssim$0.1 dex in most cases) in the innermost regions, while for the extended regions this difference can reach values of $\sim$0.3 dex in the case of ESO0507-G025. From this figure it is clear that there is a break in the metallicity gradient slope, with a very steep gradient in the central region which, as indicated in Section \ref{subsec:morphology_2DBPT}, is more influenced by low-luminosity AGN. We calculated the metallicity gradient ($\nabla_{\text{O/H}}$) as the slope of the linear fit to the median 12 + log(O/H) values, separately for the innermost and extended regions, for all the calibrations considered. In Table \ref{table:4} we present for each galaxy the results of our linear fitting and the statistics of all pixels/spaxels used to create the 12 + log(O/H) profiles. From this table we see that the central metallicity gradients are in all cases negative, while the extended regions show a flat gradient. 
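The two-slope fit described above amounts to fitting a straight line to the binned median abundances on either side of a break radius. A minimal sketch (the function and the fixed break radius passed as an argument are ours; in practice the inner/extended split is set per galaxy):

```python
import numpy as np

def metallicity_gradient(radius_arcsec, oh, r_break):
    """Slopes (dex/arcsec) of linear fits to 12 + log(O/H) versus radius,
    fitted separately inside and outside a break radius r_break.
    Returns (inner_slope, outer_slope); a negative inner slope and an
    outer slope near zero reproduce the behaviour described in the text."""
    radius_arcsec = np.asarray(radius_arcsec, dtype=float)
    oh = np.asarray(oh, dtype=float)
    inner = radius_arcsec <= r_break
    # np.polyfit with degree 1 returns [slope, intercept]
    grad_inner = np.polyfit(radius_arcsec[inner], oh[inner], 1)[0]
    grad_outer = np.polyfit(radius_arcsec[~inner], oh[~inner], 1)[0]
    return grad_inner, grad_outer
```

Fed with a profile that declines by 0.03 dex/arcsec inside the break and stays flat outside, the function recovers those two slopes, i.e. the steep-then-flat shape seen in Figure \ref{fig:figures_OH_profiles_pp}.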
\subsection{Properties of the gas and its origin}\label{subsec:On_the_properties} \subsubsection{Properties of the gas and metallicity gradients}\label{subsec:properties_gas} From the previous section we see that the mean gas-phase metallicities in the nuclear and extended regions satisfy $\langle$(O/H)$_{nuclear}\rangle$ > $\langle$(O/H)$_{extended}\rangle$, with the 12 + log(O/H) abundance generally decreasing with radius, independent of the ionisation source or method considered, and a flattening of the metallicity gradients in the outermost regions. In Figure \ref{fig:type_gradient} we show the relationship between the metallicity gradients $\nabla_{\text{O/H}}$ for the central and extended regions and the H$\alpha$+[N\,{\sc ii}] morphology of our sample, while in Table \ref{table:metallicity_gradients} we summarise the average metallicity gradients for each region and calibrator considered. A weak positive Spearman correlation of $\sim$0.6 between the metallicity gradients and the morphological H$\alpha$+[N\,{\sc ii}] types of the galaxies is found in Figure \ref{fig:type_gradient}. Despite the low number statistics in our study, these results suggest that the nuclear $\nabla_{\text{O/H}}$ of type i group-dominant galaxies is, on average, slightly higher than that of type i0 and type ii galaxies. However, the intrinsic uncertainties associated with these values lead us to consider that these differences may not be statistically significant. Therefore, we argue that the similarity in the shape of the nuclear metallicity gradients in all galaxy types, and the flattening in the outer regions of our group-dominant galaxies with type i (strong nuclear emission plus extended filamentary structures) and type ii (strong or diffuse nuclear emission plus extranuclear H\,{\sc ii} regions) morphologies, are common properties of this group of galaxies. Several groups \citep[e.g.,][and references therein]{Sanchez2014,Belfiore2017} have studied the metallicity gradients in local galaxy samples. 
\cite{Sanchez2014} and \cite{Sanchez2020} show that a significant fraction of galaxies in the CALIFA sample exhibit shallow metallicity slopes in their innermost and/or outermost regions. In particular, they argue that the flattening in the outer regions is a universal property of disc galaxies, independent of inclination, mass, luminosity, morphology and the presence of bars. In this and other similar works, several mechanisms like radial motions, inside-out growth, metal-poor/rich gas accretion, turbulent transport and outflows of gas \citep[e.g.,][and references therein]{Kewley2010,SanchezMenguiano2018,Sanchez2020} are invoked to explain the observed gas metallicity profiles. \begin{table*} \centering \begin{minipage}{130mm} \caption{Average and standard deviation (sd) of the metallicity gradients ($\nabla_{\text{O/H}}$), in units of dex/arcsec, for the central and extended regions (left and right values in each column) using the H\,{\sc ii} N2 (Marino et al. 2013), AGN N2 (Carvalho et al. 2020) and DIG/LI(N)ERs O3N2 (Kumari et al. 2019) calibrators.} \label{table:metallicity_gradients} \begin{tabular}{l c c c c c c} \hline & \multicolumn{2}{c}{H\,{\sc ii} N2} & \multicolumn{2}{c}{AGN N2} & \multicolumn{2}{c}{DIG/LI(N)ERs O3N2}\\ & central & extended & central & extended & central & extended \\ & $\langle\nabla_{\text{(O/H)}}\rangle$/sd& $\langle\nabla_{\text{(O/H)}}\rangle$/sd & $\langle\nabla_{\text{(O/H)}}\rangle$/sd & $\langle\nabla_{\text{(O/H)}}\rangle$/sd& $\langle\nabla_{\text{(O/H)}}\rangle$/sd& $\langle\nabla_{\text{(O/H)}}\rangle$/sd\\ \hline type i0 & -0.032/0.022 & \dots & -0.040/0.037 & \dots & -0.010/0.012 & \dots \\ type i & -0.026/0.019 & 0.0011/0.0043 & -0.031/0.028 & -0.0013/0.0053 & -0.010/0.005 & -0.0012/0.0027\\ type ii & -0.029/0.020 & -0.0039/0.0019 & -0.038/0.027 & -0.0025/0.0017 & -0.008/0.005 & -0.0000/0.0003 \\ \hline \end{tabular} \end{minipage} \end{table*} The properties of the extended warm ISM in our sample of type i and type ii galaxies suggest, within the uncertainties, nearly solar ($\pm$0.2 dex) homogeneous chemical abundances (see Figure \ref{fig:figures_OH_profiles_pp} and Table \ref{table:4}). This likely requires mechanisms (e.g., radial motions, gas-cloud or satellite accretion/interactions, AGN/SF-driven outflows) for the efficient transport, mixing and radial flattening of metallicity into the outer regions of the galaxies on a relatively short time scale \citep[e.g.,][]{Werk2011,Bresolin2012,Rennehan2019,Rennehan2021}, likely similar to other low-redshift galaxy classes. \subsubsection{Cold gas content}\label{subsec:HI_gas} H\,{\sc i} is a sensitive tracer of external environmental mechanisms in galaxies. A study of H\,{\sc i} in ETGs by \cite{Serra2012} found that H\,{\sc i} detections were relatively uncommon near the Virgo cluster centre (10\%) but common (40\%) in the field, with the detected H\,{\sc i} mass inversely related to environment density. 
H\,{\sc i} morphology was also found to vary in a continuous way, from regular, settled H\,{\sc i} discs and rings in the field to unsettled gas distributions (including tidal or accretion tails) at the Virgo cluster centre, with the H\,{\sc i}- and CO-richest galaxies found in the poorest environments, where the SF detection rate was also higher. This implies that galaxy group processing is involved in evolving pre-existing ETG gas properties. In our sample, 8/18 galaxies have H\,{\sc i} properties available from the literature and 18/18 have been observed in CO \citep[see our Table \ref{table:properties_HI} for a summary;][]{OSullivan2015,OSullivan2018}. In Figure \ref{fig:figures_HI} we show single-dish H\,{\sc i} spectra from the literature for 7 of these galaxies. Excluding the two galaxies in Table \ref{table:properties_HI} which have a caveat about their H\,{\sc i} properties (see the Table \ref{table:properties_HI} note), the mean H\,{\sc i} mass in type ii galaxies is 17.2$\times$10$^{9}$ M$_{\odot}$, compared to 1.4$\times$10$^{9}$ M$_{\odot}$ in type i galaxies, i.e., the type ii galaxies have an order of magnitude more H\,{\sc i}. NGC 940 was excluded from the above calculation because of the high uncertainty about its H\,{\sc i} detection; however, it has a large M(H$_2$) mass (6.1$\times$10$^{9}$ M$_{\odot}$). The H$_2$ mass of NGC 940, together with the H\,{\sc i} masses of the other type ii galaxies, confirms that the type ii galaxies are cold gas rich. To varying degrees, all the H\,{\sc i} detections in our sample display double-horned profiles, which are indicative of rotating discs, with the clearest examples being NGC 924 and NGC 940, which also have the lowest H\,{\sc i} profile asymmetries as measured by the A$_{flux}$ parameter \citep{Espada2011}. 
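The A$_{flux}$ parameter referenced above quantifies profile lopsidedness as the ratio of integrated flux on the two sides of the profile's central velocity. A minimal sketch of that idea, under our own simplifying assumptions (channel sums with uniform channel width; the central-velocity choice in Espada et al. 2011 is more careful than a fixed input value):

```python
import numpy as np

def a_flux(velocity, flux, v_center):
    """H I profile asymmetry: ratio of summed flux on either side of
    v_center, ordered so that the result is >= 1. A symmetric
    double-horned profile gives a value close to 1; larger values
    indicate a more lopsided profile. Assumes uniform channel width."""
    velocity = np.asarray(velocity, dtype=float)
    flux = np.asarray(flux, dtype=float)
    low = flux[velocity < v_center].sum()   # approaching side
    high = flux[velocity >= v_center].sum() # receding side
    return max(low, high) / min(low, high)
```

A flat (perfectly symmetric) synthetic profile returns a value near 1, while doubling the flux on one side returns a value near 2, which is the qualitative behaviour used to rank the profiles in the text.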
It seems likely that the cold and warm discs in the type ii galaxies are part of the same kinematical gas structures, with support for this coming from the H\,{\sc i} profiles and the [N\,{\sc ii}] velocity fields in Figure \ref{fig:ESO0507_velocity} and Appendix \ref{appendix_NII_velocity_maps}. In the [N\,{\sc ii}] velocity fields we see that all of the type ii galaxies show clear rotating disc patterns, with NGC 940 presenting a highly symmetric case. The high levels of symmetry in the velocity fields, especially in NGC 940, together with the flattened metallicity gradients, argue against the gas having been recently acquired. In particular, using the Romulus cosmological hydrodynamic simulations, \cite{Jung2022} examined the re-emergence of gaseous and stellar discs in BGGs following their destruction by mergers. They find that ordered discs take $\sim$1 Gyr to be established. We suggest that our gas-rich systems obtained their cold gas at least $\sim$1 Gyr ago \citep{Holwerda2011,Jung2022}, i.e., the H\,{\sc i} virialisation time scale after gas-cloud or satellite mergers/accretion. However, we observe that $\sigma_{gas,central} \gtrsim \sigma_{gas,extended}$, so the warm gas in the central regions is unlikely to be dynamically relaxed \citep{Olivares2022} in the gravitational potential of the galaxy, indicating an AGN outburst contribution. These results are in agreement with the morphology of the warm gas distribution and velocity fields observed in our sample (Section \ref{sub:velocity}). \subsubsection{Summary of properties, chemical abundances and possible scenarios for the origin of the gas in group-dominant galaxies}\label{subsec:origin_gas} The interpretation of internal (stellar mass loss) and external mechanisms (cooling from the IGrM or mergers/interactions) for the origin of the gas in our sample is difficult, since different mechanisms could be acting at different evolutionary stages. 
\cite{Kolokythas2022} examined the relationships between radio power, SFR(FUV) and stellar mass for CLoGS, which includes our analysed galaxies, and found no correlations between these quantities. This suggests a mix of origins for the cool gas in these systems, including stellar mass loss, cooling from the IGrM/galaxy halo, the settling of gas due to mergers, and tidal or ram pressure stripping \citep[see also][]{Loubser2022}. In systems like NGC 978 and NGC 1587, the observed ionised gas could be associated with an interaction with the companion galaxy. Although no cold gas has been reported in NGC 978 (see Figure \ref{fig:figures_NB_apend}), the interaction debris gas may eventually enter the galaxy's (hot) halo to trigger SF and feed the AGN activity. On the other hand, the SF in extranuclear regions (type ii galaxies) is likely a later stage, after gas streams settle from orbiting satellites \citep{Jung2022}. \cite{Kolokythas2022} argued that S0 group-dominant galaxies (type ii galaxies) occupy X-ray faint systems and have point-like radio sources \citep{Kolokythas2018}, which indicates that their cold gas is more likely to be the result of gas-rich mergers or tidal interactions than of cooling from a hot IGrM. In some type i galaxies (as in the case of NGC 5846, where the ionised gas morphology supports a cooling flow), gas cooled from the IGrM may be an effective mechanism for forming filaments and rotating discs in the galaxy nuclei \citep[][and references therein]{OSullivan2015,Olivares2022}. 3/4 galaxies with radio jet-like morphology (NGC 193, NGC 1060 and NGC 5846) present filamentary ionised gas structures characteristic of ICM cooling. The presence of misaligned or counter-rotating ionised gas discs with respect to the stellar body is a strong indication of external accretion of gas. 
Direct evidence of this exists in our sample: \cite{Olivares2022} found that most of our galaxies have ionised gas kinematically decoupled from the stellar component, which suggests an external origin for the gas. Observational evidence \citep[e.g.,][]{Sarzi2006,Gomes2016} and simulations \citep[][among others]{Starkenburg2019,Khim2021,Jung2022} indicate possible origins of this misalignment in galaxies regardless of morphology and environment, with early-type galaxies having higher misalignment fractions. We argue that stellar mass loss is unlikely to be the dominant source of cold gas in our sample \citep[see][]{Olivares2022}. \begin{table} \centering \begin{minipage}{67mm} \caption{Comparison between Z (this work) and IGrM metallicities in units of Z/Z$_{\odot}$. Column (1): average of the AGN N2 and interpolated abundances from the AGN models. Column (2): abundances in the extended regions from the DIG/LI(N)ERs O3N2 calibrator. Column (3): IGrM metallicity from O'Sullivan et al. 
(2017).} \label{table:abundances_IGrM} \begin{tabular}{l c c c} \hline & nuclear 3" & extended & IGrM \\ & AGN & DIG/LI(N)ERs & \\ & (1) & (2) & (3) \\ \hline NGC 193 & 1.15 & \dots & 0.68\\ NGC 410 & 0.94 & \dots & 0.42\\ NGC 584 & 1.08 & 1.35 & \dots\\ NGC 677 & 0.84 & \dots & 0.38 \\ NGC 777 & 1.41 & \dots & 0.63\\ NGC 924 & 0.89 & 0.85 & \dots\\ NGC 940 & 1.57 & 1.07 & 0.06\\ NGC 978 & 0.82 & \dots & >0.29\\ NGC 1060 & 1.08 & \dots & 0.28\\ NGC 1453 & 1.30 & 1.29 & 0.42\\ NGC 1587 & 0.91 & 1.05 & 0.03\\ NGC 4008 & 0.84 & \dots & 0.32\\ NGC 4169 & 1.38 & 1.35 & 0.11\\ NGC 4261 & 1.58 & \dots & 0.23\\ ESO0507-G025 & 0.88 & 0.98 & \dots\\ NGC 5846 & 1.12 & 1.32 & 0.27\\ NGC 6658 & 1.27 & \dots & <0.18\\ NGC 7619 & 1.67 & \dots & 0.54\\ \hline \end{tabular} \end{minipage} \end{table} If we assume that the metallicity of the warm gas in our sample is represented by a single calibrator (AGN N2 or DIG/LI(N)ERs O3N2), then, on average, the nuclear regions are more metal-rich than their extended structures, i.e., $\langle$(O/H)$_{nuclear}\rangle \gtrsim \langle$(O/H)$_{extended}\rangle$. However, in Section \ref{subsec:morphology_2DBPT} we found that the ionisation sources have different impacts at different radii. Therefore, the abundances in the nuclear regions are well represented by the 3" apertures and can be obtained as the average between the AGN N2 and the interpolated AGN abundances from the models (Figure \ref{fig:figures_BPT_int_krabbe}), while for the extended regions the average abundances are taken from the data points in Section \ref{subsec:abundances} (see Table \ref{table:4}), assuming the DIG/LI(N)ERs O3N2 calibrator. This is a reasonable assumption, given that most of the spaxels in these regions lie in the composite area of the BPT diagrams. 
In Table \ref{table:abundances_IGrM} we summarise and compare the gas-phase metallicities (Z = [O/H] = $\langle$log(O/H)$\rangle$ - log(O/H)$_{\odot}$) found in this work with those of the IGrM from \cite{OSullivan2017}. In the nuclear regions the metallicities range from $\sim$0.9 to 1.7 Z$_{\odot}$. The metallicity in the extended structures often rises to values approaching solar, $\sim$1.0 Z$_{\odot}$, or higher ($\lesssim$0.3 dex), while the IGrM has metallicities of only $\sim$0.1--0.7 Z$_{\odot}$. In the case of NGC 940, NGC 4261 and NGC 7619 the nuclear metallicities are $\gtrsim$0.6 Z$_{\odot}$ higher than the solar value. Interestingly, in four cases (NGC 584, NGC 1587, ESO0507-G025 and NGC 5846) we found a drop in the nuclear metallicity with respect to the extended regions, of $\lesssim$0.2 dex in NGC 584 and NGC 5846 and $\sim$0.1 dex in NGC 1587 and ESO0507-G025. This suggests the accretion of metal-poor gas onto the central AGN \citep[e.g.,][]{doNascimento2022}. Since the metallicities in the nuclear regions represent the average within the 3" apertures, and the uncertainties on the abundances are of the order of $\sim$0.1 dex, we find in our sample of group-dominant galaxies that $Z_{nuclear}\gtrsim$Z$_{extended}$>Z$_{IGrM}$. The mixing and dispersion of heavy elements in the ISM of galaxies should, in general, follow the ``evolutionary'' stages of disc growth at different spatial scales. In our case, this might be expected to manifest as a correlation between the gas-phase metallicity gradients $\nabla_{\text{O/H}}$ and the H$\alpha$+[N\,{\sc ii}] morphology (see Figure \ref{fig:type_gradient}), since the effect of gas flows over the lifetime of the galaxies seems to produce the flattening of abundances out to large radii \citep[e.g.,][]{Kewley2010,LopezSanchez2015,Sanchez2014}, following the formation of the extended structures. However, we find no significant correlation between the metallicity gradients and morphology (Section \ref{subsec:properties_gas}) in our sample of BGGs. 
This is in agreement with the idea of relatively short time scales for the radial dispersion and mixing of metals on large spatial scales, likely produced by AGN/SF-driven outflows, gas accretion and mergers/interactions. Furthermore, some of these metals will be transported by these mechanisms from the galaxies into the IGrM/ICM. In particular, group-dominant galaxies often host radio AGN that interact with the surrounding gas, forming cavities and shock fronts \citep[see][for a description of these structures in our sample]{Olivares2022}. As seen in Section \ref{sub:velocity}, we observe large gas velocity dispersions in the central regions of the galaxies, likely associated with the presence of AGN activity. Therefore, group-dominant galaxies likely acquired their cold gas as a consequence of several possible mechanisms, i.e., gas-cloud or satellite mergers/accretion and cooling flows, which together with the AGN/SF activity are likely contributing to the growth of the ionised gas structures and the flattening of the metallicity gradients. \section{Conclusions}\label{sec:conclusions} In this paper, we present archival MUSE observations for a sample of 18 group-dominant galaxies from the CLoGS sample \citep{OSullivan2017}. We derived and removed the stellar continuum for all galaxies by fitting the stellar SEDs using the spectral synthesis code FADO \citep{GomesPapaderos2017}. We studied the properties (e.g., emission line ratios, chemical abundances) and structure of the warm gas in each galaxy, in order to constrain the ionisation processes, the origin of the gas and its chemical abundance distribution. 
We summarise our main results as follows: \begin{itemize} \item We used the continuum-subtracted H$\alpha$+[N \,{\sc ii}] images (see Figure \ref{fig:figures_NB}) to classify the galaxies into three morphological groups or types: \textit{type i0} - strong or diffuse nuclear emission with (or without) unextended ($\lesssim$1 kpc) filamentary structures connected to the nuclear region; \textit{type i} - strong or diffuse nuclear emission with extended (several kpc) filamentary structures beyond the nuclear region; and \textit{type ii} - i0 or i plus extranuclear H\,{\sc ii} regions (well-defined or in distorted ring-like structures). We find that 5/18 galaxies (NGC 410, NGC 978, NGC 4008, NGC 4261 and NGC 6658) are type i0, 9/18 (NGC 193, NGC 584, NGC 677, NGC 777, NGC 1060, NGC 1453, NGC 1587, NGC 5846 and NGC 7619) are type i, and 4/18 (NGC 924, NGC 940, NGC 4169 and ESO0507-G025) are type ii. \item In order to distinguish between different ionisation mechanisms, in Section \ref{subsec:emission_ratios} we used the [O \,{\sc iii}]/H$\beta$ and [N\,{\sc ii}]/H$\alpha$ emission line ratios and the equivalent width EW(H$\alpha$). The spatially resolved log [N \,{\sc ii}]/H$\alpha$ ratio decreases as the extent of the emission line region increases, indicating that the sources of ionisation act on different spatial scales. The same decreasing pattern with distance is observed for the velocity dispersion $\sigma$([N \,{\sc ii}]), the 12 + log(O/H) abundances and, in most cases, the EW(H$\alpha$). Using emission-line diagnostic (BPT) diagrams, we find that all galaxies in our sample have a dominant LINER/AGN nuclear region. Extended LINER-like regions are observed in most galaxies with filamentary structures. In the same section, we studied the mechanisms (pAGBs, AGN and X-ray emission) responsible for the ionisation which produces the optical emission lines in our sample. 
Although AGN, pAGB and X-ray emission models are able to reproduce the observational data, we suggest that the central regions are more influenced by a low-luminosity AGN, while the extended regions are ionised by other mechanisms, with pAGB photoionisation likely contributing significantly, as suggested by their EW(H$\alpha$) values. \item We calculated the gas-phase metallicity (12 + log(O/H)) using linear interpolations between the AGN, pAGB and X-ray emission models \citep{Krabbe2021} and their measured emission line ratios (log([O\,{\sc iii}]/H$\beta$) and log([N \,{\sc ii}]/H$\alpha$)). We also used the AGN N2 \citep{Carvalho2020} and DIG/LI(N)ERs O3N2-based \citep{Kumari2019} calibrators. Using a single calibrator (AGN N2 or DIG/LI(N)ERs O3N2), the 12 + log(O/H) values in the nuclear and extended regions (see Figure \ref{fig:figures_OH_profiles_pp}) satisfy $\langle$(O/H)$_{nuclear}\rangle \gtrsim \langle$(O/H)$_{extended}\rangle$. We found that the metallicity gradients for the pixel-by-pixel data points are, in most cases, negative in the innermost regions, with a flat gradient for the extended areas beyond the centre, which include the extended structures and some star-forming regions. In this sense, the morphological H$\alpha$+[N \,{\sc ii}] types defined in this study indicate that group-dominant galaxies with extended filamentary structures (type i) and S0 galaxies with extranuclear SF regions (type ii) have, on average, shallow metallicity gradients. Therefore, the extended regions and ring-like structures of ionised gas can be considered chemically homogeneous (nearly solar) within the uncertainties. Since the ionisation sources have different impacts at different radii (as seen in Section \ref{subsec:emission_ratios}), we used the AGN N2 calibrator and AGN models to estimate the nuclear (3" aperture) abundances, and the DIG/LI(N)ERs O3N2 calibrator for the extended regions. 
Therefore, we found in NGC 584, NGC 1587, ESO0507-G025 and NGC 5846 a slight drop in the nuclear metallicity with respect to the extended regions, suggesting the accretion of metal-poor gas onto the central regions. However, we find, within the uncertainties, that $Z_{nuclear}\gtrsim$Z$_{extended}$>Z$_{IGrM}$. \item We suggest that group-dominant galaxies likely acquired their cold gas in the past as a consequence of one or more external mechanisms, with gas-cloud or satellite mergers/accretion (Section \ref{subsec:HI_gas}) and cooling flows likely supplying the gas for the growth of the ionised gas structures and the AGN/SF activity. Our results favour a scenario in which metals are distributed across the ISM of the galaxies on short timescales. \end{itemize} \section*{Acknowledgements} We thank the reviewer for their careful reading of the manuscript and helpful comments, which substantially improved the paper. PL (contract DL57/2016/CP1364/CT0010) and TS (contract DL57/2016/CP1364/CT0009) are supported by national funds through Funda\c{c}\~ao para a Ci\^encia e Tecnologia (FCT) and the Centro de Astrof\'isica da Universidade do Porto (CAUP). SIL and KK are supported in part by the National Research Foundation of South Africa (NRF Grant Number: 120850). Opinions, findings and conclusions or recommendations expressed in this publication are those of the author(s); the NRF accepts no liability whatsoever in this regard. EOS acknowledges support for this work from the National Aeronautics and Space Administration through XMM-Newton award 80NSSC19K1056. AB acknowledges support from NSERC through its Discovery Grant Program. PL thanks Polychronis Papaderos for his very useful comments. We thank Angela Krabbe for providing us with the CLOUDY models used in this work. \section*{Data availability}\label{sec:data_availability} The data underlying this article will be shared on reasonable request to the corresponding author. 
\clearpage \appendix \section{Other emission line galaxies in our MUSE datacubes}\label{emission_galaxies_apen} We found other emission line galaxies in our MUSE FoVs: two objects in the FoV of NGC 677, and one each in the FoVs of NGC 777, NGC 924 and NGC 1453. In Figure \ref{fig:emission_galaxies} we show the positions of these objects in the FoVs. Using the H$\alpha$ and [N\,{\sc ii}]$\lambda$6584 emission lines, we calculated their redshifts, H$\alpha$ SFRs and 12 + log(O/H) abundances using the H\,{\sc ii} calibrator. In Table \ref{table:emission_galaxies} we summarize their main properties. \clearpage \begin{table} \centering \begin{minipage}{70mm} \caption{Properties of the emission line galaxies detected in the fields of our sample galaxies. $^{(a)}$ z calculated using H$\beta$.} \label{table:emission_galaxies} \begin{tabular}{l c c c} \hline FoV & z & SFR(H$\alpha$) & 12 + log(O/H)\\ & &(M$_{\odot}$\,yr$^{-1}$) & N2 \\ \hline NGC 677 & \\ R1 & 0.283658 & 0.0154$\pm$0.0001 & 8.50$\pm$0.09 \\ R2 & 0.282776 & 0.0030$\pm$0.0010 & 8.53$\pm$0.17 \\ NGC 777 & \\ R1 & 0.232878 & 0.0046$\pm$0.0001 & 8.48$\pm$0.15 \\ NGC 924$^{(a)}$ & \\ R1 & 0.491319 & \dots & \dots \\ NGC 1453 & \\ R1 & 0.118373 & 0.0003$\pm$0.0001 & 8.56$\pm$0.15 \\ \hline \end{tabular} \end{minipage} \end{table} \clearpage \section{H$\alpha$+[N\,{\sc ii}]$\lambda\lambda$6548,6584 emission line maps}\label{HaNII_maps_apend} \clearpage \section{Emission line ratio maps and BPT diagrams of the galaxies}\label{Maps_sample} \clearpage \section{[N\,{\sc ii}]$\lambda$6584 velocity fields}\label{appendix_NII_velocity_maps} \clearpage \section{[S\,{\sc ii}]$\lambda$6716 / [S\,{\sc ii}]$\lambda$6731 ratio maps}\label{appendix_SII_maps} \clearpage \section{Results of the linear fitting}\label{N2_fitting_apen} \begin{landscape} \begin{table} \begin{minipage}{205mm} \caption{Results of the linear fitting of Figure \ref{fig:figures_OH_profiles_pp} and statistics for all data points or spaxels using the H\,{\sc ii} N2 
(Marino et al. 2013), AGN N2 (Carvalho et al. 2020) and DIG/LI(N)ERs O3N2 (Kumari et al. 2019) methods, respectively. The slope (metallicity gradient) from the linear fitting is indicated by $\nabla_{\text{O/H}}$.} \label{table:4} \centering \begin{tabular}{l c c c c c c c c} \hline & \multicolumn{4}{c}{nuclear region} & \multicolumn{4}{c}{extended region} \\ Name & intercept & slope $\nabla_{\text{O/H}}$ & mean & sd & intercept & slope $\nabla_{\text{O/H}}$ & mean & sd\\ & & (dex/arcsec) & & & & (dex/arcsec) \\ \hline NGC 193 & 8.87/8.83/8.83 & -0.022/-0.026/-0.009 & 8.80/8.74/8.81 & 0.09/0.11/0.04 & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots\\ NGC 410 & 8.87/8.82/\dots & -0.034/-0.027/\dots & 8.80/8.76/\dots & 0.10/0.12/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots\\ NGC 584 & 8.84/8.78/8.81 & -0.036/-0.042/-0.009 & 8.77/8.69/8.85 & 0.13/0.18/0.05 & 8.69/8.63/8.78 & 0.0030/0.0016/-0.0002 & 8.77/8.69/8.82 & 0.12/0.17/0.08 \\ NGC 677 & 8.76/8.68/8.80 & -0.008/-0.008/-0.008 & 8.73/8.65/8.78 & 0.10/0.12/0.02 & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots \\ NGC 777 & 8.85/8.81/\dots & -0.030/-0.027/\dots & 8.83/8.78/\dots & 0.12/0.15/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots\\ NGC 924 & 8.80/8.74/8.79 & -0.009/-0.009/-0.002 & 8.72/8.75/8.60 & 0.17/0.24/0.04 & 8.79/8.68/8.75 & -0.0060/-0.0025/0.0001 & 8.76/8.72/8.62 & 0.16/0.12/0.03 \\ NGC 940 & 8.97/8.96/8.85 & -0.055/-0.072/-0.017 & 8.60/8.51/8.70 & 0.04/0.06/0.02 & 8.63/8.53/8.82 & -0.0051/-0.0052/-0.0005 & 8.68/8.56/8.72 & 0.14/0.20/0.05 \\ NGC 978 & 8.75/8.66/8.79 & -0.005/-0.005/-0.009 & 8.74/8.65/8.78 & 0.08/0.11/0.03 & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots\\ NGC 1060 & 8.75/8.67/\dots & -0.014/-0.006/\dots & 8.79/8.72/\dots & 0.05/0.06/\dots & 8.73/8.70/\dots & 0.0081/0.0010/\dots & 8.75/8.66/\dots & 
0.04/0.06/\dots\\ NGC 1453 & 8.86/8.83/8.83 & -0.021/-0.028/-0.008 & 8.83/8.78/8.80 & 0.14/0.19/0.04 & 8.77/8.69/8.78 & -0.0011/0.0019/0.0013 & 8.80/8.74/8.80 & 0.13/0.18/0.04 \\ NGC 1587 & 8.79/8.72/8.78 & -0.016/-0.013/-0.004 & 8.75/8.67/8.72 & 0.08/0.11/0.04 & 8.74/8.73/8.80 & -0.0047/-0.0119/-0.0058 & 8.71/8.69/8.71 & 0.13/0.07/0.05 \\ NGC 4008 & 8.79/8.72/\dots & -0.017/-0.015/\dots & 8.76/8.69/\dots & 0.13/0.17/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots\\ NGC 4169 & 8.88/8.85/8.83 & -0.042/-0.059/-0.006 & 8.71/8.61/8.78 & 0.05/0.07/0.03 & 8.81/8.74/8.81 & -0.0036/-0.0004/0.0004 & 8.74/8.69/8.82 & 0.20/0.25/0.06 \\ NGC 4261 & 8.90/8.88/8.82 & -0.010/-0.015/0.005 & 8.89/8.86/8.83 & 0.08/0.10/0.04 & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots\\ ESO0507-G025 & 8.81/8.74/8.82 & -0.011/-0.014/-0.007 & 8.81/8.75/8.68 & 0.09/0.13/0.02 & 8.58/8.50/8.68 & -0.0010/-0.0019/-0.0001 & 8.74/8.66/8.68 & 0.06/0.09/0.01 \\ NGC 5846 & 8.91/8.89/8.85 & -0.068/-0.094/-0.020 & 8.82/8.77/8.84 & 0.04/0.05/0.08 & 8.78/8.71/8.81 & 0.0002/0.0011/-0.0002 & 8.79/8.72/8.81 & 0.08/0.11/0.05 \\ NGC 6658 & 8.88/8.89/8.83 & -0.064/-0.105/-0.025 & 8.78/8.74/8.79 & 0.16/0.20/0.08 & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots\\ NGC 7619 & 8.97/8.97/\dots & -0.065/-0.089/\dots & 8.88/8.85/\dots & 0.12/0.17/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots & \dots/\dots/\dots \\ \hline \end{tabular} \end{minipage} \end{table} \end{landscape} \clearpage \section{Cold gas content}\label{cold_gas_apen} \begin{table*} \begin{minipage}{180mm} \caption{H\,{\sc i} and H$_2$ properties and M$_{*}$ available from the literature.} \label{table:properties_HI} \centering \begin{tabular}{l c c c c c c c c} \hline Name & H$\alpha$+[N\,{\sc ii}] & V(H\,{\sc i}) & W$_{20}$(H\,{\sc i}) & A flux(H\,{\sc i})\footnote{Asymmetry in the H\,{\sc i} profiles using the method from 
\cite{Espada2011}.} & M(H\,{\sc i})\footnote{From the H\,{\sc i} compilation in \cite{OSullivan2018}, although, as noted by those authors, H\,{\sc i} was not detected as expected in NGC\,940 and NGC\,5846 during the ALFALFA survey.} & M(H$_{2}$)\footnote{From \cite{OSullivan2015} and \cite{OSullivan2018}.} & M$_{*}$\footnote{From \cite{Kolokythas2022}.} & H\,{\sc i} spectrum \\ & morphology & (km s$^{-1}$) & (km s$^{-1}$) & & $\times 10^{9}$ M$_{\odot}$ & $\times 10^{9}$ M$_{\odot}$ & $\times 10^{11}$ M$_{\odot}$ & source\\ \hline NGC 584 & i & \dots & \dots & \dots & 0.12 & $<$0.01 & 2.13 & \dots \\ NGC 677 & i & 5138$\pm$6 & 272$\pm$12 & 1.10$\pm$0.07 & 1.70 & $<$0.23 & 3.52 & \cite{Haynes2018}\\ NGC 924 & ii & 4428$\pm$5 & 509$\pm$10 & 1.07$\pm$0.08 & 9.12 & 0.05 & 1.88 & \cite{Haynes2018}\\ NGC 940 & ii & 5127$\pm$4 & 218$\pm$8 & 1.02$\pm$0.08 & 9.14 & 6.10 & 2.94 & \cite{Paturel2003}\\ NGC 1587 & i & 7163$\pm$1 & 302$\pm$2 & 1.29$\pm$0.04 & 2.51 & 0.23 & 3.03 & \cite{Gallagher1981}\\ NGC 4169 & ii & 3811$\pm$7 & 470$\pm$13 & 1.56$\pm$0.07 & 10.71 & 0.14 & 1.27 & \cite{Haynes2018}\\ ESO0507-G025 & ii & 3248$\pm$8 & 450$\pm$16 & 1.26$\pm$0.51 & 31.62 & 0.42 & 2.84 & \cite{Barnes2001}\\ NGC 5846 & i & 1804$\pm$14 & 502$\pm$27 & 1.40$\pm$0.04 & 0.28 & 0.01 & 3.39 & \cite{Bottinelli1979}\\ \hline \end{tabular} \end{minipage} \end{table*} \bsp	% \label{lastpage}
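The N2-based abundances tabulated above come from empirical strong-line calibrations applied to the measured [N\,{\sc ii}]$\lambda$6584/H$\alpha$ ratio. As a minimal illustrative sketch, assuming the widely used H\,{\sc ii}-region calibration of Marino et al. (2013) (the AGN and DIG/LI(N)ERs calibrations used in the tables have different coefficients, which are not reproduced here):

```python
import math

def oh_from_n2(f_nii6584, f_halpha):
    """12 + log(O/H) from the N2 index, assuming the H II-region
    calibration of Marino et al. (2013):
        12 + log(O/H) = 8.743 + 0.462 * N2,
    with N2 = log10(F([N II] 6584) / F(Halpha)),
    calibrated over roughly -1.6 < N2 < -0.2."""
    n2 = math.log10(f_nii6584 / f_halpha)
    return 8.743 + 0.462 * n2

# Example: an observed ratio [N II]6584/Halpha = 0.5 gives
# 12 + log(O/H) ~ 8.60, i.e. close to the solar value of 8.69.
```

Outside the quoted N2 range the relation is an extrapolation and should not be trusted.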
Title: Per aspera ad astra simul: Through difficulties to the stars together
Abstract: In this article, we detail the strategic partnerships "Per Aspera Ad Astra Simul" and "European Collaborating Astronomers Project: Espa\~na-Czechia-Slovakia". These strategic partnerships were conceived to foment international collaboration for educational activities (aimed at all levels) as well as to support the development and growth of early career researchers. The activities, carried out under the auspices of these strategic partnerships, demonstrate that Key Action 2 of the Erasmus+ programme can be an extremely valuable resource for supporting international educational projects, as well as the great impact that such projects can have on the general public and on the continued development of early career researchers. We strongly encourage other educators to make use of the opportunities offered by the Erasmus+ scheme.
https://export.arxiv.org/pdf/2208.08134
\begin{frontmatter} \begin{keywords} Erasmus+; early career researchers; outreach. \end{keywords} \end{frontmatter} \section{Introduction} Many will be familiar with the Erasmus+ programme of the European Commission through its scheme to support students spending a semester/year abroad during their undergraduate degrees (generally as part of its Key Action 1). However, Key Action 2 of the Erasmus+ programme provides funding for multinational strategic partnerships between institutions with the aim of promoting educational, professional and personal development. These strategic partnerships are applied for and managed on the national level, with, for example, the Czech national agency having budgets of approximately 1.5~M\euro{} and 3~M\euro{} in the 2017 and 2020 calls, respectively. \href{http://erasmus.asu.cas.cz/erasmus2017/index.html}{``Per Aspera Ad Astra Simul''} (2017-2020, 2017-1-CZ01-KA203-035562) and its (currently active) successor \href{http://erasmus.asu.cas.cz/index.html}{``European Collaborating Astronomers Project: Espa\~na-Czechia-Slovakia''} (2020-2023, 2020-1-CZ01-KA203-078200), the subjects of this article, are excellent examples of how this scheme can support astronomy education and the professional development of early career researchers (ECRs). \section{Background} \label{sec:background} The Erasmus+ programme is designed to support education, training and youth through international cooperation. Astronomy is a highly competitive field, and the development of ECRs relies heavily on mastery of the most modern techniques (be they observational or theoretical) and on international collaboration. Therefore, it is crucial for young astronomers to acquire as much wide-ranging expertise and international experience as possible during their studies. Ultimately, this makes ECRs better scientists with improved career prospects. 
Although Spain and the Czech Republic joined the European Southern Observatory (ESO -- Europe's premier ground-based astronomy consortium) almost simultaneously\footnote{At the time of publication Slovakia is not an ESO member state.} (late 2006/early 2007), the Spanish astronomical community is objectively larger and more developed \citep{barcons07,palous07}. Spain has contributed more staff to the ESO observatories (indeed, the current Director General is Spanish), won more industrial ESO contracts, and been awarded more observing time on ESO facilities. Spain is also home to several major astronomical facilities, including the world's largest optical-infrared telescope -- the Gran Telescopio Canarias (GTC). The Czech and Slovak astronomical communities are smaller, but with a rich history, particularly in the fields of theoretical astrophysics, variable stars and minor solar system bodies. More recently, both communities have expanded significantly in the theoretical and observational study of exoplanets. Several 1--2m class telescopes have been built in Czechia and Slovakia, but the continental weather impacts significantly upon their usage. The Erasmus+ strategic partnerships ``Per Aspera Ad Astra Simul'' and ``European Collaborating Astronomers Project: Espa\~na-Czechia-Slovakia'' were conceived as mutually beneficial agreements between institutes in Spain, the Czech Republic and Slovakia. The significant Spanish expertise in large observing facilities offers a clear opportunity for learning and growth for ECRs from the Czech and Slovak partners. Similarly, Spanish ECRs would greatly benefit from extended research stays at the Czech and Slovak partners, developing long-lasting collaborations and gaining valuable international experience. Furthermore, the Erasmus+ partnerships also foster collaboration among senior staff, ensuring continued benefit to further generations of ECRs beyond the duration of the project. 
\section{Partners and their roles} \label{sec:partners} The partners of the aforementioned Erasmus+ strategic partnerships are listed in Table \ref{tab:partners}. The lead institute of the project is the Astronomical Institute of the Academy of Sciences of the Czech Republic (AI ASCR), the largest astronomical institute in the Czech Republic. The Department of Theoretical Physics and Astrophysics of Masaryk University and the Astronomical Institute of Charles University (AI CU, which joined in 2020) are the other Czech partners in the project and comprise two of the longest-established and most important astronomical institutes in the Czech Republic. In Slovakia, the Astronomical Institute of the Slovak Academy of Sciences (AI SAS) and Comenius University Bratislava are both partners. Spain is represented in the partnership by the Instituto de Astrof\'isica de Canarias (IAC), the largest astronomical institute in the country and the operator of the Teide and Roque de los Muchachos Observatories, the latter of which is home to the GTC. Indeed, GranTeCan (the public company responsible for the GTC's design, construction, and continued operation) was an associate partner from 2017--2020. As lead institute, AI ASCR has been responsible for the administration and coordination of the project and the associated funds. AI SAS and Masaryk University organised summer schools for ECRs in 2019 and 2020, respectively. Further summer schools, organised by the IAC and Comenius University Bratislava, are planned for the coming years. All partners have participated in mobilities of ECRs and more senior researchers (see Section \ref{sec:mobilities}), and all have been involved in local outreach and educational activities (see Section \ref{sec:outreach}). 
Similarly, senior researchers from all partner institutions contributed to a book of thirteen detailed reviews, designed to be useful graduate-level introductions to topics ranging from meteors and space debris through to the formation of galaxies and the standard cosmological model, which was published in 2020 \citep{kabath20}. \begin{table*}[bt!] \caption{List of partner institutions}\label{tab:partners} \begin{tabularx}{\linewidth}{l l L} \toprule {Institution} & {Country} & {Notes}\\ \midrule Astronomical Institute of the Academy of Sciences of the Czech Republic (AI ASCR) & Cz & Lead institute\\ Instituto de Astrof\'isica de Canarias (IAC) & Es & Summer school organiser 2022\\ Comenius University Bratislava & Sk & Summer school organiser 2023\\ Masaryk University & Cz & Summer school organiser 2020\\ Astronomical Institute of the Slovak Academy of Sciences (AI SAS) & Sk & Summer school organiser 2019\\ Astronomical Institute of Charles University (AI CU) & Cz & Partner since 2020\\ GranTeCan & Es & Associate partner until 2020\\ \bottomrule \end{tabularx} \end{table*} \section{Mobilities} \label{sec:mobilities} The strategic partnerships offered significant financial support for mobilities of researchers between the partner institutions. These comprised short-term ($\sim$a few weeks) exchanges of more senior researchers, designed to foment collaborations between the institutions which can then be used to achieve the objectives of the Erasmus+ programme\footnote{For further details of these objectives, please see \href{https://erasmus-plus.ec.europa.eu/programme-guide/part-a/priorities-of-the-erasmus-programme/objectives-features}{What are the objectives of the Erasmus+ programme?}}, as well as longer-term (2--6 month) exchanges of ECRs with the aim of aiding their personal and professional development. 
At the end of the stay, the researchers were encouraged to summarise their activities in the form of a blog post on the project's \href{http://erasmus.asu.cas.cz/erasmus2017/blog.html}{webpages}. In total, more than 30 international exchanges have already been completed (in spite of the obvious problems posed by the COVID-19 pandemic striking part way through the execution of the first strategic partnership), with further exchanges planned for 2022 and 2023. In many cases, the benefits of these exchanges have been tangible and obvious \citep[for example, resulting in peer-reviewed publications in high impact journals;][]{jones19,gonzalez20,paunzen21}, while in other cases the benefits are more ``soft'' -- leading to long-term collaborations, personal and professional development, improved career prospects, etc. \subsection{Gran Telescopio Canarias} \label{sec:tereza} While the majority of the ECR exchanges were conceived as collaborations on blue-skies astronomy research under the supervision of a senior researcher at the receiving institution, a special case was the long-term mobility of one ECR to GranTeCan. During their stay, the PhD student combined continued research towards their doctoral thesis with support astronomer activities at the world's largest optical-infrared telescope -- the 10.4-m Gran Telescopio Canarias (GTC). This offered the student a unique opportunity to obtain in-depth experience of the tasks involved in the day-to-day operations of a world-class astronomical observatory -- experience which has undoubtedly been extremely valuable in their career development, with the student going on to take up highly competitive positions at the European Space Agency (ESA) and European Southern Observatory (ESO) following the completion of their mobility and thesis defense. 
\section{Summer schools} \label{sec:schools} In June 2019, the summer school \href{https://opticon-schools.nbi.ku.dk/other-schools/from-proposals-to-publication/}{``Observational astrophysics: from proposals to publication''} was organised in Tatransk\'a Lomnica (Slovakia) under the auspices of the Per Aspera Ad Astra Simul grant and in collaboration with OPTICON. The school comprised two parts: hands-on projects with archival data and lectures/activities to teach the participants about observing time proposal evaluation procedures. The hands-on projects were undertaken in groups of approximately four students (with a total of approximately 40 students in attendance) under the supervision of an experienced tutor. The projects themselves covered topics ranging from near-Earth asteroids \citep{NEAs} through to binary quasars \citep{quasars} all based on publicly available archive data. Upon completion of these projects, the participating students presented their results in the form of a mini-conference and later reported their analyses and results in papers published in the Contributions of the Astronomical Observatory Skalnat\'e Pleso journal \citep{summerschool}. A second summer school, \href{https://gate.physics.muni.cz/}{``GAIA and TESS: Tools for understanding the Local Universe''}, was organised in 2020, originally to be hosted in Brno (Czech Republic) but ultimately held remotely due to the COVID-19 pandemic (with 17 students attending via video conference). The school featured talks on the capabilities and applications of the Gaia and TESS missions by highly experienced experts, as well as hands-on sessions and research projects making use of the public data products of these missions \citep{2021CoSka..51...41S}. 
A third school, \href{https://iacerasmus.github.io/ERASMUS2022/}{``Eclipsing Binaries and Asteroseismology: Precise fundamental stellar parameters in the golden age of time-domain astronomy''}, will be hosted on the Spanish island of La Palma in September 2022. This school will be run in a hybrid format with 15 in-person students and up to 200 attending via video conference. A final school is planned to be held in Bratislava (Slovakia) in 2023. \section{Outreach} \label{sec:outreach} Thus far, we have discussed the activities of the strategic partnerships targeted towards the education and professional development of ECRs. However, the project has always had a significant component geared towards the education of a younger and/or wider audience, principally in the form of activities in local schools and public talks. We stress the importance of early education and thus target our efforts towards working together with local schools and preschool classes. In Ond\v{r}ejov, activities were focused on children aged 5--6 years\footnote{Similar activities for older children were planned, but have been significantly impacted by the COVID-19 pandemic.}, who are preparing for school, which begins at the age of 6 in the Czech Republic. We invite the preschoolers to the observatory and to the 2-m telescope dome, where they can see the telescope. In the dome, we present a short talk about a selected topic, ranging from the solar system to the life of stars, which the children later build upon in class with their teachers. After the visit, the children prepare artwork or a similar project inspired by the selected topic, which is then presented at a small gathering to which their parents are invited. These activities not only provide the children with their first experience of space and science, but also help them to develop creative and presentation skills. 
In Spain, activities were organised for local 4th grade students (ages 9--10) which focused on the importance of astronomy in the Canary Islands throughout history, from the ancient aborigines through to the present day. These included hands-on workshops on the solar system and the timescales of the Universe, offering many students their first contact with a professional astronomer. Activities for high school students were also organised, focusing on different aspects of technological development related to astronomy and on a general view of the Universe through the different ranges of the electromagnetic spectrum. Similarly, members of our team contributed to the long-running ``Nuestros Alumnos en el Roque de Los Muchachos'' programme, giving guided tours of the Roque de Los Muchachos Observatory (including going inside the GTC dome) to local high school students in the 4th grade of compulsory secondary education (4$^o$ ESO, age 15). For the general public, several popular lectures were delivered by members of the team in planetariums and museums across the partner countries. Talks for citizen scientists and amateur astronomers were also organised, in particular outlining the potential for observations of stellar occultations by asteroids. Similarly, a \href{https://www.youtube.com/channel/UCXBLE1tzrL2mhY3Kb0ELieQ/videos}{YouTube channel} was created for the project, hosting educational videos on the history of astronomy, astronomical techniques and principles, and interviews. Furthermore, the partners at Comenius University in Bratislava contributed to their pre-existing \href{https://www.youtube.com/playlist?list=PLqiGU4u5LkCF2YYU450gssOPPhE-4jOfI}{YouTube channel} as part of the project. These open talks and YouTube videos serve not only to educate but also to strengthen the connection between the public and the scientists who are at least partially supported by public funding. 
\section{Conclusions} \label{sec:conclusions} Key Action 2 of the European Union's Erasmus+ programme offers substantial grant funding for international collaboration on educational projects. The (on-going) outputs of two such grants, awarded to a consortium of Czech, Slovak and Spanish research institutes and universities, have been briefly presented here. These grants have supported educational activities ranging from outreach events for young children through to summer schools for graduate students, and from public lectures and YouTube videos through to financial support for extended international research visits to support the professional development of ECRs. The impact of the projects, particularly for the professional development of ECRs, has been clear. A number of summer school participants have now completed their Masters or PhDs (stating that the skills acquired as part of the schools were useful for their research), while many of the ECRs who had research trips funded by the projects have since gone on to obtain prestigious fellowships (e.g.\ at ESA) or permanent research/teaching positions (e.g.\ through the Mexican ``C\'atedras CONACYT para j\'ovenes investigadores'' scheme). The work undertaken as part of ``Per Aspera Ad Astra Simul'' and ``European Collaborating Astronomers Project: Espa\~na-Czechia-Slovakia'' clearly demonstrates the potential impact that Erasmus+ Key Action 2 funding can facilitate. We strongly encourage other astronomy educators to consider how they might make similarly good use of the available funding in future calls. 
\section{Declarations} \subsection{List of abbreviations} \begin{itemize} \item[AI ASCR] Astronomical Institute of the Academy of Sciences of the Czech Republic \item[AI CU] Astronomical Institute of Charles University \item[AI SAS] Astronomical Institute of the Slovak Academy of Sciences \item[ECR] Early career researcher \item[ESA] European Space Agency \item[ESO] European Southern Observatory \item[GTC] Gran Telescopio Canarias \end{itemize} \subsection{Ethical Approval} Not applicable. \subsection{Consent for publication} Not applicable. \subsection{Competing Interests} The author(s) declare that they have no competing interests. \subsection{Funding} This work was supported by the Erasmus+ programme of the European Union under grant numbers 2017-1-CZ01-KA203-035562 and 2020-1-CZ01-KA203-078200 (PI P.\ Kab\'ath). \subsection{Authors' Contributions} David Jones - Conceptualization, Funding acquisition, Project administration, Writing – original draft, Writing – review \& editing\\ Petr Kab\'ath - Conceptualization, Funding acquisition, Project administration, Writing – review \& editing\\ Jorge Garc\'ia-Rojas - Project administration, Writing – review \& editing\\ Josef Hanu\v s - Project administration, Writing – review \& editing\\ Marian Jakub\'ik - Project administration, Writing – review \& editing\\ Jan Jan\'ik - Project administration, Writing – review \& editing\\ Roman Nagy - Project administration, Writing – review \& editing\\ Juraj T\'oth - Project administration, Writing – review \& editing \section{Acknowledgements} We would like to acknowledge the role of the researchers at the participating institutions without whom these partnerships would not have been a success -- the ECRs for their eagerness to participate in mobilities and schools, and the senior researchers for their willingness to host ECR mobilities and to contribute to \citet{kabath20}. \bibliography{paper-refs}
Title: The distribution and morphologies of Fornax Cluster dwarf galaxies suggest they lack dark matter
Abstract: Due to their low surface brightness, dwarf galaxies are particularly susceptible to tidal forces. The expected degree of disturbance depends on the assumed gravity law and whether they have a dominant dark halo. This makes dwarf galaxies useful for testing different gravity models. In this project, we use the Fornax Deep Survey (FDS) dwarf galaxy catalogue to compare the properties of dwarf galaxies in the Fornax Cluster with those predicted by the Lambda cold dark matter ($\Lambda$CDM) standard model of cosmology and Milgromian dynamics (MOND). We construct a test particle simulation of the Fornax system. We then use the MCMC method to fit this to the FDS distribution of tidal susceptibility $\eta$ (half-mass radius divided by theoretical tidal radius), the fraction of dwarfs that visually appear disturbed as a function of $\eta$, and the distribution of projected separation from the cluster centre. This allows us to constrain the $\eta$ value at which dwarfs should get destroyed by tides. Accounting for an $r'$-band surface brightness limit of 27.8 magnitudes per square arcsec, the required stability threshold is $\eta_{\textrm{destr}} = 0.25^{+0.07}_{-0.03}$ in $\Lambda$CDM and $ 1.88^{+0.85}_{-0.53}$ in MOND. The $\Lambda$CDM value is in tension with previous $\textit{N}$-body dwarf galaxy simulations, which indicate that $\eta_{\textrm{destr}} \approx 1$. Our MOND $\textit{N}$-body simulations indicate that $\eta_{\textrm{destr}} = 1.70 \pm 0.30$, which agrees well with our MCMC analysis of the FDS. We therefore conclude that the observed deformations of dwarf galaxies in the Fornax Cluster and the lack of low surface brightness dwarfs towards its centre are incompatible with $\Lambda$CDM expectations but well consistent with MOND.
https://export.arxiv.org/pdf/2208.02265
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} gravitation -- dark matter -- galaxies: clusters: individual: Fornax -- galaxies: dwarf -- galaxies: interactions -- galaxies: statistics \end{keywords} \section{Introduction} \label{Introduction} Dwarf galaxies are the smallest and most common type of galaxy. They are characterized by their low mass ($M < 10^9 \, M_{\odot}$) and low metallicity. Most dwarfs are found in galaxy clusters or near a larger galaxy, making them potentially susceptible to the gravitational effect of these larger structures. The currently standard Lambda-cold dark matter ($\Lambda$CDM) cosmological model \citep{Efstathiou_1990, Ostriker_1995} provides two different scenarios by which dwarf galaxies can form \citep[the Dual Dwarf Galaxy Theorem;][]{Kroupa_2012}: \begin{enumerate} \item From the collapse of dark matter particles into haloes, which then accrete baryonic matter into their potential wells \citep{White_1978}. Such dwarfs are known as `primordial dwarf galaxies' and are expected to be dark matter-dominated; and \item From the collapse of overdense regions in tidal tails generated by an interaction between larger, gas-rich galaxies. These so-called `tidal dwarf galaxies' (TDGs) must be free of dark matter as the velocity dispersion of the dark matter particles surrounding the host galaxy is too high to allow for their efficient capture by the shallow potential wells of substructures in the tidal tail \citep{Barnes_1992, Wetzstein_2007}. In recent years, cosmological $\Lambda$CDM simulations have advanced to the point where they can resolve TDGs \citep{Ploeckinger_2018, Haslbauer_2019_TDG}. \end{enumerate} Dwarf galaxies can also be classified according to their morphology into early and late types depending on whether they have star-forming regions, which are present only for late-type dwarfs. 
This category includes blue compact dwarfs and dwarf irregular galaxies like the Magellanic Clouds, while early-type dwarfs include dwarf elliptical (dE) and dwarf spheroidal (dSph) galaxies, with dSphs generally having a lower stellar mass ($M_{\star}$). The lowest $M_{\star}$ dwarfs tend to have velocity dispersions ($\sigma$) which are too high if one assumes virial equilibrium, with $\sigma$ sometimes even exceeding the escape velocity \citep{Aaronson_1983, Grebel_2001}. Inferring this discrepancy relies on the validity of General Relativity and on our ability to detect nearly all the matter. $\Lambda$CDM is a cosmological model based on General Relativity in which the addition of the dark matter component was motivated by the mismatch between the observed baryonic mass and the mass calculated dynamically from the observed $\sigma$ assuming the virial theorem \citep{Zwicky_1933}. Such acceleration discrepancies are also apparent in the gravity between the Milky Way (MW) and Andromeda \citep[M31;][]{Kahn_Woltjer_1959} and in the outer rotation curves of galaxies \citep[e.g.,][]{Babcock_1939, Rubin_1970, Rogstad_1972, Roberts_1975, Bosma_1978, Bosma_1981}, as reviewed in \citet{Faber_1979}. Therefore, the natural $\Lambda$CDM explanation for dSphs having such high $\sigma$ is to assume that most of their mass is in the form of dark matter, in which case they must be primordial dwarfs. $\Lambda$CDM predicts that primordial dwarfs should be distributed nearly isotropically around galaxies \citep{Moore_1999, Gao_2004}. However, the dwarf satellite galaxies of the MW, M31, and Centaurus A preferentially align in flattened planes \citep{Lynden_Bell_1976, Ibata_2013, Tully_2015_Cen_A, Muller_2018}. This is in significant tension with the $\Lambda$CDM model \citep{Kroupa_2005}. 
While it was later shown that the distribution of dark matter subhaloes is not supposed to be exactly isotropic due to the preferential accretion of subhaloes along cosmic filaments and the intrinsic triaxiality of dark matter haloes \citep{Libeskind_2005, Zentner_2005}, the mild expected flattening is not sufficient to explain the strong correlation in position and velocity space observed in nearby satellite systems \citep{Ibata_2014, Pawlowski_2014, Pawlowski_2020, Pawlowski_Sohn_2021, Muller_2021}. The satellite plane problem is reviewed in \citet{Pawlowski_2021}, which also considers tentative evidence for more satellite planes beyond the three mentioned above. The Local Group (LG) satellite planes are each in $3.55\sigma$ tension with $\Lambda$CDM \citep[table 3 of][and references therein]{Banik_2021_backsplash}, while the satellite plane around Centaurus A is only 0.2\% ($3.09\sigma$) likely to arise in this paradigm \citep{Muller_2021}. These are the only three host galaxies near enough for us to reliably know the phase-space distribution of their satellites. We can approximately combine their low likelihoods in $\Lambda$CDM using Gaussian statistics. Since we effectively have $\chi^2 = 3.55^2 + 3.55^2 + 3.09^2 = 34.75$, the combined tension can be estimated as the likelihood of the $\chi^2$ statistic exceeding this value for three degrees of freedom. This suggests that the LG and Centaurus A satellite planes combined represent a tension at the $1.40\times 10^{-7}$ likelihood level ($5.27\sigma$). A new interpretation is thus needed to explain the origin of the observed satellite galaxy planes. Another less widely known problem is the distorted morphologies of MW satellites, which strongly imply that they have been affected by tidal forces \citep{Kleyna_1998, Walcher_2003, Sand_2012}. 
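The combination of tensions quoted above amounts to a short calculation. The following stdlib-only Python sketch (our own illustration, not from the cited papers) reproduces it using the closed-form survival function of the $\chi^2$ distribution with three degrees of freedom.

```python
import math

def chi2_sf_3dof(x):
    """Survival function P(chi^2_3 > x). For three degrees of freedom
    this has the closed form sf(x) = erfc(sqrt(x/2)) + sqrt(2x/pi) exp(-x/2)."""
    return math.erfc(math.sqrt(x / 2.0)) \
        + math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

def two_tailed_p(sigma):
    """Two-tailed Gaussian probability of a fluctuation beyond `sigma`."""
    return math.erfc(sigma / math.sqrt(2.0))

# Tensions of the two LG satellite planes and the Centaurus A plane
chi2 = 3.55**2 + 3.55**2 + 3.09**2   # = 34.75
p = chi2_sf_3dof(chi2)               # roughly 1.4e-7, i.e. about 5.27 sigma
```

The resulting probability agrees with the two-tailed Gaussian probability of a $5.27\sigma$ fluctuation to within a few per cent.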
Because the inner region of a satellite galaxy can hardly be affected by tides if it is protected by a dominant dark matter halo \citep{Kazantzidis_2004}, $\la 10\%$ of the MW satellites are expected to be distorted in this paradigm \citep{Kroupa_2012}. However, \citet{McGaugh_Wolf_2010} found that the majority of the MW satellites present signs of being disturbed, both in their elevated $\sigma$ and in their observed ellipticity. More recently, \citet{Hammer_2020} pointed out that the high $\sigma$ of dSphs surrounding the MW and their proximity to perigalacticon makes it extremely unlikely for them to be dark matter dominated. An alternative explanation for the planar distribution of the satellite galaxies is that they are of tidal origin. This is because TDGs are expected to be phase-space correlated \citep{Pawlowski_2011, Kroupa_2012, Pawlowski_2018, Haslbauer_2019_TDG}. But if the observed satellites are of tidal origin, they would be dark matter free, in which case their high $\sigma$ for their low $M_{\star}$ should be explained in a different way. \citet{Kroupa_1997} proposed that due to close encounters of the TDGs with their parent galaxy, the TDGs are highly perturbed. As a result, they should be significantly anisotropic both in terms of their internal structure and their velocity dispersion tensor. More generally, they should not be in dynamical equilibrium, making it incorrect to directly apply the virial theorem to infer the mass from $\sigma$ as this could cause a significant overestimate. However, purely baryonic dwarfs would be very fragile and easily destroyed, making it unlikely that so many of them exist in the LG right now \citep{Haslbauer_2019_DF2, Haslbauer_2019_TDG}. 
Even if this scenario can explain the high $\sigma$ of all observed dSphs, $\Lambda$CDM would still struggle to explain why almost all observed dwarf satellites of the MW, M31, and Centaurus A are of tidal origin $-$ the quenching mechanisms invoked to solve the missing substructure problem are not expected to be so destructive as to get rid of all observable primordial dwarfs \citep{Kim_2018, Read_2019_missing_satellites, Webb_2020}. Given these difficulties, it is important to note that the properties of both primordial and tidal dSphs can be explained without resorting to the assumption of a surrounding dark matter halo. This entails discarding the $\Lambda$CDM cosmological model and using instead an alternative framework, the currently leading contender being Milgromian dynamics \citep[MOND;][]{Milgrom_1983}. MOND proposes that the deviations from Newtonian behaviour in the rotation curves of galaxies should be attributed to a departure from Newtonian gravity in the regime of weak gravitational fields \citep[$g \la a_{_0} = 1.2 \times 10^{-10}$~m/s$^2 = 3.9$~pc/Myr$^2$;][]{Begeman_1991, Gentile_2011, McGaugh_Lelli_2016}. The gravity boost that dwarf galaxies experience in this regime would explain their high $\sigma$ \citep{McGaugh_Wolf_2010, McGaugh_2013a, McGaugh_2013b, McGaugh_2021}. It would also make the dwarfs less vulnerable to tides and stellar feedback than Newtonian TDGs, which are expected to be extremely fragile. Moreover, MOND offers an elegant scenario for the origin of the LG satellite planes by means of a past flyby encounter between M31 and the MW $9 \pm 2$~Gyr ago, which is required in MOND \citep{Zhao_2013} and seems to reproduce important aspects of their satellite planes \citep{Banik_Ryan_2018, Bilek_2018, Bilek_2021, Banik_2022_satellite_plane}. Therefore, we will focus mainly on $\Lambda$CDM and MOND in this contribution. 
The planes of satellites problem is one of the most well-known challenges to $\Lambda$CDM on galaxy scales \citep{Kroupa_2005, Pawlowski_2018, Pawlowski_2021_Nature_Astronomy, Pawlowski_2021}. It provides a compelling motivation to further investigate dwarf galaxies and question their very nature. Fortunately, the properties of dwarf galaxies make them very suitable for testing different gravity theories. Due to their low mass and especially their low surface brightness, dwarf galaxies can be very susceptible to the effects of gravitational tides. Whether we assume the $\Lambda$CDM or the MOND model to be valid significantly affects the expected influence of tides on dwarfs. These expectations can then be compared with observations to try to distinguish the models. Since MOND is a non-linear theory of gravity, the internal dynamics of an object can be affected by the presence of an external field \citep{Bekenstein_1984}. This is because the enhancement to the self-gravity depends on the total strength of $g$, including any external sources. In a dwarf galaxy that experiences a strong gravitational field (usually from a nearby massive galaxy), the MOND boost to the self-gravity will be limited by the dominant external field from the larger central galaxy. This effect becomes stronger as the dwarf gets closer to the central galaxy, to the point that the dwarf can become almost fully Newtonian. 
Because of this, dwarfs are expected to be more vulnerable to tides in MOND than in $\Lambda$CDM, where they would be shielded by their dark matter halo throughout their whole trajectory \citep{Brada_2000_tides}.\footnote{For an isolated dwarf, the dark matter halo in $\Lambda$CDM and the correction to Newtonian gravity in MOND both provide a similar enhancement to the self-gravity.} In this project, we use the Fornax Deep Survey (FDS) dwarf galaxy catalogue \citep{Venhola_2018, Venhola_2019} to compare the observed morphological properties of Fornax Cluster dwarf galaxies with the properties predicted by $\Lambda$CDM and MOND. Our aim is to find out if the observed level of disturbance in the Fornax dwarfs is similar to that expected in $\Lambda$CDM or MOND, or if neither model works well. $\Lambda$CDM could provide too much protection against tides such that it under-predicts the observed level of disturbance in the Fornax dwarf population. Meanwhile, the lack of protective dark matter haloes around all dwarf galaxies and their reduced self-gravity due to the background cluster gravity could mean that in the MOND scenario, dwarfs are too fragile to survive in the harsh Fornax Cluster environment. Determining which of these scenarios is more likely would help to clarify the physics governing the formation and dynamics of galaxies, whose dominant source of gravity remains unknown. The layout of this paper is as follows: In Section~\ref{Fornax}, we describe the FDS dwarf galaxy catalogue and the selection criteria that we apply to it (Section~\ref{data_sel}). In Section~\ref{effects_gravi}, we explain the relevant types of gravitational interactions that dwarfs might experience in this cluster: disruption from cluster tides (Section~\ref{cluster_tides}) and galaxy-galaxy harassment (Section~\ref{harassment}). These sections consider only Newtonian gravity $-$ the generalization to MOND is presented in Section~\ref{MOND}. 
In Section~\ref{tidal_sus}, we provide the equations describing the susceptibility of dwarfs to tidal forces in the $\Lambda$CDM and MOND models, obtain the tidal susceptibility of the dwarfs in the FDS catalogue for each model (Section~\ref{tidal_sus_Fornax}), and show how this theoretical quantity is related to the distribution of the dwarfs (Section~\ref{surf_dens_dwarfs}) and whether their observed morphology appears disturbed or undisturbed (Section~\ref{comparison_disturbance}). In Section~\ref{test_mass}, we construct a test particle simulation of the orbits of Fornax dwarfs and, using the MCMC method, fit it to the real Fornax system using the FDS catalogue. In Section~\ref{Results}, we present the results obtained from our MCMC analysis and how they compare to the results of \textit{N}-body simulations, which we complement with our own \textit{N}-body simulations of a typical Fornax dwarf in MOND (Section~\ref{Nbody_sim}). We then discuss our results in Section~\ref{discussion} before concluding in Section~\ref{conclusions}. \section{The Fornax Deep Survey (FDS)} \label{Fornax} The Fornax Cluster is one of the nearest galaxy clusters \citep[$d_{\textrm{Fornax}} = 20.0 \pm 0.3$~Mpc;][]{Blakeslee_2009}. It is named after its sky position in the southern hemisphere constellation of Fornax. The cluster is structured into two main components: the main Fornax Cluster centred on NGC 1399, and an infalling subcluster (Fornax A) centred $3^\circ$ to the south-west in which NGC 1316 is the central galaxy \citep*{Drinkwater_2001}. The Fornax Cluster contains a significant number of dwarf galaxies with different luminosities, colours, shapes, sizes, and distances to the cluster centre, making it very valuable for studying the properties of dwarf galaxies. The FDS is the most recent survey of the Fornax Cluster. It includes the main Fornax Cluster and part of the Fornax A subcluster, with a total sky coverage of $26 \, \textrm{deg}^2$ \citep{Venhola_2018}. 
The FDS represents a significant improvement in resolution and image depth with respect to the previous spatially complete Fornax Cluster Catalogue \citep[FCC;][]{Ferguson_1989}. This has allowed the FDS to identify a large number of previously unknown faint galaxies, which can be useful to test the effects of the cluster environment on smaller, more vulnerable galaxies. The FDS reaches the 50\% completeness limit at an absolute (apparent) magnitude in the red band of $M_{r'} = -10.5$ ($m_{r'} = 21$), while the corresponding surface brightness limit is $\mu_{e,r'} = 26 \, \textrm{mag} \, \textrm{arcsec}^{-2}$. However, the FDS can still clearly detect some dwarf galaxies down to $M_{r'} = -9$ and $\mu_{e,r'} = 27.8 \, \textrm{mag} \, \textrm{arcsec}^{-2}$ \citep{Venhola_2018}. The FDS catalogue of dwarf galaxies \citep{Venhola_2017, Venhola_2018, Venhola_2019} includes 564 dwarf galaxies with $2 \times 10^5 < M_{\star}/M_{\odot} < 2 \times 10^9$, some in the main Fornax Cluster and others in the infalling subcluster. As in other galaxy clusters, dEs and dSphs are the most common types of dwarf galaxy that can be found in the Fornax Cluster. These are estimated to have an age of $t_{\textrm{Fornax}} = 10 \pm 1$~Gyr \citep{Rakos_2001}, where $t_{\textrm{Fornax}}$ is the age of the elliptical galaxies in Fornax, which we assume to be similar to the age of the dwarf galaxies. Because of the similarities in some of their morphological properties, the FDS classifies dE and dSph galaxies as one single type, dE. The FDS catalogue also provides information about other properties of the dwarfs. 
The ones which are relevant for this project are: $M_{\star}$, the effective radius, the right ascension and declination, the apparent surface brightness in the $r'$ band, the S{\'e}rsic index of the surface brightness profile \citep{Sersic_1963}, the morphological type, the nucleated flag indicating if the dwarf is nucleated or non-nucleated, and the tidal morphology \citep[undisturbed, possibly/mildly disturbed, very disturbed, or unclear;][]{Venhola_2022}. The effective radius, the S{\'e}rsic index, and the apparent brightness in the $r'$-band are obtained by fitting the data to a 2D S{\'e}rsic profile \citep{Venhola_2018} using the \textsc{galfit} software \citep{Peng_2002}. $M_{\star}$ is obtained from the empirical relation between the $g'-i'$ colour and mass-to-light ($M/L$) ratio (\citealt{Taylor_2011}; for further details, see \citealt{Venhola_2019}). The morphological classifications such as the nucleated flags, the Hubble type \citep{Venhola_2018, Venhola_2019}, and the tidal morphologies are done visually. The tidal morphology is classified in \citet{Venhola_2022} based on the following criteria: \begin{enumerate} \item{Undisturbed}: Dwarf galaxies that do not present irregularities, distortions to their shape, or tidal tails; \item{Possibly/mildly disturbed}: Hints of irregularities are present in the outskirts of the dwarf galaxy; \item{Very disturbed}: Dwarf galaxies with tidal tails and/or very clear distortion in the shape; and \item{Unclear}: Nearby bright objects or data artefacts make the classification difficult. \end{enumerate} Fig.~\ref{fig:tid_morph_class} shows some illustrative examples of dwarfs in these categories. \subsection{Data selection} \label{data_sel} From the 564 FDS dwarfs, we remove those which are classified as late-type as there is a high chance that these are not physically in the cluster but instead represent line of sight contamination \citep{Venhola_2019}. 
We also remove dwarfs which have an `unclear' tidal morphology because they are not useful for the analysis. This leaves us with 456 dwarfs. We then obtain the angular distance between each dwarf and the centre of the Fornax Cluster based on the right ascension (RA) and declination (Dec) of the dwarf and that of the Fornax Cluster, whose sky coordinates are $\textrm{RA}_{\textrm{centre}} = 54.6^\circ$, $\textrm{Dec}_{\textrm{centre}} = -35.5^\circ$ \citep[table D1 of][]{Watson_2009}. \begin{eqnarray} \!\!\!\!\!\! \Delta \textrm{RA} &\equiv& \textrm{RA} - \textrm{RA}_{\textrm{centre}} \, , \\ \!\!\!\!\!\! \Delta \textrm{Dec} &\equiv& \textrm{Dec} - \textrm{Dec}_{\textrm{centre}} \, , \\ \!\!\!\!\!\! \Delta' \textrm{RA} &=& \Delta \textrm{RA} \cdot \cos \left( \frac{\textrm{Dec} + \textrm{Dec}_{\textrm{centre}}}{2} \right), \\ \!\!\!\!\!\! \textrm{Angular distance} &=& \sqrt{\left( \Delta' \textrm{RA} \right)^2 + \left( \Delta \textrm{Dec} \right)^2} \, . \end{eqnarray} Expressing this angular distance in radians and multiplying it by the 20~Mpc distance to Fornax \citep{Blakeslee_2009} then gives the dwarf's sky-projected distance $R_{\textrm{sky}}$ from the centre of the Fornax Cluster. \begin{eqnarray} R_{\textrm{sky}} ~=~ d_{\textrm{Fornax}} \times \left( \textrm{Angular distance} \right) \, . \end{eqnarray} We remove dwarfs with $R_{\textrm{sky}} > 800$~kpc as dwarfs further out mostly belong to the subcluster Fornax A, so including these would contaminate our sample of dwarfs belonging to the main Fornax Cluster \citep[see fig. 4 of][]{Venhola_2019}. This leaves us with 353 dwarf galaxies. \section{Effects of gravitational interactions on dwarfs} \label{effects_gravi} Before discussing the gravitational perturbations experienced by Fornax Cluster dwarf galaxies, we first discuss why non-gravitational forces are not expected to perturb Fornax Cluster dwarfs today. Old dwarf galaxies in a cluster environment are expected to be gas-poor. 
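The projected-distance computation of Section~\ref{data_sel} can be summarized in a few lines of Python. This is a stdlib-only illustration of the flat-sky approximation above; the function name and constants are our own.

```python
import math

D_FORNAX_KPC = 20000.0               # 20 Mpc distance to the Fornax Cluster
RA_CENTRE, DEC_CENTRE = 54.6, -35.5  # cluster centre (degrees)

def sky_distance_kpc(ra, dec, ra_c=RA_CENTRE, dec_c=DEC_CENTRE,
                     d_kpc=D_FORNAX_KPC):
    """Sky-projected distance R_sky (kpc) of a dwarf at (ra, dec) in degrees
    from the cluster centre, using the flat-sky approximation in the text."""
    d_ra = ra - ra_c
    d_dec = dec - dec_c
    # Compress the RA offset by cos of the mean declination
    d_ra_corr = d_ra * math.cos(math.radians((dec + dec_c) / 2.0))
    ang_deg = math.hypot(d_ra_corr, d_dec)
    return d_kpc * math.radians(ang_deg)
```

For example, a dwarf offset by $1^\circ$ in declination lies at $R_{\textrm{sky}} \approx 349$~kpc, well within the 800~kpc selection limit.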
Most dwarfs in the FDS catalogue are classified as early-type galaxies, implying that they are dominated by old stellar populations and are not currently forming new stars. The scarcity of star-forming dwarfs in the Fornax Cluster is consistent with the fact that they are likely to be gas-poor. One important reason for this is ram pressure stripping \citep{Gunn_1972}. This takes place when a galaxy containing a large amount of cold gas moves through a galaxy cluster full of hot gas. The temperature difference and motion between the two gas components generate a pressure gradient that strips the cold gas from the galaxy. \citet{Venhola_2019_error} estimated in the left panel of their fig.~21 that ram pressure stripping of Fornax Cluster dwarfs at the low masses relevant to our analysis should have been quite efficient $-$ the vast majority of the dwarfs in our sample have $M_{\star} < 10^8 \, M_{\odot}$ (Section~\ref{data_sel}). The fact that the Fornax dwarfs are gas-poor has been observationally confirmed by \citet{Zabel_2019}, who studied the molecular gas in the Fornax Cluster and showed that its dwarfs are gas deficient. \citet{Loni_2021} showed the same for neutral hydrogen in FDS dwarfs with $M_{\star}$ down to a few times $10^7 \, M_\odot$, below which theoretical arguments indicate that the gas reservoir should have been ram pressure stripped by now \citep[see section 7.3.1 of][]{Venhola_2019}. Moreover, the colours of the FDS dwarfs also suggest a lack of recent star formation (see their fig.~18). Ongoing gas loss is thus very unlikely to explain the observed disturbances to the structures of some Fornax Cluster dwarfs. We therefore conclude that their internal structure is to a good approximation only affected by gravity from surrounding structures. 
The main types of gravitational interaction that can disturb and transform the structure of a dwarf galaxy in the Fornax Cluster are tidal disruption from the cluster’s tidal field and galaxy-galaxy harassment due to encounters with the cluster's massive elliptical galaxies \citep[see section~7 of][]{Venhola_2019}. In the following, we discuss these processes in the context of Newtonian gravity before deriving their generalization to MOND (Section~\ref{MOND}). \subsection{Disruption from cluster tides} \label{cluster_tides} In this type of interaction, the structure of a dwarf with mass $M_{\textrm{dwarf}}$ is affected by gravitational tides coming from the overall cluster potential, i.e., from the difference in the cluster gravity across the finite size of the dwarf. We quantify the influence of cluster tides on a dwarf using the concept of its tidal radius $r_{\textrm{tid}}$. This is defined such that if $r_{\textrm{tid}}$ were the dwarf's actual size, then the tidal force of the cluster and the self-gravity of the dwarf would have the same strength. We can intuitively see that \begin{eqnarray} \frac{G M_{\textrm{dwarf}}}{r^2_{\textrm{tid}}} ~&\approx&~ r_{\textrm{tid}} \overbrace{\left( \frac{\Delta g_c}{\Delta R} \right)}^{\textrm{Tidal stress}} \, , \\ \Rightarrow r_{\textrm{tid}} ~&\approx&~ \left( \frac{G M_{\textrm{dwarf}}}{\Delta g_c / \Delta R} \right)^{1/3} \, , \label{approx_rtid} \end{eqnarray} where $G$ is the Newtonian constant of gravitation and $\Delta g_c/\Delta R$ is the tidal stress from the cluster potential, with $g_c$ and $R$ being the cluster gravity and the 3D distance to the cluster centre, respectively. Since we want to find out the maximum degree of disturbance that a dwarf can experience due to the cluster potential, we obtain $g_c$ and its gradient when the dwarf is at pericentre ($R = R_{\textrm{per}}$). 
In order to obtain $R_{\textrm{per}}$ for each dwarf from its projected distance in the FDS, we use $R_{\textrm{per}} = 0.29 \, R$ (see Appendix~\ref{Rper}), with $R$ obtained by deprojecting $R_{\textrm{sky}}$ using the method described in Appendix~\ref{deproj}. As in \citet{Venhola_2019}, we assume that the galaxy number density and cluster potential have remained constant over time. This approximation is reasonable because the orbital periods of galaxies in the Fornax Cluster are typically much shorter than a Hubble time: The estimated 1D velocity dispersion of 370~km/s \citep{Drinkwater_2001} combined with a maximum size of 800~kpc (Section~\ref{data_sel}) implies a crossing time of only 1.2~Gyr. We assign the cluster a Newtonian dynamical mass profile given by \begin{eqnarray} M_c \left( < \theta_{\textrm{3D}} \right) ~=~ M_{\textrm{norm}} \left( \frac{\theta_{\textrm{3D}}}{\theta_{\textrm{norm}}}\right)^{\alpha} \, , \label{M_cluster} \end{eqnarray} where $\theta_{\textrm{3D}} \equiv R/d_{\textrm{Fornax}}$ is the 3D angular distance to the Fornax Cluster centre. The parameters are: $M_{\textrm{norm}} = 3 \times 10^{10}~M_{\odot}$, $\theta_{\textrm{norm}} = 10\arcsec$, and $\alpha = 1.1$. This radial mass dependency is obtained from fitting the above power-law to the mass profile derived in fig.~17b of \citet{Paolillo_2002}, which uses the X-ray surface brightness distribution of the central Fornax galaxy and its gas temperature profile to find the gas density distribution. The mass profile is then derived assuming hydrostatic equilibrium by applying the spherical Jeans equation. Note that the mass derived here is a Newtonian dynamical mass. A more model-independent way to describe the observations is in terms of the cluster gravity $g_c \equiv GM_c/R^2$. This method of obtaining $g_c$ relies on the well-understood physical process of thermal X-ray emission from hot gas. 
Its temperature and density profile require a particular radial run of $g_c$ regardless of the gravity law. Therefore, it is not relevant whether $g_c$ has been enhanced by a dark matter halo or by MOND (or indeed by some elements of both, as argued in Section~\ref{MOND}). Consequently, $g_c$ will be the same in the $\Lambda$CDM and MOND scenarios, as will the resulting tidal stress on each dwarf. This is not the case for $M_{\textrm{dwarf}}$. The FDS catalogue gives only $M_{\star}$ for each dwarf. This can be equated with $M_{\textrm{dwarf}}$ in MOND, but not in $\Lambda$CDM where each dwarf is expected to have a substantial dark halo of mass $M_{\textrm{halo}}$. We find this using the same abundance matching procedure as \citet{Venhola_2019}. We first find $M_{\textrm{halo}}$ from the relation between $M_{\star}$ and $M_{\textrm{halo}}$ given in equation 2 of \citet{Moster_2010}: \begin{eqnarray} \label{m/M} && \frac{M_{\star}}{M_{\textrm{halo}}} \\ && = 2 \left( \frac{M_{\star}}{M_{\textrm{halo}}} \right)_0 \left[ \left( \frac{M_{\textrm{halo}}}{M_1} \right)^{-\beta} + \left( \frac{M_{\textrm{halo}}}{M_1} \right)^{-\gamma} \right]^{-1} \, . \nonumber \end{eqnarray} Their table~1 clarifies that the parameters in this equation are: $\left( \frac{M_{\star}}{M_{\textrm{halo}}} \right)_0 = 0.0282$, $M_1 = 10^{11.884} ~M_{\odot}$, $\beta = 1.057$, and $\gamma = 0.556$. As the dark halo of each dwarf is not observable and remains hypothetical, we are only interested in whether tides are perturbing the dwarf's stellar component \citep[which they might not be even if its dark matter halo is being stripped; see][]{Smith_2016}. For this, the Shell Theorem indicates that we only need to consider the dark matter within the dwarf's optical radius. 
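Equation~\ref{m/M} gives $M_{\star}$ as a function of $M_{\textrm{halo}}$, so obtaining the halo mass for an observed stellar mass requires a numerical inversion. The following Python sketch is our own illustration; since the relation is monotonic, a log-space bisection suffices.

```python
import math

# Moster et al. (2010) parameters quoted in the text
RATIO_0 = 0.0282
M1 = 10 ** 11.884     # Msun
BETA = 1.057
GAMMA = 0.556

def stellar_mass(m_halo):
    """M_star(M_halo) from the double power-law abundance matching relation."""
    ratio = 2.0 * RATIO_0 / ((m_halo / M1) ** -BETA + (m_halo / M1) ** -GAMMA)
    return ratio * m_halo

def halo_mass(m_star, lo=1e6, hi=1e16):
    """Invert stellar_mass(m_halo) by bisection in log space."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if stellar_mass(mid) < m_star:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)
```

Only the part of this halo mass lying within the dwarf's optical radius is dynamically relevant to tides on its stars, as noted above.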
Following \citet{Venhola_2019}, we assume that this is only 4\% of the total halo mass $-$ \citet{Diaz_2016} found this fraction to be consistent with the dark matter masses within the optical radii of S\textsuperscript{4}G galaxies \citep{Sheth_2010}. Adding the halo contribution to $M_{\star}$, the total mass of the dwarf in $\Lambda$CDM for the purposes of our analysis is therefore: \begin{eqnarray} M_{\textrm{dwarf, } \Lambda\textrm{CDM}} ~=~ M_{\star} + 0.04 \, M_{\textrm{halo}} \, . \label{M_dwarf_rule} \end{eqnarray} In Section~\ref{newDMfrac}, we consider other possible choices for the fraction of the halo mass within the optical radius of a dwarf. Equation~\ref{approx_rtid} is only a very crude estimate for the tidal radius of a dwarf. While it should capture the essential physics, we expect a more careful treatment to yield an additional factor of order unity. Numerical simulations are required to capture the details of mass loss from a dwarf undergoing tidal disruption, which is expected to substantially distort its shape. To account for this, the $\Lambda$CDM expression for $r_{\textrm{tid}}$ in equation 1 of \citet{Baumgardt_2010} includes an extra factor of $2^{-1/3}$. Taking this into consideration, we adopt the following expression for $r_{\textrm{tid}}$ in $\Lambda$CDM: \begin{eqnarray} r_{\textrm{tid, } \Lambda\textrm{CDM}} ~=~ \left( \frac{G \, M_{\textrm{dwarf, } \Lambda\textrm{CDM}}}{2 \Delta g_c / \Delta R} \right)^{1/3} \, . \label{rtid_LCDM} \end{eqnarray} This is based on using their study to obtain the numerical pre-factor in Equation~\ref{approx_rtid} for circular orbits in a central potential with a flat rotation curve ($\alpha = 1$) $-$ other approaches are discussed below Equation~\ref{beta_definition}. Notice that $g_c$ itself does not directly affect the tidal radius: The cluster gravity only affects the dwarf through the tidal stress it creates on the dwarf. 
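To make Equation~\ref{rtid_LCDM} concrete, the following Python sketch evaluates it for the power-law cluster profile of Equation~\ref{M_cluster}, for which $\mathrm{d}g_c/\mathrm{d}R = \left( \alpha - 2 \right) g_c/R$. The example dwarf mass and pericentre distance are hypothetical.

```python
import math

G = 4.301e-6          # Newton's constant in kpc (km/s)^2 / Msun
D_FORNAX_KPC = 20000.0
ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)

M_NORM = 3e10         # Msun enclosed at theta_norm = 10 arcsec
R_NORM = D_FORNAX_KPC * 10.0 * ARCSEC_TO_RAD   # ~0.97 kpc
ALPHA = 1.1

def cluster_gravity(r_kpc):
    """g_c = G M_c(<R)/R^2 in (km/s)^2/kpc for the power-law mass profile."""
    m_c = M_NORM * (r_kpc / R_NORM) ** ALPHA
    return G * m_c / r_kpc**2

def tidal_stress(r_kpc):
    """|dg_c/dR|; for M_c proportional to R^alpha, dg_c/dR = (alpha - 2) g_c/R."""
    return abs((ALPHA - 2.0) * cluster_gravity(r_kpc) / r_kpc)

def r_tid_lcdm(m_dwarf_msun, r_per_kpc):
    """LCDM tidal radius (kpc), including the 2^(-1/3) factor."""
    return (G * m_dwarf_msun / (2.0 * tidal_stress(r_per_kpc))) ** (1.0 / 3.0)
```

For instance, a dwarf with total mass $5 \times 10^8 \, M_\odot$ at $R_{\textrm{per}} = 100$~kpc has $r_{\textrm{tid}} \approx 4$~kpc, comfortably larger than a typical half-light radius of $\sim 1$~kpc. As emphasized above, only the tidal stress $\Delta g_c/\Delta R$ enters the tidal radius, not $g_c$ itself.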
This is not so in the corresponding expression for MOND (Equation \ref{rtid_MOND}), which we derive in Section~\ref{MOND}. \subsection{Galaxy-galaxy harassment} \label{harassment} The morphology of the Fornax Cluster dwarf galaxies can also be disrupted by gravitational interactions with individual large galaxies in the cluster. This effect is called harassment \citep{Venhola_2019}. Assuming a high relative velocity between the dwarf galaxy and the larger galaxy, we can use the impulse approximation to estimate the impact of each encounter on the internal structure of the dwarf. We then need to combine the effects of many such interactions, each time adding the squares of the velocity perturbations as these would generally be in random directions, leading to a process resembling a diffusive random walk. Equivalently, we should add the energy gained by the dwarf from each encounter, leading to the concept of a heating rate $\dot{E}$ \citep[equation 8.52 of][]{Binney_Tremaine_2008}. The disruption time-scale $t_{d, \Lambda\textrm{CDM}}$ is the time-scale over which putting energy into the dwarf at the presently calculated $\dot{E}$ would cause it to become unbound given its present gravitational binding energy per unit dwarf mass of \begin{eqnarray} \lvert E \rvert ~=~ \frac{G M_{\textrm{dwarf, } \Lambda\textrm{CDM}}}{2 r_{h,\textrm{dwarf}}} \, , \label{Binding_energy} \end{eqnarray} where $r_{h,\textrm{dwarf}}$ is the half-mass radius of the dwarf. Since only the baryons are visible, we again restrict our attention to the baryonic component of each dwarf, so $r_{h,\textrm{dwarf}}$ refers to only its visible component and $M_{\textrm{dwarf, } \Lambda\textrm{CDM}}$ is again found using Equation~\ref{M_dwarf_rule}. 
Dividing the magnitude of the binding energy by the heating rate gives the disruption time-scale \citep[equation 8.54 of][]{Binney_Tremaine_2008}: \begin{eqnarray} t_{d, \Lambda\textrm{CDM}} \equiv \frac{\lvert E \rvert}{\dot{E}} = \frac{0.043}{W_p} \frac{\sqrt{2} \sigma M_{\textrm{dwarf,} \Lambda\textrm{CDM}} r_{h,p,{\Lambda\textrm{CDM}}}^2}{G M_{p, \Lambda\textrm{CDM}}^2 n_p r_{h,\textrm{dwarf}}^3} \, . \label{td_LCDM} \end{eqnarray} The `p' subscript denotes the massive galaxy (perturber), while `dwarf' refers to the dwarf galaxy that is being perturbed. $W_p$ is a factor accounting for the shape of the perturber galaxy's mass distribution. We choose $W_p = 1$ as an intermediate value between that of the Plummer and Hernquist models \citep[chapter 8.2 of][]{Binney_Tremaine_2008}. $n_p$ is the number density of perturbers, which \citet{Venhola_2019} estimated to be $25 \, \textrm{Mpc}^{-3}$ by counting 48 large galaxies inside the virial volume of the Fornax Cluster ($R_{\textrm{vir}} = 0.77$~Mpc). The cluster's 1D velocity dispersion is $\sigma = 370$~km/s \citep{Drinkwater_2001}, with the extra factor of $\sqrt{2}$ accounting for the fact that we need to consider the dwarf-perturber relative velocity. $M_{p, \Lambda\textrm{CDM}}$ and $r_{h,p,{\Lambda\textrm{CDM}}}$ are the perturber galaxy's mass and half-mass radius, respectively. Note that we use $r_h$ for the deprojected half-mass radius of the baryonic component. $r_h$ does not include the dark matter halo unless we explicitly say so and label it accordingly as $r_{h, \Lambda\textrm{CDM}}$. \citet{Venhola_2019} use $r_h$ for the radius containing half of the total mass including dark matter, so our notation is different in this respect. To obtain $r_{h,\textrm{dwarf}}$ from the projected effective radius $r_e$ containing half of the dwarf's total stellar mass, we use equation B3 of \citet{Wolf_2010}, though a good approximation is that $r_{h,\textrm{dwarf}} \approx \left( 4/3 \right) r_e$. 
Our adopted $M_{p,*} = 10^{10}~M_{\odot}$ is the median stellar mass of the large galaxies catalogued in table~C1 of \citet{Iodice_2019} and in the FCC. In the $\Lambda$CDM case, the contribution of the dark halo should be added to this mass. Unlike with the dwarf galaxies, the full extent of the dark halo is considered for the large galaxies because these are expected to be quite robust to cluster tides, so the full halo mass should be considered when estimating the perturbation to a passing dwarf. \citet{Venhola_2019} found $M_{p, \Lambda\textrm{CDM}} = 10^{11.6} ~M_{\odot}$ following this procedure, which we also verified. Using a single $M_p$ value for all perturbers gives only an approximate estimate of the heating rate. A more accurate calculation should use the power-law distribution of all the galaxies and make predictions based on that, but this would be extremely difficult. Moreover, the other simplifications assumed throughout the whole calculation of $t_d$ have a larger impact on the result than taking into account the right distribution of perturbing galaxy masses. Fortunately, we will see that $t_d$ greatly exceeds a Hubble time, a conclusion which should remain valid even with small adjustments to the calculation. In particular, we will show that considering the mass spectrum of perturbers should affect the estimated heating rate by only a small factor such that $t_d$ remains very long (Section~\ref{tidal_sus_Fornax}). The $r_{h,p}$ value of the large galaxies is also obtained from the median of all the documented large galaxies (perturbers) in the cluster, yielding $r_{h,p} = 4$~kpc based on the luminous matter. This is applicable to MOND, but in the $\Lambda$CDM case, the $r_{h,p}$ of the large galaxies should account for half of the perturber's total mass, not only the stellar mass given in the catalogues. This is because the gravitational effect of the dark matter halo also contributes to perturb the stellar content of a passing dwarf. 
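With these numbers, Equation~\ref{td_LCDM} can be evaluated directly. The following stdlib-only Python sketch is our own illustration; the dwarf parameters in the example are hypothetical, and the adopted $\Lambda$CDM perturber half-mass radius of 14.4~kpc including the halo is the value used in the text.

```python
import math

G = 4.301e-6                # Newton's constant in kpc (km/s)^2 / Msun
GYR_PER_KPC_KMS = 0.978     # 1 kpc / (km/s) expressed in Gyr

W_P = 1.0                   # perturber shape factor
SIGMA = 370.0               # cluster 1D velocity dispersion (km/s)
N_P = 25.0e-9               # perturber number density: 25 Mpc^-3 in kpc^-3
M_P_LCDM = 10 ** 11.6       # perturber mass including its halo (Msun)
R_HP_LCDM = 14.4            # perturber half-mass radius incl. halo (kpc)

def t_d_lcdm_gyr(m_dwarf_msun, r_h_dwarf_kpc):
    """Harassment disruption time-scale in Gyr from the equation above."""
    t = (0.043 / W_P) * (math.sqrt(2.0) * SIGMA * m_dwarf_msun
                         * R_HP_LCDM ** 2) \
        / (G * M_P_LCDM ** 2 * N_P * r_h_dwarf_kpc ** 3)
    return t * GYR_PER_KPC_KMS   # convert kpc/(km/s) to Gyr
```

For a hypothetical dwarf with $M_{\textrm{dwarf}} = 5 \times 10^8 \, M_\odot$ and $r_{h,\textrm{dwarf}} = 1$~kpc, this gives $t_d \sim 10^2$~Gyr, far exceeding a Hubble time.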
To find out the relation between $r_{h,p}$ and $r_{h,p, \Lambda\textrm{CDM}}$, \citet{Venhola_2019} looked into the Illustris cosmological simulations \citep{Pillepich_2018} to infer the relation between these two quantities in simulated large galaxies in a galaxy group with a similar mass to the Fornax Cluster, yielding $r_{h,p, \Lambda\textrm{CDM}}/r_{h, p} = 3.6$. Therefore, the half-mass radius of the perturbers in $\Lambda$CDM is taken to be $r_{h,p, \Lambda\textrm{CDM}} = 14.4$~kpc. To summarize, the disruption time-scale in $\Lambda$CDM can be found by directly applying Equation \ref{td_LCDM} once we include the contribution of the dark matter halo to $M_{\textrm{dwarf}}$, $M_p$, and $r_{h,p}$. In Section~\ref{Harassment_MOND}, we describe how to obtain the corresponding disruption time-scale expression in MOND. \subsection{Generalization to MOND} \label{MOND} The MOND model proposes that Newtonian gravity breaks down in the limit of low accelerations such that the actual gravitational field $g$ is related to the Newtonian field $g_{_N}$ according to $g = \sqrt{a_{_0} g_{_N}}$. Milgrom's constant $a_{_0} = 1.2 \times 10^{-10}$~m/s$^2$ is a new fundamental acceleration scale added by MOND. Its value has been empirically determined by matching observed galaxy rotation curves \citep{Begeman_1991, Gentile_2011, McGaugh_Lelli_2016}, which MOND does extremely well \citep{Famaey_McGaugh_2012, Lelli_2017, Li_2018}. Due to the very small numerical value of $a_{_0}$ \citep[which may be related to the quantum vacuum; see][]{Milgrom_1999, Senay_2021}, the behaviour of gravity has never been directly tested in the deep-MOND regime ($g \ll a_{_0}$). Indeed, Solar system tests are typically only sensitive to the behaviour of gravity in the regime where $g$ exceeds $a_{_0}$ by many orders of magnitude \citep[though for a proposed Solar system test in the MOND regime, see][]{Penner_2020}. 
For an isolated spherically symmetric problem, the expression for the MOND gravitational field $g$ as a function of the Newtonian field $g_{_N}$ can be written as \begin{eqnarray} g ~=~ g_{_N} \nu \left(g_{_N}\right) \, , \label{g_g_N} \end{eqnarray} where $\nu$ is the interpolating function with argument $g_{_N}$. To satisfy Solar system constraints and the observed flat rotation curves in the outskirts of galaxies, this function must have the following asymptotic limits: \begin{eqnarray} \nu \to \begin{cases} 1 \, , & \textrm{if} ~g_{_N} \gg a_{_0} \, , \\ \sqrt{\frac{a_{_0}}{g_{_N}}} \, , & \textrm{if} ~g_{_N} \ll a_{_0} \, . \end{cases} \label{nu_cases} \end{eqnarray} The first case is the Newtonian regime in which $\nu = 1$ and $g = g_{_N}$ to a very good approximation. In the MOND regime, $g = \sqrt{a_{_0} g_{_N}}$. This causes the gravity from an isolated point mass $M$ to decline as $1/r$ beyond its MOND radius $r_{_{\textrm{MOND}}} \equiv \sqrt{GM/a_{_0}}$, which is necessary to explain the rotation curve data using only the luminous matter. Several forms of the MOND interpolating function have been proposed \citep{Kent_1987, Hees_2014, Hees_2016, McGaugh_Lelli_2016}. Among these, the simple interpolating function \citep{Famaey_Binney_2005} seems to work better with recent observations \citep{Iocco_Bertone_2015, Banik_2018_Centauri, Chae_2018}. Therefore, we will use the simple interpolating function: \begin{eqnarray} \nu \left( g_{_N} \right) ~=~ \frac{1}{2} + \sqrt{\frac{1}{4} + \frac{a_{_0}}{g_{_N}}} \, . \label{simple_interpolating} \end{eqnarray} It is well known that although MOND is capable of fitting the rotation curves of galaxies without dark matter \citep[see the review by][]{Famaey_McGaugh_2012}, it cannot fit the temperature and density profiles of galaxy clusters using only their visible mass $-$ MOND still needs an additional contribution to the gravitational field \citep{Sanders_1999, Aguirre_2001}. 
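As a quick numerical check, the simple interpolating function and its two asymptotic limits can be sketched in Python (the function names and test field values below are illustrative, not from the paper):

```python
import math

A0 = 1.2e-10  # Milgrom's constant a_0 in m/s^2

def nu_simple(g_N):
    """Simple interpolating function: nu(g_N) = 1/2 + sqrt(1/4 + a0/g_N)."""
    return 0.5 + math.sqrt(0.25 + A0 / g_N)

def g_mond(g_N):
    """Total MOND gravitational field g = g_N * nu(g_N)."""
    return g_N * nu_simple(g_N)

# Newtonian limit (g_N >> a0): nu -> 1, so g -> g_N
print(g_mond(1e-6) / 1e-6)                    # close to 1
# Deep-MOND limit (g_N << a0): g -> sqrt(a0 * g_N)
print(g_mond(1e-14) / math.sqrt(A0 * 1e-14))  # close to 1
```

Both ratios approach unity, confirming that the simple function interpolates smoothly between the two regimes in Equation \ref{nu_cases}.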
The central galaxy of the Fornax Cluster (NGC 1399) is no exception \citep{Samurovic_2016_Fornax}. To solve this discrepancy and to account for other observations hinting at the presence of collisionless matter in galaxy clusters \citep[most famously in the Bullet cluster;][]{Clowe_2006}, it has been proposed that MOND should be supplemented by sterile neutrinos with a rest energy of 11~eV, a paradigm known as the neutrino hot dark matter ($\nu$HDM) cosmological model \citep{Angus_2009}. $\nu$HDM can fit observations of virialized galaxy clusters using the MOND gravity of their directly detected baryons plus the sterile neutrinos \citep{Angus_2010}. It can also fit the power spectrum of anisotropies in the cosmic microwave background (CMB) because the typical gravitational field at the epoch of recombination was $\approx 20 \, a_{_0}$ and the cosmic expansion history would be standard. Neutrino free streaming reduces the power on small scales compared to $\Lambda$CDM, but this is consistent with CMB observations provided the rest energy of the neutrinos exceeds 10~eV \citep[see section 6.4.3 of][]{Planck_2016}. The gravitational fields from density perturbations would enter the MOND regime only when the redshift $\la 50$, before which the MOND corrections to General Relativity should be small \citep[for a more detailed explanation of this model, see][]{Haslbauer_2020}. $\nu$HDM relies on the existence of eV-scale sterile neutrinos, but these are also hinted at by several terrestrial experiments \citep[for a recent review, see][]{Berryman_2022}. Equation \ref{simple_interpolating} shows that unlike Newtonian gravity, MOND is a non-linear theory of gravity. A physical consequence of this non-linearity is the so-called external field effect \citep[EFE;][]{Milgrom_1986}. 
This implies that the internal gravity of a system can be weakened by a constant gravitational field from its external environment even if this is completely uniform, violating the strong equivalence principle. The reason is that the MOND boost to the Newtonian gravity is approximately given by $\nu$, which is damped due to the external field. In MOND, the EFE explains why some galaxies like NGC 1052-DF2 have a very low observed velocity dispersion \citep{Van_Dokkum_2018, Famaey_2018, Kroupa_2018_DF2, Haghi_2019_DF2}, even though other galaxies like DF44 with similar properties but in a more isolated environment have a much higher velocity dispersion \citep{Van_Dokkum_2019, Bilek_2019, Haghi_2019_DF44}.\footnote{In a conventional gravity context, the very low observed velocity dispersion of NGC 1052-DF2 implies a lack of dark matter, which however is not easily explained in $\Lambda$CDM \citep{Haslbauer_2019_DF2, Moreno_2022_DF2}.} Strong evidence for the EFE has recently been obtained based on the outer rotation curves of galaxies showing a declining trend if the galaxy experiences a significant EFE, while galaxies in more isolated environments have flat outer rotation curves \citep{Haghi_2016, Chae_2020_EFE, Chae_2021}. For a discussion of observational evidence relating to the EFE, we refer the reader to section~3.3 of \citet{Banik_Zhao_2022}. The EFE is also important to Fornax Cluster dwarfs because their low surface brightness implies rather little self-gravity, allowing the gravitational field of the cluster to dominate over that of the dwarf. As a result, the dwarf is in the quasi-Newtonian (QN) regime where its internal dynamics are similar to a Newtonian dwarf but with a renormalized gravitational constant $G_{\textrm{eff}} > G$. We need to determine $G_{\textrm{eff}}$ from the cluster gravitational field $g_c$. 
We do this by writing Equation \ref{simple_interpolating} in the inverse form: \begin{eqnarray} g_{_N} ~&=&~ g \mu \left( g \right) \, , \text{ where} \\ \mu \left( g \right) ~&=&~ \frac{g}{g + a_{_0}} \, . \label{g_N_g} \end{eqnarray} As the cluster gravity is dominant over the self-gravity of the dwarf, we can set $g = g_c$, with $g_c$ obtained from observations as described in Section~\ref{cluster_tides}. Since the Newtonian gravity of the cluster is directly proportional to the Newtonian gravitational constant ($g_{c,N} \propto G$), the effective gravity of a dwarf in the cluster will be directly proportional to an analogous constant parameter $G_{\textrm{eff}}$ defined such that $g_c = \left( G_{\textrm{eff}}/G \right) g_{c,N}$. From Equation \ref{g_N_g}, we infer $G_{\textrm{eff}}$ to be: \begin{eqnarray} G_{\textrm{eff}} ~=~ \left( \frac{a_{_0} + g_c}{g_c} \right) G \, . \label{G_eff} \end{eqnarray} Note that the replacement $G \to G_{\textrm{eff}}$ is valid only if the dwarf's self-gravity is dominated by the external field of the cluster, so that the combined gravitational field of the dwarf and cluster remains approximately constant with increasing distance from the dwarf's centre. \subsubsection{Tidal radius} \label{Tidal_radius_MOND} At the tidal radius of a dwarf, the difference in cluster gravity across the dwarf is comparable to its self-gravity. Therefore, the total cluster gravity $g_c$ dominates over the dwarf's self-gravity. Thus, the MOND tidal radius of any dwarf is necessarily in the EFE-dominated/QN regime where its dynamics are approximately Newtonian but with $G \to G_{\textrm{eff}}$. Substituting this into Equation \ref{approx_rtid} gives an approximate expression for the MOND tidal radius.
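For the simple interpolating function, $\mu$ and $\nu$ are exact algebraic inverses, which can be verified numerically along with the $G_{\textrm{eff}}$ boost of Equation \ref{G_eff} (a sketch; the field value is illustrative):

```python
import math

A0 = 1.2e-10  # Milgrom's constant a_0 in m/s^2

def mu_simple(g):
    """Inverse-form interpolating function: g_N = g * mu(g) (Equation g_N_g)."""
    return g / (g + A0)

def nu_simple(g_N):
    """Forward form: g = g_N * nu(g_N) (Equation simple_interpolating)."""
    return 0.5 + math.sqrt(0.25 + A0 / g_N)

def G_eff_over_G(g_c):
    """Quasi-Newtonian boost to the gravitational constant (Equation G_eff)."""
    return (A0 + g_c) / g_c

g = 5e-11               # an illustrative cluster field strength, m/s^2
g_N = g * mu_simple(g)  # map to the Newtonian field ...
print(g_N * nu_simple(g_N) / g)  # ... and back: the ratio is 1
print(G_eff_over_G(g))           # > 1: gravity is boosted in the QN regime
```

The round trip recovers $g$ exactly, confirming that Equation \ref{g_N_g} really is the inverse of Equation \ref{simple_interpolating}.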
Accounting for additional details like the non-spherical nature of the point mass potential in the QN regime \citep[discussed further in section~2.4 of][]{Banik_Zhao_2022}, the MOND tidal radius can be expressed as \citep[equations~26 and 36 of][]{Zhao_2006}: \begin{eqnarray} \label{rtid_MOND} && r_{\textrm{tid, MOND}} \\ && = \frac{2}{3} \left. \sqrt{ \frac{\partial \ln g}{\partial \ln g_{_N}}} \right|_{g = g_c} \left[ \left( \frac{2 - \alpha}{3 - \alpha} \right) \frac{G_{\textrm{eff}} \, M_{\textrm{dwarf}}}{\Delta g_c / \Delta R} \right]^{1/3}, \nonumber \end{eqnarray} where the factor of order unity is the MOND Roche lobe scaling factor accounting for such subtleties. Note that we have generalized their equation 26 to write the result in terms of $G_{\textrm{eff}}$ and the tidal stress. The parameter $\alpha \equiv 2 + \frac{d\ln g_c}{d\ln r}$ has the same meaning as in Equation~\ref{M_cluster}, so its value remains 1.1. For the case of a dwarf orbiting a point mass in the deep-MOND limit ($\alpha = 1$), the numerical factors combine to give $2^{1/6}/3$, matching equation~44 of \citet{Zhao_2005}.\footnote{Equation~\ref{rtid_MOND} is the extent of the Roche Lobe in the tangential direction within the orbital plane. The extent along the orbital pole is similar, and in both cases is smaller than the extent along the radial direction \citep[see section~4.2 of][]{Zhao_2006}.} \subsubsection{Galaxy-galaxy harassment} \label{Harassment_MOND} When a dwarf interacts with a massive galaxy in the Fornax Cluster environment, we need to consider both the gravity from the elliptical and the background EFE due to the cluster potential. As in Section~\ref{harassment}, we estimate the perturbation to the dwarf by assuming it is a collection of test particles that receive some impulse $\bm{u}$ from the elliptical, with the heating rate of the dwarf proportional to the square of $\lvert \Delta \bm{u} \rvert$, the spread in $\bm{u}$ across the dwarf. 
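For concreteness, Equation \ref{rtid_MOND} can be evaluated numerically. In the sketch below, the logarithmic derivative is obtained by finite differences from the simple interpolating function, and all input values (dwarf mass, cluster field, tidal stress) are purely illustrative:

```python
import math

A0 = 1.2e-10    # m/s^2
G = 6.674e-11   # m^3 kg^-1 s^-2
M_SUN = 1.989e30
KPC = 3.086e19  # m

def nu_simple(g_N):
    return 0.5 + math.sqrt(0.25 + A0 / g_N)

def dlng_dlngN(g_N, eps=1e-6):
    """d ln g / d ln g_N for g = g_N nu(g_N), by central finite differences.
    Tends to 1 in the Newtonian limit and 1/2 in the deep-MOND limit."""
    lo, hi = g_N * (1 - eps), g_N * (1 + eps)
    return (math.log(hi * nu_simple(hi)) - math.log(lo * nu_simple(lo))) \
         / (math.log(hi) - math.log(lo))

def r_tid_mond(M_dwarf, g_c, g_cN, tidal_stress, alpha=1.1):
    """MOND tidal radius (Equation rtid_MOND); tidal_stress = Delta g_c / Delta R."""
    G_eff = G * (A0 + g_c) / g_c
    prefac = (2.0 / 3.0) * math.sqrt(dlng_dlngN(g_cN))
    return prefac * ((2 - alpha) / (3 - alpha)
                     * G_eff * M_dwarf / tidal_stress) ** (1.0 / 3.0)

# Illustrative inputs: a 1e8 Msun dwarf in a cluster field of 0.5 a0,
# with tidal stress approximated as g_c / R at R = 300 kpc
g_c = 0.5 * A0
g_cN = g_c * g_c / (g_c + A0)   # Newtonian field via g_N = g mu(g)
r_tid = r_tid_mond(1e8 * M_SUN, g_c, g_cN, g_c / (300 * KPC))
print(r_tid / KPC)  # tidal radius in kpc, a few kpc for these inputs
```

The finite-difference derivative correctly reproduces the limiting slopes of $1$ (Newtonian) and $1/2$ (deep-MOND), so the prefactor interpolates between $2/3$ and $2/(3\sqrt{2})$.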
Once it has moved away from the elliptical, the binding energy of the dwarf is given by Equation \ref{Binding_energy} but with $G \to G_{\textrm{eff}}$ as discussed above. The main difficulty lies in estimating the energy gained by the dwarf due to interactions with impact parameter $b$, which for a high-velocity encounter is approximately the same as the closest approach distance between the dwarf and the elliptical. We need to consider encounters in two different regimes: \begin{enumerate} \item The QN regime in which $g_c \ll a_{_0}$ dominates over gravity from the elliptical; and \item The isolated deep-MOND (IDM) regime in which the gravity from the elliptical dominates over $g_c$ but is still much weaker than $a_{_0}$. \end{enumerate} We do not need to consider the Newtonian regime because the perturbers have a radius that is numerically similar to their MOND radius for the parameters given in Section~\ref{harassment}. This is not unique to the Fornax Cluster: Elliptical galaxies generally have a size similar to their MOND radius \citep{Sanders_2000}. This is because if the initial radius was much smaller and the system is nearly isothermal, then a significant proportion of the mass in the outskirts would be moving faster than the Newtonian escape velocity, causing the system to expand to its MOND radius \citep{Milgrom_1984, Milgrom_2021_polytropes}. The QN and IDM regimes are separated by encounters with $b = r_{_{\textrm{EFE}}}$, the distance from the elliptical beyond which the cluster gravity dominates. \begin{eqnarray} r_{_{\textrm{EFE}}} ~=~ \sqrt{\frac{GM_p}{g_{c,N}}} \, , \end{eqnarray} where $g_{c,N} \equiv g_c \mu \left( g_c \right)$ is the Newtonian gravity of the cluster at the location of the dwarf-elliptical encounter. These encounters would generally not occur when the dwarf is at the pericentre of its orbit around the Fornax Cluster. However, encounters at this point would be more damaging because the dwarf's self-gravity would be weaker. 
We therefore assume that the encounters with ellipticals take place at a typical distance from the cluster of $R_{\textrm{enc}} = 0.5 \, R$, which is slightly more than the pericentre distance of $0.29 \, R$ (Appendix \ref{Rper}) but less than the present distance. We will first consider the heating rate $\dot{E}_{\textrm{{QN}}}$ from encounters in the QN regime before turning to the heating rate $\dot{E}_{\textrm{{IDM}}}$ from encounters in the IDM regime. The total heating rate is then \begin{eqnarray} \label{E_dot_MOND} \dot{E}_{\textrm{MOND}} ~\equiv~ \dot{E}_{\textrm{Newt}} \times \textrm{CF} ~=~ \dot{E}_{\textrm{{QN}}} + \dot{E}_{\textrm{IDM}} \, , \end{eqnarray} where CF is the correction factor that needs to be applied to the Newtonian $\dot{E}$ to make it MONDian. Our approach is to assume a sharp transition between the QN and IDM regimes such that the EFE is completely dominant in the former and completely negligible in the latter. This approximate approach should be accurate to within a factor of order unity, which we will argue later is sufficient for our purposes. In all regimes, the heating rate due to encounters with an impact parameter in the range $b \pm db/2$ ($db \ll b$) is $\dot{E}_b = \dot{C} \, \langle \Delta \widetilde{E} \rangle$, where $\dot{C} \propto b \, db$ is the average rate of such encounters and $\langle \Delta \widetilde{E} \rangle$ is the average energy gain of the dwarf per unit mass due to each such encounter. Since accelerating the dwarf as a whole does not alter its internal structure, we only need to consider the variation in the impulse $\bm{u}$ across the dwarf, so $\langle \Delta \widetilde{E} \rangle \propto {\lvert \Delta \bm{u} \rvert}^2$. In Newtonian dynamics, the magnitude of the impulse on a passing test particle is $u \propto 1/b$, so $\lvert \Delta \bm{u} \rvert \propto 1/b^2$ \citep[equation 8.41 of][]{Binney_Tremaine_2008} and $\langle \Delta \widetilde{E} \rangle \propto 1/b^4$. 
This explains the $1/b^3$ scaling in the integrand in equation 8.53 of \citet{Binney_Tremaine_2008}, which states that the Newtonian heating rate per unit dwarf mass is: \begin{eqnarray} \dot{E}_{\textrm{Newt}} ~=~ \underbrace{\frac{14}{3} \sqrt{2\mathrm{\pi}} \frac{G^2 M^2_p n_p r^2_{h,\textrm{dwarf}}}{\sqrt{2}\sigma}}_A \int^{\infty}_{r_{h,p}} \frac{db}{b^3} ~=~ \frac{A}{2 r^2_{h,p}} \, , \label{E_dot_Newt} \end{eqnarray} where $A$ is a constant. We are now in a position to MONDify this result for the QN regime. Both the dwarf's self-gravity and the elliptical's gravity on the dwarf are similar to the Newtonian result but with $G \to G_{\textrm{eff}}$. The heating rate in the QN regime is thus similar to Equation \ref{E_dot_Newt}, but using $G_{\textrm{eff}}$ instead of $G$ in the calculation of the normalization constant. To distinguish this result from the Newtonian case, we call the QN normalization constant $A' = A \left(G_{\textrm{eff}}/G \right)^2$. Since by definition the QN regime involves only those encounters with $b > r_{_{\textrm{EFE}}}$, the total heating rate from encounters in this regime is \begin{eqnarray} \dot{E}_{\textrm{QN}} ~=~ A' \int^{\infty}_{r_{_{\textrm{EFE}}}} \frac{db}{b^3} ~=~ \frac{A'}{2 r^2_{\textrm{EFE}}} \, . \label{E_dot_QN} \end{eqnarray} In the IDM regime, the scalings are different because the gravity from the elliptical follows an inverse distance law. Since the interaction time-scale rises linearly with the closest approach distance, the impulse becomes independent of this ($u \propto b^0$). However, as the direction from the elliptical to the dwarf is still different for different parts of the dwarf, the variation in the impulse across it scales as $\lvert \Delta \bm{u} \rvert \propto 1/b$, implying that the energy gain per encounter scales as $\langle \Delta \widetilde{E} \rangle \propto {\lvert \Delta \bm{u} \rvert}^2 \propto 1/b^2$. 
Since the encounter rate again behaves as $\dot{C} \propto b \, db$ due to the geometry being the same in both models, we obtain: \begin{eqnarray} \label{E_dot_IDM} \dot{E}_{\textrm{IDM}} ~&=&~ \frac{A'}{r^2_{\textrm{EFE}}} \int^{r_{_{\textrm{EFE}}}}_{r_{_{\textrm{MOND}}}} \frac{db}{b} \\ ~&=&~ \frac{A'}{r^2_{\textrm{EFE}}} \ln \left(\frac{r_{_{\textrm{EFE}}}}{r_{_{\textrm{MOND}}}}\right) \, . \end{eqnarray} The normalization of the integrand ensures continuity of the specific heating rate per unit $b$ between the QN and IDM regimes. Inserting our results for $\dot{E}_{\textrm{QN}}$ and $\dot{E}_{\textrm{IDM}}$ into Equation \ref{E_dot_MOND} and noting that $A' = A \left( G_{\textrm{eff}}/G \right)^2$, we obtain \begin{eqnarray} \textrm{CF} ~=~ \left[1 + \ln \left(\frac{a_{_0}}{g_{c,N}}\right)\right] \left(\frac{G_{\textrm{eff}} \, r_{\textrm{h,p}}}{G \, r_{_{\textrm{EFE}}}}\right)^2 \, . \label{corr_fact} \end{eqnarray} Since $t_d \equiv \lvert E \rvert/\dot{E}$ and the MONDian binding energy of the dwarf exceeds the Newtonian result (Equation \ref{Binding_energy}) by a factor of $G_{\textrm{eff}}/G$, the effect of the MOND corrections to Newtonian gravity amounts to multiplying the Newtonian $t_d$ (Equation \ref{td_LCDM}) by a factor of $\textrm{CF}^{-1} \left( G_{\textrm{eff}}/G \right)$. \begin{eqnarray} \label{td_MOND} t_{d, \textrm{MOND}} ~&\equiv&~ \frac{\lvert E \rvert_{\textrm{MOND}}}{\dot{E}_{\textrm{MOND}}} \\ &=& \frac{0.043}{W_{p}} \frac{\sqrt{2} \sigma \, M_{\textrm{dwarf}} \, r_{_{\textrm{EFE}}}^2}{G_{\textrm{eff}} \, M_p^2 \, n_p \, r_{h,\textrm{dwarf}}^3 \left[1 + \ln \left(\frac{a_{_0}}{g_{c,N}}\right)\right]} \, . \nonumber \end{eqnarray} We assume $W_p = 1$ as in the Newtonian case. Our derivation assumes that $g_{c,N} \ll a_{_0}$, which is valid in the Fornax Cluster. In general, we recommend that the logarithmic term be omitted if $g_{c,N} > a_{_0}$.
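The correction factor of Equation \ref{corr_fact} and its ingredients can be collected into a short numerical sketch (the perturber mass, size, and cluster field below are illustrative; the formula is only applied where $g_{c,N} < a_{_0}$, as recommended above):

```python
import math

A0 = 1.2e-10    # m/s^2
G = 6.674e-11   # m^3 kg^-1 s^-2
M_SUN = 1.989e30
KPC = 3.086e19  # m

def correction_factor(M_p, r_hp, g_c):
    """CF (Equation corr_fact): ratio of the MOND to the Newtonian heating
    rate. Assumes the deep-MOND condition g_cN < a0 for the cluster field."""
    g_cN = g_c * g_c / (g_c + A0)       # Newtonian cluster field, g_N = g mu(g)
    r_efe = math.sqrt(G * M_p / g_cN)   # transition between QN and IDM regimes
    G_ratio = (A0 + g_c) / g_c          # G_eff / G
    return (1 + math.log(A0 / g_cN)) * (G_ratio * r_hp / r_efe) ** 2

# Illustrative perturber: M_p = 1e10 Msun, r_hp = 4 kpc, cluster field 0.1 a0
cf = correction_factor(1e10 * M_SUN, 4 * KPC, 0.1 * A0)
print(cf)  # order unity to ~10 for these inputs
```

Both factors in CF act in the same direction here: the EFE boost $(G_{\textrm{eff}}/G)^2$ partly offsets the larger effective minimum impact parameter $r_{_{\textrm{EFE}}} > r_{h,p}$.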
\section{Tidal susceptibility} \label{tidal_sus} Now that we have defined the main effects which can disturb the structure of a dwarf in a galaxy cluster, we estimate the susceptibility of a dwarf to these effects in both $\Lambda$CDM and MOND. To quantify the disturbance caused by tides from the global cluster potential, we define the tidal susceptibility as the ratio between the half-mass radius $r_h$ and the tidal radius $r_{\textrm{tid}}$ of a dwarf: \begin{eqnarray} \eta_{\textrm{rtid}} ~\equiv~ \frac{r_h}{r_{\textrm{tid}}} \, . \label{eta_rtid} \end{eqnarray} From the definition of $r_{\textrm{tid}}$ in both $\Lambda$CDM (Equation \ref{rtid_LCDM}) and MOND (Equation \ref{rtid_MOND}), we have that $r_{\textrm{tid}} \propto M^{1/3}$. This implies that: \begin{eqnarray} \eta_{\textrm{rtid}} ~\propto~ \frac{r_h}{M^{1/3}} ~\propto~ \rho^{-1/3} \, . \end{eqnarray} Therefore, only the density $\rho$ of the dwarf is relevant to its tidal susceptibility in both $\Lambda$CDM and MOND. If a dwarf has strong self-gravity (e.g. due to being surrounded by a dark matter halo or being in the deep-MOND regime), then the point at which the tidal force of the cluster will start to dominate over the self-gravity of the dwarf will be far from the centre of the dwarf. Therefore, the dwarf's $r_{\textrm{tid}}$ will be large and its tidal susceptibility will be low. Such a dwarf should be little disturbed by the cluster tides. If instead the dwarf has only weak self-gravity (e.g. because it is a TDG with little dark matter or because it is a MONDian dwarf but the EFE from the cluster is very significant), then the point at which the tidal force of the cluster will start to dominate over the self-gravity of the dwarf will be close to the dwarf's centre. Its $r_{\textrm{tid}}$ will then be small and its tidal susceptibility high. Such a dwarf would be significantly disturbed by tides. 
In the extreme case that $r_{\textrm{tid}} \ll r_h$ ($\eta_{\textrm{rtid}} \gg 1$), the dwarf will be destroyed within a few dynamical times. As a result, we need to consider the maximum value of $\eta_{\textrm{rtid}}$ attained throughout the trajectory, i.e., we need to evaluate $\eta_{\textrm{rtid}}$ at pericentre. If the disturbance is caused by interaction with massive galaxies (harassment), we define the tidal susceptibility as the ratio between the age of the elliptical galaxies in the Fornax Cluster, which we assume to be about as old as the cluster itself \citep[$t_{\textrm{Fornax}} \approx 10$~Gyr;][]{Rakos_2001}, and the disruption time-scale $t_d$ of the dwarf. \begin{eqnarray} \eta_{\textrm{har}} ~\equiv~ \frac{t_{\textrm{Fornax}}}{t_d} \, . \label{eta_har} \end{eqnarray} According to this definition, if $t_{\textrm{Fornax}} \ll t_d$ for a dwarf, then it will hardly be susceptible to the effect of galaxy-galaxy harassment. If instead $t_{\textrm{Fornax}} \gg t_d$ for a dwarf, then we expect that it will be significantly disturbed due to this process. Although our definitions for $\eta_{\textrm{rtid}}$ and $\eta_{\textrm{har}}$ differ somewhat because the former is a ratio of radii while the latter is a ratio of time-scales, both definitions share the feature that low values of $\eta$ indicate that a dwarf should be little affected by the process under consideration. In principle, there should not be any dwarf galaxies for which either measure of $\eta$ is $\gg 1$. It is possible to have $\eta$ slightly above 1 due to projection effects and other subtleties like the time required to achieve destruction, which can be significant for $\eta_{\textrm{rtid}}$ as multiple pericentre passages may be required and the orbital period can be long (Section~\ref{cluster_tides}).
However, we should very seriously doubt the validity of any theory which tells us that a significant fraction of the dwarf galaxies in a galaxy cluster have $\eta_{\textrm{rtid}} \gg 1$ or $\eta_{\textrm{har}} \gg 1$. It is harder to falsify a theory in the opposite limit where it yields very low values for both measures of $\eta$ for all the dwarfs in a galaxy cluster. In this case, the theory could still be disfavoured if there is strong evidence that the dwarf galaxy population has been significantly affected by tides. In this project, we apply these considerations to the dwarf galaxy population in the Fornax Cluster. \subsection{Tidal susceptibility of the Fornax dwarfs} \label{tidal_sus_Fornax} Our first quantitative result is the susceptibility of dwarfs in the FDS catalogue to cluster tides, which we calculate in $\Lambda$CDM and MOND using Equations \ref{rtid_LCDM} and \ref{rtid_MOND}, respectively. We show the results as histograms in the top row of Fig.~\ref{fig:hist_tidal_sus}, with $\Lambda$CDM shown on the left and MOND on the right. The $\eta_{\textrm{rtid}}$ values are $\approx 5\times$ higher in MOND than in $\Lambda$CDM. Since an isolated dwarf has a similar amount of self-gravity in both frameworks by construction, the difference in $\eta_{\textrm{rtid}}$ values is primarily caused by the EFE weakening the self-gravity of a MONDian dwarf as it approaches the cluster centre (Section~\ref{Introduction}). This effect does not exist for a $\Lambda$CDM dwarf, which would retain the same dark matter fraction within its baryonic extent throughout its trajectory. The bottom row of Fig.~\ref{fig:hist_tidal_sus} shows the susceptibility of FDS dwarfs to galaxy-galaxy harassment according to $\Lambda$CDM (Equation~\ref{td_LCDM}) and MOND (Equation~\ref{td_MOND}). In both theories, the histogram of $\eta_{\textrm{har}}$ peaks at very low values such that $\eta_{\textrm{har}} \ll \eta_{\textrm{rtid}}$ and $\eta_{\textrm{har}} \ll 1$.
Therefore, both frameworks predict that the FDS dwarfs should be little affected by interactions with massive elliptical galaxies in the Fornax Cluster. This implies at face value that in $\Lambda$CDM, the observed signs of tidal disturbance \citep[section 7.4 of][]{Venhola_2022} cannot be attributed to either cluster tides or harassment. We explore the impact of cluster tides more carefully later in this contribution, but first we briefly reconsider our calculation of $\eta_{\textrm{har}}$. As explained in Section~\ref{harassment}, one simplifying assumption we made is that there are 48 equal mass and equal size perturbers within the 0.77~Mpc virial radius of the Fornax Cluster. However, the heating rate due to any individual perturber scales as $\dot{E} \propto \left( M_p/r_{h,p} \right)^2$ (Equation~\ref{E_dot_Newt}). We can use this to find the ratio $\dot{E}/\dot{E}_{\textrm{fid}}$ between the heating rate due to individual perturbers and the assumed heating rate $\dot{E}_{\textrm{fid}}$ for an `average' perturber with $M_{\star} = 10^{10} \, M_{\odot}$ and $r_{h,p} = 4$~kpc, taking into account that the actual mass and size are larger in $\Lambda$CDM and assuming a de Vaucouleurs profile for the stars \citep{De_Vaucouleurs_1948}. We obtain that in descending order of $M_{\star}$, the ratio $\dot{E}/\dot{E}_{\textrm{fid}}$ for the perturbers listed in table~C1 of \citet{Iodice_2019} is 14.7 (FCC 219), 42.7 (FCC 167), 10.3 (FCC 184), 4.76 (FCC 161), 5.41 (FCC 147), 11.8 (FCC 170), 1.06 (FCC 276), 1.93 (FCC 179), and 0.13 (FCC 312). Other perturbers have $M_{\star} < 10^{10} \, M_{\odot}$, so we assume their contribution to the heating rate is small. Adding up the above ratios and averaging over 48 perturbers (many of which are too low in mass to appreciably harass Fornax dwarfs), we get that $\dot{E}/\dot{E}_{\textrm{fid}}$ is on average 1.9.
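The averaging above can be reproduced directly from the quoted ratios (a minimal sketch using the values listed in the text):

```python
# E_dot / E_dot_fid for the nine perturbers with M_star >= 1e10 Msun,
# as listed above (descending order of stellar mass)
ratios = {
    "FCC 219": 14.7, "FCC 167": 42.7, "FCC 184": 10.3,
    "FCC 161": 4.76, "FCC 147": 5.41, "FCC 170": 11.8,
    "FCC 276": 1.06, "FCC 179": 1.93, "FCC 312": 0.13,
}
# The remaining perturbers are assumed to contribute negligibly,
# so the sum is averaged over all 48 assumed perturbers
mean_ratio = sum(ratios.values()) / 48
print(round(mean_ratio, 1))  # -> 1.9
```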
Therefore, using a more accurate treatment of the heating rate would not change our conclusion that the FDS dwarfs are not really susceptible to galaxy-galaxy harassment: Doubling all the $\eta_{\textrm{har}}$ values would still lead to its distribution having a mode $<0.1$ and all the dwarfs having $\eta_{\textrm{har}} < 1$. Moreover, using $t_{\textrm{Fornax}}$ as the time-scale for interactions is an optimistic assumption $-$ dwarfs in $\Lambda$CDM may have been accreted by the cluster long after they formed, while in MOND they could be TDGs that formed more recently \citep{Renaud_2016}. This implies that the dwarfs would not have experienced that many encounters with elliptical galaxies, which themselves might only have been accreted $\ll 10$~Gyr ago. As an example, we may consider the case of FCC 219 $\equiv$ NGC 1404, the most massive perturber listed in table~C1 of \citet{Iodice_2019} in terms of $M_{\star}$. Its radial velocity exceeds that of the brightest cluster galaxy NGC 1399 by 522~km/s, but modelling indicates that the relative velocity could be higher still as most of it should lie within the sky plane \citep{Machacek_2005}. Moreover, NGC 1404 appears to lie in front of the Fornax Cluster: Its heliocentric distance is only $18.7 \pm 0.3$~Mpc \citep{Hoyt_2021}, whereas the distance to NGC 1399 is $20.0 \pm 0.3$~Mpc \citep{Blakeslee_2009}. Detailed modelling in a $\Lambda$CDM context indicates that although NGC 1404 is not on a first infall, it has likely spent $\la 3$~Gyr within the cluster \citep{Shearman_2018}. During this time, the high relative velocity would have reduced the heating rate on any dwarf galaxy that it came near (Equation~\ref{E_dot_Newt}). It is therefore clear that $\eta_{\textrm{har}}$ is overestimated by assuming that both all the dwarfs and all 48 massive ellipticals were in the virial volume of the Fornax Cluster over the last 10~Gyr. 
Based on this, we will neglect the role of harassment in what follows and focus on cluster tides.\footnote{This is consistent with the previous $\Lambda$CDM result that harassment is not very significant for dwarfs in a Virgo-like cluster \citep{Smith_2015}.} Thus, $\eta$ will be used to mean $\eta_{\textrm{rtid}}$ unless stated otherwise. An important example of this is our discussion of Newtonian TDGs that are purely baryonic, where $\eta_{\textrm{har}}$ plays an important role (Appendix~\ref{tidal_sus_newton}). \subsection{Testing the effect of cluster tides on Fornax dwarfs} \label{surf_dens_dwarfs} A significant fraction of the FDS dwarfs appear disturbed in a manual visual classification \citep[Fig.~\ref{fig:tid_morph_class}; see also][]{Venhola_2022}. To check if cluster tides are truly the main mechanism responsible for the apparent disturbance of the Fornax dwarfs $-$ as our results in Fig.~\ref{fig:hist_tidal_sus} seem to suggest $-$ in Fig.~\ref{fig:tid_edge} we plot the projected distance of the selected Fornax dwarfs against the ratio between their effective radius $r_e$ and $r_{\textrm{max}}$, where $r_{\textrm{max}}$ is the maximum $r_e$ that the dwarf could have to remain detectable given its $M_{\star}$ and the FDS detection limit of $27.8 \, \textrm{mag} \, \textrm{arcsec}^{-2}$. Dwarfs with larger size at fixed stellar mass $-$ i.e., lower surface brightness dwarfs $-$ are more susceptible to tides and will be more easily destroyed, especially near the cluster centre where the tides are stronger. In Fig.~\ref{fig:tid_edge}, we can see a deficit of low surface brightness dwarfs near the cluster centre. The absence of dwarfs in this region of the parameter space cannot be explained by the survey detection limit as we find an increasing number of dwarfs with the same or lower surface brightness at larger $R_{\textrm{sky}}$, e.g. if we consider a horizontal line at $r_e/r_{\textrm{max}} = 0.4$. 
This tendency is highlighted in Fig.~\ref{fig:tid_edge} using a sloped dotted line that appears to be a tidal edge. Further from the cluster, its tides become weaker, so it is quite possible that dwarfs in this region are not much affected by tides. Additional evidence for the importance of tides towards the cluster centre comes from the colours of the dots in Fig.~\ref{fig:tid_edge}, which indicate whether the dwarf visually appears disturbed (red) or undisturbed (blue). Just below the claimed tidal edge, we would expect that the dwarfs are much more likely to appear disturbed as they should be close to the threshold of being destroyed altogether. This is indeed apparent: The proportion of disturbed galaxies is much higher in this part of the parameter space.\footnote{This is not expected if the disturbances are due to harassment because dwarfs subject to this would be well mixed throughout the cluster \citep{Smith_2015}.} To emphasize this trend further, we use Fig.~\ref{fig:hist_disturb} to show the observed fraction of disturbed dwarfs ($f_d$) in different $R_{\textrm{sky}}$ bins. This is found as $f_d = S/T$, with the uncertainties calculated using binomial statistics as $\sqrt{S \left( T - S \right)/T^3}$, where $T$ is the number of galaxies in each $R_{\textrm{sky}}$ bin and $S \leq T$ is the number of these galaxies which appear disturbed.\footnote{\label{Uncertainty_fixed_P}This is based on the binomial uncertainty in $S$ assuming that the probability of a galaxy appearing disturbed in each $R_{\textrm{sky}}$ bin is $f_d = S/T$. In reality, $f_d$ is not precisely constrained by the observations $-$ we handle this complexity later (Equation~\ref{Bernoulli_mean_stdev}).} As expected from our previous results, $f_d$ is very high in the central 200 kpc of the Fornax Cluster. Although $f_d$ is very low further out, it is still non-zero and remains so out to the largest distances covered by our dataset. 
We attribute this to the complexities of visually assessing whether a dwarf is tidally disturbed: If a dwarf appears asymmetric due to observational difficulties or due to a dense star cluster on one side, this could lead to a false positive. It is also possible that the dwarf is genuinely disturbed due to a recent close encounter with a massive galaxy in the cluster, which could happen even in the cluster outskirts. When we construct a detailed model of the Fornax Cluster dwarf galaxy population in Section~\ref{subsubsec:dwarf_disturbance}, we will need to allow a non-zero likelihood that a dwarf appears disturbed even if it is unaffected by cluster tides. \subsection{Correlating tidal susceptibility with the observed level of disturbance} \label{comparison_disturbance} Having obtained the tidal susceptibility $\eta$ of each Fornax dwarf in our sample (Section~\ref{tidal_sus_Fornax}), we can compare this to its visual level of disturbance. We do so using the proportion of dwarfs classified as disturbed in each $\eta$ bin, which is similar to the analysis shown in Fig.~\ref{fig:hist_disturb} but binning in $\eta$ instead of $R_{\textrm{sky}}$. We consider each $\eta$ bin as an experiment with $T$ trials (dwarfs) out of which $S$ are `successes' (disturbed-looking dwarfs). We then use binomial statistics to infer the probability distribution of the disturbed fraction $f_d$ assuming a uniform prior over the range $0-1$ and applying Bayes' Theorem. The mean and standard deviation of $f_d$ are: \begin{eqnarray} \label{Bernoulli_mean_stdev} \textrm{mean} &=& \frac{S + 1}{T + 2} \, , \\ \textrm{standard deviation} &=& \frac{1}{T + 2}\sqrt{\frac{ \left( S + 1 \right) \left( T - S + 1 \right)}{ \left( T + 3 \right)}} \, . \nonumber \end{eqnarray} For the extreme case $S = T = 0$, we expect that the probability distribution of $f_d$ is uniform over the range $0-1$ as there is no data. 
In this case, we recover the standard result that the mean of this distribution is $1/2$ and its variance is $1/12$. We use Fig.~\ref{fig:tid_sus_obs} to plot the mean and standard deviation obtained in this way against the central $\eta$ value for the bin under consideration. In both $\Lambda$CDM and MOND, a clear trend is apparent whereby dwarfs with higher $\eta$ are more likely to appear disturbed. We quantify this by dividing the FDS sample into two subsamples where $\eta$ is below or above some threshold $\eta_t$, thereby assuming only a monotonic relation between $f_d$ and $\eta$ that is not necessarily linear. Appendix~\ref{Binomial_significance} explains how we obtain the likelihood that the same $f_d$ can explain the number of disturbed dwarfs and the total number of dwarfs in both subsamples given binomial uncertainties. Using this method, we find that the `signal' is maximized in $\Lambda$CDM if we use $\eta_t = 0.36$, in which case the null hypothesis of $f_d$ being the same in both subsamples can be rejected at a significance of $P = 4.1 \times 10^{-3}$ ($2.87\sigma$). If instead we use MOND, the optimal $\eta_t = 0.85$ and the significance rises to $P = 4.4 \times 10^{-4}$ ($3.52\sigma$).\footnote{Section~\ref{discussion} provides a more rigorous quantification of how confident we can be that $f_d$ rises with $\eta$.} Though both theories imply that $f_d$ is higher in the high $\eta$ subsample, $f_d$ starts rising at a much lower value of $\eta$ in $\Lambda$CDM than in MOND, as clearly shown by the optimal $\eta_t$ values. We may expect that dwarfs start to look disturbed when their half-mass radius is about the same as their tidal radius, so $f_d$ should start rising only when $\eta \ga 0.5$. This is not the case in $\Lambda$CDM, which implies that dwarfs are more likely to be classified as disturbed once their $\eta \ga 0.1-0.2$. A dwarf with such a low $\eta$ should be little affected by tides, indicating a problem for this framework. 
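The posterior summary statistics of Equation~\ref{Bernoulli_mean_stdev}, which correspond to a $\textrm{Beta}(S+1, T-S+1)$ distribution, can be checked with a short sketch (function name is ours):

```python
import math

def posterior_f_d(S, T):
    """Mean and standard deviation of the disturbed fraction f_d given S
    disturbed dwarfs out of T, under a uniform prior over 0-1 (i.e. of a
    Beta posterior with parameters S + 1 and T - S + 1)."""
    mean = (S + 1) / (T + 2)
    std = math.sqrt((S + 1) * (T - S + 1) / (T + 3)) / (T + 2)
    return mean, std

# With no data (S = T = 0), the uniform prior is returned:
# mean 1/2 and variance 1/12.
mean, std = posterior_f_d(0, 0)
```

Unlike the naive estimate $f_d = S/T$, this assigns a non-trivial uncertainty even when every dwarf in a bin appears disturbed or undisturbed.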
In the MOND case, we see that dwarfs start being classified as disturbed more often once their $\eta \ga 1-1.5$, which is much more plausible physically. Another important aspect is the overall distribution of $\eta$, whose decline towards the highest bin is responsible for a larger uncertainty in the probability of appearing disturbed. The distribution of $\eta$ is shown explicitly in the top row of Fig.~\ref{fig:hist_tidal_sus}. There are no $\Lambda$CDM dwarfs with $\eta > 0.7$, even though a dwarf with $\eta = 0.7$ should still be tidally stable. In MOND, the maximum $\eta \approx 3$, though there are very few dwarfs with $\eta > 1.5$. The high calculated $\eta$ for these dwarfs could indicate that they lie very close to the cluster centre in projection but not in reality. To handle such projection effects and other uncertainties like the unknown orbital eccentricity distribution of the dwarfs, we next construct a test mass simulation of the dwarf galaxy population in the Fornax Cluster. \section{Test mass simulation of the Fornax Cluster} \label{test_mass} In order to quantify the aforementioned trends and thereby obtain the range of values that the minimum $\eta$ required for disturbance and the $\eta$ required for destruction can have to be consistent with observations $-$ both in $\Lambda$CDM and in MOND $-$ we need to construct a forward time-evolution model of the Fornax Cluster. With this forward model, we can also account for projection effects that can make dwarfs appear closer to the cluster centre than they actually are. In this section, we describe the set-up of the simulated Fornax system with test masses, as well as the methods that we use to quantify the properties of the Fornax dwarfs and their orbits. Here we focus only on those dwarfs classified as `non-nucleated' as this type of galaxy is more numerous than the `nucleated' type. Moreover, having the same deprojection method (Appendix~\ref{deproj}) for all dwarfs will simplify the analysis. 
Removing the nucleated dwarfs from the sample leaves us with 279 dwarfs. \subsection{Orbit integration} \label{orbit_integration} The first step in building a simulation of test masses orbiting in the observed cluster potential is to generate a grid of orbits for a wide range of semi-major axis ($R_i$) and eccentricity ($e$) values, with the integrations started at $R = R_i$. The initial radii have a range of values from 15~kpc to 2015~kpc, while the eccentricities cover the full range of values for an ellipse ($0 < e < 1$). The grid is divided into $100 \times 100$ cells. Initially, we assign the test mass a mass and half-mass radius which are typical for a Fornax dwarf ($M_{\textrm{dwarf}} = 3.16 \times 10^7~M_{\odot}$ and $r_h = 0.84$~kpc), but these values are not relevant as the results will be rescaled later according to the distribution of dwarf densities in the system (Section~\ref{subsubsec:dwarf_density}). We initialize the simulated dwarfs for every possible combination of $R_i$ and $e$ as described below. We start the simulation at the semi-major axis of the orbit, where the velocity $v$ satisfies \begin{eqnarray} v ~=~ v_c ~=~ \sqrt{-\bm{r} \cdot \bm{g}} \, . \label{v_i} \end{eqnarray} As discussed in section~2.3.1 of \citet{Banik_2018_Centauri}, the eccentricity $e$ is defined such that \begin{eqnarray} e ~\equiv~ \lvert \widehat{\bm{r}} \cdot \widehat{\bm{v}} \rvert \, , \label{e_i} \end{eqnarray} where $\bm{r} = R_i \widehat{\bm{r}}$ and $\bm{g} = -g_c \widehat{\bm{r}}$, with $\widehat{\bm{v}}$ indicating the unit vector parallel to any vector $\bm{v}$ of length $v$. The modulus is not required in our case because we start with the dwarf going away from the cluster if $e > 0$. Using Cartesian coordinates, we define the initial positions and velocities of the orbit as: \begin{eqnarray} x ~&=&~ R_i \, , \\ y ~&=&~ 0 \, , \\ v_x ~&=&~ ve \, , \\ v_y ~&=&~ v\sqrt{1 - e^2} \, . 
\end{eqnarray} Equation~\ref{v_i} defines $v$ and Equation~\ref{e_i} sets the component of $\bm{v}$ along the radial direction, with $v_y$ the remaining tangential velocity. In order to obtain the positions and velocities of the simulated dwarf at each point of the orbit, we implement a fourth-order Runge-Kutta integrator in 2D. To ensure that the time-step we use for each iteration is computationally efficient but also small enough to yield accurate results, we use an adaptive time-step that depends on the dynamical time-scale at the instantaneous orbital radius $R$: \begin{eqnarray} dt ~=~ 0.01 \sqrt{\frac{R}{g_c}} \, . \end{eqnarray} We evolve the system for $t_{\textrm{Fornax}} = 10$~Gyr, the estimated age of the system \citep{Rakos_2001}. At each time-step, we calculate the tidal radius of the simulated dwarf at its current position and, by comparing this with the half-mass radius, we obtain its instantaneous tidal susceptibility $\eta$. We record the $e$ value of each simulated orbit and its final $R$, the distance from the cluster centre at which we would observe the dwarf today. We also record two $\eta$ values in each orbit simulation: the maximum $\eta$ over the whole simulation ($\eta_{\textrm{max}}$), and the maximum $\eta$ in the last 2~Gyr ($\eta_{\textrm{max, recent}}$). We use $\eta_{\textrm{max}}$ to decide whether the dwarf is destroyed and should be removed from our statistical analysis. If not, then $\eta_{\textrm{max, recent}}$ is used to set the likelihood that the dwarf appears disturbed. This is because we expect a dwarf to return to a nearly undisturbed appearance if it experiences only low $\eta$ values along its orbit for over 2~Gyr, provided $\eta$ is never so high as to destroy the dwarf. 
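The set-up and integration scheme of this subsection can be sketched as follows. This is a minimal illustration only: the cluster is modelled here as a Newtonian point mass in code units, whereas the actual $g(R)$ in our analysis follows the $\Lambda$CDM or MOND cluster models, and the per-step tidal radius calculation is omitted. All function names are ours.

```python
import math

G_M = 1.0  # G times the cluster mass in code units (illustrative value)

def g_vec(x, y):
    """Inward gravitational acceleration at (x, y) for a point-mass cluster."""
    r = math.hypot(x, y)
    return (-G_M * x / r**3, -G_M * y / r**3)

def init_orbit(R_i, e):
    """Initial state (x, y, vx, vy) at the semi-major axis R = R_i:
    v = sqrt(-r.g) (Equation v_i), vx = v e, vy = v sqrt(1 - e^2)."""
    gx, gy = g_vec(R_i, 0.0)
    v = math.sqrt(R_i * math.hypot(gx, gy))
    return (R_i, 0.0, v * e, v * math.sqrt(1.0 - e**2))

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step for the state (x, y, vx, vy)."""
    def deriv(s):
        x, y, vx, vy = s
        gx, gy = g_vec(x, y)
        return (vx, vy, gx, gy)
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt * (a + 2.0 * b + 2.0 * c + d) / 6.0
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def integrate_orbit(R_i, e, t_end):
    """Evolve an orbit to t_end with the adaptive time-step
    dt = 0.01 sqrt(R / g_c) tied to the local dynamical time-scale."""
    state = init_orbit(R_i, e)
    t = 0.0
    while t_end - t > 1e-12:
        x, y = state[0], state[1]
        g_c = math.hypot(*g_vec(x, y))
        dt = min(0.01 * math.sqrt(math.hypot(x, y) / g_c), t_end - t)
        state = rk4_step(state, dt)
        t += dt
    return state
```

In the full analysis, $\eta$ is evaluated at every step and its running maxima ($\eta_{\textrm{max}}$ and $\eta_{\textrm{max, recent}}$) are recorded alongside the final $R$.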
\subsection{Assigning probabilities to the orbits} \label{orbits_probability} The orbital and internal properties of the Fornax dwarfs (e.g., the radial profile of the orbits, the distribution of their eccentricities, the likelihood of appearing perturbed) follow certain probability distributions. Because of this, we assign probabilities to each of our simulated orbits by fitting them to a few crucial observed properties (next subsection) in order to make our simulated system as similar as possible to the observed Fornax dwarf galaxy system. The parameters governing these probability distributions are described below. \subsubsection{Number density of dwarfs} \label{subsubsec:dwarf_number_density} The number density $n$ of dwarfs is assumed to be a function only of the distance $R$ from the cluster centre. It is related to the radial probability distribution $P_r$ as: $n \propto P_r/R^2$. We assume that $P_r$ is described by a double power-law: \begin{eqnarray} P_r ~=~ R^2 \left( R + r_{\textrm{core}} \right)^{\textrm{Slope}_{P_r}} \, , \label{P_r} \end{eqnarray} where $r_{\textrm{core}}$ is the radius of the constant density central region of the Fornax Cluster and $\textrm{Slope}_{P_r}$ is the power-law slope of the radial profile in the cluster outskirts. To obtain a convergent number of dwarfs, $\textrm{Slope}_{P_r} < -3$. \subsubsection{The eccentricity distribution} \label{subsubsec:dwarf_eccentricity} For the probability distribution of the orbital eccentricities, we assume a linear function as in \citet{Banik_2018_Centauri}: \begin{eqnarray} P_e ~=~ 1 + \textrm{Slope}_{P_e} \left( e - \frac{1}{2} \right) \, , \label{P_e} \end{eqnarray} where $\textrm{Slope}_{P_e}$ is the slope of the eccentricity probability distribution. \subsubsection{Distribution of dwarf densities} \label{subsubsec:dwarf_density} The tidal susceptibility of a dwarf depends on both its mass and its radius, which in general differ from the values assumed in our test mass simulation. 
As discussed below Equation~\ref{eta_rtid}, the mass and radius of a dwarf affect its tidal susceptibility only to the extent that they affect its density $\rho$. Therefore, the $\eta$ values that we recorded in Section~\ref{orbit_integration} should be multiplied by a density-related factor accounting for the difference between the intended density $\rho$ and the fixed value $\rho_0$ assumed in that section. We therefore set \begin{eqnarray} \eta_{\textrm{max}} ~&=&~ \eta_{\textrm{max, 0}} \left( \frac{\rho}{\rho_0} \right)^{-1/3} \, , \\ \eta_{\textrm{max, recent}} ~&=&~ \eta_{\textrm{max, recent, 0}} \left( \frac{\rho}{\rho_0} \right)^{-1/3} \, , \end{eqnarray} where the `0' subscript denotes values obtained in Section~\ref{orbit_integration}. The $-1/3$ exponent comes from the fact that $\eta \propto {r_h}/M^{1/3}$ in both theories. The density $\rho$ of each Fornax dwarf within its $r_h$ can be inferred from the data in the FDS catalogue using $\rho = 3 M_{\star}/ \left( 8 \mathrm{\pi} r_h^3\right)$. Fig.~\ref{fig:hist_dens} shows a histogram of the so-obtained densities of these dwarfs, from which it can be seen that the FDS distribution of $\log_{10} \, \rho$ follows a Gaussian distribution with mean $-2.74$ in units of $M_{\odot}/\textrm{pc}^3$. Therefore, when we assign a density to each of the simulated dwarfs obtained in Section~\ref{orbit_integration}, we associate a probability to this density according to a log-normal distribution. This is assumed to be independent of $R_i$ since the central region of a cluster should be able to accrete dwarfs that formed further out, leading to mixing of dwarfs that formed in different positions within the cluster. In order to set the lowest density that can be assigned to a dwarf in a way that is consistent with the observational constraints of the FDS, we check down to which surface brightness $\mu$ dwarfs can be detected in this survey. 
The limiting $\mu$ is given by the $1\sigma$ signal-to-noise threshold per pixel, which in the FDS is $27.8 \, \textrm{mag} \, \textrm{arcsec}^{-2}$ in the red band \citep[section~4.1 of][]{Venhola_2018}. To infer the corresponding $\rho$, we first convert this $\mu$ value to astronomical units ($L_{\odot}/\textrm{pc}^2$): \begin{eqnarray} \log_{10} \, \mu \left[ L_{\odot}/\textrm{pc}^2 \right] = \frac{\mu \left[ \textrm{mag} \, \textrm{arcsec}^{-2} \right] - 21.57 - \textrm{Mag}_{\odot}}{-2.5} \, , \end{eqnarray} where $\textrm{Mag}_{\odot} = 4.65$ is the absolute magnitude of the Sun in the red band \citep[table 3 of][]{Willmer_2018}. This gives $\mu_{\textrm{min}} = 0.23 \, L_{\odot}/\textrm{pc}^2$. We then use the mass-luminosity relation (solid grey line in Fig.~\ref{fig:M_L}) to obtain that $M/L_{r'} = 1.10 \pm 0.38 \, M_{\odot}/L_{\odot, r'}$. From this we can convert $\mu_{\textrm{min}}$ to a surface density $\Sigma_{\textrm{min}}$ with some error due to the scatter in $M/L_{r'}$, yielding $\Sigma_{\textrm{min}} = 0.26 \pm 0.09 \, M_{\odot}/\textrm{pc}^2$. Finally, we can convert this $\Sigma_{\textrm{min}}$ to a threshold density $\rho_t$ by plotting the surface density of the Fornax dwarfs against their volume density and doing a linear regression (Fig.~\ref{fig:surfdens_voldens}). Since the slope is very close to 1, we fix it to 1 for simplicity, leading to a fixed ratio of $\rho/\Sigma = 0.59 \pm 0.33 \, \textrm{kpc}^{-1}$. The limiting $\rho$ of the Fornax survey that we obtain with this method is $\rho_t = 1.51^{+1.67}_{-1.09} \times 10^{-4} \, M_{\odot}/\textrm{pc}^3$ considering the $1\sigma$ lower and upper limits to both $M/L_{r'}$ and $\rho/\Sigma$. From Fig.~\ref{fig:hist_dens}, we can see that the distribution of dwarfs is only included in its entirety if we take the lower limit and thus adopt a threshold of $\rho_t = \rho_{\textrm{min}} = 4.2 \times 10^{-5} \, M_{\odot}/\textrm{pc}^3$. 
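The density bookkeeping of this and the previous subsection can be sketched as below. The numerical constants ($\textrm{Mag}_{\odot} = 4.65$, $M/L_{r'} = 1.10$, $\rho/\Sigma = 0.59~\textrm{kpc}^{-1}$) are the central values quoted above, with the $1\sigma$ scatter ignored for brevity; lengths are assumed to be in pc, and the function names are ours.

```python
import math

def half_mass_density(M_star, r_h_pc):
    """Mean density within the half-mass radius r_h (in pc):
    rho = 3 M_star / (8 pi r_h^3), in M_sun / pc^3."""
    return 3.0 * M_star / (8.0 * math.pi * r_h_pc**3)

def rescale_eta(eta_0, rho, rho_0):
    """Rescale a tidal susceptibility recorded for the reference density
    rho_0 to a dwarf of density rho, using eta proportional to rho^(-1/3)."""
    return eta_0 * (rho / rho_0) ** (-1.0 / 3.0)

def surface_brightness_to_lum(mu_mag):
    """Convert a surface brightness in mag/arcsec^2 to L_sun/pc^2,
    with Mag_sun = 4.65 in the red band."""
    return 10.0 ** ((mu_mag - 21.57 - 4.65) / -2.5)

# Reference density of the orbit-grid dwarf (M = 3.16e7 M_sun, r_h = 0.84 kpc).
rho_0 = half_mass_density(3.16e7, 840.0)

# Detection threshold chain for the FDS (central values only):
mu_min = surface_brightness_to_lum(27.8)   # ~0.23 L_sun/pc^2
Sigma_min = 1.10 * mu_min                  # apply M/L_r' in M_sun/L_sun
rho_mean = Sigma_min * 0.59e-3             # apply rho/Sigma = 0.59 kpc^-1
```

The final line reproduces the central threshold value $\rho_t \approx 1.51 \times 10^{-4}~M_{\odot}/\textrm{pc}^3$ quoted above.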
Given that the images of the dwarfs have been carefully analysed by observers and labelled as `unclear' whenever the image was not clear enough, we assume that all the considered dwarfs were observed without difficulty by the FDS. Therefore, we consider that a reasonable lower limit for the density distribution in our statistical analysis should encompass all the dwarfs in the dataset, so we take $\rho_t = \rho_{\textrm{min}} = 4.2 \times 10^{-5} M_{\odot}/\textrm{pc}^3$ (black line in Fig.~\ref{fig:hist_dens}) as our nominal lower limit to the density distribution. This choice of $\rho_t$ is 0.09~dex below $\rho_{\textrm{min, FDS}}$, the lowest $\rho$ of any considered dwarf in the FDS. If instead we had assumed that $\rho_t = \rho_{\textrm{mean}} = 1.51 \times 10^{-4} M_{\odot}/\textrm{pc}^3$ (grey line in Fig.~\ref{fig:hist_dens}), we would have needed to discard 7 of the observed dwarfs in the FDS. These and other choices for $\rho_t$ are discussed in Section~\ref{discussion}. In the $\Lambda$CDM case, we need to include the halo mass within the baryonic extent of each dwarf (Equation~\ref{M_dwarf_rule}), leading to higher volume densities. This causes a steeper slope and a larger amount of scatter in the mass-luminosity relation, making it difficult to follow the above-mentioned method. To keep the procedure similar, we set $\rho_{\textrm{min}}$ to a value 0.09~dex below $\rho_{\textrm{min, FDS}}$ as this is the gap assumed for MOND. The steps involved with this model are shown in Appendix~\ref{dwarf_dens_LCDM}. \subsubsection{Disturbance to the dwarf structure} \label{subsubsec:dwarf_disturbance} Assuming that tides are the main cause of the apparent disturbance to the structure of many Fornax dwarfs, we expect the probability of a dwarf appearing perturbed to grow with its tidal susceptibility. 
We assume a linear relation between $\eta$ and the probability of disturbance ($P_{\textrm{dist}}$) with slope: \begin{eqnarray} \textrm{Slope}_{P_{\textrm{dist}}} ~=~ \frac{P_{\textrm{dist, ceiling}} - P_{\textrm{dist, floor}}}{\eta_{\textrm{destr}} - \eta_{\textrm{min, dist}}} \, , \end{eqnarray} where $\eta_{\textrm{min, dist}}$ is the lowest $\eta$ value at which the dwarf is disturbed by tides, $\eta_{\textrm{destr}}$ is the $\eta$ value at which the dwarf is destroyed (the algorithm rejects all simulated orbits in which $\eta_{\textrm{max}}$ surpasses this value), $P_{\textrm{dist, ceiling}}$ is the probability for a dwarf to appear disturbed right before it gets destroyed at $\eta = \eta_{\textrm{destr}}$, and $P_{\textrm{dist, floor}}$ is the minimum probability for a dwarf to appear disturbed if $\eta_{\textrm{max, recent}} < \eta_{\textrm{min, dist}}$. We allow $P_{\textrm{dist, floor}} > 0$ to capture the possibility that a dwarf appears disturbed for reasons unrelated to cluster tides, e.g. asymmetric star formation. Similarly, we expect that $P_{\textrm{dist, ceiling}} < 1$ because a significantly perturbed dwarf might be elongated along the line of sight and thus appear circular. For a dwarf with $\eta_{\textrm{max, recent}} \geq \eta_{\textrm{min, dist}}$, the probability of disturbance is: \begin{eqnarray} P_{\textrm{dist}} ~=~ P_{\textrm{dist, floor}} + \textrm{Slope}_{P_{\textrm{dist}}} \left( \eta_{\textrm{max, recent}} - \eta_{\textrm{min, dist}} \right). \end{eqnarray} \subsection{Comparison with observations} \label{comparison_observations} The observed parameters of the Fornax dwarfs that we aim to reproduce in our simulation are: \begin{enumerate} \item The distribution of sky-projected distances ($R_{\textrm{sky}}$) to the cluster centre; \item The distribution of apparent $\eta$ values at pericentre ($\eta_{\textrm{obs}}$); and \item The disturbed fraction of dwarfs as a function of $\eta_{\textrm{obs}}$. 
\end{enumerate} Because these quantities are projected or depend on the deprojection method, we need to obtain the $R_{\textrm{sky}}$ values of our simulated dwarfs and then deproject them using the same method that we use for the observed dwarfs. To obtain the $R_{\textrm{sky}}$ values for each 3D distance $R$ of the simulated dwarf, we consider how it appears when observed from all possible angles $0^{\circ} \leq \theta \leq 90^{\circ}$ in steps of $1^{\circ}$, where $\theta$ is the angle between $\bm{R}$ and the line of sight. The projected distance is given by \begin{eqnarray} R_{\textrm{sky}} ~=~ R \sin \theta \, . \label{R_sky} \end{eqnarray} Each value of $\theta$ is statistically weighted by the difference in $\cos \theta$ across the corresponding bin. We then apply the deprojection method described in Appendix~\ref{deproj} and obtain the corresponding distance at pericentre (Appendix~\ref{Rper}). With this, we can calculate $R_{\textrm{tid}}$ and $\eta$ at pericentre in a similar way to that in which we obtain these parameters for the observed dwarfs. We name the new $\eta$ parameter that we obtain with this method $\eta_{\textrm{obs}}$. Therefore, the simulated quantities that we compare to the previously mentioned observables are: $R_{\textrm{sky}}$, the distribution of $\eta_{\textrm{obs}}$, and the probability of disturbance at each $\eta_{\textrm{obs}}$. To do the comparison, we start by dividing the range of $R_{\textrm{sky}}$ and $\eta_{\textrm{obs}}$ into several bins. We then classify the observed dwarfs into these bins according to their values of projected distance or estimated $\eta$ at pericentre. To obtain the probability for a dwarf to have a projected distance or $\eta_{\textrm{obs}}$ which falls in the range of values delimited by each of these bins, we count the number of dwarfs in each bin and divide it by the total number of dwarfs. 
To obtain the probability of disturbance, we count the number of dwarfs classified as disturbed in each $\eta_{\textrm{obs}}$ bin and compare it to the total number of dwarfs in that bin. For the simulated sample (i.e., the dwarfs generated for all possible combinations of $R_i$, $e$, $\rho$, and $\theta$), we consider the same bins as for the observed sample. For each bin, we add the probability that each simulated dwarf has $R_{\textrm{sky}}$ or $\eta_{\textrm{obs}}$ values that fall in the range given by the bin. We then normalize this by the sum of all the probabilities in all bins. For the probability of disturbance, we apply an additional factor of $P_{\textrm{dist}}$ to the likelihood of each $\left( R_i, e, \rho, \theta \right)$ combination and add this to the appropriate $\eta_{\textrm{obs}}$ bin. We then divide this sum by the probability of $\eta_{\textrm{obs}}$ falling in that bin (i.e., without considering $P_{\textrm{dist}}$). To quantify how closely the properties of the simulated sample of dwarfs resemble the properties of the observed FDS dwarfs in terms of each of the above-mentioned observables, we use the binomial probability \begin{eqnarray} P_{\textrm{x}} ~=~ \prod_{\textrm{Bins}} \frac{T!}{ \left( T - S \right)! S!} p^S \left( 1 - p \right)^{T - S} \, , \label{binomial} \end{eqnarray} where $T$ is the total number of observed dwarfs, $S$ is the number of observed dwarfs in a bin, $p$ is the simulated probability that a dwarf is in that bin, and the `x' subscript refers to the observable under consideration. If this is the disturbed fraction, $T$ is the total number of observed dwarfs in a particular $\eta_{\textrm{obs}}$ bin, $S$ is the observed number of disturbed dwarfs in that bin, and $p$ is the probability given by the simulation that a dwarf in that bin is disturbed. 
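The comparison machinery can be sketched in Python: the viewing-angle weights for Equation~\ref{R_sky}, the linear disturbance probability of Section~\ref{subsubsec:dwarf_disturbance}, and the per-bin binomial likelihood of Equation~\ref{binomial}. The function names are ours, and the parameter values used in the examples are illustrative.

```python
import math

def rsky_weights(R, n_bins=90):
    """(R_sky, weight) pairs for viewing angles in 1-degree bins between
    0 and 90 degrees, each weighted by the difference in cos(theta)
    across the bin; the weights sum to 1 for isotropic viewing."""
    out = []
    for i in range(n_bins):
        th_lo = math.radians(i * 90.0 / n_bins)
        th_hi = math.radians((i + 1) * 90.0 / n_bins)
        th_mid = 0.5 * (th_lo + th_hi)
        out.append((R * math.sin(th_mid), math.cos(th_lo) - math.cos(th_hi)))
    return out

def prob_disturbed(eta_recent, eta_min_dist, eta_destr, p_floor, p_ceiling):
    """Linear model for the probability that a dwarf appears disturbed,
    rising from p_floor at eta_min_dist to p_ceiling at eta_destr."""
    if eta_recent < eta_min_dist:
        return p_floor
    slope = (p_ceiling - p_floor) / (eta_destr - eta_min_dist)
    return p_floor + slope * (eta_recent - eta_min_dist)

def log10_binomial(counts, probs, T):
    """log10 of the product over bins of C(T, S) p^S (1 - p)^(T - S),
    with S and p per bin; requires 0 < p < 1 in every bin."""
    total = 0.0
    for S, p in zip(counts, probs):
        total += (math.log10(math.comb(T, S)) + S * math.log10(p)
                  + (T - S) * math.log10(1.0 - p))
    return total
```

Working with $\log_{10}$ likelihoods avoids numerical underflow when multiplying many small per-bin probabilities.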
The total probability is given by multiplying all probabilities for all the bins and all the observables: \begin{eqnarray} P_{\textrm{total}} ~=~ P_{R_{\textrm{sky}}} P_{\eta_{\textrm{obs}}} P_{\left.\textrm{perturbed}\right|\eta_{\textrm{obs}}} \, . \label{P_tot} \end{eqnarray} In order to maximize this $P_{\textrm{total}}$, we leave as free parameters: $r_{\textrm{core}}$, $\textrm{Slope}_{P_r}$, $\textrm{Slope}_{P_e}$, $\eta_{\textrm{min, dist}}$, $\eta_{\textrm{destr}}$, $P_{\textrm{dist, floor}}$, and $P_{\textrm{dist, ceiling}}$. We explore this set of parameter values using the Markov Chain Monte Carlo (MCMC) method discussed below. \subsubsection{MCMC analysis} \label{MCMC} The MCMC method generates a sequence of parameter values in such a way that their frequency distribution matches the posterior inference on the model parameters. The basic idea is to start with some initial guess for the parameters with likelihood $P_{\rm{total}}$ and generate a proposal by adding Gaussian random perturbations to the parameters, leading to a likelihood of $P_{\rm{next}}$ with the revised parameters. The proposal is accepted if $P_{\rm{next}} > P_{\rm{total}}$ or if a random number drawn uniformly from the range $\left( 0-1 \right)$ is $<P_{\rm{next}}/P_{\rm{total}}$. If the proposal is rejected, the parameter perturbations are not applied but the previous parameters must be recorded once more. 
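A single MCMC trial as described above might look like the following sketch, again working with $\log_{10}$ likelihoods to avoid underflow; the helper names are ours.

```python
import random

def metropolis_trial(params, log10_P, propose, log10_likelihood, rng=random):
    """One trial of the Metropolis algorithm: accept the perturbed
    parameters if they are more likely, or otherwise with probability
    P_next / P_total; on rejection, the previous parameters are
    returned (i.e. recorded once more)."""
    trial = propose(params)
    log10_P_next = log10_likelihood(trial)
    if (log10_P_next > log10_P
            or rng.random() < 10.0 ** (log10_P_next - log10_P)):
        return trial, log10_P_next
    return params, log10_P
```

In the full analysis, a proposal that violates the priors of Table~\ref{tab:priors} is redrawn without counting as a new trial, and the Gaussian perturbation step sizes are tuned so the acceptance fraction is close to 0.234.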
\begin{table} \centering \caption{Priors for the free parameters in our model of the Fornax Cluster dwarf galaxy population.} \begin{tabular}{ccc} \hline Parameter & Minimum & Maximum \\ \hline $\textrm{Slope}_{P_r}$ & $-9$ & $-3$ \\ $\textrm{Slope}_{P_e}$ & $-2$ & 2 \\ $r_{\textrm{core}}$/Mpc & 0.01 & 3 \\ $P_{\textrm{dist, floor}}$ & 0 & 1 \\ $P_{\textrm{dist, ceiling}}$ & 0 & 1 \\ $\eta_{\textrm{min, dist}}$ & 0 & 5 \\ $\eta_{\textrm{destr}}$ & $\eta_{\textrm{min, dist}}$ & 5 \\ \hline \end{tabular} \label{tab:priors} \end{table} We run a total of $10^5$ trials in each chain and check that the acceptance fraction is close to 0.234, the optimal acceptance rate for an efficient MCMC algorithm \citep*{Gelman_1997}. This is achieved by rerunning the chain a few times to determine the optimal step sizes for the parameter perturbations. To ensure that the algorithm chooses physically reasonable parameter values, we impose the priors listed in Table~\ref{tab:priors}. If the algorithm chooses a value for any of these parameters outside the specified range, it is asked to draw another proposal, but this does not count as a new MCMC trial. We let the algorithm consider a sufficiently large number of proposals at each stage in the chain that we are sure to obtain a physically plausible proposal for the parameter combination to try next, even if this is rejected because it fits the observations poorly. To prevent the MCMC algorithm from starting with a set of values which is too far away from the optimal set, we first fit the simulation's free parameters to the observations using a gradient ascent algorithm \citep{Fletcher_1963}. This maximizes $P_{\rm{total}}$ by increasing or decreasing the step size according to how much $P_{\textrm{total}}$ increased or decreased with respect to the previous set of parameter values that it tested. This is done until the step size becomes very small, indicating that the algorithm cannot increase $P_{\textrm{total}}$ any more. 
Then the algorithm converges and returns the optimal set of parameter values. \section{Results of the statistical analysis} \label{Results} We present our best-fitting model in each theory (Section~\ref{sec:best_model}) before discussing the parameter uncertainties obtained with the MCMC method (Section~\ref{sec:uncertainties}). \subsection{The best-fitting model} \label{sec:best_model} \begin{table} \centering \caption{The parameters of our best-fitting model in each theory, obtained with the gradient ascent method (columns $2-3$) and based on $10^5$ MCMC trials (columns $4-5$). The last row shows the likelihood of the model (Equations~\ref{binomial} and \ref{P_tot}).} \begin{tabular}{ccccc} \hline & \multicolumn{2}{c}{Gradient ascent} & \multicolumn{2}{c}{MCMC} \\ Parameter & $\Lambda$CDM & MOND & $\Lambda$CDM & MOND \\ \hline $\textrm{Slope}_{P_r}$ & $-3.77$ & $-3.67$ & $-5.85$ & $-4.55$ \\ $\textrm{Slope}_{P_e}$ & $-1.55$ & 0.34 & $-1.98$ & $-1.70$ \\ $r_{\textrm{core}}$ & 0.62 & 0.65 & 1.35 & 0.90 \\ $P_{\textrm{dist, floor}}$ & 0.09 & 0.04 & 0.10 & 0.02 \\ $P_{\textrm{dist, ceiling}}$ & 0.65 & 0.76 & 0.54 & 0.53 \\ $\eta_{\textrm{min, dist}}$ & 0.11 & 0.24 & 0.12 & 0.10 \\ $\eta_{\textrm{destr}}$ & 0.24 & 1.88 & 0.23 & 1.24 \\ \hline $\log_{10} P_{\textrm{total}}$ & $-30.69$ & $-32.46$ & $-30.53$ & $-32.25$ \\ \hline \end{tabular} \label{tab:best_fit} \end{table} The optimal set of parameters found by the gradient ascent algorithm are given in columns 2 and 3 of Table \ref{tab:best_fit} for $\Lambda$CDM and MOND, respectively. These are the initial values at which we start the MCMC chains. Due to the use of $10^5$ trials, the MCMC method provides a set of parameter values (a model) that fits the observations slightly better (higher $P_{\textrm{total}}$ in Equation~\ref{P_tot}) than we achieved with gradient ascent. 
The best-fit parameter values in the MCMC chain are also given in Table~\ref{tab:best_fit} (columns $4-5$) along with the goodness of fit to the observations (last row). In this regard, there is little difference between the theories, though the optimal parameters are rather different. We will return to this later (Section~\ref{discussion}). Using these parameters, Fig.~\ref{fig:prob_dist} shows the simulated and observed probability distributions of $R_{\textrm{sky}}$, $\eta_{\textrm{obs}}$, and disturbed fraction vs. $\eta_{\textrm{obs}}$, revealing a good overall fit to the observations in both theories.\footnote{The low values of $P_{\textrm{total}}$ arise due to the large sample size.} In particular, the rising likelihood of a dwarf appearing disturbed as a function of $\eta_{\textrm{obs}}$ is nicely reproduced by the best-fitting models. \subsection{Parameter uncertainties} \label{sec:uncertainties} \renewcommand{\arraystretch}{1.2} \begin{table} \centering \caption{The most likely value and $1\sigma$ confidence interval of each model parameter in our test mass simulation of the Fornax Cluster dwarf galaxy population, based on $10^5$ MCMC trials.} \begin{tabular}{ccc} \hline Parameter & $\Lambda$CDM & MOND \\ \hline $\textrm{Slope}_{P_r}$ & $-7.43_{-0.99}^{+2.24}$ & $-7.58_{-0.88}^{+2.18}$ \\ $\textrm{Slope}_{P_e}$ & $-1.65_{-0.30}^{+1.80}$ & $0.75_{-1.22}^{+1.20}$ \\ $r_{\textrm{core}}$ & $2.00_{-0.98}^{+0.34}$ & $2.02_{-0.88}^{+0.52}$ \\ $P_{\textrm{dist, floor}}$ & $0.10_{-0.03}^{+0.03}$ & $0.07_{-0.03}^{+0.04}$ \\ $P_{\textrm{dist, ceiling}}$ & $0.49_{-0.15}^{+0.30}$ & $0.79_{-0.20}^{+0.15}$ \\ $\eta_{\textrm{min, dist}}$ & $0.11_{-0.06}^{+0.05}$ & $0.24_{-0.19}^{+0.24}$ \\ $\eta_{\textrm{destr}}$ & $0.25_{-0.03}^{+0.07}$ & $1.88_{-0.53}^{+0.85}$ \\ \hline \end{tabular} \label{tab:MCMC_paramvalues} \end{table} \renewcommand{\arraystretch}{1} To fit the test mass simulation of the Fornax dwarf galaxy system to its observed properties, we require several 
free parameters in the model (Section~\ref{test_mass}). Having discussed the values of these parameters in the most likely model (Table~\ref{tab:best_fit}), we now find the most likely value of each parameter and its uncertainty. This is somewhat different because instead of considering the most likely model, we use the MCMC chain to obtain the posterior inference on each model parameter, which we then characterize using its mode and $1\sigma$ confidence interval. The results are shown in Table~\ref{tab:MCMC_paramvalues}. We also use Fig.~\ref{fig:triang_MONDLCDM} to show the results of the MCMC analysis by plotting the probability distribution of each parameter and showing contour plots for all possible parameter pairs. The parameters $\textrm{Slope}_{P_r}$, $r_{\textrm{core}}$, $P_{\textrm{dist, floor}}$, and $P_{\textrm{dist, ceiling}}$ cover a similar range of values in both theories. This is to be expected because the distribution of dwarfs in the Fornax Cluster is known observationally such that $\textrm{Slope}_{P_r}$ and $r_{\textrm{core}}$ are not strong tests of the gravity law, while $P_{\textrm{dist, floor}}$ and $P_{\textrm{dist, ceiling}}$ are set by the proportion of dwarfs in different $\eta_{\textrm{obs}}$ bins that appear disturbed (Fig.~\ref{fig:tid_edge}). Unlike these four parameters, $\textrm{Slope}_{P_e}$, $\eta_{\textrm{min, dist}}$, and $\eta_{\textrm{destr}}$ cover very different ranges in these two models. As discussed below, these are the parameters which can help us discern between $\Lambda$CDM and MOND, allowing us to assess which model performs better when compared to observations. The inference on $\textrm{Slope}_{P_e}$ (shown in the top panel of column~2 of Fig.~\ref{fig:triang_MONDLCDM}) peaks close to the minimum allowed value of $-2$ in $\Lambda$CDM. The opposite happens in MOND, where the peak is close to 1. Negative slopes in Equation \ref{P_e} assign higher probabilities to nearly circular orbits. 
However, according to \citet{Ambartsumian_1937}, we expect the eccentricity distribution to be thermal and thus have $\textrm{Slope}_{P_e} \approx 2$ \citep[for a derivation, see section~4.2 of][]{Kroupa_2008}. In this regard, MOND performs better than $\Lambda$CDM. The major differences between $\Lambda$CDM and MOND are in the parameters $\eta_{\textrm{min, dist}}$ and $\eta_{\textrm{destr}}$, whose posterior inferences are shown in detail in Figs.~\ref{fig:Min_eta_dist} and \ref{fig:eta_destr} due to their importance to our argument. The low values in $\Lambda$CDM arise because dwarfs have quite strong self-gravity by virtue of being embedded in a dominant dark matter halo throughout their trajectory. This makes them less susceptible to the effect of tides (stronger self-gravity raises $r_{\textrm{tid}}$ and thus reduces $\eta$; see Equation~\ref{eta_rtid}). As a result, the algorithm needs to set $\eta_{\textrm{min, dist}}$ and $\eta_{\textrm{destr}}$ to very low values in order to match the observed fact that many dwarfs are morphologically disturbed and we do not observe dwarfs beyond a certain limiting $\eta$. MOND also boosts the baryonic self-gravity of a dwarf, but this boost is damped due to the EFE of the cluster's gravitational field. This effect gets stronger as dwarfs approach the pericentre of their orbits, to the point that dwarfs which are sufficiently close to the cluster centre can become almost Newtonian despite a very low internal acceleration. Because of this, MONDian dwarfs are significantly more susceptible to tides than their $\Lambda$CDM counterparts. This causes the algorithm to choose significantly higher $\eta_{\textrm{min, dist}}$ and $\eta_{\textrm{destr}}$ values in the MOND case. \textit{N}-body simulations of dwarf galaxies show that $\eta_{\textrm{destr}}$ should be $\approx 1$ in $\Lambda$CDM \citep{Penyarrubia_2009, Van_den_Bosch_2018}. 
However, fitting the observations with our MCMC method gives a much lower value of $\eta_{\textrm{destr}} = 0.25^{+0.07}_{-0.03}$. This implies an important discrepancy between model expectations in $\Lambda$CDM and actual observations of dwarf galaxies in the Fornax Cluster. Turning to MOND, comparing the $\eta_{\textrm{destr}}$ value inferred from observations with that obtained using simulations is not so straightforward because the best available \textit{N}-body study of the resilience of Milgromian dwarf galaxies to tides is by now very old and poorly suited to our purposes \citep{Brada_2000_tides}. Because of this, we perform our own \textit{N}-body simulations of a typical Fornax Cluster dwarf galaxy, as described next. \section{\textit{N}-body simulations of a Fornax dwarf} \label{Nbody_sim} As the last part of this project, we conduct our own \textit{N}-body simulations of a typical Fornax dwarf to determine the expected $\eta_{\textrm{destr}}$ in MOND. The motivation is that while the analytic formula for the tidal radius (Equation~\ref{rtid_MOND}) should capture the scalings with the relevant variables like the tidal stress and the EFE, there could be a constant numerical pre-factor that arises from a detailed simulation. We investigated this using the Milgromian \textit{N}-body code \textsc{phantom of ramses} (\textsc{por}) developed in Bonn by \citet{Lughausen_2015}, who adapted it from the Newtonian \textit{N}-body code \textsc{ramses} \citep{Teyssier_2002}. As a result, \textsc{por} inherits many features of \textsc{ramses}, including the adaptive mesh refinement technique to better resolve denser regions. \textsc{por} can work with both particle and gas dynamics.
It is suited for simulations of isolated galaxies \citep{Banik_2020_M33, Roshan_2021_disc_stability, Banik_2022_fake_inclination}, interacting galaxies \citep{Renaud_2016, Thomas_2017, Thomas_2018, Bilek_2018, Banik_2022_satellite_plane}, galaxy formation \citep{Wittenburg_2020}, and even for cosmological structure formation (N. Wittenburg et al., in preparation). The main difference between \textsc{por} and \textsc{ramses} is the fact that \textsc{por} solves the ordinary Poisson equation twice, with $\bm{g}_{_N}$ found using standard techniques in the first stage and the following equation solved in the second stage to implement the MOND corrections: \begin{eqnarray} \nabla \cdot \bm{g} ~=~ \nabla \cdot \left( \nu \bm{g}_{_N} \right) \, , \end{eqnarray} where $\nu$ was defined in Equation~\ref{simple_interpolating}. The boundary condition for the Milgromian potential $\Phi$ is: \begin{eqnarray} \Phi ~=~ \sqrt{GMa_{_0}} \ln r \, , \end{eqnarray} where $M$ is the total mass in the simulation volume and $r$ is the distance from the barycentre in the simulation unit of length, the choice of which has no bearing on the result. Since Fornax Cluster dwarfs are expected to contain little gas (Section~\ref{effects_gravi}), we can simplify the set-up greatly by using the `particle-only' version of the \textsc{por} code. In particular, we use the `staticparts' patch \citep[described in section~4.1 of][]{Nagesh_2021} which allows the use of particles that provide gravity but do not move if their mass exceeds a user-defined threshold. This is helpful because we treat the cluster gravity as sourced by a point mass fixed at the origin, with the dwarf at three possible initial distances $R_i$. 
To ensure the gravity on the dwarf is the same as in the Fornax Cluster, we use Equation \ref{M_cluster} to obtain $g_{c}$ and then obtain the corresponding $g_{c,N}$ with the simple interpolating function in the inverse form (Equation~\ref{g_N_g}), from which we get the central mass: \begin{eqnarray} M_c ~\equiv~ \frac{g_{c,N}R_i^2}{G} \, . \end{eqnarray} The different MOND dynamical cluster masses obtained in this way are: $M_c = 2.18 \times 10^{12}~M_{\odot}$ at 150~kpc, $M_c = 2.89 \times 10^{12}~M_{\odot}$ at 300~kpc, and $M_c = 3.31 \times 10^{12}~M_{\odot}$ at 450~kpc. We use $7-13$ refinement levels and set the box length to $6 \, R_i$ as the apocentre could be at almost $2 \, R_i$. For the dwarf, we use a half-mass radius of $r_h = 0.84$~kpc and a total mass of $M_{\textrm{dwarf}} = 3.16 \times 10^7~M_{\odot}$ represented by $10^5$ particles, making the mass resolution $316 \, M_{\odot}$. These are typical parameters for a dwarf in the Fornax Cluster (see the red star in Fig.~\ref{fig:surfdens_voldens}). Setting the velocity dispersion $\sigma$ is non-trivial because we need to account for the cluster EFE when we initiate the simulation. We do this by using the Fornax dwarf templates kindly provided by Prof. Xufen Wu, who used a similar method to that described in section~3.3 of \citet{Haghi_2019_DF2} to generate these templates. The idea is to take a Newtonian template and then enhance the velocities by the factor needed to ensure virial equilibrium given the enhanced gravity \citep{Wu_2013}. To set up the dwarf, we apply a Galilean transformation to the template whereby the Cartesian positions of all particles are boosted by ($x_0 = R_i$, $y_0 = 0$, $z_0 = 0$) and the velocities are boosted depending on the circular velocity at $R_i$ and the orbital eccentricity $e$, as described in Section~\ref{orbit_integration}. We start the simulation with the dwarf at the semi-major axis of its orbit and receding from the cluster. 
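The conversion from the cluster's MOND field to the Newtonian-equivalent central mass is easy to sketch. In the snippet below (an illustrative sketch, not our analysis code; the input $g_c$ is a hypothetical value in $(\textrm{km/s})^2\,\textrm{kpc}^{-1}$ rather than one computed from Equation~\ref{M_cluster}), the simple interpolating function is inverted analytically via $g_N = g^2/\left(g + a_0\right)$:

```python
import numpy as np

G  = 4.301e-6    # kpc (km/s)^2 / Msun
a0 = 3.70e3      # Milgrom's a0 = 1.2e-10 m/s^2, converted to (km/s)^2 / kpc

def g_newton(g):
    # Analytic inverse of the simple interpolating function:
    # g = nu(g_N/a0) g_N with nu(y) = 1/2 + sqrt(1/4 + 1/y)
    # is equivalent to g_N = g^2 / (g + a0).
    return g**2 / (g + a0)

def nu_simple(y):
    return 0.5 + np.sqrt(0.25 + 1.0 / y)

R_i  = 150.0               # kpc, initial distance of the dwarf
g_c  = 1.47e3              # (km/s)^2/kpc, hypothetical MOND field at R_i
g_cN = g_newton(g_c)       # Newtonian-equivalent field
M_c  = g_cN * R_i**2 / G   # central point mass sourcing this field
```

Because the inversion is exact for the simple interpolating function, applying $\nu$ to the recovered $g_{c,N}$ returns the input $g_c$, which provides a quick internal consistency check.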
We then evolve the system until shortly after the dwarf reaches apocentre for the second time so that there is ample time to assess the impact of the pericentre passage. The code generates an output of the mass, position, and velocity of every particle every 20~Myr, allowing us to analyse the structure of the dwarf and determine whether it has been destroyed. Our main objective is to find the threshold value of $\eta$ at pericentre beyond which the dwarf gets destroyed in the simulation. This requires us to perform multiple simulations with different eccentricities in order to obtain different $\eta$ values at pericentre. To guide our choice of parameters, we use a simple 2D Runge-Kutta integrator for a test particle orbiting a point mass in MOND. This is also very helpful when deciding the appropriate duration for each simulation, which we keep fixed for models with the same $R_i$. \subsection{Analysis} We extract the particle positions $\bm{r}_i$, velocities $\bm{v}_i$, and masses $m_i$ using \textsc{extract\_por} \citep{Nagesh_2021}, with the index $i$ used in what follows to distinguish the particles. To assess if a dwarf has been destroyed, we infer three properties of the dwarf from the output at each snapshot: its half-mass radius, velocity dispersion, and aspect ratio. Unlike in Newtonian gravity, the time-varying EFE implies that these quantities are expected to vary around the orbit even if the dwarf is completely tidally stable ($\eta \ll 1$), perhaps most famously for the velocity dispersion \citep{Kroupa_2018_DF2}. To assess tidal stability, we check whether the dwarf responds adiabatically to the time-varying EFE. Tidal stability requires the dwarf to recover the initial values for these parameters after the pericentre passage, at least by the time of the next apocentre. If this is not the case, then the dwarf is either destroyed or unstable, in which case several pericentre passages may be required to destroy the dwarf.
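Such an orbit integrator can be sketched in a few lines. The following Python sketch (our own illustrative code, not the integrator actually used; it adopts units with $G = M = a_0 = 1$ and the simple interpolating function) propagates a circular deep-MOND orbit with classical fourth-order Runge-Kutta steps:

```python
import numpy as np

def nu_simple(y):
    # simple interpolating function: g = nu(g_N/a0) * g_N
    return 0.5 + np.sqrt(0.25 + 1.0 / y)

def accel(pos):
    # MOND acceleration towards a unit point mass (units G = M = a0 = 1)
    r = np.linalg.norm(pos)
    gN = 1.0 / r**2
    return -nu_simple(gN) * gN * pos / r

def rk4_step(pos, vel, dt):
    # one classical fourth-order Runge-Kutta step for the coupled ODEs
    k1v, k1x = accel(pos), vel
    k2v, k2x = accel(pos + 0.5 * dt * k1x), vel + 0.5 * dt * k1v
    k3v, k3x = accel(pos + 0.5 * dt * k2x), vel + 0.5 * dt * k2v
    k4v, k4x = accel(pos + dt * k3x), vel + dt * k3v
    return (pos + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            vel + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

# circular orbit at r = 10, well into the deep-MOND regime in these units
r0 = 10.0
v0 = np.sqrt(np.linalg.norm(accel(np.array([r0, 0.0]))) * r0)
pos, vel = np.array([r0, 0.0]), np.array([0.0, v0])
radii = []
for _ in range(6200):                 # roughly one orbital period at dt = 0.01
    pos, vel = rk4_step(pos, vel, 0.01)
    radii.append(np.linalg.norm(pos))
```

Because the initial speed is set to the local circular velocity, the orbital radius should stay essentially constant, which provides a quick correctness check before exploring eccentric orbits.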
However, it is beyond the scope of this project to simulate multiple pericentre passages. \subsubsection{Finding the barycentre} We apply an iterative outlier rejection scheme to accurately obtain the barycentre position $\overline{\bm{r}}$ and velocity $\overline{\bm{v}}$ based on the positions and velocities of the particles. In the first iteration, we consider all the particles and calculate \begin{eqnarray} \overline{\bm{r}} ~\equiv~ \frac{\sum_i m_i \bm{r}_i}{M} \, , \quad M \equiv \sum_i m_i \, . \end{eqnarray} We use a similar definition for $\overline{\bm{v}}$. The barycentre position and velocity are then used to find the root mean square (rms) dispersion in position and velocity. \begin{eqnarray} r^2_{\textrm{rms}} ~\equiv~ \frac{\sum_i m_i {\lvert \bm{r}_i - \overline{\bm{r}} \rvert}^2}{M} \, , \label{r_rms} \end{eqnarray} with a similar definition used for $v_{\textrm{rms}}$, which we call $\sigma$ for consistency with other workers. This lets us define a $\chi^2$ statistic for each particle based on its position. \begin{eqnarray} \chi^2_{\textrm{pos}} ~\equiv~ \left( \frac{\left| \bm{r}_i - \overline{\bm{r}} \right|}{r_{\textrm{rms}}} \right)^2 \, , \end{eqnarray} with a similar definition used for $\chi^2_{\textrm{vel}}$ based on the velocity. In the second iteration, we repeat the above steps for only those particles whose $\chi^2_{\textrm{pos}}$ and $\chi^2_{\textrm{vel}}$ are both below 25, which changes the calculated quantities. In subsequent iterations, we expect to have pinned down the barycentre more precisely, so we use the stricter condition that \begin{eqnarray} \chi^2_{\textrm{pos}} + \chi^2_{\textrm{vel}} ~<~ \chi^2_{\textrm{max}} \, , \end{eqnarray} where $\chi^2_{\textrm{max}} = 11.83$ is set so that the likelihood of the $\chi^2$ statistic for two degrees of freedom exceeding $\chi^2_{\textrm{max}}$ is the same as the likelihood of a Gaussian random variable deviating from its mean value by $\geq 3\sigma$. 
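The scheme above can be sketched as follows (a simplified Python sketch with our own function names, not the actual analysis code; the demo data are synthetic):

```python
import numpy as np

def barycentre(r, v, m, chi2_max=11.83, tol=1e-5, max_iter=100):
    """Iterative outlier-rejecting barycentre, following the scheme above."""
    keep = np.ones(len(m), dtype=bool)
    rbar = vbar = None
    for it in range(max_iter):
        w = m[keep]
        rbar_new = np.average(r[keep], axis=0, weights=w)
        vbar_new = np.average(v[keep], axis=0, weights=w)
        r_rms2 = np.average(((r[keep] - rbar_new)**2).sum(axis=1), weights=w)
        sigma2 = np.average(((v[keep] - vbar_new)**2).sum(axis=1), weights=w)
        chi2_pos = ((r - rbar_new)**2).sum(axis=1) / r_rms2
        chi2_vel = ((v - vbar_new)**2).sum(axis=1) / sigma2
        if it == 0:
            # second iteration keeps particles with both chi^2 values below 25
            new_keep = (chi2_pos < 25.0) & (chi2_vel < 25.0)
        else:
            # later iterations: joint cut equivalent to 3 sigma for 2 dof
            new_keep = chi2_pos + chi2_vel < chi2_max
        converged = (rbar is not None and
                     ((rbar_new - rbar)**2).sum() / r_rms2 +
                     ((vbar_new - vbar)**2).sum() / sigma2 < tol and
                     abs(new_keep.sum() - keep.sum()) <= np.sqrt(keep.sum()))
        rbar, vbar, keep = rbar_new, vbar_new, new_keep
        if converged:
            break
    return rbar, vbar, np.sqrt(sigma2), keep

# demo on synthetic data: a Gaussian 'dwarf' plus distant contaminants
rng = np.random.default_rng(1)
r = np.vstack([rng.normal(5.0, 1.0, size=(5000, 3)),
               rng.normal(60.0, 1.0, size=(100, 3))])
v = rng.normal(0.0, 2.0, size=(5100, 3))
m = np.ones(len(r))
rbar, vbar, sigma, keep = barycentre(r, v, m)
```

In the demo, the contaminants shift the naive barycentre, but they fail the $\chi^2$ cuts and are rejected within a few iterations.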
Our procedure can thus be thought of as $3\sigma$ outlier rejection. We consider the algorithm to have converged once the difference in $\overline{\bm{r}}$ and $\overline{\bm{v}}$ between successive iterations is so small that \begin{eqnarray} \frac{\left| \Delta \overline{\bm{r}} \right|^2}{r^2_{\textrm{rms}}} + \frac{\left| \Delta \overline{\bm{v}} \right|^2}{\sigma^2} ~<~ 10^{-5} \, , \end{eqnarray} with the additional requirement that the number of `accepted' particles deviates from that in the previous iteration by no more than the Poisson uncertainty. In the analyses described below, we will only consider those particles which are accepted on the final iteration. \subsubsection{Velocity dispersion} The velocity dispersion $\sigma$ is already available as part of our $3\sigma$ outlier rejection system for finding the barycentre of the dwarf. This 3D $\sigma$ is found by applying Equation~\ref{r_rms} but using velocities rather than positions. If the dwarf were isolated and unaffected by tides, equation 14 of \citet{Milgrom_1994_virial} tells us to expect that \begin{eqnarray} \sigma ~=~ \left( \frac{4}{9} GMa_{_0} \right)^{1/4} \, . \label{v_MOND_iso} \end{eqnarray} This assumes dynamical equilibrium and the deep-MOND limit, but does not make any assumptions concerning whether the orbits are mostly radial or tangential. If the system is not spherically symmetric, the velocity dispersion would not be the same along every direction, but the bulk 3D velocity dispersion above would still hold. Another important caveat is that the system should consist only of particles with $m_i \ll M$. \subsubsection{Half-mass radius} To obtain the half-mass radius $r_h$, we order the particles in ascending order of their distance to the above-determined dwarf barycentre $\overline{\bm{r}}$. 
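For the dwarf simulated here ($M_{\textrm{dwarf}} = 3.16 \times 10^7 \, M_{\odot}$), Equation~\ref{v_MOND_iso} is straightforward to evaluate numerically. The snippet below is a simple check of this isolated deep-MOND value; the cluster EFE would push the actual dispersion below it:

```python
# Evaluate the isolated deep-MOND velocity dispersion in SI units
G    = 6.674e-11          # m^3 kg^-1 s^-2
a0   = 1.2e-10            # m s^-2, Milgrom's constant
Msun = 1.989e30           # kg

M = 3.16e7 * Msun         # total mass of the simulated dwarf
sigma = (4.0 / 9.0 * G * M * a0)**0.25 / 1e3   # -> km/s, roughly 22 km/s
```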
We then find the index $p$ such that \begin{eqnarray} \sum_{i=1}^p m_i ~=~ \frac{M}{2} \, , \end{eqnarray} with the total mass $M$ of all accepted particles in general being slightly below the initial mass of the dwarf. By definition, $r_h$ is the distance of particle $p$ from the dwarf's barycentre. \begin{eqnarray} r_h ~\equiv~ \lvert \bm{r}_p - \overline{\bm{r}} \rvert \, . \end{eqnarray} \subsubsection{Aspect ratio} To quantify the shape of the simulated dwarf, we obtain its inertia tensor \begin{eqnarray} \matr{I}_{jk} ~\equiv~ \sum_i m_i \left( \bm{r}_i - \overline{\bm{r}} \right)_j \left( \bm{r}_i - \overline{\bm{r}} \right)_k \, , \end{eqnarray} where the spatial indices $j$ and $k$ run over the three spatial dimensions. We then find the eigenvalues of $\matr{I}$. The aspect ratio of the dwarf is defined as \begin{eqnarray} \textrm{aspect ratio} ~\equiv~ \sqrt{\frac{\lambda_{\textrm{min}}}{\lambda_{\textrm{max}}}} \, , \label{Aspect_ratio_def} \end{eqnarray} where $\lambda_{\textrm{min}}$ ($\lambda_{\textrm{max}}$) is the smallest (largest) eigenvalue. \subsection{Results} The results of our \textsc{por} simulations are shown in Fig.~\ref{fig:dwarf_simulations}. Unlike in the Newtonian case, even dwarfs with a very low tidal susceptibility exhibit significant variations in their properties due to the time-varying EFE. We can see that in the cases with low $e$, the dwarf manages to recover the properties it had before pericentre.
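Both quantities are straightforward to obtain from the particle data. A minimal Python sketch (our own function names, not the analysis code; the demo uses equal-mass particles drawn from a Gaussian flattened along one axis, so the recovered aspect ratio should be close to the imposed value of 0.5):

```python
import numpy as np

def half_mass_radius(r, m, rbar):
    # sort particles by distance from the barycentre and find the index p
    # at which the enclosed mass first reaches M/2
    d = np.linalg.norm(r - rbar, axis=1)
    order = np.argsort(d)
    p = np.searchsorted(np.cumsum(m[order]), 0.5 * m.sum())
    return d[order][p]

def aspect_ratio(r, m, rbar):
    # inertia tensor I_jk = sum_i m_i (r_i - rbar)_j (r_i - rbar)_k
    dx = r - rbar
    I = np.einsum('i,ij,ik->jk', m, dx, dx)
    lam = np.linalg.eigvalsh(I)         # eigenvalues in ascending order
    return np.sqrt(lam[0] / lam[-1])    # sqrt(lambda_min / lambda_max)

# demo: a Gaussian blob flattened along z by a factor of 2
rng = np.random.default_rng(0)
pts = rng.normal(size=(20000, 3))
pts[:, 2] *= 0.5
m = np.ones(len(pts))
rbar = np.average(pts, axis=0, weights=m)
rh = half_mass_radius(pts, m, rbar)
ar = aspect_ratio(pts, m, rbar)
```

For equal-mass particles, $r_h$ reduces to the median particle distance from the barycentre, which makes the sketch easy to verify.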
However, in the cases with higher $e$, these properties do not regain their initial values, indicating that the dwarf is tidally unstable.\footnote{This seems to be the case for the MW satellite Crater II \citep{Torrealba_2016}, whose low surface brightness, small pericentre \citep{Hefan_Li_2021}, and low velocity dispersion for $\Lambda$CDM \citep{Caldwell_2017} suggest that it is the remnant of an originally smaller object that got severely disrupted by tides during its perigalacticon passage \citep{Borukhovetskaya_2022, Errani_2022}. It is also expected to be tidally unstable in MOND \citep[see section~3.3 of][]{Banik_Zhao_2022}.} This was expected because dwarfs with more eccentric orbits have closer pericentre passages and thus higher $\eta$ values at pericentre. To assess whether a dwarf is destroyed in the simulation, we adopt the criterion that a dwarf counts as destroyed if its $r_h$ at the second apocentre exceeds that at pericentre. Since the dwarf is likely to expand even further as it heads towards its next pericentre, this implies that the dwarf has been too destabilized by tides to contract back to its size at its first pericentre passage. As a result, the dwarf would have an even higher tidal susceptibility at subsequent pericentres. This makes it very likely that the dwarf would not be able to survive multiple pericentre passages. On the other hand, if a dwarf that experiences a pericentre passage has a smaller $r_h$ at the subsequent apocentre and is contracting further, then it may well get back to its size at first pericentre by the time it reaches its second pericentre. This should allow it to survive multiple pericentre passages, which in the Fornax Cluster case should allow survival over a Hubble time. To fairly compare our \textit{N}-body results with our MCMC analysis, we should consider how observers calculate $\eta_{\textrm{obs}}$.
The $r_h$ entering into Equation~\ref{eta_rtid} is the observed size, so ideally we would calculate $\eta$ at pericentre using the EFE and tidal stress there but using the presently observed size. As a proxy for this, we use the size at apocentre since this is the orbital phase at which we are most likely to observe the dwarf. Physically, the tidal stability of a dwarf depends on the ratio between its size and tidal radius at pericentre. Using the ratio between the tidal radius at pericentre and the half-mass radius at apocentre may seem somewhat counter-intuitive. However, the $\eta_{\textrm{destr}}$ values obtained in this way are much more comparable to those obtained from our MCMC analysis of the Fornax Cluster for the reasons discussed above. In what follows, we will use $\eta$ to mean the value calculated in this way, though Table~\ref{tab:e_eta} also shows results based on the size at pericentre. \begin{table} \centering \caption{Summary of our MOND \textit{N}-body simulation results for a Fornax dwarf with an initial distance of $R_i = 150$~kpc and different orbital eccentricities (first column). The tidal susceptibility is calculated assuming the EFE and tidal stress at pericentre but using the half-mass radius of the dwarf at pericentre (second column) or at the subsequent apocentre (third column), which we argue in the text is more comparable to our MCMC results. 
The fourth column gives our assessment of the simulation based on the top left panel of Fig.~\ref{fig:dwarf_simulations}.} \begin{tabular}{cccc} \hline & \multicolumn{2}{c}{$\eta$ using $r_h$ at $\ldots$} & \\ $e$ & Pericentre & Apocentre & Outcome \\ \hline 0.03 & 0.6 & 0.5 & Stable \\ 0.29 & 0.9 & 0.6 & Stable \\ 0.45 & 1.2 & 1.0 & Stable \\ 0.48 & 1.3 & 1.4 & Marginal \\ 0.51 & 1.4 & 2.0 & Unstable \\ 0.52 & 1.4 & 2.3 & Unstable \\ 0.53 & 1.4 & 2.7 & Destroyed \\ \hline \end{tabular} \label{tab:e_eta} \end{table} To constrain $\eta_{\textrm{destr}}$, we focus mainly on models with $R_i = 150$~kpc as dwarfs with a larger semi-major axis would typically be observed much further out than the region contributing to the apparent tidal edge in Fig.~\ref{fig:tid_edge}, especially if the eccentricity is significant. The results of these models are summarized in Table~\ref{tab:e_eta}. The models with $\eta \leq 1.0$ respond adiabatically. We choose $\eta_{\textrm{destr}} = 1.4$ as the lowest value at which a dwarf can get destroyed in MOND since dwarfs with this $\eta$ still seem to be marginally capable of contracting their $r_h$ back to their pericentre value by the time they reach apocentre.\footnote{This certainly appears to be the case for the $\eta = 1.5$ model with $R_i = 300$~kpc.} For the upper limit to $\eta_{\textrm{destr}}$ at pericentre, we choose a value of 2.0 because for this $\eta$, the dwarfs in our simulations are clearly larger at apocentre than at pericentre and are still expanding at the end of the simulation, indicating irreversible behaviour. We therefore infer that $\eta_{\textrm{destr}} = 1.70 \pm 0.30$ if $r_h$ is measured at the second apocentre. If instead we obtain $r_h$ at pericentre, then $\eta_{\textrm{destr}}$ has a slightly lower value of $1.35 \pm 0.05$. As expected, $\eta_{\textrm{destr}}$ is of order unity because the main physics should be captured by analytic arguments \citep{Zhao_2005, Zhao_2006}. 
Our numerical results suggest that it would be more accurate to drop the factor of $\frac{2}{3}$ in Equation~\ref{rtid_MOND}, which would also reconcile the numerical pre-factor with that in the Newtonian tidal radius formula (Equation~\ref{rtid_LCDM}) for the case $\alpha = 1$ and $g \gg a_{_0}$. This seems to indicate that we should identify the tidal radius with the distance to the L1 Lagrange point in the derivation of \citet{Zhao_2006} -- their equation~36 introduces a factor of $\frac{2}{3}$ in the Newtonian limit because the Roche Lobe extends to a shorter distance in the two non-radial directions than in the radial direction by about this factor. However, it could be that for somewhat eccentric orbits, the Roche Lobe's extent along the radial direction is the limiting factor to the dwarf's size.\footnote{Without the $\frac{2}{3}$ factor in Equation~\ref{rtid_MOND}, the tidal susceptibility threshold is $\eta_{\textrm{destr}} = 1.13 \pm 0.20$ when using $r_h$ at apocentre and $\eta_{\textrm{destr}} = 0.90 \pm 0.03$ when using $r_h$ at pericentre. Note that the MOND tidal susceptibilities of FDS dwarfs would also be reduced by a factor of $\frac{2}{3}$ in this case, which would affect the inferred $\eta_{\textrm{destr}}$ posterior.} Our simulations also show that the higher the initial distance to the cluster, the more resilient the dwarf is to the effect of cluster tides. This is because, for a given $\eta$ at pericentre, a larger initial distance implies a more eccentric orbit and thus a shorter time spent near pericentre, so the dwarf is exposed to a high $\eta$ value for only a very brief period, allowing it to recover. Therefore, we would probably still be able to observe dwarfs which have $\eta = 2.4$ (or higher) at pericentre if these have sufficiently large apocentric distances.
Given that in our analysis we considered dwarfs up to 800~kpc from the cluster centre, it is likely that there are several dwarfs in our sample which experienced a somewhat higher $\eta$ at some point in their past -- but for a sufficiently brief period that the dwarf remained intact. This is fairly consistent with the results of our MCMC analysis, which found that $\eta_{\textrm{destr}} = 1.88_{-0.53}^{+0.85}$. The observed shape of a dwarf is one of the indicators for whether it has been perturbed. Therefore, to estimate the $\eta$ at which simulated dwarfs should start appearing morphologically disturbed, we look at the evolution of their aspect ratio (Equation~\ref{Aspect_ratio_def}). We need to bear in mind that even a uniform external field can cause a MONDian dwarf to become deformed because the potential of a point mass is not spherical once the EFE is considered \citep{Banik_2018_EFE}. \textit{N}-body simulations of dwarfs experiencing the EFE but not tides explicitly show that this process can yield axis ratios of $\approx 0.7$ \citep{Wu_2017}. This is very much in line with our lowest eccentricity orbit with $R_i = 150$~kpc, so the mild degree of flattening evident here is not necessarily indicative of tidal effects. We find that models with $R_i = 150$~kpc start to acquire significantly elongated morphologies throughout most of their trajectories only when $\eta \ga 0.6$ (see column~3 in Fig.~\ref{fig:dwarf_simulations}). Therefore, we take $\eta_{\textrm{min, dist}} \approx 0.6$. This is slightly higher than what our MCMC analysis requires ($\eta_{\textrm{min, dist}} = 0.24_{-0.19}^{+0.24}$). One possible explanation is that dwarfs with higher $R_i$ start acquiring elongated morphologies at lower $\eta$. To check if increasing the resolution would affect our results, we perform a high-resolution rerun of one of our models for each $R_i$. This is shown using the dotted line in each panel of Fig.~\ref{fig:dwarf_simulations}.
The only resolution-related effect we observe is that the half-mass radius of a distant dwarf expands less than at lower resolution. Because of this, we obtain slightly lower pericentric $\eta$ values for the same orbit with higher resolution. However, the evolution of the dwarf properties as a function of $\eta$ at pericentre remains almost the same as for the low-resolution model. Therefore, our conclusions should barely be affected by the resolution of the simulation. \section{Discussion} \label{discussion} Observations of Fornax Cluster dwarf galaxies show that some of them present a detectable level of disturbance in their morphology. Among the environmental effects inside a galaxy cluster that could be causing this disturbance, we found that gravitational tides from the cluster are the most likely cause (Section~\ref{effects_gravi}). The condition for a dwarf galaxy in a galaxy cluster to be tidally stable is approximately the same as the requirement that the dwarf's density exceed the average density of the cluster interior to the dwarf's orbit (Equation~\ref{approx_rtid}).\footnote{The tidal stress $\Delta g_c/\Delta r$ is related to the cluster mass profile $M_c \left( <R \right)$ by $\Delta g_c/\Delta r = GM_c\left( 2 - \alpha \right)/R^3$, from which it follows that $r_{\textrm{tid}}^3/M_{\textrm{dwarf}} \approx R^3/M_c$. Thus, a dwarf with $r_h \approx r_{\textrm{tid}}$ has $M_{\textrm{dwarf}}/r_h^3 \approx M_c/R^3$.} This should be the case for a $\Lambda$CDM dwarf in a cluster because we expect the dwarf to be dominated by dark matter and to have formed much earlier than the cluster, at which time the cosmic mean density was higher. Therefore, in this paradigm, the dwarf galaxies in the Fornax Cluster should be little affected by the tides the cluster raises. This is indeed what our calculations show (Fig.~\ref{fig:hist_tidal_sus}).
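As a short consistency check of the tidal stress relation quoted in the footnote: with $M_c \left( < R \right) \propto R^{\alpha}$, so that $\mathrm{d}M_c/\mathrm{d}R = \alpha M_c/R$, differentiating the cluster field gives
\begin{eqnarray}
\frac{\mathrm{d}g_c}{\mathrm{d}R} ~=~ \frac{\mathrm{d}}{\mathrm{d}R} \left( \frac{GM_c \left( < R \right)}{R^2} \right) ~=~ \frac{G}{R^2} \frac{\mathrm{d}M_c}{\mathrm{d}R} - \frac{2GM_c}{R^3} ~=~ -\frac{GM_c \left( 2 - \alpha \right)}{R^3} \, ,
\end{eqnarray}
whose magnitude is the tidal stress. Balancing it against the dwarf's internal gravity $\sim GM_{\textrm{dwarf}}/r_{\textrm{tid}}^3$ then yields $r_{\textrm{tid}}^3/M_{\textrm{dwarf}} \approx R^3/M_c$ up to an order-unity factor, as stated in the footnote.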
In MOND, the enhancement to the Newtonian gravity of an isolated dwarf is similar to that provided by the dark matter halo in $\Lambda$CDM. However, MONDian dwarfs in a galaxy cluster are also affected by the resulting EFE, which weakens their self-gravity. As a result, they are more susceptible to tides than dwarfs in $\Lambda$CDM, which has no EFE due to the strong equivalence principle. Therefore, observations of Fornax dwarfs can be used to compare which of the two models performs better. To check if tides might be important in the Fornax Cluster, we plotted the projected separation ($R_{\textrm{sky}}$) of each FDS dwarf against a measure of its surface brightness (Fig.~\ref{fig:tid_edge}). This revealed a lack of low surface brightness dwarfs in the central $\approx 200$~kpc even though such dwarfs are evident further out, indicating that selection effects are not responsible for the tentative tidal edge marked on this figure as a grey line. Just below this, the proportion of apparently disturbed dwarfs is also much higher than elsewhere in the cluster (see Fig.~\ref{fig:hist_disturb}). We quantified this trend by plotting the disturbed fraction as a function of the tidal susceptibility $\eta$ of each dwarf (Equation~\ref{eta_rtid}), revealing a clear rising trend detected at $2.9\sigma$ significance in $\Lambda$CDM and $3.5\sigma$ in MOND (Fig.~\ref{fig:tid_sus_obs}). These arguments suggest that the dwarf galaxy population in the FDS catalogue has been significantly shaped by tides, as previously argued by \citet{Venhola_2022}. However, the overall distribution of $\eta$ only goes up to $\approx 0.5$ in $\Lambda$CDM (Fig.~\ref{fig:hist_tidal_sus}). We expect a dwarf to be destroyed or severely disturbed only if $\eta \approx 1$, as indicated by $\Lambda$CDM \textit{N}-body simulations \citep{Penyarrubia_2009, Van_den_Bosch_2018}. 
We quantified this discrepancy using our MCMC analysis, which shows that the tidal stability limit of the Fornax dwarfs should be $\eta_{\textrm{destr}} = 0.25^{+0.07}_{-0.03}$ to match observations. Therefore, $\Lambda$CDM dwarfs should be destroyed when the tidal force that they experience is only $\approx 0.25^3 = 1.56 \times 10^{-2}$ times their internal gravity (tidal force/internal gravity $\approx \eta^3$). Not only is this unrealistic, but also such a low $\eta_{\textrm{destr}}$ is in $>5\sigma$ tension with the $\eta_{\textrm{destr}}$ value of 1 inferred from $\Lambda$CDM \textit{N}-body simulations (Fig.~\ref{fig:eta_min_dist_destr}). The highest $\eta_{\textrm{destr}}$ value achieved with our MCMC analysis for $\Lambda$CDM is only 0.60. Given that we ran $10^5$ MCMC trials, this corresponds to a $4.42\sigma$ upper limit. Since the uncertainty on $\eta_{\textrm{destr}}$ towards higher values from the mode is only 0.07, it is clear that $\eta_{\textrm{destr}} = 1$ is strongly excluded by the observations if the tidal susceptibilities are calculated within the $\Lambda$CDM framework. These calculations are based on Equation~\ref{rtid_LCDM}, which can be written in the alternative form \begin{eqnarray} \frac{r_{\textrm{tid, } \Lambda\textrm{CDM}}}{R} ~=~ \left( \frac{M_{\textrm{dwarf}}}{\beta M_c \left( < R \right)} \right)^{1/3} \, , \quad \beta ~=~ 2 \left( 2 - \alpha \right) \, , \label{beta_definition} \end{eqnarray} where $\alpha = 1.1$ (defined in Equation~\ref{M_cluster}) is the logarithmic slope of the Fornax Cluster mass profile $M_c \left( <R \right)$ based on hydrostatic equilibrium of the gas around its central galaxy \citep{Paolillo_2002}. This implies $\beta = 1.8$. Other workers use slightly different definitions for the tidal radius, which affects the results somewhat because the calculated $\eta \propto \beta^{1/3}$.
For example, equation~6 of \citet{Wasserman_2018} gives $\beta = 2 - \alpha = 0.9$ for radial orbits and $\beta = 3 - \alpha = 1.9$ for circular orbits. Allowing even a modest amount of eccentricity, it is clear that $\beta$ in their tidal radius definition is smaller than our adopted 1.8, so their formula generally gives even lower $\eta$ values, worsening the problem for $\Lambda$CDM. Meanwhile, equation~3 of \citet{Penyarrubia_2009} gives $\beta = 3$, though this is for circular orbits and lacks a rigorous derivation \citep[see section~3.1 of][]{Penyarrubia_2008}. $\beta = 2$ is more appropriate to account for elongation in the potential along the radial direction \citep{Innanen_1983, Zhao_2006}. However, even if we adopt $\beta = 3$, this would only raise our calculated $\eta$ values by a factor of $\left( 3/1.8 \right)^{1/3}$, or equivalently imply that we can keep our definition but should consider dwarfs to be destroyed at $\eta_{\textrm{destr}} = \left( 1.8/3 \right)^{1/3} = 0.84$. This is still well above the value given by any of the $10^5$ trials in the MCMC analysis. A more recent detailed derivation affirms that for circular orbits, the appropriate value of $\beta = 3 - \alpha = 1.9$ in the Fornax case \citep[equation~5 of][]{Van_den_Bosch_2018}, which is very similar to our adopted value of 1.8. Although this could be somewhat higher with a lower value for $\alpha$, we can get $\beta = 3$ only for circular orbits around a point mass ($\alpha = 0$), which is not consistent with the Fornax Cluster having an extended dark matter halo. Moreover, a dwarf on an elliptical orbit is exposed to the pericentre value of $\eta$ for only a short time. We may intuitively expect that a dwarf would be disrupted only if it experiences $\eta > 1$ for a significant duration, since otherwise there is not enough time for tidal forces to disrupt the dwarf. 
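Since the calculated $\eta \propto \beta^{1/3}$, thresholds quoted under one $\beta$ convention map onto another via the cube root of the ratio of the two $\beta$ values. A short Python sketch of the conversions used above:

```python
def convert_eta(eta, beta_from, beta_to):
    # eta propto beta^(1/3): a threshold defined with one beta convention
    # maps onto another via the cube root of the ratio of the two betas
    return eta * (beta_to / beta_from)**(1.0 / 3.0)

beta_ours = 2.0 * (2.0 - 1.1)     # = 1.8 for the Fornax Cluster (alpha = 1.1)

# raising our eta values to a beta = 3 convention
boost = convert_eta(1.0, beta_ours, 3.0)             # (3/1.8)^(1/3), about 1.19

# equivalently, a beta = 3 destruction threshold of 1 in our convention
eta_destr_equiv = convert_eta(1.0, 3.0, beta_ours)   # (1.8/3)^(1/3), about 0.84
```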
This could explain why \citet{Van_den_Bosch_2018} found that dwarfs are actually quite robust to tides, more so than in many numerical simulations where apparent tidal destruction could be a numerical artefact \citep[see also][]{Webb_2020}. It could well be that the appropriate $\eta_{\textrm{destr}}$ is slightly above 1, as in the MOND case. Moreover, \citet{Van_den_Bosch_2018} found that galaxy-galaxy harassment is much less damaging than the tidal shock from pericentre passage. While their work addressed subhaloes in a MW-like halo and neglected hydrodynamics, it is still very useful in showing that a subhalo can resist disruption even if the energy it gains from harassment exceeds the binding energy, justifying our neglect of the harassment scenario (Section~\ref{tidal_sus_Fornax}). In MOND, we obtained a tidal stability limit with the MCMC analysis of $\eta_{\textrm{destr}} = 1.88^{+0.85}_{-0.53}$, which is closer to the expected value of $\approx 1$ based on analytic arguments (Equation~\ref{rtid_MOND}). To check if this limit is accurate, we performed several \textit{N}-body simulations of a dwarf orbiting a central potential similar to the Fornax Cluster (Section~\ref{Nbody_sim}). These simulations suggest that cluster tides would make Fornax dwarfs appear disturbed when $\eta_{\textrm{min,dist}} \ga 0.6$ and destroy them at $\eta_{\textrm{destr}} = 1.70 \pm 0.30$, which is in good agreement with our MCMC results (see Fig.~\ref{fig:eta_min_dist_destr}). We considered several possible explanations for the discrepancy between the low tidal susceptibility values of $\Lambda$CDM dwarfs and the fact that some of the observed Fornax dwarfs appear disturbed. In principle, cluster tides might not be the main effect responsible for the observed morphological disturbances. However, several trends in the FDS suggest that tides are indeed the main cause.
These trends are as follows: \begin{enumerate} \item There are fewer low surface brightness dwarfs towards the centre of the cluster, where they are most susceptible to tides (Fig.~\ref{fig:tid_edge}). Since such dwarfs are detectable further out, this feature cannot be ascribed to selection effects. A related finding is that FDS dwarfs are typically larger towards the cluster centre, which could be related to tidal heating \citep[for a more detailed discussion, see section~7.4 of][]{Venhola_2022}; and \item The algorithm in charge of fitting the simulated Fornax system to the observations clearly recovered a rising trend between $\eta$ and the probability of disturbance ($P_{\textrm{dist}}$). This is shown by the fact that the algorithm chose $P_{\textrm{dist, ceiling}} > P_{\textrm{dist, floor}}$ with $\approx 3\sigma$ confidence in both $\Lambda$CDM and MOND (see Fig.~\ref{fig:P_floor_ceiling}), even though we did not impose this condition a priori. \end{enumerate} We have seen that these trends cannot be understood in $\Lambda$CDM as a direct consequence of cluster tides given the very low $\eta$ values. Moreover, the other major environmental effect that could be causing the observed disturbance (galaxy-galaxy harassment) also yields very low $\eta$ values (see Section~\ref{tidal_sus_Fornax}). Another possibility is that our results could be affected by some of the assumptions or choices that we made during the analysis. To check if this is the case, we repeat the procedures described in Section~\ref{test_mass} but change some of the assumed conditions and/or parameters in the following ways: \begin{enumerate} \item{Considering that the FDS dwarfs could have a lower dark matter fraction within their optical radius:} We consider the possibility that the dark matter fraction of the FDS dwarfs is lower than assumed in our nominal case (this is motivated in Section~\ref{newDMfrac}).
Assuming that $\Lambda$CDM explains the properties of isolated dwarfs, we use the velocity dispersions of nearby isolated dwarfs to estimate their typical dark matter fraction, which returns a somewhat lower value than assumed in our nominal analysis. Substituting this fit (Equation~\ref{revised_DM_frac}) into our MCMC chain raises $\eta_{\textrm{destr}}$ slightly, but it is still only $0.33^{+0.04}_{-0.05}$. We then consider a very conservative scenario in which there is only $10\times$ as much dark matter as stars within the optical extent of each dwarf, which requires altering Equation~\ref{M_dwarf_rule} to $M_{\textrm{dwarf, } \Lambda\textrm{CDM}} = 11~M_{\star}$. For this very low dark matter fraction, we obtain that $\eta_{\textrm{destr}} = 0.54^{+0.19}_{-0.09}$, which reduces the tension between observations and $\eta_{\textrm{destr}} = 1$ (as expected from \textit{N}-body simulations) to $2.29\sigma$ (the triangle plot for this analysis is shown in Fig.~\ref{fig:triang_newDMfrac}). While this is a significant improvement with respect to the $>5\sigma$ tension in the nominal case, we see that even when considering one of the most conservative assumptions for the amount of dark matter contained within the optical radius of a dwarf, $\eta_{\textrm{destr}} \geq 1$ is still excluded at 97.8\% confidence. Moreover, we show in Section~\ref{newDMfrac} that in a recent high-resolution cosmological $\Lambda$CDM simulation, the dark matter fraction within the stellar $r_h$ of a dwarf is far higher than this at the relevant $M_{\star}$, and is actually quite close to our nominal assumption; \item{Changing the lower limit to the distribution of dwarf densities in the test mass simulation}: To check if the adopted detection limit to the density of the Fornax dwarfs significantly affects the results, we repeat the analysis using a density threshold $\rho_t$ that is $5\sigma$ below the mean logarithmic density. 
We also consider a density limit of $\rho_{\textrm{mean}}$ (grey line in Fig.~\ref{fig:hist_dens}). For reference, the nominal $\rho_t$ in MOND is $2.88\sigma$ below the mean logarithmic density, while $\rho_{\textrm{mean}}$ is $1.91\sigma$ below. The corresponding values in $\Lambda$CDM are $3.58\sigma$ and $2.56\sigma$, respectively (see Appendix~\ref{dwarf_dens_LCDM}). Fig.~\ref{fig:3dens} shows the triangle plots comparing the results obtained using these two density limits with the nominal one for $\Lambda$CDM and MOND. From these plots (described further in Appendix~\ref{sec:triang}), we can see that choosing a lower $\rho_t$ worsens the tension for $\Lambda$CDM while maintaining consistency in MOND. Using a higher $\rho_t$ helps to increase the estimated values for $\eta_{\textrm{destr}}$ in $\Lambda$CDM. However, even if we use $\rho_t = \rho_{\textrm{mean}}$, the inferred $\eta_{\textrm{destr}}$ is still significantly below the threshold of $\approx 1$ required in \textit{N}-body simulations, while the inference on $\eta_{\textrm{min, dist}}$ hardly changes. Thus, choosing an even higher $\rho_t$ could perhaps help $\Lambda$CDM to reach a reasonable $\eta_{\textrm{destr}}$. However, such a high $\rho_t$ would disagree with the observations: the whole point of $\rho_t$ is that dwarfs below this density are not detectable, yet dwarfs with densities below such a high $\rho_t$ are clearly observed in the FDS; \item{Changing the values of the deprojection parameters (see Appendix \ref{deproj})}: The deprojection parameters in our nominal analysis were $\textrm{offset} = 0.4^{\circ}$ and $\textrm{nnuc}_{\textrm{floor}} = 1.2^{\circ}$ based on fig.~6 of \citet{Venhola_2019}. We repeat our analysis using deprojection values at the upper limit of the envelope in this figure: $\textrm{offset} = 0.5^{\circ}$ and $\textrm{nnuc}_{\textrm{floor}} = 1.5^{\circ}$. 
Fig.~\ref{fig:deproj} shows the triangle plots comparing the results for these two different deprojections in $\Lambda$CDM and MOND. From these plots, we can see that these two deprojections give almost the same results in either theory; \item{Changing the ratio between present and pericentre distances (see Appendix~\ref{Rper})}: A related change we could make is to consider altering the assumed ratio of 0.29 between the average $R$ and the pericentre distance. This is valid for a thermal eccentricity distribution with $\textrm{Slope}_{P_e} = 2$, which is expected theoretically but is the highest possible value (Equation~\ref{P_e}). With a lower $\textrm{Slope}_{P_e}$, the ratio would rise as orbits would typically be more circular, reducing the calculated tidal susceptibility at pericentre. This would worsen the problem for $\Lambda$CDM; and \item{Increasing the resolution}: In Section~\ref{orbit_integration}, we created a grid of $100 \times 100$ cells for different values of the orbital eccentricity ($e$) and initial distance to the cluster centre ($R_i$). We increase the resolution to $200 \times 200$ and repeat the analysis to check if this has any effect on the results. The triangle plots showing the results in $\Lambda$CDM and MOND for these two resolutions are shown in Fig.~\ref{fig:highres}. From these plots, we can see that the results are nearly identical for the high- and low-resolution cases. \end{enumerate} From these tests, we infer that our results are not significantly affected by modelling assumptions. \subsection{The dark matter content of dwarf galaxies in $\Lambda$CDM} \label{DM_content} Our conclusion that $\Lambda$CDM is inconsistent with the FDS dwarfs relies heavily on their low values of $\eta$ in this paradigm, which in turn relies on the assumption that they should be dominated by dark matter. We therefore explore whether consistency could be gained by partially relaxing this assumption in a manner consistent with other constraints. 
To try and raise $\eta$ while continuing to use Newtonian gravity, we consider the possibility that the FDS dwarfs are TDGs. Our results are presented in Appendix~\ref{tidal_sus_newton}. We see that this scenario is also not viable because the elliptical galaxies in the cluster must still contain substantial dark matter haloes, leading to highly efficient disruption of dwarfs through galaxy-galaxy harassment. It thus seems clear that the FDS dwarfs should be primordial. In this case, we may consider whether the dark matter density in their central regions could be substantially less than assumed here, raising their tidal susceptibility within the $\Lambda$CDM framework. The transformation of central cusps in the dark matter density profile into cores is expected to be rather inefficient for dwarfs with $M_{\star} \la 10^{7.2} \, M_{\odot}$ \citep{Di_Cintio_2014, Dutton_2016, Tollet_2016}. Most FDS dwarfs have a lower $M_{\star}$ (i.e., they lie below the red line in Fig.~\ref{fig:M_L}). This makes it unlikely that baryonic feedback has substantially reduced the central dark matter density of most FDS dwarfs, especially at the low mass and low surface brightness end important to our argument about tidal stability. Adiabatic contraction could actually raise the central dark matter density \citep{Li_2022, Moreno_2022_expansion}, as could tidal stripping of the dark matter halo \citep{Penyarrubia_2008}. The colours of the FDS dwarfs also indicate that star formation stopped early, most likely due to ram pressure stripping of the gas (Section~\ref{effects_gravi}). Thus, it would only be possible for strong feedback to substantially reduce the baryonic potential depth once. This is insufficient to substantially affect the central dark matter density even in the extreme case that the entire gas disc is instantaneously removed \citep{Gnedin_2002}. 
Multiple bursts of star formation would be required to substantially affect the dominant dark matter halo \citep{Pontzen_2012}, but it is very unlikely that this occurred in most FDS dwarfs. Consequently, they should still have a significant amount of dark matter in their central regions, as is the case with Galactic satellites whose star formation ended early \citep{Read_2019_core}. Moreover, the low surface brightness nature of the FDS dwarfs considered here implies an atypically large size at fixed $M_{\star}$, causing the baryonic portion of the dwarf to enclose a larger amount of dark matter than for the more typical Illustris galaxies considered by \citet{Diaz_2016}. Another way in which FDS dwarfs could lose dark matter is through interactions with a massive elliptical galaxy. This scenario has been shown to lead to a dwarf like DF2 with an unusually low dark matter content \citep{Shin_2020}. However, such examples are rare in cosmological simulations \citep{Haslbauer_2019_DF2, Moreno_2022_DF2}. In addition, the possibility that most FDS dwarfs lack dark matter altogether runs into severe difficulties based on simple analytic arguments: Newtonian TDGs would be very fragile and easily disrupted by interactions with massive cluster ellipticals, which must have substantial dark matter haloes in a $\Lambda$CDM context (Appendix~\ref{tidal_sus_newton}). MOND seems to offer the right level of tidal stability: neither so much that all the dwarfs are completely shielded from tides and the observed signs of tidal disturbance remain unexplained, nor so little that the dwarfs would have been destroyed by now in the harsh cluster environment studied here. The FDS dwarfs behave just as they ought to in MOND. 
This conclusion is in agreement with the recent work of \citet{Keim_2022}, which used the observed tidal disturbance of the dwarf galaxies NGC 1052-DF2 and NGC 1052-DF4 to argue that they must be `dark matter free', since otherwise their dark matter halo would have shielded them from tides. Phrased in a less model-dependent way, these observations indicate much weaker self-gravity than for a typical isolated dwarf, which is a clear prediction of MOND due to the EFE \citep{Famaey_2018, Kroupa_2018_DF2, Haghi_2019_DF2}. In the more isolated galaxy DF44, the self-gravity is stronger despite a similar baryonic content \citep{Van_Dokkum_2019}, but this too is in line with MOND expectations \citep{Bilek_2019, Haghi_2019_DF44}. Strong evidence for the EFE has also been reported from the outer rotation curves of spiral galaxies, which tend to be flat for isolated galaxies but have a declining trend for galaxies in a more crowded environment \citep{Haghi_2016, Chae_2020_EFE, Chae_2021}. Our results with the FDS are similar to those of \citet{Chilingarian_2019} and \citet{Freundlich_2021}, who also report signs of tidal disturbance in some of the dwarf galaxies in the Coma cluster. Another case in point is the recent study of the dwarf galaxy population in the Hydra I cluster, where the proximity to the cluster centre seems to be affecting the morphology of the dwarfs in a manner suggestive of tidal effects \citep[e.g., larger half-mass radii for dwarfs closer to the cluster centre;][]{La_Marca_2022}. Closer to home, the MW satellites also show signs of tidal disturbance like elliptical isophotes \citep{McGaugh_Wolf_2010}. There is a good correlation between these features and the value of $\eta$ in MOND, which moreover has a maximum value very close to 1 (see their fig.~6). However, the maximum $\eta$ in $\Lambda$CDM is $\la 0.2$, making it difficult to understand the observations in this framework. 
\subsubsection{Revised dark matter fraction in $\Lambda$CDM dwarfs} \label{newDMfrac} Throughout our analysis, we followed the \citet{Diaz_2016} prescription that $4\%$ of the total dark matter halo of each dwarf lies within its optical radius, with the total halo mass $M_{\textrm{halo}}$ following from $M_{\star}$ through the \citet{Moster_2010} abundance matching relation. The factor of $4\%$ was obtained by fitting to the dynamically inferred dark matter masses $M_{\textrm{DM}}$ within the optical radii of S\textsuperscript{4}G galaxies, as shown in fig.~6 of \citet{Diaz_2016}. In this figure, we can see that for low-mass galaxies ($M_{\star} \la 10^9 M_{\odot}$), the $M_{\textrm{DM}}/M_{\star}$ vs. $M_{\star}$ relation seems to flatten at $M_{\textrm{DM}} \approx 10 \, M_{\star}$. However, this is unclear because S\textsuperscript{4}G has very few well-observed galaxies with such a low mass. We can extend the S\textsuperscript{4}G results to even lower masses using measurements from other surveys of the baryonic properties of dSph galaxies and their line of sight velocity dispersion $\sigma_{\textrm{los}}$. The Newtonian dynamical masses of galaxies from these other surveys are found using equation~2 in \citet{Wolf_2010}: \begin{eqnarray} M_{\textrm{dyn}} \left( < r_h \right) ~=~ \frac{3 r_h \langle \sigma^2_{\textrm{los}} \rangle}{G} \, , \label{M_dyn} \end{eqnarray} where $M_{\textrm{dyn}} \left( < r_h \right)$ is the mass within the baryonic $r_h$. Note that when using this to estimate $M_{\textrm{DM}}/M_{\star}$, we account for the fact that only half the stellar mass is enclosed within $r_h$. 
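As an illustrative sketch (our own, not code from the analysis pipeline), the estimator above can be applied as follows; the example dwarf parameters are hypothetical, chosen only to show the unit conventions and the half-stellar-mass correction:

```python
# Sketch of the Wolf et al. (2010) mass estimator (Equation M_dyn).
# The example numbers are hypothetical, not an actual FDS dwarf.

G = 4.30091e-3  # Newton's constant in pc (km/s)^2 / M_sun

def wolf_dynamical_mass(r_h_pc, sigma_los_kms):
    """Dynamical mass within the 3D half-light radius r_h:
    M_dyn(<r_h) = 3 r_h <sigma_los^2> / G."""
    return 3.0 * r_h_pc * sigma_los_kms**2 / G

def dm_to_star_ratio(r_h_pc, sigma_los_kms, m_star):
    """M_DM/M_star within r_h, accounting for the fact that only
    half the stellar mass is enclosed within r_h."""
    m_star_inside = 0.5 * m_star
    m_dm = wolf_dynamical_mass(r_h_pc, sigma_los_kms) - m_star_inside
    return m_dm / m_star_inside

# Hypothetical dwarf: r_h = 300 pc, sigma_los = 8 km/s, M_star = 1e7 M_sun
print(wolf_dynamical_mass(300.0, 8.0))      # M_dyn(<r_h) in M_sun
print(dm_to_star_ratio(300.0, 8.0, 1.0e7))  # dark matter to stellar mass ratio
```
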
To check the consistency between the assumed dark matter fraction and observations of isolated dwarfs, we use Fig.~\ref{fig:M_dyn/M_star} to plot $M_{\textrm{dyn}}/M_{\star}$ of the galaxies in four different galaxy surveys (semi-transparent coloured dots), assuming the \citet{Diaz_2016} result for the dark matter fraction as used in our nominal analysis (black line; Equation~\ref{M_dwarf_rule}), and assuming conservatively that $M_{\textrm{DM}} = 10 \, M_{\star}$ (blue line). We can see that it is rather unlikely that the FDS dwarfs generally have much less dark matter in their baryonic region than we assumed, since the linear regression to the survey data over the $M_{\star}$ range of the FDS dwarfs (dashed black line) is quite close to our nominal dark matter fraction. We can also use the Illustris TNG50 cosmological simulation \citep{Pillepich_2018, Pillepich_2019, Nelson_2019, Nelson_2019b} to check the dark matter fraction that we expect dwarfs to have in the $\Lambda$CDM paradigm. We do this in Fig.~\ref{fig:M_dyn/M_star}, where we show the mean and standard deviation of $M_{\textrm{DM}}/M_{\star} + 1$ within the stellar $r_h$ in $M_{\star}$ bins of width 0.25~dex (cyan dots with error bars). The trend followed by these simulated dwarfs is even steeper than that given by the observed dwarfs, though both give a similar dark matter fraction at the low-mass end crucial to our analysis (the median $M_{\star}$ of the FDS dwarfs is shown by the vertical dashed green line at $\log_{10} \left( M_{\star}/M_{\odot} \right) = 6.96$). This further supports our nominal choice for the dark matter fraction of FDS dwarfs. One reason for their high expected dark matter fraction is that the vast majority of them have too little stellar mass for efficient core formation, the threshold for which is shown by the red vertical line at $\log_{10} \left( M_{\star}/M_{\odot} \right) = 7.2$ for the reasons discussed above. 
All these arguments highlight that the $M_{\textrm{DM}} = 10 \, M_{\star}$ case is clearly very conservative given the steep relation followed by low-mass galaxies that we expect from abundance matching arguments, Illustris TNG50 results, and the velocity dispersions of nearby dwarfs. To assess the sensitivity of our analysis in Section \ref{test_mass} to the assumed dark matter fraction, we repeat it with the dark matter fraction given by the linear fit \citep[equations~18 and 19 of][]{Banik_2018_escape} to the observed isolated dwarfs in Fig.~\ref{fig:M_dyn/M_star}: \begin{eqnarray} \log_{10} \left( \frac{M_{\textrm{DM}}}{M_{\star}} + 1 \right) = 4.089 - 0.396~\log_{10} \left( \frac{M_{\star}}{M_{\odot}} \right) \, , \label{revised_DM_frac} \end{eqnarray} where $M_{\textrm{DM}}/M_{\star}$ is the ratio of dark matter to stars within the stellar $r_h$. The typical dwarf densities in this case are about 0.5~dex lower than with the nominal dark matter fraction. As a result, the logarithmic mean is lower than in the nominal case by a similar amount: it is now $\log_{10} \, \rho \left( M_{\odot}/\textrm{pc}^3 \right) = -1.41$. In this case, the density threshold $\rho_t = 5.85 \times 10^{-4}~M_{\odot}/\textrm{pc}^3$ is $2.44\sigma$ below the mean. To keep our statistical analysis comparable to our nominal one, we use the same 6 bins in $\eta_{\textrm{obs}}$ as before. In this way, we obtain that Equation~\ref{revised_DM_frac} gives a slightly higher $\eta_{\textrm{destr}} = 0.33^{+0.04}_{-0.05}$. The maximum value achieved by the MCMC chain is only 0.59, which implies that the $\Lambda$CDM model is still in $>5\sigma$ tension with the expected value of 1. For completeness, we repeat our analysis with the very conservative assumption that $M_{\textrm{DM}} = 10 \, M_{\star}$. In this case, the distribution of dwarf densities is similar to that in MOND (Fig.~\ref{fig:hist_dens}) but scaled up $11\times$. 
Thus, the logarithmic dispersion remains $\sigma = 0.57$~dex and the density threshold $\rho_t = 4.66 \times 10^{-4}~M_{\odot}/\textrm{pc}^3$ is still $2.88\sigma$ below the mean $\log_{10} \, \rho$, which is now $-1.69$ in these units. As expected, the $\rho_t$ value is $11\times$ higher than in the MOND model, but still much lower than in our nominal $\Lambda$CDM analysis. In this reduced-density case, we found that $\eta_{\textrm{destr}} = 0.54^{+0.19}_{-0.09}$ and the probability that $\eta_{\textrm{destr}} \geq 1$ is $2.23 \times 10^{-2}$ ($2.29\sigma$). Appendix~\ref{sec:triang} shows the complete triangle plot with the distributions of the model parameters and parameter pairs for the nominal $\Lambda$CDM analysis and the two revised cases described above. There is little impact on the inferences for parameters other than $\eta_{\textrm{destr}}$, $\eta_{\textrm{min, dist}}$, and $\textrm{Slope}_{P_e}$. Therefore, it is clear that assuming a lower dark matter fraction for the $\Lambda$CDM dwarfs helps to alleviate the tension between observations and \textit{N}-body simulations only if this fraction is reduced significantly. However, having a dark matter fraction of $M_{\textrm{DM}}/M_{\star} = 10$ within the optical radius is a very conservative assumption at odds with many other lines of evidence, including cosmological simulations. Even with this assumption, $\eta_{\textrm{destr}} \geq 1$ is still excluded by our MCMC analysis of the FDS at 97.8\% confidence. \section{Conclusions} \label{conclusions} We studied the tidal susceptibility of dwarf galaxies in the Fornax Cluster to gravitational effects of the cluster environment in both $\Lambda$CDM and MOND. In both theories, we found cluster tides to be the main effect. Thus, cluster tides should be able to explain the observed morphological disturbance of some Fornax dwarfs and the lack of low surface brightness dwarfs towards the cluster centre (Fig.~\ref{fig:tid_edge}). 
By constructing a test mass simulation of the Fornax system and performing a statistical analysis using the MCMC method, we constrained the tidal susceptibility ($\eta \equiv r_h/r_{\textrm{tid}}$) at which a Fornax dwarf would need to be destroyed in order to match the observations, which we call $\eta_{\textrm{destr}}$. We found that $\eta_{\textrm{destr}} = 0.25_{-0.03}^{+0.07}$ in $\Lambda$CDM and $1.88_{-0.53}^{+0.85}$ in MOND. The $\eta_{\textrm{destr}}$ value in $\Lambda$CDM falls significantly below analytic expectations (Equation~\ref{rtid_LCDM}) and is in $>5\sigma$ tension with \textit{N}-body simulation results, which indicate that $\eta_{\textrm{destr}} \approx 1$ \citep{Penyarrubia_2009, Van_den_Bosch_2018}. In other words, the very low $\eta$ values of FDS dwarfs imply that they should be unaffected by cluster tides, contradicting the observed signs of tidal disturbance. We also found that the other major environmental influence, interactions with individual massive galaxies in the cluster, should not be a significant process in $\Lambda$CDM \citep[see also section~7.3.3 of][]{Venhola_2019}. We discarded the possibility that the above-mentioned discrepancy is due to the minimum allowed density of the simulated sample of dwarfs being too low, the deprojection parameters being different from our nominal ones, the resolution of the test mass simulation not being high enough to get reliable results, or the dwarfs having less dark matter than we assumed (Section~\ref{discussion}). In particular, the velocity dispersions of nearby isolated dwarfs suggest a slightly lower dark matter fraction (dashed line in Fig.~\ref{fig:M_dyn/M_star}). Using this only slightly raises $\eta_{\textrm{destr}}$ to $0.33^{+0.04}_{-0.05}$. Even if we conservatively assume that the FDS dwarfs have only $10\times$ as much dark matter as stars within their optical radius, we still get a $2.29\sigma$ tension with expectations (Equation~\ref{rtid_LCDM}). 
Therefore, our results reliably show that the $\Lambda$CDM paradigm is in serious tension with observations of perturbed dwarf galaxies in the Fornax Cluster \citep[observations which are strongly suggestive of tidal effects, see also section~7.4 of][]{Venhola_2022}. An alternative model that assumes different properties for the dark matter particles could perhaps reconcile the basics of the $\Lambda$CDM cosmology with the observed morphological disturbances of some Fornax dwarfs. One of the most popular alternatives is the `superfluid dark matter' model \citep{Berezhiani_2015, Hossenfelder_2020}. Like most hybrid models, it attempts to reconcile the successes of MOND on galaxy scales with the advantages of dark matter on larger scales, especially with regards to the CMB anisotropies and galaxy cluster dynamics. However, this model also presents its own problems, including orbital decay of stars in the Galactic disc from Cherenkov radiation \citep{Mistele_2021} and that the LG satellite planes extend beyond the estimated superfluid core radii of the MW and M31, making it difficult to explain the high observed internal velocity dispersions of the satellites in these planes \citep[see section~5.6 of][]{Roshan_2021_disc_stability}. There are also difficulties explaining the observed regularities in rotation curves consistently with gravitational lensing results in a theory where baryons feel extra non-gravitational forces that do not affect photons \citep*{Mistele_2022}. Another possibility is that the dark matter particles are fuzzy with a low mass and thus a long de Broglie wavelength, reducing their density in the central region of a dwarf galaxy. However, ultralight bosons \citep{Hu_2000, Hui_2017} are in significant tension with observations of the Lyman-$\alpha$ forest \citep{Rogers_2021}. 
More generally, reducing the ability of dark matter to cluster on small scales would make it difficult to form dwarf galaxies at high redshift and to explain their high Newtonian dynamical $M/L$ ratios. This brings us to the MOND case, in which the inferred $\eta_{\textrm{destr}}$ is much more consistent with analytic expectations (Equation~\ref{rtid_MOND}). In order to compare $\eta_{\textrm{destr}}$ with the results of \textit{N}-body simulations as we did for $\Lambda$CDM, we had to perform numerical MOND simulations ourselves \citep[though one pioneering study exists, see][]{Brada_2000_tides}. From our simulations tailored to the properties of a typical dwarf galaxy in the Fornax Cluster, we obtained that $\eta_{\textrm{destr}} = 1.70 \pm 0.30$, in excellent agreement with the value required to fit the observational data according to the MCMC method. We therefore conclude that MOND performs significantly better than $\Lambda$CDM and is clearly the preferred model in all the tests that we conducted throughout this work, even though it was not designed with the FDS in mind. Nevertheless, MOND still needs an additional ingredient to explain some of the observations on larger scales, especially the temperature and pressure profiles in galaxy clusters and the CMB power spectrum \citep{Famaey_McGaugh_2012}. For this, several models have been proposed that complement MOND. Some of the most promising ones are the relativistic MOND theory which can fit the speed of gravitational waves and the CMB anisotropies but likely cannot explain the dynamics of virialized galaxy clusters \citep{Skordis_2021}; and the $\nu$HDM model that assumes MOND gravity and 11~eV sterile neutrinos \citep{Angus_2009}. 
These proposed particles would play the role of a collisionless component that only aggregates at the scale of galaxy clusters, helping to explain the Bullet Cluster \citep{Angus_2007} and other virialized galaxy clusters \citep{Angus_2010}, where the MOND corrections to Newtonian gravity are generally small. MOND has also proved capable of explaining several physical phenomena that $\Lambda$CDM has been failing to describe, including the planes of satellite galaxies in the LG and beyond \citep{Pawlowski_2021_Nature_Astronomy, Pawlowski_2021}, the weakly barred morphology of M33 \citep{Sellwood_2019, Banik_2020_M33}, and the pattern speeds of galaxy bars \citep{Roshan_2021_disc_stability, Roshan_2021_bar_speed}. Using the $\nu$HDM extension, MOND can also explain the CMB \citep{Angus_Diaferio_2011}, the KBC void and Hubble tension \citep{Haslbauer_2020}, and the early formation of the interacting galaxy cluster El Gordo \citep{Katz_2013, Asencio_2021}. Therefore, this later model is capable of explaining both the CMB and the dynamics of galaxy clusters while preserving the successes of MOND at galaxy scales \citep[][and references therein]{Banik_Zhao_2022}. In this study, we have shown that it should also be capable of resolving the problem faced by $\Lambda$CDM with regards to the observed signs of tidal disturbance in Fornax Cluster dwarf galaxies. \section*{Acknowledgements} EA is supported by a stipend from the Stellar Populations and Dynamics Research Group at the University of Bonn. IB is supported by Science and Technology Facilities Council grant ST/V000861/1, which also partially supports HZ. IB acknowledges support from a ``Pathways to Research'' fellowship from the University of Bonn. PK acknowledges support through the Deutscher Akademischer Austauschdienst-Eastern European Exchange Programme. EA would like to thank Prof. Xufen Wu for providing the initial conditions templates of the dwarf galaxy used in the MOND \textit{N}-body simulations. 
The authors are grateful to Sara Eftekhari for providing the table of literature data shown in Fig.~\ref{fig:M_dyn/M_star}. They are also grateful to the referee for comments which substantially improved this paper. \section*{Data availability} The results presented can be reproduced by using the data available in the Vizier catalogue\footnote{\url{https://vizier.u-strasbg.fr/viz-bin/VizieR-3?-source=J/A\%2bA/620/A165/dwarf\&-out.max=50\&-out.form=HTML\%20Table&-out.add=_r\&-out.add=_RAJ,_DEJ\&-sort=_r\&-oc.form=sexa}} and following the methods described in this paper. For a user guide describing how to install \textsc{por} and providing links from which it can be downloaded, we refer the reader to \citet{Nagesh_2021}. \bibliographystyle{mnras} \bibliography{FDS_bbl} \begin{appendix} \section{Deprojecting distances in the sky plane to 3D distances} \label{deproj} In order to convert an observed 2D projected distance $R_{\textrm{sky}}$ into a 3D distance $R$, we use a simplified version of the deprojection method applied in \citet{Venhola_2019}. For convenience, we normalize distances to $d_{\textrm{Fornax}} = 20$~Mpc, the distance to the Fornax Cluster \citep{Blakeslee_2009}. Thus, we define \begin{eqnarray} \theta_{\textrm{2D}} ~\equiv~ \frac{R_{\text{sky}}}{d_{\textrm{Fornax}}} \, , \quad \theta_{\textrm{3D}} ~\equiv~ \frac{R}{d_{\textrm{Fornax}}} \, . \end{eqnarray} Fig. 6 of \citet{Venhola_2019} shows the relation between these quantities for nucleated and non-nucleated dEs.\footnote{Results are also shown for dwarf irregulars, but we removed these from our sample.} The relation for nucleated dwarfs is almost parallel to the line of equality, but with an offset of $\approx 0.4^\circ$. Therefore, we deproject a dwarf labelled as `nucleated' using \begin{eqnarray} \theta_{\textrm{3D}} ~=~ \theta_{\textrm{2D}} + \textrm{offset} \, , \end{eqnarray} with offset~$=0.4^\circ$ in our nominal analysis. 
In the case of non-nucleated dwarfs, $\theta_{\textrm{3D}}$ has a constant floor value of $\approx 1.2^\circ$ until it joins the relation between $\theta_{\textrm{3D}}$ and $\theta_{\textrm{2D}}$ followed by nucleated dwarfs at $\theta_{\textrm{2D}} > \textrm{nnuc}_{\textrm{floor}} - \textrm{offset}$. Therefore, for the non-nucleated dwarfs, we apply the following deprojection: \begin{eqnarray} \theta_{\textrm{3D}} = \begin{cases} \textrm{nnuc}_{\textrm{floor}}, & \textrm{if}\ \theta_{\textrm{2D}} \leq \textrm{nnuc}_{\textrm{floor}} - \textrm{offset} \, , \\ \theta_{\textrm{2D}} + \textrm{offset}, & \textrm{if}\ \theta_{\textrm{2D}} \geq \textrm{nnuc}_{\textrm{floor}} - \textrm{offset} \, , \end{cases} \end{eqnarray} where $\textrm{nnuc}_{\textrm{floor}} = 1.2^\circ$ in our nominal analysis. As with the nucleated dwarfs, we use offset~$=0.4^\circ$. \section{Obtaining $R_{\textrm{per}}$ from a 3D distance} \label{Rper} Assuming a thermal eccentricity distribution \citep{Jeans_1919, Ambartsumian_1937, Kroupa_2008}, we have that the probability distribution of eccentricities is $P_e = 2e$. If the orbits are approximately Keplerian, the pericentre distance $R_{\textrm{per}} = a \left( 1 - e \right)$, where $a$ is the semi-major axis and $e$ is the eccentricity. The time-average distance can be calculated as $\langle R \rangle = a \left(1 + e^2/2 \right)$ \citep[section~3 of][]{Mendez_2017}. To obtain the relation between $\langle R \rangle$ and $R_{\textrm{per}}$, we integrate over the whole eccentricity distribution: \begin{eqnarray} \frac{R_{\textrm{per}}}{\langle R \rangle} = \int \left. \frac{R_{\textrm{per}}}{\langle R \rangle} \right|_e P_e \; de = \int_{0}^{1} \left( \frac{1 - e}{1 + \frac{e^2}{2}} \right) 2e \; de = 0.29. \end{eqnarray} We assume that the 3D distance of a dwarf inferred from its observed projected distance (Appendix~\ref{deproj}) is about the same as its time-average distance. 
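As a minimal numerical sketch (our own illustration, not code from the analysis pipeline), the deprojection rules above and the eccentricity-averaged pericentre ratio can be checked as follows; the quadrature scheme is our own choice:

```python
# Check of the deprojection rules (nominal parameters from the text)
# and of the thermal-eccentricity average R_per/<R> = 0.29.

OFFSET = 0.4       # deg, nominal deprojection offset
NNUC_FLOOR = 1.2   # deg, nominal floor for non-nucleated dwarfs

def deproject(theta_2d, nucleated):
    """Estimated 3D angular distance from the projected one."""
    if nucleated:
        return theta_2d + OFFSET
    # Non-nucleated dwarfs: constant floor until joining the
    # nucleated relation at theta_2d = NNUC_FLOOR - OFFSET.
    return max(NNUC_FLOOR, theta_2d + OFFSET)

def rper_over_mean_r(n_steps=100000):
    """Integral of (1 - e)/(1 + e^2/2) * 2e over e in [0, 1],
    evaluated with midpoint quadrature."""
    de = 1.0 / n_steps
    total = 0.0
    for i in range(n_steps):
        e = (i + 0.5) * de
        total += (1.0 - e) / (1.0 + 0.5 * e * e) * 2.0 * e * de
    return total

print(deproject(0.5, nucleated=True))    # nucleated: theta_2d + offset
print(deproject(0.5, nucleated=False))   # floor applies: 1.2 deg
print(round(rper_over_mean_r(), 2))      # 0.29
```
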
We therefore obtain that for the FDS dwarfs, $R_{\textrm{per}} = 0.29 \, R$. \section{Do two experiments have the same proportion of successes?} \label{Binomial_significance} In Section~\ref{comparison_disturbance}, we encountered the problem that one experiment gives $S_{\textrm{obs, 1}}$ `successes' out of $T_1$ trials while another experiment gives $S_{\textrm{obs, 2}}$ successes out of $T_2$ trials, with a success defined as a dwarf galaxy that appears disturbed. The problem is to test the null hypothesis that the proportion of successes ($x$) is the same in both experiments assuming that $T_1$ and $T_2$ are set in advance independently of the actual number of successes. We consider this problem in two stages as follows: \begin{enumerate} \item Keeping $x$ fixed, we evaluate the likelihood $P_x$ of obtaining data as bad as or worse than the observed combination ($S_{\textrm{obs, 1}}$, $S_{\textrm{obs, 2}}$) for the null hypothesis; and \item We then obtain a weighted mean value for $P_x$ by considering all plausible $x$, each time weighting by the likelihood that the observed ($S_{\textrm{obs, 1}}$, $S_{\textrm{obs, 2}}$) arises with that $x$. \end{enumerate} If we know $x$, we can use binomial statistics (Equation~\ref{binomial}) to find the likelihood of obtaining any combination ($S_1$, $S_2$). We obtain $P_x$ by adding the probabilities of all ($S_1$, $S_2$) combinations which are as likely as or less likely than the observed combination ($S_{\textrm{obs, 1}}$, $S_{\textrm{obs, 2}}$). This follows the usual principle that if the data seems unlikely given the null hypothesis, we should consider all the ways in which it could look as bad or even worse. If the null hypothesis were true, the probability distribution of its parameter $x$ can be found more accurately by combining the two experiments to obtain a single experiment with $\left( S_{\textrm{obs, 1}} + S_{\textrm{obs, 2}} \right)$ successes out of $\left( T_1 + T_2 \right)$ trials. 
We use Equation~\ref{Bernoulli_mean_stdev} to calculate the mean $x_0$ and uncertainty $\sigma_x$ of the resulting posterior inference on $x$ assuming a uniform prior. We then consider all values of $x$ within the range $x_0 \pm 5\sigma_x$ provided this does not go outside the mathematically allowed range ($0-1$). Within the considered range of $x$, we weight each $P_x$ determination by the binomial likelihood $P_{\textrm{obs}} \left( x \right)$ of obtaining the observed combination ($S_{\textrm{obs, 1}}$, $S_{\textrm{obs, 2}}$), so $P_{\textrm{obs}} \left( x \right)$ is a product of the binomial likelihood from each of the experiments. The idea is that each $P_x$ should be weighted by how plausible the corresponding $x$ is given the data in the context of the null hypothesis. This leads to our estimated $P$-value: \begin{eqnarray} P ~=~ \frac{\int P_x P_{\textrm{obs}} \left( x \right) \, dx}{\int P_{\textrm{obs}} \left( x \right) \, dx} \, . \end{eqnarray} Since it is possible that no value of $x$ matches the observations very well because the null hypothesis is wrong, $P_{\textrm{obs}} \left( x \right)$ might not integrate to 1. In the particular case of Section~\ref{comparison_disturbance}, calculating the significance $P$ in this way only tells us how plausible it is that $f_d$ is the same in the low $\eta$ and high $\eta$ subsamples, which is the null hypothesis. Our alternative hypothesis specifies that $f_d$ should be higher in the high $\eta$ subsample on physical grounds, not merely that $f_d$ should have some sort of correlation with $\eta$. Since the inferred $f_d$ indeed rises with $\eta$, we should bear in mind that the low likelihood of the null hypothesis is caused by a deviation in just the sense expected on physical grounds under the alternative hypothesis where tides are relevant. On the other hand, we tried all possible choices of $\eta_t$ to maximize the significance of the signal, leading to a look-elsewhere effect. 
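The two-stage procedure above can be sketched with standard-library binomials. The sketch scans $x$ over a simple grid covering $(0,1)$ rather than the $x_0 \pm 5\sigma_x$ window used in the text (the weights vanish outside that window anyway), and all function names are ours:

```python
from math import comb

def binom_pmf(k, n, x):
    """Binomial probability of k successes in n trials at success rate x."""
    return comb(n, k) * x**k * (1.0 - x)**(n - k)

def p_given_x(s1, t1, s2, t2, x):
    """Stage 1: at fixed x, total probability of every (S1, S2) combination
    as likely as or less likely than the observed one."""
    p_obs = binom_pmf(s1, t1, x) * binom_pmf(s2, t2, x)
    total = 0.0
    for a in range(t1 + 1):
        pa = binom_pmf(a, t1, x)
        for b in range(t2 + 1):
            p = pa * binom_pmf(b, t2, x)
            if p <= p_obs:
                total += p
    return total

def combined_p_value(s1, t1, s2, t2, n_grid=400):
    """Stage 2: weight each P_x by the likelihood of the observed data at x."""
    num = den = 0.0
    for i in range(n_grid):
        x = (i + 0.5) / n_grid
        w = binom_pmf(s1, t1, x) * binom_pmf(s2, t2, x)
        num += p_given_x(s1, t1, s2, t2, x) * w
        den += w
    return num / den
```

For equal observed proportions (e.g. 5/20 and 5/20) the weighted $P$-value is close to 1, while strongly unequal proportions (e.g. 1/20 versus 15/20) drive it towards 0, as expected for this null hypothesis.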
\section{Tidal susceptibility of Newtonian TDG\lowercase{s}} \label{tidal_sus_newton} As discussed in Section~\ref{discussion}, our results indicate a higher level of tidal susceptibility than is expected in $\Lambda$CDM. This could be a sign that the Fornax dwarfs lack dark matter altogether, which is possible in this framework if the FDS dwarfs are mostly TDGs. These are expected to be rather rare in $\Lambda$CDM, so the scenario is not very plausible \citep{Haslbauer_2019_TDG}. We nonetheless consider it for completeness. If the dwarfs are of tidal origin, they would be free of dark matter \citep{Barnes_1992, Wetzstein_2007}. However, the massive cluster galaxies would still be surrounded by a dark matter halo. In this scenario, the mass ratio between the dwarfs and the massive galaxies would be rather extreme, suggesting a serious problem with the stability of the dwarfs. To quantify this, we obtain the tidal radius of a dwarf by applying Equation~\ref{rtid_LCDM} considering only its baryonic mass. Similarly, we can obtain the disruption time-scale by applying Equation~\ref{td_LCDM} and accounting for the fact that the terms referring to the dwarf (those labelled with a subindex `dwarf') should be purely baryonic, while the terms referring to the large galaxies (labelled with a subindex `p') should still account for the dark matter contribution to the mass and half-mass radius. We can then substitute in these results to obtain the susceptibility to cluster tides (Equation~\ref{eta_rtid}) and galaxy-galaxy harassment (Equation~\ref{eta_har}). The results are shown in Fig.~\ref{fig:hist_tidal_sus_newton}. As expected, the dwarfs are now much more susceptible to cluster tides (higher $\eta_{\textrm{rtid}}$ than in Fig.~\ref{fig:hist_tidal_sus}). The distribution of $\eta_{\textrm{rtid}}$ becomes very similar to MOND, suggesting at first sight that the Newtonian TDG scenario might be plausible.
However, the tidal susceptibility to harassment ($\eta_{\textrm{har}}$) is very large in this scenario and greatly exceeds 1 for the vast majority of the dwarfs. The high $\eta_{\textrm{har}}$ values arise because the dwarfs are completely unprotected: they do not have a boost to their self-gravity either from MOND or from a dark matter halo. Given their low surface brightness, this leads to very weak self-gravity. However, in a $\Lambda$CDM universe, the large galaxies must still have dark matter haloes. As a result, purely baryonic dwarfs governed by Newtonian gravity should have already been destroyed by encounters with the massive cluster galaxies. Therefore, we consider the TDG scenario in $\Lambda$CDM to be extremely unlikely. Note that in MOND, our analysis is not sensitive to whether the dwarfs are TDGs or formed primordially, since they are purely baryonic in either case. \section{Distribution of dwarf densities in $\Lambda$CDM} \label{dwarf_dens_LCDM} Our MCMC analysis relies on an assumed distribution for the dwarf densities, which are crucial to their tidal stability. We therefore need to repeat the steps discussed in Section~\ref{subsubsec:dwarf_density} for the case of $\Lambda$CDM. For this model, we show the mass-luminosity relation (Fig.~\ref{fig:M_L_LCDM}), the surface density-volume density relation (Fig.~\ref{fig:surfdens_voldens_LCDM}), and the histogram of volume densities of the dwarfs in the FDS catalogue (Fig.~\ref{fig:hist_dens_LCDM}). The main difference is that the mass of the dwarfs is higher since it includes the contribution of the dark matter component within the optical radius (Equation~\ref{M_dwarf_rule}). This raises their surface and volume density. We found that $M/L_{r'} = 74.92 \pm 52.38~M_{\odot}/L_{\odot, r'}$, indicating a rather high dispersion. Moreover, we can no longer approximate that the slope of the relation is 1 on logarithmic axes, indicating non-linearity.
Due to these difficulties, we found that it would be unsuitable to repeat the steps described in Section~\ref{subsubsec:dwarf_density}. To enable a fair comparison with MOND, we nonetheless used as similar a procedure as possible. For this, we fixed the logarithmic offset between the density of the least dense dwarf in our sample ($\rho_{\textrm{min,FDS}}$) and the adopted density threshold of the survey ($\rho_t$). As a result, the minimum observational limit (black line in Fig.~\ref{fig:hist_dens_LCDM}) is 0.09~dex below $\rho_{\textrm{min,FDS}}$, the mean observational limit (grey line in this figure) is 0.46~dex above $\rho_{\textrm{min,FDS}}$, and the maximum observational limit (dashed black line in this figure) is 0.79~dex above $\rho_{\textrm{min,FDS}}$. As in the MOND case, we choose the minimum observational limit (black line in Fig.~\ref{fig:hist_dens_LCDM}) as our nominal density limit for the distribution since it is the only one of these three choices that implies $\rho_t < \rho_{\textrm{min,FDS}}$, which is required of a realistic detection threshold. Assuming instead the mean observational limit would make us lose 2 observed dwarf galaxies from the low-density tail of the distribution. Note also that these dwarfs have a clear tidal morphology because we removed any dwarfs where this is unclear (Section~\ref{data_sel}). To summarize, our nominal $\rho_t$ in $\Lambda$CDM is $3.58\sigma$ below the mean logarithmic density, while $\rho_{\textrm{mean}}$ is $2.56\sigma$ below. \section{Triangle plots with alternative modelling choices} \label{sec:triang} In this appendix, we rerun our MCMC analysis with different modelling assumptions and show their impact using triangle plots similar to Fig.~\ref{fig:triang_MONDLCDM}. Instead of showing $\Lambda$CDM and MOND results on the same graph as done there, our approach will be that each graph shows results for different modelling assumptions but within the context of the same theory. 
We will use different panels for the different theories. As before, we show only the $1\sigma$ contour for each pair of parameters, though the full probability distribution is shown when considering the posterior on one parameter marginalized over all others. The results presented here are discussed in more detail in Section~\ref{discussion}. In Fig.~\ref{fig:triang_newDMfrac}, we check how decreasing the dark matter fraction within the optical radius of the FDS dwarfs affects the results. In particular, we consider the revised dark matter fraction given in Equation~\ref{revised_DM_frac} based on the observed velocity dispersions of nearby dwarfs (Section~\ref{newDMfrac}). As discussed there, we also consider the very conservative case $M_{\textrm{DM}} = 10 \, M_{\star}$. The main impact is on the parameters $\eta_{\textrm{destr}}$ and $\eta_{\textrm{min, dist}}$. The inference on the slope of the eccentricity distribution is rather different for the case $M_{\textrm{DM}} = 10 \, M_{\star}$, but otherwise the posteriors are not much different to the nominal $\Lambda$CDM case in both revised analyses shown here. In Fig.~\ref{fig:3dens}, we compare the parameter inferences resulting from the MCMC analysis assuming three different lower limits ($\rho_t$ values) to the density distribution of the dwarfs: \begin{enumerate} \item The lowest considered $\rho_t$ is set at $5\sigma$ below the mean logarithmic density; \item The second-lowest considered $\rho_t$ is the nominal value used in the main analysis; and \item The highest considered $\rho_t$ is the mean observational limit ($\rho_{\textrm{mean}}$), which we obtained in Section~\ref{subsubsec:dwarf_density} and Appendix~\ref{dwarf_dens_LCDM} for MOND and $\Lambda$CDM, respectively. \end{enumerate} In Fig.~\ref{fig:deproj}, we compare $\Lambda$CDM and MOND while assuming two different values for the deprojection parameters `offset' and `non-nucleated floor' (see Appendix~\ref{deproj}). 
In addition to the nominal values used in the main analysis, we also consider a higher set of values corresponding to the highest plausible 3D distance given the sky-projected distance \citep[see fig.~6 of][]{Venhola_2019}. This entails setting $\textrm{nnuc}_{\textrm{floor}} = 1.5^\circ$ and offset~$=0.5^\circ$ instead of the nominal $\textrm{nnuc}_{\textrm{floor}} = 1.2^\circ$ and offset~$=0.4^\circ$. In Fig.~\ref{fig:highres}, we check if increasing the resolution of the orbital elements in the test mass simulation affects the results for $\Lambda$CDM and MOND. The nominal resolution used is a grid of size $100 \times 100$ for the eccentricity $e$ and initial distance from the cluster centre ($R_i$), which also corresponds to the semi-major axis. In the higher resolution case, this is raised to $200 \times 200$. \end{appendix} \bsp \label{lastpage}
Title: The \emph{R}-Process Alliance: Chemo-Dynamically Tagged Groups II. An Extended Sample of Halo $r$-Process-Enhanced Stars
Abstract: Orbital characteristics based on Gaia Early Data Release 3 astrometric parameters are analyzed for ${\sim} 1700$ $r$-process-enhanced (RPE; [Eu/Fe] $> +0.3$) metal-poor stars ([Fe/H] $\leq -0.8$) compiled from the $R$-Process Alliance, the GALactic Archaeology with HERMES (GALAH) DR3 survey, and additional literature sources. We find dynamical clusters of these stars based on their orbital energies and cylindrical actions using the \HDBSCAN ~unsupervised learning algorithm. We identify $36$ Chemo-Dynamically Tagged Groups (CDTGs) containing between $5$ and $22$ members; $17$ CDTGs have at least $10$ member stars. Previously known Milky Way (MW) substructures such as Gaia-Sausage-Enceladus, the Splashed Disk, the Metal-Weak Thick Disk, the Helmi Stream, LMS-1 (Wukong), and Thamnos are re-identified. Associations with MW globular clusters are determined for $7$ CDTGs; no recognized MW dwarf galaxy satellites were associated with any of our CDTGs. Previously identified dynamical groups are also associated with our CDTGs, adding structural determination information and possible new identifications. Carbon-Enhanced Metal-Poor RPE (CEMP-$r$) stars are identified among the targets; we assign these to morphological groups in the Yoon-Beers $A$(C)$_{c}$ vs. [Fe/H] Diagram. Our results confirm previous dynamical analyses that showed RPE stars in CDTGs share common chemical histories, influenced by their birth environments.
https://export.arxiv.org/pdf/2208.09712
\title{The \emph{R}-Process Alliance: Chemo-Dynamically Tagged Groups II. An Extended Sample of Halo $r$-Process-Enhanced Stars} \author[0000-0001-9723-6121]{Derek Shank} \affiliation{Department of Physics and Astronomy, University of Notre Dame, Notre Dame, IN 46556, USA} \affiliation{Joint Institute for Nuclear Astrophysics -- Center for the Evolution of the Elements (JINA-CEE), USA} \author[0000-0003-4573-6233]{Timothy C. Beers} \affiliation{Department of Physics and Astronomy, University of Notre Dame, Notre Dame, IN 46556, USA} \affiliation{Joint Institute for Nuclear Astrophysics -- Center for the Evolution of the Elements (JINA-CEE), USA} \author[0000-0003-4479-1265]{Vinicius M. Placco} \affiliation{NSF’s NOIRLab, 950 N. Cherry Ave., Tucson, AZ 85719, USA} \author[0000-0003-3246-0290]{Dmitrii Gudin} \affiliation{Department of Mathematics, University of Maryland, College Park, MD 20742, USA} \author{Thomas Catapano} \affiliation{Department of Physics and Astronomy, University of Notre Dame, Notre Dame, IN 46556, USA} \author[0000-0002-5463-6800]{Erika M.\ Holmbeck} \affiliation{Observatories of the Carnegie Institution for Science, 813 Santa Barbara St., Pasadena, CA 91101, USA} \affiliation{Joint Institute for Nuclear Astrophysics -- Center for the Evolution of the Elements (JINA-CEE), USA} \affiliation{Hubble Fellow} \author[0000-0002-8504-8470]{Rana Ezzeddine} \affiliation{Department of Astronomy, University of Florida, Bryant Space Science Center, Gainesville, FL 32611, USA} \affiliation{Joint Institute for Nuclear Astrophysics -- Center for the Evolution of the Elements (JINA-CEE), USA} \author[0000-0001-5107-8930]{Ian U. Roederer} \affiliation{Department of Astronomy, University of Michigan, 1085 S. 
University Ave., Ann Arbor, MI 48109, USA} \affiliation{Joint Institute for Nuclear Astrophysics -- Center for the Evolution of the Elements (JINA-CEE), USA} \author[0000-0002-5095-4000]{Charli M.\ Sakari} \affiliation{Department of Physics and Astronomy, San Francisco State University, San Francisco, CA 94132, USA} \author[0000-0002-2139-7145]{Anna Frebel} \affiliation{Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \affiliation{Joint Institute for Nuclear Astrophysics -- Center for the Evolution of the Elements (JINA-CEE), USA} \author[0000-0001-6154-8983]{Terese T.\ Hansen} \affiliation{Department of Astronomy, Stockholm University, Albanova University Centre, SE-106 91 Stockholm, Sweden} \date{\today} \keywords{Milky Way dynamics (1051), Galaxy dynamics (591), Galactic archaeology (2178), Milky Way evolution (1052), Milky Way formation (1053), Milky Way stellar halo (1060), R-Process (1324)} \section{Introduction}\label{sec:Introduction} \begin{deluxetable*}{l l l l} \tablecaption{Signatures of Metal-Poor Stars \label{tab:MPsignatures}} \tablehead{\colhead{Signature} & \colhead{Definition} & \colhead{Abbreviation} & \colhead{Source}} \startdata Main $r$-process & $+0.3 < $ [Eu/Fe] $\leq +0.7$, [Ba/Eu] $< 0.0$ & $r$-I & \cite{Holmbeck2020}\\ Main $r$-process & $+0.7 < $ [Eu/Fe] $\leq +2.0$, [Ba/Eu] $< 0.0$ & $r$-II & \cite{Holmbeck2020}\\ Main $r$-process & [Eu/Fe] $> +2.0$, [Ba/Eu] $< -0.5$ & $r$-III & \cite{Cain2020}\\ Carbon Enhanced & [C/Fe] $> +0.7$ & CEMP & \cite{Aoki2007} \\ CEMP and RPE & [C/Fe] $> +0.7$, [Eu/Fe] $> +0.3$, [Ba/Eu] < 0.0 & CEMP-$r$ & \cite{Aoki2007} \\ \enddata \end{deluxetable*} The rapid neutron-capture process ($r$-process) governs the formation of the heaviest elements in the universe, and accounts for the production of roughly half of the elements beyond iron. 
A large source of neutrons is required in order to allow neutron-rich isotopes to form far from stability, where they subsequently decay to stable, or long-lived, isotopes all the way up to uranium (U; $Z = 92$). The $r$-process was first formalized by the revolutionary work of \citet{Burbidge1957} and \citet{Cameron1957}, and later Truran and colleagues (e.g., \citealt{Truran1971}; for historical reviews see \citealt{Truran2002} and \citealt{Cowan2021}), who suggested that core-collapse supernovae were the source of $r$-process elements. Candidate sites that produce a sufficient neutron fluence to result in an $r$-process are limited, and have been speculated to be either magnetorotationally jet-driven supernovae (see \citealt{Mosta2018} for a debate on this source), or mergers of either binary neutron stars or a binary neutron star and black hole system, in addition to the already suggested core-collapse supernovae \citep{Lattimer1974,Woosley1994,Winteler2012,Wanajo2014,Nishimura2015,Thielemann2017}. Observations of the kilonova associated with the gravitational wave event GW$170817$ have shown a definitive astrophysical source of heavy elements created by the $r$-process in binary neutron star mergers \citep{Abbott2017a,Abbott2017b,Drout2017,Shappee2017,Tanaka2017,Watson2019}. The nature of the $r$-process can also be studied through efforts to classify halo stars into chemical groups (see Table~\ref{tab:MPsignatures} for definitions), furthering the statistics of $r$-process abundance patterns. Large-scale efforts to identify $r$-process-enhanced (RPE) stars have been underway since these stars were first recognized by \citet{Sneden1994} (see, e.g., \citealt{Christlieb2004,Barklem2005,Roederer2014b}). 
With the rarity of these stars limiting the total number of known moderately $r$-process-enhanced ($r$-I) and highly $r$-process-enhanced ($r$-II) stars, the $R$-Process Alliance (RPA) was initiated in 2017 with the goal of dramatically increasing the total number of identified RPE stars. Through dedicated spectroscopic analysis efforts \citep{Hansen2018,Sakari2018,Ezzeddine2020,Holmbeck2020}, the RPA has already doubled the number of known $r$-II stars (from $65$ to $137$) across the first four data releases \citep{Holmbeck2020}. Additional RPE stars are expected to be identified in the near future, based on ongoing analysis of over a thousand moderately high-resolution, moderate-S/N ``snapshot'' spectra obtained by the RPA over the past few years. The advent of the Gaia satellite mission \citep{GaiaCollaboration2016a} has allowed for precision astrometric parameters (including parallaxes and proper motions) to be collected for over a billion stars, with millions having measured radial velocities (only available for bright sources with $V \lesssim 14$) in Gaia Early Data Release 3 (EDR3; \citealt{GaiaCollaboration2021}). Since $r$-process elements require high-resolution spectra to measure their abundances, accurate radial velocities are often known for RPE stars from such data, even if Gaia does not have this information. These data can be used to reconstruct the orbits of stars once a suitable Galactic potential is chosen. Stars with similar energies and actions, describing the extent of the stellar orbits, can be attributed to the same progenitor satellite or globular cluster which was subsequently accreted into the Milky Way (MW), dispersing the observed RPE stars to their current positions \citep{Helmi1999a}. \citet{Roederer2018a} employed unsupervised clustering algorithms to group stellar orbital dynamics for RPE stars, an approach that has proven crucial to determine structures in the MW that are not revealed through large-scale statistical sampling methods.
These authors were able to determine the orbits for $35$ $r$-II stars. Multiple clustering tools were applied to the orbital energies and actions to identify stars with similar orbital characteristics. This study revealed eight dynamical groupings comprising between two and four stars each. The small dispersion of each group's metallicity was noted, and accounted for by reasoning that each group was associated with a unique accretion event whose stars shared a common chemical history. \citet{Gudin2021} extended the work by \citet{Roederer2018a}, using a much larger sample of RPE stars, including both $r$-I and $r$-II stars. Utilizing the \HDBSCAN ~algorithm \citep{Campello2013}, $30$ Chemo-Dynamically Tagged Groups (CDTGs)\footnote{The distinction between CDTGs and DTGs is that the original stellar candidates of CDTGs are selected to be chemically peculiar in some fashion, while DTGs are selected from stars without detailed knowledge of their chemistry, other than [Fe/H].} were discovered. Their analysis revealed statistically significant similarities in stellar metallicity, carbon abundance, and $r$-process-element ([Sr/Fe]\footnote{The standard definition for an abundance ratio of an element in a star $(\star)$ compared to the Sun $(\odot)$ is given by $[A/B] = (\log{N_{A}/N_{B}})_{\star} - (\log{N_{A}/N_{B}})_{\odot}$, where $N_{A}$ and $N_{B}$ are the number densities of atoms for elements $A$ and $B$.}, [Ba/Fe], and [Eu/Fe]) abundances within individual CDTGs, strongly suggesting that these stars experienced similar chemical-evolution histories in their progenitor galaxies. This work aims to expand the efforts of \citet{Roederer2018a} and \citet{Gudin2021}, analyzing the CDTGs present among stars in an updated RPE stellar sample, which includes stars from the literature and the published GALactic Archaeology with HERMES Data Release 3 (GALAH DR3; \citealt{Buder2021}) catalog of metal-poor ([Fe/H] $\leq -0.8$) stars.
The procedures employed closely follow the work of \citet{Shank2022a}, which considered DTGs found in the sample of the Best and Brightest selection of \citet{Schlaufman2014} (see \citealt{Placco2019} and \citealt{Limberg2021b} for follow-up studies). The association of our identified CDTGs with recognized Galactic substructures, previously known DTGs/CDTGs, globular clusters, and dwarf galaxies is explored, with the most interesting stellar populations being noted for future high-resolution follow-up studies. Statistical analysis of the elemental abundances present in the CDTGs is investigated. This paper is outlined as follows. Section \ref{sec:Data} describes the RPE literature and GALAH DR3 sample, along with their associated astrometric parameters and the dynamical parameters. The clustering procedure is outlined in Section \ref{sec:ClusteringProcedure}. Section \ref{sec:StructureAssociations} explores the clusters and their association to known MW structures. A statistical analysis of the CDTG abundances is presented in Section~\ref{sec:chemical_structure}. Finally, Section~\ref{sec:Discussion_2} presents a short summary and perspectives on future directions. \vspace{1.5cm} \section{Data}\label{sec:Data} \subsection{Construction of the RPE Initial Sample}\label{subsec:initial_sample} A literature compilation of RPE stars and known RPE stars in GALAH DR3 form the basis for our data set, as described below. \subsubsection{The RPE Literature Sample}\label{subsubsec:construct_literature} A literature search for RPE stars, including the published material from the RPA, is constructed from the most recent version of \href{https://jinabase.pythonanywhere.com/}{JINAbase} \footnote{\href{https://jinabase.pythonanywhere.com/}{https://jinabase.pythonanywhere.com/}.} \citep{Abohalima2018}. This version crucially includes both the abundances relative to the Sun, as well as the absolute abundances.
Unlike the work presented in \citet{Gudin2021}, the literature sample is chosen based on the absolute abundances, and scaled to the Solar atmospheric abundances presented in \citet{Asplund2009}. A restriction on the stellar parameters is applied with [Fe/H] $\leq -0.8$ and $4250 \leq T_{\rm eff} \,\rm{(K)} \leq 7000$ being the target range for RPE stars. The RPE sources are all spectroscopic surveys, and while there is not a uniform methodology in common between the analyses for determining stellar parameters, the methodologies do not differ much in their results (see Fig. 5 of \citealt{Sakari2018}). There are a total of $582$ RPE stars from the literature with [Fe/H] $\leq -0.8$ and $4250 \leq T_{\rm eff} \,\rm{(K)} \leq 7000$ that satisfy the requirements for classification as $r$-I ($426$ stars), $r$-II ($155$ stars), or $r$-III ($1$ star) \citep{McWilliam1995a,McWilliam1995b,Ryan1996,Burris2000,Fulbright2000,Johnson2002,Cohen2004,Honda2004,Aoki2005,Barklem2005,Ivans2006,Preston2006,Franccois2007,Lai2008,Hayek2009,Behara2010,For2010,Roederer2010a,Hollek2011,Hansen2012,Masseron2012,Roederer2012c,Roederer2012a,Aoki2013,Ishigaki2013,Casey2014b,Placco2014a,Roederer2014b,SiqueiraMello2014,Hansen2015,Howes2015,Jacobson2015,Howes2016,Aoki2017,Hill2017,Placco2017,Cain2018,Hansen2018,Hawkins2018,Holmbeck2018,Sakari2018,Mardini2019,Sakari2019,Valentini2019,Xing2019,Cain2020,Bandyopadhyay2020,Ezzeddine2020,Hanke2020,Holmbeck2020,Mardini2020,Placco2020,Rasmussen2020,Yong2021a,Yong2021b,Naidu2022,Roederer2022,Zepeda2022}. Limited-$r$ stars are not discussed in this work and left to future studies. \subsubsection{The GALAH DR3 Sample}\label{subsubsec:construct_GALAH} The remainder of our sample is taken from GALAH DR3. 
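The selection above amounts to converting each star's absolute abundances back to bracket notation on a single solar scale and then applying the Table~\ref{tab:MPsignatures} cuts. A minimal sketch (the solar $\log\epsilon$ values for Fe and Eu are the Asplund et al. 2009 photospheric values quoted from memory and are illustrative only; the real analysis uses the full solar table):

```python
# Sketch of the RPE selection: convert an absolute abundance A(X) to [X/Fe]
# on a chosen solar scale, then apply the Table 1 classification cuts.
# Solar log-epsilon values below are illustrative (Asplund et al. 2009).

LOG_EPS_SUN = {"Fe": 7.50, "Eu": 0.52}

def x_over_fe(a_x, fe_h, element):
    """[X/Fe] = A(X) - [Fe/H] - log eps(X)_sun."""
    return a_x - fe_h - LOG_EPS_SUN[element]

def rpe_class(eu_fe, ba_eu):
    """r-I / r-II / r-III classification from Table 1; None if not RPE."""
    if eu_fe > 2.0 and ba_eu < -0.5:
        return "r-III"
    if 0.7 < eu_fe <= 2.0 and ba_eu < 0.0:
        return "r-II"
    if 0.3 < eu_fe <= 0.7 and ba_eu < 0.0:
        return "r-I"
    return None
```

A star with, say, [Eu/Fe] $= +0.5$ and [Ba/Eu] $< 0$ is tagged $r$-I under these cuts, matching the counts quoted for the literature subset.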
The abundances in GALAH DR3 are subject to known quality checks, which are crucial to take into consideration; we only kept stars whose abundance determinations raise no concerns, i.e., those satisfying \texttt{flag\_X\_fe} $= 0$ and \texttt{snr\_c3\_iraf} $> 30$ \citep{Buder2021}. We have employed the same procedure as the literature sample to put the GALAH DR3 stars on the same Solar scale as presented in \citet{Asplund2009}. We restrict stellar values to [Fe/H] $\leq -0.8$ and $4250 \leq T_{\rm eff} \,\rm{(K)} \leq 7000$, the same as the RPE literature subset. We perform this stellar parameter cut to stay consistent with the RPE literature sample, and dynamical studies require a metallicity cut to allow MW substructure formed from accreted dwarf galaxies to be more easily detected \citep{Yuan2020a}. The sample was then cleaned by removing stars already present in the RPE literature subset, though these amounted to only a handful of stars. While the GALAH DR3 sample has spectroscopically derived stellar parameters, there is not sufficient overlap between the stars in the RPE literature sample and the GALAH DR3 sample to assess the consistency of the two sets of stellar parameters for RPE stars. However, Fig. 6 of \citet{Buder2021} shows that the stellar parameters obtained by GALAH DR3 do not differ much from those of the Gaia FGK Benchmark stars. The stellar parameter cut yields $1194$ metal-poor stars from GALAH DR3 that satisfy the requirements for classification as $r$-I ($967$ stars), $r$-II ($226$ stars), or $r$-III ($1$ star). We henceforth refer to the union of the two above samples as the RPE Initial Sample, and list them in Table~\ref{tab:initial_data_descript} in the Appendix. In the print edition, only the table description is provided; the full table is available only in electronic form. The stars from the RPE Initial Sample were then cross-matched with Gaia Early Data Release 3 (EDR3; \citealt{GaiaCollaboration2021}), using the same methods outlined in \citet{Shank2022a}.
Figure~\ref{fig:galactic_map} shows the spatial distribution of these subsets of RPE stars in Galactic coordinates. A comparison of the magnitudes and colors for the RPE literature and GALAH DR3 subsets can be seen in Figure~\ref{fig:mag_hist}. From inspection of the figure, it is clear the GALAH DR3 subset of the RPE Initial Sample peaks at fainter magnitudes and redder colors compared to the RPE literature subset. RPE stars from the literature are relatively bright, due to the need for high-resolution spectra to detect the $r$-process elements. Bright stars can be studied with smaller aperture telescopes, and require less observation time on larger aperture telescopes. GALAH DR3 (all spectra obtained with the AAT 3.9-m telescope) includes spectra taken for somewhat fainter stars. The available elemental-abundance ratio estimates for stars in the RPE Initial Sample are plotted in Figure~\ref{fig:abundances}, as a function of [Fe/H]. The RPE stars in GALAH DR3 offer few measurements of Mg and Sr; these elements are included here for the sake of completeness. In the case of Sr, which is a first-peak $r$-process element, Y can be used as a first-peak substitute with more elemental abundances readily available from GALAH DR3; there is no comparable element for Mg. While results using these elements are presented, future studies will allow further revisions, where necessary, as the information on abundances is updated and expanded. Figure \ref{fig:abundances_histogram} provides histograms of the elemental-abundance ratios considered in the present work. As can be seen from inspection of these figures, stars in the RPE Initial Sample cover a wide range of abundances. This allows the abundance space to be accurately sub-sampled in later stages of our analysis. The Yoon-Beers Diagram of $A(\rm{C})_{c}$ vs.
[Fe/H] for these stars is shown in Figure~\ref{fig:yoon_beers}; $A(\rm{C})_{c}$ is the absolute carbon abundance\footnote{The standard definition for absolute abundance of an element X in a star ($\star$) compared to the Sun ($\odot$) is $A(\rm{X}) = \rm{[X/Fe]}_{\star} + \rm{[Fe/H]}_{\star} +\log\epsilon(\rm{X})_{\odot}$.} corrected from the observed value to account for the depletion of carbon on the giant branch, following \citet{Placco2014b}. For the convenience of later research, the CEMP-$r$ stars are also provided in Table~\ref{tab:cemp} of the Appendix. These stars are included in the analysis, with $28$ GALAH DR3 and $82$ Literature CEMP-$r$ stars. CEMP-$r$ stars are expected to be enriched in their birth clouds by external sources, and as such, do not conflict with the carbon-normal stars that dominate the RPE Initial Sample \citep{Hansen2015}. The RPE Initial Sample has $1393$ $r$-I stars, $381$ $r$-II stars, and $2$ $r$-III stars, for a total of $1776$ RPE stars. \subsection{Construction of the Final Sample}\label{subsec:rpe_final_sample} \subsubsection{Radial Velocities, Distances, and Proper Motions}\label{subsec:rv_dist_pm} Radial velocities, parallaxes, and proper motions for each star are taken from Gaia EDR3, when available. Radial velocities are available for about $90\%$ of the RPE Initial Sample from Gaia EDR3, with typical errors of ${\sim} 1$ km s$^{-1}$. The top panel of Figure~\ref{fig:rv_hist} shows a histogram of the residual differences between the high-resolution radial velocities from the RPE Initial Sample and the Gaia EDR3 values. The biweight location and scale of these differences are $\mu = -0.4$ km s$^{-1}$ and $\sigma = 1.5$ km s$^{-1}$, respectively. The bottom panel of this figure shows the residuals between the high-resolution sources and Gaia EDR3 radial velocities, as a function of the Gaia radial velocities. 
The blue dashed line is the biweight location, while the shaded regions represent the first (1$\sigma$) and second (2$\sigma$) biweight scale ranges. Gaia EDR3 radial velocities are used, when available, with literature values supplementing when not. The distances to the stars are determined either through the StarHorse distance estimate \citep{Anders2022} or the Bailer-Jones distance estimate (BJ21; \citealt{Bailer-Jones2021}). Parallax values in our RPE Initial Sample from EDR3 have an average error of $\sim 0.02$ mas. The BJ21 and StarHorse distances are determined by a Bayesian approach utilizing the EDR3 parallax, magnitude, and color \citep{Bailer-Jones2021,Anders2022}. The errors are presented for each star in the tables provided in the Appendix. Our adopted distances are chosen following the same procedure outlined in \citet{Shank2022b}, though it is noted here that we prioritize StarHorse distances; the full procedure can be found in Table~\ref{tab:initial_data_descript} in the Appendix. The proper motions in our RPE Initial Sample from Gaia EDR3 have an average error of $\sim 17$ $\mu$as yr$^{-1}$. \subsubsection{Dynamical Parameters}\label{subsubsec:DynamicalParameters} The orbital characteristics of the stars are determined using the Action-based GAlaxy Modelling Architecture\footnote{\url{http://github.com/GalacticDynamics-Oxford/Agama}} (\AGAMA) package \citep{Vasiliev2019}, using the same Solar positions and peculiar motions described in \citet{Shank2022a}\footnote{We adopt a Solar position of ($-8.249$, $0.0$, $0.0$) kpc \citep{GravityCollaboration2020} and Solar peculiar motion in (U,W) as ($11.1$,$7.25$) km s$^{-1}$ \citep{Schonrich2010}, with Galactocentric Solar azimuthal velocity \textit{V} $= 250.70$ km s$^{-1}$ determined from \citet{Reid2020}.}, along with the same gravitational potential, \MWMMXVII~ \citep{McMillan2017}.
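The biweight location and scale quoted for the radial-velocity residuals are standard outlier-resistant estimators (e.g., as popularized by Beers, Flynn, and Gebhardt 1990); a standard-library sketch with the usual tuning constants ($c=6$ for location, $c=9$ for scale):

```python
import statistics

def _mad(data, center):
    """Median absolute deviation about a given center."""
    return statistics.median(abs(x - center) for x in data)

def biweight_location(data, c=6.0):
    """Tukey biweight location: a robust alternative to the mean."""
    m = statistics.median(data)
    mad = _mad(data, m)
    if mad == 0.0:
        return m
    num = den = 0.0
    for x in data:
        u = (x - m) / (c * mad)
        if abs(u) < 1.0:          # points beyond c*MAD get zero weight
            w = (1.0 - u * u) ** 2
            num += w * (x - m)
            den += w
    return m + num / den

def biweight_scale(data, c=9.0):
    """Biweight scale: a robust alternative to the standard deviation."""
    m = statistics.median(data)
    mad = _mad(data, m)
    if mad == 0.0:
        return 0.0
    num = den = 0.0
    for x in data:
        u = (x - m) / (c * mad)
        if abs(u) < 1.0:
            num += (x - m) ** 2 * (1.0 - u * u) ** 4
            den += (1.0 - u * u) * (1.0 - 5.0 * u * u)
    return (len(data) ** 0.5) * (num ** 0.5) / abs(den)
```

A single catastrophic outlier (e.g. an unrecognized binary shifting one residual by tens of km s$^{-1}$) barely moves these estimates, which is why they are preferred over the mean and standard deviation for residual distributions like Figure~\ref{fig:rv_hist}.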
The $6$D astrometric parameters, determined in Section~\ref{sec:Data}, are run through the orbital integration process in \AGAMA, in the same manner as \citet{Shank2022a}, in order to calculate the orbital energy, cylindrical positions and velocities, angular momentum, cylindrical actions, and eccentricity. See \citet{Shank2022a} for definitions of these orbital parameters, and details of the Monte Carlo error calculation. The RPE Initial Sample of $1776$ stars was cut to exclude stars that are unbound from the MW ($E > 0$) (J$124753.30$-$390802.0$, J$135735.40$-$303249.0$, and J$175159.80$-$475131.0$). Finally, in order to obtain accurate orbital dynamics, we conservatively remove $53$ stars with radial velocity differences $> 15$ km s$^{-1}$ between the Gaia EDR3 values and the high-resolution source values. Most of these stars are expected to be binaries. We also considered a stricter cut requiring [Ba/Eu] $< -0.3$ (rather than [Ba/Eu] $< 0$), in order to more confidently select stars with $r$-process enhancement. We decided not to proceed with this cut, as ultimately including more stars with a wider range of [Ba/Eu] will only serve to increase the abundance dispersion when randomly sampled. Hence, this is a conservative choice; any included stars that are in fact not clearly RPE (i.e., they have contributions from, e.g., the $s$-process) will only decrease the significance of our dispersion analysis described below\footnote{An explicit test of this cut indeed resulted in a small increase in the statistical significance of both Ba and Eu (with the exception of Eu for the $r$-I sample), following the procedure described in Section~\ref{sec:chemical_structure}.}. Application of the above cuts leaves a total sample of $1720$ stars on which to perform the following analysis. The dynamical parameters of the stars with determined orbits are listed in Table~\ref{tab:final_data_descript} in the Appendix; we refer to this as the RPE Final Sample.
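The two sample cuts above amount to a simple per-star filter. A minimal sketch (hypothetical field names and units; not the authors' pipeline), keeping only bound stars whose Gaia EDR3 and high-resolution radial velocities agree to within $15$ km s$^{-1}$:

```python
def passes_quality_cuts(energy, rv_gaia, rv_highres, max_drv=15.0):
    """Return True if a star survives both sample cuts described above.

    energy     : orbital energy (E >= 0 means unbound from the MW)
    rv_gaia    : Gaia EDR3 radial velocity [km/s]
    rv_highres : high-resolution spectroscopic radial velocity [km/s]
    """
    if energy >= 0.0:
        return False  # unbound from the Milky Way
    if abs(rv_gaia - rv_highres) > max_drv:
        return False  # discrepant radial velocities: likely a binary
    return True
```

Applying such a filter star by star reproduces the bookkeeping that reduces the Initial Sample of $1776$ stars to the Final Sample of $1720$.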
In the print edition, only the table description is provided; the full table is available only in electronic form. \input{Tables/cluster_summary_table} \input{Tables/cluster_stellar_results_stub_table} Figure~\ref{fig:orb_dist} provides histograms of \textit{r}$_{\apo}$ (top), \textit{r}$_{\peri}$ (middle), and Z$_{\maxtext}$ (bottom) for stars in the RPE Final Sample. The full sample is shown in the left column, $r$-I stars are shown in the middle column, and $r$-II and $r$-III stars are shown in the right column. From inspection of this figure, it is clear that the majority of the stars in this sample occupy orbits that take them within the inner-halo region (up to around $15$ to $20$ kpc from the Galactic center), but they also explore regions well into the outer-halo region, up to $\sim 50$ kpc away from the Galactic plane. Although they appear rather similar, according to a Kolmogorov-Smirnov two-sample test between the $r$-I (middle column) and $r$-II plus $r$-III (right column) distributions, the hypothesis that these two samples are drawn from the same parent population is rejected ($p \ll 0.001$) for each of \textit{r}$_{\apo}$ (top), \textit{r}$_{\peri}$ (middle), and Z$_{\maxtext}$ (bottom). This may be due to the different masses of the dwarf satellites in which the $r$-I and $r$-II stars were formed (the majority of RPE stars are likely formed in dwarf satellites, but not all are required to be), but we choose not to speculate further on this point at present. Figure~\ref{fig:toomre} is the Toomre Diagram for the RPE Final Sample. The red solid semi-circle indicates whether a stellar orbit is disk-like (inside) or halo-like (outside). There are $234$ ($17\%$) $r$-I stars and $30$ ($8\%$) $r$-II stars that have disk-like kinematics. Of the disk-like $r$-II stars, there are $22$ that are part of the GALAH DR3 subset of the RPE Final Sample, while the remaining eight are from the literature subset.
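The Kolmogorov-Smirnov comparison above is based on the maximum distance between the two empirical cumulative distribution functions. In practice a library routine such as \texttt{scipy.stats.ks\_2samp} supplies both the statistic and the $p$-value; the sketch below (not the authors' code) computes just the $D$ statistic:

```python
def ks_statistic(a, b):
    """Maximum absolute difference between the empirical CDFs of two samples."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        x = min(a[i], b[j])
        # Step both ECDFs past all ties at x before comparing them.
        while i < na and a[i] == x:
            i += 1
        while j < nb and b[j] == x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d
```

Identical samples give $D = 0$, fully disjoint samples give $D = 1$, and the $p$-value is then obtained from the Kolmogorov distribution given the two sample sizes.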
This difference can be attributed to RPE stars being targeted more often in the halo, where they have a higher likelihood of detection; most known RPE stars ($85\%$) reside in the halo. The blue vertical dashed line indicates whether a stellar orbit is prograde (v$_{y} > 0$ km s$^{-1}$) or retrograde (v$_{y} < 0$ km s$^{-1}$). There are $1026$ ($76\%$) $r$-I stars and $215$ ($64\%$) $r$-II stars with prograde orbits. The \citet{Gudin2021} literature sample found an almost even split between RPE stars on prograde and retrograde orbits, while the expanded RPE Final Sample presented here has more prograde stars ($1241$, or ${\sim}72\%$) than retrograde stars. Note that simulations performed by \citet{Hirai2022} show that $r$-II stars are slightly favored to be prograde as well. The RPE Final Sample has $1346$ $r$-I stars, $372$ $r$-II stars, and $2$ $r$-III stars, for a total of $1720$ RPE stars. \vspace{1.0cm} \section{Clustering Procedure}\label{sec:ClusteringProcedure} \citet{Helmi2000} were among the first to suggest the use of integrals of motion, in their case orbital energies and angular momenta, to find substructure in the MW using the precision measurements of next-generation surveys that were planned at the time. \citet{McMillan2008} considered the use of actions as a complement to the previously suggested orbital energy and angular momenta, with only the vertical angular momentum being invariant in an axisymmetric potential. Most recently, many authors have employed the orbital energies and cylindrical actions (E,J$_{r}$,J$_{\phi}$,J$_{z}$) to determine the substructure of the MW using Gaia measurements \citep{Helmi2017,Myeong2018b,Myeong2018c,Roederer2018a,Naidu2020,Yuan2020a,Yuan2020b,Gudin2021,Limberg2021a,Shank2022a}.
As described in \citet{Shank2022a}, we employ \HDBSCAN ~in order to perform a cluster analysis over the orbital energies and cylindrical actions of the RPE Final Sample obtained through the procedure outlined in Section \ref{sec:Data}. The \HDBSCAN ~algorithm\footnote{For a detailed description of the \HDBSCAN ~algorithm visit: \url{https://hdbscan.readthedocs.io/en/latest/how_hdbscan_works.html}} operates through a series of calculations that separate the background noise from denser clumps of data in the dynamical parameters. We utilize the following parameters described in \citet{Shank2022a}: \texttt{min\_cluster\_size} $= 5$, \texttt{min\_samples} $= 5$, \texttt{cluster\_selection\_method} $=$ \texttt{'leaf'}, \texttt{prediction\_data} $=$ \texttt{True}, Monte Carlo samples at $1000$, and minimum confidence set to $20\%$. Table~\ref{tab:cluster_summary} provides a listing of the $36$ Chemo-Dynamically Tagged Groups\footnote{Chemo-Dynamically Tagged Groups are derived from purely dynamical parameters. The chemical information comes from the RPE selection criteria, thus distinguishing them from Dynamically Tagged Groups (DTGs).} (CDTGs) identified by this procedure, along with their numbers of member stars, confidence values (calculated as described in \citealt{Shank2022a}), and associations described below. Note that, although a minimum confidence value of $20\%$ was employed, the actual minimum value found for these CDTGs is $43.0\%$ (CDTG-15); only two other CDTGs have an assigned confidence value less than $50.0\%$ (CDTG-18 and CDTG-35). The average confidence level of the $36$ CDTGs is quite high, at $81.9\%$. In total, there were $375$ stars ($22\%$ of the Final Sample) assigned to the $36$ CDTGs. The previously known groups are identified using the nomenclature introduced by \citet{Yuan2020b}.
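The confidence values above come from re-running the clustering on Monte Carlo realizations of the data ($1000$ samples here) and asking how often each star lands in its nominal cluster; the exact procedure is given in \citet{Shank2022a}. A schematic of the per-star bookkeeping (illustrative only, not the authors' implementation):

```python
from collections import Counter

def star_confidence(mc_labels):
    """Modal cluster and membership fraction for one star.

    mc_labels: the cluster label the star received in each Monte Carlo
    realization of the data; -1 denotes HDBSCAN noise.
    """
    counts = Counter(lbl for lbl in mc_labels if lbl != -1)
    if not counts:
        return -1, 0.0  # never clustered: pure noise
    label, n = counts.most_common(1)[0]
    return label, n / len(mc_labels)

# A star recovered in cluster 3 in 800 of 1000 realizations:
label, conf = star_confidence([3] * 800 + [-1] * 200)
```

Clusters whose stars fall below the $20\%$ confidence floor would then be discarded.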
For example, IR18:E refers to the first initial then the last initial of the lead author (IR) \citep{Roederer2018a}, the year the paper was published (18), and, after the colon, the group name provided by the authors of the paper (E). We use the following references for associations: AH17: \citet{Helmi2017}, GM17: \citet{Myeong2017}, HK18: \citet{Koppelman2018}, GM18a: \citet{Myeong2018b}, GM18b: \citet{Myeong2018c}, IR18: \citet{Roederer2018a}, HL19: \citet{Li2019}, SF19: \citet{Sestito2019}, ZY19: \citet{Yuan2019}, NB20: \citet{Borsato2020}, HL20: \citet{Li2020}, SM20: \citet{Monty2020}, ZY20a: \citet{Yuan2020b}, ZY20b: \citet{Yuan2020a}, GC21: \citet{Cordoni2021}, DG21: \citet{Gudin2021}, CK21: \citet{Kielty2021}, GL21: \citet{Limberg2021a}, KH22: \citet{Hattori2022}, KM22: \citet{Malhan2022}, DS22a: \citet{Shank2022b}, DS22b: \citet{Shank2022a}, and SL22: \citet{SofieLovdal2022}. Table~\ref{tab:cluster_results_stub} lists the stellar members of the identified CDTGs, along with their values of [Fe/H], [C/Fe]$_c$, [Mg/Fe], [Sr/Fe], [Y/Fe], [Ba/Fe], and [Eu/Fe], where available. The last line (in bold font) in the listing for each CDTG provides the mean and dispersion (both using biweight estimates) for each quantity. Note that for dynamical groups in which fewer than three measurements of a given element are provided, we list the mean, and code the dispersion as a missing value. Table~\ref{tab:cluster_orbit} lists the dynamical parameters (and their errors) derived by \AGAMA~that are used in our analysis of the identified CDTGs.
\section{Structure Associations}\label{sec:StructureAssociations} Associations are now sought between the newly identified CDTGs and known MW structures, including large-scale substructures\footnote{Here, the term ``large-scale substructure'' is used to distinguish large over-densities of stars determined in the integral-of-motion space in the Galaxy, e.g., the substructures presented in \citet{Naidu2020}.}, previously identified dynamical groups, stellar associations, globular clusters, and dwarf galaxies. \vspace{5cm} \input{Tables/cluster_orbital_table} \subsection{Milky Way Substructures}\label{subsec:MWSubstructure} Analyzing the orbital energies and actions alone is insufficient to determine separate large-scale substructures. Information on the elemental abundances is crucial due to the differing star-formation histories of the structures, which can vary in both mass and formation redshift \citep{Naidu2020}. The prescription used to determine the structural associations of our CDTGs is outlined in \citet{Naidu2020}, and explained in detail in \citet{Shank2022a}. Simple selections are performed based on physically motivated choices for each substructure, excluding previously defined substructures, as the process iterates to decrease contamination between substructures. Following their procedures, we find six predominant MW substructures associated with our CDTGs, listed in Table~\ref{tab:substructures}. This table provides the numbers of stars, the mean and dispersion of their chemical abundances, and the mean and dispersion of their dynamical parameters for each substructure. The Lindblad Diagram and projected-action plot for these substructures are shown in Figure~\ref{fig:energy_actions}. \input{Tables/substructure_table} \subsubsection{Gaia-Sausage-Enceladus}\label{subsubsec:GSE} The most populated substructure identified here is Gaia-Sausage-Enceladus (GSE), which contains $155$ member stars across the associated CDTGs.
The selection criterion for GSE is $\langle {\rm ecc} \rangle > 0.7$ \citep{Naidu2020}. These CDTGs are distinct chemo-dynamical groups within GSE, as also detected by previous authors, showing that, as a massive merger, GSE retains distinct dynamical groupings from the progenitor satellite \citep{Yuan2020b,Limberg2021a,Gudin2021,SofieLovdal2022}. GSE is thought to be the remnant of an early merger that distributed a significant number of stars throughout the inner halo of the MW \citep{Belokurov2018,Helmi2018}. The action space determined for the member stars exhibits an extended radial component, a null azimuthal component within errors, and a null vertical component within errors. These orbital properties are the product of the high-eccentricity selection of the CDTGs, and agree with previous findings of GSE orbital characteristics when using other selection criteria \citep{Koppelman2018,Myeong2018b,Limberg2021a,Limberg2022}. The $\langle$[Fe/H]$\rangle$ of GSE found in our work ($\sim -1.5$) is rather metal poor, consistent with studies of its metallicity in other dynamical groupings, even though our sample contains more metal-rich stars that could have been associated with GSE \citep{Gudin2021,Limberg2021a,Shank2022a}. The stars that form CDTGs in GSE tend to favor the more metal-poor tail of the substructure, which is also seen in previous dynamical analyses. The $\langle$[Mg/Fe]$\rangle$ ($\sim +0.3$) of GSE exhibits a relatively low level, consistent with the low-Mg structure detected by \citet{Hayes2018} and with the Mg levels of accreted structures simulated by \citet{Mackereth2019}, and explained through an accretion origin of GSE. We also obtain a $\langle$[C/Fe]$_{\textit{c}}\rangle$ ($\sim +0.4$) for GSE; this elevated C level is indicative of production in Type II Supernovae, in agreement with the scenario put forth by \citet{Hasselquist2021} for GSE.
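The eccentricity entering this selection is the standard apocentre/pericentre form, ${\rm ecc} = (r_{\rm apo} - r_{\rm peri})/(r_{\rm apo} + r_{\rm peri})$ (the paper defers formal definitions to \citealt{Shank2022a}). A toy check of the GSE cut (illustrative only):

```python
def eccentricity(r_apo, r_peri):
    """Orbital eccentricity from apocentric and pericentric radii [kpc]."""
    return (r_apo - r_peri) / (r_apo + r_peri)

def is_gse_like(mean_ecc):
    """GSE selection of Naidu et al. (2020): <ecc> > 0.7."""
    return mean_ecc > 0.7

# A highly radial orbit (r_apo = 20 kpc, r_peri = 2 kpc) passes the cut;
# a rounder orbit (10 kpc, 8 kpc) does not.
```

The high-eccentricity cut is what produces the extended radial action and null azimuthal/vertical components noted above.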
The RPE stars associated with GSE exhibit Eu enhancement on par with other detected MW substructures ($\langle$[Eu/Fe]$\rangle \sim +0.6$). Recently, \citet{Matsuno2021} and \citet{Naidu2021} tracked the formation of RPE stars in GSE, finding high levels of Eu present within identified GSE stars, consistent with the work presented here. Finally, we can associate the globular clusters Ryu 879 (RLGC2), IC 1257, NGC 4833, NGC 5986, NGC 6293, and NGC 6402 (M 14) with GSE, based on CDTGs with similar orbital characteristics and stellar associations of these globular clusters (see Sec. \ref{subsec:GCDG} for details). Note in Figure~\ref{fig:energy_actions} how GSE occupies a large region of the Lindblad Diagram, concentrated in the planar and radial portions of the projected-action plot. \subsubsection{The Splashed Disk}\label{subsubsec:SD} The second-most populated substructure identified here is the Splashed Disk (SD), which contains $36$ member stars. The SD is thought to be a component of the primordial MW disk that was kinematically heated during the GSE merger event \citep{Helmi2018,DiMatteo2019,Belokurov2020}. The selection criterion for the SD is $\langle$[$\alpha$/Fe]$\rangle > 0.25 - 0.5\times(\langle$[Fe/H]$\rangle + 0.7)$ \citep{Naidu2020}. The mean velocity components of the SD are consistent with null radial and vertical velocities, while showing a large positive azimuthal velocity consistent with disk-like stars. The mean eccentricity of these stars is most consistent with disk-like orbits. The SD is the most metal-rich substructure identified here ($\langle$[Fe/H]$\rangle \sim -1.1$). The high $\langle$[Mg/Fe]$\rangle$ ($\sim +0.5$) abundance for the SD shows that these stars are old; they could be the result of a merger event, such as that between the MW and the GSE progenitor, or have been heated from a primordial system present within the MW at the time of the GSE merger.
The $\langle$[C/Fe]$_{\textit{c}}\rangle$ ($\sim +0.5$) abundance for the SD is high, which is consistent with expectations from the high mean magnesium abundance as a tracer of Type II Supernovae in this substructure \citep{Woosley1995,Kobayashi2006}. Note that the SD overlaps with the Metal-Weak Thick Disk in the Lindblad Diagram (Figure~\ref{fig:energy_actions}). This is due to the selection criteria only using metallicity and Mg abundances to determine the SD stars \citep{Naidu2020}. Considering the SD is thought to be composed of stars that were heated during the GSE merger event, the positions of the SD stars in the Lindblad Diagram show a relatively large deviation from disk-like orbits, though some have certainly been less kinematically displaced. \subsubsection{The Metal-Weak Thick Disk}\label{subsubsec:MWTD} The third-most populated substructure identified here is the Metal-Weak Thick Disk (MWTD), which contains $16$ member stars. The MWTD is thought to have formed from either a merger scenario, possibly related to GSE, or the result of old stars born within the Solar radius migrating out to the Solar position due to tidal instabilities within the MW \citep{Carollo2019}. The selection criteria for the MWTD are $-2.5 < \langle$[Fe/H]$\rangle < -0.8$, $+0.25 < \langle$[$\alpha$/Fe]$\rangle < +0.45$, and $\langle$J$_{\phi}\rangle > 0.5$ \citep{Naidu2020}. The relative lack of RPE stars in the MWTD agrees with simulations performed by \citet{Hirai2022}, which show that the in situ component does not possess large numbers of highly enhanced $r$-II stars ($\langle$[Eu/Fe]$\rangle \sim +0.6$). The null radial and vertical velocity components, as well as the large positive azimuthal velocity component of the MWTD, are all consistent with the velocity distribution for the MWTD from \citet{Carollo2019}, even though the [Fe/H] cut in our dynamical analysis included more metal-rich stars than in their sample.
The mean eccentricity distribution found within this substructure is also similar to that reported by \citet{Carollo2019}, showing that the MWTD is a distinct component from the canonical thick disk (TD). Recently, both \citet{An2020} and \citet{Dietz2021} have presented evidence that the MWTD is an independent structure from the TD. The distribution in $\langle$[Fe/H]$\rangle$ ($\sim -1.9$) and mean velocity space represents a stellar population consistent with the high-Mg population \citep{Hayes2018} ([Fe/H] $\sim -1.3$), with the mean Mg abundance ($\langle$[Mg/Fe]$\rangle \sim +0.4$) being similar within errors ([Mg/Fe] $\sim +0.3$). The $\langle$[C/Fe]$_{\textit{c}}\rangle$ ($\sim +0.5$) abundance for the MWTD exhibits an enhancement in carbon, possibly pointing to a relation with the strongly prograde CEMP structure found by \citet{Dietz2021}, which was attributed to the MWTD population. While this population is not explicitly recovered, there could be an overlap, and future studies will shed more light on this as new abundance information is explored. Interestingly, the MWTD does not have many identified RPE stars compared to detections of this substructure in previous works that did not focus solely on RPE stars \citep{Shank2022b,Shank2022a}. This could be due to the primordial MW disk not being enhanced in $r$-process elements, as shown for $r$-II stars in the simulations of \citet{Hirai2022}, or a selection effect, since more stars with Mg abundances need to be identified in the disk of the MW (note there are only $395$ stars with detected Mg abundances, which are needed to determine the MWTD substructure based on the procedure in \citealt{Naidu2020}). If future abundance measurements show that relatively few RPE stars are identified with the MWTD, then the formation scenarios of the primordial disk can be further constrained.
Notice in Figure~\ref{fig:energy_actions} how the MWTD occupies a lower-energy component of the disk (the gray dots mostly positioned with prograde orbits) in the Lindblad Diagram, along with being in a more extended disk position as well, a selection that is also seen in \citet{Naidu2020}. \subsubsection{The Helmi Stream}\label{subsubsec:Helmi} \input{Tables/interesting_substructure_table} The third-least populated substructure identified here is the Helmi Stream (HS), which contains $10$ member stars. The HS is one of the first dynamical substructures detected in the MW using integrals of motion \citep{Chiba1998,Helmi1999a,Chiba2000}. The selection criteria for the HS are $0.75\times10^{3} < \langle$J$_{\phi}\rangle < 1.7\times10^{3}$ and $1.6\times10^{3} < \langle$L$_{\perp}\rangle < 3.2\times10^{3}$, with L$_{\perp} = \sqrt{\text{L}_{x}^{2} + \text{L}_{y}^{2}}$ \citep{Naidu2020}. The HS has a characteristically high vertical velocity, which separates it from other stars that lie in the disk, as can be seen in the sample here. The large uncertainty on the vertical velocity of the HS members corresponds to the positive and negative vertical velocity components of the stream, with the negative vertical velocity population dominating, consistent with the members determined here \citep{Helmi2020}. The $\langle$[Fe/H]$\rangle$ of the HS is more metal poor in this sample ($\langle$[Fe/H]$\rangle \sim -2.0$) than the known HS members ([Fe/H] $\sim -1.5$; \citealt{Koppelman2019b}). However, \citet{Limberg2021b} recently noted that the metallicity range of the HS is more metal poor than previously expected, with stars reaching [Fe/H] $\sim -2.5$, which is consistent within errors with both the study performed by \citet{Roederer2010a} and the results presented here.
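The HS selection above is a rectangular cut in the $(\langle$J$_{\phi}\rangle, \langle$L$_{\perp}\rangle)$ plane. A direct transcription (illustrative only; the momenta are assumed here to be in kpc km s$^{-1}$, the units used by \citealt{Naidu2020}):

```python
import math

def is_helmi_stream(j_phi, l_x, l_y):
    """Helmi Stream box cut of Naidu et al. (2020).

    j_phi      : mean azimuthal action (assumed kpc km/s)
    l_x, l_y   : in-plane angular-momentum components (assumed kpc km/s)
    """
    l_perp = math.hypot(l_x, l_y)  # L_perp = sqrt(Lx^2 + Ly^2)
    return 0.75e3 < j_phi < 1.7e3 and 1.6e3 < l_perp < 3.2e3
```

The large L$_{\perp}$ required by the box is the dynamical signature of the stream's characteristically high vertical velocity.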
\citet{Limberg2021b} also considered $r$-process abundances for the HS, showing that [Eu/Fe] is enhanced ($> +0.3$) over a wide range of metallicities ($-2.5 \lesssim $[Fe/H]$ \lesssim -1.0$), as also found for the stars reported in this work ($\langle$[Eu/Fe]$\rangle$ $\sim +0.5$). Notice in Figure~\ref{fig:energy_actions} how the HS occupies a relatively isolated space in the Lindblad Diagram, thanks to the large vertical velocity of the stars providing extra energy compared to the other disk stars. \subsubsection{LMS-1 (Wukong)}\label{subsubsec:LMS1} The second-least populated substructure identified here is LMS-1, which contains $8$ member stars. LMS-1 was first identified by \citet{Yuan2020b}, and also detected by \citet{Naidu2020}, who called it Wukong. The selection criteria for LMS-1 are $0.2\times10^{3} < \langle$J$_{\phi}\rangle < 1.0\times10^{3}$, $-1.65\times10^{5} < \langle$E$\rangle < -1.2\times10^{5}$, and $\langle$[Fe/H]$\rangle < -1.45$ \citep{Naidu2020}. This structure is similar to GSE in terms of the velocity components, but is characterized by a higher energy along with a more metal-poor population \citep{Naidu2020}, as also found for the small number of stars representing LMS-1 in our sample ($\langle$[Fe/H]$\rangle \sim -2.2$). The carbon and magnesium abundances are also high, which indicates an old population ($\langle$[C/Fe]$_{\textit{c}}\rangle \sim +0.4$ and $\langle$[Mg/Fe]$\rangle \sim +0.4$). LMS-1 exhibits low first-peak $r$-process-element abundances for both strontium and yttrium ($\langle$[Sr/Fe]$\rangle \sim +0.2$ and $\langle$[Y/Fe]$\rangle \sim -0.2$). In Figure~\ref{fig:energy_actions}, LMS-1 has a higher energy than GSE in the Lindblad Diagram. However, these stars exhibit a lower eccentricity ($\langle$ ecc $\rangle \sim 0.5$) than the GSE stars ($\langle$ ecc $\rangle > 0.7$), forming their own distinct substructure, with a clearer division in the projected-action plot.
\subsubsection{Thamnos}\label{subsubsec:Thamnos} The Thamnos substructure also contains $8$ member stars. Thamnos was proposed by \citet{Koppelman2019a} as a merger event that populated stars in a retrograde orbit similar to thick-disk stars. The selection criteria for Thamnos are $-1.5\times10^{3} < \langle$J$_{\phi}\rangle < -0.2\times10^{3}$, $-1.8\times10^{5} < \langle$E$\rangle < -1.6\times10^{5}$, and $\langle$[Fe/H]$\rangle < -1.6$ \citep{Naidu2020}. The low energy and strong retrograde rotation suggest that Thamnos merged with the MW long ago \citep{Koppelman2019a}. Here we find a similarly low mean orbital energy and, within errors, as strong a mean retrograde motion as \citet{Koppelman2019a}. The low mean metallicity ($\langle$[Fe/H]$\rangle \sim -2.1$), consistent with the value reported by \citet{Limberg2021a} ($\langle$[Fe/H]$\rangle \sim -2.2$), and the elevated $\langle$[C/Fe]$_{c}\rangle$ ($\sim +0.4$) of these stars also support the merger being ancient. The $\langle$[Mg/Fe]$\rangle$ ($\sim +0.5$) is high, also suggesting an old population, consistent with \citet{Kordopatis2020} ([Mg/Fe] $\sim +0.3$). As far as we are aware, this study presents the first known $r$-process-element abundances detected in Thamnos, with $\langle$[Eu/Fe]$\rangle \sim +0.5$ and $\langle$[Ba/Fe]$\rangle \sim +0.1$, showing that this old system was once subjected to multiple $r$-process events, probably before being accreted into the Galaxy. Notice in Figure~\ref{fig:energy_actions} how Thamnos occupies a space that could be described as a retrograde version of the disk stars. \subsection{Associations to Previously Identified Groups and MW Substructure}\label{subsec:Prev_DTGs_Stellar_assoc} Separately, we can compare the newly identified CDTGs in this work with dynamical groups identified by previous authors in order to find structures in common.
We take the mean group dynamical properties from the previously identified groups and compare them to the mean and dispersion of the dynamical parameters of our identified CDTGs. Stellar associations within $5\arcsec$ are also considered, allowing the identification of stars in our sample that belong to previously identified groups. For details on the previous work used in this process, see \citet{Shank2022a}. The resulting dynamical associations between our identified CDTGs and previously identified groups (along with substructure and globular cluster associations, see Section \ref{subsec:GCDG}) are listed in Table~\ref{tab:interesting_substructure}. The previous groups that are associated with the identified CDTGs in this work are listed in Table~\ref{tab:interesting_groups_substructure}. Table~\ref{tab:cluster_results_stub} lists the individual stellar associations for each of our CDTGs. The reference abbreviations for the previous groups are the same as those listed in Section~\ref{sec:ClusteringProcedure}. This work is distinguished from previous papers by the increase in RPE stars relative to \citet{Gudin2021}, and by preliminary abundance results for other elements such as Mg and Y. Select identified CDTGs related to each of the large-scale substructures described in Section~\ref{subsec:MWSubstructure} (along with one that is not associated with large-scale substructure) are examined in detail below.
\vspace{1cm} \subsubsection{CDTG-6}\label{subsubsec:CDTG_6} CDTG-6 is associated with the Splashed Disk \citep{Naidu2020}, and has interesting associations with previously identified groups. There are three stellar associations made between previously identified groups and CDTG-6, with two coming from DG21:CDTG-5\footnote{We adopt the nomenclature for previously identified DTGs and CDTGs from \cite{Yuan2020b}. For example, DG21:CDTG-5 is represented as the first initial then last initial of the first author (DG) \citep{Gudin2021}, followed by the year the paper was published (21), and, after the colon, the group name given by the authors of the paper (CDTG-5).}, and one coming from DS22a:DTG-62 \citep{Gudin2021,Shank2022b}. DG21:CDTG-5 is associated with the MWTD by the authors \citep{Gudin2021}. DS22a:DTG-62 is not associated with any MW substructure by the authors, though it is noted that there were no measured $\alpha$-element abundances for the stars belonging to DS22a:DTG-62 \citep{Shank2022b}. CDTG-6 has three dynamical associations with previously identified groups -- DG21:CDTG-5, DS22a:DTG-62, and DS22b:DTG-19 \citep{Gudin2021,Shank2022b,Shank2022a}. DS22b:DTG-19 is associated with the MWTD by the authors \citep{Shank2022a}. CDTG-6 thus points towards an association with either the MWTD or the Splashed Disk, but clearly more Mg abundances are needed before definitive claims can be made. While there were three studies \citep{Gudin2021,Shank2022b,Shank2022a} that had associations corresponding to the MWTD, \citet{Shank2022b} had limited $\alpha$-element abundance information. \subsubsection{CDTG-7}\label{subsubsec:CDTG_7} CDTG-7 is the only group associated with the MWTD following the procedure in \citet{Naidu2020}. Three stellar associations are made to DG21:CDTG-7, two to KH22:DTC-4, and one each to DG21:CDTG-6, DS22a:DTG-14, and DS22a:DTG-99. DG21:CDTG-7 is associated with GSE by the authors, while DG21:CDTG-6 is not associated with any MW substructure \citep{Gudin2021}.
KH22:DTC-4 is not associated with any MW substructure, and is associated with IR18:C by the authors \citep{Hattori2022}. DS22a:DTG-14 is associated with the MWTD, with other associations to HL19:GL-1, DS22b:DTG-2, and DG21:CDTG-6 \citep{Shank2022b}. HL19:GL-1 is not associated with MW substructure by the authors, and is also associated with AH17:VelHel-7 \citep{Li2019}. DS22b:DTG-2 is associated with the MWTD, and also associated with HL19:GL-1, DG21:CDTG-6, and DG21:CDTG-8, with DG21:CDTG-8 associated with the MWTD by the authors \citep{Shank2022a,Gudin2021}. DS22a:DTG-99 is not associated with any MW substructure, and is also associated with HL19:GL-1 by the authors \citep{Shank2022b}. CDTG-7 is also dynamically associated with KM22:C-3 and DS22a:DTG-97 \citep{Malhan2022,Shank2022b}. KM22:C-3 is not associated with MW substructure by the authors, while DS22a:DTG-97 is associated with GSE and also HL20:GR-1 \citep{Malhan2022,Shank2022b}. HL20:GR-1 is associated with AH17:VelHel-7, HL19:GL-1, and HL19:GL-2 by the authors, none of which are recovered here \citep{Li2020}. Interestingly, CDTG-7 has a few associations with GSE, but does not have a strong enough eccentricity ($\langle$ ecc $\rangle \sim 0.64$) to be identified as GSE according to the procedure of \citet{Naidu2020} ($\langle$ ecc $\rangle > 0.7$). The chemical information relates this group more to the MWTD than to GSE, with both $\langle$[C/Fe]$_{c}\rangle$ ($\sim +0.5$) and $\langle$[Mg/Fe]$\rangle$ ($\sim +0.4$) being more abundant in CDTG-7 than the detected abundances in GSE presented here ($\sim +0.4$ and $\sim +0.3$, respectively). In total, there are $7$ associations between CDTG-7 and previous works, with $2$ associations related to the MWTD and $2$ to GSE by the previous authors; the rest were not associated with large-scale substructure. \subsubsection{CDTG-8}\label{subsubsec:CDTG_8} CDTG-8 is associated with GSE \citep{Naidu2020}, and has interesting associations with previously identified groups.
Taking a closer look at CDTG-8, we find six stellar associations, with two in KH22:DTC-15, and a star in each of GL21:DTG-18, DG21:CDTG-18, DS22a:DTG-57, and DS22a:DTG-58 \citep{Limberg2021a,Gudin2021,Shank2022b}. KH22:DTC-15 is associated with Pontus \citep{Malhan2022}, not discussed in this work, by the authors \citep{Hattori2022}. KH22:DTC-15 is also associated with IR18:E, IR18:F, IR18:H, and ZY20a:DTG-38 by the authors \citep{Hattori2022}. ZY20a:DTG-38 is related to GSE by the authors \citep{Yuan2020b}. GL21:DTG-18 was not associated with GSE, or any MW substructure, by the authors; however, it was associated with ZY20a:DTG-33, which was also not associated with any MW substructure by the authors \citep{Limberg2021a,Yuan2020a}. DG21:CDTG-18 was also not assigned to MW substructure by the authors, while DS22a:DTG-57 and DS22a:DTG-58 were associated with GSE \citep{Gudin2021,Shank2022b}. CDTG-8 is dynamically associated with GC21:Sausage, DS22a:DTG-57, and DS22b:DTG-11, which are all associated with GSE by their authors \citep{Cordoni2021,Shank2022b,Shank2022a}. We also recover a globular cluster match of CDTG-8 with EV21:Ryu 879 (RLGC 2), which is typically associated with GSE dynamics \citep{Callingham2022,Shank2022b,Shank2022a}. There are $8$ associations between CDTG-8 and previous groups, with $5$ associated with GSE, $1$ with Pontus, and the rest not associated with large-scale substructure by the previous authors. \subsubsection{CDTG-17}\label{subsubsec:CDTG_17} Only one group, CDTG-17, is associated with the HS. CDTG-17 has five stellar associations with DG21:CDTG-15, four with NB20:H99, four with SL22:60, three with HK18:Green, and one each with GL21:DTG-3 and DS22a:DTG-42 \citep{Koppelman2018,Borsato2020,Gudin2021,Limberg2021a,Shank2022b,SofieLovdal2022}.
All of these groups were associated to the HS by their respective authors, with DG21:CDTG-15 also being associated to GL21:DTG-3 and ZY20a:DTG-3, of which we recover the GL21:DTG-3 association; the stars of ZY20a:DTG-3 were identified as HS members by its authors \citep{Gudin2021,Limberg2021a,Yuan2020a}. CDTG-17 is also dynamically associated to GM18a:S2, GM18b:S2, DG21:CDTG-15, GL21:DTG-3, and DS22a:DTG-42, which are all associated to the HS by their authors \citep{Myeong2018b,Myeong2018c,Gudin2021,Limberg2021a,Shank2022b}. There are $8$ associations between CDTG-17 and previous groups, with all $8$ assigned to the HS by the previous authors. \subsubsection{CDTG-22}\label{subsubsec:CDTG_22} CDTG-22 is associated to Thamnos. CDTG-22 has three stellar associations: two belonging to DG21:CDTG-27, which is identified as belonging to Thamnos by the authors \citep{Gudin2021}, and one belonging to KH22:DTC-16, which is associated to IR18:B by the authors \citep{Hattori2022}. On the other hand, CDTG-22 has a dynamical association to GC21:Sequoia, belonging to Sequoia according to the authors \citep{Cordoni2021}. However, Sequoia is a higher-energy structure compared to Thamnos \citep{Koppelman2019b}. CDTG-22 also has a dynamical association to KM22:C-3, which is not associated to any substructure by the authors \citep{Malhan2022}. This is a case where CDTG-22 could lie between the two substructures of Thamnos and Sequoia in terms of energy, and more information is required before a definitive conclusion can be made. There are $4$ associations between CDTG-22 and previous groups, with 1 of those associated with Thamnos, 1 associated to Sequoia, and the other 2 not associated to large-scale substructure by the previous authors. \vspace{0.5cm} \subsubsection{CDTG-25}\label{subsubsec:CDTG_25} CDTG-25 is associated to LMS-1 (Wukong) through the procedure outlined in \citet{Naidu2020}.
There were six stars from previously identified groups matched with CDTG-25 through a $5 \arcsec$ radius search around the CDTG-25 member stars. Three of the stars are in DG21:CDTG-4, and the other three are in DS22a:DTG-67. DG21:CDTG-4 is not assigned to any MW substructure \citep{Gudin2021,Shank2022b}, while DS22a:DTG-67 is associated with LMS-1 \citep{Yuan2020b}. DG21:CDTG-4 was associated with GL21:DTG-2 by the authors, where \citet{Limberg2021a} made a tentative association with LMS-1. These associations strengthen their argument for GL21:DTG-2. CDTG-25 is also dynamically associated with GM18b:Cand10, GM18b:Cand11, DG21:CDTG-4, GL21:DTG-2, and DS22a:DTG-67 \citep{Myeong2018c,Gudin2021,Limberg2021a,Shank2022b}. Both GM18b:Cand10 and GM18b:Cand11 were new groups identified by \citet{Myeong2018c}, though we note that LMS-1 was not discovered until two years later by \citet{Yuan2020b}, meaning that these groups may represent an earlier dynamical detection of LMS-1. There are $5$ associations between CDTG-25 and previous groups, with $2$ associated to LMS-1 and the other $3$ not associated to large-scale substructure by the previous authors. \input{Tables/interesting_substructure_groups_table} \subsubsection{CDTG-36}\label{subsubsec:CDTG_36} CDTG-36 is not assigned to any MW substructure, and has two stellar associations to DS22a:DTG-55, and one each with DS22b:DTG-9, KH22:DTC-4, and EV21:NGC 6397 \citep{Vasiliev2021,Hattori2022,Shank2022b,Shank2022a}. None of these past groups are assigned by their authors to any large-scale MW substructure, with EV21:NGC 6397 being a globular cluster. KH22:DTC-4 is associated to IR18:C by the authors \citep{Hattori2022}. Both DS22a:DTG-55 and DS22b:DTG-9 are also associated with NGC 6397. Although we have presented a stellar association, CDTG-36 has $\langle$[Fe/H]$\rangle \sim -0.9$, as opposed to the metallicity of NGC 6397 of [Fe/H] $\sim -2.0$ \citep{Jain2020}.
It is interesting to see stellar associations with NGC 6397 when the average metallicity of CDTG-36 does not match that of NGC 6397, and when both DS22a:DTG-55 and DS22b:DTG-9 are very metal-poor ($\langle$[Fe/H]$\rangle \sim -2.5$ for both) \citep{Shank2022b,Shank2022a}. There are $4$ associations between CDTG-36 and previous groups, with all $4$ not assigned to large-scale substructure by the previous authors, though $2$ are associated to the globular cluster NGC 6397. CDTG-36 has two member stars that are very metal-poor, while the other three are only metal-poor, which may explain the discrepancy in metallicity between CDTG-36 and NGC 6397. \subsection{Globular Clusters and Dwarf Galaxies}\label{subsec:GCDG} Both globular clusters and dwarf galaxies have been shown to play an important role in the formation of stars that deviate from the usual chemical-abundance trends in the MW \citep{Ji2016a,Myeong2018a}. Globular clusters can also be a good indicator of galaxy-formation history, based on their metallicities and orbits \citep{Woody2021}. From the work of \citet{Vasiliev2021}, we can compare the dynamical properties of $170$ globular clusters to those of the CDTGs we identify. The procedure employed is the same as that used for previously identified groups and stellar associations, introduced in Sec. \ref{subsec:Prev_DTGs_Stellar_assoc}. The dynamics of $45$ dwarf galaxies of the MW (excluding the Large Magellanic Cloud, Small Magellanic Cloud, and Sagittarius) are also explored \citep{McConnachie2020,Li2021}. \citet{Shank2022a} contains details of the orbits of the globular clusters and dwarf galaxies. The same procedure used for previously identified groups was then applied to determine whether a CDTG was dynamically associated to a dwarf galaxy.
Stellar associations were also determined for both globular clusters and dwarf galaxies in the same manner as for previously identified groups. The above comparison exercise led to seven of our identified CDTGs being associated to globular clusters. Table~\ref{tab:interesting_substructure} provides a breakdown of which globular clusters are associated with our CDTGs. The CDTGs associated with globular clusters are expected to have formed in chemically similar birth environments; this is mostly supported by the similar chemical properties of the CDTGs. Associations of globular clusters with Galactic substructure have also been made by \citet{Massari2019} and \citet{Callingham2022}. Ryu 879 (RLGC 2) (CDTG-8 and CDTG-20), IC 1257 (CDTG-15), NGC 6293 (CDTG-11), and NGC 6402 (M 14) (CDTG-11) are dynamically associated to their respective groups. On the other hand, NGC 362 (CDTG-28), NGC 4833 (CDTG-2), NGC 5986 (CDTG-11), and NGC 6397 (CDTG-36) have stellar associations to their respective groups. Even though the matched stars in these globular clusters would individually have been associated with the globular cluster orbital parameters, the overall CDTGs did not possess sufficiently similar orbital characteristics to be associated. \begin{itemize} \item Ryu 879 (RLGC 2) has two dynamical CDTG associations that agree with each other in mean metallicity within the errors ($\langle$[Fe/H]$\rangle$ $\sim -1.97 \pm 0.8$ for CDTG-$9$ vs. $\langle$[Fe/H]$\rangle$ $\sim -1.05 \pm 0.16$ for CDTG-$20$), compared to the metallicity of Ryu 879 (RLGC 2) of [Fe/H] $= -2.1 \pm 0.3$ \citep{Ryu2018}. Both CDTG-9 and CDTG-20 are associated to GSE in this work, agreeing with the association to Galactic substructure of Ryu 879 (RLGC 2) by \citet{Callingham2022}; \citet{Massari2019} did not analyze Ryu 879 (RLGC 2), since the globular cluster had only recently been discovered at the time of that publication.
\item IC 1257 is associated to GSE in this work, and \citet{Massari2019}, \citet{Callingham2022}, and \citet{Limberg2022} associate IC 1257 to GSE as well. \item NGC 6293 is associated to the Bulge by both \citet{Massari2019} and \citet{Callingham2022}, though this globular cluster is associated to GSE in this work. CDTG-11, which is associated to NGC 6293, interestingly has the lowest bound orbital energy ($\langle$E$\rangle \sim -2.2$), which actually overlaps with the potential energy of the defined bulge region in \citet{Callingham2022}, but here CDTG-11 is assigned to GSE due to its large eccentricity ($\langle$ecc$\rangle \sim 0.75$). It is possible that this group formed in the bulge with intrinsically large eccentricity. \item NGC 6402 (M 14) is associated to GSE in this work, but is not assigned by \citet{Massari2019}. \citet{Callingham2022} associate NGC 6402 (M 14) to the Kraken substructure, not explored in this work. \item NGC 362 is not associated to MW substructure in this work, but was associated to GSE by \citet{Massari2019}, \citet{Callingham2022}, and \citet{Limberg2022}. CDTG-28, which was associated to NGC 362, does not have a sufficiently high eccentricity ($\langle$ecc$\rangle \sim 0.5$) in this work to be associated to GSE. \item NGC 4833 is associated to GSE by both the identification in this work and \citet{Massari2019}, but related to the Kraken substructure, not explored in this work, by \citet{Callingham2022}. \item NGC 5986 is associated to the Kraken substructure by \citet{Callingham2022}, is not assigned by \citet{Massari2019}, and is associated to GSE in this work. \item NGC 6397 is not associated to any substructure in this work, but \citet{Massari2019} find an association to the disk, which is not a part of the substructure routine in this work, while \citet{Callingham2022} associate this globular cluster to the Kraken substructure, not explored in this work.
\end{itemize} We did not identify any associations of CDTGs to the sample of (surviving) MW dwarf galaxies, either through stellar associations or through the dynamical-association procedure described above. Note that we excluded known RPE stars that are members of recognized dwarf galaxies during the assembly of our field-star sample. As dwarf-galaxy astrometric parameters continue to improve, evidence may yet emerge for stars stripped from dwarf galaxies during past passages near the inner MW, but no such stars are seen in this sample. Nevertheless, some of the CDTGs identified by our analysis may well be associated with dwarf galaxies that have previously merged with the MW, and are now disrupted. \section{Chemical structure of identified CDTGs}\label{sec:chemical_structure} \subsection{Statistical Framework}\label{subsec:statistical_framework} Since the CDTGs we have obtained are expected to contain stars that (within each given CDTG) have been formed in similar primordial environments, the same processes may be responsible for their chemical enrichment. In this section we explore whether the elemental abundances of stars within the identified CDTGs are more similar to each other than those of stars randomly selected from the RPE Final Sample. We follow the same statistical framework as outlined in \citet{Gudin2021}. First, we use Monte Carlo sampling to select $2.5\times 10^6$ random groups of $N$ stars with a given measured elemental abundance (with $5\leq N\leq 22$) from the sample, and measure the biweight scale \citep{Beers1990a} of each group. This allows us to obtain empirical estimates of the cumulative distribution functions (CDFs) for the elemental-abundance dispersions within CDTGs of a given size. A low CDF value for a given elemental-abundance dispersion in a given CDTG indicates increased similarity for this species between the cluster member stars.
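This Monte Carlo procedure can be sketched in a short stand-alone snippet. The function names and draw counts below are hypothetical and purely illustrative (the published analysis uses $2.5\times10^6$ draws); the biweight scale follows the standard definition of \citet{Beers1990a} with tuning constant $c=9$:

```python
import random
from math import sqrt
from statistics import median

def biweight_scale(x, c=9.0):
    """Biweight scale (Beers et al. 1990): a robust dispersion estimator."""
    med = median(x)
    mad = median(abs(xi - med) for xi in x)  # median absolute deviation
    if mad == 0:
        return 0.0
    u = [(xi - med) / (c * mad) for xi in x]
    # Only points with |u| < 1 contribute to the sums.
    num = sum((xi - med) ** 2 * (1 - ui ** 2) ** 4
              for xi, ui in zip(x, u) if abs(ui) < 1)
    den = sum((1 - ui ** 2) * (1 - 5 * ui ** 2)
              for ui in u if abs(ui) < 1)
    return sqrt(len(x) * num) / abs(den)

def dispersion_cdf(parent, group_scale, n, draws=20000, rng=None):
    """Empirical CDF value: the fraction of random n-star groups drawn from
    `parent` whose biweight scale lies below `group_scale`.  A low value
    indicates that the group is unusually chemically homogeneous."""
    rng = rng or random.Random()
    below = sum(
        biweight_scale(rng.sample(parent, n)) < group_scale
        for _ in range(draws)
    )
    return below / draws
```

A CDTG whose [Fe/H] dispersion yields, say, a CDF value of 0.1 is tighter than 90 per cent of equal-sized random draws from the parent sample; these per-element, per-group CDF values feed the binomial statistics described below.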
For the elemental-abundance dispersions selected at random for CDTGs of a given size, the probability of the number of clusters lying below a given CDF value (from 0 to 1) is described by the binomial distribution. Using three different CDF thresholds ($\alpha\in\{0.25,0.33,0.5\}$) and multiple different abundances ($X\in\{\text{[Fe/H]}, \text{[C/Fe]}_\text{c}, \dots\}$), we derive overall statistical significances from multinomial distributions, grouping the cumulative probabilities either across all $\alpha$ values or across all $X$ abundances. The overall statistical significance of the results is obtained by grouping the probabilities across both all $\alpha$ values and all $X$ abundances. We denote these probabilities according to the following classification: \begin{itemize} \item \textit{Individual Elemental-Abundance Dispersion (IEAD) probability}: Individual binomial probability for specific values of $\alpha$ and $X$. \item \textit{Full Elemental-Abundance Distribution (FEAD) probability}: Multinomial probability for specific values of $\alpha$, grouped over all abundances $X$. \item \textit{Global Element Abundance Dispersion (GEAD) probability}: Multinomial probability for specific abundances $X$, grouped over all values of $\alpha$. This is the overall statistical significance for the particular abundance. \item \textit{Overall Element Abundance Dispersion (OEAD) probability}: Multinomial probability grouped over all values of $\alpha$ and all abundances $X$. This is the overall statistical significance of our clustering results. \end{itemize} For a more detailed discussion of the above probabilities, and their use, the interested reader is referred to \citet{Gudin2021}. \vspace{2cm} \input{Tables/cluster_mean_table} \input{Tables/binom} \subsection{Important Caveats} Note that there are several caveats to the interpretation of this scheme that should be mentioned.
Most importantly, a meaningful understanding of the derived probabilities (described below) depends on the contrast between the ``typical'' CDTG elemental-abundance dispersion and the abundance dispersion of the parent population to which it is compared, from which random draws are made. If the typical CDTG elemental-abundance dispersion is roughly commensurate with the dispersion of the parent sample, then by definition the dispersions will always be consistent with what is expected from random draws, and one cannot expect the CDTG elemental-abundance dispersions to be significantly smaller. This means that, for a given element, prior to assessing the significance of its dispersion for a given set of CDTGs, we should compare its value to that expected from the appropriate parent sample. We carry out this exercise, as described below, by inspection of the Inter-Quartile Range (IQR) for each element for a given set of CDTGs compared to the IQR of CDTGs drawn at random from the parent sample. As a rule of thumb, we demand that the IQR of the mean for each element of a set of CDTGs is on the order of one-half of the IQR of the parent-population CDTGs. Otherwise, there is insufficient ``dynamical range'' for the statistical inferences to be made with confidence, at least for individual elements. Furthermore, as is perhaps obvious, the statistical power of our comparisons increases with the number of CDTGs in a given parent population. Thus, when we consider dynamical clusters formed exclusively from $r$-II stars, as discussed below, it becomes more difficult to place confidence in the statistical inferences. For that reason, in the case of the clustering of $r$-II stars, we have relaxed the criterion for the minimum number of stars per cluster to $3$, rather than $5$, more than doubling the number of identified $r$-II CDTGs.
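The factor-of-two IQR rule of thumb described above amounts to a simple contrast check. As an illustrative sketch (the helper names are hypothetical, and the quartile convention may differ in detail from the published analysis):

```python
from statistics import quantiles

def iqr(values):
    """Inter-quartile range: Q3 - Q1 (exclusive quantile method)."""
    q1, _, q3 = quantiles(values, n=4)
    return q3 - q1

def sufficient_contrast(cdtg_element_means, parent_element_values, factor=2.0):
    """Rule of thumb from the text: the IQR of the CDTG means for an element
    should be at least `factor` times smaller than the IQR of the parent
    population before its GEAD probability is interpreted with confidence."""
    return iqr(cdtg_element_means) <= iqr(parent_element_values) / factor
```

Elements failing this check (e.g. [Fe/H], [Sr/Fe], and [Y/Fe] in the full sample) lack the dynamical range needed for their individual dispersion probabilities to be interpreted.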
It should also be kept in mind that the observed CDTG elemental-abundance dispersions depend on a number of different parameters, including not only the total mass of a given parent dwarf galaxy, but also its available gas mass for conversion into stars, the history of star formation in that environment, and the nature of the progenitor population(s) involved in the production of a given element. These are complex and interacting sets of conditions, and are certainly best considered in the context of simulations (such as \citealt{Hirai2022}). Consequently, the expected result for a given element in a given set of CDTGs is not always clear. However, we have designed our statistical tests to consider a broad set of questions of interest, the most pertinent of which for the current application are the FEAD and OEAD probabilities, which we employ for making our primary inferences. \input{Tables/binom_rI} \subsection{Results} \subsubsection{Full Sample of RPE CDTGs} Table~\ref{tab:cluster_mean} lists the means and dispersions of the elemental abundances explored in this study for each of the CDTGs identified in this work. The second part of the table lists the global CDTG properties, with the mean and standard error of the mean (using the biweight location and scale) of both the CDTG means and dispersions being listed. The second part of the table also includes the IQR of the CDTG means and dispersions. The third part of the table lists the biweight location and scale of the elemental abundances in the Final Sample, along with the IQR of the elemental abundances in the Final Sample. As can be seen by comparing the IQRs of the CDTG results and the Final Sample, the IQRs of the CDTG results for 4 of the 7 elements considered are at least a factor of two smaller, the exceptions being [Fe/H] (which only slightly misses our rule of thumb), [Sr/Fe], and [Y/Fe].
The elements that meet the criteria, [C/Fe]$_{c}$, [Mg/Fe], [Ba/Fe], and [Eu/Fe], provide stronger constraints than those with large IQRs. Table~\ref{tab:binomial_probability_table} lists the numbers of CDTGs with available estimates of the listed abundance ratios, and the numbers of CDTGs falling below the $0.50$, $0.33$, and $0.25$ levels of the CDFs, along with our calculated values for the various probabilities. The full and overall probabilities (captured by the FEAD and OEAD probability values) are very low, implying that the measured abundance spreads are highly statistically significant within our CDTGs across the entire sample; this is similar to the results found in \citet{Gudin2021} (see their Table 7). However, some of the individual abundance spreads (the GEAD probabilities) are not statistically significant, namely the [Mg/Fe] and [Sr/Fe] spreads. Keep in mind, as noted above, that the lack of contrast in the mean IQRs of [Fe/H], [Sr/Fe], and [Y/Fe] for our CDTGs with their corresponding mean IQRs for the full sample may impact the interpretation of their probabilities. By comparison, in \citet{Gudin2021}, the [Fe/H], [C/Fe]$_\text{c}$, [Sr/Fe], and [Ba/Fe] spreads were statistically significant, as inferred from their GEAD probabilities; the same is recovered in this study, with the exception of [Sr/Fe]. It should also be recalled that the sample of RPE stars considered by Gudin et al. is dominated by stars with lower [Fe/H] than our present sample, since the generally more metal-rich stars from GALAH were not available at that time. Below we explore whether $r$-I and $r$-II stars exhibit different elemental-abundance patterns. For this purpose, we repeat the clustering procedure described in Section \ref{sec:ClusteringProcedure}, separately for subsets of $r$-I and $r$-II stars.
The result of this procedure is $28$ $r$-I CDTGs ranging in size from $5$ to $18$ members and $20$ $r$-II CDTGs ranging in size from $3$ to $23$ members, respectively. We then perform the same statistical analysis on each as described above. \input{Tables/binom_rII} \subsubsection{The r-I Sample}\label{subsec:rI_sample} The binomial statistics for the $28$ $r$-I CDTGs are listed in Table~\ref{tab:binomial_probability_rI_all_table}. We observe a lack of statistical significance for the reduction of the [Sr/Fe] and [Eu/Fe] abundance spreads, but the other abundances (including [Mg/Fe], which exhibited a statistically insignificant spread reduction in the case of the full sample) exhibit statistically significant reductions in their spreads (all elements except for [Sr/Fe] and [Eu/Fe] have GEAD probabilities less than $10\%$). From inspection of Table \ref{tab:cluster_mean_rI} in the Appendix, only 3 of the 7 elements considered ([Y/Fe], [Ba/Fe], and [Eu/Fe]) have contrasts in their mean CDTG IQRs relative to the mean IQRs of the full sample of $r$-I stars that pass our factor-of-two rule of thumb (although [Fe/H] and [C/Fe]$_\text{c}$ only narrowly miss), so the interpretation of the GEAD probabilities for several of these elements should be regarded with caution. The FEAD probabilities for the $r$-I CDTGs are low, and are statistically significant for all three of the $\alpha = 0.5$, $\alpha = 0.33$, and $\alpha = 0.25$ levels. The OEAD probability is highly statistically significant. \subsubsection{The r-II Sample}\label{subsec:rII_sample} If we adopt the same minimum number of stars per CDTG for the $r$-II clustering exercise as we have for the full sample ($5$) and the $r$-I sample ($5$), we are left with only a total of $7$ $r$-II CDTGs for the statistical analysis. We judge this to be too small, as experience suggests that a minimum of $10$ CDTGs provides the most stable results. Thus, we chose to reduce the minimum number of stars required to form an $r$-II CDTG from $5$ to $3$.
This increases the number of $r$-II CDTGs from $7$ to $20$, comparable to the numbers in the full and $r$-I samples ($36$ and $28$, respectively). Table \ref{tab:binomial_probability_rII_all_3_table} shows the binomial statistics for our $20$ $r$-II CDTGs. From inspection, the GEAD probabilities for the elements considered are statistically significant in the same manner as for the Final Sample, with [Mg/Fe] and [Sr/Fe] both lacking statistical significance. From inspection of Table \ref{tab:cluster_mean_rII} in the Appendix, only $2$ of the $7$ elements considered ([Y/Fe] and [Ba/Fe]) have mean IQRs for the $r$-II CDTGs that are at least a factor of two smaller than the mean IQR of the final sample, although [C/Fe]$_\text{c}$ only narrowly misses our rule of thumb. Thus, for many of the individual elements, there is insufficient dynamical range for confident interpretation of the GEAD probabilities. The FEAD probabilities for the $r$-II CDTGs are statistically significant for all three $\alpha = 0.50$, $\alpha = 0.33$, and $\alpha = 0.25$ levels. The OEAD probability is highly statistically significant. \iffalse p_v(k\geq n,~N)=\sum\limits_{k=n}^N C_N^k v^k (1-v)^{N-k}, C_N^k=\frac{N!}{(N-k)!k!}. \fi \section{Summary}\label{sec:Discussion_2} We have assembled an RPE Initial Sample of $1776$ stars ($1393$ $r$-I stars, $381$ $r$-II stars, and $2$ $r$-III stars) from both a literature search and GALAH DR3 \citep{Buder2021} survey data, keeping stars that met the $r$-process-enhancement requirements listed in Table~\ref{tab:MPsignatures}. A total of $105$ of these stars are identified as CEMP-$r$ stars; these are listed in Table~\ref{tab:cemp} in the Appendix. These stars are of interest due to their enhanced carbon abundances and association to the morphological groups described in \citet{Yoon2016}.
Based on their classification scheme, there are $58$ Group I CEMP-$r$ stars, $41$ Group II CEMP-$r$ stars, and $2$ Group III CEMP-$r$ stars, with a number of stars that have ambiguous classifications. This list provides a useful reference for high-resolution spectroscopic follow-up, which has already begun for some targets (e.g., \citealt{Rasmussen2020} and \citealt{Zepeda2022}). The RPE Final Sample of $1720$ stars ($1346$ $r$-I stars, $372$ $r$-II stars, and $2$ $r$-III stars) had radial-velocity and astrometric information from which orbits were constructed, in order to identify Chemo-Dynamically Tagged Groups (CDTGs) in orbital energy and cylindrical action space with the \HDBSCAN~algorithm. We chose \HDBSCAN\ as the clustering algorithm due to its precedent in the literature \citep{Koppelman2019a,Gudin2021,Limberg2021a,Shank2022a}, and its ability to extract clusters of stars in energy and action space. We recover $36$ CDTGs that include between $5$ and $22$ members, with $17$ CDTGs containing at least $10$ member stars. These CDTGs were associated with MW substructures, resulting in the re-identification of Gaia-Sausage-Enceladus, the Splashed Disk, the Metal-Weak Thick Disk, the Helmi Stream, LMS-1 (Wukong), and Thamnos. A total of $7$ CDTGs were associated with globular clusters, while no surviving dwarf galaxies were determined to be associated with the identified CDTGs. Previously identified groups were found to be associated with the CDTGs as well, with past work mostly confirming our substructure identifications, and revealing some limitations in the procedure, which we have discussed. Each of these associations allows insights into the dynamical and chemical properties of the parent substructures. The implications of past group and stellar associations were explored, with emphasis placed on the structure associations.
Stars with abundances atypical of the MW that have stellar associations to our CDTGs were highlighted as good candidates for high-resolution spectroscopic follow-up, due to the statistical likelihood that the other group members are chemically peculiar as well; the focus here was on RPE and CEMP stars. We have considered the statistical significance of the elemental-abundance dispersions across the identified RPE CDTGs for a set of seven abundance ratios ([Fe/H], [C/Fe]$_\text{c}$, [Mg/Fe], [Sr/Fe], [Y/Fe], [Ba/Fe], and [Eu/Fe]). The CDTGs are statistically examined, in order to assess the similarity (or not) of the chemical-evolution histories in their presumed dwarf-galaxy birth environments, following the approach developed by \citet{Gudin2021}. We point out that, for a number of the elements considered, the mean IQRs of the dispersions in the CDTGs do not provide sufficient dynamical range compared to the mean IQR of the parent samples with which they are compared (we suggest they must be a factor of two smaller) in order to enable meaningful interpretations of the Global Element Abundance Dispersion (GEAD) probabilities. However, the probabilities that consider the distributions of {\it all} of the elements across the CDFs for the set of CDTGs (the Full Elemental-Abundance Distribution, FEAD, and Overall Element Abundance Dispersion, OEAD, probabilities) strongly support the assertion that the stars associated with individual RPE CDTGs indeed share similar chemical-enrichment histories, as previously claimed by \citet{Gudin2021}. We have also divided the full sample into RPE stars classified as moderately $r$-process enhanced ($r$-I) and highly $r$-process enhanced ($r$-II), and find similar results. The methods presented here will be applied to future samples that contain many more RPE stars, especially with planned data releases from the RPA increasing their total number, and other ongoing or planned surveys allowing for a systematic search for RPE stars.
The next steps in advancing our understanding of the birth environments and the nature of the astrophysical site(s) of the $r$-process require detailed comparisons with modern high-resolution (spatial and temporal) simulations of the formation of Milky Way-like galaxies, along the lines explored by \citet{Hirai2022}. \vspace{2.0cm} \begin{acknowledgements} The authors thank an anonymous referee, who provided comments and suggestions that substantially improved the presentation in this paper. We are grateful for data provided by Rohan Naidu that is included in this work. D.S., T.C.B., and I.U.R. acknowledge partial support for this work from grant PHY 14-30152; Physics Frontier Center/JINA Center for the Evolution of the Elements (JINA-CEE), awarded by the US National Science Foundation. The work of V.M.P. is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the US National Science Foundation. I.U.R.\ acknowledges support from the NASA Astrophysics Data Analysis Program, grant 80NSSC21K0627, and the US National Science Foundation, through grant AST~1815403/1815767. \end{acknowledgements} \section{Appendix} Here we present the tables for the RPE Initial Sample (Table~\ref{tab:initial_data_descript}) and the RPE Final Sample (Table~\ref{tab:final_data_descript}). In the print edition, only the table descriptions are provided; the full tables are available only in electronic form. We also present Table~\ref{tab:cemp}, which describes the identified CEMP-$r$ stars and their associated morphological groups, according to the regions defined by \citet{Yoon2016}. Tables~\ref{tab:cluster_results_rI_stub} and \ref{tab:cluster_orbit_rI} show the CDTGs identified by \HDBSCAN\ and the CDTG dynamical parameters as determined by \AGAMA\ for the $r$-I Sample, respectively.
Table~\ref{tab:cluster_mean_rI} lists the means and dispersions of the CDTGs identified in the $r$-I Sample, along with the statistics on all the CDTGs and the $r$-I Sample. Tables~\ref{tab:cluster_results_rII_stub} and \ref{tab:cluster_orbit_rII} show the CDTGs identified by \HDBSCAN\ and the CDTG dynamical parameters as determined by \AGAMA\ for the $r$-II Sample, respectively. Table~\ref{tab:cluster_mean_rII} lists the means and dispersions of the CDTGs identified in the $r$-II Sample, along with the statistics on all the CDTGs and the $r$-II Sample. \include{Tables/initial_data_description_table} \include{Tables/final_data_description_table} \include{Tables/cemp_table} \include{Tables/cluster_stellar_results_rI_stub_table} \include{Tables/cluster_orbital_rI_table} \include{Tables/cluster_mean_rI_table} \include{Tables/cluster_stellar_results_rII_stub_table} \include{Tables/cluster_orbital_rII_table} \include{Tables/cluster_mean_rII_table} \bibliography{main}{} \bibliographystyle{aasjournal}
Title: The magnetic field environment of active region 12673 that produced the energetic particle events of September 2017
Abstract: Forecasting solar energetic particles (SEPs), and identifying flare/CMEs from active regions (ARs) that will produce SEP events in advance is extremely challenging. We investigate the magnetic field environment of AR 12673, including the AR's magnetic configuration, the surrounding field configuration in the vicinity of the AR, the decay index profile, and the footpoints of Earth-connected magnetic field, around the time of four eruptive events. Two of the eruptive events are SEP-productive (2017 September 4 at 20:00~UT and September 6 at 11:56~UT), while two are not (September 4 at 18:05~UT and September 7 at 14:33~UT). We analysed a range of EUV and white-light coronagraph observations along with potential field extrapolations and find that the CMEs associated with the SEP-productive events either trigger null point reconnection that redirects flare-accelerated particles from the flare site to the Earth-connected field and/or have a significant expansion (and shock formation) into the open Earth-connected field. The rate of change of the decay index with height indicates that the region could produce a fast CME ($v >$ 1500~km~s$^{-1}$), which it did during events two and three. The AR's magnetic field environment, including sites of open magnetic field and null points along with the magnetic field connectivity and propagation direction of the CMEs play an important role in the escape and arrival of SEPs at Earth. Other SEP-productive ARs should be investigated to determine whether their magnetic field environment and CME propagation direction are significant in the escape and arrival of SEPs at Earth.
https://export.arxiv.org/pdf/2208.12774
\title{The magnetic field environment of active region 12673 that produced the energetic particle events of September 2017} \correspondingauthor{Stephanie L. Yardley} \email{stephanie.yardley@ucl.ac.uk} \author[0000-0003-2802-4381]{Stephanie L. Yardley} \affiliation{University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK} \affiliation{Donostia International Physics Center (DIPC), Paseo Manuel de Lardizabal 4, 20018 San Sebasti{\'a}n, Spain} \author[0000-0002-0053-4876]{Lucie M. Green} \affiliation{University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK} \author[0000-0001-7927-9291]{Alexander W. James} \affiliation{European Space Agency (ESA), European Space Astronomy Centre (ESAC), Camino Bajo del Castillo s/n, 28692 Villanueva De La Ca{\~n}ada, Madrid, Spain} \author[0000-0002-1365-1908]{David Stansby} \affiliation{University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK} \affiliation{University College London/Research IT Services, Gower St, Bloomsbury, London WC1E 6BT, UK} \author[0000-0001-8055-0472]{Teodora Mihailescu} \affiliation{University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK} \keywords{Solar Activity (1475); Solar Energetic Particles (1491); Solar Active Region Magnetic Fields (1503); Solar Magnetic Reconnection (1504); Solar Flares (1496); Solar Coronal Mass Ejections (310)} \shorttitle{Magnetic environment of SEP-productive AR12673} \shortauthors{Yardley et al.} \section{Introduction} \label{sec:intro} The Sun sporadically accelerates particles (electrons, protons and heavy ions) to near-relativistic speeds and energies of 10 keV to GeV during activity events such as solar flares and coronal mass ejections (CMEs). These particles are termed solar energetic particles (SEPs). 
The acceleration processes are thought to be related to the electric fields or plasma turbulence associated with the magnetic reconnection involved in solar flares that energise the thermal plasma to suprathermal levels \citep[see][for a review]{vlahos2019}. For example, SEP production is correlated with flare thermal energy, with all flares with a GOES soft X-ray classification greater than X5, and located in the solar longitude range W15 to W75, being SEP productive \citep{belov2005}. The most energetic flares have a high likelihood of being accompanied by a CME \citep{yashiro2005} and correlations have been found between certain CME characteristics and SEP production. For example, CME acceleration and spatial extent have been shown to influence the production and spread of SEPs \citep{kahler1986} and CME energy is correlated with peak SEP intensity \citep{kahler2013}. The mechanism by which CMEs are capable of producing SEPs is through the shocks that are created by super-Alfv{\'e}nic CMEs as they move through the lower corona and into the solar wind \citep{reames1999,Kahler2001, gopalswamy2004, papaioannou2016}, injecting shock-accelerated particles onto observer-connected field lines. Regardless of the SEP acceleration mechanism, accelerated particles escape the Sun by propagating along open field lines that guide the particles as they move through the heliosphere. SEP studies utilising data from spacecraft at different solar longitudes have shown that the largest SEP intensities are detected at those spacecraft that are well connected to the solar activity event \citep{dresing2014, lario2013}. Therefore, Earth-impacting SEPs must either originate on, diffuse onto, or gain access (through magnetic reconnection) to magnetic field lines that connect the Sun to the Earth. 
Currently, the source of SEP seed populations, the method by which particles escape from their acceleration region, and the SEP profile variation from event to event in relation to source region characteristics are still not well understood \citep{desai2016}. For example, the standard CSHKP model of solar flares \citep{carmichael1964, sturrock1966, hirayama1974, kopp1976} injects reconnection-accelerated particles downward to produce heating in the lower atmospheric plasma and the flare, but also injects particles upward. However, these upward-accelerated particles are injected onto the closed field lines of the escaping CME and therefore the escape routes for the SEPs are not easily explained. It is possible to investigate the escape of SEPs by first identifying their solar origin. A key technique for this involves analysing the elemental composition of the SEP plasma population as measured in situ and then seeking plasma with the same composition in the flare/CME source region. Elemental composition is normally characterised by considering elements with differing first ionisation potentials (FIP), comparing low-FIP to high-FIP elements, such as Si and S, and comparing coronal abundances to those of the photosphere. This has emerged as a key diagnostic in the study of SEP plasma \citep{reames2018}. Recently, this approach has been used to trace multiple SEP events detected near Earth during January 2014 back to their solar source \citep{brooks2021a}. The results showed that plasma confined by strong magnetic fields in the active region (AR) developed the composition signature (high Si/S abundance ratio) indicative of the SEP population. Smaller Si/S abundance enhancements were also recorded close to upflow regions at the AR boundary. 
The plasma detected in situ during the SEP events was therefore determined to be a combination of plasma that was accelerated and released during the flare/CME itself, escaping directly along open magnetic field lines, and plasma that escaped indirectly through interchange reconnection at the AR periphery \citep{brooks2021b,yardley2021}. It is clear that the configuration of an AR's magnetic field, and the configuration of its surroundings, play a key role in both particle acceleration and escape. The magnetic field configuration influences where flare reconnection may occur, how much energy can be released, and over what timescale. The magnetic field configuration also affects CME acceleration, speed and propagation direction. Therefore, the magnetic configuration of an AR and its surrounding field must be investigated in SEP studies. Determining whether there are certain characteristics of an AR's magnetic field or of the surrounding field that are necessary for SEP production and escape would therefore mark a step towards being able to forecast which regions might produce observer-impacting SEP events. For the specific case of CMEs that are initiated by an unstable flux rope \citep{kliem2006}, the pre-eruptive (and therefore stable) configuration is obtained when the upward Lorentz force of the rope is balanced by the downward strapping force of the overlying arcade. The flux rope will become unstable if it reaches a height at which the downward force of the overlying strapping field is insufficient. This height is known as the critical height and the gradient of the strapping field is given by the decay index \citep{kliem2006}. Once the rope is unstable, the decay index profile influences the acceleration of the CME and its terminal velocity. A study by \citet{kliem2021} suggests there is a correlation between the steepness of the decay index height profile (above the critical height) and the CME velocity for CMEs with speeds $\geqslant$1500~km~s$^{-1}$. 
In this paper, we focus on the evolution of the well-studied NOAA AR 12673 during its disk passage \citep{sun2017, yang2017, chertok2018b, cohen2018, luhmann2018, romano2018, sharykin2018, shen2018, wang2018, anfinogentov2019, bruno2019, romano2019}, which was the site of several M- and X-class flares and fast CMEs from 2017 September 4 until it passed out of view over the west limb on 2017 September 10. Three SEP events, one of which produced a ground-level enhancement, occurred in association with this activity, as described in \cite{bruno2019}. Two of these SEP events occurred while the AR was visible on disk from the Earth perspective, and within 50W of central meridian. We aim to determine the role of the magnetic field environment of the AR and its surroundings in enabling these two flare/CME events to produce SEP events that were detected at Earth, whereas other major flares/CMEs from the AR were not. We probe whether there are certain characteristics of the AR magnetic field configuration, and its surrounding magnetic environment, that influence the production and escape of SEPs in a subset of the major flare/CME events produced by the AR. In contrast to previous studies, we analyse both the SEP and non-SEP productive events. \section{Evolution of NOAA active region 12673} NOAA active region (AR) 12673 appeared on the east solar limb on 2017 August 28, consisting of a lone positive polarity sunspot with dispersed positive (negative) polarity field to the north-west (south-east). This positive polarity spot was present in previous rotations as part of AR 12670 and AR 12665 (which also produced two SEP events). Major flux emergence began in the region on 2017 September 2 and, as a consequence, the AR evolved rapidly from an $\alpha$ to a $\beta\gamma\delta$ Hale-class by September 5 (Figure~\ref{hmi_goes}~a). The region was recorded to have had one of the fastest rates of flux emergence ever observed \citep{sun2017}. 
The AR first started flaring early on 2017 September 4 and produced its first CME later that day, which was observed to begin around 18:00~UT. In total, AR 12673 was the source of 27 GOES M-class and four X-class flares, eleven CMEs and three SEP events during the time period 2017 September 4--10. Figure~\ref{hmi_goes}~(b) shows the two SEP events, as detected by GOES, that occurred when the AR was less than 50W of central meridian. In the next section, we focus on analysing the properties of a subset of eruptive flares from this AR (between 2017 September 4 and 7), which are both SEP-productive and non-SEP productive, in order to try to distinguish these two types of eruptive events. \section{SEP and non-SEP flare/CME productive events} \label{sec:events} In this study, we focus on four eruptive events (flares and their associated CMEs) that occurred between 2017 September 4 and 7, when the AR was no greater than 50W of central meridian. Details of the flare and CME properties are given in Table~\ref{tab:summary_table}. The SEP-productive events (events two and three) are temporally associated with an M5.5 GOES class flare on 2017 September 4 and an X9.3 GOES class flare on 2017 September 6. GOES 13 and 15 data show that particles from the September 4 SEP event arrived at the spacecraft by 22:30~UT on the same day, and on September 6 the particles of the SEP event were detected by 12:35~UT. Energy spectra of both SEP events were relatively soft, with the data from the 2017 September 4 event suggestive of a post-eruption origin \citep{chertok2018a,chertok2018b}. In the following, a brief description of each eruptive event is given, including the location and development of flare ribbons (``active PILs''), CME propagation direction and radial speed, and sites of magnetic reconnection as evidenced by the flaring and EUV observations. 
In total, four active PILs are identified in the AR along which flare ribbons are observed, three of which are aligned almost north-south (PILs 1, 2, and 3 in Figure~\ref{Event1}~b) and one aligned east-west (PIL 4). The AR evolves so that by September 7 only one active PIL remains (PIL 3). \subsection{Eruptive Event 1} Event one occurred on 2017 September 4 and comprises an M1.0 GOES class flare that occurred in association with a CME. No SEPs were detected in association with this event. Flare ribbons are initially observed along PIL 2, but an overlap with ribbons along this PIL from a flare just minutes before hinders the ability to discriminate which flare is responsible for these ribbons. As seen in AIA 1600~\AA, flare ribbons are forming along PILs 3 and 4 by 18:16~UT on 2017 September 4, which is temporally coincident with the initial increase in the GOES soft X-ray light curve for this flare (see Figure~\ref{Event1} panels a and b). At the peak of the flare, as determined from the GOES soft X-ray emission (18:21~UT), the flare ribbons are mainly observed along PIL 4. The eruption begins at $\sim$18:05~UT, as observed by the expansion and propagation of coronal loops to the south-west, visible in the SDO/AIA 171~\AA~and running difference images (Figure~\ref{Event1}~c). The CME (Figure~\ref{Event1}~d) is first seen in LASCO/C2 data at 19:00~UT on September 4 and has a radial speed of 973~km~s$^{-1}$ (calculated using the STEREOCAT tool\footnote{\url{https://ccmc.gsfc.nasa.gov/analysis/stereo/}}). The CME propagation direction in 3D is S06W28 (taken from the DONKI catalogue\footnote{\url{https://kauai.ccmc.gsfc.nasa.gov/DONKI/}}), which is to the west of the radial direction of the AR. \subsection{Eruptive Event 2} Event two also occurs on 2017 September 4 and comprises an M5.5 GOES class flare that occurred in association with a CME and the production of SEPs. 
Flare ribbons are first observed (faintly) at $\sim$20:11~UT on 2017 September 4 in the AIA 1600~\AA~waveband, initially appearing along PIL 1. The ribbons then spread across PILs 4, 3 and then 2. At the peak of the flare's soft X-ray emission (20:32~UT) the 1600~\AA~waveband flare ribbons are most intense across PIL 2 (see Figure~\ref{Event2} panels a and b). Reverse S-shaped coronal loops are observed to erupt to the north-west from 20:00~UT onwards, driving reconnection at what looks like a null point, as evidenced by the 171~\AA~and running difference images, located north-east of the AR (Figure~\ref{Event2}~c). The associated CME is first seen in LASCO/C2 (appearing to the south) on 2017 September 4 at 20:36~UT. The eruption direction in 3D is S10W10 (from DONKI), which is north of the radial direction of the AR, and is superimposed on the previous CME from the region that occurred during event one (Figure~\ref{Event2}~d). However, the CME is also observed to have a component propagating to the east as seen in the coronagraph field-of-view. The CME's radial speed is calculated to be 2153~km~s$^{-1}$ (from the STEREOCAT tool). A type II radio burst was observed in association with this event (as recorded by the WIND/WAVES catalogue\footnote{\url{https://cdaw.gsfc.nasa.gov/CME\_list/radio/waves\_type2.html}}; \citealt{gopalswamy2019}). The SEP event is first detected in the GOES data at 22:30~UT. \startlongtable \begin{deluxetable*}{cccccccccccc} \tabletypesize{\scriptsize} \tablecolumns{12} \tablewidth{0pt} \tablecaption{The flare/CME properties of the four eruptive events. 
\label{tab:summary_table}} \tablehead{ \colhead{No.} & \colhead{Lat.} & \colhead{Lon.} & \colhead{Flare} & \colhead{Flare} & \colhead{GOES} & \colhead{CME} & \colhead{LASCO/C2} & \colhead{Half} & \colhead{Radial} & \colhead{Prop.} & \colhead{SEP} \\ & \colhead{(deg)} & \colhead{(deg)} & \colhead{Start Time} & \colhead{Peak Time} & \colhead{Flare} & \colhead{Onset Time} & \colhead{First Obs.} & \colhead{Width} & \colhead{Velocity} & \colhead{Dir.} & \colhead{Event} \\ & & & \colhead{(UT)} & \colhead{(UT)} & \colhead{Class} & \colhead{(UT)} & \colhead{(UT)} & \colhead{(deg)} & \colhead{(km~s$^{-1}$)} & & } \startdata 1 & -7 & 11 & 17 Sep 4 18:12 & 17 Sep 4 18:21 & M1.0 & 17 Sep 4 18:05 & 17 Sep 4 19:00 & 37 & 973 & S06W28 & N \\ 2 & -10 & 11 & 17 Sep 4 20:12 & 17 Sep 4 20:32 & M5.5 & 17 Sep 4 20:00 & 17 Sep 4 20:36 & 101 & 2153 & S10W10 & Y \\ 3 & -9 & 34 & 17 Sep 6 11:52 & 17 Sep 6 12:01 & X9.3 & 17 Sep 6 11:56 & 17 Sep 6 12:24 & 103 & 2268 & S15W23 & Y \\ 4 & -8 & 48 & 17 Sep 7 14:31 & 17 Sep 7 14:36 & X1.3 & 17 Sep 7 14:33 & 17 Sep 7 15:12 & 16 & 481 & S16W53 & N \\ \enddata \tablecomments{The first three columns give the event number (1--4) and the latitude and longitude from which the eruptive event originates. Columns four, five, and six give the start time, peak time, and GOES class of the solar flare, as derived from the GOES soft X-ray flux. Columns seven and eight give the time of the CME onset as observed in the SDO/AIA 171~\AA~data, and the time the CME was first observed in SoHO LASCO/C2. Columns nine and ten give the half width and the radial velocity as determined by the STEREOCAT tool. The ensemble mode is used and the median of 5 different speed measurements, calculated using LASCO/C2 and STEREO-A/COR2, is taken. The propagation direction of the CME is given in column eleven, which is taken from the DONKI catalogue. 
Finally, column twelve states whether the event was associated with SEPs as detected by GOES.} \end{deluxetable*} \subsection{Eruptive Event 3} Event three occurred on 2017 September 6 and comprises an X9.3 GOES class flare that occurred in association with a CME and an SEP event. Flare ribbons are first observed at $\sim$11:53~UT on September 6 in AIA 1600~\AA~waveband data along PIL 3 and then PIL 4. By September 6 the AR has evolved to have two main PILs (3 \& 4, see Figure~\ref{Event3}~b). Flare ribbons appear and are strongest around PIL 3 at the peak of the flare (12:01~UT) but also spread to PIL 4 (Figure~\ref{Event3}~a). The eruption associated with the flare, as observed in EUV imaging data of the lower corona, is quite complex. Three different structures are observed to propagate outwards from the AR (black arrows shown in Figure~\ref{Event3}~c). The first is a loop structure, aligned north-south with respect to the AR, located at the east of the AR. The second and third loop structures originate from the western side of the AR. These loop structures (2 \& 3), which erupt around the same time as the first loop structure, propagate to the south-west and north-west, respectively (see Figure~\ref{Event3}~c). As a result, a halo CME with a radial speed of 2268~km~s$^{-1}$ was observed, first seen in LASCO/C2 at 12:24~UT on September 6. The CME propagated in the direction S15W23, which is to the east of the radial direction from the AR. A type II burst was observed in association with this event. The SEP event is first detected in the GOES data at 12:35~UT. \subsection{Eruptive Event 4} Event four begins on September 7 at 14:31~UT and is not associated with an SEP event. The AR is almost at 50W at this time, with projection effects starting to become more evident. By this time, the AR has evolved to have only one main PIL (3, Figure~\ref{Event4}~b), which has changed orientation mainly due to shearing motions. 
The flare ribbons are therefore observed along PIL 3 and do not evolve into multiple ribbons (Figure~\ref{Event4}~a). The erupting loop structure is difficult to identify directly in this case, but there are nearby loops that visibly oscillate due to the propagation of an erupting structure (Figure~\ref{Event4}~c). The eruption is indirectly observed to begin around 14:33~UT. A narrow CME is first observed in LASCO/C2 at 15:24~UT with a radial speed of 481~km~s$^{-1}$. The CME propagation direction is S16W53, which is to the south-west of the AR radial. \section{Local magnetic field configuration} In this section, aspects of the magnetic field configuration of NOAA active region (AR) 12673, the magnetic configuration in the vicinity of the AR, and the footpoints of the Earth-connected field during this time period are discussed. The decay index in the region is presented in relation to investigating CME velocity, but no detailed magnetic field analysis can be carried out since only a potential field model is used. The magnetic field configuration local to the AR (i.e. connectivity to the immediate surroundings) is analysed in order to investigate the escape of particles from the AR, which are accelerated during flare reconnection processes, and the Earth-connected field lines are analysed in order to investigate the processes which may inject and accelerate particles towards the Earth. Two different potential field models are used for the analysis of the decay index profile and to investigate the magnetic field configuration in the vicinity of the AR. The extrapolation methods are also described in this section. \subsection{Active region decay index height profile} A study by \citet{kliem2021} showed that the height profile of the decay index above the critical height (i.e. the height at which the torus instability sets in for eruptive flux ropes) may correlate with CME velocity in the cases of fast CMEs (with velocity greater than 1500~km~s$^{-1}$). 
A similar analysis is followed here. The value of this approach is that if the correlation is found to hold, it enables a pre-event investigation of which CMEs may be of sufficiently high speed that they drive shocks as they propagate. Hence, these CMEs could be SEP-productive. Potential field models are sufficient for the analysis of the height profile of the decay index (whereas a full treatment of the AR, which contains complexity and free magnetic energy, requires the construction of a non-linear model). Potential field models of NOAA AR 12673 are created by extrapolating linear force-free fields using the method of \citet{alissandrakis1981} with the force-free parameter set to zero (i.e. current-free). The radial field component of photospheric magnetograms from the HMI \citep{scherrer2012} SHARP data series (Spaceweather HMI Active Region Patch; \citealt{bobra2014}) taken around the onset time of each event are used as the lower boundary of each extrapolation. The magnetograms are spatially downsized by a factor of 2, meaning that each pixel in the extrapolation volume represents approximately 0.725~Mm in each spatial dimension. The height of the extrapolation volume is chosen as 451 pixels, or approximately 327~Mm (0.47~R$_{\odot}$). Once the potential field model has been created, the decay index is computed. The poloidal component of the magnetic field is used in the decay index calculation \citep{kliem2006, james2022}, which is approximated in this study by the field component that is transverse to the PIL along which the flare ribbons are observed ($B_{tr} = \sqrt{B_x^2 + B_y^2}$). Pixels are selected in the lower boundary of the extrapolation that correspond to the ``active'' photospheric PILs associated with each flare. The mean value of the decay index along this ``active'' PIL is then computed at each height layer in the extrapolation. 
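In the current-free limit used here, the \citet{alissandrakis1981} extrapolation reduces to attenuating each Fourier mode of the boundary field with height as $e^{-kz}$. A minimal FFT-based sketch of this procedure in Python with NumPy follows; the function name and uniform grid are illustrative assumptions, not the code used in this study:

```python
import numpy as np

def potential_field(bz0, dx, nz):
    """Potential (current-free) field extrapolation above a plane.

    Each Fourier mode of the boundary B_z is attenuated with height as
    exp(-k z), where k = |k_xy| (the alpha = 0 limit of the linear
    force-free solution). Returns (bx, by, bz) cubes of shape (nz, ny, nx).
    """
    ny, nx = bz0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)[None, :]
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)[:, None]
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0  # avoid 0/0 for the mean (k = 0) mode, which carries no horizontal field
    bz_k = np.fft.fft2(bz0)
    bx = np.empty((nz, ny, nx))
    by = np.empty_like(bx)
    bz = np.empty_like(bx)
    for iz in range(nz):
        damp = np.exp(-k * iz * dx)  # uniform grid: z = iz * dx
        bz[iz] = np.fft.ifft2(bz_k * damp).real
        bx[iz] = np.fft.ifft2(-1j * kx / k * bz_k * damp).real
        by[iz] = np.fft.ifft2(-1j * ky / k * bz_k * damp).real
    return bx, by, bz
```

A convenient check is that a single-mode boundary field $B_z = \cos(k_0 x)$ yields $B_z(z) = \cos(k_0 x)\,e^{-k_0 z}$ and $B_x(z) = \sin(k_0 x)\,e^{-k_0 z}$.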
The critical height, $h_{crit}$, is defined as the height of the lowest layer in the extrapolation in which the mean decay index is greater than the critical decay index, $n_{crit}$. However, any critical heights in the lowest 9 layers (3.26~Mm) of the volume are discounted, as these values are generally a result of noise in the boundary magnetogram. \citet{bateman1978} found a theoretical critical decay index of 1.5; however, observational and theoretical studies have determined values of $1 < n_{crit} < 2$ \citep{torok2007, fan2007, demoulin2010}. \citet{kliem2021} tested various values of the critical decay index and compared observed CME speeds to gradients of the decay index measured over different height ranges above the critical height. They found the strongest correlation between CME speeds and gradients of the decay index when the critical decay index was taken as $n_{crit}$ = 1.7 and the gradients were calculated over a range of 1--1.6 times the critical height. In this study, the values used by \citet{kliem2021} are adopted, i.e. $n_{crit}$ = 1.7 and decay index gradients computed over the relative range of (1--1.6)$h_{crit}$. The critical height $h_{crit}$ and the gradient of the decay index $dn/dh$ were calculated (see Table~\ref{tab:decay}) for the four active PILs for which flare ribbons were observed during the four eruptive events (Figures~\ref{Event1}--\ref{Event4}). At PILs 3 and 4 it can be seen that $h_{crit}$ increases and $dn/dh$ decreases with time. During events one and two, the minimum $h_{crit}$ and maximum $dn/dh$ occur at PIL 3. During event three $h_{crit}$ is lower at PIL 3 than PIL 4. The numbers highlighted in bold in Table~\ref{tab:decay} represent $h_{crit}$ and $dn/dh$ calculated for the main PIL that was ``activated'' at the peak of the flare. The main PIL was chosen as the PIL above which the strongest flare ribbons were observed at the flare peak. 
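The recipe just described (mean decay index above the active PIL, a critical height excluding the lowest 9 layers, and a gradient fitted over (1--1.6)$h_{crit}$) can be sketched as follows. This is an illustrative Python/NumPy outline under those stated assumptions, not the exact code used for Table~\ref{tab:decay}:

```python
import numpy as np

N_CRIT = 1.7       # critical decay index adopted from Kliem et al. (2021)
SKIP_LAYERS = 9    # layers near the boundary discounted as magnetogram noise

def decay_index_profile(b_tr, heights):
    """Mean decay index n(h) = -d ln(B_tr) / d ln(h) above an active PIL.

    b_tr: (nz, npix) transverse field sampled above the PIL pixels.
    heights: (nz,) height of each layer (same units, all > 0).
    """
    mean_b = b_tr.mean(axis=1)
    return -np.gradient(np.log(mean_b), np.log(heights))

def critical_height(n, heights):
    """Lowest height (above the discounted layers) where n exceeds N_CRIT."""
    for iz in range(SKIP_LAYERS, len(n)):
        if n[iz] > N_CRIT:
            return heights[iz]
    return np.nan

def decay_index_gradient(n, heights, h_crit):
    """Linear slope dn/dh fitted over the relative range (1-1.6) h_crit."""
    sel = (heights >= h_crit) & (heights <= 1.6 * h_crit)
    return np.polyfit(heights[sel], n[sel], 1)[0]
```

For a strapping field falling as a power law, $B_{tr} \propto h^{-p}$, the decay index is constant ($n = p$), so any field steeper than $h^{-1.7}$ is supercritical at all heights and the fitted gradient vanishes.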
The results show that for events one, two, three, and four (E1--E4 in Table~\ref{tab:decay}) the critical height above which a flux rope would become torus unstable, $h_{crit}$, is 39, 40, 42 and 48~Mm, respectively. Above these heights, the decay index increases with height at rates of 0.020, 0.022, 0.017 and 0.016~Mm$^{-1}$, respectively. \citet{kliem2021} found a correlation between CME velocity and the steepness of the decay index profile above the critical height for CMEs with speeds greater than 1500~km~s$^{-1}$, which in this study applies to the two SEP-productive events only (events two and three). Our results are consistent with the findings in \citet{kliem2021}; however, our values indicate that the region had the possibility of producing a fast CME ($\geqslant$1500~km~s$^{-1}$) at the time of all four events. We only observe fast CMEs during events two and three, which are SEP-productive. \startlongtable \begin{deluxetable*}{cccccccccc} \tablecolumns{10} \tablewidth{0pt} \tablecaption{Critical height h$_{crit}$ and gradient of the decay index $dn/dh$ calculated for the active PILs. \label{tab:decay}} \tablehead{ \colhead{h$_{crit}$} & \colhead{Event 1} & \colhead{Event 2} & \colhead{Event 3} & \colhead{Event 4} & \colhead{$\frac{dn}{dh}$} & \colhead{Event 1} & \colhead{Event 2} & \colhead{Event 3} & \colhead{Event 4} \\ \colhead{(Mm)} & & & & & \colhead{(Mm$^{-1}$)} & & & & } \startdata PIL1 & 46 & 47 & - & - & & 0.022 & 0.021 & - & - \\ PIL2 & 40 & {\bf 40} & - & - & & 0.022 & {\bf 0.022} & - & - \\ PIL3 & 25 & 25 & {\bf 42} & {\bf 48} & & 0.025 & 0.024 & {\bf 0.017} & {\bf 0.016} \\ PIL4 & {\bf 39} & 39 & 59 & - & & {\bf 0.020} & 0.020 & 0.016 & - \\ \enddata \tablecomments{The numbers in bold represent the critical height and gradient of the decay index at the main active PIL at the peak of the flare during the four events (E1--4). 
} \end{deluxetable*} \subsection{Magnetic field configuration in active region vicinity} To investigate the magnetic field configuration in the vicinity of the AR, potential field models are constructed using the PFSS model available in SSWIDL. The models are constructed for 2017 September 4 at 18:04~UT, 2017 September 6 at 12:04~UT, and 2017 September 7 at 12:04~UT (Figure~\ref{null}), close to the times of the four events. The models reveal the presence of a null point to the north-east of the AR, between AR 12673 and AR 12674 (the latter located in the northern hemisphere). The null is present throughout the time period in which the four events studied here take place. The potential field models also reveal a channel of open magnetic field along the east boundary of the negative polarity, which is also present for the entire duration of the events and corresponds to a small coronal hole in the AIA observations. As can be seen in Figure~\ref{null}, the configuration of the null changes over the time period of the four events between 2017 September 4 and 7. On September 4 the null is not associated with open field lines, while on September 6 and 7 the west section of the null is associated with partially and completely open field, respectively. The null is closed on the west side until September 6 due to the decayed positive magnetic flux that is close to the AR boundary and also an AR beyond the west limb. By comparing the EUV and coronagraph data of each of the four eruptive events to the magnetic field configuration given by the potential field model, the following conclusions are drawn. Event one is directed to the west of the radial and away from the null point and open magnetic field. The emission structures observed in the AIA data indicate no significant perturbation at the null. Event two, which is directed approximately radially from the AR, interacts with the null as the magnetic field expansion of the CME occurs. 
In event three the AIA EUV data show that reconnection occurs at the null point, likely due to the CME propagation direction, which is east of the AR radial. This is also the location of open magnetic field. Finally, event four propagates to the south-west of the radial direction, away from the null point and open magnetic field. \section{Magnetic connectivity to Earth} For energetic particles to escape and reach Earth they ultimately need to be injected onto open magnetic field lines that are magnetically connected to Earth. To investigate the footpoints of open field that is magnetically connected to Earth during 2017 September 4--7, we use a combination of the potential field source surface model (PFSS, \citealt{schatten1969}) along with a ballistic propagation model \citep{Neugebauer1998}. We use a Global Oscillations Network Group (GONG) synoptic photospheric magnetic field map taken on 2017 September 9 at 23:14~UT to construct the PFSS model. The GONG map is loaded into Python using SunPy \citep{sunpy2020} and pfsspy \citep{yeates2018, stansby2019} is used to construct the potential field between 1~R${_\odot}$ and 2.5~R$_{\odot}$. A Parker spiral configuration is then assumed above 2.5~R$_{\odot}$, using a solar wind speed of 500~km~s$^{-1}$. This speed was the average solar wind speed measured by ACE during our time period. We then use HelioPy \citep{stansby2021}, along with SpiceyPy \citep{annex2021} and the SPICE toolkit \citep{acton2018}, to trace the field lines connected to Earth back to their source on the surface. We do this for four different times (2017 September 4 at 18:00~UT, 2017 September 4 at 20:00~UT, 2017 September 6 at 11:52~UT, and 2017 September 7 at 14:31~UT) at the start of each eruptive event (see Section~\ref{sec:events}). We choose these times so that we can determine the instantaneous Earth connectivity and estimate the source location of the energetic particles measured near Earth. 
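The ballistic part of this mapping is essentially a longitude rotation: during the solar wind's transit from the source surface to 1~au, the Sun rotates beneath the radially flowing plasma, shifting the well-connected footpoint westward of the Sun--Earth line. A back-of-the-envelope Python sketch follows; the constants, the sidereal (Carrington) rotation rate, and the function name are illustrative assumptions, not taken from the code used in this study:

```python
import numpy as np

R_SUN_KM = 6.957e5
AU_KM = 1.496e8
# Assumed sidereal Carrington rotation rate (period ~25.38 days), in rad/s.
OMEGA_SUN = 2.0 * np.pi / (25.38 * 86400.0)

def ballistic_footpoint_shift(v_sw_kms, r_ss_rsun=2.5):
    """Longitude offset (degrees) between Earth and the source-surface
    footpoint of its nominal Parker-spiral field line, assuming a radial
    solar wind of constant speed v_sw_kms from the source surface to 1 au."""
    travel_time = (AU_KM - r_ss_rsun * R_SUN_KM) / v_sw_kms  # seconds
    return np.degrees(OMEGA_SUN * travel_time)
```

For the 500~km~s$^{-1}$ wind speed adopted above, the offset is roughly 49$^{\circ}$ of longitude, which is why the well-connected footpoints lie west of central meridian; a faster wind gives a smaller offset.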
Figure~\ref{earth_connectivity} shows the magnetic connectivity of Earth at the times of the four eruptive events. The top panel shows the Earth's back-projected trajectory (green line), the traced field lines (black lines), and the location of the heliospheric current sheet (black line) overlaid on the GONG synoptic magnetogram. The bottom panel shows the same but overlaid on a SDO/AIA 193~\AA~synoptic map, constructed by joining 27 AIA images together, with the final image taken on 2017 September 10. The results show that for the first two events (on 2017 September 4) the Earth-connected field lines are rooted to the south-east of AR 12673 in decayed negative polarity field. We recall that the first event (2017 September 4 at 18:05~UT) was non-SEP productive, and the CME propagated to the south-west. The second event (2017 September 4 at 20:00~UT), which was SEP productive, propagated radially from AR 12673. However, it was a wide CME, likely to expand into Earth-connected field lines to the south-east of the source AR. For events three and four, the connectivity has changed, and Earth-connected field lines are rooted in NOAA AR 12674 (to the north-east of 12673). In particular, the field lines are rooted in the main positive polarity spot of AR 12674. The magnetic field from this part of the AR forms part of the null point that exists between ARs 12673 and 12674. As we have seen in Section~\ref{sec:events}, the erupting loop structures in event three propagate in many directions, including into the null, and where AR 12674 is connected to Earth. In event four, the eruption propagates to the west away from the region that is well-connected to Earth. \section{Summary \& Discussion} The analysis presented here uses AR 12673 as a case study to investigate whether and how the magnetic environment of an AR plays a role in flare/CME events being (or not being) SEP productive. 
Several aspects of the magnetic field are considered, including how the strength of the field falls with height in the AR and whether that correlates with CME speed (important for generating shocks), how the magnetic field of the AR interacts with its surrounding field during the dynamic phase of the flare/CME, and how particles may get injected onto observer-connected field lines. Despite the four events occurring close in time, there are some interesting differences and important findings, which are summarised here and in Figure~\ref{cartoon}. In event one, which was non-SEP productive, a major flare (M1.0 GOES class) occurred along with a relatively fast CME that had a radial speed of 973~km~s$^{-1}$. At this time, the Earth-connected field lines were rooted in a region of negative polarity field to the south-east of NOAA AR 12673 (see purple lines in Figure~\ref{cartoon}~a), not in the core of the AR, where the flare ribbons indicate magnetic connection to locations of particle acceleration due to flare processes. The CME occurring during this event was fairly narrow and propagated to the west of AR 12673's radial direction, i.e. away from Earth-connected field lines (black arrow in Figure~\ref{cartoon}~a), perhaps deflected by the small coronal hole (CH) to the east of the AR. No perturbation of the null was observed in EUV imaging data, in line with the CME propagation direction (i.e. away from the null), and no CME-driven shock was produced. The data indicate that flare-accelerated particles were not able to be redirected via magnetic reconnection onto Earth-connected field lines, and no shock acceleration of particles occurred due to the CME propagation. Event two was SEP productive and involved a flare and CME that was initiated less than two hours after event one. 
Very little evolution of the AR corona took place during this short interval, in terms of photospheric motions and flux emergence, and the location of the Earth-connected field lines at the Sun remained the same as it was for event one (purple lines in Figure~\ref{cartoon}~b). What was significantly different were the characteristics of the CME, which expanded to have an angular width of $\sim$200$^{\circ}$, and propagated into an environment already modified by the previous CME (event one). This expansion appears to have been sufficient to cause the CME to interact with the Earth-connected field lines. The shock created by this CME likely accelerated particles along the open field lines, meaning the particles were able to reach Earth. The expansion of the CME also activated the null between ARs 12673 and 12674 (red and dashed blue lines in Figure~\ref{cartoon}~b), leading to reconnection. However, this reconnection (and any transfer of particles) did not involve any open field lines that were Earth-connected. Collectively, these observations are suggestive of particle acceleration occurring along Earth-connected open field to the south-east of AR 12673, accelerated by the CME shock, consistent with a post-eruption origin as found by \citet{chertok2018a,chertok2018b}. At the time of event three, the location of the Earth-connected field lines had moved from its previous position to the south-east of AR 12673 and into the negative polarity (leading) spot of AR 12674 (purple lines in Figure~\ref{cartoon}~c). The CME that occurred as part of event three propagated to the north-east, radially from the AR (black arrows in Figure~\ref{cartoon}~c), and therefore towards the null. 
An activation of the null was evidenced in EUV imaging data, and reconnection at the null effectively opened field lines in AR 12673 (as it transferred the footpoints of the open Earth-connected field from AR 12674 to AR 12673), providing a magnetic channel for flare-accelerated particles to escape to Earth. In addition, the detected type II burst indicates the occurrence of a CME-driven shock that also may have accelerated particles along Earth-connected field lines. It is interesting to note that the SEP energy spectrum, although still soft, contained higher energy protons (of a few hundred MeV, see Figure~\ref{hmi_goes}~b) than were detected at Earth during event two \citep{bruno2019}. The CME of event four was narrow and was deflected to the south-west, away from the Earth-connected field lines that remained in NOAA AR 12674 at this time, albeit modified by the null-point reconnection of event three (see Figure~\ref{cartoon}~d). Although the null point was well developed at this time, there appears to have been no activation of the null (i.e. reconnection). The relatively small and slow CME, deflected away from Earth-connected field lines, with no reconnection to transfer particles, seems to be at the heart of why event four was not SEP-productive. The previous work of \cite{chertok2018b} analysed the CMEs of the two SEP-productive events (i.e. our events two and three), including the propagation of the CMEs through the high-speed solar wind stream emanating from the coronal hole to the south-east of AR 12673. Their Figure 1 shows that the dimmings associated with the field expansion of the CMEs in these two events extend to AR 12674. This supports our observational finding that reconnection at the null (activation of the null) occurs in both SEP-productive events, transferring the footpoints of some of the erupting magnetic field from AR 12673 to AR 12674. 
However, this reconnection, and the new magnetic pathways it creates, likely only becomes significant for the second SEP-productive event (our event three), since only from this time on are Earth-connected field lines rooted in AR 12674. It could be speculated that the different scenarios, CME-shock-accelerated particles along Earth-connected field lines (event two) versus flare processes together with shock-accelerated particles (event three), contribute to the different energy spectra observed in situ. In that respect, protons arriving at Earth from event two are limited to energies below 150 MeV, whereas for event three the protons reach energies of a few hundred MeV \citep{bruno2019}. Indeed, \cite{bruno2019} find that the temporal evolution of the SEP events is complex, but conclude that both events show evidence of CME-shock-accelerated particles. When modelling the magnetic field of an AR, non-linear force-free field models are usually more appropriate than potential field models, as ARs consist of non-potential field configurations with varying degrees of shear and twist across the configuration. However, in this study we are interested in the large-scale magnetic field of the solar corona surrounding the AR, where the magnetic field is known to be close to a potential state, and in the decay index profile of the magnetic field overlying the AR, which only requires knowledge of the potential field. To validate the PFSS model for the field surrounding the AR, we used EUV emission structures observed by SDO/AIA; for example, the model reproduces the field associated with the magnetic null observed to the north-east of AR 12673, and the small coronal hole to the east where open magnetic field is located. Very strong magnetic fields have been recorded in this AR \citep{wang2018} and therefore numerous artefacts in the line-of-sight and vector magnetic field exist \citep{anfinogentov2019}. 
These artefacts are present along one of the AR's PILs and are most apparent in the transverse field component. Our approach of using the PFSS model, with the radial magnetic field component as the boundary condition, together with our investigation of the magnetic field in the vicinity of the AR and of the gradient of the overlying magnetic field with height, involves field lines with footpoints away from the PILs in the AR. Therefore, these artefacts do not affect our analysis of the configuration of the magnetic field provided by the potential field extrapolations. Using a potential field model allows the rate at which the downward Lorentz force of the overlying field varies with height to be determined. \cite{kliem2021} have shown that the rate at which this force varies with height is correlated with CME speed for CMEs with speeds greater than 1500~km~s$^{-1}$. Both SEP-productive events in this study have CMEs with speeds greater than this value, whereas the non-SEP-productive events are slower. Using the same height range as \cite{kliem2021}, we calculated the change of the decay index with altitude, $dn/dh$, above the critical height $h_{crit}$ for all four events. We obtained the change of the decay index with altitude above the flare/CME sites (as indicated by the location of the flare ribbons) in order to determine whether this metric indicates when an AR might be capable of producing a fast CME and therefore creating shock-accelerated particles. Our results show little variation in $dn/dh$ at the times of the four events, even though two of the CMEs have speeds below 1500~km~s$^{-1}$. Event three shows a lower $dn/dh$ than expected, given that the region produces a CME that is $>$2000~km~s$^{-1}$. However, event three is our most complex event and the flare seemingly has two parts: confined and eruptive. Flare ribbons are seen to extend along PIL 3 and then 4 by the peak of the flare. 
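The decay-index diagnostics used here can be illustrated with a short numerical sketch. The field profile $B(h)=B_0(h+a)^{-3}$ below is an assumed toy model, chosen only so that $n(h)$ rises smoothly through the critical value; it is not the PFSS field above AR 12673.

```python
import numpy as np

# Toy overlying-field profile B(h) = B0 (h + a)^-3 -- an assumed, illustrative
# profile, NOT the actual PFSS extrapolation above the AR.
a, B0 = 42.0, 1.0e5                 # Mm; arbitrary field units
h = np.linspace(1.0, 200.0, 4000)   # height above the photosphere (Mm)
B = B0 * (h + a) ** -3

# Decay index n(h) = -d ln B / d ln h, and its height gradient dn/dh
n = -np.gradient(np.log(B), np.log(h))
dn_dh = np.gradient(n, h)

# Critical height: first height where n crosses the torus-instability
# threshold n_crit = 1.5
idx = int(np.argmax(n >= 1.5))
h_crit = h[idx]
print(h_crit, dn_dh[idx])           # ~42 Mm and ~0.018 Mm^-1 for this profile
```

For this toy profile $n(h)=3h/(h+a)$, so analytically $h_{crit}=a$ and $dn/dh=3a/(h+a)^2$, which the numerical estimate reproduces; with real data, $B(h)$ would instead be sampled from the potential field extrapolation above the flaring PIL.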
The critical heights for this event are quite different (42 vs 59~Mm above PILs 3 and 4); however, the values of $dn/dh$ are very similar (0.017 and 0.016~Mm$^{-1}$). The complexity of the events produced by this region brings into question which PILs should be used for the calculation of $h_{crit}$ and $dn/dh$. Nevertheless, this is a very preliminary study in a complex AR with several PILs activating during each event, and more analysis is required in order to determine whether the parameters $h_{crit}$ and $dn/dh$ could be interesting proxies to consider alongside other characteristics of an AR's magnetic field when assessing the likelihood of SEP occurrence. \begin{acknowledgments} SLY and LMG would like to thank NERC for funding via the SWIMMR Aviation Risk Modelling (SWARM) project (grant no. NE/V002899/1). SLY would also like to thank DIPC for their hospitality at Miramar Palace. AWJ is supported by a European Space Agency (ESA) Research Fellowship. DS received support under STFC grant number ST/S000240/1. TM is supported by the STFC PhD studentship grant ST/V507155/1. \end{acknowledgments} \vspace{5mm} \facilities{SDO/AIA, SDO/HMI, GONG, SoHO/LASCO, GOES} \software{JHelioviewer \citep{muller2017}, SunPy \citep{sunpy2020}, pfsspy \citep{yeates2018, stansby2019}, HelioPy \citep{stansby2021}, SpiceyPy \citep{annex2021}, SPICE toolkit \citep{acton2018}} \bibliography{ref}{} \bibliographystyle{aasjournal}
Title: A new candidate for central tidal disruption event in SDSS J014124+010306 with broad Mg~{\sc ii} line at $z=1.06$
Abstract: In this Letter, a new candidate for a central tidal disruption event (TDE) is reported in SDSS J014124+010306 (=SDSS J0141), which shows a broad Mg~{\sc ii} line at redshift $z=1.06$. Based on the long-term photometric $ugriz$-band variabilities from the SDSS Stripe82 database and the PHOTOOBJALL database, a central TDE is preferred, with a 1.3${\rm M_\odot}$ main-sequence star tidally disrupted by a central black hole (BH) of $(14\pm2)\times10^6{\rm M_\odot}$ in SDSS J0141. Moreover, the CAR process has been applied to confirm that the probability is only about 0.4\% that the long-term variabilities in SDSS J0141 are not related to a TDE but arise from intrinsic AGN activities. Meanwhile, based on the apparent broad Mg~{\sc ii} emission lines, the virial BH mass can be estimated as $245\times10^6{\rm M_\odot}$, 18 times larger than the TDE-model-determined BH mass, providing further clues to support a central TDE in SDSS J0141, similar to the case of the TDE candidate SDSS J0159, whose virial BH mass is two orders of magnitude larger than the BH mass expected from the M-sigma relation. Among the reported optical TDE candidates, SDSS J0141 is the candidate at the highest redshift. The results in this Letter indicate that it should be common to detect TDE candidates in high-redshift galaxies with broad Mg~{\sc ii} lines.
https://export.arxiv.org/pdf/2208.05253
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} galaxies:nuclei - quasars:emission lines - transients: tidal disruption events - quasars: individual (SDSS J0141) \end{keywords} \section{Introduction} TDEs (tidal disruption events), indicators of massive black holes (BHs) and BH accreting systems, have been studied in detail for more than four decades \citep{re88, lu97, gm06, gr13, gm14, mg19, st19, zl21}: the accretion of fallback debris from stars tidally disrupted by central BHs leads to apparent time-dependent variabilities. A more recent review on TDEs can be found in \citet{gs21}, and two recent large samples with dozens of new TDE candidates can be found in \citet{vg21} and \citet{sg21}. Among the reported TDE candidates, almost all are reported in inactive galaxies, such as the two candidates in non-active galaxies found through the Stripe82 database by \citet{ve11}, PS1-10jh and PS1-11af in inactive galaxies through PanSTARRS (panoramic survey telescope and rapid response system) by \citet{gs12, ch14}, iPTF16fnl in an E+A galaxy through the PTF (Palomar Transient Factory) by \citet{bl17}, OGLE17aaj in a quite weakly active galaxy through the Optical Gravitational Lensing Experiment (OGLE) by \citet{gr19}, and the well-known ASASSN-14ae, ASASSN-14li and ASASSN-19dj in nearby quiescent galaxies through the ASAS-SN (all-sky automated survey for supernovae) by \citet{ht14, ht16, hi21}, etc. Besides the TDE candidates in quiescent galaxies, there are only a few TDE candidates reported in active galaxies, such as SDSS J0159 \citep{md15} and CNSS J0019+00 \citep{an20}. However, in SDSS J0159, due to the virial BH mass being quite different from the BH mass expected from the M-sigma relation, as discussed in \citet{zh19}, the broad Balmer emission lines are expected to be totally related to central TDE debris, indicating SDSS J0159 is not a normal broad-line AGN (BLAGN). 
And in CNSS J0019+00 there are no broad emission lines, indicating CNSS J0019+00 is also not a normal broad-line AGN. It is therefore an interesting objective to detect TDE candidates in normal broad-line galaxies. Among the reported optical TDE candidates, strong broad Balmer and helium emission lines are fundamental spectroscopic characteristics, and the reported broad emission lines can be related to disk-like structures formed from TDE debris, as in SDSS J0159 \citep{md15, zh21}, ASASSN-14li \citep{ht16}, PTF09djl \citep{lz17}, PS18kh \citep{ht19}, AT2018hyz \citep{sn20, hf20}, etc., indicating that the broad emission lines in these TDE candidates are not related to the normal BLRs of normal BLAGN. Moreover, for several TDE candidates whose UV-band spectra have been well checked, such as PS18kh, ASASSN-15lh and ASASSN-14li, there are no broad Mg~{\sc ii}$\lambda2800$\AA~ emission lines. In other words, if there is a broad Mg~{\sc ii} emission line in the UV spectrum of a TDE candidate, the host galaxy of the candidate is probably a normal BLAGN. Therefore, to detect and report a TDE candidate with an apparent broad Mg~{\sc ii} line is the main objective of this Letter. Among the high-redshift objects in SDSS covering broad Mg~{\sc ii} emission, a new TDE candidate in J014124+010306 (=SDSS J0141) at a redshift of 1.06 is reported in this Letter. Section 2 presents the long-term photometric SDSS $ugriz$-band variabilities of \obj. Section 3 shows the theoretical TDE model and the fitting procedure applied. Section 4 shows the spectroscopic properties and necessary discussions. Section 5 gives our final conclusions. We have adopted the cosmological parameters $H_{0}=70{\rm km\cdot s}^{-1}{\rm Mpc}^{-1}$, $\Omega_{\Lambda}=0.7$ and $\Omega_{\rm m}=0.3$. \section{Long-term Light curves in \obj} As a follow-up to our previous work on changing-look AGN in \citet{zh21b}, plans are underway to systematically search for changing-look AGN through multi-epoch SDSS spectra. 
When checking long-term variabilities of the candidates, the \obj~ with five repeated spectra is selected as the target of the Letter, due to its unique photometric variabilities. SDSS $ugriz$-band light curves of \obj~ are collected from the following two databases. First, the $griz$-band light curves are collected from the Stripe82 database \citep{bv08} with MJD from 51081 (September 1998) to 53705 (December 2005). There are no $u$-band data points provided by the Stripe82 database. Second, the $ugriz$-band variabilities are collected from the SDSS PHOTOOBJALL database according to THINGID=120118318 and the corresponding 29 photometric objids of \obj, by the following query applied in the SQL search tool in SDSS DR16, \begin{lstlisting} SELECT mjd, psfmag_g, psfmag_u, psfmag_r, psfmag_i, psfmag_z, psfmagerr_g, psfmagerr_u, psfmagerr_r, psfmagerr_i, psfmagerr_z FROM PHOTOOBJALL WHERE objid = 1237657072231972919 or objid = 1237657192526905410 or objid = 1237657235444138067 or objid = 1237657364307247212 or objid = 1237657587628048446 or objid = 1237657737952297019 or objid = 1237657815264133192 or objid = 1237659915500454002 or objid = 1237660010022568005 or objid = 1237662969222070482 or objid = 1237663205462114377 or objid = 1237663506125488270 or objid = 1237663544780849275 or objid = 1237666302155292964 or objid = 1237666340803444954 or objid = 1237666383743484048 or objid = 1237666409527247014 or objid = 1237666499696328851 or objid = 1237666542643773631 or objid = 1237666637155991620 or objid = 1237666662926516356 or objid = 1237666727323500667 or objid = 1237646012704882989 or objid = 1237646648350802019 or objid = 1237653000602648648 or objid = 1237656513891467420 or objid = 1237656595492765803 or objid = 1237656909050544257 or objid = 1237656973454934076 \end{lstlisting} Here, due to the point-like photometric images of \obj~ at redshift 1.06, PSF magnitudes are collected from the PHOTOOBJALL, rather than the Petrosian magnitudes commonly accepted for 
extended photometric images. The light curves are shown in Fig.~\ref{lmc}. A rise to peak followed by a smooth declining trend is apparent in each band's light curve and can be well expected for a central TDE. Then, the theoretical TDE model can be considered to describe the variabilities of \obj. \section{Theoretical TDE model applied to describe the Light Curves} More recent detailed descriptions of the theoretical TDE model can be found in \citet{gr13, gm14, mg19} and in the corresponding public codes TDEFIT and MOSFIT. Here, based on the more recent discussions in \citet{mg19}, the theoretical TDE model is applied in the following four steps, similar to what we have done in \citet{zh22} to describe the X-ray variabilities in the TDE candidate {\it Swift} J2058.4+0516 with a relativistic jet. First, standard templates of viscous-delayed accretion rates $\dot{M}_{at}$ are created, based on the $dm/de$ distributions provided by TDEFIT/MOSFIT (distributions of debris mass $dm$ as a function of the specific binding energy $e$ after a star is disrupted), by the equations \begin{equation} \begin{split} \dot{M}_{at}~&=~\frac{exp(-t/T_{v})}{T_{v}}\int_{0}^{t}exp(t'/T_{v})\dot{M}_{fbt}dt' \\ \dot{M}_{fbt}~&=~dm/de~\times~de/dt \ \ \ \ \ \ de/dt~=~\frac{(2~\pi~G~M_{\rm BH})^{2/3}}{3~t^{5/3}} \end{split} \end{equation} where $M_{\rm BH}$ is the central BH mass, $\dot{M}_{fbt}$ are the templates of fallback material rates created for the standard case with a central BH of $M_{\rm BH}=10^6{\rm M_\odot}$ and a disrupted main-sequence star of $M_{*}=1{\rm M_\odot}$, with a grid of the impact parameters $\beta_{temp}$ listed in \citet{gr13}, and $T_{v}$ is the viscous time accounting for the viscous delay effects discussed in \citet{gr13, mg19}. Here, a grid of 31 values of $\log(T_{v, temp}/{\rm years})$ ranging from $-3$ to 0 is applied to create templates of $\dot{M}_{at}$ for each impact parameter. 
Finally, the templates of $\dot{M}_{at}$ include 736 (640) time-dependent viscous-delayed accretion rates, covering the 31 different $T_{v}$ values for each of the 23 (20) impact parameters for a main-sequence star with polytropic index $\gamma$ of 4/3 (5/3). Second, simple linear interpolations are applied to determine the accretion rates $\dot{M}_{a}(T_{v},~\beta)$ for TDEs with input $\beta$ and $T_{v}$ different from the values listed in $\beta_{temp}$ and $T_{v, temp}$. Taking $\beta_1$, $\beta_2$ in $\beta_{temp}$ as the two values nearest to the input $\beta$, and $T_{v1}$, $T_{v2}$ in $T_{v,temp}$ as the two values nearest to the input $T_{v}$, the first linear interpolation is applied to find the viscous-delayed accretion rates with the input $T_{v}$ but with $\beta=\beta_1$ and $\beta=\beta_2$ by \begin{equation} \begin{split} \dot{M}_{a}(T_{v}, \beta_{1}) &= \dot{M}_{at}(T_{v1}, \beta_1) + \\ &\frac{T_{v}-T_{v1}}{T_{v2}-T_{v1}}(\dot{M}_{at}(T_{v2}, \beta_1) - \dot{M}_{at}(T_{v1}, \beta_1))\\ \dot{M}_{a}(T_{v}, \beta_2) &= \dot{M}_{at}(T_{v1}, \beta_2) + \\ &\frac{T_{v}-T_{v1}}{T_{v2}-T_{v1}}(\dot{M}_{at}(T_{v2}, \beta_2) - \dot{M}_{at}(T_{v1}, \beta_2)) \end{split} \end{equation} Then, the second linear interpolation is applied to find the viscous-delayed accretion rates with the input $T_{v}$ and the input $\beta$ by \begin{equation} \dot{M}_{a}(T_{v}, \beta) = \dot{M}_{a}(T_{v}, \beta_1) + \\ \frac{\beta-\beta_1}{\beta_2-\beta_1}(\dot{M}_{a}(T_{v}, \beta_2) - \dot{M}_{a}(T_{v}, \beta_1)) \end{equation} Third, for TDEs with input $M_{\rm BH}$ and $M_{*}$ different from $10^6{\rm M_\odot}$ and $1{\rm M_\odot}$, as discussed in \citet{gr13, mg19}, the actual viscous-delayed accretion rates $\dot{M}$ and the corresponding time information are created from the viscous-delayed accretion rates $\dot{M}_{a}(T_{v},~\beta)$ by the following scaling relations applied with the input BH mass, and the mass and radius of the disrupted main-sequence star, \begin{equation} \begin{split} &\dot{M} = M_{\rm 
BH,6}^{-0.5}\times M_{\star}^2\times R_{\star}^{-1.5}\times\dot{M}_{a}(T_{v}, \beta) \\ &t = (1+z)\times M_{\rm BH,6}^{0.5}\times M_{\star}^{-1}\times R_{\star}^{1.5} \times t_{a}(T_{v}, \beta) \end{split} \end{equation} where $M_{\rm BH,6}$, $M_{\star}$, $R_{\star}$ and $z$ represent the central BH mass in units of ${\rm 10^6M_\odot}$, the stellar mass in units of ${\rm M_\odot}$, the stellar radius in units of ${\rm R_{\odot}}$ and the redshift of the host galaxy of a TDE, respectively. The mass-radius relation discussed in \citet{tp96} has been adopted for main-sequence stars. Fourth, the time-dependent output emission spectrum in the rest frame, based on the TDE-model-expected accretion rate $\dot{M}(t)$, can be determined by the simple black-body photosphere model as discussed in \citet{gm14, mg19}, \begin{equation} \begin{split} &F_\lambda(t)=\frac{2\pi hc^2}{\lambda^5}\frac{1}{exp(hc/(k\lambda T_p(t)))-1}(\frac{R_p(t)}{D})^2 \\ &R_p(t) = R_0\times a_p(\frac{\epsilon\dot{M}(t)c^2}{1.3\times10^{38}M_{\rm BH}/{\rm M_\odot}})^{l_p} \\ &T_p(t)=(\frac{\epsilon\dot{M}(t)c^2}{4\pi\sigma_{SB}R_p^2})^{1/4} \ \ \ \ \ a_p = (G M_{\rm BH}\times (\frac{t_p}{\pi})^2)^{1/3} \end{split} \end{equation} where $D$ is the distance to Earth calculated from the redshift $z$, $k$ is the Boltzmann constant, $T_p(t)$ and $R_p(t)$ represent the time-dependent effective temperature and radius of the photosphere, respectively, $\epsilon$ is the energy transfer efficiency (smaller than 0.4), $\sigma_{SB}$ is the Stefan-Boltzmann constant, and $t_p$ is the time of the peak accretion. Then, the time-dependent apparent SDSS $ugriz$-band magnitudes $mag_{u,~g,~r,~i,~z}(t)$ can be well determined through $F_\lambda(t)$ in the observer frame convolved with the transmission curves of the SDSS $ugriz$ filters. 
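The two-step linear interpolation of step two (Equations 2 and 3) can be sketched as follows; the grids and the template array here are placeholders for illustration, not the actual TDEFIT/MOSFIT tables.

```python
import numpy as np

def interp_accretion_rate(templates, Tv_grid, beta_grid, Tv, beta):
    """Two-step linear interpolation of viscous-delayed accretion-rate
    templates: first in T_v at the two bracketing beta values (Eq. 2),
    then in beta (Eq. 3).  templates[i, j] holds the rate for
    (Tv_grid[i], beta_grid[j]); the sorted grids must bracket the inputs."""
    i = np.searchsorted(Tv_grid, Tv) - 1       # bracketing template indices
    j = np.searchsorted(beta_grid, beta) - 1
    wT = (Tv - Tv_grid[i]) / (Tv_grid[i + 1] - Tv_grid[i])
    wb = (beta - beta_grid[j]) / (beta_grid[j + 1] - beta_grid[j])
    # first interpolation: in T_v, at beta_1 and beta_2
    M_b1 = templates[i, j] + wT * (templates[i + 1, j] - templates[i, j])
    M_b2 = templates[i, j + 1] + wT * (templates[i + 1, j + 1] - templates[i, j + 1])
    # second interpolation: in beta
    return M_b1 + wb * (M_b2 - M_b1)

# Sanity check on a surface that is linear in (T_v, beta), where linear
# interpolation must be exact (placeholder grids, not the real template grids)
Tv_grid = np.array([0.1, 0.5, 1.0])
beta_grid = np.array([0.6, 0.9, 1.5])
templates = 2.0 * Tv_grid[:, None] + 3.0 * beta_grid[None, :]
print(interp_accretion_rate(templates, Tv_grid, beta_grid, 0.3, 0.75))  # ~2.85
```

In the real fitting procedure each `templates[i, j]` entry would be a full time series $\dot{M}_{at}(t)$ rather than a scalar; the same code works unchanged, since the arithmetic broadcasts over the time axis.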
Based on the four steps above, the TDE-model-expected time-dependent apparent $ugriz$-band magnitudes $mag_{u,~g,~r,~i,~z}(t)$ can be well created with seven parameters (with redshift $z=1.0624$ adopted for \obj): the central BH mass $M_{\rm BH}$, the mass $M_{\star}$ and polytropic index $\gamma$ (4/3 or 5/3) of the disrupted main-sequence star, the impact parameter $\beta$, the viscous-delay time $T_{v}$, the energy transfer efficiency $\epsilon$, and the two parameters $R_0$ and $l_p$ related to the black-body photosphere. Moreover, there are five additional parameters $mag_{0}(u,~g,~r,~i,~z)$, applied to determine the contributions of the host galaxy (the non-variable components included in the light curves) to the observed variabilities. Finally, through the Levenberg-Marquardt least-squares minimization technique (the MPFIT package) \citep{mc09}, the theoretical TDE model can be well applied to describe the SDSS $ugriz$-band light curves. Meanwhile, when the fitting procedure above is applied, there is one limitation on the model parameters: for a plausible TDE, the determined tidal disruption radius is required to be larger than the event horizon of the central BH. Then, the determined TDE model parameters (with $\gamma=5/3$) and the corresponding uncertainties (1$\sigma$ errors computed from the covariance matrix) are: $\log(M_{\rm BH,6})\sim1.16\pm0.05$, $\log(M_\star/{\rm M_\odot})\sim-0.31\pm0.04$, $\log(\beta)\sim0.39\pm0.01$, $\log(T_{v})\sim-0.81\pm0.05$, $\log(\epsilon)\sim-0.55\pm0.05$, $\log(R_0)\sim-0.41\pm0.06$, $\log(l_p)\sim-0.14\pm0.06$, $\log(mag_0(u))\sim1.32\pm0.01$, $\log(mag_0(g))\sim1.32\pm0.01$, $\log(mag_0(r))\sim1.31\pm0.01$, $\log(mag_0(i))\sim1.31\pm0.01$, $\log(mag_0(z))\sim1.31\pm0.01$. Fig.~\ref{lmc} shows the TDE-model-determined best-fitting results and the corresponding confidence bands from the uncertainties of the model parameters. Before the end of the section, three points are noted. 
First, as discussed in \citet{rs18}, \obj~ has been classified as an extreme variability quasar (EVQ). Based on the improved DRW (Damped Random Walk) process in \citet{koz10, zk13}, the public JAVELIN (Just Another Vehicle for Estimating Lags In Nuclei) code can provide the best descriptions of the light curves of \obj. Here, in the top middle panel of Fig.~\ref{lmc}, the JAVELIN-determined best descriptions and corresponding confidence bands are shown as solid and dashed blue lines. Meanwhile, through an MCMC (Markov Chain Monte Carlo, \citet{fh13}) analysis with uniform logarithmic priors on the DRW process parameters $\tau$ and $\sigma$ ($SF_\infty\sim\sigma\sqrt{\tau}$ being the parameter used in \citet{rs18}), the bottom right panel of Fig.~\ref{lmc} shows the posterior distributions of $\sigma$ and $\tau$, with $\log(\tau/{\rm days})\sim3.04$ and $\log(\sigma/({\rm mag/days^{0.5}}))\sim-0.535$, leading to $\log(SF_{\infty}/{\rm mag})\sim0.98$, which is about 1 magnitude larger than the results shown in Fig.~9 of \citet{rs18}, indicating that \obj~ has unique variabilities among the EVQs. Second, an interesting method is applied to check whether the variabilities shown in Fig.~\ref{lmc} are not from a TDE but actually from intrinsic AGN activities, based on the CAR (continuous autoregressive) process preferred to describe AGN variabilities as discussed in \citet{kbs09}: \begin{equation} \dif LMC_t=\frac{-1}{\tau}(LMC_t - 19.77)\dif t+\sigma_*\sqrt{\dif t}\epsilon(t) \end{equation} where $\epsilon(t)$ is a white noise process with zero mean and variance equal to 1, and $19.77$ is the mean value of $LMC_t$ (the mean value of the SDSS $r$-band light curve of \obj); different mean values in different bands have few effects on the following results. 
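A realisation of this CAR/damped-random-walk process can be sketched with a simple Euler-Maruyama integration. The values of $\tau$ and the sampling below are illustrative assumptions, with $\sigma_*$ set from the quoted stationary variance $\tau\sigma_*^2/2=0.095$.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_car1(t, tau, sigma, mean):
    """Euler-Maruyama realisation of the CAR(1) / damped-random-walk process
    dX = -(X - mean)/tau dt + sigma sqrt(dt) eps(t), with eps ~ N(0, 1)."""
    x = np.empty(len(t))
    x[0] = mean
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        x[k] = (x[k - 1] - (x[k - 1] - mean) / tau * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

# Illustrative parameters (assumed): tau = 500 d; sigma_* fixed so that the
# stationary variance tau * sigma_*^2 / 2 matches the quoted 0.095 mag^2
tau, mean = 500.0, 19.77
sigma = np.sqrt(2.0 * 0.095 / tau)
t = np.arange(0.0, 50000.0, 5.0)    # dense regular sampling in days
lc = simulate_car1(t, tau, sigma, mean)
print(lc.mean(), lc.var())          # close to 19.77 and 0.095 for a long series
```

For the null test described in the text, one would instead draw $\tau$ between 50 and 5000 days, evaluate the process at the observed epochs $t_i$, and fit each realisation with the TDE model.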
Then, a series of 1000 simulated light curves [$t_i$, $LMC_i$] can be created, with values of $\tau$ randomly selected from 50 days to 5000 days (similar to the values reported for extreme variability quasars in \citet{rs18}), with $\sigma_*$ chosen so that the variance $\tau\sigma_*^2/2$ equals 0.095 (the variance of the SDSS $r$-band light curve of \obj), and with $t_i$ the same as the observational time information shown in Fig.~\ref{lmc}. Among the 1000 simulated light curves, 4 can be well described by the theoretical TDE model, based on the criterion that the TDE-model-determined best descriptions lead to $\chi^2/Dof$ better than 3 (2.5 for the results shown in Fig.~\ref{lmc}). Therefore, the probability is only about 0.4\% (4/1000) that the detected TDE-expected variabilities in \obj~ are mis-identified intrinsic AGN variabilities. Moreover, in order to check the probability of 0.4\%, the light curves of all 9258 quasars covered in the Stripe82 region (S82Qs) \citep{mi10} have been carefully inspected. Among the 4763 S82Qs with light curves having more than 60 available data points, there are 11 quasars whose light curves show the smooth declining trend $t^{\sim-5/3}$ expected by the TDE model. The number ratio of TDE candidates among the S82Qs is about $11/4763\sim0.23\%$, supporting the ratio of 0.4\% determined from the CAR simulations. Fig.~\ref{fake} shows the light curve of one of the 11 TDE candidates among the S82Qs. Detailed discussions of the TDE candidates among the S82Qs are beyond the scope of this Letter and will appear in a manuscript in preparation. Therefore, the TDE-expected variabilities in \obj~ are robust. Third, besides the SDSS long-term variabilities, within a searching radius of less than 1\arcsec, long-term variabilities of SDSS J0141 have been collected from PanSTARRS, with MJD from 55174 (Dec. 2009) to 56989 (Nov. 2014), shown as solid red circles in Fig.~\ref{lmc}. 
The PanSTARRS data points are well followed by the TDE model, except for the PanSTARRS $g$-band data points, perhaps due to the large magnitude difference arising from the quite different $g$-filter transmission curves of SDSS and PanSTARRS. Moreover, within a searching radius of less than 5\arcsec, long-term variabilities have been collected from the PTF in Nov. 2014, from WISE (Wide-field Infrared Survey Explorer) with MJD from 55207 (Jan. 2010) to 55570 (Jan. 2011), and from the ZTF (Zwicky Transient Facility) with MJD from 58307 (Sep. 2018) to 59543 (Nov. 2021), shown in Fig.~\ref{zpw} with no apparent variabilities. The long-term non-variability not only is well expected by the TDE model at late times, but also can be accepted as indirect evidence that the variabilities shown in Fig.~\ref{lmc} are not intrinsic AGN variabilities. Furthermore, based on the WISE data points shown in the left panel of Fig.~\ref{zpw}, the $w1-w2$ color is about 1.22 mag, a normal value applied to classify AGN by WISE colors \citep{as18}. Meanwhile, for the reported MIR flares related to TDEs in \citet{wy18}, WISE colors can also be around $w1-w2\sim1$ mag (see their Table~1). Therefore, based on the WISE colors alone, it is hard to determine whether the variabilities in SDSS J0141 are related to a central TDE or totally to intrinsic AGN activities. However, based on the TDE-model-expected variabilities from SDSS and the more recent 7.4-year-long non-variability from PTF and ZTF, a central TDE is preferred in SDSS J0141. \section{Virial BH mass from Spectroscopic Properties of \obj} SDSS spectra of \obj~ observed at MJD=52644, 53265, 53315, 57364, 58108 are shown in Fig.~\ref{spec}, with broad Mg~{\sc ii} emission lines at $z=1.0624$. 
Moreover, for the spectrum with MJD=58108, the continuum emission can be described by $2.66\times\lambda^{-0.19}$, leading to a continuum luminosity at rest wavelength 3100\AA~ of about $\lambda L_{3100}\sim5.01\times10^{44}{\rm erg/s}$. Through the broad Mg~{\sc ii} emission lines, the virial BH mass, as discussed in \citet{pf04, sh11}, can be estimated as \begin{equation} \begin{split} \log(\frac{M_{BH}}{{\rm M_\odot}})~&=~0.86~+~0.5\log(\frac{\lambda L_{3100}}{{\rm 10^{44}erg/s}})~+~ 2\log(\frac{FWHM}{{\rm km/s}})\\ &\ \ \ \ \ ~\sim~8.39 \end{split} \end{equation} with $FWHM\sim3900{\rm km/s}$ measured through the definition of Full Width at Half Maximum (FWHM) for the broad Mg~{\sc ii} line from the spectrum observed at MJD=58108, after subtraction of the power-law continuum emission. Based on the results in Fig.~\ref{lmc}, the spectrum with MJD=58108 is little affected by the central TDE. The estimated virial BH mass is about 18 times larger than the TDE-model-determined BH mass, indicating apparent contributions of TDE fallback accreting debris to the broad Mg~{\sc ii} emission clouds. The distance of the broad Mg~{\sc ii} emission clouds from the central BH could be much smaller than the BLR sizes expected under the virialization assumption, leading to a much larger virial BH mass in \obj, similar to what has been discussed for SDSS J0159 in \citet{zh19, zh21}. Actually, it is still an open question whether the TDE-model-determined BH masses are well consistent with the intrinsic BH masses in TDE candidates. However, \citet{gm14, mg19, ry20, zl21} have shown that the long-term TDE-model-expected variabilities can be well applied to estimate the central BH masses of TDE candidates. Meanwhile, for \obj~ at redshift 1.06, there is no way to measure the stellar velocity dispersion from the SDSS spectra, and thus it is hard to obtain a more reliable BH mass through the M-sigma relation \citep{fm00, ge00, kh13} in \obj. 
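Plugging the measured values into the virial relation above gives a quick numerical consistency check (a sketch using only the numbers quoted in the text):

```python
import math

# Virial BH mass from the broad Mg II line, using the relation above with the
# measured FWHM ~ 3900 km/s and lambda*L_3100 ~ 5.01e44 erg/s
L3100 = 5.01e44   # erg/s
fwhm = 3900.0     # km/s
log_Mbh = 0.86 + 0.5 * math.log10(L3100 / 1.0e44) + 2.0 * math.log10(fwhm)
Mbh_1e6 = 10.0 ** log_Mbh / 1.0e6   # BH mass in units of 1e6 Msun
print(round(log_Mbh, 2))            # 8.39, i.e. M_BH ~ 2.5e8 Msun
```

The result reproduces the quoted $\log(M_{BH}/{\rm M_\odot})\sim8.39$, i.e. a virial mass of roughly $245\times10^6{\rm M_\odot}$ within rounding.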
Therefore, in this Letter, the TDE-model-determined BH mass and the virial BH mass are compared in SDSS J0141. Certainly, further efforts are necessary to determine an accurate central BH mass, which will provide further clues to support or reject the expected central TDE in \obj. \section{Conclusions} Finally, we give our main conclusions as follows. A central TDE can well describe the long-term SDSS $ugriz$-band variabilities over 8 years in \obj, which shows apparent broad Mg~{\sc ii} emission lines at $z=1.06$, leading to a TDE-model-determined BH mass of about $(14\pm2)\times10^6{\rm M_\odot}$. Moreover, based on artificial light curves created by the CAR process, the probability is only about 0.4\% that the long-term variabilities in SDSS J0141 are from central AGN activities rather than from a central TDE. Meanwhile, through the broad Mg~{\sc ii} emission lines, the virial BH mass can be estimated to be about $245\times10^6{\rm M_\odot}$, about 18 times larger than the TDE-model-determined BH mass, providing further clues to support the central TDE in \obj. Among the reported optical TDE candidates, SDSS J0141 is the candidate with the highest redshift. Moreover, it is feasible to detect more TDE candidates in galaxies with broad emission lines, not only broad Balmer and helium lines but also broad Mg~{\sc ii} lines. \section*{Acknowledgements} Zhang gratefully acknowledges the anonymous referee for constructive comments and suggestions that greatly improved the paper. Zhang gratefully acknowledges the grant support from NSFC-12173020. This Letter has made use of data from the SDSS projects managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration. The Letter has made use of the public codes TDEFIT, MOSFIT and MPFIT. This Letter has made use of data from PanSTARRS, WISE, PTF and ZTF. 
\section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author (\href{mailto:aexueguang@qq.com}{aexueguang@qq.com}). \label{lastpage}
Title: A novel inversion method to determine the coronal magnetic field including the impact of bound-free absorption
Abstract: The magnetic field governs the corona; hence it is a crucial parameter to measure. Unfortunately, existing techniques for estimating its strength are limited by strong assumptions. These techniques include photospheric or chromospheric field extrapolation using potential or non-linear force-free methods, estimates based on coronal seismology, or direct observations via, e.g., the Cryo-NIRSP instrument on DKIST, which will measure the coronal magnetic field, but only off the limb. Alternatively, in this work we investigate a recently developed approach based on the magnetic-field-induced transition (MIT) of the \fex~257.261~\AA~ line. In order to examine this approach, we have synthesized several \fex\ lines from two 3D magnetohydrodynamic simulations, one modeling an emerging flux region and the second an established mature active region. In addition, we take into account bound-free absorption from neutral hydrogen and helium and singly ionised helium. The absorption from cool plasma that occurs at coronal heights has a significant impact on determining the magnetic field. We investigate in detail the challenges of using these \fex\ lines to measure the field, considering their density and temperature dependence. We present a novel approach to deriving the magnetic field from the MIT using inversions of the differential emission measure as a function of temperature, density, and magnetic field. This approach successfully estimates the magnetic field strength (up to 18\% relative error) in regions that do not suffer from significant absorption and that have relatively strong coronal magnetic fields ($>250$~G). This method allows the masking of regions where absorption is significant.
https://export.arxiv.org/pdf/2208.13984
\title{A novel inversion method to determine the coronal magnetic field including the impact of bound-free absorption} \author[0000-0002-0333-5717]{Juan Mart\'inez-Sykora} \affil{Lockheed Martin Solar \& Astrophysics Laboratory, 3251 Hanover St, Palo Alto, CA 94304, USA} \affil{Bay Area Environmental Research Institute, NASA Research Park, Moffett Field, CA 94035, USA.} \affil{Rosseland Center for Solar Physics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo, Norway} \affil{Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo, Norway} \author[0000-0003-0975-6659]{Viggo H. Hansteen} \affil{Lockheed Martin Solar \& Astrophysics Laboratory, 3251 Hanover St, Palo Alto, CA 94304, USA} \affil{Bay Area Environmental Research Institute, NASA Research Park, Moffett Field, CA 94035, USA.} \affil{Rosseland Center for Solar Physics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo, Norway} \affil{Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo, Norway} \author[0000-0002-8370-952X]{Bart De Pontieu} \affil{Lockheed Martin Solar \& Astrophysics Laboratory, 3251 Hanover St, Palo Alto, CA 94304, USA} \affil{Rosseland Center for Solar Physics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo, Norway} \affil{Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo, Norway} \author[0000-0002-9325-9884]{Enrico Landi} \affil{Department of Climate and Space Sciences and Engineering, University of Michigan, Ann Arbor, MI, USA} \keywords{Magnetohydrodynamics (MHD) ---Methods: numerical --- Radiative transfer --- Sun: atmosphere --- Sun: Corona} \section{Introduction} \label{sec:intro} The evolution and strength of the solar coronal magnetic field is of great importance in understanding the driving of energetic flares \citep{Benz:2017LRSP...14....2B} and coronal mass ejections (CMEs) \citep{Chen:2011LRSP....8....1C}. 
It forms the framework upon which the energy transport in coronal waves occurs \citep{Cranmer:2005ApJS..156..265C}, and furthermore forms complex structures and arcades in active regions \citep{Schrijver:1997xu}. In the quiet Sun, the magnetic field connects the corona with photospheric motions, mainly through the chromospheric network where the energetics are dominated by phenomena such as shock waves, spicules, and other types of jets \citep{Martinez-Sykora:2017sci,DePontieu2021}. Moreover, the magnetic field links the layers from the solar interior, where the field is created, to the outer bounds of our heliosphere more than 100~AU from the Sun. Beyond solar physics, the ability to measure stellar fields is also crucial, for instance, in understanding superflares \citep{Maehara:2012Natur.485..478M}, or the impact of stellar CMEs \citep{Argiroffi:2019NatAs...3..742A} on exoplanets \citep{Dong:2017ApJ...837L..26D}. The magnetic field is measured at photospheric heights via the Zeeman and Hanle effects \citep{delToroIniesta:2016LRSP...13....4D,Asensio:2008ApJ...683..542A} using, for instance, spectropolarimeters \citep[e.g., the Swedish Solar Telescope (SST), the Solar Optical Telescope on board Hinode (SOT), or the Daniel K. Inouye Solar Telescope (DKIST)][]{Scharmer:2003ve, Tsuneta:2008kc,deWijn:2022SoPh..297...22D}. Recently, this has also been applied to strong non-LTE lines formed in the chromosphere, by inverting their full Stokes profiles \citep{delaCruzRodriguez:2019A&A...623A..74D}. However, Zeeman splitting is generally very weak and hence negligible for coronal lines. DKIST will have sufficient signal and spectral resolution in the infrared using the Cryo-NIRSP instrument to study the coronal magnetic field \citep{Rast2021SoPh..296...70R}. However, such observations will be possible only off the solar limb, where confusion along the line of sight (LOS) is possible and perhaps likely.
Another approach to estimating the coronal magnetic field strength and topology lies in using photospheric (or chromospheric) magnetic field measurements and performing field extrapolations based on these. Non-linear force free field (NLFFF) extrapolations are commonly used in the literature, but these have clear limitations since the photosphere and chromosphere are pressure gradient dominated \citep{DeRosa2015ApJ...811..107D}, thus making the assumption of force free fields at least partially invalid. These methods can be aided or improved by the concurrent use of intensity maps from transition region or coronal spectral lines \citep{Aschwanden:2016qy}. Similarly, the non force free nature of lower atmosphere fields plagues the more advanced magneto-frictional numerical methods, which aspire to capture the time-dependent evolution of the magnetic field \citep[e.g.,][]{Cheung2012ApJ...757..147C}. Clearly, direct measurements of the coronal magnetic field would be an important supplement to these techniques. Recent work suggests the use of the magnetic-field-induced transition (MIT), e.g., in the \fex~257.261~\AA\ line, to directly measure the coronal magnetic field \citep{Li:2015ApJ...807...69L,Li:2016ApJ...826..219L,Chen:2021ApJ...920..116C}. Such MIT transitions are highly sensitive to the strength of the magnetic field. However, the MIT transition is an unresolvable blend with two other \fex\ lines, which decreases the sensitivity of the total 257~\AA\ line to the magnetic field. Several methods have been proposed to measure the magnetic field using the 257~\AA\ blended lines: originally, \citet{Si:2020ApJ...898L..34S} proposed line intensity ratios between the total intensity of the 257~\AA\ line and other \fex\ lines; later, \citet{Landi:2020ApJ...904...87L} proposed a combination of intensity ratios capable of removing the blending contributions and improving the accuracy of the magnetic field measurement.
However, regardless of the method, the intensity of the 257~\AA\ line is also dependent on the electron temperature and density, so that these two parameters need to be constrained in order to achieve a good estimate of the field strength. In addition, all MIT studies have ignored the absorption that the observed spectral line emission will suffer due to the presence of neutral hydrogen, helium, and singly ionized helium. \citet{Berger:1999ApJ...519L..97B,dePontieu:1999SoPh..190..419D,Schrijver:1999SoPh..187..261S,De-Pontieu:2003qr} pointed out that cool plasma may be present at coronal heights (e.g., in the moss or upper transition region of hot loops), leading to EUV absorption. \citet{Anzer:2005ye} showed that EUV emission lines may suffer significant bound-free absorption from neutral hydrogen, helium, and singly ionized helium, and focused on such plasma in filaments and prominences. \citet{DePontieu:2009ApJ...702.1016D} used numerical models and observations to show evidence of this type of absorption in moss, while \citet{Hansteen:2019AA...626A..33H} showed it in the presence of recently emerged magnetic flux. The MIT technique of measuring coronal magnetic fields has already been applied to existing observations. For example, \citet{Landi:2021fl} estimated the magnetic field during a C2 flare using Hinode/EIS. Similarly, \citet{Brooks:2021ApJ...915L..24B} conducted a study of the field strength in coronal loops. \citet{Chen:2021ApJ...918L..13C} have tested the method for observations of stellar atmospheres. In this study, we will not only investigate the impact of absorption from high-lying cool gas, but we will also present a new approach to inverting the observed data to determine the magnetic field by using the differential emission measure (DEM). This new approach is based on using the method of \citet{Cheung:2019ApJ...882...13C} to derive simultaneously the density, temperature, and magnetic field of the coronal plasma.
These inversions can also be used to disambiguate the spectrum from multi-slit observations, e.g., MUSE \citep{Cheung:2019ApJ...882...13C,DePontieu:2020ApJ...888....3D}, or slitless spectrographs, e.g., COSIE \citep{Winebarger:2019ApJ...882...12W}. In the following, we give a short description of the forward radiative modeling code, which includes absorption (Section~\ref{sec:syn}). This code is used to compute the spectral line intensities in two different 3D radiative MHD models, which are described in Section~\ref{sec:sim}. One is a simulation of an emerging flux region using the Bifrost code \citep[][]{Gudiksen:2011qy}. The results are discussed in the main text. The second simulation is of a mature active region \citep{Cheung:2019NatAs...3..160C} using MURaM \citep{Rempel:2017zl}, and its results are summarized in the appendix. The results in Section \ref{sec:res} are composed of a first part that goes into the details of the properties of the various spectral lines with and without MIT and absorption (Section~\ref{sec:int}) and a second part where we apply the inversions to derive the magnetic fields and compare them to the models' ground truth (Section~\ref{sec:demb}). Section~\ref{sec:con} contains the conclusions and discussion. \section{Synthesis} \label{sec:syn} The synthesized EUV spectral line intensities ($I$) are computed assuming that the radiation is optically thin, and that the ionization state of the emitting ion is in statistical equilibrium with the temperature and density of the emitting plasma. However, we do allow for absorption by bound-free processes in overlying cold plasma: \begin{eqnarray} I[\lambda] = \int_l n_e\, n_H\, G_\lambda (T, n_e, B)\, e^{-\tau}\,dl \label{eq:int} \end{eqnarray} \noindent where $n_e$ and $n_H$ are the electron and the hydrogen number densities, respectively, and $G_\lambda(T, n_e, B)$ is the contribution function (see Appendix~\ref{sec:appendix_rad}).
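In discretised form, Equation~\ref{eq:int} amounts to summing the attenuated emissivity cell by cell along the LOS. The following minimal numpy sketch (with hypothetical stand-in arrays for the simulation columns; not the actual synthesis code, which uses CHIANTI/ChiantiPy and CUDA) illustrates the structure of the computation:

```python
import numpy as np

# Minimal sketch of the optically thin synthesis with bound-free
# attenuation: I = sum( n_e * n_H * G * exp(-tau) * dl ) along the LOS.
# All arrays are hypothetical stand-ins for simulation columns.
def synthesize_intensity(n_e, n_H, G, alpha, dl):
    """n_e, n_H: electron/hydrogen number densities along the LOS [cm^-3],
    G: contribution function evaluated in each cell,
    alpha: bound-free opacity per unit length [cm^-1],
    dl: cell size [cm]. Index 0 is the cell closest to the observer."""
    # Optical depth accumulated from the observer through cell i
    # (includes the cell's own opacity; adequate for a sketch).
    tau = np.cumsum(alpha * dl)
    emissivity = n_e * n_H * G
    return np.sum(emissivity * np.exp(-tau) * dl)
```

With `alpha = 0` this reduces to the plain optically thin integral; a large opacity in the foreground cells suppresses the emergent intensity, as for the absorbed regions discussed below.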
The integration is carried out along the LOS ($l$), which is along the vertical axis, mimicking disk-center observations for our models. In cases where we take the absorption into account, we have included absorption by neutral hydrogen and helium as well as singly ionized helium in computing $e^{-\tau}$, following the recipes of \citet{Anzer:2005ye} (see Appendix~\ref{sec:appendix_rad} for details). This computation has been done using CHIANTI v.10 \citep{DelZanna:2021ApJ...909...38D} along with the ChiantiPy software, and the integration has been done on GPUs using CUDA (pycuda): this vastly accelerates the calculations, which in turn allows the use of a finer grid along the LOS, improving accuracy. We have assumed coronal abundances \citep{Feldman:1992ApJS...81..387F}. Further details on the model of \fex\ can be found in Appendix~\ref{sec:appendix_rad}. Figure~\ref{fig:gtne} shows the contribution function of the lines of interest for this effort. All lines show some degree of density dependence; this is especially true for \fex~174~\AA\ (panels a, e) and 257.261~\AA\ (panels c, g). It is also important to notice that the largest intensities are expected for the \fex~174~\AA\ line (panel a). Depending on the density, \fex~257.261~\AA\ (panel c) may have a much smaller contribution to the total intensity than the \fex~257.259~\AA\ line with which it is blended (panel b). The magnetic field dependence of the contribution function for \fex\ 257.261~\AA\ ($G(T,n_e,B)_{257.261}$) is shown in Figure~\ref{fig:gbtne}, normalized with the contribution function of the 174~\AA\ line. Note that the magnetic field dependence varies strongly with density and temperature. When compared with $G(T,n_e)_{174}$, the contribution function varies more strongly with density and temperature than with magnetic field.
The bottom row shows the impact of including the \fex~257~\AA\ blend in the contribution functions: the ratio with the 174~\AA\ line is now less sensitive to the magnetic field. Note that we use photon units, in contrast to \citet{Chen:2021ApJ...920..116C} who used energy units. These differences between the various contribution functions need to be considered when they are used for density or magnetic field diagnostics. \section{Models} \label{sec:sim} The current work uses a flux emergence simulation computed with the Bifrost code and an AR simulation computed with MURaM. The results from the MURaM simulation are collected and shown in the appendix. The Bifrost code \citep{Gudiksen:2011qy} is run with a plasma of solar photospheric abundance \citep{Asplund:2009uq} obeying an equation of state in thermodynamic equilibrium. Therefore, the temperature, pressure, ionization degree, and opacities at all points are computed (via a lookup table) from the density and internal energy. Radiative losses are given by optically thick radiation, including scattering, in four spectral bins, while radiative losses (and gains) in the upper chromosphere are treated according to \cite{Carlsson:2012uq}. In the corona, optically thin losses are computed. In addition, thermal conduction along the magnetic field is included. To maintain an effective temperature close to that of the Sun, the entropy of inflowing gas is set at the bottom boundary, as are the strength and direction of the horizontal magnetic field, as described below. The upper boundary in the corona uses characteristic methods to remain nearly transparent, while the temperature gradient is set to zero. The horizontal boundaries are periodic. The flux emergence simulation covers a computational domain of $72\times 72\times 60$~Mm$^3$, the vertical extent of which goes from the bottom boundary 8.5~Mm below the photosphere to the upper boundary 52~Mm above.
This computational box is covered by $720\times720\times 1120$ grid points, giving a horizontal resolution of 100~km and a varying vertical resolution: 20~km in the photosphere, chromosphere, and transition region, stretching to 100~km in the deeper layers of the convection zone and the corona. The simulation starts with an initial horizontal field of 100~G up to the photosphere and close to 0~G in the overlying atmosphere. The simulation evolved for several hours with a horizontal magnetic field (a sheet) of 200~G injected at the bottom boundary for the first 95~minutes, which is ramped up to 1000~G for the next 70~minutes, then again to 2000~G for another 150~minutes. Thereafter a horizontal field of 300~G is injected continuously. The simulation has run for 8~hours of solar time at the moment of the snapshot considered in this study. When magnetic flux becomes strong enough to pierce the photosphere and rise into the upper solar atmosphere, it carries with it plasma of roughly photospheric temperatures. This cool gas eventually drains back through the chromosphere to the photosphere as the field continues to rise, but this can take significant time: thermal conduction is very inefficient perpendicular to the field, preventing the plasma from heating to coronal temperatures, while at the same time the nearly horizontal field prevents a rapid fall. Thus, at times when there is significant emergence of magnetic field, we expect, and find in the model, the corona to contain cool gas with temperatures of order $10^4$~K. This may be the case in emerging flux regions on the Sun, as found in the model used here. More generally, the total amount of cool gas found in the corona is not well known. A full description of this model and its evolution is in preparation \citep{Hansteen:2022prep}.
In Figure~\ref{fig:bifrost_field}, we show the vertical ($B_z$) component of the magnetic field in the photosphere along with the emissivity of the \fex~174.35~\AA\ line, colored by the strength of the magnetic field. Low intensity regions are made transparent. The emissivity is multiplied by $\exp(-\tau)$, where the opacity, caused by neutral hydrogen, helium, and singly ionized helium, stems from high-lying cool gas in part carried aloft by flux emergence. The largest vertical field strengths in the photosphere are of order 2000~G or higher, while the mean strength is of order 80~G. At the height of \fex\ emission, 1.5~Mm above the photosphere and higher, the maximum (total) field strengths have fallen to roughly 300~G. While we believe that this is a fairly typical coronal field strength for an emerging ephemeral region, it is towards the low end of what is possible to measure with the technique described here, as evidenced by Figure~\ref{fig:gbtne}. We will therefore artificially increase this field strength by a factor of six in much of the analysis that follows. \section{Results} \label{sec:res} \subsection{Intensities, MIT and absorption}\label{sec:int} We have computed the synthetic intensities of several \fex\ lines from the two simulation snapshots considered. As mentioned above, we will focus on the flux emergence simulation in the main text, while the equivalent figures for the MURaM simulation can be found in Appendix~\ref{sec:appendix}. Figure~\ref{fig:inta} shows the \fex~174, 257.259, 257.261, and 345~\AA\ intensities. The 257.261~\AA\ line has been computed assuming $B=0$ in panels c, h, and m, while in panels d, i, and n, we set a field strength six times larger than that found in the model to enhance the effects of the MIT. In the middle row, we also consider absorption.
The intensities of the various lines differ in some cases by more than one order of magnitude. The \fex~174~\AA\ line is the strongest emitter. Note also that, for the high density model presented here, which has flux emergence, the \fex~257.261~\AA\ intensity is lower than that of 257.259~\AA\ (see also panels b and c in Fig.~\ref{fig:gtne}). For solar conditions at lower coronal densities the relative contributions of these two lines may be significantly different. A close look at the third and fourth columns of Figure~\ref{fig:inta} shows that the 257.261~\AA\ line is brighter in some locations if the MIT is included. However, the absorption also leads to significant changes in intensity for all lines to varying degrees. To clarify the role of absorption from neutral cold gas, we compute the ratio of the line intensities with and without including absorption, i.e., we present the ratio of the top and middle rows of Figure~\ref{fig:inta}, shown in the bottom row. As expected, the longer the wavelength, the greater the difference between the intensity with and without absorption. This wavelength dependence comes about because the opacity scales as $\lambda^3$ (see Section~\ref{sec:syn}). It is also important to note that the opacity has contributions from neutral and singly ionized helium for spectral lines below 504~\AA\ and 228~\AA, respectively \citep{Anzer:2005ye,Rumph:1994qo}. This simulation, which has a significant amount of cool gas carried aloft as a result of emerging flux, has large opacities and hence absorption in all lines for extended regions. Still, other regions have very little or zero absorption. A central question is thus whether it is possible to discern which intensity variations are due to density, temperature, or magnetic field vs. which are due to absorption, or at least to find in what regions absorption is important.
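The wavelength and species dependence of the opacity just described can be made concrete with a small sketch: each absorber contributes only shortward of its ionisation edge (neutral hydrogen 912~\AA, neutral helium 504~\AA, singly ionized helium 228~\AA), with a rough hydrogenic $\lambda^3$ scaling of the cross-section. The normalisation constants below are placeholders, not the actual \citet{Anzer:2005ye} coefficients:

```python
# Toy sketch of the bound-free opacity's wavelength dependence: a species
# absorbs only shortward of its ionisation edge, and the cross-section
# follows a rough hydrogenic lambda^3 scaling. Relative normalisations
# are placeholders, NOT the Anzer & Heinzel recipe coefficients.
EDGES = {"HI": 912.0, "HeI": 504.0, "HeII": 228.0}  # edge wavelengths [Angstrom]

def bf_opacity(lam, columns):
    """lam: wavelength [Angstrom]; columns: relative column densities per species."""
    alpha = 0.0
    for species, edge in EDGES.items():
        if lam < edge:
            alpha += columns.get(species, 0.0) * (lam / edge) ** 3
    return alpha
```

With equal columns, a 345~\AA\ photon is absorbed by neutral hydrogen and neutral helium but not singly ionized helium, whereas a 174~\AA\ photon sees all three species; within a given species the opacity grows towards longer wavelengths, consistent with the $\lambda^3$ behaviour noted above.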
Existing methods to derive the coronal magnetic field rely on the comparison of different spectral lines with varying sensitivity to the density and temperature to obtain an estimate of these thermodynamic parameters in the emitting plasma \citep{Landi:2020ApJ...904...87L,Chen:2021ApJ...918L..13C,Brooks:2021ApJ...915L..24B}, and thereafter to use these values when computing the magnetic field. To illustrate the complexity of this task, we compare the \fex~257 lines with the 174~\AA\ line (Fig.~\ref{fig:ratlines}). The variation of the ratio of \fex~257.259 and 174~\AA\ (panel a) is due to the different density and temperature sensitivity of the contribution functions of the two lines. Absorption by cool gas also impacts the ratio of these two lines (panel f). The \fex~257.261/174 line ratio (panel b) reveals a much smaller temperature and density variation, since the temperature and density dependence of the \fex~257.261 and 174 lines is very similar (Fig.~\ref{fig:gtne}). However, the MIT effect (panel c) is relatively small, so it is critical to estimate the densities correctly in order to measure the magnetic field accurately. Unfortunately, absorption can play a large role when comparing \fex~257.261 and 174 (panels g and e). With the densities found in this model, the dominant contribution to \fex~257 comes from the \fex~257.259 line, which highlights that the density sensitivity and absorption by cool gas play a large role when comparing these two lines. We also notice that the absorption will change the height from which the emission predominantly originates. This has an impact on the role of the MIT effect, since different heights will have different magnetic field strengths. In the following we present an inversion technique that may help us determine where the data are not affected by absorption and retrieval of the magnetic field is possible (Section~\ref{sec:demb}).
\subsection{Inversions: DEM$(T, n_e, B)$ }~\label{sec:demb} Based on the same idea described in \citet{Cheung:2019ApJ...882...13C}, we suggest a new approach to derive the magnetic field by using the multi-dimensional sensitivity (temperature, density, and magnetic field) of the spectral lines. The idea is to invert the differential emission measure (DEM) from the observations, which allows us to constrain the temperature, density, and magnetic field strength. However, in contrast to previous efforts, we will take into account in the contribution function both the density and magnetic field dependence ($G_\lambda(T, n_e, B)$). In this case, the spectral line intensity ($I[\lambda]$) can be defined as follows: \begin{eqnarray} I[\lambda] = \int_{T_0}^{T_1} \int_{n_{e0}}^{n_{e1}} \int_{B_0}^{B_1} G_\lambda\, \text{DEM}(T, n_e, B)\, dT\, dn_e\, dB \end{eqnarray} \noindent where $\lambda$ is the wavelength of the spectral line, and DEM$(T, n_e, B)= \int n_e\, n_H\, dl$ is the differential emission measure for each temperature, number density, and magnetic field value. The aim then becomes to invert the DEM given a set of spectral lines $I[\lambda]$ in order to find a unique solution for $T$, $n_e$, and $B$ \citep[e.g.,][]{Cheung:2015ApJ...807..143C}. Ideally, one would also like to determine the amount of absorption as part of the inversion approach. However, this would add significant complexity in addition to expanding the number of free parameters in the inversion, thus complicating the convergence to the correct solutions. This is beyond the scope of the current work; however, we will demonstrate that our approach can identify regions in which there is significant bound-free absorption. We will use the same strategy as in \citet{Cheung:2019ApJ...882...13C,Winebarger:2019ApJ...882...12W,DePontieu:2020ApJ...888....3D}, which solves the linear system with sparse solutions coming from the Lasso method \citep{Tibshirani:lasso}.
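As a schematic of this sparse inversion step, the following numpy-only sketch solves the non-negative L1-regularised least-squares problem with proximal-gradient (ISTA) iterations. The actual analysis uses the Lasso solver from scikit-learn; the kernel matrix here is a hypothetical toy mapping from flattened $(T, n_e, B)$ bins to line intensities, chosen only for illustration:

```python
import numpy as np

# Toy stand-in for the sparse DEM inversion: minimise
#   0.5 * ||K x - I||^2 + alpha * ||x||_1   subject to x >= 0,
# where x is the DEM flattened over (T, n_e, B) bins and K holds the
# contribution functions. ISTA (proximal gradient) replaces the
# scikit-learn Lasso used in the paper, to keep this self-contained.
def invert_dem(K, I, alpha=1e-6, n_iter=10000):
    step = 1.0 / np.linalg.norm(K, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        x = x - step * (K.T @ (K @ x - I))       # gradient step
        x = np.maximum(x - step * alpha, 0.0)    # soft-threshold, keep x >= 0
    return x

# A tiny, well-conditioned toy kernel: 5 "lines" x 4 DEM bins.
K = np.array([[1.0, 0.2, 0.1, 0.0],
              [0.1, 1.0, 0.2, 0.1],
              [0.0, 0.1, 1.0, 0.2],
              [0.2, 0.0, 0.1, 1.0],
              [0.3, 0.3, 0.0, 1.0]])
dem_true = np.array([0.0, 3.0, 0.0, 1.0])        # sparse ground truth
dem_inv = invert_dem(K, K @ dem_true)
```

For a small $\alpha$ the sparse ground truth is recovered almost exactly; increasing $\alpha$ drives more bins to zero, mirroring the sparsity control by the hyperparameter described below.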
Lasso is implemented in the Python scikit-learn module \citep{Pedregosa:scikit-learn}. See \citet{Cheung:2019ApJ...882...13C} for further details. For the inversion, we limit the temperature range to $\log{(T (K))} = [5.7, 6.3]$ to reduce the number of possible solutions as well as to enable reasonable array sizes. Note that the DEM will have five dimensions, i.e., two spatial dimensions in addition to $(T, n_e, B)$. We found that including the \fex~174, 175, 184, 257 (blended), and 345~\AA\ lines provides reasonable results, thanks in part to the different temperature sensitivities of these lines, whose excitation thresholds are very different. Furthermore, this technique allows adjusting the weight of the spectral lines of interest. In the present case, we have increased the weight accorded to the blended 257~\AA\ lines (and, accordingly, their contribution function) by a factor of ten. This ensures that the best fits will be found for the \fex~257 lines. Note that this line is the only one with potentially large magnetic field sensitivity. Finally, this inversion is controlled with a hyperparameter ($\alpha$) that controls the sparsity or entropy of the solutions. The greater the value of $\alpha$, the smaller the L1 norm and thus the sparser the solution \citep{Tibshirani:lasso,Pedregosa:scikit-learn}. In our case, we use a value that is as low as possible while avoiding artifacts from overfitting: $\alpha \sim 10^{-2}$. A nice feature of this technique is that one can form an estimate of how good the solutions are by comparing the real intensities with the intensities synthesized from the inverted DEM. The case without absorption is shown in Fig.~\ref{fig:inv_syn}. Since we have placed a greater weight on \fex~257, the best fit is found for that line (panel l), whereas the spectral lines with lower weights in the inversion have larger errors.
The right column of Fig.~\ref{fig:inv_syn} is the relative difference between the two left columns ($(I_{inv}-I_{syn})/I_{syn}$) and indicates where the derived magnetic field may not be accurate (see below). We note that the Lasso method gives higher weight to fits with sparse solutions that have a minimum DEM. This results in slightly lower intensities in general, and therefore the right column is slightly red. With absorption included, the inversion does not find good solutions in those locations where absorption is important. We have used five spectral lines to derive the magnetic field, density, and temperature, and to discern where the absorption may cause problems in doing so. In Fig.~\ref{fig:inv_synabs}, the same layout as in Fig.~\ref{fig:inv_syn} is used, but now for the case where absorption by cool gas is included. The last column shows the ratio between the intensity of the corresponding spectral line with and without absorption included. The third column shows an extremely good match with the rightmost panel for the spectral line with the strongest absorption, as seen in panel t. In other words, the inversion does not find solutions in the DEM space that match the intensities of all spectral lines simultaneously. The resulting synthesis from the inverted DEM does not match the original synthesis of the spectral lines. The largest absorption is for the \fex~345~\AA\ line, and this is why the errors match panel t. From the DEM$(T, n_e, B)$ we estimate the average magnetic field along the LOS by weighting with the DEM and the contribution function ($G_{\lambda}(T, n_e, B)$) for each temperature and density bin as follows: \begin{eqnarray} B_{i} = \frac {\int_{T_0}^{T_1} \int_{n_{e0}}^{n_{e1}} \int_{B_0}^{B_1} B\, \text{DEM}\, G_{\lambda}\, dT\, dn_e\, dB }{\int_{T_0}^{T_1} \int_{n_{e0}}^{n_{e1}} \int_{B_0}^{B_1} \text{DEM}\, G_{\lambda}\, dT\, dn_e\, dB } \label{eq:modb_mit} \end{eqnarray} \noindent Note that this could similarly be done for density or temperature diagnostics.
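Given an inverted DEM cube, the weighting in Equation~\ref{eq:modb_mit} reduces to a weighted average over the magnetic field bins. A minimal numpy sketch, with hypothetical DEM and contribution-function cubes over $(T, n_e, B)$ bins, is:

```python
import numpy as np

# Sketch of the DEM-weighted field estimate: B_i is the mean of the
# bin-centre field values, weighted by DEM(T, n_e, B) * G(T, n_e, B).
# dem and G are hypothetical 3D cubes over (T, n_e, B) bins; B_bins
# holds the field bin centres in Gauss.
def weighted_field(dem, G, B_bins):
    w = dem * G                                  # weight per (T, n_e, B) bin
    return np.sum(w * B_bins[None, None, :]) / np.sum(w)
```

If the DEM is concentrated in a single field bin, the estimate returns that bin's value; weight spread across several bins yields the corresponding average, which is why multiple field components along the LOS blur into an intermediate value in this quantity.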
To compare with the simulation, we compute the magnetic field in the model as follows: \begin{eqnarray} \text{EM}_{s} = n_{e_s}\, n_{H_s}\, G_{\lambda}(T_{s},n_{e_s},|B_{s}|); \\ B_{s} = \frac {\int B_{s}\, \text{EM}_{s} dl }{\int \text{EM}_{s} dl}~\label{eq:emmag} \end{eqnarray} \noindent where the subscript $s$ refers to values from the simulation and $i$ refers to the inversions. The resulting maps of the inverted magnetic field ($B_i$) without (panel c) and with absorption (panels e and f), and the field in the numerical model ($B_s$, panel b), are shown in Figure~\ref{fig:mag_maps}. The absolute error is shown without and with absorption in the top row (panel d includes the mask from panel s in Figure~\ref{fig:inv_synabs}), and the relative error is shown in panels h (with mask and absorption) and i (without absorption). Without absorption (panel c), most of the magnetic field features are well resolved. However, the strongest magnetic fields in the inverted data underestimate those found in the model. In contrast, the inverted magnetic field with absorption is filled with artifacts due to this absorption (panel f). Fortunately, our method allows the identification of such regions, making it possible to mask out regions strongly affected by absorbing gas (panels d, e, and h) and thereby select the areas clean from absorption. The absolute error indicates that high values of the magnetic field are underestimated by up to a few hundred Gauss (red) and some weak values are overestimated by up to several tens of Gauss (blue). The relative error panel shows that this error becomes smaller for stronger magnetic fields. See also below for more details. Indeed, a set of 2D histograms reveals the quality of the correlation between $B_s$ and $B_i$. Figure~\ref{fig:mag_hist} shows that $B_i$ is underestimated at high values.
\citet{Chen:2021ApJ...918L..13C} suggested that their under- (or over-) estimates may come from uncertainties in the temperature diagnostic. Most likely this is not the case for our study, since our method also takes into account temperature variations. Another possible scenario could explain the underestimated values: as mentioned above, the Lasso method gives higher weight to fits with sparse solutions that have a minimum DEM. Those solutions tend to be biased to lower magnetic field strengths. We see the same trend for the density parameter, but not for the temperature (not shown here). If this error is systematic across many realizations, one may be able to correct for such underestimates (see also appendix). This might depend on the lines selected for inversion. Further investigation is needed to determine whether such a systematic correction is possible. The 2D histograms and the absolute standard deviation (middle row) as a function of magnetic field show that the deviation ranges from a few tens of Gauss at low magnetic field values up to 200~G, for regions without absorption. The relative error is large (above 40\%) for magnetic field values lower than 250~G, and only above that value does the relative error decrease to 18\%. Regions with significant absorption experience larger errors. The inversion could allow an investigation of the DEM as a function of temperature, density, or, for this work, the magnetic field, in a similar fashion as is done when estimating the DEM as a function of temperature using AIA data. The temperature coverage of the selected lines we study here is of course very limited. In addition, determining the density from the DEM may also work, but this extension is outside the scope of this paper. Figure~\ref{fig:mag_mult_comp} shows that in some places, the DEM$(B)$, i.e.
integrated over temperature and density, can identify multiple magnetic field components which correspond to different structures along the LOS, e.g., around $x=[30,35]$~Mm. In this figure, we degraded the magnetic field bins to 50~G, since the standard deviations shown in Figure~\ref{fig:mag_maps} are no better than this. We do not expect to achieve better accuracy for the DEM$(B)$ than for the full integration done in Equation~\ref{eq:modb_mit}. While the values are slightly inaccurate, the inversion appears to do a good job of revealing multiple components in some locations, and thus could provide, in principle, more observables than the least-squares methods or line intensity ratios used in earlier studies, with the potential of revealing multiple magnetic field structures along the LOS. In summary, the bound-free absorption by cool plasma at coronal heights has been ignored in previous studies. We have investigated the MIT effect taking this absorption into account and provide a new technique to derive the magnetic field. We find that absorption can locally have a major impact on the determination of the magnetic field. Our new method allows us to identify regions with significant absorption, providing more confidence in the magnetic field derived in absorption-free locations, whether using line intensity ratios, least-squares methods, or the one presented here. \section{Conclusions and discussion}\label{sec:con} This study has utilized two 3D radiative MHD numerical models characterized by two very different magnetic field configurations. The first mimics an emerging ephemeral region using the Bifrost code (main text). The second is a simulated AR containing the early stages of new flux emergence into the existing active region configuration (Appendix~\ref{sec:appendix}). We synthesize the emission of highly ionized iron atoms in the EUV to further understand the potential for diagnostics from the magnetically sensitive \fex~257~\AA\ line.
In total, we have synthesized \fex~174, 175, 177, 184, 257 (the two blended lines), and 345~\AA, taking into account their temperature, density, and magnetic field dependence. In addition to modeling the effects of the local state of the emitting plasma, we consider the possible effects of the presence of significant amounts of cool gas lying above the emitting region and synthesize spectra according to two different scenarios: with and without including bound-free absorption by neutral hydrogen and helium, and singly ionized helium. Our analysis of the synthesized spectral lines reveals that the absorption can cause considerable difficulties in estimating both the density and the magnetic field from the various lines. Furthermore, for the two models, we found that the relative intensity variation of \fex~257~\AA\ in comparison to \fex~174 due to the magnetic field is comparable to or even less important than the variation due to absorption in those locations with overlying cold plasma. In the second part of this study, we describe a novel inversion technique to derive the magnetic field, density and temperature. This inversion technique has earlier been used to derive the DEM; for instance from AIA observations \citep{Cheung:2015ApJ...807..143C}, or to disambiguate the spectrum from multi-slit observations or slitless observations, e.g., MUSE \citep{Cheung:2019ApJ...882...13C} and COSIE \citep{Winebarger:2019ApJ...882...12W}. The novelty in this study is that we consider the temperature, density, and for \fex~257~\AA\ also the magnetic field dependence of the contribution function. So, in principle, this approach could retrieve the magnetic field as well as the temperature and density. Another interesting diagnostic that this tool may be able to provide is the product of the filling factor with the column depth for each temperature, magnetic field, and/or density bin within the FOV. 
This is possible because the inversion has derived the DEM and the number density for each bin. This aspect requires further investigation in future work. When absorption is not included in the analysis, our results are similar to those reported by \cite{Chen:2021ApJ...918L..13C}. We estimate that the accuracy we can achieve with this method is 50-200~G in locations without significant absorption. The absolute error is smallest at low magnetic field values and rises to 200~G at high magnetic field values. Consequently, we obtain reasonable relative errors above 250~G. We also noticed that the MURaM simulation shows a slower improvement in the relative error with field strength between 250 and 600~G. These slightly worse errors seem to come from regions with an overestimate of the magnetic field. The LOS analysis of the DEM(B) seems to indicate that these regions have more than one component of the magnetic field, one of which is nearly zero Gauss. These estimates are less optimistic than those provided by \citet{Landi:2020ApJ...904...87L}, who carried out an uncertainty analysis and determined that the minimum detectable magnetic field in active regions was of the order of 50~G. Unfortunately, for observations the derived values cannot be compared with the actual ones, as we did for the numerical models. Note also that we used a different line list. It is unclear what typical magnetic field values in the corona are. Based on the simulations we have used here, only a modest fraction of the field of view shows magnetic field values stronger than this sensitivity limit. In fact, for the flux emergence simulation we had to increase the coronal magnetic field by a factor of six in order to obtain a measurable range of field values. Note that this may limit the applicability of the MIT method, whether using the current method, the least-squares methods, or the line ratio methods, to regions with strong coronal magnetic fields, such as footpoints, the lower legs of AR loops, loops associated with sunspots, etc. 
The least-squares method used in previous studies \citep[e.g.,][]{Chen:2021ApJ...918L..13C} also underestimates the magnetic field for a specific combination of spectral lines. We noticed that in our method the inversion can provide better results when the weights on the intensities are varied. In this way we can steer the inversion to better fit the lines of interest. In this work, we have found the desired weights by trial and error. However, finding the best weights can probably be automated and fine-tuned; this is something that can be investigated in the future. Another way to improve the inversions would be to iterate them, first starting with low resolution and large ranges for temperature, density, and magnetic field. The output of this first pass can then be used to trim the temperature, density, and magnetic field ranges and thereby reduce their bin sizes, thus improving accuracy. Finally, the sparsity of the solution can be controlled with the hyperparameter $\alpha$. This parameter may need to be adjusted when applying this method to different observations. Such a detailed study will be the subject of a follow-up paper. This technique provides estimates that are similar to those of \cite{Chen:2021ApJ...918L..13C}, with the addition of being able to mask regions with absorption and with the potential of discerning multiple magnetic field components. In addition, our method self-consistently and simultaneously solves for the density, temperature, and magnetic field sensitivity of these spectral lines. We note that the Lasso method gives higher weight to fits with sparse solutions that have a minimum DEM: this results in an underestimate of the density and magnetic field for large values of the density and magnetic field, respectively. This error appears systematic and could probably be corrected, but further studies are necessary; we expect that the accuracy of our results will also depend on the spectral lines available for inversion. 
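As an illustration of the sparsity-regularized, non-negative inversion and the intensity weights discussed above, the following sketch solves a toy version of the problem. Everything here is invented for the example: the random response matrix stands in for contribution functions tabulated on a $(T, n_e, B)$ grid, and the simple coordinate-descent solver stands in for our actual implementation.

```python
import numpy as np

def nonneg_lasso(K, y, alpha, n_sweeps=1000):
    """Coordinate descent for: min_{w >= 0} 0.5*||y - K w||^2 + alpha*sum(w).

    Because w >= 0, the L1 penalty reduces to alpha*sum(w); larger alpha
    favors sparser DEM solutions, mirroring the hyperparameter in the text.
    """
    w = np.zeros(K.shape[1])
    col_sq = (K ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(K.shape[1]):
            resid_j = y - K @ w + K[:, j] * w[j]  # residual excluding bin j
            w[j] = max(K[:, j] @ resid_j - alpha, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(1)
n_lines, n_bins = 6, 40

# Hypothetical response matrix: one row per spectral line, one column per
# (T, n_e, B) bin; in the real problem each entry is a contribution function.
K = rng.uniform(0.1, 1.0, size=(n_lines, n_bins))

dem_true = np.zeros(n_bins)
dem_true[[5, 22]] = [3.0, 1.5]  # a sparse "true" DEM populating two bins
intensities = K @ dem_true

# Per-line weights emphasize the lines of interest (applied as row scalings).
weights = np.sqrt(np.array([1.0, 1.0, 1.0, 1.0, 2.0, 2.0]))
dem_est = nonneg_lasso(weights[:, None] * K, weights * intensities, alpha=1e-3)

print("recovered nonzero bins:", int((dem_est > 1e-3).sum()))
```

The shrinkage applied by the penalty is also a minimal illustration of why such fits are biased low for large DEM values, the systematic underestimate noted above.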
Similar errors have been found in previous studies using least-squares methods, but the systematic error in those cases may have a different origin \citep[see][for details]{Chen:2021ApJ...918L..13C}. In principle, one could expand the parameter range of the inversion and include velocities, as has been outlined for the disambiguation of MUSE spectra \citep{Cheung:2019ApJ...882...13C,DePontieu:2020ApJ...888....3D}. This would allow one to further isolate structures along the LOS, with temperature, density, and magnetic field bins as well as velocity bins. The NLFFF method provides similar error estimates \citep[of the order of 20\%, e.g.,][]{Metcalf:2008bd,De-Rosa:2009rq}. So for fields stronger than 250~G, the MIT method is a good complement for constraining the magnetic field. The main advantage of our method is that it provides a powerful tool to mask regions that may suffer absorption. The reason is that once there is absorption, the inversion will not find reasonable solutions: in these locations, the observed intensities and those predicted by the forward synthesis with large absorption do not match. Similarly, one would expect that this sort of mismatch could also be used to flag regions suffering significant non-equilibrium effects or instrumental artifacts, e.g., discrepancies in the absolute calibration. In order to expand the parameter range of the analysis of the spectral line properties and methods, we used two very different simulations. The first has actively emerging flux with a relatively large amount of absorption. We found that for this scenario the coronal magnetic field is too weak, and we had to increase it by a factor of 6 to get enough signal from the MIT effect. The second has lower spatial resolution, less absorption, and a stronger magnetic field. We notice that for both regions the systematic error in the derived magnetic field (an underestimate) is the same, supporting the possibility that it can be corrected. 
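The coarse-to-fine looping of the inversions suggested earlier can be sketched as follows. This is hypothetical: invert stands for any inversion run on the current grid (e.g., the sparse fit), the refinement simply keeps the bins containing most of the recovered DEM, and all ranges and bin counts are illustrative.

```python
import numpy as np

def refine_range(grid, dem, pad=1, frac=0.1):
    """Shrink a parameter grid to the bins holding most of the recovered DEM."""
    idx = np.nonzero(dem > frac * dem.max())[0]
    lo = max(idx.min() - pad, 0)
    hi = min(idx.max() + pad, len(grid) - 1)
    return grid[lo], grid[hi]

def coarse_to_fine(invert, lo, hi, n_bins=10, n_loops=3):
    """Loop: invert on the current grid, then trim the range for the next pass.

    With n_bins fixed, each pass shrinks the bin width, improving resolution
    where the DEM actually lives.
    """
    for _ in range(n_loops):
        grid = np.linspace(lo, hi, n_bins)
        dem = invert(grid)  # user-supplied inversion on this grid
        lo, hi = refine_range(grid, dem)
    return grid, dem

# Example: a dummy "inversion" peaked near 250 G narrows the grid around it.
grid, dem = coarse_to_fine(lambda g: np.exp(-((g - 250.0) / 50.0) ** 2),
                           0.0, 1000.0)
print(grid[0], grid[-1])
```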
This method would be of great interest for Hinode/EIS observations. However, Hinode/EIS does not include \fex~345. Still, obtaining estimates of the coronal magnetic field strength should be possible, as already shown by \cite{Brooks:2021ApJ...915L..24B} using line intensity ratios. The method will be of even greater value for measuring coronal magnetic fields with upcoming EUVST observations, which will provide complementary iron lines to further constrain the DEMs. \appendix \section{Radiative transfer}~\label{sec:appendix_rad} The contribution function ($G_\lambda (T, n_e, B)$) used in eq.~\ref{eq:int} is defined as \begin{equation} G_\lambda (T, n_e, B) = \frac{N_j{\left({X^{+m}}\right)}}{N{\left({X^{+m}}\right)}}\frac{N{\left({X^{+m}}\right)}}{N{\left({X}\right)}}\frac{N{\left({X}\right)}}{N{\left({H}\right)}}\frac{N{\left({H}\right)}}{n_e}\frac{A_{ji}}{n_e}h\nu_{ji} \end{equation} \noindent where $N_j{\left({X^{+m}}\right)}/N{\left({X^{+m}}\right)}$ is the upper-level population, which can depend on electron density and temperature and, in the case of the level giving rise to the \fex~257.261~\AA\ line, also on the magnetic field strength $B$, as described by \citet{Li:2015ApJ...807...69L,Li:2016ApJ...826..219L}. $N{\left({X^{+m}}\right)}/N{\left({X}\right)}$ is the ion fraction (strongly dependent on the electron temperature), $N{\left({X}\right)}/N{\left({H}\right)}$ is the element abundance relative to hydrogen, $N{\left({H}\right)}/n_e$ is the ratio between the total H density and the free electron density, and $A_{ji}$ is the Einstein coefficient for spontaneous emission for the transition between upper level $j$ and lower level $i$, with frequency $\nu_{ji}$ \citep{Phillips:2008uxss.book.....P}. The bound-free absorption used in this work is by neutral hydrogen and helium, as well as singly ionized helium, following the recipes of \citet{Anzer:2005ye}. 
In short, \begin{eqnarray} \tau & = & n_H \{ (1-F_{HI})\sigma_H + \nonumber \\ & & A_{He} [(1-F_{HeI}-F_{HeII})\sigma_{HeI} + F_{HeI}\sigma_{HeII}]\} \end{eqnarray} \noindent where $n_H$ is the number density of hydrogen, $A_{He}$ the helium abundance, and $F_{HI}$, $F_{HeI}$, and $F_{HeII}$ are the ionization fractions for hydrogen and helium. The photoionization cross sections can be found in \citet{Mihalas:1978stat.book.....M} as follows: \begin{eqnarray} \sigma_H(\lambda) = \sigma_0 g_H(\lambda) (\lambda/912)^3 \\ \sigma_{HeII}(\lambda) = 16\sigma_0 g_{HeII}(\lambda) (\lambda/912)^3 \\ \log_{10}{\sigma_{HeI}(\lambda)} = \sum_i^7 c_i \log_{10}(\lambda)^i \end{eqnarray} \noindent see \citet{Anzer:2005ye} for further details; we used the coefficients $c_i$ from \citet{Rumph:1994qo}. We highlight the dependence of the cross sections on $\lambda$, which yields a different absorption coefficient for each spectral line. The CHIANTI v.10 model for \fex\ includes 552 fine structure levels, which allow users to account for the effects of cascades. Collisional excitation rates are taken from the R-Matrix calculations of \citet{DelZanna:2021ApJ...909...38D}, while the Einstein coefficients come from \citet{Wang:2020ApJS..246....1W} for the $n=3$ levels and \citet{DelZanna:2021ApJ...909...38D} for the $n=4$ levels. In general, the accuracy of predicted contribution functions depends critically on the accuracy of two sets of data: 1) the ionization and recombination data used to calculate ion fractions, and 2) the atomic data, transition rates, and electron-ion collisional excitation rates used to calculate level populations. In the case of \fex, the CHIANTI model has evolved considerably since the first release, and both sets of data have been improved multiple times. 
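As a numerical sketch of the absorption recipe above (illustrative only: $\sigma_0 = 6.30\times10^{-18}$~cm$^2$ is the hydrogen threshold cross section, the Gaunt factors are set to unity, and the HeI polynomial fit is replaced by a crude hydrogenic stand-in, so the absolute numbers are only indicative):

```python
SIGMA0 = 6.30e-18  # cm^2: H photoionization cross section at 912 A (threshold)

def tau_bound_free(wav, N_H, f_HI, f_HeI, f_HeII, A_He=0.1):
    """Bound-free optical depth for a hydrogen column N_H (cm^-2).

    Follows the structure of the equation above with Gaunt factors set to 1;
    sigma_HeI uses a rough stand-in instead of the Rumph et al. (1994)
    polynomial fit, so treat the output as indicative only.
    """
    scale = (wav / 912.0) ** 3
    sigma_H = SIGMA0 * scale             # neutral hydrogen
    sigma_HeII = 16.0 * SIGMA0 * scale   # singly ionized helium (hydrogenic)
    sigma_HeI = 7.0 * SIGMA0 * scale     # crude stand-in for the HeI fit
    return N_H * ((1.0 - f_HI) * sigma_H
                  + A_He * ((1.0 - f_HeI - f_HeII) * sigma_HeI
                            + f_HeI * sigma_HeII))

# The lambda^3 scaling makes the absorption stronger at 257 A than at 174 A.
t174 = tau_bound_free(174.0, 1e19, f_HI=0.5, f_HeI=0.3, f_HeII=0.1)
t257 = tau_bound_free(257.0, 1e19, f_HI=0.5, f_HeI=0.3, f_HeII=0.1)
print(t257 / t174)  # ratio equals (257/174)**3 ~ 3.22, independent of column
```

This wavelength dependence is why each synthesized line in our study carries its own absorption coefficient.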
Level populations have changed by a factor of 2 from the first version of CHIANTI in 1997 to the latest one; improvements have been substantial, but the last three CHIANTI versions (V8, V9, and V10) show negligible differences, even though the radiative transition probability of the E1 $3s^23p^5~^2P_{1.5} - 3s^23p^4(^3P)3d~^4D_{2.5}$ transition at 257.259~\AA\ changed by a factor of 2 between V8 and V10. The reason is that radiative decay through the E1 257.259~\AA\ line is the only decay avenue available for the $^4D_{2.5}$ level, so that the overall line intensity did not change; thus, we expect no change in the E1 transition if the more recent decay rate of \citet{Li:2021ApJ...913..135L} is used in place of the \citet{Wang:2020ApJS..246....1W} value. The largest source of uncertainty at the moment is the ion fractions, as the temperature of maximum abundance has shifted towards higher temperatures, resulting in temperature-dependent differences of up to a factor of 3; however, this change does not affect the MIT technique, which is entirely based on intensity ratios of lines emitted by \fex\ only. \section{Collection of figures for the MURaM AR simulation}~\label{sec:appendix} In the main text we show a case where a significant amount of cold plasma is lifted to coronal heights due to flux emergence. The simulation may be similar in nature to an ephemeral region or a newly forming active region. Other regions may experience similar or even more absorption, e.g., in UV bursts, moss, coronal rain, or limb observations, to name a few. Here, we consider a mature AR in which the cool loops have had enough time to drain and which is characterized by long, hot loops. This simulation has been performed with the MURaM code. 
\subsection{Intensities, MIT and absorption} As for the previous model, the comparison between the different spectral lines becomes a multi-dimensional problem that depends on temperature, density, and magnetic field, as well as absorption (Figures~\ref{fig:inta_hgcr} and~\ref{fig:ratlines_hgcr}). This model is not as dominated by flux emergence, but does include some localized flux emergence (top-right). It also shows some 1~MK emission in loop footpoints (``moss'') with some absorption, but this is reduced compared to the extensive presence of moss in the solar atmosphere \citep{DePontieu:2009ApJ...702.1016D}. This may be because there are fewer chromospheric jets in this simulation, owing to its low spatial resolution. The jets that are present lift less cold plasma and thus cause less absorption at the footpoints of the hot loops (Fig.~\ref{fig:inta_hgcr}). Large extended loops have negligible absorption, whereas low-lying loops suffer from absorption. This model corresponds to an AR and has a magnetic field that is more typical for the corona, so we did not increase the magnetic field. \subsection{Inversions: DEM$(T, n_e, B)$} The inversion does a good job of reproducing the observables (Fig.~\ref{fig:inv_syn_hgcr}). Similarly, this technique is able to reveal where the absorption occurs, allowing us to mask those locations in the inversion (Fig.~\ref{fig:inv_synabs_hgcr}). The derived magnetic field is consistent with that from the ephemeral-region simulation from Bifrost (Figs.~\ref{fig:mag_maps_hgcr} and~\ref{fig:mag_hist_hgcr}). Likewise, the derived magnetic field underestimates the real magnetic field, supporting the possibility of applying a systematic correction. Note that the DEM analysis in Fig.~\ref{fig:mag_mult_comp_hgcr} reveals that it could be used to separate different magnetic field components along the LOS. 
It is also worth pointing out that in the MURaM simulation, we have some regions with an overestimate of the magnetic field strength ($[x,y]\approx[20,35]$~Mm). The animated Figure~\ref{fig:mag_mult_comp_hgcr} reveals that the overestimate may be because there are at least two magnetic field components, and one is very close to zero. The inversion provides a value between the two components of the magnetic field. \acknowledgements{\longacknowledgment} \bibliographystyle{aasjournal} \bibliography{aamnemonic,collectionbib}
Title: Discovery of a pre-merger shock in an intercluster filament in Abell 98
Abstract: We report the first unambiguous detection of an axial merger shock in the early-stage merging cluster Abell 98 using deep (227 ks) Chandra observations. The shock is about 420 kpc south from the northern subcluster of Abell 98, in between the northern and central subclusters, with a Mach number of M $\approx$ 2.3 $\pm$ 0.3. Our discovery of the axial merger shock front unveils a critical epoch in the formation of a massive galaxy cluster, when two subclusters are caught in the early phase of the merging process. We find that the electron temperature in the post-shock region favors the instant collisionless model, where electrons are strongly heated at the shock front, by interactions with the magnetic field. We also report on the detection of an intercluster gas filament, with a temperature of kT = 1.07 $\pm$ 0.29 keV, along the merger axis of Abell 98. The measured properties of the gas in the filament are consistent with previous observations and numerical simulations of the hottest, densest parts of the warm-hot intergalactic medium (WHIM), where WHIM filaments interface with the virialization regions of galaxy clusters.
https://export.arxiv.org/pdf/2208.03401
\usepackage{amsmath} \usepackage{breqn} \newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \input{macros.tex} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \shorttitle{$\chandra$ observations of A98} \graphicspath{{./}{figures/}} \begin{document} \title{Discovery of a pre-merger shock in an intercluster filament in Abell 98} \author[0000-0002-5222-1337]{Arnab Sarkar} \affiliation{University of Kentucky, 505 Rose street, Lexington, KY 40506, USA} \affiliation{Center for Astrophysics $\vert$ Harvard \& Smithsonian, Cambridge, MA 02138, USA} \email{arnab.sarkar@uky.edu, arnab.sarkar@cfa.harvard.edu} \author[0000-0002-3984-4337]{Scott Randall} \affiliation{Center for Astrophysics $\vert$ Harvard \& Smithsonian, Cambridge, MA 02138, USA} \author{Yuanyuan Su} \affiliation{University of Kentucky, 505 Rose street, Lexington, KY 40506, USA} \author[0000-0001-9266-6974]{Gabriella E. Alvarez} \affiliation{Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA} \author{Craig Sarazin} \affiliation{University of Virginia, Charlottesville, VA 22904, USA} \author{Paul Nulsen} \affiliation{Center for Astrophysics $\vert$ Harvard \& Smithsonian, Cambridge, MA 02138, USA} \affiliation{ICRAR, University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009, Australia} \author{Elizabeth Blanton} \affiliation{Boston University, Boston, MA 02215, USA} \author{William Forman} \affiliation{Center for Astrophysics $\vert$ Harvard \& Smithsonian, Cambridge, MA 02138, USA} \author{Christine Jones} \affiliation{Center for Astrophysics $\vert$ Harvard \& Smithsonian, Cambridge, MA 02138, USA} \author{Esra Bulbul} \affiliation{Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse 1, 85748 Garching, Germany} \author{John Zuhone} \affiliation{Center for Astrophysics $\vert$ Harvard \& Smithsonian, Cambridge, MA 02138, USA} \author{Felipe Andrade-Santos} \affiliation{Center for Astrophysics $\vert$ 
Harvard \& Smithsonian, Cambridge, MA 02138, USA} \author{Ryan E. Johnson} \affiliation{Gettysburg College, Gettysburg, PA 17325, USA} \author{Priyanka Chakraborty} \affiliation{Center for Astrophysics $\vert$ Harvard \& Smithsonian, Cambridge, MA 02138, USA} \keywords{Galaxy cluster --- Shock front --- Filament --- Cosmology} \section{Introduction} \label{sec:intro} Clusters of galaxies are primarily assembled and grow via accretion, gravitational infall, and mergers of smaller substructures and groups. In such merging events, a significant fraction of kinetic energy dissipates (on a Gyr timescale) in the intra-cluster medium (ICM) via shocks and turbulence \citep{1999ApJ...521..526M}. Such shocks are the major heating sources for the X-ray emitting ICM plasma \citep{2007PhR...443....1M}. Shock fronts provide an essential observational tool in probing the physics of transport processes in the ICM, including electron-ion equilibration and thermal conduction, magnetic fields, and turbulence \citep[e.g.,][]{2006ESASP.604..723M,2007PhR...443....1M,2016MNRAS.463.1534B,2018MNRAS.476.5591B}. Despite the intrinsic interest and significance of merger shocks, X-ray observations of merger shocks with a sharp density edge and an unambiguous jump in temperature are relatively rare. Currently, only a handful of merger shock fronts have been identified by $\chandra$, such as 1E 0657-56 \citep{2002ApJ...567L..27M}, Abell 520 \citep{2005ApJ...627..733M}, Abell 2146 \citep{2010MNRAS.406.1721R,2012MNRAS.423..236R}, Abell 2744 \citep{2011ApJ...728...27O}, Abell 754 \citep{2011ApJ...728...82M}, Abell 2034 \citep{2014ApJ...780..163O}, Abell 665 \citep{2016ApJ...820L..20D}. Cosmic filaments are thought to connect the large-scale structures of our universe \citep[e.g.,][]{2006MNRAS.370..656D,2008A&A...482L..29W,2018ApJ...858...44A,2021A&A...647A...2R}. 
Several independent searches for baryonic mass have confirmed discrepancies in baryonic content between the high- and low-redshift universe \citep[e.g.,][]{1998ApJ...503..518F,2004ApJ...616..643F}, with fewer baryons being detected in the local Universe. These studies concluded that a significant fraction of these missing baryons may be ``hidden'' in the WHIM, in cosmic filaments that connect clusters and groups. The WHIM has a temperature in the 10$^5$-10$^7$ K (or kT $\sim$ 0.01-1 keV) regime, and relatively low surface brightness \citep[e.g.,][]{1999ApJ...514....1C,1999ApJ...511..521D,2001ApJ...552..473D,2011ApJ...731....6S}. Abell 98 (hereafter A98) is a richness class III early-stage merger with three subclusters: central (A98S; $z\approx$ 0.1063), northern (A98N; $z\approx$ 0.1042), and southern (A98SS; $z\approx$ 0.1218). The northern and southern subclusters are at projected distances of $\sim$ 1.1 Mpc and 1.4 Mpc from the central subcluster, respectively \citep[e.g.,][]{1989ApJS...70....1A,1999ApJ...511...65J,1994ApJ...423...94B,2014ApJ...791..104P}. The central subcluster is undergoing a separate late-stage merger, with two distinct X-ray cores. Previous observations using $\chandra$ and $\xmm$ showed that A98N is a cool core cluster with a marginal detection of a warm gas arc, consistent with the presence of a leading bow shock, but that the exposure time was insufficient to confirm this feature \citep{2014ApJ...791..104P}. To investigate further, we analyzed deep ($\sim$227 ks) $\chandra$ observations of A98N and A98S. In this letter, we report on the detection of an intercluster filament connecting A98N and A98S, and of a leading bow shock in the region of the filament, associated with the early-stage merger between A98N and A98S. We adopted a cosmology of $H_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}$ = 0.7, and $\Omega_{\rm m}$ = 0.3, which gives a scale of 1$''$ = 1.913 kpc at the redshift $z$ = 0.1042 of A98. 
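The quoted angular scale follows from the adopted cosmology; a short numerical integration reproduces it (a consistency check only; in practice a library such as astropy.cosmology would be used):

```python
import numpy as np

# Flat LambdaCDM: H0 = 70 km/s/Mpc, Om = 0.3, OL = 0.7, at z = 0.1042 (A98).
c_kms = 299792.458
H0, Om, OL, z = 70.0, 0.3, 0.7, 0.1042

zs = np.linspace(0.0, z, 10001)
inv_E = 1.0 / np.sqrt(Om * (1.0 + zs) ** 3 + OL)

# Comoving distance via trapezoidal integration, then the angular diameter
# distance D_A = D_C / (1 + z) for a flat universe.
D_C = (c_kms / H0) * np.sum(0.5 * (inv_E[1:] + inv_E[:-1]) * np.diff(zs))
D_A = D_C / (1.0 + z)                                   # Mpc
kpc_per_arcsec = D_A * 1.0e3 * np.pi / (180.0 * 3600.0)
print(round(kpc_per_arcsec, 3))  # close to the quoted 1.913 kpc per arcsec
```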
Unless otherwise stated, all reported error bars are at the 90\% confidence level.\\ \section{Data analysis} \label{sec:data} Abell 98 was observed twice with $\chandra$ for a total exposure time of $\sim$ 227 ks. The observation logs are listed in Table \ref{tab:obs_log}. We discuss the detailed data reduction processes in Section \ref{sec:supp_data}. \subsection{Imaging analysis} \label{sec:imaging} An image showing both A98N and A98S in the 0.5-2 keV energy band is presented in Figure \ref{fig:bridge}. We derived a Gaussian Gradient Magnitude (GGM) filtered image of A98, zoomed in on the northern subcluster, as shown in Figure \ref{fig:image_edge_fitting}. The GGM-filtered image provides a powerful tool to reveal sub-structures and any associated sharp features in the cluster core, as well as at larger cluster radii \citep{2016MNRAS.461..684W}. The intensity of the GGM images indicates the slope of the local surface brightness gradient, with steeper gradients showing up as brighter regions. The GGM image we present here is filtered on a length scale of 32 pixels. Each pixel is 0.492$\arcsec$ wide. We observe a rapid change in the magnitude of the surface brightness gradient at $\sim$ 400 kpc to the south of the A98N center, as seen in Figure \ref{fig:image_edge_fitting}. GGM images often show artifacts from the filtering process, particularly in low surface brightness regions. Therefore, we next test for the presence of the edge feature indicated in Figure \ref{fig:image_edge_fitting} by extracting a surface brightness profile across the edge. Figure \ref{fig:image_edge_fitting} shows the resulting radial surface brightness profile as a function of distance from the A98N core in the 0.5-2 keV energy band. We observe an edge in the X-ray surface brightness profile at about 420 kpc {(0:46:23.14, +20:33:46.32)} from the A98N core. 
The distance of the edge from the A98N core is consistent with the edge observed in the GGM image, which suggests that the abrupt change in the gradient corresponds to the surface brightness edge. The shape of the extracted surface brightness profile across the edge is consistent with what is expected from a projection of a 3D density discontinuity \citep{2000ApJ...541..542M}. To quantify the surface brightness edge, we fit the surface brightness profile by projecting a 3D discontinuous double power-law model along the line of sight, defined as \begin{equation} n_e(r) \propto \begin{cases} \left(\frac{r}{r_{edge}}\right)^{-\alpha_{1}}, & \text{if $r < r_{edge}$}\\ \frac{1}{jump} \left(\frac{r}{r_{edge}}\right)^{-\alpha_{2}}, & \text{if $r \geq r_{edge}$} \end{cases} \end{equation} where $n_e(r)$ is the 3D electron density at a radius $r$, $r_{edge}$ is the radius of the putative edge, $jump$ is the density jump factor, and $\alpha_1$ and $\alpha_2$ are the slopes before and after the edge, respectively. {A constant term was also added to the model to account for any residual background after blanksky subtraction. The best-fit value of this term was consistent with zero, suggesting that we successfully eliminated the sky and particle backgrounds.} We project the estimated emission measure profile onto the sky plane and fit the observed surface brightness profile using a least-squares fitting technique with $\alpha_1$, $\alpha_2$, $r_{edge}$, and $jump$ as free parameters. Figure \ref{fig:image_edge_fitting} shows the best-fit model and the 3D density profile (inset). The best-fit power-law indices across the edge are $\alpha_1$ = 0.76 $\pm$ 0.01 and $\alpha_2$ = 0.83 $\pm$ 0.02 ($\chi^{2}$/dof = 57/21). The density jumps across the edge by a factor of $\rho_2/\rho_1$ = 2.5 $\pm$ 0.3, where subscripts 2 and 1 denote the regions inside and outside the front, respectively. 
Assuming that the edge is a shock front, this density jump corresponds to a Mach number of $\cal{M}$ = 2.3 $\pm$ 0.3, estimated from the Rankine-Hugoniot jump condition, defined as \begin{equation} {\cal{M}} = \left [\frac{2 r}{\gamma + 1 - r\left (\gamma - 1 \right)}\right]^{\frac{1}{2}}, \end{equation} where $r = \rho_2/\rho_1$ and $\gamma$ = 5/3 for a monatomic gas. The edge radius, obtained from the fit, is 420 $\pm$ 25 kpc. We estimated the uncertainties by allowing all the other model parameters to vary freely. The best-fit edge radius is consistent with the distance of the GGM peak from the A98N center. \subsection{Spectral analysis} \label{sec:spectral} To measure the temperature across the surface brightness edge, the southern sector was divided into seven regions, as shown in Figure \ref{fig:image_edge_fitting}. Each region contains a minimum of 2300 background-subtracted counts. We set this lower limit to guarantee adequate counts to measure the temperature to within a 25\% uncertainty in the faintest region at the 90\% confidence level. For each region, we extracted spectra from the individual observations and fitted them simultaneously. The spectra were grouped to contain a minimum of 20 counts per spectral channel. The blanksky-background spectra were subtracted from the source spectra before fitting \citep{2016ApJ...820L..20D}. We fitted the spectra from each region with an absorbed single-temperature thermal emission model, PHABS(APEC) \citep{2001ApJ...556L..91S}. The redshift was fixed to $z$=0.1042, and the absorption was fixed to the Galactic value of $N_{H}$=3.06$\times$10$^{20}$ cm$^{-2}$ \citep{2005A&A...440..775K}. The spectral fitting was performed using {\tt XSPEC} in the 0.6-7 keV energy band, and the best-fit parameters were obtained by minimizing the C-statistic. We fixed the metallicity to an average value of 0.4 Z$_{\odot}$ since it was poorly constrained if left free \citep{2010MNRAS.406.1721R}. 
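The Rankine-Hugoniot conversion above is straightforward to check numerically (a minimal sketch using the best-fit density jump and, for the shock speed, the pre-shock sound speed estimated from the spectral analysis):

```python
import math

def mach_from_density_jump(r, gamma=5.0 / 3.0):
    """Rankine-Hugoniot relation: Mach number from the compression r = rho2/rho1."""
    return math.sqrt(2.0 * r / (gamma + 1.0 - r * (gamma - 1.0)))

M = mach_from_density_jump(2.5)     # best-fit jump across the edge
v_shock = 848.0 * M                 # km/s, with the quoted pre-shock sound speed
print(round(M, 2), round(v_shock))  # -> 2.24 1896
```

Propagating the $\pm$0.3 uncertainty on the density jump through the same function reproduces the quoted $\cal{M}$ = 2.3 $\pm$ 0.3.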
We adopted the solar abundance table of \citet{2009ARA&A..47..481A}. Figure \ref{fig:image_edge_fitting} shows the best-fit projected temperatures. The projected temperature increases steadily from the A98N core up to $\sim$ 200 kpc, then jumps to a peak of $\sim$ 6.4$^{+1.0}_{-1.5}$ keV at $\sim$ 420 kpc, and drops to 2.7$^{+0.5}_{-0.5}$ keV beyond the surface brightness edge. Across the edge, the temperature decreases by a factor of $\sim$ 2.3 $\pm$ 0.6, confirming the outer edge as a shock front. {We estimated the electron pressure by combining the temperature and electron density, as shown in Figure \ref{fig:image_edge_fitting}. As expected, we found a significant decrease in pressure by a factor of $\sim$ 7 $\pm$ 2 at the shock front.} The observed temperature drop corresponds to a Mach number of $\cal{M}$ = 2.2 $\pm$ 0.4, consistent with the Mach number derived from the density and pressure jumps, bolstering the shock detection. We estimated a pre-shock sound speed of $c_s$ $\sim$ 848 $\pm$ 80 km/s, which gives a shock speed of $v_{shock} = c_s {\cal M} \approx$ 1900 $\pm$ 180 km/s. \section{Detection of Filament emission} Early-stage, major mergers between two roughly equal-mass subclusters are expected to typically have their merger axis aligned with the filaments of the cosmic web (see \citet{2018ApJ...858...44A} for further discussion). To search for faint X-ray emission associated with the filament between A98N and A98S, we extracted a surface brightness profile from nine box regions across the bridge between A98N and A98S in the 0.5-2 keV energy band (see Figure \ref{fig:bridge}). Figure \ref{fig:bridge_sb_temp} shows the surface brightness profile of the bridge. This surface brightness profile includes the emission from the extended ICM of both subclusters, and from the filament. 
To account for only the extended ICM emission from both subclusters, we extracted surface brightness profiles from similar regions placed in the directions opposite to the filament, using {\it Suzaku} observations of A98 \citep{2022arXiv220608430A}, assuming in each case that the contribution from the neighboring subcluster is negligible. We used {\it Suzaku} observations because the existing {\it Chandra} observations do not cover the part of the sky needed for such an analysis. These two diffuse surface brightness profiles were then subtracted from the surface brightness profile of the bridge, yielding the surface brightness profile of the filament. {We use {\it WebPIMMS}~\footnote{\url{https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl}} to convert {\it Suzaku} and {\it Chandra} count rates to physical units (erg/cm$^2$/s/arcsec$^{2}$) in the 0.5-2 keV energy band. We assumed a thermal {\tt APEC} model with absorption fixed to $N_{H}$=3.06$\times$10$^{20}$ cm$^{-2}$, abundance $Z = 0.2 Z_{\odot}$, and temperature $kT$ = 2.6 keV (as discussed later).} Figure \ref{fig:bridge_sb_temp} shows the resulting surface brightness profile of the filament in the 0.5-2 keV energy band. We detected excess X-ray surface brightness in the region of the bridge at a $\sim$3.2$\sigma$ significance level. Similar excess emission along the bridge with somewhat lower significance ($\sim$ 2.2$\sigma$) was also found by \citet{2022arXiv220608430A} using only {\it Suzaku} data. Because the first two and last two regions lie very close to the subcluster cores, where the emission is dominated by the respective subcluster, we could not detect any significant filament emission there. This excess X-ray emission suggests the presence of a filament along the bridge connecting A98N and A98S. To measure the temperature across the bridge region, we adopted similar regions to those used for the surface brightness profile of the bridge. 
The spectra were then fitted with a single-temperature APEC model (1-T), keeping the metallicity fixed to 0.2 Z$_{\odot}$. Figure \ref{fig:bridge_sb_temp} shows the projected temperature profile. The temperatures across the bridge are consistent within their $\sim$ 2.7$\sigma$ (90\%) uncertainty, except for the third region, where we detected a shock. We next measured the global properties of the bridge using a 0.6 Mpc $\times$ 0.7 Mpc box region (632 kpc from A98N and 505 kpc from A98S; shown in cyan in Figure \ref{fig:bridge}). Using a single-temperature emission model for the bridge region, we obtained a temperature of 1.8$^{+0.7}_{-0.4}$ keV and an abundance of 0.22$^{+0.12}_{-0.16}$ $Z_{\odot}$. Our measured temperature of the bridge is consistent with the temperatures obtained by \citet{2014ApJ...791..104P} using {\it XMM-Newton} and relatively short exposure {\it Chandra} observations. We next estimated the electron density of the bridge using \begin{dmath}\label{eq:ne} n_e = \left [1.73 \times 10^{-10} \times N\ {\rm sin}\left( {i}\right) \times (1+z)^2 \times \left(\frac{D_A}{\rm Mpc}\right)^2 \left(\frac{r}{\rm Mpc}\right)^{-2} \left(\frac{l_{obs}}{\rm Mpc}\right)^{-1} \right]^{1/2}\ \textrm{cm}^{-3}, \end{dmath} where $N$ is the APEC normalization, $D_A$ is the angular diameter distance, and $r$ and $l_{obs}$ are the radius and the projected length of the filament, respectively. We obtained an electron density of $n_{e}$ = 4.2$^{+0.9}_{-0.8}$ $\times$ 10$^{-4}$ cm$^{-3}$, assuming the filament is in the plane of the sky ($i$ = 90$^{\circ}$) and has cylindrical symmetry, since for $i$ $\ll$ 90$^{\circ}$ we would not expect to detect a clear leading merger shock edge. Our measured temperature and electron density are higher than the expected temperature ($\lesssim$ 1 keV) and electron density ($\sim$10$^{-4}$cm$^{-3}$) for the WHIM \citep[e.g.,][]{2007ARA&A..45..221B,2008A&A...482L..29W,2015Natur.528..105E,2018ApJ...858...44A,2022MNRAS.510.3335H}. 
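Equation \ref{eq:ne} can be evaluated directly. In the sketch below, the APEC normalization and angular diameter distance are hypothetical placeholders (the paper's best-fit values are not reproduced here); the point is the functional form, including the $(\sin i)^{1/2}$ dependence used later when discussing inclination.

```python
import math

def electron_density(norm, z, d_a_mpc, r_mpc, l_mpc, incl_deg=90.0):
    """Electron density (cm^-3) from the APEC normalization, following Eq. (eq:ne):
    a cylindrical filament of radius r and projected length l_obs at inclination i."""
    inside = (1.73e-10 * norm * math.sin(math.radians(incl_deg))
              * (1.0 + z) ** 2 * d_a_mpc ** 2
              / (r_mpc ** 2 * l_mpc))
    return math.sqrt(inside)

# Hypothetical normalization and distance, with the A98 redshift and bridge geometry.
n_e = electron_density(norm=1.0e-3, z=0.1043, d_a_mpc=400.0, r_mpc=0.3, l_mpc=0.7)
```

Note that $n_e \propto (\sin i)^{1/2}$, so an inclined filament would imply a lower density, as discussed in the text.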
The surface brightness profiles seen in Figure \ref{fig:bridge_sb_temp} show that the emission from the filament itself is much fainter than the diffuse, extended cluster emission. Emission from low-density filament gas is expected in addition to the ICM emission in the bridge region. We therefore adopted a two-temperature emission model for the bridge and obtained a temperature of $kT_{hot}$ = 2.60$^{+0.93}_{-0.62}$ keV for the hotter component and $kT_{cool}$ = 1.07 $\pm$ 0.29 keV for the cooler component. The higher temperature gas is probably mostly the overlapping extended ICM of the subclusters seen in projection. The two-temperature model was a marginal improvement over the one-temperature model, with an F-test probability of 0.08. The emission measure of the cooler component corresponds to an electron density of $n_{e}$ = 1.30$^{+0.28}_{-0.31}$ $\times$ 10$^{-4}$ cm$^{-3}$, assuming the filament is in the plane of the sky. From the temperature and electron density of the cooler component, we obtain an entropy of $\sim$ 416 keV cm$^2$. Our measured temperature, density, and entropy of the cooler component are consistent with what one expects for the hot, dense part of the WHIM \citep[e.g.,][]{2015Natur.528..105E,2016ApJ...818..131B,2018ApJ...858...44A,2021A&A...647A...2R}. {\it Suzaku} observations also show similar emission to the north of A98N, beyond the virial radius, in the region of the putative large-scale cosmic filament \citep{2022arXiv220608430A}. {We also checked for systematic uncertainties in the measured filament temperature, abundance, and density by varying the scaling parameter of the blank-sky background spectra by $\pm$5\%. We found no significant changes.} \section{Discussion and Conclusion} \subsection{Nature of the shock front} Our deep $\chandra$ observations reveal that the overall system is complex, with A98S dominated by a later-stage merger ongoing along the east-west direction (Sarkar et al. in prep). 
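The entropy quoted above for the cooler component follows from the standard definition $K = kT / n_e^{2/3}$; a quick check with the measured values:

```python
# Entropy K = kT / n_e^(2/3) for the cooler (filament) component,
# using the best-fit values quoted in the text.
kT_cool = 1.07        # keV
n_e_cool = 1.30e-4    # cm^-3

entropy = kT_cool / n_e_cool ** (2.0 / 3.0)  # keV cm^2; ~416, as quoted
```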
Previously, \citet{2014ApJ...791..104P} argued that the surface brightness excess along the southern direction of A98N is more likely a shock with a Mach number $\cal{M}$ $\gtrsim$ 1.3. With the new data, we found that the temperature increases by a factor of $\sim$ 2.3 (from 2.7 keV to 6.4 keV) across the surface brightness edge, confirming that it is a shock front propagating along the merger axis (N/S direction). This is the first unambiguous detection of an axial merger shock in an early-stage merger {(i.e., pre-core-passage)}{, as opposed to late-stage mergers {(i.e., post-core-passage)}, where several previous observations found axial shocks \citep[e.g.,][]{2010MNRAS.406.1721R,2012MNRAS.423..236R,2016ApJ...820L..20D}}. The detection of an axial shock in an early-stage merging cluster is a long-standing missing piece of the puzzle of cluster formation. Previous {\it Chandra} observations of the pre-merger system 1E 2216/1E 2215 detected an equatorial shock \citep{2019NatAs...3..838G}. Equatorial shocks are driven by the adiabatically expanding overlap region between the outskirts of the merging subclusters. They propagate along the equatorial plane, perpendicular to the merger axis. This is in contrast to axial shocks, which are driven by, and run ahead of, the infalling subclusters. \citet{2019NatAs...3..838G} did not detect any axial shock in 1E 2216/1E 2215. There are conflicting findings from simulations on which merger shock should form first. Recent hydrodynamical simulations of binary cluster mergers by \citet{2021MNRAS.506..839Z} showed the formation of axial merger shocks in the early stages of the merging process. In contrast, simulations by \citet{2018ApJ...857...26H} indicated that the equatorial shock forms long before the axial shock. To date, it is unclear what drives the apparent discrepancy between the formation of equatorial and axial shocks, although the parameters of the merger (e.g., mass ratio, total mass, impact parameter) likely play a role. 
Our {\it Chandra} observation of the shock front in A98 is the `{first}' unambiguous axial shock detection in an early-stage merging system, prior to core passage. We detect no equatorial shocks, but it is possible that they are present and the current observations are not deep enough to detect them. With this discovery, we caught two subclusters in a crucial epoch of the merging process, which will help fill in the missing links in the shock formation process in pre-merger systems and provide a yardstick for future simulations. \subsection{Electron-ion equilibrium}\label{sec:elec} The electron heating mechanism behind a shock front is still debated. The collisional equilibrium model predicts that a shock front propagating through a collisional plasma heats the heavier ions dissipatively. Electrons are then compressed adiabatically, and subsequently come to thermal equilibrium with the ions via Coulomb collisions after a timescale defined in Equation \ref{eq:teq} \citep{1962pfig.book.....S,1988xrec.book.....S,1998MNRAS.293L..33E,2007PhR...443....1M}. By contrast, the instant equilibrium model predicts that electrons are strongly heated at the shock front, and their temperature rapidly rises to the post-shock gas temperature, as for the ions \citep{2006ESASP.604..723M,2012MNRAS.423..236R}. The electron and ion temperature jumps at the shock front are determined by the Rankine-Hugoniot jump conditions. \citet{2006ESASP.604..723M} showed that the observed temperature profile across the shock front of the Bullet cluster supports the instant equilibrium model. \citet{2012MNRAS.423..236R} found that the temperature profile across one shock front of Abell~2146 supports the collisional model while another shock supports the instant model. However, in all cases the measurement errors prevented a conclusive determination. 
{Later, using deeper $\chandra$ observations, \citet{2022MNRAS.514.1477R} found that both shocks in Abell~2146 favor the collisional model.} Here, we compare the observed temperature profile across the shock front of A98 with the predicted temperature profiles from the collisional and instant equilibrium models. We estimate the model electron temperatures and project them along the line of sight, as described in Section \ref{sec:supp_elec}. The resulting temperature profiles are shown in Figure \ref{fig:model}. The observed post-shock electron temperature appears to be higher than the temperature predicted by the collisional equilibrium model and favors the instant equilibrium model, although we cannot rule out the collisional equilibration model due to the large uncertainty in the post-shock electron temperature. \subsection{Filament emission} We detected 3.2$\sigma$ excess X-ray emission along the bridge connecting the two subclusters (A98N and A98S), also detected by \citet{2022arXiv220608430A} (with lower significance) using $\suzaku$. Our measured surface brightness of the cooler component ranges between (0.9--2.8) $\times$ 10$^{-15}$ erg cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$ in the 0.5--2 keV energy band, which is equivalent to (1.2--3.6) $\times$ 10$^{-15}$ erg cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$ in the 0.2--10 keV energy band. Using high-resolution cosmological simulations, \citet{2006MNRAS.370..656D} predicted that the X-ray surface brightness of WHIM filaments should be $\sim$ 10$^{-16}$ erg cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$ in the 0.2-10 keV energy band, assuming zero metallicity. However, an increased metallicity would induce more line emission, increasing the surface brightness of the filament. A similar conclusion was drawn by \citet{2008A&A...482L..29W} to explain their observed surface brightness of a WHIM filament, which was higher than that predicted by cosmological simulations. 
Our measured surface brightness of the filament is consistent with the surface brightness of the WHIM filament obtained for A222/223 and A1750 \citep{2016ApJ...818..131B}. Using a 2--T plasma emission model, we measured the temperature of the cooler component of the filament, $kT_{\rm cool}$ = 1.07 $\pm$ 0.29 keV. The 2--T model was a marginal improvement over the 1--T model with an F-test probability of 0.08. Similar filament temperatures were measured for A2744 \citep[0.86--1.72 keV;][]{2015Natur.528..105E}, A222/223 \citep[0.66-1.16 keV;][]{2008A&A...482L..29W}, and A1750 \citep{2016ApJ...818..131B}. We obtained a best-fit filament electron density of $n_{e}$ = 1.30$^{+0.28}_{-0.31}$ $\times$ 10$^{-4}$ cm$^{-3}$, assuming the filament is in the sky plane. If the filament has an inclination angle ($i$) with the line of sight, the electron density will be lower by a factor of ($\sin\ i$)$^{-1/2}$. Previous studies also found similar electron densities for the hot, dense part of the WHIM in several other galaxy clusters, e.g., 0.88 $\times$ 10$^{-4}$ cm$^{-3}$ for A399/401 \citep{2021arXiv210704611H}, 1.08 $\times$ 10$^{-4}$ cm$^{-3}$ for A3391/3395 \citep{2018ApJ...858...44A}, 10$^{-4}$ cm$^{-3}$ for A2744 and A222/223 \citep{2015Natur.528..105E,2008A&A...482L..29W}. We estimated a baryon over-density of $\rho/\langle \rho \rangle$ $\sim$ 240 associated with the gas in the filament, which is consistent with the expected over-density for a WHIM filament \citep{2007ARA&A..45..221B}. Assuming a cylindrical geometry for the filament, we estimated the associated gas mass to be $M_{ gas}$ = 3.8$^{+0.8}_{-0.6}$ $\times$ 10$^{11}$ M$_{\odot}$. Our measured temperature and average density of the cooler component are remarkably consistent with what one expects for the hot, dense part of the WHIM, suggesting that this gas corresponds to the hottest, densest parts of the WHIM. 
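The cylindrical gas mass and baryon over-density estimates above can be reproduced in outline as follows. The adopted radius, length, mean mass per electron, and baryon fraction below are illustrative assumptions (the paper's exact geometry is not specified here), so the numbers agree with the text only at the order-of-magnitude level.

```python
import math

# Constants (cgs)
M_P = 1.673e-24       # proton mass, g
KPC = 3.086e21        # cm
MSUN = 1.989e33       # g

# Measured cool-component density (text) and assumed cylinder geometry (illustrative)
n_e = 1.30e-4         # cm^-3
radius = 300.0 * KPC  # assumed cylinder radius
length = 700.0 * KPC  # assumed cylinder length
mu_e = 1.15           # assumed mean mass per electron, in units of m_p

volume = math.pi * radius ** 2 * length       # cylinder volume, cm^3
m_gas = mu_e * M_P * n_e * volume / MSUN      # gas mass, solar masses (~1e11-1e12)

# Over-density relative to the mean baryon density at z = 0.1043
# (rho_crit for H0 = 70 km/s/Mpc; Omega_b ~ 0.048 assumed)
rho_crit0 = 9.47e-30                          # g cm^-3
rho_baryon = 0.048 * rho_crit0 * (1.0 + 0.1043) ** 3
overdensity = (mu_e * M_P * n_e) / rho_baryon # a few hundred, as in the text
```

As noted in the text, an inclination $i < 90^{\circ}$ would reduce the inferred density by $(\sin i)^{-1/2}$ and the mass estimate accordingly.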
X-ray observations of the emission from WHIM filaments are relatively rare because filaments have lower surface brightness than the ICM, yet they offer crucial observational evidence of hierarchical structure formation. Using numerical simulations, \citet{2001ApJ...552..473D} predicted that the gas in a WHIM filament has temperatures in the range 10$^{5.5}$--10$^{6.5}$ K. A similar conclusion was drawn by \citet{2020ApJ...889...48L} using the kinetic S--Z effect in groups of galaxies. Since our detected filament lies in the overlapping ICM of both subclusters, the gas may have been heated by the shock and by adiabatic compression. \begin{acknowledgments} {We are grateful to the anonymous referee for insightful comments that greatly helped to improve the paper.} This work is based on observations obtained with the $\chandra$ observatory, a NASA mission, and $\suzaku$, a joint JAXA and NASA mission. {AS and SR are supported by a grant from NASA's Chandra X-ray Observatory, grant number GO9-20112X.} \end{acknowledgments} \bibliography{sample631}{} \bibliographystyle{aasjournal} \appendix \section{Data reduction processes}\label{sec:supp_data} \begin{table}[ht!] \centering \caption{{\it Chandra} observation log} \begin{tabular}{cccc} \hline Obs ID & Obs Date & Exp. time & PI \\ & & (ks) & \\ \hline 11876 & 2009 Sep 17 & 19.2 & S. Murray \\ 11877 & 2009 Sep 17 & 17.9 & S. Murray \\ 21534 & 2018 Sep 28 & 29.5 & S. Randall \\ 21535 & 2019 Feb 19 & 24.7 & S. Randall \\ 21856 & 2018 Sep 26 & 30.5 & S. Randall \\ 21857 & 2018 Sep 30 & 30.6 & S. Randall \\ 21880 & 2018 Oct 09 & 9.9 & S. Randall \\ 21893 & 2018 Nov 11 & 17.9 & S. Randall \\ 21894 & 2018 Nov 14 & 17.8 & S. Randall \\ 21895 & 2018 Nov 14 & 28.6 & S. Randall \\ \hline \end{tabular} \label{tab:obs_log} \end{table} Abell 98 was observed with {\it Chandra} ACIS-I in VFAINT mode in two epochs: September 2009, for 37 ks split into two observations, and September 2018 -- February 2019, for 190 ks divided into eight observations. 
The combined exposure time is $\sim$ 227 ks (detailed observation logs are listed in Table \ref{tab:obs_log}). The {\it Chandra} data reduction was performed using CIAO version 4.12 and CALDB version 4.9.4 provided by the Chandra X-ray Center (CXC). We followed the standard data analysis threads\footnote[4]{\url{http://cxc.harvard.edu/ciao/threads/index.html}}. All level 1 event files were reprocessed using the {\tt chandra$\_$repro} task, employing the latest gain and charge transfer inefficiency corrections and standard grade filtering. VFAINT mode screening was applied to improve the background rejection. The light curves were extracted and filtered using the {\tt lc$\_$clean} script to identify and remove periods affected by flares. The filtered exposure times are listed in Table \ref{tab:obs_log}. We used the {\tt reproject$\_$obs} task to reproject all observations to a common tangent position and combine them. The exposure maps in the 0.5-2.0 keV energy band were created using the {\tt flux$\_$obs} script by providing a weight spectrum. The weight spectrum was generated using the {\tt make$\_$instmap$\_$weights} task with an absorbed APEC plasma emission model and a plasma temperature of 3 keV. To remove low signal-to-noise areas near chip edges and chip gaps, we set the pixel value to zero for those pixels with an exposure of less than 15\% of the combined exposure time. Point sources were identified using {\tt wavdetect} with a range of wavelet radii between 1--16 pixels to maximize the number of detected point sources. We set the detection threshold to $\sim$ 10$^{-6}$, for which we expect $\lesssim$ 1 spurious source detection per CCD. We used blank-sky observations provided by the CXC to model the non-X-ray background and the emission from foreground structures (e.g., the Galactic Halo and the Local Hot Bubble) along the observed direction. 
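The chip-edge cleaning step described above (zeroing pixels whose exposure falls below 15\% of the maximum combined exposure) amounts to a one-line array mask; a minimal numpy sketch, with a toy 3$\times$3 image in place of the real exposure map:

```python
import numpy as np

def mask_low_exposure(flux_img, exp_map, frac=0.15):
    """Zero out pixels whose exposure is below `frac` of the maximum combined
    exposure, removing noisy areas near chip edges and gaps."""
    cleaned = flux_img.copy()
    cleaned[exp_map < frac * exp_map.max()] = 0.0
    return cleaned

# Tiny illustrative example: a flat flux image with one low-exposure corner pixel.
flux = np.ones((3, 3))
exposure = np.full((3, 3), 100.0)
exposure[0, 0] = 10.0  # 10% of the maximum -> should be masked
cleaned = mask_low_exposure(flux, exposure)
```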
The blank-sky background was generated using the {\tt blanksky} task and then reprojected to match the coordinates of the observations. The resulting blank-sky background was normalized so that the hard band (9.5 - 12 keV) count rate matched the observations. \section{Electron heating mechanism}\label{sec:supp_elec} {When a shock front propagates through a plasma, it heats the ions dissipatively in a layer with a width of a few ion-ion collisional mean free paths. The electrons, by contrast, have thermal velocities much higher than the shock velocity, so the electron temperature does not jump by the same large factor as the ion temperature. In merger shocks, the electrons are first compressed adiabatically and subsequently equilibrate with the ions via Coulomb scattering on a timescale \citep{1962pfig.book.....S,1988xrec.book.....S} given by, \begin{equation} \label{eq:teq} {t_{eq}}( e,p) \approx 6.2 \times 10^8 {\rm yr} \left( \frac{{T}_{ e}}{10^8 \rm K} \right)^{3/2} \left( \frac{{ n}_{ e}}{10^{-3} {\rm cm}^{-3}} \right)^{-1} \end{equation} where $T_{e}$ and $n_{e}$ are the electron temperature and density, respectively. Alternatively, the instant collisionless shock model predicts that the electrons and ions reach thermal equilibrium on a timescale much shorter than $t_{ eq}$ after passing the shock front, where the post-shock electron temperature is determined by the pre-shock electron temperature and the RH jump conditions \citep{2007PhR...443....1M}. We use the best-fit density profile shown in Figure \ref{fig:image_edge_fitting} to project this model electron temperature along the line of sight analytically \citep{2012MNRAS.423..236R}. 
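Equation \ref{eq:teq} is straightforward to evaluate. For the post-shock temperature quoted in the main text ($\sim$6.4 keV) and an assumed post-shock density of $10^{-3}$ cm$^{-3}$ (an illustrative value, not the measured one), the equilibration time is a few $10^8$ yr:

```python
# Coulomb electron-proton equilibration timescale, Eq. (eq:teq) scaling
# (Spitzer 1962). keV -> K conversion: 1 keV = 1.1605e7 K.
KEV_TO_K = 1.1605e7

def t_eq_yr(kT_e_keV, n_e):
    """Equilibration time in years for electron temperature kT_e (keV)
    and electron density n_e (cm^-3)."""
    T_e = kT_e_keV * KEV_TO_K
    return 6.2e8 * (T_e / 1.0e8) ** 1.5 * (n_e / 1.0e-3) ** -1

# Post-shock temperature from the text; the density is an illustrative assumption.
t = t_eq_yr(6.4, 1.0e-3)  # ~4e8 yr
```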
In the collisional equilibration model, the electron temperature rises at the shock front through adiabatic compression, \begin{equation}\label{eq:Te} { T}_{ e,2} = { T}_{ e,1} \left( \frac{\rho_2}{\rho_1} \right)^{\gamma -1} \end{equation} where $\rho_{1}$ and $\rho_{2}$ are the gas densities in the pre-shock and post-shock regions, and $\gamma$ = 5/3. The electron and ion temperatures then equilibrate via Coulomb collisions at a rate given by, \begin{equation}\label{eq:rate} \frac{ dT_e}{ dt} = \frac{{ T_i} - { T_e}}{ t_{eq}} \end{equation} where $T_{ i}$ is the ion temperature. Since the total kinetic energy density is conserved, the local mean gas temperature, $T_{ gas}$, is constant with time, where $T_{ gas}$ is given by, \begin{equation}\label{eq:Tgas} { T_{gas}} = \frac{ n_eT_e +n_iT_i}{ n_i + n_e} = \frac{ 1.1T_e + T_i}{2.1} \end{equation} where $n_{i}$ is the ion density. We integrate Equation \ref{eq:rate} using Equations \ref{eq:teq} and \ref{eq:Tgas} to obtain the model electron temperature analytically. Finally, we project the model electron temperature profile along the line of sight \citep{1998MNRAS.293L..33E} by, \begin{equation} \label{eq:project} \langle { T} \rangle =\int_{b^2}^{\infty}\frac{\epsilon( r){ T_e( r) dr^2}}{\sqrt{r^2 - b^2}} \bigg/ {\int^{\infty}_{b^2}\frac{\epsilon( r){ dr^2}}{\sqrt{r^2 - b^2}}} \end{equation} where $\epsilon(r)$ is the emissivity at physical radius $r$ and $b$ is the distance from the shock front. The emissivity-weighted electron temperature should be close to what one would observe with a perfect instrument with a flat energy response across the relevant energy range. Since this is not the case, we convolve the instant and collisional model electron temperatures with the response of the telescope to predict what we expect to measure \citep{2012MNRAS.423..236R}. 
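The collisional model defined by Equations \ref{eq:Te}--\ref{eq:Tgas} can be integrated numerically. The sketch below uses a simple forward-Euler step, the pre-shock temperature and Mach number from the main text, and an assumed constant post-shock density of $10^{-3}$ cm$^{-3}$ (illustrative); it demonstrates the approach rather than reproducing the paper's analytic calculation or line-of-sight projection.

```python
# Forward-Euler integration of dTe/dt = (Ti - Te)/t_eq behind the shock,
# with T_gas held fixed and Ti = 2.1*T_gas - 1.1*Te (from Eq. eq:Tgas).
KEV_TO_K = 1.1605e7
GAMMA = 5.0 / 3.0

def t_eq_yr(kT_e_keV, n_e):
    """Coulomb equilibration time (yr), Eq. (eq:teq)."""
    return 6.2e8 * (kT_e_keV * KEV_TO_K / 1.0e8) ** 1.5 * (n_e / 1.0e-3) ** -1

# Inputs from the text: pre-shock Te = 2.7 keV, Mach 2.2 shock, post-shock 6.4 keV.
kT_pre, mach = 2.7, 2.2
rho_jump = (GAMMA + 1.0) * mach**2 / ((GAMMA - 1.0) * mach**2 + 2.0)  # ~2.5
kT_e = kT_pre * rho_jump ** (GAMMA - 1.0)  # adiabatic compression, Eq. (eq:Te)
kT_gas = 6.4                               # post-shock mean gas temperature (text)
n_e_post = 1.0e-3                          # assumed post-shock density (illustrative)

history = [kT_e]
dt = 1.0e5                                 # yr, Euler time step
for _ in range(20000):                     # integrate for 2 Gyr behind the shock
    kT_i = 2.1 * kT_gas - 1.1 * kT_e       # ion temperature from fixed T_gas
    kT_e += dt * (kT_i - kT_e) / t_eq_yr(kT_e, n_e_post)
    history.append(kT_e)
```

The electron temperature starts at the adiabatic value ($\approx$4.9 keV here) and relaxes toward the mean gas temperature on a few equilibration times, which is the behavior compared against the observed profile in Figure \ref{fig:model}.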
We first estimate the emissivity-weighted electron temperature using the above models and the corresponding emission measure using the best-fit density discontinuity model (Equation \ref{eq:ne}) in a small volume dV. We then sum the emission measures with similar temperatures for each annulus using a temperature binning of 0.1 keV. We simulate spectra in XSPEC using a multi-temperature absorbed {\tt apec} model. We fix the temperature of each component to the median temperature of each bin and calculate the normalization based on the summed emission measure in that temperature bin. We also fix the abundance to 0.4 $Z_{\odot}$, n$_{\rm H}$ to the Galactic value, and the redshift to 0.1043. To estimate the expected projected temperature in each annulus, we finally fit these simulated spectra with a single-temperature absorbed {\tt apec} model with the abundance, n$_{\rm H}$, and redshift fixed as above. We adopt a Monte Carlo technique with 1000 realizations to measure the uncertainty in the model electron temperatures. Assuming a Gaussian distribution for the pre-shock temperature, which is the largest source of uncertainty, we repeated the model and projected temperature calculations 1000 times with a new value of the pre-shock temperature each time. The resulting instant and collisional model temperature profiles with 1$\sigma$ uncertainties are shown in Figure \ref{fig:model}.}
Title: AGN feedback duty cycle in Planck SZ selected clusters using Chandra observations
Abstract: We present a systematic study of X-ray cavities using archival Chandra observations of nearby galaxy clusters selected by their Sunyaev-Zel'dovich (SZ) signature in the Planck survey, which provides a nearly unbiased mass-selected sample to explore the entire AGN feedback duty cycle. Based on X-ray image analysis, we report that 30 of the 164 clusters show X-ray cavities, which corresponds to a detection fraction of 18%. After correcting for spatial resolution to match the high-$z$ SPT-SZ sample, the detection fraction decreases to 9%, consistent with the high-z sample, hinting that the AGN feedback has not evolved across almost 8 Gyrs. Our finding agrees with the lack of evolution of cool-core clusters fraction. We calculate the cavity power, P_{\rm cav}, and find that most systems of our sample have enough AGN heating to offset the radiative losses of the intracluster medium.
https://export.arxiv.org/pdf/2208.04888
\label{firstpage} \vspace{-5cm} \begin{keywords} galaxies: clusters: general – intergalactic medium – X-rays: galaxies \end{keywords} \section{Introduction} \label{sec:intro} \defcitealias{Hlavacek-Larrondo15}{HL15} \defcitealias{andrade-santos17}{AS17} Feedback from active galactic nuclei (AGN) jets has been proposed to solve the cooling flow problem (see \citealt{fabian94,McNamara_2005}, for a review), although the details of how AGN feedback counteracts the radiative losses of the intracluster medium (ICM) in clusters are still not fully understood. Early observations with the X-ray satellite \textit{ROSAT} revealed surface brightness deficits that appear to be spatially aligned with regions of radio emission in the ICM of a few galaxy clusters \citep{boehringer93,carilli94}. Nowadays, with the superb resolution of the \textit{Chandra} X-ray observatory, it has become clear that the central AGN located in the Brightest Cluster Galaxy (hereafter BCG) continuously interacts with the surrounding ICM, producing not only the X-ray surface brightness depressions known as X-ray cavities (or bubbles), but also shocks and ripples \citep[e.g.,][]{fabian06}. In addition, high-resolution radio observations with the Jansky Very Large Array (JVLA) have shown that extended radio lobes inflated by the central AGN may excavate these X-ray cavities by pushing aside the surrounding hot gas. Accordingly, the cavities are expected to be filled with radio emission \citep[e.g.,][]{birzan20}. There have also been detections of so-called ``ghost cavities'' at low radio frequencies, which are believed to trace a past AGN outburst whose radio emission has faded away. More importantly, the X-ray cavities and bubbles may provide a direct measurement of the work done by the radio-mode feedback on the ICM \citep[e.g.,][]{gitti10}. 
The X-ray cavities are not only supposed to carry enough energy to balance the cooling losses of the X-ray emitting plasma \citep{birzan08}, but also play a key role in the formation of the extended multiphase filaments observed in cooling flow clusters \citep[e.g.,][]{olivares19,2022arXiv220107838O,russell19}. Therefore, investigating the physical properties of X-ray cavities can improve our understanding of AGN feedback and its impact on galaxy formation and evolution. Currently, most X-ray cavity studies of clusters, groups, and elliptical galaxies are based on \textit{Chandra} X-ray observations, both of individual systems and of dedicated surveys (see \citealt{birzan04,birzan08, rafferty06,nulsen09,dong10,osullivan11,Hlavacek_Larrondo_2013,shin16,panagoulia14b,birzan17}). One of the main limitations of existing studies based on X-ray selection methods is that they are often biased towards bright cool-core systems and, consequently, against X-ray faint clusters. A complete, unbiased sample of galaxy clusters is desirable to obtain heating and cooling balance constraints \citep{gitti10}, and to understand the duty cycle of AGN feedback, which is estimated as the fraction of systems displaying bubbles inflated by the central AGN. Millimeter-wave surveys utilizing the SZ effect have the advantage of providing nearly mass-limited samples, as the impact of the SZ effect on the CMB (cosmic microwave background) brightness temperature is independent of redshift. This allows us to explore the entire AGN feedback duty cycle. Examples of instruments used to undertake SZ surveys include the Planck satellite \citep{planck_collaboration11}, the South Pole Telescope (SPT) \citep{bleem15,bleem20}, and the Atacama Cosmology Telescope \citep{hincks10,hilton18}. 
For the high-$z$ Universe, \citet[][hereafter HL15]{Hlavacek-Larrondo15} performed a \textit{Chandra} study of 83 clusters selected from the SPT-SZ survey, and found X-ray surface brightness depression in 6 clusters consistent with radio jets inflating X-ray cavities in their ICM. Here, we present a study of X-ray cavities in the Planck SZ survey, which provides a unique and unbiased view of AGN feedback in the nearby ($z<0.35$) Universe and anchors the evolution of AGN feedback over the past 8 Gyrs. This paper examines \textit{Chandra} observations of 164 Planck SZ clusters with the aim of identifying X-ray bubbles. In section~\ref{sec:sample} we describe the Planck SZ sample. Section~\ref{sec:obs} presents the X-ray \textit{Chandra} observations and describes the methods used to identify X-ray surface brightness depressions. Section~\ref{sec:results} is devoted to the results and their implications. Section~\ref{sec:limitations} presents the limitations of the present study. Finally, section~\ref{sec:conclusions} summarizes our findings. Throughout this paper, we adopted a standard cosmology with H$_{\rm 0}$=70\,km s$^{-1}$\,Mpc$^{-1}$ and $\Omega_{\rm m}$=0.3. \section{Sample}\label{sec:sample} The \textit{Chandra}-Planck Legacy Program for Massive Clusters of Galaxies \citep{jones12} is a deep X-ray survey of massive Planck clusters with redshift $\leq0.35$ detected over almost the full sky (and $|$b$|> 15 \deg$) through the Sunyaev-Zel'dovich effect by the first Planck mission released in early 2011 \citep{planck_collaboration11}. The observations are constructed by combining the \textit{Chandra} XVP (PI: Jones) and HRC Guaranteed Time Observations (PI: Murray). At least 10,000 source counts have been collected for each cluster to derive its gas properties out to $R_{\rm 500}$ \citep[][hereafter AS17]{andrade-santos17}. 
The \textit{Chandra}-Planck sample is a nearly unbiased, mass-selected sample, covering the mass range $7 \times 10^{13} {\rm M}_\odot \le M_{500} \le 2 \times 10^{15} {\rm M}_\odot$. The sample consists of 164 clusters, of which a small fraction (35) contain pronounced substructures (subclusters), visually identified in the X-ray images \citepalias{andrade-santos17}. Central density is the best-known proxy for the central cooling time \citep[e.g.,][]{Su2020}, and has been widely used to classify CC and NCC clusters \citep[e.g.,][]{ruppin21, andrade-santos17}. Based on the central density classification of $n_{\rm core} = 1.5\times$10$^{-2}$~cm$^{-3}$, as presented in \citetalias{andrade-santos17}, 63 clusters are classified as CC and 101 as NCC clusters. Deprojected temperature and density profiles of clusters in this sample are taken from \citetalias{andrade-santos17}, to which we refer the reader for a detailed description. \section{Observation and Analysis}\label{sec:obs} For each cluster, we used all available \textit{Chandra} observations, including both the ACIS-I and ACIS-S CCDs. The data reduction and calibration of the \textit{Chandra} observations were carried out using the \textit{Chandra} Interactive Analysis of Observations software (CIAO) 4.12 and the Chandra Calibration Database (CALDB) 4.9.2.1. The observations were reprocessed using the {\tt chandra\_repro} tool of CIAO. Standard blank-sky background files and readout artifacts were subtracted. Point sources were detected in the 0.5-8.0 keV energy band and masked before performing the spectral and imaging analysis of the clusters. Exposure-corrected images were produced in the 0.5–2.0~keV energy band and used for the X-ray cavity analysis. Unsharp-masked images were produced to help identify X-ray cavities using the CIAO tool {\tt aconvolve}. The original image was smoothed twice, using a small- and a large-scale Gaussian kernel. 
The highly smoothed image was then subtracted from the less smoothed image, enhancing the inhomogeneities in the residual image. We tried different smoothing lengths for the more heavily smoothed images based on the large-scale extent of the cluster emission, ranging from 10 up to 60~kpc. For the less smoothed images, we tried smoothing lengths comparable to the physical size of a cavity, from 1 up to 20~kpc \citep[e.g.][]{rafferty06}. We also examined the residual image after subtracting an elliptical double beta model, which was obtained by fitting a slightly smoothed 0.5--2.0 keV image. The second beta model accounts for excess emission from the cool core. We classified each identified cavity as Certain (C) or Potential (P). The first two co-authors independently searched for X-ray cavities and then classified them based on the significance of each cavity. Cavities were classified as certain if they appear as a clear, visible depression in the original image, as well as in the unsharp-masked image or double $\beta$-subtracted image. In figure~\ref{fig:example} we present an example of the methods employed to identify cavities for clusters with certain (C) cavities, with potential (P) cavities, and without cavities. A cavity was classified as potential if there was only a hint of an X-ray depression in the original X-ray image, but it was visible in the unsharp-masked or double $\beta$-model subtracted image. The number of counts in the central region ($<$20~kpc) of the clusters with potential cavities is too low for the cavities to be certain (see also Section~\ref{sec:limitations}). Clusters without depressions were classified as lacking cavities. We also consider clusters with dark annuli or rings created by bright excesses and asymmetries of the cluster distribution as lacking cavities, as such surface brightness depressions are not consistent with bubbles inflated by radio jets. 
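The unsharp-masking procedure described above can be sketched with a separable Gaussian convolution. The kernel scales and the toy image below are illustrative (the real analysis uses CIAO's {\tt aconvolve} on exposure-corrected images); the point is that a depression becomes a strong negative residual.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth(img, sigma):
    """Separable 2-D Gaussian smoothing via two 1-D convolutions."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, out)

def unsharp_mask(img, sigma_small, sigma_large):
    """Subtract a heavily smoothed image from a lightly smoothed one,
    enhancing depressions (cavities) and other small-scale structure."""
    return smooth(img, sigma_small) - smooth(img, sigma_large)

# Toy cluster image: flat background with a small central depression ("cavity").
img = np.full((41, 41), 100.0)
img[18:23, 18:23] = 60.0
residual = unsharp_mask(img, sigma_small=1.0, sigma_large=5.0)
# The residual is strongly negative at the depression, so the cavity stands out.
```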
\section{Results and Discussion}\label{sec:results} \subsection{Detection fraction of cavities and evolution}\label{sec:fraction_cav} Overall, we detected 67 X-ray cavities in 30 clusters, of which 32 are classified as certain (C) and 35 as potential (P) cavities. From the CC cluster sub-sample, we find 29 clusters with cavities, of which 12 reveal mostly certain cavities and 17 reveal potential cavities. The remaining 34 CC clusters lack X-ray depressions. We also find, in one NCC cluster, G269.51+26.42, two potential cavities located on opposite sides of the cluster center. The rest of the NCC clusters show no hint of X-ray depressions. We find that most of the detected cavities come in pairs, and they are usually located on opposite sides of the cluster core, as expected given that X-ray cavities are believed to be inflated by radio jets. It is worth mentioning that 29/30 of the clusters with cavities also have radio emission associated with the central source (Olivares et al. in prep). Some clusters show multiple X-ray cavities, likely due to either multiple AGN outbursts or the disintegration of large cavities, while five clusters have single cavities \citep[e.g.,][]{morsony10,Canning_2013}. In total, 18\% of all clusters in our sample, including both CC and NCC clusters, contain X-ray cavities (see Fig~\ref{fig:smaller_cav_size}, left panel), a fraction a few times smaller than those found by previous studies of nearby clusters and about twice as high as that of the high-$z$ SPT–SZ sample (7\%--9\%; \citetalias{Hlavacek-Larrondo15}). We estimated the uncertainties associated with the fraction of clusters with cavities using the Wilson interval method \citep{brown01}. We note that previous studies of nearby clusters tend to be biased towards X-ray bright clusters. 
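The Wilson interval used for the cavity fractions can be computed directly; the sketch below evaluates it for the overall 30/164 detection fraction, with $z = 1.96$ ($\sim$95\% confidence) as an illustrative choice of confidence level.

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score confidence interval for a binomial fraction k/n."""
    p = k / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2.0 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
    return center - half, center + half

# Overall detection fraction: 30 of 164 clusters (~18%).
lo, hi = wilson_interval(30, 164)
```

Unlike the naive normal approximation, the Wilson interval remains well-behaved for the small counts involved in the sub-sample fractions.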
Furthermore, our findings suggest a slightly lower duty cycle of $\sim$46\%, as 28 of the 63 CC clusters ($n_{\rm core}$>1.5$\times$10$^{-2}$~cm$^{-3}$) have detected cavities (see Fig.~\ref{fig:smaller_cav_size}, left panel), compared to previous studies which predict the AGN feedback duty cycle to be high (60--90\%; \citealt[][]{birzan12,fabian12,panagoulia14b}). We stress, however, that different definitions have been used to classify cool core clusters. At high-$z$, \citetalias{Hlavacek-Larrondo15} found a lower limit of $\sim$11\% on the duty cycle for the SPT-SZ sample, as only 6 of the 52 clusters with signs of cooling reveal cavities. To explore the evolution of the detection fraction of cavities, we compare our results with those found in the high-$z$ SPT-SZ sample \citepalias{Hlavacek-Larrondo15}. Bear in mind that the SPT–SZ sample is limited by resolution (see Fig.~\ref{fig:smaller_cav_size}, right panel), with the smallest cavities detected in that sample having sizes of $\lesssim$10~kpc. This limitation is due to a combination of the larger \textit{Chandra} PSF at high~$z$ and the lower number of counts compared to low-$z$ clusters. To account for it, we compute the detection fraction taking only clusters with cavity sizes $\gtrsim$10~kpc, matching the observational bias of the SPT–SZ sample. That yields a detection fraction of 9\%, in good agreement with the SPT–SZ sample \citepalias{Hlavacek-Larrondo15}. In the same vein, if we consider only clusters with ``certain'' (C) cavities and sizes $\gtrsim$10~kpc, the detection fraction of the Planck sample drops to 3\%, close to the 2\% obtained in the high-$z$ SPT–SZ sample when only clearly detected cavities are taken into account. These findings suggest that the AGN feedback duty cycle has remained constant over almost 8~Gyr.
This trend agrees with the lack of evolution in the fraction of cool-core clusters ($\sim$40--60\% across the same redshift range; \citealt{ruppin21}), which is linked to the ICM cooling. An absence of evolution in the detection fraction of cavities could imply that mechanical feedback in CC clusters has been in place and maintained across almost 8~Gyr. All the above is quite intriguing given that the AGN-hosting BCG fraction in the SPT-SZ cluster sample, selected from infrared WISE observations, appears to be strongly evolving with redshift (see \citealt{somboonpanyakul22,birzan17,Hlavacek_Larrondo_2013}; also \citealt[][]{silverman09,haggard10} for related studies). The authors argue that nearby clusters may grow by dry mergers without increasing the AGN activity, whereas high-$z$ clusters may accrete cold gas from gas-rich mergers and satellite interactions, which could drive a massive inflow of cold gas towards the central region, increasing the AGN activity. With more fuel available at high-$z$, the accretion rate is more likely to reach the Eddington limit, leading to a transition from a mechanical feedback state to a radiative feedback mode \citep[e.g.,][]{churazov05,dubois12,russell13}. Therefore, the lack of evolution in the cavity fraction at high-$z$ may be due to the dominance of BCGs with radiatively efficient AGNs (see also \citealt[][]{Hlavacek_Larrondo_2013}). \subsection{Cooling luminosity versus Cavity power} One goal of this work is to test whether the AGN is able to compensate for the cooling losses of the ICM through heating by its radio jets. We used the cavity power ($P_{\rm cav}$) as a proxy for the mechanical power released by the AGN. The $P_{\rm cav}$ was estimated by dividing the total enthalpy of each cavity ($E_{\rm cav} = 4 p V$) by its age, where $p$ is the thermal pressure of the ICM at the projected location of the cavity, defined as the center of each ellipse, and $V$ is the cavity volume.
We assumed that the cavities have a prolate shape. The age of a cavity can be estimated from the buoyant rise time, the refill time, or the sound crossing time \citep{mcnamara00,birzan04,McNamara_2005}. For the purpose of this work, we used the buoyant rise time ($t_{\rm buoy}$) as the age of the cavity, as done in previous studies. The $t_{\rm buoy}$ corresponds to the time for the cavity to rise buoyantly at its terminal velocity, and is defined as $t_{\rm buoy} = R / v_{\rm t}= R \sqrt{SC/(2gV)}$, where $S$ is the cross-section of the bubble ($S=\pi r_{\rm b}^{2}$) and $C=0.75$ is the drag coefficient \citep{churazov01}. The local gravitational acceleration, $g$, was derived assuming hydrostatic equilibrium. We drew on top of each identified X-ray cavity an ellipse model, as done in previous works \citep[e.g.,][]{dong10,shin16}. Accordingly, the volume of the cavities is $V=4\pi r_{\rm b}^{2}r_{\rm a}/3$, where $r_{\rm a}$ is the semi-major axis and $r_{\rm b}$ is the semi-minor axis of each X-ray cavity (see Table~\ref{tab:cav}). We also include in Table~\ref{tab:cav} the significance of the detection of each cavity. {The significance was calculated as the ratio of the surface brightness of the surrounding background, measured within the same aperture size as the cavity, to that of the cavity. Certain cavities have an average significance of 2.2, while potential cavities have an average significance of 1.5.} \setlength{\tabcolsep}{1pt} \renewcommand{\arraystretch}{0.9} \begin{table} \caption{Cavity properties\label{tab:cav}} \resizebox{1.0\columnwidth}{!}{ \hspace{-9.0pt}\begin{tabular}{lccccccccc} \hline \small {Cluster name} & {Class} & {$r_{\rm a}$} & {$r_{\rm b}$} & {R} & {PA} & {$t_{\rm buoy}$} & {$P_{\rm cav}$} & {$L_{\rm cool}$} & {cav.
} \\ {} & {} & {(kpc)} & {(kpc)} & {(kpc)} & {(deg.)} & {( 10$^{7}$~yr)} & {( 10$^{44}$~erg~s$^{-1}$)} & {(10$^{44}$~erg~s$^{-1}$)} & {significance}\\ \hline\hline G021.09+33.25&C&4.1&2.6&6&140&0.9$^{+0.6} _{-3.3}$&0.9$^{+1.6} _{-0.2}$&10.1 & 2.2\\ G021.09+33.25&C&5.4&3.6&7&70&1.6$^{+2.2} _{-1.3}$&1.2$^{+1.1} _{-1.0}$&10.1 & 2.7 \\ \hline\\ \end{tabular}} \footnotesize{(This table is available in its entirety in machine-readable form.)} \end{table} Motivated by previous studies \citep[e.g.,][]{rafferty06}, we calculate the X-ray cooling luminosity, $L_{\rm cool}$, within the volume where the deprojected (isobaric) cooling time, $t_{\rm cool}$, equals 7.7~Gyr (see Table~\ref{tab:cav}). This timescale is representative of the epoch of the last major merger; since then, clusters have relaxed and a cooling flow could develop \citep{rafferty06}. For the cooling luminosity, we used $L_{\rm cool} = \int n_{\rm e} n_{\rm H} \Lambda(T,Z) dV$, where $\Lambda(T,Z)$ is the cooling function, which depends on the temperature, $T$, and metallicity, $Z$, of the hot gas. We use the cooling functions from \citet{OGnat07}, assuming a metallicity of $Z=1~Z_\odot$, since typical CC clusters have solar or nearly solar metallicity within their cores \citep[e.g.,][]{molendi01,mernier16}. In Figure~\ref{fig:lcool_pcav} we compare the mechanical power released by the AGN located in the central BCG with the cooling luminosity ($L_{\rm cool}$) of each cluster. For comparison, we include galaxy groups and elliptical galaxies from \citet{rafferty06}, as well as high-$z$ clusters from the SPT–SZ sample \citepalias{Hlavacek-Larrondo15}. Notably, our Planck sample reaches systematically lower mechanical powers than the high-$z$ SPT–SZ sample, reflecting our ability to detect smaller cavities. The smallest X-ray cavities detected in the SPT–SZ sample have sizes on the order of $\sim$10~kpc, whereas for the Planck clusters, which lie at lower $z$, the resolution is on the order of $\sim$2.5~kpc.
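The chain from measured cavity geometry to cavity power described above ($V$ from the fitted ellipse, $t_{\rm buoy}$ from the terminal velocity, $P_{\rm cav}=4pV/t_{\rm buoy}$) can be sketched as follows. The ICM pressure and gravitational acceleration below are assumed illustrative values, not measurements from our profiles.

```python
import math

KPC = 3.0857e21  # cm per kpc
YR = 3.156e7     # s per yr

def cavity_power(r_a_kpc, r_b_kpc, R_kpc, p_cgs, g_cgs, C=0.75):
    """Cavity power P_cav = 4 p V / t_buoy for a prolate cavity.
    t_buoy = R * sqrt(S C / (2 g V)), with cross-section S = pi r_b^2."""
    r_a, r_b, R = r_a_kpc * KPC, r_b_kpc * KPC, R_kpc * KPC
    V = 4.0 * math.pi * r_b**2 * r_a / 3.0   # prolate volume [cm^3]
    S = math.pi * r_b**2                     # cross-section [cm^2]
    t_buoy = R * math.sqrt(S * C / (2.0 * g_cgs * V))
    return 4.0 * p_cgs * V / t_buoy, t_buoy

# Geometry of the first cavity in Table 2; p ~ 2e-9 erg cm^-3 and
# g ~ 1e-7 cm s^-2 are assumed, typical cool-core values.
P, t = cavity_power(4.1, 2.6, 6.0, p_cgs=2e-9, g_cgs=1e-7)
print(f"t_buoy = {t / (1e7 * YR):.2f} x 10^7 yr, "
      f"P_cav = {P / 1e44:.2f} x 10^44 erg/s")
```

With these assumed ambient values the sketch returns a buoyancy time of order $10^{7}$~yr and a power of order $10^{44}$~erg~s$^{-1}$, the regime spanned by the tabulated cavities.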
The scatter in the $L_{\rm cool}$ versus $P_{\rm cav}$ relation is slightly smaller in our sample than in the high-$z$ SPT–SZ sample, by $\sim$20\%. In a fairer comparison with the high-$z$ SPT-SZ sample, taking only cavities with sizes $\gtrsim$10~kpc, we find that the scatter is 65\% smaller than in the SPT-SZ sample. The higher scatter in this relation for high-$z$ clusters is consistent with their being fueled mainly through wet mergers \citep{somboonpanyakul22}. {As shown in Fig.~\ref{fig:lcool_pcav}, $L_{\rm cool}$ and the $P_{\rm cav}$ released by the jets are positively correlated, indicating that the energy released by the AGN is sufficient to balance the radiative losses of the ICM within the cooling radius for most of the sources in the sample.} Some objects may require additional heating from another mechanism, such as thermal conduction and shocks \citep[e.g.,][]{pope05}. However, we expect that some of these clusters may be in a cooling phase, obviating the need for an additional heating source. AGN feedback is cyclic in nature and does not always require a balance between the cooling luminosity and the AGN heating \citep{mcnamara07}; this is likely reflected in the scatter of the $L_{\rm cool}$ versus $P_{\rm cav}$ relation. In that sense, the AGN power is variable, and objects change their $P_{\rm cav}$ depending on the phase of the AGN feedback cycle in which they are observed. Another source of scatter comes from clusters that are dynamically disturbed by either sloshing motions or mergers. These mechanisms move the hot gas from the central BCG out to large distances, producing a lower $L_{\rm cool}$ for a given $P_{\rm cav}$ value, as found for clusters with higher centroid shifts indicative of dynamically disturbed atmospheres (Olivares et al. in prep).
\section{Limitations}\label{sec:limitations} One limitation of this work is that we are probably missing cavities due to shallow X-ray observations, in particular in high-$z$ clusters \citep[e.g.,][]{diehl08,birzan12}. As pointed out by several studies, X-ray cavities are more easily detected in clusters with stronger cool cores, as the contrast between the depression and its surroundings is sharper, and it is more difficult to find bubbles in high-redshift clusters due to the lack of counts. The detectability of cavities also decreases with their radius \citep{enblin02}. More importantly, cavities with sizes below the resolution (e.g., cavity size $\leq2$~kpc for clusters at $z$=0.1) will be undetectable in such an analysis. As shown in Figure~\ref{fig:smaller_cav_size}, this effect worsens at high-$z$: for example, for clusters at $z$>0.5 only cavities with sizes $\geq 6$~kpc can be detected. We also note that sources with more than 2000 counts in the central region tend to have ``certain'' detected X-ray cavities (circle symbols). Therefore, we stress that deeper \textit{Chandra} follow-up observations are required to confirm the presence of any potential X-ray cavity, especially in high-$z$ clusters. Aside from the data quality, other effects may also interfere with the cavity detectability, such as orientation, location, and angular size (see \citealt{enblin02, diehl08, bruggen09} for more details). {As pointed out by \citet{enblin02}, the detectability of cavities decreases with distance to the center, and for a cavity moving in the plane of the sky, the contrast decreases slowly. Cavities that lie off the plane of the sky have reduced contrast and are therefore harder to detect. To quantify this projection effect, we assume a random distribution of the angle of the cavity relative to the plane of the sky, a typical cavity size of 10~kpc, and a typical beta profile for the ICM distribution ($r_{c}$=20~kpc, $\beta$=3/4).
At an average projected distance of 30~kpc, 20\%--30\% of the cavities would have a contrast below our detection limit and would have been missed in our study.} It should be noted that the $P_{\rm cav} - L_{\rm cool}$ relation is also affected by projection effects, likely introducing scatter. As pointed out by \citet{diehl08}, all the physical quantities entering the cavity power, $P_{\rm cav}$, such as density, temperature, and pressure, are measured at the projected distance from the cavity to the center rather than the true distance; the projected distance is a lower limit on the true distance. The pressure increases towards the center, leading to an overestimation of the ambient pressure at the cavity position. On the other hand, the cavity ages will be underestimated, as they are proportional to the cavity distance. Both effects bias the cavity power upwards. \section{Conclusions}\label{sec:conclusions} We have investigated the mechanical AGN feedback mechanism in central cluster galaxies using archival X-ray \textit{Chandra} observations of 164 Planck selected clusters to search for X-ray cavities. (i) Using several techniques to look for X-ray cavities, including inspection of the original image, a model-subtracted image, and an unsharp-masked image, we find 65 X-ray cavities in 29 systems out of 63 CC clusters. Among them, 12 systems have clearly detected cavities, whereas 17 have only potential depressions. Two potential cavities were also found in one NCC cluster. (ii) We measured a total detection fraction of X-ray cavities of $\sim$18\%, twice the detection rate of the high-$z$ SPT–SZ sample, indicating that clusters host radio-mode feedback 18\% of the time. Nevertheless, our detection fraction drops to 9\%, close to that of the high-$z$ SPT–SZ sample, when taking only cavities with sizes $\gtrsim$10~kpc to match the resolution of the SPT-SZ sample. We interpret this as a lack of evolution of the AGN feedback cycle across cosmic time.
(iii) We find that the AGN heating traced by the power of the X-ray cavities alone is able to balance the radiative losses of the ICM in our sample. Our sources have slightly lower cavity power per cavity than the high-$z$ massive clusters from the SPT-SZ sample, due to the smaller cavities detected in our sample. \noindent Future high-resolution X-ray observations from the \textit{Chandra} satellite and the upcoming Advanced X-ray Imaging Satellite (AXIS) will be needed to find more cavities in the faintest clusters and to confirm the discussed findings in high-$z$ clusters. \section*{Acknowledgments} This research has made use of software provided by the \textit{Chandra} X-ray Center (CXC) in the application package CIAO. V.O. and Y.S. were supported by NSF grant 2107711, Chandra X-ray Observatory grant GO1-22126X, and NASA grant 80NSSC21K0714. \section*{Data Availability} The \textit{Chandra} raw data used in this paper are available to download at the HEASARC Data Archive website\footnote{https://heasarc.gsfc.nasa.gov/docs/archive.htm}. \bibliographystyle{mnras} \bsp % \label{lastpage}
Title: Constraints on the transition redshift from the calibrated Gamma-ray Burst $E_{\rm p}$-$E_{\rm iso}$ correlation
Abstract: We constrain the deceleration-acceleration epoch, namely the transition redshift $z_{tr}$, adopting model-independent techniques that utilize a calibrated $E_{\rm p}$-$E_{\rm iso}$ correlation for gamma-ray bursts (GRBs). To do so, in addition to real data points, we employ up to $1000$ simulated observational Hubble data (OHD) points. We then calibrate the $E_{\rm p}$-$E_{\rm iso}$ correlation by means of the well-consolidate B\'ezier polynomial technique, interpolating OHD up to the second order. Once GRB data have been calibrated, we consider two strategies of cosmographic expansions, i.e., first we take a direct Hubble rate expansion around $z_{tr}$, and second the expansion of the deceleration parameter around the same redshift, but with a different order. Employing type Ia supernovae, baryonic acoustic oscillations and GRB data sets, from Monte Carlo analyses we infer tight constraints on $z_{tr}$ and the jerk parameters at $z=z_{tr}$, namely $j_{tr}$. Our results are extremely compatible with previous outcomes and confirm the $\Lambda$CDM predictions, being slightly different in terms of the jerk parameter. In this respect, we conjecture which extensions of the concordance paradigm are possible and we compare our findings with expectations provided by generic dark energy models.
https://export.arxiv.org/pdf/2208.13700
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} Gamma-ray Bursts: general -- cosmology: Dark energy -- cosmology: Observations \end{keywords} \section{Introduction} \label{intro} The greatest challenge of modern cosmology is understanding the nature of dark matter and dark energy \citep{reviewDE1,Capozziello:2019cav}, which constitute $\sim25\%$ and $\sim70\%$ of the universe's content, respectively \citep{Planck2018}. A complete comprehension of the dark sector, namely both dark matter and dark energy, would shed light on the $\Lambda$CDM paradigm \citep{luomuc,2022NewAR..9501659P}. The concordance paradigm involves six free parameters and, at small redshifts, can be easily approximated by a model with zero spatial curvature, a vacuum-energy cosmological constant $\Lambda$, inferred from quantum field fluctuations \citep{weinberg}, and a uniform pressureless matter fluid, with negligible radiation, neutrino and relic abundances \citep{Planck2018}. While matter decelerates the expansion, $\Lambda$ acts as a bizarre repulsive gravity \citep{2016PDU....12...56B} that, instead, accelerates the universe. Consequently, there exists an \emph{onset of cosmic acceleration} at a given \emph{transition time} and/or \emph{transition redshift} \citep[see][and references therein]{2022MNRAS.509.5399C}. A fully clear scheme for determining tight constraints on the transition time would suggest, in a model-independent way, whether dark energy is under the form of a cosmological constant or evolves in time\footnote{A cascade of alternative approaches has been proposed \citep{casc}, recently with a renewed aim to overcome cosmic tensions \citep[see, e.g.,][]{2020NatAs...4..196D}. However, some of the models in \citet{casc} with barotropic index $\omega>-1$ are now at odds with the local Hubble constant $H_0$ values.
The same occurs, at late times, for models where $\Lambda$ is replaced by scalar field theories \citep{2021PhRvD.103h1305B,2022JCAP...04..004L}.} \citep[see, e.g.,][]{Chevallier2001,Linder2003,King2014}. The transition time, or alternatively the transition redshift $z_{tr}$, marks the era of dark energy domination over matter, during which the universe enters a phase of accelerated expansion \citep{1973lsss.book.....H,2007PhRvD..76d1301M}. This quantity has been recently reinterpreted as a genuine \emph{kinematic quantity}\footnote{We can distinguish the kinematics of the universe from its dynamics once we assume metric expansions only, without directly involving Einstein's field equations \citep[see, e.g.,][]{Gruber:2013wua,Aviles:2014rma,2016IJGMM..1330002D,Capozziello:2017nbu}.}. Thus, constraining $z_{tr}$ would probe in a model-independent manner how dark energy evolves and what consequences can be argued at the background level, without postulating the functional form of the Hubble parameter \citep[see, e.g.,][]{2016IJGMM..1330002D,Aviles:2012ay}. The strategy of writing $z_{tr}$ in terms of a given cosmological model has been proposed in \citet{2007PhRvD..76d1301M}, whereas characterising $z_{tr}$ without invoking an underlying cosmological model is proposed in \citet{2005GReGr..37.1541V}, \citet{2007CQGra..24.5985C,2008CQGra..25p5013C,2008PhRvD..78f3501C}, \citet{2012PhRvD..86l3516A} and \citet{Capozziello:2014zda,Capozziello:2015rda}. Another severe issue is that strategies for bounding $z_{tr}$ usually involve low-redshift data, ignoring whether the transition time is compatible with high-redshift data. In this respect, gamma-ray bursts (GRBs) represent plausible sources that, if used for cosmological purposes, may trace the universe's expansion history in a more suitable way than low- and/or intermediate-redshift catalogs alone \citep[see, e.g.,][]{2021ApJ...908..181M,2022MNRAS.512..439C,2022PASJ..tmp...83D,2022arXiv220809272J}.
In this work, we thus propose to constrain the transition time using high-redshift GRB data, in order to check the compatibility of standard dark energy models at earlier times. To do so, we propose two fully model-independent strategies to reconstruct the Hubble rate around $z=z_{tr}$. In particular, the first procedure makes use of a direct Hubble expansion, whereas the second expands the deceleration parameter $q$, which by definition vanishes at the transition time, namely $q_{tr}=q(z_{tr})=0$. The corresponding cosmological distances are modeled by calibrating the $E_{\rm p}$--$E_{\rm iso}$ GRB correlation. To do so, even in this case we formulate a model-independent calibration of the observed Hubble rate data, employing the well-established technique based on B\'ezier polynomials. Hence, we calibrate our correlation adopting four hierarchies: first we utilize $32$ real data points and then $100$, $500$ and $1000$ simulated ones. We thereby infer how the error bars change as the number of data points increases. Consequences for the experimental analysis are also discussed, emphasizing the advantages of increasing the number of simulated data. In particular, we perform Markov chain Monte Carlo (MCMC) fits of our reconstructed Hubble expansions up to the jerk cosmographic parameter, inferred at $z=z_{tr}$, taking into account Pantheon type Ia supernovae (SNe Ia) and baryonic acoustic oscillations (BAO) together with the most recent catalog of GRB data, composed of $118$ long GRBs \citep{2021JCAP...09..042K}. Finally, we discuss our fitting procedures and show an overall compatibility of our results with previous literature and the $\Lambda$CDM paradigm. Inside the intervals of validity for $z_{tr}$, we notice that evolving dark energy is not fully excluded. Consequences for physical cosmologies are therefore discussed. The paper is structured as follows.
In Sec.~\ref{sezione2}, we describe the calibration based on the use of B\'ezier polynomials and the Amati relation. In the same section, we describe the method of simulating the observational data adopted as Hubble points. The corresponding analysis of the simulated data is reported. In Sec.~\ref{sec:3}, we describe the theoretical features of the two model-independent methods adopted to constrain the transition redshift. The experimental constraints from our simulations and the theoretical consequences of our results are described in Sec.~\ref{sec:4}. Finally, conclusions and perspectives are reported in Sec.~\ref{sezione5}. \section{Model-independent calibration of the Amati relation through B\'ezier polynomials}\label{sezione2} The most widely-used GRB correlation, named the $E_{\rm p}$--$E_{\rm iso}$ or Amati correlation \citep{AmatiDellaValle2013}, relates the peak and isotropic energies of GRBs. It reads \begin{equation} \label{Amatirel} \log\left(\frac{E_{\rm p}}{{\rm keV}}\right)= a \left[\log\left(\frac{E_{\rm iso}}{{\rm erg}}\right)-52\right] + b\,, \end{equation} and it is defined by means of two parameters, namely the slope $a$ and the intercept $b$. The correlation is also characterized by an extra source of variability $\sigma$ \citep{Dago2005}. In Eq.~\eqref{Amatirel}, $E_{\rm p}= E_{\rm p}^{\rm o}(1+z)$ is the observed peak energy $E_{\rm p}^{\rm o}$ of the $\gamma$-ray time-integrated $\nu F_\nu$ spectrum computed in the source rest-frame, whereas $E_{\rm iso}$ is the isotropic energy radiated in $\gamma$-rays, defined as \citep[see, e.g.,][for details]{2021Galax...9...77L} \begin{equation}\label{eiso} E_{\rm iso}\equiv 4\pi d_{\rm L}^2 S_{\rm b}(1+z)^{-1}\,. \end{equation} The observed bolometric GRB fluence $S_{\rm b}$ is the integral of the $\nu F_\nu$ spectrum in the rest-frame $1-10^4$~keV energy band, and the factor $(1+z)^{-1}$ transforms the GRB duration from the observer to the source cosmological rest-frame.
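Given a luminosity distance, the rest-frame quantities entering the correlation follow directly from the observables; a minimal sketch with illustrative numbers (not a real burst) is:

```python
import math

def rest_frame_energies(z, d_L_cm, S_b_cgs, Ep_obs_keV):
    """Isotropic-equivalent energy E_iso = 4 pi d_L^2 S_b / (1+z)
    and rest-frame peak energy E_p = E_p^o (1+z)."""
    E_iso = 4.0 * math.pi * d_L_cm**2 * S_b_cgs / (1.0 + z)
    E_p = Ep_obs_keV * (1.0 + z)
    return E_iso, E_p

# Assumed illustrative values: z = 1, d_L ~ 2.1e28 cm,
# fluence S_b = 1e-5 erg cm^-2, observed peak energy 200 keV.
E_iso, E_p = rest_frame_energies(1.0, 2.1e28, 1.0e-5, 200.0)
# Variables entering the Amati relation:
x = math.log10(E_iso) - 52.0   # log(E_iso/erg) - 52
y = math.log10(E_p)            # log(E_p/keV)
```

The circularity problem discussed next arises precisely because `d_L_cm` above must come from some cosmology; in the calibrated version it is replaced by the model-independent $d_{\rm cal}(z)$.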
We immediately notice that $E_{\rm iso}$ is determined once a background cosmology is imposed \emph{a priori} through the luminosity distance $d_{\rm L}$, leading to the well-known \emph{circularity problem}. To overcome this issue, we need to calibrate the $E_{\rm p}$--$E_{\rm iso}$ correlation by means of model-independent techniques. To this aim, we resort to the well-established strategy of fitting OHD data with a B\'ezier parametric curve \citep{2019MNRAS.486L..46A,LM2020}. We utilize the updated sample of $32$ OHD \citep[see][and references therein]{2022arXiv220513247K} and interpolate it by employing a B\'ezier parametric curve of order $n$ \begin{equation} \label{bezier1} H_n(x)=\sum_{i=0}^{n} g_\alpha\,\alpha_i\, h_n^i(x)\quad,\quad h_n^i(x)\equiv n!\frac{x^i}{i!} \frac{\left(1-x\right)^{n-i}}{(n-i)!}\,, \end{equation} where $g_\alpha=100$~km/s/Mpc is a scaling factor and $\alpha_i$ are the coefficients of the linear combination of the polynomials $h_n^i(x)$, positive-defined for $0\leq x\equiv z/z_{\rm O}^{\rm m}\leq1$, with $z_{\rm O}^{\rm m}$ representing the maximum redshift of the OHD catalog. By construction, we identify $\alpha_0\equiv h_0=H_0/g_\alpha$. \citet{LM2020} proved that only the function $H_2(z)$ (with $n=2$) is non-linear and monotonically growing with $z$, and can thus be extrapolated to $z>z_{\rm O}^{\rm m}$. Therefore, supported by the finding $\Omega_k=0.001\pm0.002$ from \citet{Planck2018}, we can safely assume $\Omega_k=0$, and the luminosity distance becomes cosmology-independent\footnote{It is worth mentioning that, according to recent claims on $\Omega_k$ representing at most $2$\% of the total universe energy density \citep[see, e.g.,][and references therein]{2018ApJ...864...80O}, the circularity problem would not be completely healed, but would only be restricted to the value of $\Omega_k$ through $d_{\rm L}$.} \begin{equation} \label{dlHz2} d_{\rm cal}(z)=c(1+z)\int_0^z\dfrac{dz'}{H_2(z')}\,.
\end{equation} We are now in a position to use $d_{\rm cal}(z)$ to calibrate the isotropic energy $E_{\rm iso}^{\rm cal}$ for each GRB fulfilling the Amati relation \begin{equation} \label{Eisocal} E_{\rm iso}^{\rm cal}(z)\equiv 4\pi d_{\rm cal}^2(z) S_{\rm b}(1+z)^{-1}\,, \end{equation} where the respective errors on $E_{\rm iso}^{\rm cal}$ depend upon the GRB observational systematics and the fitting procedure (see Sec.~\ref{sec:4}). \begin{table*} \centering \setlength{\tabcolsep}{1.em} \renewcommand{\arraystretch}{1.3} \begin{tabular}{lcccccccc} \hline\hline OHD data & \vline & $\alpha_0$ & $\alpha_1$ & $\alpha_2$ & \vline & $\delta\alpha_0^{\rm min}$--$\delta\alpha_0^{\rm max}$ & $\delta\alpha_1^{\rm min}$--$\delta\alpha_1^{\rm max}$ & $\delta\alpha_2^{\rm min}$--$\delta\alpha_2^{\rm max}$ \\ \hline $32$ real & \vline & $0.678^{+0.050}_{-0.049}$ & $1.030^{+0.152}_{-0.152}$ & $2.090^{+0.190}_{-0.198}$ & \vline & $0.072$--$0.074$ & $0.148$--$0.148$ & $0.091$--$0.095$ \\ $100$ mock & \vline & $0.713^{+0.036}_{-0.035}$ & $0.995^{+0.080}_{-0.083}$ & $1.976^{+0.076}_{-0.072}$ & \vline & $0.049$--$0.050$ & $0.083$--$0.080$ & $0.036$--$0.039$ \\ $500$ mock & \vline & $0.669^{+0.013}_{-0.014}$ & $1.051^{+0.036}_{-0.037}$ & $2.067^{+0.037}_{-0.035}$ & \vline & $0.021$--$0.019$ & $0.035$--$0.034$ & $0.017$--$0.019$ \\ $1000$ mock & \vline & $0.672^{+0.009}_{-0.009}$ & $1.040^{+0.024}_{-0.025}$ & $2.095^{+0.024}_{-0.025}$ & \vline & $0.013$--$0.013$ & $0.024$--$0.023$ & $0.012$--$0.011$ \\ \hline \end{tabular} \caption{Best-fit B\'ezier coefficients $\alpha_i$ and relative maximum and minimum error bars of the best-fit parameters of the function $H_2(z)$.} \label{tab:Bez} \end{table*} \subsection{Method of simulating data} In this new era of precision cosmology, we expect improvements in both the quantity and quality of data.
Hence, to explore the effect of a large number of data points, we simulate Hubble data as suggested by \citet{2011ApJ...730...74M} in the redshift range $0 < z < 2$. Below, the steps of this simulation are briefly outlined. \begin{itemize} \item[1.] First, we choose the flat $\Lambda$CDM model as a fiducial model with matter density parameter $\Omega_{\rm m0} = 0.3153\pm0.0073$ and Hubble constant $H_0 = (67.36\pm0.54)$~km/s/Mpc \citep{Planck2018}. \item[2.] Next, we define the deviation $\Delta H$ between the simulated and the fiducial Hubble data points, \emph{i.e.}, $$ H_{\rm sim}(z) = H_{\rm fid} + \Delta H\,.$$ \item[3.] Then, we plot the uncertainties of the observed $H(z)$ data points against $z$. After removing $7$ outliers, we interpolate the errors with two lines: $\sigma_+(z) = 16.577 z + 18.440$ for the upper errors and $\sigma_-(z) = 7.402 z + 2.675$ for the lower errors. Hence, to simulate the Hubble data points, we define $\sigma_{\rm m}$ as the mean of $\sigma_+$ and $\sigma_-$. \item[4.] By assuming a Gaussian distribution $\mathcal G$, the random uncertainty $\sigma_{\rm r}(z)$ in simulating the $H(z)$ points can be estimated as $\mathcal G( \sigma_{\rm m}, \sigma_{\rm v})$, where the variance $\sigma_{\rm v}=(\sigma_+ -\sigma_-)/4$ ensures that $\sigma_{\rm r}(z)$ lies between $\sigma_+$ and $\sigma_-$ with $95.4\%$ probability. \item[5.] Finally, the deviation $\Delta H$ is obtained by assuming the Gaussian distribution $\mathcal G[0,\sigma_{\rm r}(z)]$. \item[6.] The above steps generate the simulated Hubble data points $H_{\rm sim}$ with the associated uncertainty $\sigma_{\rm r}(z)$ at redshifts in the range $0 < z < 2$. \end{itemize} \subsection{Direct calibration} We are now in a position to estimate the coefficients $\alpha_i$ ($0\leq i\leq2$) of the B\'ezier curve $H_2(z)$.
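The mock-data recipe of steps 1--6 above can be condensed into a short routine. This is a sketch under the stated fiducial parameters and error-envelope lines; the uniform redshift sampling and the clipping of negative error draws are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
H0, Om = 67.36, 0.3153  # fiducial flat LCDM (Planck 2018)

def H_fid(z):
    """Fiducial Hubble rate in km/s/Mpc."""
    return H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)

def simulate_OHD(n_points, z_min=0.0, z_max=2.0):
    z = rng.uniform(z_min, z_max, n_points)
    sig_plus = 16.577 * z + 18.440        # upper error envelope (step 3)
    sig_minus = 7.402 * z + 2.675         # lower error envelope (step 3)
    sig_mean = 0.5 * (sig_plus + sig_minus)
    sig_var = (sig_plus - sig_minus) / 4.0
    # random uncertainty within the envelopes at ~95.4% (step 4);
    # take the absolute value to guard against rare negative draws
    sig_r = np.abs(rng.normal(sig_mean, sig_var))
    # deviation Delta H drawn from G[0, sigma_r] (step 5)
    H_sim = H_fid(z) + rng.normal(0.0, sig_r)
    return z, H_sim, sig_r

z, H_sim, sig = simulate_OHD(100)
```

Each call produces one mock catalog $\{z_k, H_{{\rm sim},k}, \sigma_{{\rm r},k}\}$ of the desired size (100, 500 or 1000 points).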
Assuming Gaussian distributed errors, the OHD log-likelihood function reads \begin{equation} \label{loglikeOHD} \ln \mathcal{L}_{\rm O} = -\frac{1}{2}\sum_{k=1}^{N_{\rm O}}\left\{\left[\dfrac{H_k-H_2(z_k)}{\sigma_{H,k}}\right]^2 + \ln(2\pi\,\sigma_{H,k}^2)\right\}\,, \end{equation} where $N_{\rm O}$ is the size of the OHD catalog, with values $H_k$ and attached errors $\sigma_{H,k}$. We consider various OHD catalogs: $32$ real measurements, and $100$, $500$ and $1000$ mock data points. The best-fit curves approximating the various OHD catalogs are portrayed in Fig.~\ref{fig:Bez}, where a comparison with the predictions of the $\Lambda$CDM paradigm \citep{Planck2018} is also shown. The best-fit coefficients $\alpha_i$ used in Fig.~\ref{fig:Bez} are summarized in Table~\ref{tab:Bez} and displayed in the contour plots of Fig.~\ref{fig:Bez_cont}. Once the error bars are evaluated, we compute how the errors evolve as the number of data points increases. In particular, in Fig.~\ref{fig:alphadecrease} we plot the decrease of the relative errors $\delta\alpha_i=\sigma_{\alpha,i}/\alpha_i$, with $0\leq i\leq2$. Each point corresponds to the mean relative error, $r_i\equiv(\delta\alpha_i^{\rm min}+\delta\alpha_i^{\rm max})/2$, at the corresponding value of the natural logarithm of the number of points, namely $N=\{32,\,100,\,500,\,1000\}$, as shown in Table~\ref{tab:Bez}.
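The second-order Bézier interpolation and its Gaussian log-likelihood can be sketched as follows. This is a minimal illustration, not the authors' pipeline; the maximum OHD redshift is set to 2.0 for simplicity, and the coefficients are the 32-point real-catalog best fits from Table 1.

```python
import numpy as np

g_alpha = 100.0  # scaling factor in km/s/Mpc

def H2(z, alphas, z_max):
    """Second-order Bezier curve: H_2 = g_alpha * sum_i alpha_i h_2^i(x),
    with Bernstein basis h_2^i(x) and x = z / z_max."""
    x = np.asarray(z) / z_max
    basis = np.array([(1 - x)**2, 2 * x * (1 - x), x**2])
    return g_alpha * np.dot(alphas, basis)

def log_likelihood(alphas, z, H_obs, sigma_H, z_max):
    """Gaussian log-likelihood of the OHD catalog for given coefficients."""
    model = H2(z, alphas, z_max)
    return -0.5 * np.sum(((H_obs - model) / sigma_H)**2
                         + np.log(2 * np.pi * sigma_H**2))

# Best-fit coefficients from the 32-point real OHD catalog (Table 1)
alphas = np.array([0.678, 1.030, 2.090])
print(H2(0.0, alphas, z_max=2.0))  # H_0 = g_alpha * alpha_0
```

Maximizing this log-likelihood (e.g. within an MCMC sampler) over $(\alpha_0,\alpha_1,\alpha_2)$ reproduces the fits whose error bars shrink with the catalog size, as quantified next.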
The functional behavior of the error decrease is written as \begin{equation} r_i(\ln N) = \mathcal A_{i} + \mathcal B_{i} \ln N + \mathcal C_{i} (\ln N)^2\,, \end{equation} where the free coefficients have been evaluated with the \texttt{FindFit} command in \texttt{Wolfram Mathematica} and read \begin{subequations} \begin{align} \mathcal A_{i}&\simeq\{0.177,\,0.487,\,0.391\},\\ \mathcal B_{i}&\simeq\{-0.036,\,-0.129,\,-0.119\},\\ \mathcal C_{i}&\simeq\{-0.002,\,0.009,\,0.009\}\,, \end{align} \end{subequations} where the first values of $\mathcal A_i$, $\mathcal B_i$ and $\mathcal C_i$ refer to $r_0$ and so forth. All the aforementioned cases show that the decreases are well approximated by second-order polynomials in $\ln N$, though the second-order term is subdominant with respect to the first-order one. The steepest decrease is associated with $\delta\alpha_1$, whereas the largest uncertainty is attached to $\delta\alpha_2$. This treatment shows that the error bars tend to smaller values as simulated data are included. The large discrepancy between the results obtained from the real catalog and those from the simulated ones is a clear indication that it is not suitable to combine simulated and real data. For this reason, we split the overall analysis by considering the values reported in Table~\ref{tab:Bez}. \section{Model-independent Hubble rate and transition redshifts}\label{sec:3} The transition redshift can be constrained adopting two model-independent procedures. The first strategy is constructed by expanding the Hubble rate around $z=z_{tr}$, while the second approach is analogous but expands the deceleration parameter directly. The two methods are reported below \citep[see][for details]{2022MNRAS.509.5399C}. \begin{itemize} \item[--] \emph{DHE method}.
Expanding the Hubble rate, we immediately get \begin{equation} \label{primometodoHespanso} H=H_{tr}+H_{tr}^\prime(z-z_{tr})+\frac{1}{2}H_{tr}^{''}(z-z_{tr})^2+\mathcal O(z-z_{tr})^3\,, \end{equation} where the additional normalization requirement $H(z=0)\equiv H_0$ relates the cosmographic parameters to one another. In particular, since%
\begin{equation} \label{Hpunto} \dot{H} = -H^2 (1 + q)\quad,\quad \ddot{H} = H^3 (j + 3q + 2)\,, \end{equation} and $a\equiv(1+z)^{-1}$, we obtain \begin{equation} H^{''}_{tr}=\frac{H_{tr}j_{tr}}{(1+z_{tr})^2}\quad,\quad H^\prime_{tr}=\frac{H_{tr}}{1+z_{tr}}\,, \end{equation} with \begin{equation} H_{tr} = \frac{2 H_0 (1 + z_{tr})^2}{ 2 + z_{tr} (2 + j_{tr} z_{tr})}\,, \end{equation} providing as final result the normalized Hubble rate that reads \begin{equation}\label{2primometodoHespanso2} \mathcal E_{\rm DHE}(z)=1+\frac{2 z (1 + z_{tr} - j_{tr} z_{tr})+j_{tr} z^2}{2 + z_{tr} (2 + j_{tr} z_{tr})}\,. \end{equation} \item[--] \emph{DDPE method}. This second approach directly expands the deceleration parameter up to a given order around $z_{tr}$. We here limit the Taylor expansion to first order, obtaining \begin{equation}\label{eq:exp q} q\simeq q_{tr}+q_{tr}^\prime(z-z_{tr})+\mathcal O(z-z_{tr})^2\,, \end{equation} where we recall that $q_{tr}=0$, noting that $q_0$ can be obtained by simply setting $z=0$ in Eq.~\eqref{eq:exp q}. In particular, the deceleration parameter must be negative for $z\leq z_{tr}$ to guarantee the current acceleration, since the transition occurs when $q$ passes from a positive to a negative value. The Hubble rate is thus needed, so we substitute Eq.~\eqref{eq:exp q} inside the second Friedmann equation\footnote{Remarkably, expanding $H$ around $z=0$ and plugging in Eq.~\eqref{eq:exp q} with $z=0$ would lead to worse computational results.}, fixing $H(z=0)=H_0$, to obtain \begin{equation} \label{Hmetodo2ordine2e3} \mathcal E_{\rm DDPE}(z)=e^{\frac{j_{tr} z}{1 + z_{tr}}} (1 + z)^{1 - j_{tr}}\,.
\end{equation} \end{itemize} \begin{table*} \centering \setlength{\tabcolsep}{.6em} \renewcommand{\arraystretch}{1.6} \begin{tabular}{lccccccc} \hline\hline Model & \vline & $a$ & $b$ & $\sigma$ & $h_0$ & $z_{\rm tr}$ & $j_{\rm tr}$ \\ \hline DHE & \vline & $0.714^{+0.027 (+0.068)}_{-0.034 (-0.066)}$ & $1.814^{+0.051 (+0.097)}_{-0.041 (-0.096)}$ & $0.294^{+0.014 (+0.034)}_{-0.023 (-0.038)}$ & $0.691^{+0.009 (+0.018)}_{-0.007 (-0.018)}$ & $0.739^{+0.065 (+0.166)}_{-0.089 (-0.140)}$ & $1.028^{+0.189 (+0.345)}_{-0.122 (-0.271)}$ \\ \hline DDPE & \vline & $0.709^{+0.033 (+0.074)}_{-0.030 (-0.062)}$ & $1.817^{+0.043 (+0.092)}_{-0.047 (-0.100)}$ & $0.286^{+0.021 (+0.045)}_{-0.012 (-0.029)}$ & $0.691^{+0.008 (+0.017)}_{-0.009 (-0.017)}$ & $0.831^{+0.064 (+0.155)}_{-0.063 (-0.127)}$ & $1.041^{+0.138 (+0.266)}_{-0.108 (-0.233)}$ \\ \hline\hline \end{tabular} \caption{Best-fit coefficients of the calibrated $E_{\rm p}$--$E_{\rm iso}$ correlation and best-fit parameters of the reconstructed Hubble rate, for DHE and DDPE methods.} \label{tab:fits} \end{table*} \section{Experimental constraints}\label{sec:4} Here, we perform MCMC analyses to find out the set of parameters entering the total log-likelihood function \begin{equation} \ln{\mathcal{L}} = \ln{\mathcal{L}_{\rm G}} + \ln{\mathcal{L}_{\rm S}} + \ln{\mathcal{L}_{\rm B}}\,. \end{equation} Below, we describe each contribution and for each of them we assume Gaussian distributed errors. \begin{itemize} \item[(a)]\emph{GRB log-likelihood}. Once the GRB catalog is calibrated through the OHD interpolation, the GRB log-likelihood $\ln{\mathcal{L}_{\rm G}}$ is determined as a nested log-likelihood \citep{LM2020} that combines two sub-samples: \begin{itemize} \item[i)] a \textit{calibrator sample} of GRBs encompassing OHD observations up to $z^{\rm m}_{\rm O}$, used to determine the correlation coefficients, and \item[ii)] a \textit{cosmological sample} of the whole GRB data set, used to estimate the free parameters of the model. 
\end{itemize} Therefore, the total GRB log-likelihood function is given by \begin{equation} \label{a0} \ln \mathcal{L}_{\rm G} = \ln \mathcal{L}_{\rm G}^{\rm cal} + \ln \mathcal{L}_{\rm G}^{\rm cos}\,. \end{equation} The calibration log-likelihood is given by \begin{equation} \label{a1} \ln \mathcal{L}_{\rm G}^{\rm cal} = -\frac{1}{2}\sum_{k=1}^{N_{\rm cal}}\left\{\left[\dfrac{Y_k-Y(z_k)}{\sigma_{ Y,k}}\right]^2 + \ln(2\pi\,\sigma_{Y,k}^2)\right\}\,,\\ \end{equation} where $N_{\rm cal}=65$ and \begin{subequations} \begin{align} Y_{\rm k} \equiv&\, \log E_{{\rm p},k}\,,\\ \label{a2} Y(z_k)\equiv&\, a [E_{\rm iso}^{\rm cal}(z_k)-52] + b\,,\\ \sigma_{Y,k}^2 \equiv&\, \sigma_{\log E_{{\rm p},k}}^2 + a^2\sigma_{\log[E_{\rm iso}^{\rm cal}(z_k)]}^2+\sigma^2\,. \end{align} \end{subequations} The cosmological log-likelihood is given by \begin{equation} \label{a4} \ln \mathcal{L}_{\rm G}^{\rm cos} = -\frac{1}{2}\sum_{k=1}^{N_{\rm cos}}\left\{\left[\dfrac{\mu_k-\mu_{\rm th}(z_k)}{\sigma_{\mu,k}}\right]^2 + \ln (2 \pi \,\sigma_{\mu,k}^2)\right\}\,, \end{equation} where we have $N_{\rm cos}=118$ and \begin{subequations} \begin{align} \label{a5} \mu_k\equiv& \,\frac{5}{2 a}\left[\log E_{{\rm p},k} - a\log\left(\frac{4\pi S_{{\rm b},k}}{1+z_k}\right) - b\right]\,,\\ \sigma_{\mu,k}^2 \equiv& \,\frac{25}{4 a^2}\left(\sigma_{\log E_{{\rm p},k}}^2 + a^2 \sigma_{\log S_{{\rm b},k}}^2+\sigma^2\right)\,, \end{align} \end{subequations} where the theoretical distance modulus is given by \begin{equation} \label{muz} \mu_{\rm th}(z)= 25+5\log\left[\frac{d_{\rm L}(z)}{{\rm Mpc}}\right]\,. \end{equation} Assuming $\Omega_k=0$, the luminosity distance of the model reads \begin{equation} \label{dlEz} d_{\rm L}(z)=\frac{c}{H_0}(1+z)\int_0^z\dfrac{dz'}{\mathcal E(z')}\,, \end{equation} with $\mathcal E(z)$ given by DHE or DDPE models. \item[(b)] \emph{SN log-likelihood}. The Pantheon data set is the most updated SN Ia sample composed of $1048$ sources \citep{2018ApJ...859..101S}. 
In \citet{2018ApJ...853..126R}, such data points are provided in the form of an $\mathcal E(z)^{-1}$ catalog at $N_{\rm S}=6$ redshifts, chosen to best represent the whole SN Ia sample and obtained by assuming a flat universe prior. The SN log-likelihood function is given by \begin{align} \nonumber \ln \mathcal{L}_{\rm S} = & -\frac{1}{2}\sum_{k=1}^{N_{\rm S}} \left[\mathcal E_k^{-1} - \mathcal E(z_k)^{-1} \right]^{\rm T} \mathbf{C}_{\rm SN}^{-1} \left[\mathcal E_k^{-1} - \mathcal E(z_k)^{-1} \right]\\ \label{loglikeSN} & -\frac{1}{2}\sum_{k=1}^{N_{\rm S}} \ln \left(2 \pi |\det\mathbf{C}_{\rm SN}^{-1}| \right)\,, \end{align} where $\mathcal E_k^{-1}$ are the values measured from SNe Ia and $\mathbf{C}_{\rm SN}$ is the covariance matrix obtained from the correlation matrix in \citet{2018ApJ...853..126R}. \item[(c)] \emph{BAO log-likelihood}. We select $N_{\rm B}=8$ uncorrelated BAO angular distance measurements \citep[see, e.g.,][]{LM2020}, which are not explicitly dependent upon $\Omega_{\rm m0}$, \emph{i.e.}, \begin{equation} \label{eq:DV} \Delta(z) \equiv r_{\rm s} \left[\frac{H(z)}{cz}\right]^\frac{1}{3}\left[\frac{\left(1+z\right)}{d_{\rm L}(z)}\right]^\frac{2}{3}\,, \end{equation} where $r_{\rm s}$ is the comoving sound horizon at the baryon drag redshift $z_\text{d}$, calibrated through CMB data for a given cosmological model\footnote{In this sense BAO are slightly model-dependent.}. Hereafter we consider $r_{\rm s}=(147.21\pm0.48)$~Mpc \citep{Planck2018}. The corresponding log-likelihood is given by \begin{equation} \label{loglikebao} \ln \mathcal{L}_{\rm B} = -\frac{1}{2}\sum_{k=1}^{N_{\rm B}}\left\{\left[\frac{\Delta_k - \Delta(z_k)}{\sigma_{\Delta,k}}\right]^2 + \ln (2 \pi \,\sigma_{\Delta,k}^2)\right\}\,. \end{equation} \end{itemize} In view of the results displayed in Fig.~\ref{fig:Bez} and summarized in Table~\ref{tab:Bez}, it is clear that the $1000$ mock OHD data well reproduce the results obtained from the $32$ real measurements, of course with greater accuracy.
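The two reconstructed Hubble rates of Sec.~\ref{sec:3}, Eqs.~\eqref{2primometodoHespanso2} and \eqref{Hmetodo2ordine2e3}, can be sketched numerically as below; the $(z_{\rm tr},\,j_{\rm tr})$ values are taken from Table~\ref{tab:fits} purely for illustration.

```python
import numpy as np

def E_DHE(z, z_tr, j_tr):
    # Normalised Hubble rate from the second-order expansion of H(z)
    # around z_tr (DHE method).
    return 1.0 + (2*z*(1 + z_tr - j_tr*z_tr) + j_tr*z**2) / (2 + z_tr*(2 + j_tr*z_tr))

def E_DDPE(z, z_tr, j_tr):
    # Normalised Hubble rate from the first-order expansion of q(z)
    # around z_tr (DDPE method), with q_tr = 0.
    return np.exp(j_tr*z/(1 + z_tr)) * (1 + z)**(1 - j_tr)
```

Both satisfy $\mathcal E(0)=1$ by construction, and for the DDPE form the deceleration parameter $q=-1+(1+z)\,\mathcal E'/\mathcal E$ vanishes at $z=z_{\rm tr}$, as it must.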
Therefore, we decided to calibrate GRBs using the $1000$ mock OHD measurements. We performed MCMC fits, working out the Metropolis-Hastings algorithm and using the DHE and DDPE methods outlined in Sec.~\ref{sec:3}, by means of a modified, freely available \texttt{Wolfram Mathematica} code \citep{codice}. The results summarized in Table~\ref{tab:fits} are displayed in the contour plots of Figs.~\ref{fig:Bez_cont2}--\ref{fig:Bez_cont3}. \subsection{Theoretical discussion of our results} Our experimental results show good compatibility among the parameters $a$, $b$ and $\sigma$ of the Amati calibration. The model-independent calibration, based on B\'ezier polynomials, provides viable constraints that are quite compatible with previous findings obtained in the literature \citep[see, e.g.,][]{2019MNRAS.486L..46A,LM2020,2021JCAP...09..042K}. Moreover, the bounds show smaller error bars, consequently also reducing the relative errors on the free coefficients needed to calibrate GRBs. The values of the Hubble rate today are stable and provide the same mean outcomes for both the DHE and DDPE methods. Errors are comparable between the two approaches, being surprisingly small even at the $2\sigma$ confidence level. The corresponding Hubble constant values are compatible, within the error bars, with Planck measurements \citep{Planck2018}, but far from Riess expectations \citep{2019ApJ...876...85R}, leaving the $H_0$ tension unsolved. The transition redshift appears larger and closer to $z\simeq1$ for the DDPE method, but highly compatible with the predictions of the standard cosmological model. Error bars are large enough at the $2\sigma$ confidence level to recover the $\Lambda$CDM expectations even considering the DDPE method. Surprisingly, the values of $j_{tr}$ obtained from our experimental analyses are extremely close to the value $j=1$ predicted by the standard cosmological model.
This shows that any possible departures from the concordance paradigm are forced to be extremely small, leading to a very weakly evolving dark energy term even when high-redshift data sets, such as GRB data, are involved. To better illustrate this prediction, we consider the following generic Hubble rate \begin{equation}\label{hz} H(z)=H_0\sqrt{\Omega_{m0}(1+z)^{3}+\Omega_{DE}F(z)}\,, \end{equation} where the generic function $F(z)$ fulfills the following requirements%
\begin{equation} \left\{ \begin{array}{lll} F(z)\rightarrow 1 \quad&,\quad & \hbox{$z=0$}\\ \Omega_{DE}\equiv1-\Omega_{m0}\quad&,\quad & \hbox{$\forall z$}\\ \Omega_{DE}F(z)\gtrsim\Omega_{m0}(1+z)^3\quad&,\quad & \hbox{$z\rightarrow 0$} \end{array} \right. \end{equation} that guarantee $H=H_0$ at $z=0$ and a dark energy term dominating over matter at late times. Considering Eq.~\eqref{hz}, we immediately get \begin{subequations} \begin{align} \label{qudef} q(z)&=-1+\frac{(1+z)\left[3\Omega_{m0}(1+z)^{2}+\Omega_{DE}F^\prime(z)\right]}{2\left[\Omega_{m0}(1+z)^{3}+\Omega_{DE}F(z)\right]}\,,\\ \label{jeidef} j(z)&=1+\frac{\Omega_{DE}(1+z)\left[-2 F^\prime(z)+(1+z)F^{\prime\prime}(z)\right]}{2\left[\Omega_{m0}(1+z)^{3}+\Omega_{DE}F(z)\right]}\,. \end{align} \end{subequations} At the transition redshift, Eqs.~\eqref{qudef}--\eqref{jeidef} lead to \begin{subequations} \begin{align} \label{Fpdef} F^\prime_{\rm tr}&=-\frac{\Omega_{m0}(1+z_{\rm tr})^2}{1-\Omega_{m0}} + \frac{2 F_{\rm tr}}{1+z_{\rm tr}}\,,\\ \label{Fppdef} F^{\prime\prime}_{\rm tr}&=-\frac{2\Omega_{m0}(2-j_{\rm tr})(1+z_{\rm tr})}{1-\Omega_{m0}} + \frac{2 (1+j_{\rm tr}) F_{\rm tr}}{(1+z_{\rm tr})^2}\,, \end{align} \end{subequations} where we imposed $q_{\rm tr}=0$ and set $F_{\rm tr}=F(z_{\rm tr})$, $F^\prime_{\rm tr}=F^\prime(z_{\rm tr})$ and $F^{\prime\prime}_{\rm tr}=F^{\prime\prime}(z_{\rm tr})$. To fully constrain the dark energy evolution, we need a further equation for $F_{\rm tr}$.
Using the B\'ezier interpolation of the Hubble rate and Eq.~\eqref{hz}, and imposing that $H_2(z_{\rm tr})\equiv H(z_{\rm tr})$, we get \begin{subequations} \begin{align} \label{Fdef} F_{\rm tr}&=\frac{H_2^2(z_{\rm tr})-H_0^2\Omega_{m0}(1+z_{\rm tr})^3}{H_0^2(1-\Omega_{m0})}\,,\\ \label{Fpdef2} F^\prime_{\rm tr}&=\frac{2H_2^2(z_{\rm tr})-3H_0^2\Omega_{m0}(1+z_{\rm tr})^3}{H_0^2(1-\Omega_{m0})(1+z_{\rm tr})}\,,\\ \label{Fppdef2} F^{\prime\prime}_{\rm tr}&=\frac{2H_2^2(z_{\rm tr})(1+j_{\rm tr})-6H_0^2\Omega_{m0}(1+z_{\rm tr})^3}{H_0^2(1-\Omega_{m0})(1+z_{\rm tr})^2}\,. \end{align} \end{subequations} Taking the constraints in Table~\ref{tab:fits}, obtained for both the DHE and DDPE methods, and the value of $\Omega_{m0}$ from \citet{Planck2018}, we obtain the constraints on $F_{\rm tr}$, $F^\prime_{\rm tr}$ and $F^{\prime\prime}_{\rm tr}$ summarized in Table~\ref{tab:Ftr}. \begin{table} \centering \setlength{\tabcolsep}{.5em} \renewcommand{\arraystretch}{1.6} \begin{tabular}{lccccc} \hline\hline Model & \vline & $F_{\rm tr}$ & $F^\prime_{\rm tr}$ & $F^{\prime\prime}_{\rm tr}$\\ \hline DHE & \vline & $0.874^{+0.286}_{-0.379}$ & $-0.382^{+0.407}_{-0.446}$ & $-0.380^{+0.666}_{-0.756}$ \\ \hline DDPE & \vline & $0.850^{+0.311}_{-0.309}$ & $-0.616^{+0.398}_{-0.408}$ & $-0.584^{+0.586}_{-0.550}$ \\ \hline\hline \end{tabular} \caption{Constraints on the dark energy evolution through $F_{\rm tr}$, $F^\prime_{\rm tr}$ and $F^{\prime\prime}_{\rm tr}$, for the DHE and DDPE methods.} \label{tab:Ftr} \end{table} It appears evident from Table~\ref{tab:Ftr} that the value of $F(z_{tr})$ is compatible with the constraint $F(z_{tr})=1$ even at the $1\sigma$ confidence level. However, the first derivative evaluated at the transition time is not compatible with zero, whereas the second derivative is. This implies that our expectations agree with those of a slightly evolving dark energy term that nevertheless departs from a genuine $\Lambda$CDM scenario, albeit recovering it at small redshifts.
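As a consistency cross-check, the closed forms in Eqs.~\eqref{Fpdef2}--\eqref{Fppdef2} should follow from substituting Eq.~\eqref{Fdef} into Eqs.~\eqref{Fpdef}--\eqref{Fppdef}. A short symbolic sketch (here `H2` denotes the B\'ezier-interpolated value $H_2(z_{\rm tr})$):

```python
import sympy as sp

H0, H2, Om, ztr, jtr = sp.symbols('H0 H2 Omega_m0 z_tr j_tr', positive=True)

# F_tr from matching H_2(z_tr) to Eq. (hz), i.e. Eq. (Fdef):
Ftr = (H2**2 - H0**2*Om*(1 + ztr)**3) / (H0**2*(1 - Om))

# F'_tr and F''_tr from q_tr = 0 and j(z_tr) = j_tr, Eqs. (Fpdef)-(Fppdef):
Fp  = -Om*(1 + ztr)**2/(1 - Om) + 2*Ftr/(1 + ztr)
Fpp = -2*Om*(2 - jtr)*(1 + ztr)/(1 - Om) + 2*(1 + jtr)*Ftr/(1 + ztr)**2

# Closed forms quoted in Eqs. (Fpdef2)-(Fppdef2):
Fp_quoted  = (2*H2**2 - 3*H0**2*Om*(1 + ztr)**3) / (H0**2*(1 - Om)*(1 + ztr))
Fpp_quoted = (2*H2**2*(1 + jtr) - 6*H0**2*Om*(1 + ztr)**3) / (H0**2*(1 - Om)*(1 + ztr)**2)

assert sp.simplify(Fp - Fp_quoted) == 0
assert sp.simplify(Fpp - Fpp_quoted) == 0
```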
This finding may be due to the need of a further constraint, which we imposed by means of B\'ezier polynomials. It is likely that imposing a tighter bound on $F(z)$ would imply tighter constraints, probably more compatible with the standard cosmological model. To better illustrate this fact, we plot in Fig.~\ref{fig:Fpl} the behaviors of $F(z)$, $F^\prime(z)$ and $F^{\prime\prime}(z)$ for the $\Lambda$CDM paradigm and its simplest extensions with evolving dark energy, $\omega$CDM and CPL \citep{Chevallier2001,Linder2003}, for which we have \begin{equation} \left\{ \begin{array}{lll} F(z)= 1 \quad&,\quad & \hbox{$\Lambda$CDM}\\ F(z)=(1+z)^{3(1+w_0)}\quad&,\quad & \hbox{$\omega$CDM}\\ F(z)=(1+z)^{3(1+w_0+w_1)}e^{-\frac{3w_1 z}{1+z}}\quad&,\quad & \hbox{CPL} \end{array} \right.\,. \end{equation} The parameter $w_0$ of the $\omega$CDM model and the parameters $w_0$ and $w_1$ of the CPL model are taken from \citet{2021PhRvD.104b3520B}. Fig.~\ref{fig:Fpl} shows that, at the $1\sigma$ confidence level, the DHE approach is more compatible with the standard cosmological model. Clearly, the DDPE method disfavors the $\Lambda$CDM paradigm more, because it directly expands $q(z)$ and then derives the corresponding $H(z)$, leading to larger errors induced by the more complex structure of the method itself. Concluding, the two methods do not exclude a slightly evolving dark energy, albeit it appears unlikely, especially from the DHE approach, that dark energy is not in the form of a cosmological constant, implying that GRBs agree with the transition redshift predicted by the standard cosmological scenario. \section{Conclusions and perspectives}\label{sezione5} In this paper we derived model-independent constraints on the dark energy evolution through the evaluation of the transition redshift $z_{\rm tr}$.
The strategy we have pursued utilizes the DHE and DDPE procedures introduced in \citet{2022MNRAS.509.5399C}, involving expansions of the Hubble rate $H(z)$ and the deceleration parameter $q(z)$, respectively. The constraints on $z_{\rm tr}$ have been obtained by using GRB \citep{2021JCAP...09..042K}, SNe Ia \citep{2018ApJ...853..126R} and uncorrelated BAO \citep{LM2020} catalogs. The GRB data have been calibrated by resorting to the well-known $E_{\rm p}$--$E_{\rm iso}$ correlation. This model-independent calibration involved the well-consolidated technique based on the B\'ezier polynomial interpolation of the OHD catalog \citep{2019MNRAS.486L..46A,LM2020}. In the current era of precision cosmology, we tested this calibration procedure by increasing the number of OHD points \citep[as outlined in Sec.~\ref{sezione2} and in][]{2022arXiv220513247K}. We found that the $1000$ mock OHD well reproduce the results obtained from the $32$ real measurements, of course with greater accuracy (see Sec.~\ref{sec:4}), and, thus, we selected this catalog to calibrate GRB data. The purpose of the above OHD calibration is twofold: on the one hand, we have performed a new and more precise calibration of the $E_{\rm p}$--$E_{\rm iso}$ correlation; on the other hand, we have made GRBs more competitive cosmological tools and used them in conjunction with SNe Ia and BAO in the determination of $z_{\rm tr}$. Regarding the second aspect, the joint GRBs~+~SNe~Ia~+~BAO data set was employed in MCMC fits, working out the Metropolis-Hastings algorithm and the DHE and DDPE methods outlined in Sec.~\ref{sec:3}. The results of these analyses, summarized in Table~\ref{tab:fits} and displayed in the contour plots of Figs.~\ref{fig:Bez_cont2}--\ref{fig:Bez_cont3}, show an overall compatibility with respect to the previous literature and the $\Lambda$CDM paradigm.
Inside the intervals of validity for $z_{tr}$, we notice that evolving dark energy is not fully excluded, although we underlined that only small departures from the standard cosmological scenario are possible. In particular, we presented the evolution of dark energy at the transition and showed the limits on evolving dark energy up to the transition time, adopting GRB data. Future developments will focus on constraining the transition epoch by adopting more complicated combinations of data catalogs. We will also investigate more refined cosmographic expansions and techniques in order to characterize the dark energy evolution at high redshifts. \section*{Acknowledgements} OL expresses his gratitude to the Instituto de Ciencias Nucleares of the UNAM University for hospitality during the period in which this manuscript has been written. OL and MM acknowledge Alejandro Aviles, Alessandro Bravetti, Celia Escamilla-Rivera, Peter~K.S. Dunsby and Hernando Quevedo for fruitful discussions. The work is financially supported by the Ministry of Education and Science of the Republic of Kazakhstan, Grant IRN AP08052311. \section*{Data Availability} Data are available at the following references: the SNe Ia catalog from \citet{2018ApJ...853..126R}, the updated $E_{\rm p}$--$E_{\rm iso}$ data set of $118$ long GRBs from \citet{2021JCAP...09..042K}, the uncorrelated BAO data from \citet{LM2020} and the updated $32$ OHD measurements from \citet{2022arXiv220513247K}.
Title: Inflationary Adler Conditions
Abstract: We derive a new soft theorem that corresponds to the spontaneous breaking of Lorentz boosts. This is motivated by the dynamics of inflation in the sub-horizon (flat-space) limit, where spacetime becomes flat but Lorentz boosts are still broken. In this limit, the scattering amplitudes become sensible observables. We relate the soft emission of a Goldstone boson to the (non-relativistic) Lorentz boost of the hard scattering amplitudes. This is the on-shell avatar of the spontaneous breaking of Lorentz boosts, analogous to the Adler zero of pions in the chiral symmetry breaking. We comment on several applications to inflation, including the demonstration that Dirac-Born-Infeld Inflation is the unique theory that has an emergent Lorentz invariance when the boosts are spontaneously broken.
https://export.arxiv.org/pdf/2208.14544
\begin{titlepage} \setcounter{page}{1} \baselineskip=15.5pt \thispagestyle{empty} \begin{center} {\fontsize{18}{18} \bf Inflationary Adler Conditions } \end{center} \vskip 20pt \begin{center} \noindent {\fontsize{12}{18}\selectfont Daniel Green, Yiwen Huang, and Chia-Hsien Shen} \end{center} \begin{center} \textit{ Department of Physics, University of California at San Diego, \\ La Jolla, CA 92093, USA} \end{center} \vspace{0.4cm} \begin{center}{\bf Abstract} \end{center} We derive a new soft theorem that corresponds to the spontaneous breaking of Lorentz boosts. This is motivated by the dynamics of inflation in the sub-horizon (flat-space) limit, where spacetime becomes flat but Lorentz boosts are still broken. In this limit, the scattering amplitudes become sensible observables. We relate the soft emission of a Goldstone boson to the (non-relativistic) Lorentz boost of the hard scattering amplitudes. This is the on-shell avatar of the spontaneous breaking of Lorentz boosts, analogous to the Adler zero of pions in the chiral symmetry breaking. We comment on several applications to inflation, including the demonstration that Dirac-Born-Infeld Inflation is the unique theory that has an emergent Lorentz invariance when the boosts are spontaneously broken. \end{titlepage} \restoregeometry \newpage \setcounter{tocdepth}{2} \tableofcontents \newpage \section{Introduction} Inflation is widely believed to be an essential part of the history of the universe. It explains numerous features of the universe we observe; significantly, the initial seeds of structure are believed to have formed during this era from quantum vacuum fluctuations. Observational tests of the inflationary epoch are limited to the statistics of fluctuations produced during inflation, including scalar and tensor metric fluctuations. 
However, learning concrete lessons from these fluctuations is limited by our ability to relate the space of models of inflation with the space of consistent statistical correlations. Effective field theory (EFT) provides a natural framework in which we can relate microscopic theories with long distance observables. The EFT of Inflation~\cite{Creminelli:2006xe,Cheung:2007st} is such an example, in which one can write a theory for the fluctuations directly, with the microscopic details encoded in the particle content and couplings. For single-field inflation specifically, the lone scalar degree of freedom is the scalar metric fluctuation and is highly constrained by symmetries~\cite{Maldacena:2002vr,Creminelli:2004yq}. For scale-invariant fluctuations, the dynamics of the inflationary era are encoded in higher-dimensional operators whose coefficients are constrained by primordial non-Gaussianity in the cosmic microwave background~\cite{Planck:2019kim} or the distribution of galaxies~\cite{Cabass:2022wjy,DAmico:2022gki} (see~\cite{Achucarro:2022qrl} for a recent review). In many contexts, it has been found that the space of EFTs consistent with our short distance symmetries is significantly smaller than our naive EFT expectations suggest~\cite{deRham:2022hpx}. This has been seen in countless examples through the self-consistency of the scattering amplitudes calculated within the EFT~\cite{Pham:1985cr,Adams:2006sv}. Naively, such an approach does not apply to the EFT of Inflation, which is defined in quasi-de Sitter space. However, ongoing work on the cosmological bootstrap~\cite{Baumann:2022jpr} has shown how cosmological correlators in inflation are tied to the scattering amplitudes of the same theory in flat space~\cite{Maldacena:2011nz,Raju:2012zr,Arkani-Hamed:2015bza,Lee:2016vti,Arkani-Hamed:2018kmz,Arkani-Hamed:2018bjr,Benincasa:2018ssx,Baumann:2019oyu,Pajer:2020wxk,Baumann:2020dch,Bonifacio:2021azc,Cabass:2021fnw,Baumann:2021fxj,Benincasa:2022gtd}.
In this precise sense, understanding how amplitudes arise within the flat-space limit of the EFT of Inflation is directly related to our understanding of cosmic observables. Preliminary work in this direction has been initiated in~\cite{Baumann:2011su,Baumann:2015nta,Baumann:2019ghk,Pajer:2020wnj,Grall:2021xxm,Creminelli:2022onn}, but many questions about the general structure of the amplitudes remain. The most natural context to discuss the flat-space limit of the EFT of Inflation is the so-called decoupling limit~\cite{Baumann:2011su}. In this regime, dynamical gravity is decoupled from inflation and the scalar metric mode is described by the Goldstone boson, $\pi$, associated with spontaneously broken time diffeomorphism. The interactions of $\pi$ individually break Lorentz boosts but non-linearly realize the symmetry. Much of this structure also survives in cosmological correlators, as the scalar metric fluctuation, $\zeta$, at horizon crossing is well described by the Goldstone boson, $\zeta \approx - H \pi$. To flesh out the full constraints on inflation from scattering amplitudes in the decoupling limit, it is imperative to understand the full on-shell consequences of the spontaneous breaking of Lorentz boosts. The nature of the non-linearly realized symmetry is often tied to the soft limit of the amplitude. A celebrated example is the Adler zero of the soft pion~\cite{Adler:1964um} that reflects the underlying chiral symmetry breaking~\cite{Cronin:1967jq,Weinberg:1966fm,Weinberg:1968de,Coleman:1969sm,Callan:1969sn}. This approach has been revived recently to identify EFTs from an on-shell perspective~\cite{Cheung:2014dqa,Huang:2015sla,DiVecchia:2015jaq,Padilla:2016mno,Low:2017mlh,Cheung:2018oki,Low:2018acv,Kampf:2019mcd,Kampf:2020tne,Kampf:2021bet,Kampf:2021tbk}.
In fact, these soft limits provide a powerful means to reconstruct amplitudes~\cite{Cheung:2015ota,Luo:2015tat,Arkani-Hamed:2016rak,Rodina:2016jyz,Rodina:2018pcb,Bartsch:2022pyi} and classify the space of EFTs~\cite{Cheung:2016drk,Cheung:2018oki,Elvang:2018dco}. On-shell methods have been applied to non-relativistic EFTs~\cite{Brauner:2020ezm,Pajer:2020wnj,Stefanyszyn:2020kay,Grall:2020ibl,Mojahed:2021sxy,Mojahed:2022nrn,Brauner:2022ymm} and wave functions~\cite{Bittermann:2022nfh}. For the non-linearly realized Lorentz boosts, which are of central interest to inflation, the non-perturbative Goldstone theorem has been elegantly derived~\cite{Alberte:2020eil}, but the analogous soft theorem is yet to be formulated. In this paper, we will establish a soft theorem for flat-space scattering amplitudes that reflects the spontaneous breaking of Lorentz boosts. Our soft theorem, summarized in Section~\ref{sec:softthm_summary}, is the flat-space analog of the sub-leading consistency condition on correlators~\cite{Creminelli:2012ed}. The on-shell soft theorem also complements the algebraic constructions of boost-breaking EFTs~\cite{Grall:2020ibl}. Our results will be helpful in understanding cosmological correlators in several ways. First, it has been observed that Dirac-Born-Infeld (DBI) Inflation has an emergent Lorentz invariance in the broken phase~\cite{Grall:2020ibl}. We will use the soft theorem to show that DBI Inflation is the unique model (to leading derivative expansion) that has this property. Second, it was found in~\cite{Pajer:2020wnj} that boost-violating amplitudes are severely constrained by consistency of the scattering amplitudes (when coupled to gravity). These results seem inconsistent with the EFT of Inflation and the authors suggested it was a result of an inconsistency of the EFT of Inflation in flat space away from the decoupling limit. 
Our investigation will show the origin of these constraints and how they do not arise when the non-linear structure of the EFT is enforced. Finally, we will see how the structure of on-shell observables in flat space has nontrivial implications for inflationary correlators. The paper is organized as follows. In Section~\ref{sec:review}, we review the EFT of Inflation under the decoupling limit, with emphasis on the non-linearly realized Lorentz boosts and dependence on the field basis. The main results of this paper are given in Section~\ref{sec:softthm}. After reviewing the necessary tools, we prove the soft theorem for Goldstone-boson scattering and for the full general case with matter interactions, and comment on the non-perturbative validity of the soft theorem. For readers' convenience, we summarize the final soft theorem in Section~\ref{sec:softthm_summary}. In Section~\ref{sec:inflation}, we discuss possible applications to inflation. Final conclusions and future directions are discussed in Section~\ref{sec:conclusions}. \paragraph{Convention:} We use the metric with mostly minus signature throughout the paper and set the speed of light $c=1$. The Greek and Roman indices denote components of relativistic and spatial vectors. We use boldface $\bm v$ for a spatial vector. For the scattering amplitudes, all momenta are outgoing. We use the hard momentum $p_a^\mu = (E_a, \bm p_a)$ for particle $a$ and the soft momentum $q^\mu = (\omega, \bm q)$. We define a rescaled inner product for general $c_s$ using \Eq{eq:p_def}. The deviations of the propagation speeds from the speed of light are defined as $\delC = c_s^{-2}-1$ and $\delPhi = c_\phi^{-2}-1$ for the Goldstone boson $\pi$ and a matter field $\phi$, respectively. \section{The EFT of Inflation in Flat Space} \label{sec:review} \subsection{Action} The EFT of single-field inflation describes the fluctuation of the inflaton field around a flat Friedmann–Lemaître–Robertson–Walker (FLRW) background.
Time diffeomorphism, $t\rightarrow t+\xi$, is spontaneously broken by the time-dependence of the expansion parameter $H(t)$. Therefore there exists a Goldstone boson $\pi$ associated with the breaking of such symmetry \begin{align} t &\rightarrow t - \xi \nn \\ \pi &\rightarrow \pi +\xi, \end{align} such that $\pi$ realizes time diffeomorphism non-linearly, but $U\equiv t+\pi$ transforms linearly as a scalar. Demanding that the underlying theory is invariant under spacetime diffeomorphism, the most general effective action for the Goldstone boson $\pi$ is just a derivative expansion in $U$. The resulting action, truncated for simplicity at one-derivative per field, is then given by~\cite{Cheung:2007st} \begin{align} S=\int \mathrm{d}^{4} x \sqrt{-g}\left[\frac{1}{2} \Mp^{2} R-\Mp^{2} \dot{H} g^{\mu \nu} \partial_{\mu}(t+\pi) \partial_{\nu}(t+\pi)-\Mp^{2}\left(3 H^{2}+\dot{H}\right)\right.\nonumber \\ \left.+\sum_{n} \frac{M_{n}(t+\pi)^{4}}{n !}\left(1-g^{\mu \nu} \partial_{\mu}(t+\pi) \partial_{\nu}(t+\pi)\right)^{n}+\cdots\right], \label{eq:action_original} \end{align} where $M_{n}(t+\pi)$ are Wilson coefficients that are generally time dependent. We will make the additional assumption that the time variation is negligible, $\dot M^4_n \ll H M^4_n$. This choice corresponds to the addition of a global symmetry, $\pi \to \pi +c$, and enforces that all the correlation functions from inflation are scale invariant. In the EFT of Inflation, terms with more than one-derivative per field can be described geometrically in terms of the extrinsic curvature of the time-slices in unitary-gauge (i.e.~the gauge where $\pi=0$). Here we are dropping higher derivatives per field for two reasons: (1) in unitary gauge, a term of the form $\delta g^{00}\delta K_{\mu}^{\mu}$ cannot change the speed of sound or contribute non-zero three point amplitude in the flat-space limit (2) in the decoupling limit, $ (\delta K_{\mu}^{\mu})^n$ only contribute higher derivative terms. 
When we take the soft limit of the amplitude, the contributions from these higher-derivative terms are in general subdominant compared to the terms in the action above. We leave the full investigation with extrinsic curvature to the future. We consider the sub-horizon limit of the EFT of Inflation in this paper, where spacetime reduces to a flat background. The Goldstone boson propagates with a speed of sound $c_s \le 1$. Inflation is naturally described in terms of a hierarchy of (energy) scales~\cite{Baumann:2011su} \begin{align} |\dot H|^2 \ll H^4 \ll f_\pi^4 \equiv 2 \Mp^2 |\dot{H}|c_s \ll \Mp^4 \label{eq:ScaleHierachy} \end{align} where $f_\pi$ is the decay constant of the Goldstone boson and the symmetry breaking scale, $f_\pi^4 = 2\Mpl^2 |\dot H| c_s$, as illustrated in Figure~\ref{fig:Energy}. Concretely, in single-field slow-roll inflation ($c_s =1$), the scale of symmetry breaking is $f_\pi^4 = \dot \varphi^2$, which is the scale associated with the time evolution of the background scalar field $\varphi$. In our universe, $2 \Delta_{\zeta}^2 = H^4/f_\pi^4$, where $\Delta_{\zeta}^2$ is the amplitude of the curvature perturbation, so that $f_\pi = 59 H$~\cite{Planck:2018jri}. We will therefore consider energies $H^4 \ll E^4 \ll f_\pi^4$ so that the EFT of Inflation applies but we can neglect the curvature of spacetime. Within this context, we can consider scattering amplitudes for $\pi$ but they will not be Lorentz invariant. The EFT of Inflation has an additional scale, $\Lambda = f_\pi c_s$, associated with strong coupling, or a breakdown of the EFT description\footnote{It is conventional to write $M_n^4 = c_n f_\pi^4 (f_\pi/\Lambda)^{2n-1}$ with $c_n = {\cal O}((1-c_s^2))$ so $\Lambda$ controls the scale of irrelevant operators, after canonically normalizing $\pi$.}. Weak coupling requires that $\Lambda > H$ but $\Lambda \ll f_\pi$ arises for $c_s \ll 1$.
Validity of the EFT then requires $E < \Lambda$, which may further restrict the regime of validity of the flat-space approximation during inflation (in our universe). As our primary interest is understanding the structure of the EFT in general, we can take $H/E \to 0$ holding the other scales fixed so that the flat-space limit applies, as shown in Figure~\ref{fig:Energy}. The soft limit we will use later should be taken \emph{after} this flat-space limit. In other words, the energy of the soft particle is small compared to the others, but still much greater than $H$. In the sub-horizon limit described above, the breaking of the time diffeomorphism reduces to the breaking of Lorentz boosts, but we still keep spacetime translations and $SO(3)$ spatial rotations. The action in the flat-space limit reduces to \begin{align} S=\int \mathrm{d}^{4} x \left[-\Mp^{2} \dot{H} g^{\mu \nu} \partial_{\mu}(t+\pi) \partial_{\nu}(t+\pi) +\sum_{n} \frac{M_{n}^{4}}{n !}\left(1-g^{\mu \nu} \partial_{\mu}(t+\pi) \partial_{\nu}(t+\pi)\right)^{n}+\cdots\right]. \label{eq:action_flat} \end{align} The metric can be set to $\eta^{\mu\nu}$ if we are not interested in graviton fluctuations\footnote{To make the flat-space limit precise, one often takes the decoupling limit, $\Mpl \to \infty$ and $\dot H \to 0$ holding $f_\pi$ fixed. One can relax this limit by considering large $\Mpl$, which allows (perturbative) graviton fluctuations.}. Crucially, if the couplings are time-independent ($\dot M_n \to 0$), the Goldstone boson is derivatively coupled, reflecting the fact that the Goldstone has an additional (global) shift symmetry. As a consequence, the action (with no time-dependent couplings) has an emergent time-translation symmetry that is distinct from the one that is generated by the stress-tensor. As a result, we can still label the physical states by energy and momentum.
In the sub-horizon limit, the EFT of Inflation coincides with the $P(X)$ theory in which the scalar acquires a vacuum expectation value that breaks Lorentz boosts, with \begin{align} X = g^{\mu\nu}\partial_\mu (t+\pi) \partial_\nu (t+\pi) \, . \label{eq:Xdef} \end{align} Although we obtain the action \Eq{eq:action_flat} by considering the sub-horizon limit of the EFT of Inflation, the results can be derived purely within flat spacetime by considering the breaking of Lorentz boosts while all other spacetime symmetries are preserved. Therefore, the same action applies to other physical systems with the same symmetry breaking pattern. For instance, this action also describes the dynamics of a superfluid~\cite{Son:2002zn}. We can expand the action \Eq{eq:action_flat} order by order in the number of $\pi$ fields. We will make the unconventional choice of picking units such that \begin{align} -2 \Mp^2 \dot{H} =1 \ . \end{align} Notice that with this choice $f_\pi^4 \to c_s$ and $\Lambda^4 \to c_s^5$. The reason for this choice is to simplify the expressions for our amplitudes while keeping track of the $c_s$ dependence. The non-linearly realized Lorentz boosts are defined in terms of the speed of light, $c$, and therefore we do not want to obscure the role of $c_s$ in their Ward identities. In these units, the action reads \begin{align} S &= \int d^4x\, \left[ \frac{1}{2}+\dot{\pi} +\frac{1}{2}\left( c_s^{-2}\, \dot{\pi}^2 - (\partial_j \pi)^2 \right) + g_3 \dot{\pi}^3 + g_{3,1} \dot{\pi} \left(c_s^{-2}\, \dot{\pi}^2 - (\partial_j \pi)^2 \right) +\order\left(\pi^4\right) \right], \label{eq:action_new} \end{align} where $\dot{\pi}=\partial_0 \pi$ and $g_3$ and $g_{3,1}$ are coupling constants. Matching to the original action in \Eq{eq:action_flat}, we find that the deviation of $c_s$ from the speed of light is given by the coupling $M_2^4$ \begin{align} \delC \equiv \frac{1}{c_s^2}-1 = 4 M_2^4, \end{align} which is non-negative by causality.
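As a quick consistency check (a sketch, not part of the paper's derivation), the matching just quoted can be verified by expanding the flat-space action symbolically. The sympy snippet below works in the units $-2\Mp^2\dot H = 1$, with `pidot` and `dpi` standing for $\dot\pi$ and $|\bm\nabla\pi|$; the same expansion also fixes the cubic couplings $g_3$ and $g_{3,1}$:

```python
import sympy as sp

pidot, dpi, M2, M3 = sp.symbols('pidot dpi M2 M3', real=True)
X = (1 + pidot)**2 - dpi**2                    # X in units -2 Mp^2 Hdot = 1
L = sp.Rational(1, 2)*X + M2**4/2*(1 - X)**2 + M3**4/6*(1 - X)**3
Lp = sp.Poly(sp.expand(L), pidot, dpi)

delC = 4*M2**4                                 # claimed delta_c = 1/c_s^2 - 1
# quadratic terms match (1/2)(c_s^{-2} pidot^2 - dpi^2):
assert sp.expand(Lp.coeff_monomial(pidot**2) - sp.Rational(1, 2)*(1 + delC)) == 0
assert Lp.coeff_monomial(dpi**2) == sp.Rational(-1, 2)

# cubic terms match g3*pidot^3 + g31*pidot*((1+delC)*pidot^2 - dpi^2):
g31 = -Lp.coeff_monomial(pidot*dpi**2)         # read off from the pidot*(grad pi)^2 term
g3 = Lp.coeff_monomial(pidot**3) - g31*(1 + delC)
assert sp.expand(g31 - delC/2) == 0
assert sp.expand(g3 + sp.Rational(1, 2)*delC**2 + sp.Rational(4, 3)*M3**4) == 0
```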
The cubic couplings $g_3$ and $g_{3,1}$ are also related to $M_2^4$ and $M_3^4$ \begin{align} g_{3} &= -2 \delC M^4_2 - \frac{4}{3} M^4_3 = -\frac{1}{2} \delC^2 - \frac{4}{3} M^4_3 \nn \\ g_{3, 1} &= 2 M^4_2 = \frac{1}{2}\delC \, . \end{align} The equation of motion (EOM) for $\pi$ up to quadratic order in $\pi$ and derivatives is then \begin{align} \textrm{EOM} &= \partial_{\mu}\left(\frac{\delta \Lag}{\delta \partial_{\mu} \pi}\right) -\frac{\delta \Lag}{\delta \pi} \\ &=\frac{1}{c_s^2}\ddot{\pi} - \nabla^2 \pi +3 g_{3} \partial_0 (\dot{\pi}^2) +\frac{\delC}{2} \partial_0 \left( \frac{3}{c_s^2} (\dot{\pi}^2) -(\partial_j \pi)^2 \right) -\delC \,\partial_j (\dot{\pi} \partial_j \pi) +\order(\pi^3) \ . \label{eq:EOMpi} \end{align} Higher-order terms in $\pi$ are straightforward to determine from expanding Equation~(\ref{eq:action_flat}). \subsection{Symmetry} In the energy regime that we consider in \Eq{eq:ScaleHierachy} (with the assumption $\dot M_n = 0$), the breaking of time diffeomorphisms reduces to the breaking of Lorentz boosts. Consider an infinitesimal Lorentz boost along a spatial vector $\bm b$, \begin{align} \delta x^\mu &= (\delta t, \delta \bm x) = (- \bm b \cdot \bm x, - \bm b \,t ) \\ \delta \pi &= \bm b \cdot \left( \bm x +\bm x \dot{\pi} + t \bm{\nabla}\pi \right) = \bm b \cdot \bm x + \bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla} \right)\,\pi \ , \label{eq:symmetry_small} \end{align} where $\bm x\, \partial_0 + t \bm{\nabla}$ is the linear boost operator.\footnote{ In components, this is the usual boost as can be seen from $K^{i0}=x^i \partial^0 - t \partial^i = x^i \partial_0 + t \partial_i$. } Crucially, the transformation of $\pi$ includes the non-linear term, as well as the usual linear transformation. Note that the boost operator here is the standard relativistic boost, even when the Goldstone boson propagates with generic $c_s$.
The breaking of Lorentz boosts can be analyzed through the conserved current, which can be obtained via the Noether approach or from the gravitational stress-energy tensor. We find that the two approaches agree up to quadratic order, which is sufficient to derive the soft theorem. The stress-energy tensor can be obtained via \begin{align} T_{\mu\nu} &= 2 \frac{\delta \mathcal{L}}{\delta g^{\mu\nu}} - \eta_{\mu\nu} \mathcal{L} \\ &= 2\partial_{\mu}(t+\pi)\partial_{\nu}(t+\pi) \frac{d\mathcal{L}}{dX} - \eta_{\mu\nu} \mathcal{L}. \end{align} Recall that $X$ is defined in \Eq{eq:Xdef}. In terms of the components of $T^{\mu\nu}$, we find \begin{align} T^{00} = 1 + (1+\delC)\dot{\pi}+\dots,\quad T^{j0} = - \partial_j \pi+\dots,\quad T^{ji} = \delta^{ij} \dot{\pi}+\dots \end{align} up to higher orders in $\pi$. Even though the stress-energy tensor contains linear terms in $\pi$, this does \emph{not} imply that translations are spontaneously broken. One has to check whether the vacuum expectation value of an operator is transformed under the temporal and spatial translations, whose generators are $H=\int d^3\bm x T^{00}$ and $P^i=\int d^3\bm x T^{0i}$. In our case, $U=t+\pi$ is such an operator as $\langle U \rangle \neq 0$ and $\langle [H,U] \rangle \neq 0$. Recall that there can still be an emergent time-translation symmetry of the EFT that is not generated by $T^{00}$, as explained below Equation~(\ref{eq:action_flat}). From the stress-energy tensor, the currents associated with Lorentz generators are defined as \begin{align} M^{\mu \rho \sigma} = x^\rho T^{\mu\sigma}-(\rho\leftrightarrow \sigma) \ . \end{align} The current associated with a boost along direction $i$ is given by \begin{align} J^{\mu,i} = M^{\mu i 0} = x^i T^{\mu 0} - t T^{\mu i}.
\end{align} Combining the expressions above we find the current in terms of the Goldstone field \begin{align} J^{0,i} &= x^i + (1+\delC) x^i \dot{\pi} + t\partial_i \pi +\dots\\ J^{j,i} &= -\delta^{ij} t - x^i \partial_j{\pi} - \delta^{ij} t \dot{\pi} +\dots \end{align} The matrix elements of the boost currents with a single Goldstone state are given by \begin{align} \langle \pi(q)| J^{0,i} |0\rangle &= i e^{iq\cdot x} \left(\frac{1}{c_s^2} x^i q^0-t q^i \right) \label{eq:JboostT} \\ \langle \pi(q)| J^{j,i} |0\rangle &= i e^{iq\cdot x} \left( x^i q^j -t \delta^{ij}q^0 \right) \ , \label{eq:JboostX} \end{align} which one can check obey the Goldstone theorem in \cite{Alberte:2020eil} and verify that the order parameters are non-zero in the broken phase. Later we will use the Ward identity to prove the corresponding soft theorem from spontaneously broken Lorentz boosts. Given the boost along $\bm b$, the current is \begin{align} J^\mu = b^i J^{\mu,i}, \end{align} and its divergence can be evaluated simply from the EOM \begin{align}\label{eq:NoetherEOM} \partial_{\mu} J^{\mu} &= \delta \pi \cdot \textrm{EOM} \\ &= \bm b \cdot \bm x \, (c_s^{-2} \ddot{\pi} - \partial_j^2 \pi)+\mathcal{O}(\pi^2) \nn \end{align} By construction, the current is conserved when the EOM is satisfied. But the non-linear term in $\delta \pi$ leads to a nonzero matrix element between the vacuum and the one-particle state of the Goldstone boson. This will be the starting point of the soft theorem derivation. Of course, all of the discussion so far is well known. However, the use of local fields and conserved currents obscures the actual physical behavior. To elaborate on this point, consider the following change of field basis~\cite{Grall:2020ibl} \begin{align}\label{eq:newPi} \pi \rightarrow \pi +\Delta \pi = \pi +\alpha \pi \dot{\pi}\,.
\end{align} This induces a change on the action $\mathcal{L} \rightarrow \mathcal{L}+\delta \mathcal{L}$, where \begin{align} \delta \mathcal{L} &= -\textrm{EOM}\cdot \Delta \pi \nn \\ &=-(c_s^{-2}\ddot{\pi}-\nabla^2 \pi ) \cdot (\alpha \pi \dot{\pi}) +\order(\pi^4) =\frac{1}{2}\alpha \dot{\pi} (c_s^{-2}\dot{\pi}^2-(\nabla \pi)^2 ) +\order(\pi^4), \label{eq:action_newbasis} \end{align} where we use $\pi \dot{\pi}\Box \pi =-\frac{1}{2} \dot{\pi}(c_s^{-2}\dot{\pi}^2-(\nabla \pi)^2 )$ modulo a total derivative. This implies the shift in the cubic coupling $g_{3,1} \rightarrow g_{3,1} + \alpha/2$ in the action \Eq{eq:action_new}. Given that $g_{3,1}=\delC/2$, we can eliminate this vertex by choosing \begin{align} \alpha = -\delC \,. \label{eq:alphaValue} \end{align} The fact that the vertex $\dot{\pi} (c_s^{-2}\dot{\pi}^2-(\nabla \pi)^2 )$ can be removed by a field redefinition is not surprising, since the corresponding three-particle amplitude vanishes. Crucially, the change of field basis also modifies the transformation of $\pi$ under non-linearly realized boosts. The transformation of the field becomes \begin{align} \delta \pi = \bm b \cdot \bm x + \bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla}\right)\pi \rightarrow \delta \pi &= \bm b \cdot \bm x + \bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla}\right)\pi -\alpha \dot{\pi} \delta \pi +\order{(\pi^2)} \nn \\ &=\bm b \cdot \bm x + \bm b \cdot \left((1-\alpha)\bm x\, \partial_0 + t \bm{\nabla} \right)\pi +\order{(\pi^2)}, \label{eq:boost_generalBasis} \end{align} where we omit terms of higher order in $\pi$ in $\delta \pi$ in the new basis. The crucial difference here is that the linear transformation now depends on $\alpha$. In other words, we see that the symmetry transformation in \Eq{eq:symmetry_small} is not invariant under field redefinitions.
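The total-derivative identity used above can be verified symbolically; one spatial dimension suffices for the check, since the manipulation is identical term by term in higher dimensions. A minimal sympy sketch (an illustration, not from the paper):

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
cs = sp.symbols('c_s', positive=True)
p = sp.Function('pi')(t, x)                    # the Goldstone field pi(t, x)
pd = sp.diff(p, t)

# LHS: pi*pidot*Box(pi) + (1/2)*pidot*(cs^-2 pidot^2 - (grad pi)^2),
# with Box = cs^-2 d_t^2 - d_x^2; claim: this is a total derivative.
lhs = p*pd*(sp.diff(p, t, 2)/cs**2 - sp.diff(p, x, 2)) \
      + sp.Rational(1, 2)*pd*(pd**2/cs**2 - sp.diff(p, x)**2)

# candidate current (Vt, Vx) with lhs = d_t Vt + d_x Vx
Vt = p*pd**2/(2*cs**2) + p*sp.diff(p, x)**2/2
Vx = -p*pd*sp.diff(p, x)
assert sp.expand(lhs - (sp.diff(Vt, t) + sp.diff(Vx, x))) == 0
```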
Using the value in \Eq{eq:alphaValue} yields \begin{align} \delta \pi &= \bm b \cdot \bm x + \bm b \cdot \left(\frac{1}{c_s^2}\bm x\, \partial_0 + t \bm{\nabla} \right)\pi +\order{(\pi^2)} \label{eq:newTransformation} \end{align} where we define the boost operator with generic $c_s$ as \begin{align} \frac{1}{c_s^2}\bm x\, \partial_0 + t \bm{\nabla} \,. \label{eq:newBoostPos} \end{align} In particular, this operator annihilates the lightcone defined with respect to $c_s$ \begin{align} \left(\frac{1}{c_s^2}\bm x\, \partial_0 + t \bm{\nabla}\right)\cdot \left(c_s^2 t^2 -\bm x^2 \right) = 0. \end{align} In momentum space, this implies that the boosts that preserve the on-shell condition are the ones defined with the nontrivial $c_s$. As we will see, these are the correct boost operators used in the soft theorem with generic $c_s$. \subsection{Amplitudes} We use the all-outgoing convention. For a scalar with outgoing momentum $p^\mu = (E,\bm p)$, the corresponding Feynman rule for $\partial_\mu \phi$ then yields $ip_\mu = (iE, -i \bm p)$. Since boost invariance is spontaneously broken, the $n$-particle scattering amplitudes no longer depend only on the Lorentz-invariant inner products of momenta. For Goldstone boson scattering, we can always write the amplitude $\amp_n$ as a function of the energies $E_i$ and the rescaled inner products \beq \pp_{ij} \equiv 2\tilde{p}_i\cdot \tilde{p}_j \equiv c_s^{-2}E_iE_j-\bm p_i\cdot \bm p_j \label{eq:p_def} \eeq between external momenta $p_i$ and $p_j$. The on-shell condition then reads \begin{align} \tilde{p}_i^2 =0 \ . \end{align} Notice that we organized the action in Equation~(\ref{eq:action_new}) so that the contraction of indices gives this rescaled inner product. Three-particle amplitudes in a Lorentz-invariant theory of scalars are trivial, since all the Mandelstam invariants vanish.
However, since boosts are broken here, we find nontrivial three-particle amplitudes for Goldstone bosons \beq \amp_3= -6ig_{3} E_{1} E_{2} E_{3} \label{eq:Apion_3} \eeq which depends only on the energies. Note that $g_{3,1}$ does not appear, consistent with the earlier discussion that it can be removed by a field redefinition. See Ref.~\cite{Pajer:2020wnj} for a more general classification. The four-particle amplitude is given by \begin{align} \amp_4=\amp_{4,\text{contact}}+\amp_s+\amp_t+\amp_u \end{align} where $\amp_{4,\text{contact}}$ is a purely local term that comes from the four-particle vertex in the original action, and $\amp_s$, $\amp_t$, $\amp_u$ are the terms from $s,t,u$ exchange diagrams. The contact term reads \begin{align} \amp_{4,\text{contact}}=&24 \left(\frac{1}{8} \delC^3 + 2 \delC M^4_3 + \frac{2}{3} M^4_4\right) E_1E_2E_3E_4 +\delC\left[\pp_{12}^{\,2}+\pp_{13}^{\,2}+\pp_{14}^{\,2}\right] \nonumber\\ &+4 \left(-\frac{1}{4}\delC^2 - 2 M^4_3\right) \left[(E_1E_2+E_3E_4) \pp_{12}+(E_1E_3+E_2E_4) \pp_{13}+(E_1E_4+E_2E_3) \pp_{14}\right].\nn \end{align} Note that we have a new coupling $M_4^4$ entering the four-particle scattering. The contributions from exchange diagrams read \begin{align} \amp_s &=E_{12}^2\left[-18g_{3}^2\frac{E_1E_2E_3E_4}{\pp_{12}}-12g_{3}g_{3,1}(E_1E_2+E_3E_4)-8g_{3,1}^2 \pp_{12} \right]\,,\nn \\ \amp_t &=\amp_s\vert_{(1,2,3,4)\rightarrow (1,4,2,3)}\,, \quad \amp_u =\amp_s\vert_{(1,2,3,4)\rightarrow (1,3,2,4)}. \end{align} Note that even though the amplitudes do not depend on the field basis we use, the coupling constant $g_{3,1}$ does change under our field redefinition. Here we have written the three- and four-point amplitudes in terms of the original field basis. As we can see from the action and the above explicit example, there is always one more new coupling constant $M^4_{n+1}$ in $\amp_{n+1}$ compared to $\amp_{n}$. Therefore, the soft limit of $\amp_{n+1}$ cannot be trivially related to $\amp_{n}$.
As we will see, the Ward identity arranges the soft limit in a clever way to circumvent the mismatch. \section{Soft Theorems} \label{sec:softthm} In this section, we will derive the soft theorems for scattering amplitudes associated with the spontaneously broken boosts in the EFT of Inflation. We begin with the Ward-Takahashi identity \begin{align} \partial_{\mu} \langle 0 | T( J^{\mu}(x) \pi(x_1) \dots \pi(x_n)) |0\rangle =-i \sum^n_{a=1} \delta(x-x_a) \langle 0 | T(\pi(x_1) \dots \delta \pi(x_a) \dots \pi(x_n)) |0\rangle \ . \label{eq:Ward_raw} \end{align} In order to apply this to amplitudes, we perform the Lehmann-Symanzik-Zimmermann (LSZ) reduction on $\pi (x_i)$ and Fourier transform the current to momentum space \begin{align} \lim_{q\rightarrow 0} \int_x \, e^{iq \cdot x}\, \langle p_1,\dots,p_n | \partial_{\mu} J^{\mu}(x) |0\rangle &=- \sum^n_{a=1} \lim_{q\rightarrow 0} \lim_{\tilde{p}_a^2\rightarrow 0} \tilde{p}_a^2\int_{x} \, e^{i(q+p_a)\cdot x} \, \langle \{p_i|i\neq a\} | \delta \pi(x) |0\rangle = 0\, , \label{eq:Ward_mom} \end{align} where $\int_x\equiv \int d^4 x$ and we amputate $\pi(x_a)$ by applying $\lim_{\tilde{p}_a^2\rightarrow 0}\int_{x_a} e^{ip_a\cdot x_a} (-i\tilde{p}_a^2)$. The amputation leads to the one-particle state, $\langle p_i|$. The subscript $i$ runs from $1$ to $n$ modulo the additional conditions we specified. Note that the momentum $q$ is injected into $\delta \pi(x)$ on the right-hand side (RHS). This causes a mismatch between the momentum $p_a$, the one we use for amputation, and the momentum $p_a+q$ associated with the operator insertion. One needs to be careful with the order of imposing the on-shell condition and taking the soft limit in intermediate steps.\footnote{ Since $\delta \pi(x)$ in the RHS generates a single-particle pole $1/(\tilde{p}_a+\tilde{q})^2$, the ratio $\tilde{p}_a^2/(\tilde{p}_a+\tilde{q})^2$ actually depends on the order of the on-shell limit $\tilde{p}_a^2\rightarrow 0$ and the soft limit $q\rightarrow 0$.
} Although the final conclusion is the same, we will impose the on-shell condition first before taking the soft limit for most of the discussion. The RHS of \Eq{eq:Ward_mom} vanishes in this case since the amputation momentum differs from the momentum associated with $\delta \pi(x)$. The goal is to translate the Ward-Takahashi identity into statements about on-shell amplitudes, also known as the Ward identity. In particular, we work in the kinematic limit where the momentum $q$ becomes soft in both its energy and spatial components \begin{align} q^\mu = (\omega, \bm q) \rightarrow 0 \, . \end{align} To maintain momentum conservation, this can be achieved by tuning one of the hard momenta, which we pick to be $p_1$, \begin{align} p_1 = -\sum_{i=2}^n p_i -q \, . \label{eq:p1prescription} \end{align} It is important to impose the on-shell condition on ${p}_1$, $c_1^{-2}E_1^2-\bm p_1^2 =0$, where we assume particle 1 has the speed of propagation $c_1$, which can differ from $c_s$ or $c$. Under momentum conservation, the on-shell condition leads to \begin{align} c_1^{-2} \left(\sum_{a=2}^n E_a\right)^2 - \left(\sum_{a=2}^n \bm p_a\right)^2 =-2 \left(c_1^{-2} \omega \sum_{a=2}^n E_a-\bm q\cdot \left(\sum_{a=2}^n \bm p_a\right) \right)-(c_1^{-2}\omega^2 -\bm q^2) \, . \label{eq:p1Onshell} \end{align} This implies that the left-hand side is actually $\order(q)$ under the soft limit, which is needed for the soft theorem. \subsection{Preliminary: semi-onshell currents} \label{sec:bgCurrent} It is useful to consider the semi-onshell form factor of a field $\pi$ between an $n$-particle state and the vacuum, $\langle p_1,\dots,p_n |\pi(x) |0\rangle$, as a bridge between off-shell correlation functions and on-shell amplitudes. See Refs.~\cite{DiVecchia:2015jaq,Low:2017mlh,Low:2018acv,Cheung:2021yog} for previous applications to soft theorems. This section reviews the basics of the form factors and the closely related Berends-Giele (BG) current.
Readers familiar with the subject can skip this section. The BG current is the Fourier transform of the form factor \begin{align} \int_x \, e^{iq \cdot x}\, \langle p_1,\dots,p_n |\pi(x) |0\rangle = \int_x \, e^{i (q+P)\cdot x} \, \JBG_n = \hat{\delta}(q+P)\, \JBG_n \label{eq:FF1} \end{align} where $\int_x \equiv \int \mathrm{d}^4x$, $P^\mu \equiv \sum_{i=1}^n p^\mu_i$ is the total momentum of external particles, and $\JBG_n$ is the BG current with an outgoing momentum $q$ induced by an $n$-particle state. See the left panel of \Fig{fig:BGcurrent} for the corresponding diagram. We use the definition $\hat{\delta}(z) \equiv (2\pi)^4 \delta(z)$ throughout the paper. In the first equality of \Eq{eq:FF1}, we factor out the $x$ dependence and $\JBG_n$ is defined as the remaining contribution. In the on-shell limit, $\tilde{q}^2 \rightarrow 0$, the current develops a pole whose residue is given by the on-shell amplitude $\amp_{n+1}$ of the $n$ particles and the additional leg with momentum $q$ \begin{align} \JBG_n \xrightarrow{\tilde{q}^2\rightarrow 0} \frac{i}{\tilde{q}^{2}} \,i\amp_{n+1} +\dots, \label{eq:pole} \end{align} while the regular terms in the ellipses depend on off-shell degrees of freedom. A special case is the overlap with a single-particle state \begin{align} \langle p |\pi(x) |0\rangle = e^{i p\cdot x}, \label{eq:J1} \end{align} which implies $\JBG_1=1$. We also need the form factor with a $\pi(x)^2$ operator insertion, depicted in the right panel of \Fig{fig:BGcurrent}. The form factor can be evaluated in terms of BG currents using perturbation theory.
Using the tree-level approximation, the form factor is given by the sum over disconnected terms, as shown in the left panel of \Fig{fig:pi2Tree}, \begin{align} \int_x \, e^{iq \cdot x}\,\langle p_1,\dots,p_n |\pi(x)^2 |0\rangle \xrightarrow{\rm tree} &\, \int_x \, e^{iq \cdot x}\,\sum_{L,R}\langle \{p_i|i\in L\} |\pi(x) |0\rangle \langle \{p_i|i\in R\} |\pi(x) |0\rangle, \nn \\ =&\, \hat{\delta}(q+P)\, \sum_{L,R} \JBG_{L} \JBG_{R} \label{eq:FF2_Tree} \end{align} where we sum over all possible partitions of the $n$ particles into the disjoint sets $L$ and $R$ whose corresponding BG currents are $\JBG_{L}$ and $\JBG_{R}$. When we consider the soft limit $q\rightarrow 0$, \Eq{eq:FF2_Tree} is dominated by the subset in which an internal propagator goes on-shell. This subset is depicted by the right diagram in \Fig{fig:pi2Tree}, where either $L$ or $R$ contains a single particle $a$. In this case, the dashed line is nearly on-shell since the momentum injection from $q$ is small, as can be seen from its propagator \begin{align} \frac{i}{\prop_a} \equiv \frac{i}{c_a^{-2}(E_a+\omega)^2-(\bm p_a+\bm q)^2} &= \frac{i}{c_a^{-2}(2\omega E_a+\omega^2) - (2\bm q \cdot \bm p_a+ \bm q^2)}\sim \order (q^{-1})\,, \label{eq:soft_prop} \end{align} where particle $a$ propagates with the speed of sound $c_a$, such that $c_a^{-2} E_a^2 -\bm p_a^2 =0$. Therefore, if we first impose the on-shell condition and then take the soft limit, the propagator scales as $\order (q^{-1})$.
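The expansion of the nearly on-shell propagator above can be checked in a few lines (a sympy sketch with one-dimensional momenta, so that $\bm q\cdot \bm p_a \to qp$; not part of the paper's text):

```python
import sympy as sp

E, w, ca = sp.symbols('E omega c_a', positive=True)
p, q = sp.symbols('p q', real=True)            # 1D momenta stand in for the 3D dot products

# On-shell condition for the hard leg: ca^-2 E^2 - p^2 = 0  =>  substitute p^2 -> E^2/ca^2
onshell = {p**2: E**2/ca**2}
prop = (E + w)**2/ca**2 - (p + q)**2           # inverse propagator P_a with soft injection (w, q)
expanded = (2*w*E + w**2)/ca**2 - (2*q*p + q**2)
assert sp.expand(prop.expand().subs(onshell) - expanded) == 0
```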
Combining \Eq{eq:J1} and the on-shell approximation, we find \begin{align} &\int_x \, e^{iq \cdot x}\,\langle p_a |\pi(x) |0\rangle \, \langle p_1,\dots, p_{a-1},p_{a+1},\dots,p_n |\pi(x) |0\rangle \nn \\[2pt] =&\, \hat{\delta}(q+P)\, \JBG_1\,\JBG_{n-1,\slashed{a}} \nn \\ =&\, \hat{\delta}(q+P)\, \frac{i}{\prop_a}\, i\amp_{n}(p_1,\dots, p_{a-1}, p_a+q, p_{a+1}, \dots, p_n) +\dots \nn \\ =&\, \hat{\delta}(q+P)\, \frac{i}{\prop_a}\, i\left(1+ q^\mu \frac{\partial}{\partial p_a^\mu}\right)\amp_{n}(p_1,\dots, p_n) +\dots \label{eq:Jsoft} \end{align} where we add the subscript $\slashed{a}$ to the current $\JBG_{n-1}$ to denote that $p_a$ is not included. In the second equality, we use $\JBG_1=1$, keep only the singular part of the current $\JBG_{n-1,\slashed{a}}$, and replace the residue with an on-shell amplitude as in \Eq{eq:pole}. As written explicitly, the momentum of the $a$-th leg in the amplitude $\amp_{n}$ is extended to $p_a+q$, which maintains the conservation of total momentum. This is an off-shell extension, but the deviation is proportional to the inverse propagator $\prop_a$, such that the off-shell deviation does not leave behind any non-local term behaving as $\order(q)/\prop_a$. In other words, all the residual terms are not only overall $\order(1)$ under the soft limit but also local in $q$. Such an off-shell extension will be crucial for the subleading soft theorem. In the last equality, we realize the extension by a differential operator on the hard amplitude. Summing over $a$ in \Eq{eq:Jsoft} and including the symmetry factor, we finally arrive at the leading contribution to \Eq{eq:FF2_Tree} under the soft limit \begin{align} \int_x \, e^{iq \cdot x}\,\langle p_1,\dots,p_n |\pi(x)^2 |0\rangle \xrightarrow{\textrm{tree},\,q\rightarrow 0} &\, \hat{\delta}(q+P)\, \left[\sum_{a=1}^n \frac{2i}{\prop_a}\,\left(1+ q^\mu \frac{\partial}{\partial p_a^\mu}\right) i\amp_{n} +\order(1) \right].
\label{eq:FF2_Simp} \end{align} We drop the momentum arguments of $\amp_{n}$ for simplicity. \subsection{Goldstone boson amplitudes} \subsubsection*{Relativistic Goldstone bosons} Let us first consider the simple case with only Goldstone boson scattering and $c_s=1$, or equivalently $\delC=0$. In this case, we can use $\tilde{p}_i \cdot \tilde{p}_j = {p}_i \cdot {p}_j$ and $\prop_a=2p_a\cdot q$. We are only interested in terms in the Ward-Takahashi identity up to $\order(1)$ under the soft limit. Consider the left-hand side (LHS) of \Eq{eq:Ward_mom}. Even though the conserved current contains an infinite tower of terms, this drastically simplifies under the soft limit. Since $\partial_{\mu}J^\mu(x) \sim q_\mu J^\mu(q)$ in momentum space, most of its contributions to the Ward identity are only of order $\order(q)$.\footnote{Astute readers may be concerned here. Since $J^\mu(x)$ depends on the coordinate $x^\nu$, $\partial_{\mu} J^\mu(x)$ may contain terms that have no derivative and are not suppressed in the soft limit. For instance, $\partial_{\mu}(x^\nu f(x))= \delta^\nu_\mu f(x)+ x^\nu \partial_{\mu} f(x)$. However, the Fourier transform yields \begin{align} \int_x e^{iq\cdot x} \partial_{\mu}(x^\nu f(x)) =-q_\mu \left(\frac{\partial}{\partial q^\nu} f(q) \right),\nn \end{align} which is still suppressed by $q$ as long as $f(q)$ is regular in the soft limit. } The only possibility to get an $\order(1)$ contribution is when the matrix element of $J^\mu(x)$ becomes singular. As we reviewed in the previous section, this only occurs in the form factors of $\pi(x)$ and $\pi(x)^2$, whose singular behavior is given by \Eq{eq:pole} and \Eq{eq:FF2_Simp}. This implies that we can truncate $\partial_{\mu} J^\mu(x)$ in \Eq{eq:Ward_mom} to quadratic order in the field.
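The Fourier-transform identity in the footnote above can be spot-checked with an explicit regular test function (a sympy sketch in one dimension; the choice $f(x)=e^{-x^2}$ is an assumed example, not from the paper):

```python
import sympy as sp

x, q = sp.symbols('x q', real=True)
f = sp.exp(-x**2)                              # a test function whose transform is regular at soft q

# Fourier transforms int dx e^{iqx} (...) computed directly:
fq = sp.integrate(sp.exp(sp.I*q*x)*f, (x, -sp.oo, sp.oo))
lhs = sp.integrate(sp.exp(sp.I*q*x)*sp.diff(x*f, x), (x, -sp.oo, sp.oo))

# claim of the footnote (one component): int e^{iqx} d_x(x f(x)) = -q d/dq f(q)
assert sp.simplify(lhs + q*sp.diff(fq, q)) == 0
```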
To evaluate the matrix element of $\partial_{\mu} J^\mu(x)$, it is simpler to use Eq.~\eqref{eq:NoetherEOM} to relate it to the equation of motion~\Eq{eq:EOMpi} \begin{align} \partial_{\mu} J^\mu(x) &= \delta \pi \cdot \textrm{EOM} \nn \\ &= \bm b \cdot \bm x\, \left(\Box \pi + 3 g_{3}\partial_0 (\dot{\pi})^2 \right) + \bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla}\right) \pi \, \Box \pi +\dots, \label{eq:current_relevant} \end{align} where we set $c_s=1$ ($\delC=0$) and truncate to quadratic order in $\pi$. The first and second parts originate from the non-linear and linear pieces of $\delta \pi$ multiplying the equation of motion. All these terms can be evaluated using Eqs.~\eqref{eq:FF1}, \eqref{eq:pole} and \eqref{eq:FF2_Simp}. Let us discuss them in turn. First, the contribution from the non-linear part of $\delta \pi$ leads to \begin{align} &\lim_{q\rightarrow 0}\,\int_x e^{iq\cdot x} \,\bm b\cdot \bm x \langle p_1,\dots,p_n |\Box \pi + 3 g_{3}\partial_0 (\dot{\pi})^2 |0\rangle \nn \\ =&\lim_{q\rightarrow 0}\, i\bm b \cdot \nabla_{\bm q}\,\left(\int_x e^{iq\cdot x} \,\langle p_1,\dots,p_n |\Box \pi + 3 g_{3}\partial_0 (\dot{\pi})^2 |0\rangle \right) \nn \\ =&\lim_{q\rightarrow 0}\, i\bm b \cdot \nabla_{\bm q}\, \left[ \hat{\delta}(q+P)\,\left( -q^2\JBG_{n} -\sum_{a=1}^n \amp_{3,a}\, \JBG_{n-1,\slashed{a}} \right) \right], \label{eq:termNLRaw} \end{align} where $\nabla_{\bm q}$ is the derivative with respect to $\bm q$ and the vertex $3 g_3 \partial_0 (\dot{\pi})^2$ in momentum space reads \begin{align} \amp_{3,a} \equiv \amp_{3}(q,p_a, -(q+p_a)) = 6i g_3 \omega E_a (\omega+E_a). \label{eq:v3_pi} \end{align} We denote this vertex by the same notation as the three-particle amplitude since they coincide. The three-particle amplitude is written in terms of energies, which trivializes the off-shell extension of the leg with momentum $(q+p_a)$.
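The quoted form of $\amp_{3,a}$ follows from \Eq{eq:Apion_3} with the third (off-shell) leg carrying energy $-(\omega+E_a)$ in the all-outgoing convention; a one-line sympy check (an illustration, not from the paper):

```python
import sympy as sp

g3, w, Ea = sp.symbols('g_3 omega E_a', real=True)

def A3(E1, E2, E3):
    # three-particle amplitude A_3 = -6 i g_3 E_1 E_2 E_3, Eq. (eq:Apion_3)
    return -6*sp.I*g3*E1*E2*E3

# the leg carrying momentum -(q + p_a) has energy -(omega + E_a),
# which reproduces Eq. (eq:v3_pi):
assert sp.expand(A3(w, Ea, -(w + Ea)) - 6*sp.I*g3*w*Ea*(w + Ea)) == 0
```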
To express the above in terms of on-shell amplitudes, we use the dominant behaviors of $\JBG_{n}$ and $\JBG_{n-1,\slashed{a}}$ in \Eqs{eq:pole}{eq:FF2_Simp}. Next, we only consider the term where the derivative $\nabla_{\bm q}$ acts on the parenthesis instead of the delta function $\hat{\delta}(q+P)$. This is well-defined since we already impose the choice in \Eq{eq:p1prescription} before taking the derivative, which eliminates the off-shell ambiguity from momentum conservation.\footnote{ \label{foonote:momConservation} More rigorously, we can change the variable $p_1= \mathbb{P}-q-\sum_{i=2}^n p_i$, where $\mathbb{P} = P+q$ is the total momentum. Evaluating the derivative in \Eq{eq:termNLRaw} then yields one term where the derivative acts on $\hat{\delta}(\mathbb{P})$ and one where it acts on the parenthesis. Since they are formally separated when integrating with a test function, we only keep the latter. } Combining the above yields \begin{align} &\lim_{q\rightarrow 0}\,\int_x e^{iq\cdot x} \,\bm b\cdot \bm x \langle p_1,\dots,p_n |\Box \pi + 3 g_{3}\partial_0 (\dot{\pi})^2 |0\rangle \nn \\ =&\hat{\delta}(q+P)\, i\bm b\cdot \nabla_{\bm q}\, \left( \amp_{n+1} +\sum_{a=1}^n \frac{\amp_{3,a}}{2p_a\cdot q}\, \left(1+ q^\mu \frac{\partial}{\partial p_a^\mu}\right) \amp_{n} \right) +\order(q), \label{eq:termNL} \end{align} where we drop the momentum arguments in $\amp_{n+1}$ for simplicity. In the presence of a three-particle coupling, there is an $\order(q^{-1})$ soft singularity in $\amp_{n+1}$ as a result of factorization. But we observe that the $\amp_{n}$ term precisely cancels this singularity in $\amp_{n+1}$. This is not a coincidence, since this combination originates from the equation of motion, which should vanish in a correlation function modulo local contact terms. From \Eq{eq:termNLRaw} to \Eq{eq:termNL}, we replace $\JBG_{n}$ and $\JBG_{n-1,\slashed{a}}$ with amplitudes, and both replacements are valid up to $\order(1)$ corrections.
These corrections can only lead to $\order(q)$ terms in the above equation, which is the same order as the other terms we drop in the current conservation. While this is easy to see for $\nabla_{\bm q}(q^2\JBG_{n})$ by naive counting, one needs to be careful with the $\order(1)$ residual terms in $\JBG_{n-1,\slashed{a}}$. The specific off-shell extension in \Eq{eq:Jsoft} is important here, since it ensures that the $\order(1)$ correction in $\JBG_{n-1,\slashed{a}}$ can only be local in $q$. The derivative $\nabla_{\bm q}$ acting on this local $\order(1)$ correction is at most $\order(1)$ under the soft limit. Combining this with the fact that $\amp_{3,a}$ only depends on $\omega$ but not $\bm q$, we find $\nabla_{\bm q} (\amp_{3,a} \times \order(1)) \sim \amp_{3,a} \times \nabla_{\bm q} (\order(1)) \sim \amp_{3,a} \times \order(1) \sim \order(q)$ if the $\order(1)$ term is local in $q$. Therefore we conclude that the above equation is indeed valid. The linear part of $\delta \pi$ can be evaluated similarly using the tree-level expansion \begin{align} &\lim_{q\rightarrow 0}\,\int_x \, e^{iq\cdot x} \, \langle p_1,\dots,p_n |(\bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla} \right) \pi(x)) \, \Box \pi(x) |0\rangle \nn \\ =&\lim_{q\rightarrow 0}\,\int_x \, e^{iq\cdot x} \, \sum_{a=1}^n \Big[ \langle \{p_i|i\neq a\} |\bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla} \right) \pi(x) |0\rangle \langle p_a |\Box\pi(x) |0\rangle \nn \\ &\qquad\qquad\qquad\quad +\langle \{p_i|i\neq a\} |\Box\pi(x) |0\rangle \langle p_a | \bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla} \right) \pi(x)|0\rangle \Big] \nn \\ =& \sum_{a=1}^n \lim_{q\rightarrow 0} \lim_{p_a^2\rightarrow 0} \left( -\bm b\cdot \Kmom_{a} \,\int_x \, e^{i(p_a+q)\cdot x} \langle \{p_i|i\neq a\} |\Box\pi(x) |0\rangle \right) \nn \\ =& \sum_{a=1}^n \lim_{q\rightarrow 0} \lim_{p_a^2\rightarrow 0} \bm b\cdot \Kmom_{a} \,\left[ \hat{\delta}(q+P) \left((q+p_a)^2 \JBG_{n-1,\slashed{a}} \right) \right].
\label{eq:termLinearRaw} \end{align} To arrive at the third line, we use the following identities on the single-particle state \begin{align} \langle p_a |\Box\pi(x) |0\rangle &= -p_a^2 e^{ip_a\cdot x} \xrightarrow{p_a^2\rightarrow 0} 0 \\[2pt] \langle p_a |\bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla} \right) \pi(x) |0\rangle &= \bm b\cdot \bm x \langle p_a |\dot{\pi}(x) |0\rangle +t \bm b\cdot \langle p_a |\bm \nabla{\pi}(x) |0\rangle \nn \\[3pt] &= -\bm b\cdot \Kmom_{a} \, e^{ip_a\cdot x}, \label{eq:boostSingleParticle} \end{align} and integration by parts. Here the relativistic boost generator $\Kmom_{a}$ for particle $a$ reads \begin{align} \Kmom_{a} \rightarrow E_a\,\frac{\partial}{\partial \bm p_a} +\bm p_a \frac{\partial}{\partial E_a}. \end{align} Note that for generic $c_s$, the above will be replaced with a more general definition in \Eq{eq:boost_general}. Similar to \Eq{eq:termNL}, we move the boost operator past the momentum-conserving delta function in \Eq{eq:termLinearRaw} and use \Eq{eq:FF2_Simp} to replace $\JBG_{n-1,\slashed{a}}$. This yields \begin{align} &\lim_{q\rightarrow 0}\,\int_x \, e^{iq\cdot x} \, \langle p_1,\dots,p_n |(\bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla} \right) \pi(x)) \, \Box \pi(x) |0\rangle \nn \\ =& -\hat{\delta}(q+P)\,\sum_{a=1}^n \bm b\cdot \Kmom_{a} \, \left[\left(1+ q^\mu \frac{\partial}{\partial p_a^\mu}\right)\amp_{n}\right] +\order(q)\, \nn \\ =& -\hat{\delta}(q+P)\,\sum_{a=1}^n \bm b\cdot \Kmom_{a} \, \amp_{n} +\order(q)\,. \label{eq:termLinear} \end{align} Unlike \Eq{eq:termNL}, the off-shell extension $p_a \rightarrow p_a+q$ is only an $\order(q)$ effect, so we can simply use the hard amplitude $\amp_n$ in the last line.
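Equation~\eqref{eq:boostSingleParticle} can be verified directly on the plane wave (a sympy sketch in one spatial dimension, using $p\cdot x = Et - \bm p\cdot\bm x$; an illustration, not part of the paper):

```python
import sympy as sp

t, x, E, p, b = sp.symbols('t x E p b', real=True)

wavefn = sp.exp(sp.I*(E*t - p*x))              # <p|pi(x)|0> = e^{i p.x} with p.x = E t - p x
# LHS: b * (x d_t + t d_x) acting on the plane wave
lhs = b*(x*sp.diff(wavefn, t) + t*sp.diff(wavefn, x))
# RHS: -b * K_a e^{i p.x} with K_a = E d/dp + p d/dE acting on the momentum labels
rhs = -b*(E*sp.diff(wavefn, p) + p*sp.diff(wavefn, E))
assert sp.expand(lhs - rhs) == 0
```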
Combining \Eqs{eq:termNL}{eq:termLinear}, we find the Ward-Takahashi identity~\eqref{eq:Ward_mom} yields the soft theorem for spontaneously broken boosts on on-shell amplitudes \begin{align} \lim_{q\rightarrow 0} i\nabla_{\bm q}\, \left( \amp_{n+1} +\sum_{a=1}^n \frac{\amp_{3,a}}{2p_a\cdot q}\, \left(1+ q^\mu \frac{\partial}{\partial p_a^\mu}\right) \amp_{n} \right) =& \sum_{a=1}^n \Kmom_{a} \, \amp_{n} +\order(q). \label{eq:ward_final} \end{align} This relates the soft Goldstone boson emission, after subtracting out the singularity from three-particle amplitudes, to the boost of the hard amplitude. We verify the above theorem with tree-level scattering amplitudes up to $n=7$ by explicit calculation. Let us emphasize the caveats in evaluating the above soft theorem, since derivatives acting on on-shell amplitudes are not always well-defined. For soft theorems beyond leading order, it is common to adopt certain prescriptions in order for the theorems to hold~\cite{DiVecchia:2015jaq,Low:2017mlh,Low:2018acv,Cheung:2021yog}, and our soft theorem here is no exception. As explained below \Eq{eq:termLinear}, the off-shell extension in $\Kmom_{a} \, \amp_{n}$ is not relevant under the soft limit. On the LHS, we avoid the ambiguities from momentum conservation and on-shell conditions by applying the prescription in \Eqs{eq:p1prescription}{eq:p1Onshell} \emph{before} taking the derivative $\nabla_{\bm q}$. As we pointed out earlier, since $\amp_{n+1}$ contains one more coupling constant $M^4_{n+1}$ compared to $\amp_n$, the soft limit of the former cannot be fully fixed by the latter. Here we see that the Ward identity only relates the \emph{spatial derivative} of the soft emission to the hard amplitude. Since the coupling $M^4_{n+1}$ only contributes to $\amp_{n+1}$ as a contact term that depends only on energies, this new coupling is projected out by the derivative and therefore does not obstruct the soft theorem. 
On the other hand, we do not have full control of the soft limit, which is needed to construct on-shell recursion relations for amplitudes~\cite{Cheung:2015ota,Luo:2015tat,Bartsch:2022pyi}. \subsubsection*{Goldstone bosons with generic $c_s$} For the case of generic $c_s$, the form of the soft theorem is not immediately obvious, since the boost operator actually depends on the field basis, as we have demonstrated explicitly in \Eq{eq:boost_generalBasis}. Scattering amplitudes, which are always defined up to on-shell conditions, provide a clean perspective here. In order to have a valid soft theorem, the boost operator has to commute with the on-shell conditions (at least before imposing momentum conservation). This is only true if the boost is with respect to $c_s$. We will see this is indeed the case. Mathematically, the soft theorem for generic $c_s$ can be derived in any basis. However, it is much easier to use the basis in \Eq{eq:newPi} with $\alpha=-\delC$, since the current conservation is almost identical to the relativistic case \begin{align} \partial_{\mu} J^\mu(x) &= \delta \pi \cdot \textrm{EOM} \nn \\ &= \bm b \cdot \bm x\, \left((c_s^{-2}\partial^2_0 -\bm \nabla^2) \pi + 3 g_{3}\partial_0 (\dot{\pi})^2 \right) + (\bm b \cdot (c_s^{-2} \bm x \partial_0+ t\bm \nabla) \pi) \, (c_s^{-2}\partial^2_0 -\bm \nabla^2) \pi +\dots\,. \end{align} Compared to the relativistic case in \Eq{eq:current_relevant}, we only have to modify the boost and d'Alembert operators with respect to the nontrivial $c_s$. But crucially we stick to the same three-particle amplitude $\amp_{3,a}$ in \Eq{eq:v3_pi}, since the additional cubic vertex when $\delC\neq 0$ is canceled in the new basis by setting $\alpha=-\delC$ in \Eq{eq:action_newbasis}. Therefore we just need to modify the soft theorem with the new propagators and boost operator. 
The boost operator for particle $a$ with respect to the speed of propagation $c_a$ is given by \begin{align} \Kmom_{a} \equiv \frac{E_a}{c_a^2}\,\frac{\partial}{\partial \bm p_a} +\bm p_a \frac{\partial}{\partial E_a} \,. \label{eq:boost_general} \end{align} For pure Goldstone boson scattering we have $c_a=c_s$. In particular, the boost operator commutes with the on-shell condition of particle $a$, \begin{align} \Kmom_{a}\left(p_a^2 \right) = \Kmom_{a}\left(E_a^2/c_a^2 - \bm p_a^2\right) =0\,. \end{align} The soft theorem with generic $c_s$ then has exactly the same form as \Eq{eq:ward_final}, but with the full boost operator~\eqref{eq:boost_general} and the propagator $\prop_a$ defined in \Eq{eq:soft_prop} \begin{align} \lim_{q\rightarrow 0}\, i\nabla_{\bm q}\, \left( \amp_{n+1} +\sum_{a=1}^n \frac{\amp_{3,a}}{\prop_a}\, \left(1+ q^\mu \frac{\partial}{\partial p_a^\mu}\right) \amp_{n} \right) =& \sum_{a=1}^n \Kmom_{a} \, \amp_{n} +\order(q). \label{eq:ward_finalcs} \end{align} We also check this up to $n=7$. The same caveats on off-shell extensions, discussed in the paragraph after \Eq{eq:ward_final}, apply here as well. The same soft theorem can be derived in the original field basis, although the calculation is more involved. The relativistic boost operator needs extra care since it does not commute with the propagator. In the end, the additional three-particle vertex modifies the relativistic boost operator into the one with respect to $c_s$. We can see that the standard basis, while yielding a simple action, obscures the actual infrared behavior of physical observables. \subsection{Coupling to Matter}\label{sec:mixed} During inflation, we can have spectator fields other than the inflaton and the metric. In general, the spectator can have its own speed of propagation that is neither $c$ nor $c_s$. Let us first review the construction in terms of the action, and then discuss the soft theorem. 
Consider a scalar field $\phi$ that propagates with speed $\cm$, whose action is \begin{align} \mathcal{L}_\phi \supset \frac{1}{2} \left( \cm^{-2}\, \dot{\phi}^2 - (\nabla\phi)^2 \right) +\dots\,. \nn \end{align} If we want to realize Lorentz invariance non-linearly, the deviation from the relativistic kinetic term has to be accompanied by a coupling to the Goldstone boson \begin{align} \mathcal{L}_\phi =& \frac{1}{2} \partial^\mu\phi \partial_{\mu}\phi +\frac{\delPhi}{2}\, \left(\partial^\mu (t+\pi) \partial_{\mu}\phi \right)^2 \\[3pt] &+\frac{y_1}{2}\,(1-g^{\mu\nu}\partial_{\mu}(t+\pi)\partial_{\nu}(t+\pi)) \partial^\rho \phi \partial_\rho \phi +\frac{y_2}{2}\,(1-g^{\mu\nu}\partial_{\mu}(t+\pi)\partial_{\nu}(t+\pi)) \left(\partial^\mu (t+\pi) \partial_{\mu}\phi \right)^2,\nn \label{eq:matterFullAction} \end{align} where we define $\delPhi \equiv \cm^{-2}-1$. To illustrate the full soft theorem, we also include two possible interactions starting at cubic order with couplings $y_1$ and $y_2$. In general there are other possible interactions one can write down. The theory is now also invariant under boosts. Under the infinitesimal transformation in \Eq{eq:symmetry_small}, the scalar $\phi$ undergoes the linear relativistic boost \begin{align} \delta \phi = \bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla} \right)\phi \,. \end{align} Crucially, the non-linearly realized Lorentz invariance implies that the modification of the kinetic term comes hand-in-hand with a cubic interaction with the Goldstone boson. Expanding \Eq{eq:matterFullAction} to cubic order yields \begin{align} \mathcal{L}_\phi = \frac{1}{2} \left( \cm^{-2}\, \dot{\phi}^2 - (\nabla\phi)^2 \right) +\delPhi\, \dot{\phi}\, \partial^\mu \phi \partial_\mu \pi -y_1 \dot{\pi} (\partial \phi)^2 -y_2 \dot{\pi} \dot{\phi}^2 +\dots \label{eq:matterCubicAction} \end{align} We can also compute the corresponding amplitudes. 
For instance, the amplitude with two $\phi$ and one $\pi$ from \Eq{eq:matterCubicAction} reads \begin{align} \amp_{3}(q_\pi,p_{\phi},(-q-p)_\phi) =& -2i(\delPhi^2+2y_1\delPhi+y_2)\,\omega E(E+\omega)- i\left(\frac{\delta_\phi}{2}-y_1\right)(\delPhi-\delC)\omega^3\,, \end{align} where the subscripts on the momenta label the particle species, i.e., the particles with momenta $p$ and $-q-p$ above correspond to the $\phi$ field, and the particle with momentum $q$ is the Goldstone boson. As in the Goldstone boson case, we can also perform a field redefinition to change the linear boost and the cubic vertices. Observe that under integration by parts \begin{align} \delta_\phi \dot{\phi}\, \partial^\mu \phi \partial_\mu \pi =& -\delPhi^2\,\dot{\pi} \,\dot{\phi}^2 +\frac{\delta_\phi}{4}\left(\delPhi-\delC\right)\, \dddot{\pi} \phi^2 \nn\\ &-\delta_\phi \phi\dot{\phi} \,\left(c_s^{-2}\partial^2_0- \bm \nabla^2\right) \pi -\delta_\phi \left(\pi\dot{\phi}+\frac{1}{2}\phi\dot{\pi} \right)\left(\cm^{-2}\partial^2_0- \bm \nabla^2\right) \phi. \label{eq:3ptIdentity} \end{align} The second line is proportional to the leading equations of motion. Thus these terms can be removed by the following field redefinitions \begin{align} \pi \rightarrow& \pi + \Delta\pi = \pi - \delPhi \phi \dot{\phi} \nn \\ \phi \rightarrow& \phi + \Delta\phi = \phi - \delPhi \left(\pi\dot{\phi}+\frac{1}{2}\phi\dot{\pi} \right), \label{eq:newBasis_matter} \end{align} where the shifts $\Delta\pi$ and $\Delta\phi$ are related to the terms proportional to the equations of motion~\eqref{eq:3ptIdentity}. The transformation of $\phi$ and $\pi$ under the Lorentz boost is also modified accordingly. Since the original $\phi$ transforms linearly, $\delta\pi$ remains the same up to linear order in the new basis. 
However, the original $\delta\pi$ has a non-linear term which modifies $\delta\phi$ at linear order \begin{align} \delta\phi &=\bm b \cdot \left(\bm x\, \partial_0 + t \bm{\nabla} \right)\phi +\delPhi \left(\delta\pi\dot{\phi}+\frac{1}{2}\phi\,(\partial_0{\delta\pi}) \right) +\order(\phi^2,\pi\phi) \nn \\ &= \bm b \cdot \left(c_\phi^{-2} \bm x\, \partial_0 + t \bm{\nabla} \right)\phi. \label{eq:newTransformation_matter} \end{align} We find that in this field basis, $\delta\phi$ is given by the boost with respect to its own speed $c_\phi$. The action in the new basis reads \begin{align} \mathcal{L}_\phi = \frac{1}{2} \left( \cm^{-2}\, \dot{\phi}^2 - (\nabla\phi)^2 \right) -\delPhi^2\,\dot{\pi} \,\dot{\phi}^2 +\frac{\delta_\phi}{4}\left(\delPhi-\delC\right)\, \dddot{\pi} \phi^2 -y_1 \dot{\pi} (\partial \phi)^2 -y_2 \dot{\pi} \dot{\phi}^2 +\dots \label{eq:matterCubicActionNew} \end{align} Similar to the earlier discussion, the derivation of the soft theorem with matter coupling is more straightforward in this field basis. The soft theorem in the general case with $\pi$ and the matter field $\phi$ also follows similarly. We use the new basis combining \Eq{eq:newBasis_matter} and \eqref{eq:newPi} (with $\alpha = -\delC$), such that the linear boosts of $\pi$ and $\phi$ are with respect to their own speeds of propagation, $c_s$ and $c_\phi$. So the soft theorem is still given by \Eq{eq:ward_finalcs}, but generalized to include the $\pi-\phi-\phi$ cubic vertex and the boost with respect to the speed of each particle. 
The general three-particle vertices are \begin{align} \amp_{3,a} &= \amp(q_\pi,p_{a,X},(-q-p_a)_X)= \begin{cases} \amp(q_\pi,p_{a,\pi},(-q-p_a)_\pi) & \textrm{if $a\in \pi$} \\[5pt] \amp(q_\pi,p_{a,\phi},(-q-p_a)_\phi) & \textrm{if $a\in \phi$} \end{cases} \end{align} where $X$ denotes the species of particle $a$ and the off-shell leg $-q-p_a$ \begin{align} \amp(q_\pi,p_{a,\pi},(-q-p_a)_\pi) &= 6i g_3 \,\omega E_a (\omega+E_a) \\ \amp(q_\pi,p_{a,\phi},(-q-p_a)_\phi) &= -2i(\delPhi^2+2y_1\delPhi+y_2)\,\omega E_a(E_a+\omega)- i\left(\frac{\delta_\phi}{2}-y_1\right)(\delPhi-\delC)\omega^3. \end{align} Again, it turns out that the vertices in all cases coincide with the three-particle amplitudes when written in terms of energies.\footnote{ From the action in \Eq{eq:matterCubicActionNew}, the vertex actually has an extra term proportional to $\omega \prop_a$, which only leads to an $\order(q)$ correction in the soft theorem. Therefore the $\pi-\phi-\phi$ vertex is still effectively given by the three-particle amplitude $\amp_{3,a}$. This is the case in general as long as the three-particle vertex is proportional to $\omega$. } If we include other interactions between the matter and the Goldstone boson, $\amp(q_\pi,p_{a,\phi},(-q-p_a)_\phi)$ needs to be modified accordingly. \subsection{Non-perturbative validity} The beauty of symmetry is that many of its consequences are valid even non-perturbatively. For instance, both the Ward-Takahashi identity and the Goldstone theorems on the correlation functions hold for any coupling. Although our derivation starts with the Ward-Takahashi identity, a tree-level expansion is needed to evaluate the correlation functions of $\pi(x)^2$ shown in \Eq{eq:FF2_Simp}. Therefore, the full soft theorem is only valid for tree-level scattering amplitudes. Nevertheless, the need for a perturbative expansion can be circumvented when the three-particle amplitudes vanish. In this case, our soft theorem can be lifted to the non-perturbative level. 
This is analogous to the Adler zero for soft pion emission in QCD, which also holds non-perturbatively under the same condition.\footnote{The vanishing of the three-pion amplitude in QCD is guaranteed by parity, while the pion-nucleon-nucleon amplitude is not zero. So the Adler zero holds non-perturbatively for pion scattering, but breaks down in the presence of nucleons.} Let us specify the assumptions we use in the non-perturbative regime. \begin{enumerate} \item Lorentz boosts are spontaneously broken but not translations. \item The Goldstone boson propagates at the speed of light, $c_s=1$. \item The matrix elements of the boost current $J^{\mu,i}$ between the single-particle state and the vacuum are given by \Eqs{eq:JboostT}{eq:JboostX}. \item There are no three-particle amplitudes, and the soft limit of amplitudes starts at $\order(q)$. These two statements are equivalent at tree level, but we list them separately for the sake of the generic case. \end{enumerate} Consider the Ward-Takahashi identity~\eqref{eq:Ward_raw} under these assumptions. As we mentioned, one needs to be careful with the order between the soft limit $q\rightarrow 0$ and the on-shell limits of hard particles, $p_a^2 \rightarrow 0$. In the earlier sections with generic three-particle amplitudes, we take the on-shell limits first and then the soft $q$ limit. In the perturbative regime with vanishing three-particle amplitudes, one can take the limits in either order and still find the same conclusion. Here we will show the derivation of the soft theorem in the opposite order, which is valid when the three-particle amplitudes vanish. As we will see, this derivation does not need a perturbative expansion and therefore holds more generally under the assumptions. 
Taking the $q\rightarrow 0$ limit first and applying LSZ reduction to the Ward-Takahashi identity yields \begin{align} [\textrm{LSZ}]\lim_{q\rightarrow 0} \int_x e^{i q\cdot x} \partial_{\mu} \langle 0 | T( J^\mu(x) \pi(x_1) \dots \pi(x_n)) |0\rangle =- \sum^n_{a=1} \lim_{p_a^2\rightarrow 0} p_a^2\int_{x_a} \, e^{i p_a\cdot x_a} \, \langle \{p_i|i\neq a\} | \delta \pi(x_a) |0\rangle \nn \end{align} where $[\textrm{LSZ}]\equiv \prod^n_{a=1} \lim_{p_a^2\rightarrow 0} (-i p_a^2)\int_{x_a} e^{ip_a\cdot x_a}$ and we have already applied $q\rightarrow 0$ on the RHS. Let us consider each side of the equation in turn. On the RHS, the momentum from the Fourier transform matches the one used for amputation. So it no longer vanishes, in contrast to the other order of limits taken in \Eq{eq:Ward_mom}. Given the form of $\delta\pi(x)$ in \Eq{eq:symmetry_small}, we see that the non-linear term does not generate a pole and thus drops out, so we only need to keep the linear part of $\delta\pi(x)$ \begin{align} &-\sum^n_{a=1} \lim_{p_a^2\rightarrow 0} p_a^2\int_{x_a} \, e^{i p_a\cdot x_a} \, \langle \{p_i|i\neq a\} | \delta \pi(x_a) |0\rangle \nn \\ =&-\sum^n_{a=1} \lim_{p_a^2\rightarrow 0} p_a^2\int_{x_a} \, e^{i p_a\cdot x_a} \, \langle \{p_i|i\neq a\} | \bm b\cdot (\bm x_a \partial_0+t\bm \nabla)\pi(x_a) |0\rangle \nn \\ =&-\sum^n_{a=1} \lim_{p_a^2\rightarrow 0} p_a^2 (\bm b \cdot \Kmom_a)\, \int_{x_a} \, e^{i p_a\cdot x_a} \, \langle \{p_i|i\neq a\} |\pi(x_a) |0\rangle \nn \\ =& \sum^n_{a=1} \bm b \cdot \Kmom_a\,\left( \delta(P) \amp_{n} \right) \label{eq:NP_rhs} \end{align} We use the fact that $p_a^2$ commutes with the boost operator $\Kmom_a$ when $c_s=1$. The LHS is very similar to the tree-level calculation. The crucial difference is that we first take the soft limit before the amputation. 
\begin{align} &[\textrm{LSZ}]\lim_{q\rightarrow 0} \int_x e^{i q\cdot x} \partial_{\mu} \langle 0 | T( J^\mu(x) \pi(x_1) \dots \pi(x_n)) |0\rangle \nn \\ =&[\textrm{LSZ}] \lim_{q\rightarrow 0} \left[ \int_x e^{i q\cdot x} x^i\, \langle 0 | T( \Box \pi(x) \pi(x_1) \dots \pi(x_n)) |0\rangle +\order(q) \right] \end{align} We use the assumption that the only pole created by the current in an off-shell correlation function is the single-particle emission. So effectively we can use the leading term in $\partial_{\mu} J^{\mu,i} \sim x^i \Box \pi$. The rest of the contribution is suppressed by $\order(q)$ due to the derivative on the current. It is crucial that we take the soft limit before the on-shell limits. If this is not the case, the current can be inserted on an external on-shell line and generate a $1/(p_a+q)^2 \sim 1/(2p_a\cdot q)$ pole.\footnote{In the previous perturbative derivation, $\partial_{\mu}J^\mu$ contains terms quadratic in the field, even in the absence of three-particle amplitudes, which leads to the boost on the hard amplitude as shown in \Eqs{eq:termLinearRaw}{eq:termLinear}. But if we first take the soft limit, the contribution quadratic in $\pi$ is suppressed. The soft theorem remains the same, but the boost on the hard amplitude is now reproduced by the other side of the Ward identity, as shown in \Eq{eq:NP_rhs}.} Now keeping only the leading order contribution in the above equation leads to \begin{align} [\textrm{LSZ}]\lim_{q\rightarrow 0} \int_x e^{i q\cdot x} \partial_{\mu} \langle 0 | T( J^\mu(x) \pi(x_1) \dots \pi(x_n)) |0\rangle =&[\textrm{LSZ}] \lim_{q\rightarrow 0} \bm b \cdot \nabla_{\bm q} \langle q | T( \pi(x_1) \dots \pi(x_n)) |0\rangle \nn \\ =&\lim_{q\rightarrow 0} \bm b \cdot \nabla_{\bm q} \left( \delta(q+P) i\amp_{n+1}\right). \label{eq:NP_lhs} \end{align} When commuting the on-shell conditions with the derivative, one needs to apply the same prescription as before. 
Equating both sides of the Ward identity, given in \Eq{eq:NP_rhs} and \Eq{eq:NP_lhs}, and factoring out the momentum-conservation delta functions, we find \begin{align} \lim_{q\rightarrow 0}\,i \nabla_{\bm q}\, \amp_{n+1} = \sum^n_{a=1} \Kmom_a\,\amp_{n} +\order(q). \end{align} Crucially, this derivation does not require a perturbative expansion; under the assumptions listed earlier, we therefore expect it to hold non-perturbatively. The final theorem is the same as the tree-level result~\eqref{eq:ward_finalcs} when $\mathcal{V}_{3,a}=0$ and $c_s=1$. It is possible that the non-perturbative theorem also extends to generic $c_s$ when the three-particle amplitudes vanish, but we leave this investigation for the future. \subsection{Summary} \label{sec:softthm_summary} In this section we derived a soft theorem for the spontaneous breaking of boosts in the EFT of Inflation. The final form of this soft theorem is given by \begin{center} \begin{tcolorbox}[colback=light-gray] \begin{minipage}{\textwidth} \begin{align} & \lim_{q\rightarrow 0}\, i\nabla_{\bm q}\, \left( \amp_{n+1} +\sum_{a=1}^n \frac{\amp_{3,a}}{\prop_a}\, \left(1+ q^\mu \frac{\partial}{\partial p_a^\mu}\right) \amp_{n} \right) = \sum_{a=1}^n \Kmom_{a} \, \amp_{n} +\order(q). \label{eq:summary} \\[5pt] & \prop_a \equiv c_a^{-2}(E_a+\omega)^2-(\bm p_a+\bm q)^2 \,,\quad \Kmom_{a} \equiv \frac{E_a}{c_a^2}\,\frac{\partial}{\partial \bm p_a} +\bm p_a \frac{\partial}{\partial E_a} \\ & \amp_{3,a} = \begin{cases} 6i g_3 \,\omega E_a (\omega+E_a) & \textrm{if $a\in \pi$} \\[5pt] -2i(\delPhi^2+2y_1\delPhi+y_2)\,\omega E_a(E_a+\omega)- i\left(\frac{\delta_\phi}{2}-y_1\right)(\delPhi-\delC)\omega^3 & \textrm{if $a\in \phi$} \end{cases} \end{align} \end{minipage} \end{tcolorbox} \end{center} The prescriptions in \Eqs{eq:p1prescription}{eq:p1Onshell} need to be applied \emph{before} taking the derivative $\nabla_{\bm q}$. For matter coupling we assume the interaction in \Eq{eq:matterFullAction}. 
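As an elementary cross-check of the boxed operators (not part of the derivation itself), one can verify symbolically that the boost operator $\Kmom_{a}$ annihilates the on-shell condition $E_a^2/c_a^2 - \bm p_a^2$ of a particle propagating at speed $c_a$, as used throughout. A minimal sketch using sympy:

```python
import sympy as sp

# On-shell condition for a particle propagating at speed c: E^2/c^2 - |p|^2
E, c = sp.symbols('E c', positive=True)
px, py, pz = sp.symbols('p_x p_y p_z', real=True)
onshell = E**2/c**2 - (px**2 + py**2 + pz**2)

def boost(f, i):
    """Spatial component i of the modified boost K = (E/c^2) d/dp + p d/dE."""
    p_i = (px, py, pz)[i]
    return (E/c**2)*sp.diff(f, p_i) + p_i*sp.diff(f, E)

# The modified boost annihilates the on-shell condition, component by component
assert all(sp.simplify(boost(onshell, i)) == 0 for i in range(3))
```

Setting $c=1$ recovers the relativistic boost generator used in the earlier derivation.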
When the three-particle amplitudes vanish, the soft theorem becomes non-perturbative. A non-trivial feature of this result is the appearance of a modified boost operator $\Kmom_{a}$, associated with the speed of propagation of a given particle, $c_a$, even though it is derived from the Ward identity for the broken relativistic boosts (i.e.~$c=1$) of the microscopic theory. \section{Implications for Inflation} \label{sec:inflation} \subsection{Dirac-Born-Infeld Inflation} Our soft theorem applies to generic models of inflation, but we can ask whether there is any theory that is ``special'' from this on-shell point of view. This is analogous to the ``exceptional'' Goldstone boson EFTs identified through the enhanced Adler zero~\cite{Cheung:2014dqa,Cheung:2016drk}. In particular, the DBI model is the unique derivatively-coupled theory with amplitudes behaving as $\order(q^2)$ under the soft limit. Now we ask the following question: \emph{``what are the boost-breaking EFTs that have an emergent Lorentz invariance with respect to the speed of sound?''} By \emph{emergent Lorentz invariance}, we mean that the Goldstone boson amplitudes look Lorentz invariant if we rescale $t\rightarrow t/c_s$.\footnote{Of course this emergent Lorentz invariance will break down when we consider dynamical gravity.} In other words, this requires that all amplitudes are written in terms of the rescaled inner product in \Eq{eq:p_def}, which implies that all odd-particle scattering amplitudes have to vanish. In this case we can recycle the results of Refs.~\cite{Cheung:2014dqa,Cheung:2016drk}. Let us prove that the only non-relativistic Goldstone theory that satisfies this requirement is DBI Inflation. In the following paragraphs, by Lorentz invariance we mean the emergent Lorentz invariance with respect to $c_s$. The proof is quite simple. 
The first immediate consequence of Lorentz invariance is that the three-particle amplitude $\amp_{3}$ vanishes identically, since the three-particle kinematics demand that all $\tilde{p}_i\cdot \tilde{p}_j=0$: momentum conservation and the massless on-shell conditions give $2\tilde{p}_i\cdot \tilde{p}_j = (\tilde{p}_i+\tilde{p}_j)^2 = \tilde{p}_k^2 = 0$. This also enforces $\mathcal{V}_{3,a}=0$. Next, since the rescaled inner product $\tilde{p}_i\cdot \tilde{p}_j$ is invariant under the rescaled boosts, any amplitudes that are Lorentz invariant must be invariant under boosts with respect to $c_s$, $\sum_{a=1}^n \Kmom_{a} \, \amp_{n}=0$. Combining these two conditions, our soft theorem for emergent Lorentz-invariant amplitudes reduces to \begin{align} &\lim_{q\rightarrow 0}\, i\nabla_{\bm q}\, \amp_{n+1} = 0 \label{eq:DBI} \end{align} up to corrections starting at $\order{(q)}$. Since $\amp_{n+1}$ is Lorentz invariant as well, this implies that the full amplitude $\amp_{n+1}$ is actually $\order{(q^2)}$ under the soft limit. Using the results of Refs.~\cite{Cheung:2014dqa,Cheung:2016drk}, we find that the unique theory (to leading order in the derivative expansion) with this property is DBI Inflation. This feature of DBI Inflation was first pointed out in Ref.~\cite{Grall:2020ibl}. The authors use the enhanced shift symmetry of DBI to make the theory emergently Lorentz invariant with nontrivial $c_s$. Note that this property is not manifest in the original action, but only revealed after a field redefinition~\cite{Grall:2020ibl}. Using the on-shell approach, we see that it naturally arises as a special case of the soft theorem. In addition, we also prove the \emph{uniqueness} of DBI Inflation under emergent Lorentz invariance. For the above reasons, DBI Inflation corresponds to a special point in the space of inflationary models and therefore also in the space of non-Gaussian statistics. This is well-known for the three- and four-point functions of the primordial density fluctuations. 
Our observation about the soft limit of the scattering amplitude in DBI could be useful in bootstrapping the predictions of DBI for higher-point cosmological correlators. \subsection{Coupling to Gravity} One goal of studying amplitudes in the EFT of Inflation would be to understand inflation itself. One might conclude that the scattering amplitudes are a valid probe of inflation in the sub-horizon regime where the geometry is approximately flat\footnote{Although, one may worry that the soft limit, $q\to 0$, is in tension with the sub-horizon limit.}. Yet, in the process we took the decoupling limit, $\Mpl \to \infty$, which, of course, does not hold in our universe. It was argued in~\cite{Pajer:2020wnj} that the self-consistency of amplitudes in Lorentz-violating EFTs only holds in flat space with no gravitational interactions. In particular, they argue that coupling to gravity forbids the scalar amplitudes we discussed in the previous section, so that they cannot be arrived at as a limit of the EFT of Inflation. If we wish to apply our soft theorems to inflation itself, it is therefore essential that we understand any potential limitations of this kind. We will show that there is no contradiction introduced by coupling to gravity, or at least not one that is visible in the four-point amplitude for the production of a graviton, $\varphi\varphi\to \varphi\gamma$. We can understand the problem, and its resolution, most straightforwardly by considering a spectator scalar field, $\varphi$, with a Lorentz-violating interaction $\lambda \dot \varphi^3$. 
If we include the relativistic graviton coupling that arises from the canonical kinetic term, then our action would take the form \beq {\cal L} \supset \frac{1}{2} \partial_\mu \varphi \partial^\mu \varphi + \frac{\lambda}{3!} \dot\varphi^3 + \frac{1}{2 \Mpl} \gamma^{\mu \nu}\partial_\mu \varphi \partial_\nu \varphi \eeq where we used $g_{\mu\nu} = \eta_{\mu\nu} +\Mpl^{-1}\gamma_{\mu \nu}$ with $\gamma_\mu^\mu= 0$ and kept only the leading gravitational term. If we compute the $\varphi\varphi\to \varphi\gamma$ amplitude, we get \beq {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma}) = -i\frac{\lambda}{\Mpl} \left( \frac{E_1 E_2 E_{12}}{2 p_1\cdot p_2} p_3^\mu p_3^\nu+\frac{E_1 E_3 E_{13}}{2 p_1\cdot p_3} p_2^\mu p_2^\nu+\frac{E_2 E_3 E_{23}}{2 p_2\cdot p_3} p_1^\mu p_1^\nu \right) \epsilon_{\mu \nu}(p_4) \eeq where $E_{ij} = E_i+E_j$ and $\epsilon_{\mu \nu}(p_4)$ is the polarization tensor of the graviton. In writing this expression, we used $(p_i + p_4)^\mu \epsilon_{\mu \nu}(p_4) = p_i^\mu \epsilon_{\mu \nu}(p_4)$. Normally one would prefer to use spinor-helicity variables for the graviton amplitude, but hopefully our reason for avoiding them will be clear below. Now suppose we perform a gauge transformation, \beq \epsilon_{\mu \nu}(p_4) \to \epsilon_{\mu \nu}(p_4) +i p_{4,\mu} f_\nu(p_4)+i p_{4,\nu} f_\mu(p_4) \eeq for some set of functions $f^\mu(p_4)$. The amplitude will shift by ${\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma}) \to {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma}) + \delta {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma})$. 
\beq \delta {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma}) = \frac{\lambda}{\Mpl} \left(E_1 E_2 E_{12}\, p_3^\mu+E_1 E_3 E_{13} \, p_2^{\mu} +E_2 E_3 E_{23} \, p_1^{\mu} \right) f_\mu(p_4) \ . \eeq Given that $g^{\mu \nu} \epsilon_{\mu \nu}(p_4) =0$, we can remove a term proportional to $p_4^\mu f_\mu (p_4)$ by subtracting a term proportional to $g^{\mu \nu}$ from the polarization-stripped amplitude ${\cal A} (p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma})$ before the gauge transformation. We then notice that we can write $\delta {\cal A}$ as \beq \delta {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma})= -\frac{\lambda}{\Mpl} \left( E_1 E_2 p_3^{\mu}+ E_1 E_3 p_2^{\mu}+ E_2 E_3 p_1^{\mu} \, \right) E_4 f_\mu(p_4) + \frac{\lambda}{\Mpl} E_1 E_2 E_3\, p_4^{\mu} f_\mu(p_4)\ , \eeq so that the final term can be removed without changing the amplitude. However, the first term does not vanish and reflects a failure of the Ward identity for the graviton~\cite{Weinberg:1964ew}. This failure of the Ward identity also implies that the amplitude written in spinor-helicity variables is not well defined, as was observed in~\cite{Pajer:2020wnj}. The origin of this problem is that our action is not diffeomorphism invariant. As gravity is gauging a local Lorentz transformation, it is therefore not surprising that the gravitational scattering is inconsistent if Lorentz invariance is explicitly broken. 
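The rearrangement of $\delta {\cal A}$ above is pure algebra and can be checked symbolically. The following sympy sketch assumes all-incoming conventions, $\sum_i E_i = 0$ and $\sum_i \bm p_i = 0$, and strips off the overall $\lambda/\Mpl$ and the contraction with $f_\mu(p_4)$; since the identity holds component by component, a single momentum component suffices:

```python
import sympy as sp

E1, E2, E3 = sp.symbols('E1 E2 E3')
p1, p2, p3 = sp.symbols('p1 p2 p3')

# All-incoming conventions (assumed): total energy and momentum vanish
E4 = -(E1 + E2 + E3)
p4 = -(p1 + p2 + p3)

# Gauge shift of the amplitude, one momentum component, overall factors stripped
S = E1*E2*(E1 + E2)*p3 + E1*E3*(E1 + E3)*p2 + E2*E3*(E2 + E3)*p1

# Decomposition into a term proportional to E4 and a pure-gauge term along p4
T = -(E1*E2*p3 + E1*E3*p2 + E2*E3*p1)*E4 + E1*E2*E3*p4

assert sp.expand(S - T) == 0
```

The pure-gauge piece along $p_4^\mu$ is exactly the term that can be discarded using $g^{\mu\nu}\epsilon_{\mu\nu}(p_4)=0$.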
Following the construction of the EFT of Inflation, we can make the action invariant by introducing the Goldstone boson \bea {\cal L} &=& \frac{1}{2} \partial_\mu \varphi \partial^\mu \varphi + \frac{\lambda}{3!} (g^{\mu\nu}\partial_\mu(t+\pi) \partial_{\nu}\varphi)^3 +\frac{1}{2 \Mpl} \gamma^{\mu \nu}\partial_\mu \varphi \partial_\nu \varphi \\ &=& \frac{1}{2} \partial_\mu \varphi \partial^\mu \varphi + \frac{1}{3!}\lambda \dot\varphi^3 + \frac{1}{2\Mpl}\gamma^{\mu \nu}\partial_\mu \varphi \partial_\nu \varphi+ \frac{\lambda}{2 \Mpl} \gamma^{\mu 0}\partial_\mu \varphi \dot \varphi^2 + \frac{ \lambda}{2} \partial_\mu \pi \partial^\mu \varphi \dot \varphi^2 \ , \eea where in the second line we dropped interactions that contribute only to higher-multiplicity or loop amplitudes. We now see there are two additional contact contributions to the amplitude \begin{align} {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma^{\mu 0}}) &= -i\frac{ \lambda}{M_{\rm pl}}(p_1^{\mu} E_2 E_3 + p_2^{\mu} E_1 E_3 + p_{3}^\mu E_1 E_2) \epsilon_{\mu 0} (p_4) \\ {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\pi}) &=\frac{ \lambda}{f_\pi^2}(p_1^{\mu} E_2 E_3 + p_2^{\mu} E_1 E_3 + p_{3}^\mu E_1 E_2) p_{4,\mu} \end{align} Finally, we need to determine the relationship between $\pi$ and $\gamma^{0\mu}$, which is fixed by the EFT of Inflation. To simplify the discussion, we will assume $c_s=1$ for $\pi$ so that $\pi$ and $\gamma$ propagate at the same speed. The kinetic term for $\pi$ introduces the metric coupling, \beq {\cal L} = \frac{1}{2} (g^{\mu \nu} \partial_\mu(t+\pi) \partial_\nu(t+\pi)-1) = \frac{1}{2} \partial_\mu \pi \partial^\mu \pi + \frac{1}{\Mpl} \gamma^{0 \mu}\partial_\mu \pi + {\rm total \, derivative} \ . \eeq In addition, under a diffeomorphism, $\pi(p_4) \to \pi(p_4) - f^0(p_4)$. 
Putting this all together, the amplitude shifts by \beq \delta {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma})+\delta {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma^{\mu 0}})+f_\pi^2 \delta{\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\pi}) \eeq under a diffeomorphism. Dropping terms proportional to $p_4^\mu$, we have \begin{align} \delta {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma}) &=-\frac{\lambda}{\Mpl} \left( E_1 E_2 p_3^{\mu}+ E_1 E_3 p_2^{\mu}+ E_2 E_3 p_1^{\mu} \, \right) E_4 f_\mu(p_4) \\ \delta {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\gamma^{\mu 0}}) &= \frac{ \lambda}{M_{\rm pl}}(p_1^{\mu} E_2 E_3 + p_2^{\mu} E_1 E_3 + p_{3}^\mu E_1 E_2) (p_{4,\mu} f_0(p_4) +E_4 f_\mu(p_4) )\\ f_\pi^2 \delta {\cal A}(p_{1,\varphi},p_{2,\varphi},p_{3,\varphi},p_{4,\pi})&= - \frac{ \lambda}{\Mpl}(p_1^{\mu} E_2 E_3 + p_2^{\mu} E_1 E_3 + p_{3}^\mu E_1 E_2) p_{4,\mu} f_0(p_4) \ . \end{align} With all three contributions, we see that $\delta {\cal A} = 0$ under diffeomorphisms, as required. The general constraints on couplings of gravity to a theory with spontaneously broken Lorentz invariance are beyond the scope of this work. However, we see that the most basic constraint is that Lorentz invariance is spontaneously broken (and then weakly gauged by gravity). At the same time, deriving additional constraints on the EFT from the consistency of graviton amplitudes is plausible but would likely require a more suitable treatment of the spinor-helicity variables. \subsection{Cosmological Correlators} The physics of inflation is encoded in cosmological correlators: equal-time in-in correlation functions calculated around the quasi-de Sitter background that describes inflation. The EFT of Inflation is particularly useful in characterizing non-Gaussian cosmological correlators. 
Concretely, for single-field inflation, the scalar metric fluctuation, $\zeta$, eats the Goldstone such that $\zeta \approx - H \pi$~\cite{Maldacena:2002vr} outside the horizon. With the additional assumption of time-independent couplings, the bispectrum, computed in the in-in formalism~\cite{Weinberg:2005vy,Weinberg:2006ac}, gives the leading non-Gaussian correlator \cite{Chen:2006nt,Cheung:2007sv,Pajer:2020wxk} \beq\label{eq:bispectrum} \langle \zeta(\k_1) \zeta(\k_2) \zeta(\k_3) \rangle' =\Delta_\zeta^{4} \frac{12 \frac{g_3 c_s^2}{f_\pi^4} e_{3}^{2}- \frac{\delta_c}{2 f_\pi^4} \left(-4 k_{T} e_{2} e_{3}-4 k_{T}^{2} e_{2}^{2}+11k_{T}^{3} e_{3}-3 k_{T}^{4} e_{2}+ k_{T}^{6}\right)}{e_{3}^{3} k_{T}^{3}} \ , \eeq where $\Delta_\zeta = H^2 / (\sqrt{2} f_\pi^2)$, we dropped terms that are slow-roll suppressed, and we defined the symmetric polynomials of the $k_i$: the total energy $k_T = k_1+k_2+k_3$, $e_2 = k_1 k_2 +k_1 k_3+k_2 k_3$ and $e_3 = k_1 k_2 k_3$. One of the key concepts of the cosmological bootstrap is the idea that the residue of the total energy pole (i.e.~the leading pole when we analytically continue to $k_T \to 0$) is the flat-space scattering amplitude~\cite{Maldacena:2011nz,Raju:2012zr}. Indeed, in this case we see that the leading behavior, $\Delta_\zeta^4 36 g_3 c_s^2 (k_1 k_2 k_3)/(k_1 k_2 k_3 k_T^3)$, contains the amplitude ${\cal A}_3 = -6 i g_3 E_1 E_2 E_3 $ after identifying $k_i \to E_i$ for a massless particle. However, we can tune $g_3\to 0$ such that the on-shell three-point amplitude vanishes, ${\cal A}_3 =0$. In fact, in flat space the field redefinition $\pi \to \pi - \delta_c \dot \pi \pi$ removes the $\delta_c \dot \pi \partial_\mu \pi \partial^\mu \pi$ interaction from the action. Yet, the bispectrum contains a number of terms that appear to follow from this interaction, including several with poles at $k_T \to 0$.
The resolution of this tension is that the field redefinition doesn't remove all the interactions in the inflationary background~\cite{Grall:2020ibl}. The action in an FLRW background is $S = \int d^4 x \, a^3(t) {\cal L}$, with the same Lagrangian \beq {\cal L} = \frac{1}{2} \partial_\mu \pi \partial^\mu \pi + \frac{\delta_c}{2} \dot \pi \partial_\mu \pi \partial^\mu \pi + g_3 \dot \pi^3 \ . \eeq Performing the field redefinition $\pi \to \pi - \delta_c \dot \pi \pi$ gives the shift of the action \beq {\cal L} \to {\cal L} - \delta_c \dot \pi \partial_\mu \pi \partial^\mu \pi - \delta_c \pi \partial_\mu \dot \pi \partial^\mu \pi + {\cal O}(\pi^4) \ . \eeq In flat space, these two terms are related by a total derivative. However, in our FLRW background the total derivative takes the form \beq \frac{d}{dt} (a^3 \pi \partial_\mu \pi \partial^\mu \pi ) =a^3 \left( \dot \pi \partial_\mu \pi \partial^\mu \pi + 2 \pi \partial_\mu \dot \pi \partial^\mu \pi + 3 H c_s^{-2}\pi \dot \pi^2 - H a^{-2} \pi \partial_i \pi \partial^i \pi\right) \ . \eeq As a result, after the field redefinition we have \beq {\cal L}\to \frac{1}{2} \partial_\mu \pi \partial^\mu \pi + \frac{\delta_c}{2}\left(3 H c_s^{-2}\pi \dot \pi^2 - H \pi a^{-2} \partial_i \pi \partial^i \pi\right) + g_3 \dot \pi^3 \ . \eeq Given that $\dot \pi\pi \to 0$ as $a(t) \to \infty$, we can still calculate the contribution to the scalar metric fluctuation with $\zeta = -H \pi$.
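As a sanity check, the FLRW total-derivative identity used above can be verified symbolically for $c_s = 1$. The following SymPy sketch assumes signature $(+,-,-,-)$, so that $\partial_\mu \pi \partial^\mu \pi = \dot\pi^2 - a^{-2}(\partial_i \pi)^2$; it is an illustration, not part of the original derivation.

```python
# SymPy check that d/dt(a^3 pi d_mu pi d^mu pi) expands into the four terms
# used in the text, with signature (+,-,-,-) and c_s = 1.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a', positive=True)(t)
pi = sp.Function('pi')(t, x, y, z)
H = sp.diff(a, t) / a
pidot = sp.diff(pi, t)

grad2 = sum(sp.diff(pi, xi)**2 for xi in (x, y, z))   # (grad pi)^2
dpi2 = pidot**2 - grad2 / a**2                        # d_mu pi d^mu pi
# d_mu pidot d^mu pi
dpidot_dpi = sp.diff(pidot, t) * pidot \
    - sum(sp.diff(pidot, xi) * sp.diff(pi, xi) for xi in (x, y, z)) / a**2

lhs = sp.diff(a**3 * pi * dpi2, t)
rhs = a**3 * (pidot * dpi2 + 2 * pi * dpidot_dpi
              + 3 * H * pi * pidot**2 - H * pi * grad2 / a**2)

residual = sp.simplify(sp.expand(lhs - rhs))
```

The residual vanishes identically, confirming that the $3H\pi\dot\pi^2$ and $-Ha^{-2}\pi\,\partial_i\pi\partial^i\pi$ terms are exactly the extra pieces generated by the scale factor.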
Using these three operators, we have three contributions to the bispectrum \cite{Cheung:2007sv} \begin{align} B_{\dot \pi^3} &= \Delta_\zeta^4 \frac{ g_3 \, c_s^2}{f_\pi^4} \frac{12 e_3^2}{e_3^3 k_T^3} \\ B_{\pi\dot \pi^2} &= \Delta_\zeta^4 \frac{\delta_c}{f_\pi^4} \frac{-6 e_3 k_T^3 + 3 k_T^2 e_2^2+ 3 k_T e_2 e_3 }{e_3^3 k_T^3}\\ B_{\pi \partial_i\pi \partial^i \pi} &= \Delta_\zeta^4 \frac{ \delta_c}{2 f_\pi^4} \frac{ e_3 k_T^3-2 k_T^2 e_2^2-2 k_T e_3 e_2 +3 k_T^4 e_2 - k_T^6}{e_3^3 k_T^3} \end{align} Combining these terms, we reproduce the bispectrum in Equation~(\ref{eq:bispectrum}). Interestingly, when we set $g_3 = 0$, the flat-space amplitude vanishes but there remains a total-energy pole in the correlator~\cite{Grall:2020ibl}, as arises in DBI Inflation, for example. However, this is a result of how we take the flat-space limit. Concretely, the interaction $\pi \dot \pi^2$ also contributes a non-zero amplitude in flat space. This interaction is present after our field redefinition in dS, but would vanish in the $H \to 0$ limit. However, this does not imply a suppression of the cosmological correlator, as the powers of $H$ are fixed by dimensional analysis. The end result is that there remains a total energy pole when $g_3 =0$, associated with the $\pi \dot \pi^2$ amplitude. \subsection{Multifield and Quasi-Single Field Inflation} The unique relationship between soft theorems and the predictions of inflation is specific to single-field inflation. Concretely, when the dynamics of inflation are controlled by a single degree of freedom, we can always choose a gauge where that degree of freedom is the scalar mode of the metric or, equivalently, the Goldstone boson $\pi$. However, in the presence of multiple degrees of freedom, the relationship between the observable scalar fluctuation and the Goldstone boson $\pi$ is no longer fixed by diffeomorphism invariance.
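The statement that the three contributions recombine into Equation~(\ref{eq:bispectrum}) is a short polynomial identity, which can be checked symbolically. The sketch below strips the common factor $\Delta_\zeta^4/(f_\pi^4 e_3^3 k_T^3)$ from every term:

```python
# SymPy check that B_{pidot^3} + B_{pi pidot^2} + B_{pi (d_i pi)^2} reproduce
# the numerator of the bispectrum, after stripping the common prefactor.
import sympy as sp

k1, k2, k3, g3, dc, cs = sp.symbols('k_1 k_2 k_3 g_3 delta_c c_s', positive=True)
kT = k1 + k2 + k3
e2 = k1*k2 + k1*k3 + k2*k3
e3 = k1*k2*k3

B_pidot3 = g3 * cs**2 * 12 * e3**2
B_pi_pidot2 = dc * (-6*e3*kT**3 + 3*kT**2*e2**2 + 3*kT*e2*e3)
B_gradient = dc/2 * (e3*kT**3 - 2*kT**2*e2**2 - 2*kT*e3*e2 + 3*kT**4*e2 - kT**6)

target = 12*g3*cs**2*e3**2 \
    - dc/2 * (-4*kT*e2*e3 - 4*kT**2*e2**2 + 11*kT**3*e3 - 3*kT**4*e2 + kT**6)

residual = sp.expand(B_pidot3 + B_pi_pidot2 + B_gradient - target)
```

The residual expands to zero, so the three operator contributions indeed sum to the quoted bispectrum.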
In the presence of multiple light scalar fields, we can pick a gauge where fluctuations along the inflationary trajectory are given by the Goldstone, $\pi$, and all additional fields, $\vec \phi$, are transverse. During inflation, $\pi$ is eaten by the metric and the $\vec \phi$ fields are effectively isocurvature modes (at least at linear order). However, the dynamics of the inflationary or post-inflationary universe can convert the isocurvature fluctuations into metric fluctuations. Such processes are local on the scales we will observe, and therefore $\zeta = F[\pi, \vec \phi]$ for some model-dependent function $F$~\cite{Senatore:2010wk}. One can easily arrange models where $\zeta$ is determined by a single transverse scalar, $\zeta \approx \kappa \phi$, so that the statistics of the metric fluctuations are not fixed by $\pi$ or the Ward identities discussed here. Quasi-single field inflation~\cite{Chen:2009zp,Baumann:2011nk,Chen:2012ge,Noumi:2012vr}, also known as cosmological collider physics~\cite{Arkani-Hamed:2015bza,Meerburg:2016zdz,Alexander:2019vtb,Kumar:2019ebj,Wang:2019gbi,Bodas:2020yho}, provides an interesting middle ground where additional fields are important but do not destroy the relationship between the metric fluctuations and the Goldstone boson of the EFT of Inflation. Additional massive fields, $m = {\cal O}(H)$, and particles with spin will decay outside the horizon. They do not survive until the end of inflation, and therefore reheating and the subsequent evolution are determined solely by $\pi$ via $\zeta \approx - H \pi$. These particles can still alter the statistics of $\pi$, and therefore $\zeta$, through interactions during inflation. These interactions are governed by the EFT of Inflation coupled to additional matter and are subject to the same mixed constraints discussed in Section~\ref{sec:mixed}.
\section{Outlook and Conclusions} \label{sec:conclusions} The structure of cosmological correlators is deeply tied to scattering amplitudes in flat space. At the level of a Lagrangian, this may seem like a vacuous statement, as both the amplitudes and correlators can be determined via the Feynman rules. Yet, amplitudes are known to display a wide range of constraints and simplifications that are hardly apparent from the Lagrangian. This same simplicity is hiding in cosmological correlators, which are known to contain the amplitude as the residue of the total energy pole. In recent years, this relationship has inspired the cosmological bootstrap program~\cite{Baumann:2022jpr}, which aims to understand the structure and consistency of our cosmological observables without directly appealing to the Lagrangian or the Feynman rules. A unique challenge of this program is that inflation is described by a non-relativistic EFT~\cite{Creminelli:2006xe,Cheung:2007st}. If amplitudes serve as a model for our understanding of cosmology, then it is noteworthy that we understand considerably less about amplitudes when time translation and/or boosts are broken. Our goal in this paper was to understand how the structure of the EFT of Inflation is reflected in the scattering amplitudes. As inflation spontaneously breaks Lorentz boosts, the constraints from Lorentz invariance manifest themselves in Ward identities that connect the soft Goldstone boson emission to the hard scattering process. The self-consistency of amplitudes is known to place non-trivial constraints on the space of EFTs and their Wilson coefficients~\cite{deRham:2022hpx}. Famously some higher-derivative couplings are forced to be positive~\cite{Pham:1985cr,Adams:2006sv}. In addition, consistency of the soft theorems may determine the full structure of the amplitude. 
Any hope of deriving similar results in the inflationary context relies on a deeper understanding of amplitudes with broken Lorentz boosts~\cite{Baumann:2015nta}. As we have seen, this is particularly challenging for couplings to the graviton, as it gauges the broken symmetries. Yet, even for scalars, our limited understanding of the analytic structure of the amplitude and the absence of crossing symmetry are clear obstacles to using flat-space techniques directly (see e.g.~\cite{Baumann:2019ghk,Grall:2021xxm,Creminelli:2022onn} for recent progress). There are several avenues for future work. A natural next step for this work would be to apply the soft theorem to derive recursion relations for on-shell amplitudes~\cite{Cheung:2015ota,Luo:2015tat,Bartsch:2022pyi}. Since the inflaton we consider is derivatively coupled, we need full control of the $\order(q)$ soft behavior in order to derive the recursion relations. The soft theorem we derived here only applies to the spatial components of the soft momentum $q$. The soft behavior of the energy component depends both on lower-order scattering amplitudes and on a new coupling constant $M_n^4$ for an $n$-particle amplitude. One needs to disentangle the two contributions in order to derive the recursion relations. As the Goldstone boson naturally mixes with the graviton, it would be interesting to extend the soft theorem to include gravitons. We expect that gravitons and Goldstone bosons are closely related, since gauge invariance is only restored after considering the combination of the two. This relationship could lead to a ``transmutation'' at the level of on-shell amplitudes~\cite{Cheung:2017ems}. It would also be interesting to understand the soft theorems of derivatively coupled Goldstone bosons from a geometric perspective~\cite{Cheung:2021yog,Cheung:2022vnd,Cohen:2022uuw}.
For cosmological applications, one would like to explore the relationship between soft theorems in cosmology and soft theorems in amplitudes~\cite{Mirbabayi:2016xvc}, particularly in the context of multi-field inflation. It would also be interesting to investigate the full soft theorem for loop amplitudes; extending the non-perturbative statements to the $c_s \le 1$ case would be important for a large class of inflation models. These results could sharpen our understanding of amplitudes in the context of inflation, with the hope of placing stringent constraints on candidate inflationary models~\cite{deRham:2022hpx}. Finally, while our work focuses on the EFT of Inflation, there is a zoo of EFTs for the spontaneous breaking of Lorentz boosts, such as the framid, phonons, and the Galileid~\cite{Nicolis:2015sra}. For theories with enhanced Adler zeros, the on-shell methods naturally unify different EFTs and place sharp constraints~\cite{Cheung:2016drk}. It would be fascinating to unify all boost-breaking EFTs from the on-shell perspective as well. \paragraph{Acknowledgements} \hskip 5pt We are grateful to Daniel Baumann, Tim Cohen, Clifford Cheung, Nathaniel Craig, Maria Derda, Andreas Helset, Austin Joyce, Aneesh Manohar, Julio Parra-Martinez, Riccardo Penco, Akhil Premkumar, and the participants of the Simons Symposium {\it Amplitudes meets Cosmology} for helpful discussions. CHS thanks Aneesh Manohar for teaching him nuclear physics. DG and YH are supported by the US~Department of Energy under Grant~\mbox{DE-SC0009919}. CHS is supported in part by the U.S.\ Department of Energy (DOE) under award number~DE-SC0009919 and the National Science Foundation (NSF) under Grant No. NSF PHY-1748958. This work was completed at the Aspen Center for Physics, which is supported by the NSF grant PHY-1607611. \clearpage \phantomsection \addcontentsline{toc}{section}{References} \bibliographystyle{utphys} \bibliography{Wardrefs}
Title: Primary Cosmic Rays Energy Spectrum and Mean Mass Composition by the Data of the TAIGA Astrophysical Complex
Abstract: The corrected dependence of the mean depth of the EAS maximum $X_{max}$ on the energy was obtained from the data of the Tunka-133 array for 7 years and the TAIGA-HiSCORE array for 2 years. The parameter $\langle\ln A\rangle$, characterizing the mean mass composition, was derived from these results. The differential energy spectrum of primary cosmic rays in the energy range of $2\cdot 10^{14}$ - $2\cdot 10^{16}$\,eV was reconstructed using the new parameter $Q_{100}$, the Cherenkov light flux at a core distance of 100 m.
https://export.arxiv.org/pdf/2208.01689
\begin{center}{\Large \textbf{Primary Cosmic Ray Energy Spectrum and Mean Mass Composition Using Data from the TAIGA Astrophysical Complex\\ }}\end{center} \begin{center} V. Prosin\textsuperscript{1$\star$} I. Astapov\textsuperscript{6} P. Bezyazeekov\textsuperscript{2} E. Bonvech\textsuperscript{2} A. Borodin\textsuperscript{7} A. Bulan\textsuperscript{1} A. Chiavassa\textsuperscript{4} D. Chernov\textsuperscript{1} A. Dyachok\textsuperscript{2} A. Gafarov\textsuperscript{2} A. Garmash\textsuperscript{11,9} V. Grebenyuk\textsuperscript{7,8} O. Gress\textsuperscript{2} E. Gress\textsuperscript{2} T. Gress\textsuperscript{2} A. Grinyuk\textsuperscript{7} O. Grishin\textsuperscript{2} A. D. Ivanova\textsuperscript{2} A. L. Ivanova\textsuperscript{9,2} N. Kalmykov\textsuperscript{1} V. Kindin\textsuperscript{6} S. Kiryuhin\textsuperscript{2} R. Kokoulin\textsuperscript{6} K. Komponiets\textsuperscript{6} E. Korosteleva\textsuperscript{1} V. Kozhin\textsuperscript{1} E. Kravchenko\textsuperscript{9,11} A. Kryukov\textsuperscript{1} L. Kuzmichev\textsuperscript{1} A. Lagutin\textsuperscript{10} M. Lavrova\textsuperscript{7} Y. Lemeshev\textsuperscript{2} B. Lubsandorzhiev\textsuperscript{3} N. Lubsandorzhiev\textsuperscript{1} A. Lukanov\textsuperscript{3} D. Lukyantsev\textsuperscript{2} S. Malakhov\textsuperscript{2} R. Mirgazov\textsuperscript{2} R. Monkhoev\textsuperscript{2} E. Okuneva\textsuperscript{6} E. Osipova\textsuperscript{1} A. Pakhorukov\textsuperscript{2} A. Pan\textsuperscript{7} L. Panasenko\textsuperscript{11} L. Pankov\textsuperscript{2} A. D. Panov\textsuperscript{1} A. Petrukhin\textsuperscript{6} I. Poddubny\textsuperscript{2} D. Podgrudkov\textsuperscript{1} V. Poleschuk\textsuperscript{2} V. Ponomareva\textsuperscript{2} E. Popova\textsuperscript{1} E. Postnikov\textsuperscript{1} V. Ptuskin\textsuperscript{5} A. Pushnin\textsuperscript{2} R. Raikin\textsuperscript{10} A. Razumov\textsuperscript{1} G. Rubtsov\textsuperscript{3} E. 
Ryabov\textsuperscript{2} Y. Sagan\textsuperscript{7,8} V. Samoliga\textsuperscript{2} A. Silaev\textsuperscript{1} A. Silaev(junior)\textsuperscript{1} A. Sidorenkov\textsuperscript{3} A. Skurikhin\textsuperscript{1} A. Sokolov\textsuperscript{9,11} L. Sveshnikova\textsuperscript{1} V. Tabolenko\textsuperscript{2} A. Tanaev\textsuperscript{2} B. Tarashchansky\textsuperscript{2} M. Y. Ternovoy\textsuperscript{2} L. Tkachev\textsuperscript{7} R. Togoo\textsuperscript{12} N. Ushakov\textsuperscript{3} A. Vaidyanathan\textsuperscript{11} P. Volchugov\textsuperscript{1} N. Volkov\textsuperscript{10} D. Voronin\textsuperscript{3} A. Zagorodnikov\textsuperscript{2} A. Zhaglova\textsuperscript{2} D. Zhurov\textsuperscript{2,13} I. Yashin\textsuperscript{6} \end{center} \begin{center} {\bf 1} Skobeltsyn Institute of Nuclear Physics, Moscow State University, Moscow, Russia \\ {\bf 2} Applied Physics Institute of Irkutsk State University, Irkutsk, Russia \\ {\bf 3} Institute for Nuclear Research of the RAS, Troitsk, Moscow, Russia \\ {\bf 4} Dipartimento di Fisica Generale, Universit\`a di Torino and INFN, Turin, Italy \\ {\bf 5} Pushkov Institute of Terrestrial Magnetism, Ionosphere and Radio Wave Propagation of the RAS, Troitsk, Moscow,
Russia \\ {\bf 6} National Research Nuclear University MEPhI, Moscow, Russia \\ {\bf 7} Joint Institute for Nuclear Research, Dubna, Moscow Region, Russia \\ {\bf 8} DUBNA University, Dubna, Moscow Region, Russia \\ {\bf 9} Budker Institute of Nuclear Physics SB RAS, Novosibirsk, Russia \\ {\bf 10} Altai State University, Barnaul, Russia \\ {\bf 11} Novosibirsk State University, Novosibirsk, Russia \\ {\bf 12} Institute of Physics and Technology Mongolian Academy of Sciences, Ulaanbaatar, Mongolia \\ {\bf 13} Irkutsk National Research Technical University, Irkutsk, Russia \\ * v-prosin@yandex.ru \end{center} \begin{center} \today \end{center} \definecolor{palegray}{gray}{0.95} \begin{center} \colorbox{palegray}{ \begin{tabular}{rr} \begin{minipage}{0.1\textwidth} \includegraphics[width=30mm]{TIFR.eps} \end{minipage} & \begin{minipage}{0.85\textwidth} \begin{center} {\it 21st International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2022)}\\ {\it Online, 23-27 May 2022}\\ \doi{10.21468/SciPostPhysProc.?}\\ \end{center} \end{minipage} \end{tabular} } \end{center} \section*{Abstract} {\bf The corrected dependence of the mean depth of the EAS maximum $X_{max}$ on the energy was obtained from the data of the Tunka-133 array for 7 years and the TAIGA-HiSCORE array for 2 years. The parameter $\langle\ln A\rangle$, characterizing the mean mass composition, was derived from these results. The differential energy spectrum of primary cosmic rays in the energy range of $2\cdot 10^{14}$ – $2\cdot 10^{16}$\,eV was reconstructed using the new parameter $Q_{100}$, the Cherenkov light flux at a core distance of 100 m.
Changing the energy reconstruction parameter for TAIGA-HiSCORE from $Q_{200}$ to $Q_{100}$ lowers the energy threshold of the spectrum to about 200 TeV.} \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} \label{sec:intro} The energy spectrum and mass composition of primary cosmic rays are the main characteristics that can be obtained by studying extensive air showers (EAS). The total flux of Cherenkov light is proportional to the total energy scattered by the shower in the atmosphere. The lateral distribution function (LDF) of the EAS Cherenkov light reflects the position of the shower development maximum, which in turn characterizes the mass of the primary particle. The main aim of the present work is to achieve a lower energy threshold with the new method of energy reconstruction described below. \section{Brief description of the arrays} Several arrays that detect the EAS Cherenkov light were successively constructed in the Tunka Valley. The most productive arrays were Tunka-25 (2000 - 2005) \cite{t25}, which consisted of 25 detectors with a sensitive area of 0.1 $m^2$ each, covering a total area of approximately 0.1 $km^2$, and Tunka-133 (2009 - 2017) \cite{t133}, which finally consisted of 175 detectors with a sensitive area of 0.03 $m^2$ each, covering an area of approximately 3 $km^2$. The experimental data were accumulated over 350 clear moonless nights. The total data acquisition time was 2175 h. Their modern successor is the TAIGA-HiSCORE \cite{his} array, a part of the TAIGA experimental complex \cite{taiga}. A single TAIGA-HiSCORE station has a sensitive area of 0.5 $m^2$. Every station has its own trigger. Station hits are merged into an EAS event when $\geq 3$ stations trigger within a time gate of 3 $\mu$s.
This work presents the TAIGA-HiSCORE data obtained using 67 stations (the first two clusters) during 135 clear moonless nights in the seasons of 2019 - 2020 and 2020 - 2021. The total data acquisition time was 327 h. \section{Reconstruction of the EAS parameters} The reconstruction of the EAS parameters for the Tunka-133 array is described in \cite{t133}. The same algorithms and fitting functions are used for the TAIGA-HiSCORE data processing \cite{his}. We use the ratio $P = Q(80)/Q(200)$ as a quantitative parameter of the LDF steepness. Here, $Q(R)$ is the Cherenkov light flux at a core distance $R$, in meters. One has to make sure that the light flux is measured at core distances $R_c \geq 200$\,m and $R_c \geq 80$\,m. The first of these conditions restricts the events to primary energies $E_0 \geq 10^{16}$\,eV for Tunka-133 and $E_0 \geq 10^{15}$\,eV for TAIGA-HiSCORE. CORSIKA simulations \cite{xms} confirmed that the Cherenkov light LDF steepness is determined solely by the thickness of the atmosphere between the array and the depth of the EAS maximum ($\Delta X_{max} = X_0/\cos\theta - X_{max}$). Here, $X_0$ is the total depth of the atmosphere. The calculated connection between $P$ and $\Delta X_{max}$, within the limited range of the parameter $P$ from 2.5 to 9, can be fitted with the following expression \cite{xms}: \begin{equation} \Delta X_{max}=\left\{ \begin{array}{lc} \!\ 929 - 103\cdot P, & \mbox{if}\hspace{5mm} P \leq 3.9\\ \!\ 882 - 91\cdot P, & \mbox{if}\hspace{5mm} P > 3.9\\ \end{array} \right. \end{equation} The standard deviation of the simulated points from the fitting line in this range is approximately 15 $g/cm^2$ \cite{xms}. \section{Mean experimental depth of the EAS maximum} The above-described parameter of the LDF steepness $P$ was applied to analyze the data of both the Tunka-133 and TAIGA-HiSCORE arrays.
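As an illustration, the piecewise fit of Eq. (1) can be coded directly; the validity range $2.5 \leq P \leq 9$ and the branch point $P = 3.9$ are taken from the text, everything else is a sketch:

```python
def delta_x_max(P: float) -> float:
    """Eq. (1): distance (g/cm^2) from the array to the EAS maximum,
    fitted for LDF steepness 2.5 <= P <= 9."""
    if not 2.5 <= P <= 9.0:
        raise ValueError("fit is only valid for 2.5 <= P <= 9")
    return 929.0 - 103.0 * P if P <= 3.9 else 882.0 - 91.0 * P
```

At the branch point $P = 3.9$ the two pieces give 527.3 and 527.1 $g/cm^2$, so the fit is continuous to well within its quoted $\sim$15 $g/cm^2$ scatter.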
The depth of the maximum is found from the formula: \begin{equation} X_{max} = 965/\cos\theta - \Delta X_{max} \end{equation} where 965 $g/cm^2$ is the total depth of the atmosphere at the location of the arrays in the Tunka Valley. To obtain undistorted estimates of the depth of the maximum, showers are selected with zenith angles $\theta \leq 30^{\circ}$ and energies above $10^{16}$\,eV for the Tunka-133 array and above $10^{15}$\,eV for TAIGA-HiSCORE. We have 69000 events for 7 years of operation of Tunka-133 and 380000 events for 2 years of operation of TAIGA-HiSCORE. The experimental results are shown in Fig.\,1. The data of both arrays, despite the difference in their geometry, agree well with each other, covering a wide energy range from $10^{15}$ to $3\cdot 10^{17}$\,eV. Our experimental data are compared with the direct measurements of the depth of the maximum obtained by observing the fluorescent EAS light at the Pierre Auger Observatory (PAO) \cite{pao} and the Telescope Array (TA) \cite{ta}. A close agreement of our data with the PAO data is observed at an energy of $\sim 3\cdot 10^{17}$\,eV. All the experimental results are compared with theoretical curves calculated using the QGSJET-II-04 model \cite{ost} for primary protons and iron nuclei. Fig.\,2 shows the results of recalculation from the mean depth of the maximum to the parameter $\langle\ln A\rangle$, the mean logarithm of the atomic mass number, using the QGSJET-II-04 model. Qualitatively, the behavior of the mean mass composition repeats that published in our previous studies \cite{xmold}: it becomes heavier in the energy range of $3\cdot 10^{15}$ - $3\cdot 10^{16}$\,eV and lighter with a further increase in energy. However, the mean composition in the entire energy range under consideration is estimated as mostly light. \section{Cherenkov light flux at a core distance 100 m as a new estimator of energy} The TAIGA-HiSCORE array structure is a square grid of stations with a spacing of about 100 m.
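Eq. (2) then converts $\Delta X_{max}$ and the zenith angle to a depth of maximum. A minimal sketch, assuming the slant atmospheric depth grows as $1/\cos\theta$ from the 965 $g/cm^2$ vertical depth quoted above (the example inputs in the docstring are arbitrary):

```python
import math

def x_max(delta_x_max: float, theta_deg: float, x0: float = 965.0) -> float:
    """Eq. (2): depth of the EAS maximum (g/cm^2), given Delta X_max from
    Eq. (1) and the zenith angle theta (showers with theta <= 30 deg are used)."""
    return x0 / math.cos(math.radians(theta_deg)) - delta_x_max
```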
Thus the minimal core distance for which the light flux can be reconstructed for almost all the events is about 100 m. Our previous parameter for the energy reconstruction was the light flux at a core distance of 200 m ($Q_{200}$) \cite{his}. When the EAS zenith angle $\theta$ changes from $0^{\circ}$ to $45^{\circ}$, $Q_{200}$ changes by less than 10\%. Therefore, it was assumed in \cite{his} that $Q_{200}$ does not depend on $\theta$ for a fixed particle energy. It was found from the new CORSIKA simulation that the light flux $Q_{100}$ depends on the zenith angle $\theta$ significantly more strongly, changing by about a factor of 2.5 for the same change in $\theta$. So one first needs to recalculate the measured light flux to $\theta = 0^{\circ}$ using the new CORSIKA results: \begin{equation} \log_{10}(Q_{100}(0)) = \log_{10}(Q_{100}(\theta)) + (\sec \theta - 1)\cdot(1.25 - 0.083\cdot \log_{10}(Q_{100}(\theta))) \end{equation} Then $Q_{100}(0)$ can be recalculated to the primary energy $E_0$ using the result of the new CORSIKA simulation: \begin{equation} \log_{10}(E_0/GeV) = 0.88\cdot \log_{10}(Q_{100}(0)) + 5.14 \end{equation} \section{Experimental energy spectrum by the data of TAIGA-HiSCORE} The experimental energy estimation differs from that described in the previous section because the real atmospheric light absorption varies from night to night, unlike the standard absorption assumed in the CORSIKA simulations. So we first obtain the integral energy spectrum for each single night using expression (4). Then we normalize this spectrum to the reference energy spectrum measured by the QUEST experiment \cite{kor}. The mean difference of the normalization constant from that in expression (4) is 0.03. The differential energy spectrum obtained from the data of the TAIGA-HiSCORE array is shown in Fig.\,3. The registration efficiency at the first (leftmost) point, starting from the energy $2\cdot 10^{14}$\,eV, is more than 95\%.
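Expressions (3) and (4) chain together into a single energy estimator. A sketch, with $Q_{100}$ in the flux units used in the text and $E_0$ in GeV:

```python
import math

def primary_energy_gev(q100_theta: float, theta_deg: float) -> float:
    """Recalculate Q100 measured at zenith angle theta to theta = 0 (Eq. 3),
    then convert to the primary energy E0 in GeV (Eq. 4)."""
    lg_q = math.log10(q100_theta)
    sec_theta = 1.0 / math.cos(math.radians(theta_deg))
    lg_q0 = lg_q + (sec_theta - 1.0) * (1.25 - 0.083 * lg_q)  # Eq. (3)
    return 10.0 ** (0.88 * lg_q0 + 5.14)                      # Eq. (4)
```

At $\theta = 0$ the zenith correction vanishes and Eq. (4) applies directly; at larger $\theta$ the same measured flux corresponds to a higher primary energy, reflecting the stronger zenith-angle dependence of $Q_{100}$.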
Points at lower energies, obtained from events with lower efficiency, are removed. The low-energy points of our spectrum are in good agreement with direct balloon \cite{atic}, satellite \cite{nucl} and mountain-based \cite{hawc} measurements. \section{Conclusions} The new estimates of $X_{max}$, derived from the steepness parameter $P = Q(80)/Q(200)$, provide good agreement both between the results of our arrays, Tunka-133 and TAIGA-HiSCORE, and with the direct measurements of $X_{max}$ at the Pierre Auger Observatory (PAO). The primary composition derived from $X_{max}$ is lighter than it appeared in our previous publications. It is mostly light (p+He) over the whole energy range. The observed increase of $\langle \ln A\rangle$ in the energy range $10^{16}$ - $10^{17}$\,eV demands a new theoretical explanation. Changing the energy reconstruction parameter for TAIGA-HiSCORE from $Q_{200}$ to $Q_{100}$ lowers the energy threshold of the spectrum to about 200 TeV. The all-particle energy spectrum over the energy range 200 TeV - 3 PeV follows a pure power law with index $2.71\pm 0.01$. \section*{Acknowledgements} The work was performed at the UNU “Astrophysical Complex of MSU-ISU” (agreement EB 075-15-2021-675). The work is supported by RFBR (grants 19-52-44002, 19-32-60003), the RSF (grants 19-72-20067 (Section 2)), the Russian Federation Ministry of Science and High Education (projects FZZE-2020-0017, FZZE-2020-0024, and FSUS-2020-0039). \nolinenumbers
Title: The supernova remnant SN 1006 as a Galactic particle accelerator
Abstract: The origin of cosmic rays is a pivotal open issue of high-energy astrophysics. Supernova remnants are strong candidates to be the Galactic factory of cosmic rays, their blast waves being powerful particle accelerators. However, supernova remnants can power the observed flux of cosmic rays only if they transfer a significant fraction of their kinetic energy to the accelerated particles, but conclusive evidence for such efficient acceleration is still lacking. In this scenario, the shock energy channeled to cosmic rays should induce a higher post-shock density than that predicted by standard shock conditions. Here we show this effect, and probe its dependence on the orientation of the ambient magnetic field, by analyzing deep X-ray observations of the Galactic remnant of SN 1006. By comparing our results with state-of-the-art models, we conclude that SN 1006 is an efficient source of cosmic rays and obtain an observational support for the quasi-parallel acceleration mechanism.
https://export.arxiv.org/pdf/2208.14491
\date{\today} \begin{affiliations} \item Universit\`a degli Studi di Palermo, Dipartimento di Fisica e Chimica E. Segr\`e, Piazza del Parlamento 1, 90134 Palermo, Italy \item INAF-Osservatorio Astronomico di Palermo, Piazza del Parlamento 1, 90134 Palermo, Italy \item Department of Astronomy and Astrophysics \& Enrico Fermi Institute, The University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637, USA \item Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France \item Anton Pannekoek Institute, GRAPPA, University of Amsterdam, PO Box 94249, 1090 GE Amsterdam, The Netherlands \item GRAPPA, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands \end{affiliations} \section*{Introduction} Cosmic rays (CRs) are extremely energetic particles, mainly composed of protons. It is widely accepted that the bulk of cosmic rays (below the knee at approximately $3\times10^{15}$ eV) originates within our Galaxy, playing a significant role in its energy budget\cite{sbd93}. Several convincing indications point towards supernova remnants (SNRs) as their most likely source\cite{bla13}. Shock waves in SNRs can accelerate particles through the first-order Fermi mechanism (or diffusive shock acceleration, DSA). The capability of SNRs to accelerate electrons is attested by the ubiquitous synchrotron radio emission detected at their shock fronts\cite{gre19}, associated with approximately $1-10$ GeV electrons. X-ray synchrotron emission from ultrarelativistic (about $10$ TeV) electrons was first detected in the remnant of the supernova observed in 1006 AD\cite{kpg95} (hereafter SN~1006) and, afterward, in other young SNRs\cite{vin12}. Accelerated protons in SNRs might provide $\gamma$-ray emission through interaction with protons in their environment, leading to $\pi^0$ production and subsequent decay.
TeV $\gamma$-ray emission from a handful of shell-like SNRs has been observed\cite{hes18}, while GeV emission has been detected in about $30$ remnants\cite{aaa16}. In SN~1006, $\gamma$-ray emission has been detected in both the TeV and GeV bands, with the \emph{HESS} observatory\cite{aaa10} and \emph{Fermi} telescope\cite{cla17}, respectively, but its nature is uncertain, since $\gamma$-ray emission can also be produced by inverse Compton from ultrarelativistic electrons (leptonic scenario). Indications for a hadronic origin of the $\gamma$-rays have been obtained in a few cases\cite{tgc10,mc12,aaa13,sle14}, confirming that SNR shocks can indeed accelerate hadrons. The identification of SNRs as the main Galactic factory of CRs is still based on plausibility arguments and many important open issues need to be addressed\cite{geg19}. To prove that SNRs are CR factories, it is necessary to show that they indeed supply the power needed to sustain the Galactic CRs (of the order of $10^{41}$ erg s$^{-1}$). Considering the rate of supernovae in our Galaxy (about $2$ per century), SNRs should transfer about $10-20\%$ of their characteristic kinetic energy (approximately $10^{51}$ erg) into CRs\cite{hil05}. The loss of such a large fraction of the ram energy is expected to alter the shock dynamics with respect to the adiabatic case. Nonlinear DSA predicts the formation of a shock precursor (travelling ``ahead'' of the main shock) that modifies the shock structure. This shock modification should result in an increase of the total shock compression ratio, $r_t$, and a decrease of the post-shock temperature with respect to the Rankine-Hugoniot (adiabatic) values\cite{dru83,deb00,bla02,vyh10}.
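The quoted $10-20\%$ efficiency follows from simple bookkeeping. A back-of-envelope check using only the numbers given in the text:

```python
# Required CR acceleration efficiency: Galactic CR power (~1e41 erg/s) divided
# by the mechanical power of ~2 supernovae per century at ~1e51 erg each.
YEAR_IN_S = 3.156e7                  # seconds per year
cr_power = 1.0e41                    # Galactic CR power, erg/s
sn_rate = 2.0 / (100.0 * YEAR_IN_S)  # supernova rate, s^-1
e_kinetic = 1.0e51                   # kinetic energy per supernova, erg

sn_power = sn_rate * e_kinetic       # mechanical power supplied by SNe, erg/s
efficiency = cr_power / sn_power     # fraction that must go into CRs (~0.16)
```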
Recent self-consistent hybrid (kinetic ions and fluid electrons) simulations show that efficient acceleration of CRs also leads to the formation of a shock postcursor, where non-linear magnetic fluctuations and CRs drift away from the shock front, moving downstream\cite{haggerty+20,caprioli+20}. This postcursor is quantitatively more important for shock modification than the precursor; it acts as an additional energy sink, increasing $r_t$ even at moderate acceleration efficiency: when the CR pressure is about $5-10\%$ of the bulk ram pressure, the total compression ratio ranges approximately between $r_t=5$ and $r_t=7$. SN~1006 is an ideal target to reveal shock modification. Thanks to its height above the Galactic plane (approximately $600$ pc), the remnant evolves in a fairly uniform environment (in terms of density and magnetic field). Moreover, SN~1006 shows regions with prominent particle acceleration, where shock modification might be present, together with regions without indications of particle acceleration, where we do not expect shock modification. In particular, the bilateral radio, X-ray and $\gamma-$ray morphology of SN 1006 clearly reveals nonthermal emission in the northeastern and southwestern limbs. X-ray synchrotron emission in the nonthermal limbs, seen notably above 2.5 keV, identifies sites of electron acceleration to TeV energies, whereas the lack of nonthermal emission in the southeastern and northwestern limbs marks regions without considerable particle acceleration, where the X-ray emission is mainly thermal (see Fig. \ref{fig:Xradio}, panel a). It has been shown that the efficiency of diffusive shock acceleration increases with decreasing angle $\theta$ between the ambient magnetic field and the shock velocity\cite{cs14}. 
There is strong evidence that the bilateral morphology of SN~1006 can be explained in the framework of this quasi-parallel scenario, with the ambient magnetic field, \textbf{B$_0$}, oriented approximately in the southwest-northeast direction\cite{rbd04,bom11,rhm13}. If these nonthermal limbs are also sites of efficient hadron acceleration, the signature of shock modification is expected to be stored in the X-ray emission of the shocked interstellar medium\cite{deb00} (ISM). The amount of shock modification, namely the increase in the post-shock density, should rise near the nonthermal limbs (i.e., in quasi-parallel conditions), and be smaller in thermal regions, where the shock velocity and \textbf{B$_0$} are almost perpendicular. Shock modification is expected to reshape the remnant structure by reducing the distance between the forward shock and the outer border of the expanding ejecta (i.e., the ``contact discontinuity''). However, this effect (observed in SN~1006\cite{chr08,mbi09}) can also be explained as a natural result of ejecta clumpiness\cite{obm12}, without the need to invoke shock modification. A first attempt to observe azimuthal variations in shock compressibility was performed by analyzing the \emph{XMM-Newton} X-ray observations of the southeastern limb of SN~1006\cite{mbd12}. In this region, the X-ray emission is mainly thermal, but contaminated by the contribution of shocked ejecta. Nevertheless, a spatially resolved spectral analysis revealed a faint X-ray emitting component associated with the shocked ISM. The density of this component can be inferred from its emission measure ($EM$, defined as the integral of the square of the thermal particle density over the volume of the X-ray emitting plasma). The post-shock density shows hints of azimuthal modulation, with a minimum around $\theta=90^\circ$ (i.e., quasi-perpendicular conditions) and a rapid increase toward the nonthermal limbs, suggestive of a higher shock compressibility. 
This analysis was restricted to a small portion of the shell (approximately $\theta=90^\circ\pm35^\circ$), without including regions with quasi-parallel conditions, and was limited by the spatial resolution of the \emph{XMM-Newton} telescope (comparable to the distance between the shock and the ejecta). It was thus not possible to isolate regions with ISM emission only, and all the spectra were contaminated by the ejecta contribution, requiring additional assumptions (namely, pressure equilibrium between ejecta and ISM) to estimate the volume occupied by the ISM and hence its density. Here, we identify shock modification in SN 1006 by studying the azimuthal profile of the post-shock density, and overcome the aforementioned limitations by combining deep X-ray observations performed with the \emph{XMM-Newton} (\url{https://www.cosmos.esa.int/web/xmm-newton}) and \emph{Chandra} (\url{https://chandra.harvard.edu/}) telescopes, thus benefiting from both spectroscopic sensitivity and high spatial resolution in the post-shock region. \section*{Results} \emph{Spatially resolved spectral analysis} \noindent Panel a of Fig. \ref{fig:Xradio} shows the soft ($0.5-1$ keV, mainly thermal emission) and hard ($2.5-7$ keV, nonthermal emission) X-ray maps of SN~1006, together with the HI distribution in the ambient medium\cite{mad14}. Panel b shows the Balmer H$_{\alpha}$ image\cite{wwr14} and the radio continuum map. The ambient magnetic-field orientation is superimposed on the maps, with the angle $\theta=0^\circ$ assumed at the center of the northeastern limb. The position of the forward shock is indicated by its Balmer optical emission (panels b and e of Fig.~\ref{fig:Xradio}) in the thermal limbs, and by the hard X-ray synchrotron filaments in the nonthermal limbs (panel a of Fig.~\ref{fig:Xradio}). The spatial distribution of the ambient atomic hydrogen around SN~1006 is quite homogeneous\cite{dgg02,mad14} and the only structures detected are localized in the western and northwestern rims (Fig. 
\ref{fig:Xradio}, panel a). None are observed in the regions considered for our analysis, where the ambient medium appears uniform. While we cannot exclude small fluctuations in the ambient density, they are undetected with current radio data. We consider two additional pieces of evidence supporting a uniform upstream medium in this sector of the remnant. The first is the fairly circular shape of the shock front from the southern to the northeastern rim: the shock velocity (and hence its radius) depends on the ambient density, and the almost constant shock radius shown in Fig. \ref{fig:Xradio} indicates a uniform environment. The second is the faint and fairly uniform surface brightness of the H$_\alpha$ emission (whose intensity depends on the local density) in this sector of the shell (see panel e in Fig. \ref{fig:Xradio}). These two conditions support the scenario of a tenuous and homogeneous ambient medium. Therefore, we ascribe azimuthal modulations in the post-shock density only to variations in $r_t$. We define nine narrow spatial regions immediately behind the forward shock for X-ray spectral studies, excluding the regions contaminated by the ejecta (see lower panels of Fig. \ref{fig:Xradio}). The superior spatial resolution of \emph{Chandra} allows us to identify the outermost ejecta, whose projected position in the plane of the sky is marked by ripples of thermal emission. Abrupt variations in the X-ray surface brightness of the thermal emission reveal the position of the contact discontinuity. In particular, the X-ray surface brightness of the outermost ejecta is more than 10 times larger than that of the background ($S_{out}$, i.e., the surface brightness outside the shell), while the X-ray surface brightness within regions in the southeastern limb (i.e., regions $-3,~-2,~-1,~0,~+1$) is only $2-5\times S_{out}$. 
The inner border of these regions corresponds to a surface brightness contour level at $6\times S_{out}$, thus marking the sharp separation between ejecta and ISM emission. In regions $2,~3,~4,~5$, the contribution of synchrotron emission to the X-ray surface brightness dominates. Therefore, in this part of the remnant we selected very narrow regions, immediately behind the shock front, by carefully excluding visible ejecta clumps. We investigate a large portion of the shell ($\theta=0^\circ-122^\circ$), including regions with quasi-parallel conditions, where shock modification is expected to be at its maximum. Panel a of Fig. \ref{fig:spec} shows the X-ray spectrum extracted from region $0$, revealing the shocked ambient medium at approximately $\theta=90^\circ$, where we do not observe prominent particle acceleration and the shock is adiabatic ($r_t=4$). The spectrum can be modelled by an isothermal, optically thin plasma in non-equilibrium of ionization (parametrized by the ionization parameter $\tau$, defined as the time integral of the density computed from the time of the shock impact). The post-shock density of the plasma is $n=0.164^{+0.014}_{-0.016}$ cm$^{-3}$, in good agreement with previous estimates in this part of the shell\cite{mbd12} (as are the electron temperature, approximately $kT=1.35$~keV, and $\tau=4.8^{+0.9}_{-0.7}\times 10^8$~s~cm$^{-3}$), but our values are obtained without any ad-hoc assumptions. The interstellar absorption is expected to be uniform in the portion of the shell analyzed, and we fix the absorbing column density to $N_H=7 \times 10^{20}~ \text{cm}^{-2}$, in agreement with radio observations\cite{dgg02}. We detect the shocked ISM emission in all regions, with a statistical significance $>99\%$. Panel b of Fig. \ref{fig:spec} shows that the ISM contribution is clearly visible at low energies even in the spectrum of region 5 (at approximately $\theta=0^\circ$), where the X-ray emission is dominated by synchrotron radiation. 
We found that the electron temperature in the shocked ISM is consistent with being constant over all the explored regions (though with large uncertainties), and letting it free to vary does not improve the quality of the fits. We therefore fixed it to the best-fit value obtained in region 0 ($kT=1.35$~keV), where we obtain the most precise estimate (see Methods subsection X-ray data analysis). In the case of shock modification, we expect the ion temperature to be lower in regions with efficient hadron acceleration. This should also reduce the electron temperature, but this effect is predicted to be quite small\cite{pes09}. Moreover, this reduction may be compensated by the fact that strong magnetic turbulence in quasi-parallel regions should favor electron heating (by heat exchange with ions). We expect much larger variations of the shock compression ratio, so we trace the shock modification by focusing on the post-shock density. We estimate the density from the best-fit value of the $EM$ in each spectral region (by numerically computing the volume of the emitting plasma, see Methods subsection X-ray data analysis). \noindent{\emph{Azimuthal profile of the post-shock density}} \newline Figure \ref{fig:profile} (panel a) shows the azimuthal modulation of the post-shock density of the ISM. We verified that the estimates of the density are not affected by contamination from the ejecta. Ejecta can be easily identified because of their higher surface brightness with respect to the ISM. If we extract the spectra from slightly modified extraction regions, whose inner boundaries include ejecta knots, the plasma density increases artificially (and the quality of the fit decreases). 
For example, by moving the inner boundary of region 0 (region 3) inwards, thus enlarging its projected area by less than $10\%$ ($<40\%$ for region 3), we find that the plasma density increases from $n=0.164_{-0.016}^{+0.014}$ cm$^{-3}$ to $n=0.189\pm0.011$ cm$^{-3}$ in region 0 (and from $n=0.21^{+0.05}_{-0.04}$ cm$^{-3}$ to $n=0.25\pm0.03$ cm$^{-3}$ in region 3). Conversely, by reducing the size of the extraction regions, the plasma density stays constant, thus indicating that the medium within each region is fairly uniform (e.g., by reducing the projected area of region 0 and region 3 by about $25\%$, we find $n=0.158^{+0.015}_{-0.021}$ cm$^{-3}$ in region 0 and $n=0.22^{+0.06}_{-0.05}$ cm$^{-3}$ in region 3). We conclude that Fig. \ref{fig:profile} traces the azimuthal density modulation of the shocked ISM. As explained above, it is hard to explain this density modulation as a result of inhomogeneities in the upstream medium. Such inhomogeneities, if present, would affect the shock velocity ($v_s$, which is proportional to the inverse of the square root of the ambient density), inducing a $\Delta v_s$ of approximately $1200$ km s$^{-1}$ between region 0 and region 5 (for a shock velocity in region 5 of\cite{wwr14} $5000$ km s$^{-1}$). This would produce a difference in the shock radius of about $10^{18}$ cm (corresponding to $0.5'$ at 2.2 kpc\cite{wgl03}) between region 0 and region 5 in only 250 yr. This is at odds with observations, which show a very circular shape of the shock front (see Fig. \ref{fig:Xradio}, panel e), whose radius varies by less than $0.15'$ between region 0 and region 5 (see Methods subsection X-ray data analysis for details). We therefore consider the density modulation as a tracer of azimuthal variations in the shock compression ratio. Assuming $r_t=4$ in region 0 (i.e., at $\theta=90^\circ$, where the acceleration is inefficient), Fig. 
\ref{fig:profile} (panel b) shows a higher compressibility in quasi-parallel conditions, where the shock compression ratio rises up to approximately $r_t=7$. To further constrain the observed increase of the post-shock density towards quasi-parallel conditions, we added the data from the \emph{XMM-Newton} Large Program of observations of SN~1006 (approximately $t_{exp}=750$ ks). We updated the previous results obtained for 8 regions in the southeastern limb\cite{mbd12} (from around $\theta=55^\circ$ to approximately $\theta=120^\circ$) to correct for the effects of the telescope point spread function (see Methods subsection X-ray data analysis). In addition, we extended the study to quasi-parallel regions by analyzing the \emph{XMM-Newton} spectra of region 3 and of region 4-5 (the union of regions 4 and 5). We adopted the same model as for the \emph{Chandra} data. The agreement between the results obtained with the two telescopes is remarkable (see Fig. \ref{fig:profile}), and the combination of the reliability provided by the high \emph{Chandra} spatial resolution (which excludes contamination from ejecta) and the sensitivity of \emph{XMM-Newton} (which provides more precise estimates) confirms the azimuthal modulation of the post-shock density. In the \emph{XMM-Newton} spectra, the higher post-shock density in region 4-5 with respect to region 0 is confirmed even when letting the electron temperature and the interstellar absorption free to vary in the fitting process. In particular, we found that the quality of the fit of the spectrum of region 4-5 worsens significantly when forcing the plasma density to be the same as that observed in region 0, even when letting both $N_H$ and $kT$ free to vary ($\chi^2=182.3$ with 181 d.o.f., with $kT=1.0^{+0.5}_{-0.3}$ keV and $N_H=(6.3\pm0.6)\times10^{20}$ cm$^{-2}$, to be compared with $\chi^2=179.6$ with 182 d.o.f., see Table \ref{tab:best_fit}). 
Moreover, we notice that by imposing a low post-shock density in region 4-5, the best-fit value of the ionization parameter ($\tau=(1.3\pm0.2)\times 10^9$ s cm$^{-3}$) increases by a factor of about $2$ with respect to that reported in Table \ref{tab:best_fit}, thus still indicating the need for a higher post-shock density in quasi-parallel regions. \section*{Discussion} From observations, the azimuthal profile of the post-shock density shown in Fig. \ref{fig:profile} can be explained by a higher shock compression ratio in quasi-parallel regions than in quasi-perpendicular regions. From theory, self-consistent kinetic simulations\cite{cs14} unveiled that the injection of thermal protons into DSA is maximized in quasi-parallel regions, where the efficiency can reach 10-15\%, and suppressed at quasi-perpendicular shocks. Prominent shock modification ($r_t\lesssim 7$) is therefore expected only in quasi-parallel regions\cite{haggerty+20,caprioli+20}, while quasi-perpendicular shocks should show approximately $r_t=4$. While injection of thermal ions is typically suppressed for\cite{cs14} $\theta\gtrsim 45^\circ$, re-acceleration of pre-existing Galactic CRs (seeds) can provide a modest acceleration efficiency (approximately $2-6\%$), thus playing a role in shock modification up to\cite{czs18} $\theta\approx 60^\circ$, in agreement with the observed trend in Figure \ref{fig:profile}. CR acceleration also leads to the amplification of the pre-shock magnetic field in the quasi-parallel regions, where synchrotron emission is more prominent and magnetic fields are more turbulent, as attested by radio polarization maps\cite{rhm13}. Also, quasi-perpendicular regions exhibit synchrotron emission in the radio but not in the X-rays, which points to the presence of GeV but not TeV electrons, again consistent with acceleration in the unperturbed Galactic magnetic field\cite{bla13}. The solid curve in Fig.~\ref{fig:profile} (panel b) shows the expected trend of $r_t$ vs. 
$\theta$, obtained by assuming a quasi-parallel acceleration efficiency $\xi_p=12\%$, with a cut-off at approximately $45^\circ$. These values are chosen based on self-consistent hybrid simulations\cite{caprioli+14a}, which attest to thermal ions being spontaneously injected into DSA at quasi-parallel shocks\cite{caprioli+15}. Again guided by simulations, which show that more oblique shocks can efficiently re-accelerate pre-existing CR seeds\cite{caprioli+18}, we also include a component of re-accelerated CRs with efficiency $\xi_s=6\%$ and a cutoff at $70^\circ$. Since both accelerated and re-accelerated particles can effectively amplify the initial magnetic field, we set the normalized magnetic pressure to $\xi_B=5\%$. We also note that magnetic-field amplification and a high $r_t$ are consistent with the non-detection of X-ray emission from the precursor of SN~1006\cite{mab10}. The profile shown by the solid curve in Fig.~\ref{fig:profile} (panel b) is in line with the observed data points. As a comparison, we also include the expected profiles obtained without CR re-acceleration (with $\xi_p=18\%$, dashed curve) and without postcursor effects\cite{haggerty+20} ($\xi_p=12\%$, $\xi_s=6\%$, $\xi_B=0$, dotted curve). The theoretical curves in Fig.~\ref{fig:profile}, while capturing the zeroth-order azimuthal dependence of $r_t$ on $\theta$, do not account for the possibly more refined geometry of the magnetic field around SN~1006. A comparison between radio maps and MHD simulations (not including shock modification)\cite{bom11} suggests that the local magnetic field may be tilted by $\phi_B\approx 38^\circ\pm4^\circ$ with respect to the line of sight, with a gradient of the field strength of the order of $\nabla|{\rm{\textbf{B}}}|=1.5~\rm{\textbf{B}}$ over 10 pc, roughly lying in the plane of the sky (parallel to the limbs) and pointing toward the Galactic plane. 
In general, a finite tilt $\phi_B$ would stack regions with different shock inclinations along each line of sight, thereby reducing the contrast between the regions with maximum and minimum $r_t$; nevertheless, since the expected $r_t$ is maximum for $\theta\lesssim 40^\circ$ (see Fig. \ref{fig:profile}), any tilt $\phi_B\lesssim 40^\circ$ would not induce any major modification of the curves in Fig.~\ref{fig:profile}. The presence of a gradient, instead, has been shown\cite{bom11} to reduce the angular distance between the two polar caps, producing a narrow minimum in the $r_t$ versus $\theta$ profile, remarkably similar to that observed (see Methods subsection Modeling the shock modification). Since the simple geometry assumed in this paper captures well the bilateral morphology of SN 1006 and its azimuthal variations, we defer the study of the corrections induced by more elaborate field geometries to a forthcoming publication. Finally, we consider the effects that shock modification should induce on the spectrum of the accelerated particles. In the context of the classical theory of non-linear DSA\cite{drury83,malkov+01}, a shock compression ratio larger than 4 leads to CR spectra harder than $E^{-2}$, while radio observations of SN~1006 suggest a radio spectral index of 0.6\cite{green19}, corresponding to a CR spectrum $\propto E^{-2.2}$. Nevertheless, when the postcursor physics is taken into account in the calculation of the CR spectral index\cite{caprioli+20}, for the parameters chosen in this paper for the quasi-parallel regions ($\xi_{tot}=\xi_p+\xi_s=18\%$, $\xi_B=5\%$), one obtains $r_t=6.34$ and a CR spectrum $\propto E^{-2.19}$, in remarkable agreement with the observed radio index. 
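The chain of reasoning above (measured density to compression ratio, compression ratio to CR slope) can be sketched in a few lines. This is a test-particle illustration with our own function names; it deliberately omits the postcursor steepening, which is precisely why $r_t>4$ here yields spectra harder than the observed $E^{-2.2}$:

```python
def compression_ratio(n, n_ref, r_ref=4.0):
    """Compression ratio inferred from the post-shock density n,
    normalised so that the reference region (theta = 90 deg) has r_ref = 4."""
    return r_ref * n / n_ref

def dsa_energy_index(r):
    """Test-particle DSA: f(p) ~ p^-q with q = 3r/(r-1), so the
    relativistic number spectrum is N(E) ~ E^-s with s = q - 2."""
    return 3.0 * r / (r - 1.0) - 2.0
```

With the densities of region 5 and region 0 ($n_5\approx0.29$ cm$^{-3}$, $n_0\approx0.164$ cm$^{-3}$) one recovers $r_t\approx7$, while \texttt{dsa\_energy\_index(6.34)} gives $s\approx1.56$, much harder than the observed 2.19: the difference is the postcursor contribution.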
In conclusion, our findings show an azimuthal modulation of the post-shock density in SN 1006, which is consistent with a substantial deviation of the shock compression ratio from the value of $r_t=4$ (expected for strong adiabatic shocks) in regions of prominent particle acceleration, where electrons are accelerated to TeV energies. The inferred values of the compression ratio and CR slope are compatible with those expected in CR-modified shocks when the effects of the postcursor are also accounted for\cite{haggerty+20,caprioli+20}. Moreover, the azimuthal variation of $r_t$ (Fig.~\ref{fig:profile}) attests to the prominence of quasi-parallel acceleration and to the important role played by the re-acceleration of pre-existing Galactic CRs at oblique shocks. \newpage \begin{center} \vspace*{-10.00pt} {\Large \bf METHODS} \end{center} \renewcommand{\thesection}{M\arabic{section}} \setcounter{section}{0} \section*{X-ray data analysis} We analyzed {\it Chandra} observations 13737, 13738, 13739, 13740, 13741, 13742, 13743, 14423, 14424, 14435 (PI F. Winkler) performed between April and June 2012, with a total exposure time of 669.85 ks, and observation 9107 (PI R. Petre) performed in June 2008, with a total exposure time of 68.87 ks (see Table \ref{tab:obs}). All observations were reprocessed with CIAO 4.12\cite{fru06} and CALDB 4.9.0 (\url{https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/caldb_intro.html}). Mosaic images of SN~1006 were obtained by combining the different pointings with the CIAO task \texttt{merge\_obs} (\url{https://cxc.cfa.harvard.edu/ciao/ahelp/merge_obs.html}). In particular, we produced vignetting-corrected mosaic images of the flux (in counts s$^{-1}$ cm$^{-2}$) in the $0.5-1$ keV band (shown in green in Fig. 1) and in the $2.5-7$ keV band (light blue in Fig. 1). The contact discontinuity in SN~1006 is very close to the forward shock\cite{chr08,mbi09}. 
To measure the ISM post-shock density, we therefore extracted X-ray spectra from narrow regions between the contact discontinuity and the shock front. The regions selected for the spatially resolved spectral analysis are shown in Fig. \ref{fig:Xradio}. Assuming $\theta=0^\circ$ at the center of the northeastern radio limb, the azimuthal range explored is $\theta=0^\circ-122^\circ$. In this azimuthal range, the spherical shape of the shock front, combined with the extremely faint and uniform HI emission, clearly points toward a uniform ambient environment. We do not consider regions with negative values of $\theta$ because of the lack of spherical symmetry in the remnant therein, combined with the superposition of several shock fronts (which makes it difficult to correctly estimate the volume of the X-ray emitting plasma). We do not consider regions with $\theta>122^\circ$ because there it is not possible to select regions free from ejecta emission, given that several ejecta knots reaching the shock front (and even protruding beyond it) can be observed in the soft X-ray image (Fig. \ref{fig:Xradio}, panel c) for approximately $\theta=122^\circ-150^\circ$. Beyond approximately $\theta=150^\circ$ the shell loses its spherical shape and interacts with an atomic cloud\cite{mad14,mop16} (Fig. \ref{fig:Xradio}, panel a). Spectra, together with the corresponding Auxiliary Response Files (ARF) and Redistribution Matrix Files (RMF), were extracted via the CIAO tool \texttt{specextract} (\url{https://cxc.cfa.harvard.edu/ciao/ahelp/specextract.html}). Background spectra were extracted from regions selected outside the remnant, without point-like sources and, when possible, in the same chip as the source regions. We verified that the results of our spectral analysis are unaffected by the selection of other background regions. Spectra were rebinned by adopting the ``optimal binning'' procedure\cite{kb16}. 
As a cross-check, we also rebinned the spectra so as to get at least 25 counts per spectral bin, obtaining the same results, though with slightly larger error bars. Spectral analysis was performed with XSPEC version 12.10.1f\cite{arn96} in the $0.5-5$ keV band, by adopting $\chi^2$ statistics. Spectra extracted from the same region of the sky in different observations were fitted simultaneously. We verified that our results do not change significantly by modeling the spectrum of the background, instead of subtracting it, or by using Cash statistics instead of $\chi^2$-minimization in the fitting process. Thermal emission from the shocked ISM was described by an isothermal plasma in non-equilibrium of ionization with a single ionization parameter, $\tau$ (model NEI in XSPEC, \url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node195.html}, based on the database AtomDB version 3.09, see \url{http://www.atomdb.org/Webguide/webguide.php}). Though we adopt a state-of-the-art spectral model, we acknowledge that there may be limitations in the description of the emission stemming from an under-ionized plasma with a very low ionization parameter, such as the one studied here. However, we expect this effect to introduce almost the same biases (if any) in all regions, and not to be responsible for the density profile shown in Fig. \ref{fig:profile}. We found that the electron temperature in the shocked ISM is consistent with being constant over all the explored regions and fixed it to the best-fit value obtained in region 0 ($kT=1.35$ keV), where we get the most precise estimate (error bars of approximately $0.4$ keV at the $68\%$ confidence level). This value is in remarkable agreement with that measured in a similar region with \emph{XMM-Newton}\cite{mbd12}. We included the effects of interstellar absorption by adopting the model TBABS (\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node268.html}) within XSPEC. 
The interstellar absorption is expected to be uniform in the portion of the shell analyzed, and we fixed the absorbing column density to $N_H=7 \times 10^{20}~ \text{cm}^{-2}$, in agreement with radio observations\cite{dgg02}. We performed the F-test in all regions, finding that the quality of the spectral fits does not improve significantly by letting $kT$ or $N_H$ free to vary. The ISM emission measure and ionization parameter, $\tau$, were left free to vary in the fitting procedure. We verified that this model provides an accurate description of the spectra extracted from regions in the thermal southeastern limb (namely regions $0,~-1,~-2,~-3,~+1$), and that an additional nonthermal component does not significantly improve the quality of the fits, its normalization being consistent with 0 at the 99\% confidence level. However, in regions $+2,~+3,~+4,~+5$ there is significant synchrotron emission. We therefore added a synchrotron component when fitting the spectra from these regions and modeled the synchrotron emission by considering the electron spectrum in the loss-dominated case\cite{za10}, since this model is particularly well suited for SN~1006\cite{mbd13} (our results and conclusions do not change by adopting an exponentially cut-off power-law distribution of electrons, XSPEC/SRCUT model, \url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node228.html}, to describe the synchrotron emission, as done in previous works\cite{mbi09,mbd12}). The normalization and break energy of the synchrotron emission were left free to vary in the fits. The normalization of the thermal component is significantly larger than 0 at the $99\%$ confidence level in all regions. Table \ref{tab:best_fit} shows the best-fit results for all the regions, with errors at the 68\% confidence level. 
We derive the average plasma density, $\overline{n}$, in each spectral region from the corresponding best-fit value of the emission measure of the plasma ($EM=\int n^2 dV=\overline{n^2}V$, where $n$ is the plasma density and $V$ is its volume). The volume is calculated with the following method (see Supplementary Software 1 for further details). We project the regions shown in Fig. \ref{fig:Xradio} on a uniform grid with pixel size $0.2'' \times 0.2''$ ($0.2''$ corresponds to about $6.3 \times 10^{15} \; \text{cm}$ at a distance of 2.2 kpc\cite{wgl03}). For each pixel, we calculate the corresponding depth as the length of the chord along the line of sight intercepted by the sphere that maps the shock front, and compute the volume accordingly. We then sum over all the pixels within a given region. The radius of the sphere marking the shock front depends slightly on the region considered, ranging from $R_{min}= 14.4'$ in region +5 to $R_{max}=14.55'$ in region 0, but we use the same center for all the regions (namely, $\alpha = 15^h:02^m:55.74^s$, $\delta = -41^\circ:56':56.603''$). We verified the precision and reliability of our method by considering more regular regions, like those adopted in previous works\cite{mbd12}, where the volume can be calculated analytically. We found differences $<0.4\%$ between the numerical and analytical values. The volumes of the emitting plasma in the regions adopted for spectral analysis are listed in Table \ref{tab:best_fit} and were used to derive $\overline{n}$ from $EM$. We verified that our results and conclusions do not change by adopting the PSHOCK model within XSPEC to model the thermal emission. The PSHOCK model assumes a linear distribution of the ionization parameter versus emission measure\cite{blr01}, ranging from zero (at the shock front) up to a maximum value ($\tau^{max}$, a free parameter in the fit), instead of a single, ``mean'', ionization parameter as in the NEI model. 
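The chord-length volume integration described above can be sketched as follows (a minimal illustration with a boolean pixel mask and a single spherical shock front; the function names, array layout and units are our own):

```python
import numpy as np

def region_volume(mask, pixel_cm, shock_radius_cm, center_px):
    """Volume of the emitting plasma behind a spherical shock front:
    for every pixel in the (boolean) region mask, the line-of-sight
    depth is the chord intercepted by the sphere mapping the shock."""
    ys, xs = np.nonzero(mask)
    # projected distance of each pixel from the sphere center, in cm
    rho = np.hypot(xs - center_px[0], ys - center_px[1]) * pixel_cm
    # full chord length through the sphere (zero outside the projection)
    chord = 2.0 * np.sqrt(np.clip(shock_radius_cm**2 - rho**2, 0.0, None))
    return chord.sum() * pixel_cm**2

def density_from_em(em, volume):
    """Mean density from the emission measure, EM = n^2 * V."""
    return np.sqrt(em / volume)
```

For a test mask covering the full projected sphere, the pixel sum converges to the analytic value $\frac{4}{3}\pi R^3$, mirroring the check against analytically computable volumes described above.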
The best-fit ISM density obtained in region 0 and region 5 with the PSHOCK model is $n^{P}_{0}=0.163_{-0.017}^{+0.014}$ cm$^{-3}$ and $n^{P}_{5}=0.32_{-0.08}^{+0.11}$ cm$^{-3}$, respectively (to be compared with $n_0=0.164_{-0.016}^{+0.014}$ cm$^{-3}$ and $n_5=0.29_{-0.07}^{+0.10}$ cm$^{-3}$ obtained with the NEI model). As expected, the maximum ionization parameter is approximately a factor of 2 higher than the mean $\tau$ obtained with the NEI model ($\tau^{max}_0=8.9_{-1.5}^{+2.1}\times10^8$ s cm$^{-3}$ and $\tau^{max}_5=1.2^{+1.1}_{-0.4}\times10^9$ s cm$^{-3}$, to be compared with $\tau_0=4.8_{-0.7}^{+0.9}\times 10^8$ s cm$^{-3}$ and $\tau_5=7_{-2}^{+5}\times 10^8$ s cm$^{-3}$). Table \ref{tab:best_fit} shows the best-fit values of the ionization parameter $\tau=\int_{t_s}^{t_f}{n\,dt}=\overline{n}\,\overline{\Delta t}$ (where $\overline{n}$ is the time-averaged plasma density and $\overline{\Delta t}$ is the mean time elapsed since the shock impact within the region) in all regions. Figure \ref{fig:tau-dens} shows the confidence contours of the ISM density (as derived from the $EM$) and $\tau$ in region 0 and region 5. Since both $EM$ and $\tau$ depend on the plasma density, it is possible to estimate $\overline{\Delta t}$. The figure includes isochrones in the $(n,~\tau)$ space, showing that we obtain very reasonable estimates of the mean time elapsed after the shock impact ($\overline{\Delta t}=1-2\times10^2$ yr). The radial size of the extraction regions changes from case to case, and so does their inner boundary, which is closer to the shock front in some regions (e.g., regions 3, 4, 5) than in others (e.g., region -1 and region -2). Therefore, $\overline{\Delta t}$ is not strictly the same for all regions, and the ionization parameter does not depend only on the plasma density (we expect lower $\overline{\Delta t}$ in the narrow regions around the northeastern polar cap). 
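The isochrone estimate can be reproduced directly from the fitted quantities: since $\tau=\overline{n}\,\overline{\Delta t}$, the mean elapsed time follows immediately (a trivial check, here using the region 0 values quoted above):

```python
SEC_PER_YEAR = 3.156e7

def elapsed_time_yr(tau, n):
    """Mean time since the shock impact (yr), from tau = n * dt,
    with tau in s cm^-3 and n in cm^-3."""
    return tau / (n * SEC_PER_YEAR)
```

With $\tau_0=4.8\times10^8$ s cm$^{-3}$ and $n_0=0.164$ cm$^{-3}$ this gives $\overline{\Delta t}\approx90$ yr, within the $1-2\times10^2$ yr range read off the isochrones.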
However, we find that, especially for regions with similar radial size, higher values of the post-shock density are associated with higher values of $\tau$, as shown in Table \ref{tab:best_fit} and Fig. \ref{fig:tauplot}. The azimuthal profile of the ionization parameter shown in Fig. \ref{fig:tauplot} is then consistent with the density profile shown in Fig. \ref{fig:profile}. In the framework of the \emph{XMM-Newton} Large Program of observations of SN~1006 (PI A. Decourchelle), we analyze the EPIC observation 0555630201 (see Table \ref{tab:obs}). \emph{XMM-Newton} EPIC data were processed with the Science Analysis System software, V18.0.0 (see the ``Users Guide to the XMM-Newton Science Analysis System'', Issue 17.0, 2022, \url{https://xmm-tools.cosmos.esa.int/external/xmm_user_support/documentation/sas_usg/USG/}). Event files were filtered for soft proton contamination by adopting the \texttt{ESPFILT} task (\url{https://xmm-tools.cosmos.esa.int/external/sas/current/doc/espfilt/espfilt.html}), thus obtaining screened exposure times of 89 ks, 94 ks and 51 ks for MOS1, MOS2\cite{taa01} and pn\cite{sbd01} data, respectively. We selected events with PATTERN$\le12$ for the MOS cameras, PATTERN$\le4$ for the pn camera, and FLAG$=$0 for both. We extracted EPIC spectra from region 3 and from the union of region 4 and region 5 (hereafter region 4-5; extraction regions are shown in Fig. \ref{fig:Xradio}). Spectra were rebinned by adopting the optimal binning procedure, and spectral analysis was performed with XSPEC in the $0.5-5$ keV band by adopting the model described above for the analysis of \emph{Chandra} spectra. MOS and pn spectra were fitted simultaneously. We point out that \emph{Chandra} and \emph{XMM-Newton} spectra were fitted independently. Best-fit values are shown in Table \ref{tab:best_fit}, with errors at the 68\% confidence level. 
Regions selected for spectral analysis are located at the rim of the shell, and part of the ISM X-ray emission is spread outside the outer border of the regions because of the relatively large point spread function of the telescope mirrors (corresponding to about $6''$ full width at half maximum). We quantified this effect by assuming that the ISM emission is uniformly distributed in each spectral region and found that approximately $7\%$ of the ISM X-ray emission leaks out of each region. We address this issue by correcting the measured plasma emission measure accordingly. This has a small effect on the density estimate, considering that the density is proportional to the square root of the emission measure. Nevertheless, we applied this correction to derive the density estimates shown in Table \ref{tab:best_fit} and in Fig. \ref{fig:profile}, as well as to revise the previous values obtained in the southeastern limb\cite{mbd12}, also shown in Table \ref{tab:xmmrev} and Fig. \ref{fig:profile}. \section*{Modeling the shock modification} Efficient acceleration of CRs has long been associated with an increase in the shock compressibility\cite{dru83,je91,malkov+01}, attributed both to the softer equation of state of relativistic CRs, whose adiabatic index is 4/3 (rather than 5/3), and to the escape of particles from upstream, which effectively makes the shock behave as partially radiative\cite{dv81,cba09}. In this case, though, CR spectra would become significantly harder than $E^{-2}$ above a few GeV, at odds with $\gamma$-ray observations of individual SNRs\cite{cap11,caprioli12}. 
Unprecedentedly long hybrid simulations of non-relativistic shocks have recently revealed an effect that was not accounted for in the classical DSA theory, namely that the CR-amplified magnetic turbulence may have a sizable speed with respect to the shocked plasma, resulting in a postcursor, i.e., a region behind the shock where both CRs and magnetic fields drift away from the shock faster than the fluid itself\cite{haggerty+20,caprioli+20}. The postcursor-induced shock modification has two main implications: on one hand, it acts as a sink of energy, which leads to an enhanced compression; on the other hand, it advects CRs away from the shock at a faster rate, which leads to steeper spectra\cite{bel78}. The relevance of the postcursor is controlled by the post-shock Alfv\'en velocity\cite{haggerty+20} relative to the downstream fluid velocity, and can therefore be inferred from observations in which the shock velocity and the downstream density and magnetic field are constrained; simple estimates for both radio SNe and historical SNRs return a remarkable agreement between observations and theory\cite{caprioli+20,dc21}. It has been shown\cite{haggerty+20} that it is possible to calculate the shock compression ratio given the post-shock pressures in CRs and magnetic fields ($\xi_c$ and $\xi_B$), normalized to the upstream bulk pressure. We then consider the contribution of CRs injected from the thermal pool\cite{cs14,cps15,haggerty+20} and re-accelerated seeds\cite{czs18}, both of which are expected to produce magnetic turbulence via the Bell instability for strong shocks\cite{caprioli+14b,czs18}. The dependence on the shock obliquity $\theta$ is modelled after hybrid simulations and modulated with \begin{equation} \xi(\theta)\equiv \frac{\xi_{i}}{2} \left[ 2 + {\rm tanh}\left(\frac{\bar\theta_i - \theta}{\Delta\theta}\right) - {\rm tanh}\left(\frac{\pi-\bar\theta_i - \theta}{\Delta\theta}\right) \right]. 
\end{equation} We consider $i=[p,s,B]$, corresponding to the pressure in CR protons injected from the thermal pool, reaccelerated seeds, and B fields, respectively; with such a prescription, each pressure is maximum at $\xi(0)=\xi(\pi)$ and drops over an interval $\Delta\theta=20^\circ$ centered at $\bar\theta_i=[45^\circ,70^\circ,70^\circ]$, respectively. The solid line in Figure \ref{fig:profile} shows the prediction for efficiencies $\xi_{p}=12\%$, $\xi_{s}=6\%$, and $\xi_{B}=5\%$; all these values are not the result of a best fitting, but rather motivated by simulations\cite{cs14, caprioli+14b,czs18} and successfully applied to the study of individual SNRs\cite{mc12,dc21}. Note that, with the current parametrization, the total efficiency at parallel regions is $\xi_p+\xi_s\approx 18\%$, which is reasonable when acceleration of He nuclei is added on top of protons \cite{caprioli+17}. We also explored a different configuration of the ambient magnetic field, by including the effects of a gradient of the field strength (as suggested by the slantness of the radio limbs\cite{bom11}) on the shock obliquity. By adopting the formalism described above, with $\xi_{p}=12\%$, $\xi_{s}=6\%$, and $\xi_{B}=5\%$, we obtain the profile shown in Fig. \ref{fig:gradb}. \section*{Data Availability} The \emph{Chandra} and \emph{XMM-Newton} data analyzed in this paper are publicly available in the Chandra Data Archive and \emph{XMM-Newton} Science Archive, respectively. Table \ref{tab:obs} provides the direct link to each observation. Datasets generated during the current study are available from the corresponding author on reasonable request. Source data are provided with this paper. \section*{Code Availability} \emph{Chandra} data were processed by adopting the CIAO software package, available at \url{https://cxc.cfa.harvard.edu/ciao/}. 
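As a numerical check, the obliquity modulation $\xi(\theta)$ defined in the Methods above can be evaluated directly (a sketch with our own variable names; the efficiencies and transition angles follow the values quoted in the text):

```python
import numpy as np

def xi_of_theta(theta, xi_i, theta_bar, dtheta=np.deg2rad(20.0)):
    """Obliquity modulation of the post-shock efficiencies (angles in rad):
    xi(theta) = xi_i/2 * [2 + tanh((tb - t)/dt) - tanh((pi - tb - t)/dt)]."""
    return 0.5 * xi_i * (2.0
                         + np.tanh((theta_bar - theta) / dtheta)
                         - np.tanh((np.pi - theta_bar - theta) / dtheta))

theta = np.linspace(0.0, np.pi, 181)
xi_p = xi_of_theta(theta, 0.12, np.deg2rad(45.0))  # injected protons
xi_s = xi_of_theta(theta, 0.06, np.deg2rad(70.0))  # re-accelerated seeds
```

By construction $\xi$ is symmetric about $\theta=90^\circ$, maximal at the quasi-parallel regions ($\theta=0,\pi$), and drops steeply at quasi-perpendicular obliquities.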
\emph{XMM-Newton} data were analyzed by using the SAS package, available at \url{https://www.cosmos.esa.int/web/xmm-newton/download-and-install-sas}. XSPEC, the software adopted for X-ray spectral analysis, is available at \url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/}. The Python code that we developed to calculate the volumes is provided as Supplementary Software. \newpage \section*{References} \begin{addendum} \item[Corresponding author] Correspondence to M. Miceli~(email: marco.miceli@unipa.it). \item[Acknowledgements] We thank F. Winkler for kindly providing us with the H$_\alpha$ map of SN~1006. We thank P. Plucinsky for helpful suggestions on the Chandra data analysis. MM, SO, FB, EG, and GP acknowledge financial contribution from the PRIN INAF 2019 grant and the INAF mainstream program. DC is supported by NASA grant 80NSSC20K1273 and NSF grants AST-1909778, AST-2009326 and PHY-2010240. JV's and EG's work on this paper has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101004131 (SHARP). AD acknowledges support by the Centre National d'Etudes Spatiales (CNES). MM and RG acknowledge support by the INAF Mini-Grant ``X-raying shock modification in supernova remnants''. The scientific results reported in this article are based in part on data obtained from the Chandra Data Archive. This research has made use of software provided by the Chandra X-ray Center (CXC) in the application package CIAO. Results are based in part on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. \item[Author Contributions] M. M. conceived and coordinated the project, led the \emph{XMM-Newton} data analysis and wrote the manuscript. R. G. led the analysis of the \emph{Chandra} observations. D. C. devised the theoretical modeling of shock modification. A. D., J. V., S. O. and G. 
P. provided insights on the analysis and on the interpretation of the results. E. G. and F. B. collaborated on the X-ray data analysis. All authors discussed the results and implications and commented on the manuscript at all stages. \item[] \item[Competing interests] The authors declare no competing interests. \end{addendum} \newpage \restylefloat{table} \begin{table}[!ht] \caption{\small{Best-fit values of emission measure ($EM$), ionization parameter ($\tau$), cutoff energy of the synchrotron emission ($E_{cut}$) and post-shock density ($n_{ISM}$) derived from \emph{Chandra} and \emph{XMM-Newton} spectra extracted from the regions shown in Fig. \ref{fig:Xradio}, with the corresponding values of $\chi^2$ and degrees of freedom ($d.o.f.$). Errors are at the 68\% confidence level (the values of temperature and absorbing column density are fixed to $kT=1.35$ keV and $N_H=7\times10^{20}$ cm$^{-2}$, respectively). Source data are provided as a Source Data file.}} \begin{center} \footnotesize \begin{tabular}{|l|l|l|l|l|l|l|} \hline \multicolumn{7}{|l|}{\emph{Chandra}} \\ \hline Region & Volume ($10^{55}$ cm$^{3}$) & $EM$ ($10^{54}$ cm$^{-3}$) & $\tau$ ($10^8$ s cm$^{-3}$) & $E_{cut}$ ($10^{3}$ eV) & $n_{ISM}$ (cm$^{-3}$) & $\chi^2/d.o.f.$ \\ \hline $0$ & $4.76$ & $1.3_{-0.2}^{+0.2}$ & $4.8_{-0.7}^{+0.9}$ & & $0.164_{-0.016}^{+0.014}$ & $56.47/42$\\ \hline $+1$ & $4.26$ & $1.9_{-0.2}^{+0.2}$ & $6.2_{-0.5}^{+0.6}$ & & $0.209_{-0.012}^{+0.011}$ & $122.38/77$\\ \hline $+2$ & $1.84$ & $0.8_{-0.3}^{+0.5}$ & $7.9_{-1.9}^{+4}$ & $0.076_{-0.007}^{+0.008}$ & $0.20_{-0.05}^{+0.06}$ & $207.42/130$\\ \hline $+3$ & $3.49$ & $1.6_{-0.5}^{+0.9}$ & $7_{-2}^{+3}$ & $0.170_{-0.006}^{+0.007}$ & $0.21_{-0.04}^{+0.05}$ & $178.73/162$\\ \hline $+4$ & $1.37$ & $1.61^{+1.9}_{-1.0}$ & $5.4^{+6}_{-2.0}$ & $0.309^{+0.019}_{-0.017}$ & $0.34^{+0.17}_{-0.13}$ & $107.8/91$\\ \hline $+5$ & $1.37$ & $0.8_{-0.4}^{+0.7}$ & 
$7_{-2}^{+5}$ & $0.27_{-0.02}^{+0.02}$ & $0.29_{-0.07}^{+0.10}$ & $124.22/92$\\ \hline $-1$ & $9.52$ & $3.3_{-0.2}^{+0.2}$ & $4.9_{-0.3}^{+0.4}$ & & $0.188_{-0.007}^{+0.007}$ & $137.83/94$\\ \hline $-2$ & $11.3$ & $5.6_{-0.4}^{+0.4}$ & $4.7_{-0.3}^{+0.4}$ & & $0.222_{-0.009}^{+0.009}$ & $71.75/41$\\ \hline $-3$ & $6.09$ & $2.7_{-0.4}^{+0.4}$ & $5.7_{-0.6}^{+0.8}$ & & $0.212_{-0.016}^{+0.015}$ & $47.42/40$\\ \hline \multicolumn{7}{|l|}{\emph{XMM-Newton}} \\ \hline Region & Volume ($10^{55}$ cm$^{3}$) & $EM$ ($10^{54}$ cm$^{-3}$)& $\tau$ ($10^8$ s cm$^{-3}$) & $E_{cut}$ ($10^{3}$ eV) & $n_{ISM}$ (cm$^{-3}$) & $\chi^2/d.o.f.$ \\ \hline $+3$ & $3.43$ & $1.26^{+0.26}_{-0.15}$ & $11.5\pm 2.0$ & $0.138^{+0.004}_{-0.005}$ & \textcolor{black}{$0.206^{+0.020}_{-0.011}$} & $168.0/176$ \\ \hline $+4-5$ & $2.65$ & $1.5^{+0.9}_{-0.6}$ & $6.7^{+3}_{-1.9}$ & $0.272\pm0.007$ & $0.26^{+0.07}_{-0.06}$ & $179.6/182$ \\ % \hline \end{tabular} \label{tab:best_fit} \end{center} \end{table} \restylefloat{table} \begin{table}[!ht] \caption{\small{List of observations analyzed in this work. 
Source data are provided as a Source Data file.}} \begin{center} \footnotesize \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{5}{|l|}{\emph{Chandra}} \\ \hline Obs ID & Instrument & Exp (ks) & PI name & link \\ \hline 13737 & ACIS - S & 87.89 & Winkler & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/7/13737/}\\ \hline 13738 & ACIS - I & 73.47 & Winkler & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/8/13738/}\\ \hline 13739 & ACIS - I & 100.07 & Winkler & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/9/13739/}\\ \hline 13740 & ACIS - I & 50.41 & Winkler & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/0/13740/}\\ \hline 13741 & ACIS - I & 98.48 & Winkler & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/1/13741/}\\ \hline 13742 & ACIS - I & 79.04 & Winkler & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/2/13742/}\\ \hline 13743 & ACIS - I & 92.56 & Winkler & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/3/13743/}\\ \hline 14423 & ACIS - I & 25.02 & Winkler & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/3/14423/}\\ \hline 14424 & ACIS - I & 25.39 & Winkler & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/4/14424/}\\ \hline 14435 & ACIS - I & 38.32 & Winkler & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/5/14435/} \\ \hline 9107 & ACIS - S & 68.87 & Petre & \url{https://cxc.cfa.harvard.edu/cdaftp/byobsid/7/9107/}\\ \hline \multicolumn{5}{|l|}{\emph{XMM-Newton}} \\ \hline Obs ID & Instrument & Exp (ks) & PI name & link\\ \hline 0555630201 & EPIC & 109.719 & Decourchelle & \url{http://nxsa.esac.esa.int/nxsa-web/#obsid=0555630201}\\ % \hline \end{tabular} \label{tab:obs} \end{center} \end{table} \ \begin{table}[!ht] \caption{\small{Updated values of density of the shocked interstellar medium from previous \emph{XMM-Newton} data analysis\cite{mbd12}. Errors are at 68\% confidence level. Angles are measured counterclockwise from the direction of ambient magnetic field. 
Source data are provided as a Source Data file.}} \begin{center} \footnotesize \begin{tabular}{|c|c|c|} \hline Azimuthal opening angle ($^\circ$) & Region name\cite{mbd12} & Density (cm$^{-3}$) \\ \hline $53-63$ & $a$ & $0.206^{+0.2}_{-0.13}$ \\ \hline $58-73$ & $b$ & $0.197^{+0.05}_{-0.08}$ \\ \hline $65-80$ & $c$ & $0.189^{+0.05}_{-0.07}$ \\ \hline $73-88$ & $d$ & $0.169^{+0.03}_{-0.08}$ \\ \hline $80-96$ & $e$ & $0.150^{+0.05}_{-0.06}$ \\ \hline $88-103$ & $f$ & $0.152^{+0.07}_{-0.08}$ \\ \hline $96-112$ & $g$ & $0.199^{+0.09}_{-0.11}$ \\ \hline $104-120$ &$h$ & $0.201^{+0.09}_{-0.11}$ \\ \hline \end{tabular} \label{tab:xmmrev} \end{center} \end{table} \newpage
Title: Still at Odds with Conventional Galaxy Evolution: The Star Formation History of Ultra-Diffuse Galaxy Dragonfly 44
Abstract: We study the star formation history (SFH) of the ultra-diffuse galaxy (UDG) Dragonfly 44 (DF44) based on the simultaneous fit to near-ultraviolet to near-infrared photometry and high signal-to-noise optical spectroscopy. In fitting the observations we adopt an advanced physical model with a flexible SFH, and we discuss the results in the context of the degeneracies between stellar population parameters. Through reconstructing the mass-assembly history with a prior for extended star formation (akin to methods in the literature) we find that DF44 formed 90 per cent of its stellar mass by $z\sim 0.9$ ($\sim 7.2$ Gyr ago). In comparison, using a prior that prefers concentrated star formation (as informed by previous studies of DF44's stellar populations) suggests that DF44 formed as early as $z\sim 8$ ($\sim 12.9$ Gyr ago). Regardless of whether DF44 is old or very old, the SFHs imply early star formation and rapid quenching. This result, together with DF44's large size and evidence that it is on its first infall into the Coma cluster, challenges UDG formation scenarios from simulations that treat all UDGs as contiguous with the canonical dwarf population. While our results cannot confirm any particular formation scenario, we can conclude from this that DF44 experienced a rare quenching event.
https://export.arxiv.org/pdf/2208.11038
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} galaxies: evolution -- galaxies: dwarfs \end{keywords} \section{Introduction} Matching predictions to observations of how, and when, galaxies assemble serves as an important test for our greater understanding of cosmology and baryonic physics. Modern theories that suggest galaxy evolution is determined by the growth of galaxies' dark matter haloes, as well as by the regulation of their gas processes (i.e., infall and star formation histories; e.g., \citealt{white1991, schaye2010, dave2012, wechsler2018}), have successfully replicated some observed relations between galaxy properties -- for example, the tight connection between stellar mass and halo mass (i.e., the SMHM relation; \citealt{moster2010}). A number of outstanding issues remain, however. A particularly challenging problem is explaining the increasing number of galaxies that cease forming stars (i.e., `quench') over time \citep{renzini2006, faber2007}. While simulations correctly predict scaling relations for massive galaxies (e.g., the mass--metallicity relation, MZR, and the star formation main sequence), there are still fundamental discrepancies at lower stellar masses. In the low mass regime, observations have shown that quenched galaxies unassociated with massive host haloes are rare \citep{geha2012}, such that quenching at $z<1$ is thought to predominantly be a result of environmental effects \citep[e.g.,][]{boselli2006, fillingham2018, mao2021}. Rather than remain quenched, recent studies instead suggest that isolated quiescent dwarfs may in fact oscillate between `star forming' and `quenched' states \citep[e.g.,][]{polzin2021}. Yet cosmological simulations typically over-predict the abundance of quiescent field dwarfs \citep[e.g.,][]{dickey2021}. \vspace{0.2cm} The recently discovered ultra-diffuse galaxies (UDGs) potentially exemplify our limited understanding of the true diversity of galaxy evolution and quenching. 
UDGs were initially noted for their surprisingly large sizes given their low surface brightnesses ($R_\mathrm{eff}\geq1.5$~kpc and $\mu_\mathrm{0}(g)\geq24$~mag~arcsec$^{-2}$; \citealt{vandokkum2015_udg}), which, along with their red colours, distinguished them from classical low surface brightness (LSB) galaxies \citep[e.g.,][]{dalcanton1997}. Several current cosmological model predictions suggest that conventional processes can explain the UDG population, thus maintaining standard dark matter halo occupancy relations \citep[e.g.,][]{tremmel2020}. Such models typically focus on the mechanisms which increase the size of otherwise canonical dwarf galaxies to make them `ultra-diffuse' (for a summary of UDG origins see \citealt{jiang2019a}). Simulations have shown that unusual star formation or galaxy evolution processes can `puff up' canonical dwarfs (e.g., high-spin scenarios, \citealt{amorisco2016, rong2017}; energetic star formation feedback, \citealt{dicintio2017, chan2018, jackson2021b}) or dynamically redistribute their stellar populations (e.g., tidal heating and/or stripping; \citealt{jiang2019a, liao2019, carleton2019, sales2020}). Alternatively, UDGs may represent the tail of galaxy evolution processes, such that only minor differences in their evolution (e.g., when they infall or have major mergers) distinguish their final properties from normal dwarfs \citep[e.g.,][]{tremmel2020, wright2021}. Despite these differences, nearly all models rely on environmental processes to explain the lack of star formation in the subset of UDGs that are quiescent \citep[e.g., via ram pressure stripping;][]{yozin2015a, rong2017, chan2018, tremmel2020}. Accordingly, all of the scenarios follow a dichotomy related to when UDGs fall into a cluster environment: whether the proto-UDGs surpassed the size threshold before or after infall is tied to whether they infall `late' or `early'. 
While UDGs are found both in the field and clusters, those that are quiescent are usually located in clusters \citep[the few exceptions may be on backsplash orbits; e.g.,][]{papastergis2017, benavides2021}. Explaining the origin of UDGs and the diversity of their properties in the context of their environments remains a key question in understanding galaxy formation and evolution. Testing the predicted UDG properties (e.g., kinematics, \citealt{amorisco2016}; stellar populations, \citealt{rong2017, ferremateu2018}; globular cluster (GC) properties, \citealt{carleton2021}; infall versus quenching times, \citealt{gannon2022}) from these scenarios against the observed properties, however, has revealed a number of discrepancies. And while some UDGs are found with very large sizes ($R_\mathrm{eff} > 4.5$~kpc), these exotic objects are beyond the predictions of most models \citep[][]{dicintio2017, carleton2019}. Along the same lines, models which accurately predict the distribution of UDG sizes fail to reproduce the distribution of sizes among normal dwarfs \citep[e.g.,][]{rong2017, jiang2019a, tremmel2020}. On the other hand, \citet{vandokkum2015_udg} proposed that some UDGs originate similar to today's massive galaxies (and have sizes reflecting their massive haloes), but lost their gas early in their histories. As a result of their early quenching, these `failed' galaxies did not build up the stellar mass expected for their haloes. This scenario deviates from the expected galaxy--halo connection, in that either these failed galaxies do not follow the SMHM relation or at least have a larger scatter than the standard relation. \vspace{0.2cm} A particularly interesting UDG is Dragonfly~44 (DF44) which is the largest galaxy in the original \citet{vandokkum2015_udg} sample, with {$R_\mathrm{eff}=4.7\pm0.2$~kpc} \citep{vandokkum2017}. 
High S/N spectroscopy has revealed an extremely old and metal-poor stellar population \citep[$\sim 2.3\sigma$ below the canonical dwarf MZR;][]{villaume2022}, implying that DF44 quenched very early and over a short time-scale. Moreover, while DF44 appears to have very low rotation \citep{vandokkum2019} characteristic of dwarf spheroidal galaxies, the stellar population gradients are `inverted' compared to the gradients typical of dwarf spheroidals \citep{villaume2022}. Regardless of whether DF44 has an over-massive halo or not \citep[][]{vandokkum2017, wasserman2019, bogdan2020, lee2020, saifollahi2021}, this UDG is inconsistent with the majority of UDG formation models. Late-quenching (after infall into a dense environment) scenarios can be ruled out for DF44 given its old age \citep[e.g.,][]{rong2017, chan2018, liao2019, jiang2019a, jackson2021b}. Moreover, DF44's low rotation conflicts with high-spin scenarios (e.g., \citealt{rong2017}; although the rotation could increase at larger radii, \citealt{grishin2021}). Yet, given the uncertainty in establishing the cluster infall time for an individual galaxy, we cannot preclude early-infall scenarios \citep[e.g.,][]{yozin2015a, liao2019, carleton2019, carleton2021, tremmel2020}. While some evidence \citep[e.g.,][]{alabi2018, vandokkum2019} suggests that DF44 is on its first infall into Coma, this is difficult to prove. \vspace{0.2cm} There is more to be learned, however, as UDG formation scenarios can be tested via their inferred star formation histories (SFHs). The time-scales of star formation reveal important epochs (e.g., mergers, infall, and/or quenching), which can be compared against observations. A number of studies have investigated the ages and mass assembly histories of UDGs, relying either on broadband colours alone, or on low to moderate S/N spectroscopy (e.g., \citealt{kadowaki2017, ferremateu2018, gu2018b, pandya2018, ruizlara2018, martinnavarro2019}; Buzzo et al. 2022, submitted). 
While these studies provide important first steps, comparisons with predictions are not necessarily straightforward. This is primarily because constraining the detailed shape of a galaxy's SFH is a complex problem. Several galaxy properties can conspire to alter the spectral energy distribution (SED) in similar ways (e.g., stellar age, metallicity, and dust), which are particularly difficult to disentangle with low spectral resolution data \citep[e.g., with photometry alone;][]{bell2001}. Recovering the SFHs for old stellar populations is particularly difficult -- the integrated spectrum evolves non-linearly with age \citep{serra2007} such that old populations appear relatively similar \citep[for a complete discussion see the review by][]{conroy2013a}. Moreover, a late burst of star formation can `outshine' a (dominant) older population \citep[e.g.,][]{papovich2001, allanson2009}. While broad wavelength coverage is needed to precisely determine the dust absorption (and emission, with mid-infrared coverage), high resolution data of select spectral features are needed to precisely constrain the stellar metallicity and age. Both observations are necessary to break the degeneracy between these parameters \citep[e.g.,][]{vazdekis1999, trager2000b}. Using spectra that span a relatively wide wavelength range, full-spectrum fitting has proven to be effective in this respect \citep[e.g.,][]{macarthur2009, sanchezblazquez2011}. However, this technique requires a well-calibrated spectral continuum. Simultaneously fitting photometry and spectra can bypass this issue, as the photometry provides a means to fit the continuum\footnote{In practice it is generally easier to calibrate photometry to standard filters than to calibrate a spectrum.} and increases the wavelength coverage. 
In fitting the data it is necessary to impose `prior knowledge',\footnote{`Prior' here is used in the Bayesian sense, where the probability of a model given the data (i.e., the `posterior') is proportional to both the likelihood of the data (given the model) and the prior knowledge about the model.} such as the flexibility of the SFH. The choice of a prior for the shape of the SFH can significantly impact age estimates, particularly for older stellar populations, and for low resolution and/or low S/N data \citep[as shown in, e.g.,][]{maraston2005, leja2017, leja2019a, han2018, carnall2019a}. In order to draw connections between the predicted and observed properties of UDGs it is necessary to give due attention to the choice of a prior. While it is advantageous to use flexible models together with physically motivated priors, a `good prior' is not necessarily known a priori. Therefore, results should be discussed in the context of the prior used (which may not be as `uninformative' as intended; e.g., \citealt{leja2019a}). \vspace{0.2cm} In this work we simultaneously fit near-ultraviolet (NUV) to near-infrared (NIR) photometry (nine bands) with high S/N ($\sim 96$~\AA$^{-1}$) rest-frame optical spectroscopy (from KCWI, the Keck Cosmic Web Imager). The same data set was used in \citet{vandokkum2019} and \citet{villaume2022} to study the stellar kinematics and populations of DF44. In our fiducial model we adopt flexible SFHs, which do not assume a certain shape with time. Moreover, we compare the results between SFH priors of different degrees of `smoothness' in order to identify which results are fully constrained by the observations. We address the unique stellar population properties of this UDG, and its epoch of formation and quenching, in order to test models of UDG formation. The data are described in Section~\ref{sec:data}, and Section~\ref{sec:fitting} details how we fit the data with an advanced physical model. 
In Section~\ref{sec:results} we discuss the results and place them in the context of the literature. What our results imply about the origins of DF44 in the context of theoretical models is discussed in Section~\ref{sec:discussion}. A summary of the key results is provided in Section~\ref{sec:conclusion}. The SFHs of DF44 determined by this work are listed in full in Appendix~\ref{app:fsfh_for_comparison}. We provide additional details on the above discussion in the Appendix, touching on systematic biases in measuring SFHs in Appendix~\ref{app:sfh_biases}, and degeneracies between dust extinction and the flux from old stellar populations in Appendix~\ref{app:old_v_dust}. The magnitudes reported follow the AB magnitude system. We use a Chabrier (2003) initial mass function (IMF), and adopt a flat $\Lambda$ cold dark matter ($\Lambda$CDM) cosmology with $\Omega_m=0.3$ and $H_0 = 70~\mathrm{km}~\mathrm{s}^{-1}~\mathrm{Mpc}^{-1}$. \section{Data} \label{sec:data} Our data for DF44 include both rest-frame optical spectroscopy and NUV to NIR photometry, shown in Fig.~\ref{fig:spectrum_with_lines}, and described in more detail below. We assume the spectroscopic redshift measured by \citet{vandokkum2017}: $z=0.02132\pm0.00002$. \subsection{Spectroscopy} \label{sec:data_spectrum} The spectroscopy is described in detail in \citet{vandokkum2019} and analysed further in \citet{villaume2022}; we summarise the relevant details here. Of particular note is the sky subtraction, as the sky is much brighter than the UDG. Sky exposures were obtained 1.5\arcmin\ away from DF44 intermittently between DF44 observations. The wavelength-dependent time variation in the sky spectrum was obtained from the spatially collapsed individual sky spectra, as parameterised by principal component analysis (PCA). The sky in each science cube was determined from a linear combination of templates, where the best-fit sky spectrum for the given exposure was subtracted from each spatial pixel. 
Additional details are provided in \citet{vandokkum2019}. KCWI integral field spectroscopy was obtained for DF44 and spectra were extracted in nine elliptical apertures after masking the ten brightest point sources. The apertures were sized $9\arcsec \times 6\arcsec$, to match the UV photometry; see the following section. The integrated spectrum was determined through bootstrapping the individual spectra, where we used the 50\thh\ percentile of the bootstrapped flux distribution as the flux, and the mean offset of the 16\thh\ and 84\thh\ percentiles from the median as the uncertainty. With 17 hours of exposure on target, the integrated spectrum reaches a $\mathrm{S/N}\sim 96$~\AA$^{-1}$ (see the third panel in Fig.~\ref{fig:spectrum_with_lines}). The KCWI Medium slicer with BM grating was used, yielding a spectral resolution of $\mathrm{R}\sim 4000$. After masking and interpolating over regions badly affected by sky transmission, the spectrum was smoothed to a resolution of 110~km~s$^{-1}$, for the purpose of later comparing with templates at this resolution. The final spectrum is shown in Fig.~\ref{fig:spectrum_with_lines} (the unsmoothed spectrum shown with grey lines), covering 4578--5337~\AA\ rest-frame, with notable absorption features labelled. Also shown is the S/N of the spectrum as a function of wavelength. Given the challenge of precisely flux-calibrating the spectrum (e.g., due to residuals from the spectral extraction), we instead rely on the calibration of the photometry to provide constraints on the SED continuum when fitting the galaxy properties and SFH (see Section~\ref{sec:fitting}). For this reason we do not flux calibrate the spectrum, and the continuum shape therefore reflects primarily the instrument response function and not the galaxy SED. We then effectively flatten the continuum by dividing through by a polynomial fit. 
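The bootstrap construction of the integrated spectrum described above can be sketched as follows (a minimal sketch with our own names; we read the quoted uncertainty as half the 16th--84th percentile range, i.e., the mean offset of those percentiles from the median):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_integrated_spectrum(aperture_spectra, n_boot=1000):
    """Resample the aperture spectra with replacement, sum each draw, and
    return the 50th percentile as the flux and half the 16th--84th
    percentile range as the uncertainty."""
    spectra = np.asarray(aperture_spectra, dtype=float)  # (n_aper, n_wave)
    n_aper = spectra.shape[0]
    sums = np.empty((n_boot, spectra.shape[1]))
    for i in range(n_boot):
        pick = rng.integers(0, n_aper, size=n_aper)  # resample apertures
        sums[i] = spectra[pick].sum(axis=0)
    p16, p50, p84 = np.percentile(sums, [16.0, 50.0, 84.0], axis=0)
    return p50, 0.5 * (p84 - p16)
```

The returned flux and uncertainty arrays have one entry per wavelength bin, matching the per-wavelength S/N shown in Fig.~\ref{fig:spectrum_with_lines}.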
In the fitting routine we therefore need to marginalise over the shape of the spectral continuum in comparing the models to the observations (see Section~\ref{sec:fitting_physical_model_speccal}). Lastly, we chose to mask the spectrum between 4700--4750~\AA\ rest-frame where there is a broad dip in the spectrum that does not appear in the models. We note that the blue end of the spectrum ($\lesssim$4800~\AA) was not fitted by either \citet{vandokkum2019} or \citet{villaume2022}. Our results are not impacted by masking this region of the spectrum, although the $\chi^2$ values are slightly higher without masking. \subsection{Photometry} \label{sec:data_phot} Photometry in all the broadband images was performed by measuring fluxes within a $9\arcsec \times 6\arcsec$ elliptical aperture, with a position angle of 65 degrees, to be consistent with the UV photometry reported by \citet{lee2020}. As this is significantly larger than the image resolution in all filters, no point spread function homogenisation was applied, though appropriate aperture corrections are made to the {\it Spitzer} and {\it GALEX} images to account for light lost outside the aperture due to the point spread function. Details of the reduction and analysis of each image are given below. The photometric measurements in each broad-band filter were corrected for foreground extinction in the Milky Way in the direction of the Coma Cluster using the website \url{http://argonaut.skymaps.info/usage} and Table~6 of \citet{schlafly2011} with $R_\mathrm{V}=3.1$. \subsubsection{Spitzer-IRAC Near-Infrared (NIR) Imaging} {\it Spitzer}-IRAC \citep[][]{fazio2004, werner2004} observations of DF44 were taken on 2017 May 12 starting at 07:19 (UT). Both 3.6 and 4.5 $\mu$m (channels 1 and 2, respectively) observations were taken. Fifty 100-second frames were taken in each channel with a medium-scale cycling dither pattern (median dither separation 53 pixels). 
The total exposure time was $93.6\times50 = 4680$~s in channel 1 and $96.8\times50=4840$~s in channel 2. We corrected for the `first-frame effect' (imperfect bias subtraction; see Section 5.1.10 of the IRAC Instrument Handbook). The rectification of each individual data frame for history effects in the IRAC arrays was performed in two steps, which are explained in detail in \citet{pandya2018}. In short, we first performed a per-pixel correction based on the IRAC idling-time characteristics in the IRAC skydarks, matched to those that took place before our observations. The typical magnitude of the per-pixel correction was about $4$~kJy~sr$^{-1}$ in channel 1 and $1$~kJy~sr$^{-1}$ in channel 2. These corrections are much smaller than the read noise error, and we do not add any systematic magnitude uncertainties due to them. In the second step, a mean background was calculated for each frame, and a function fitted to these means was subtracted. The typical function consisted of a constant term plus terms that decline exponentially with time. The uncertainties in these first-frame effect corrections are negligible compared to other sources of systematic error. We also formed a median image from all the frames on the source in each channel, after $3\sigma$ clipping, and subtracted that median image from each frame. This removes the residual images formed on the detector by previous observations. We determined that the uncertainty in the final magnitudes added by this step is less than $0.01$~mag. The DF44 frames include a point source on top of the faint galaxy. We used the {\it Spitzer} Science Center software {\sc MOPEX}, specifically the APEX and APEX-QA modules, to subtract this point source using point response function (PRF) fitting. The estimated uncertainty due to this step is about $0.5~\mu$Jy in both channels.
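The residual-image subtraction described above (a $3\sigma$-clipped median over all frames, subtracted from each frame) can be sketched as follows; the frame stack here is synthetic and the array sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 synthetic frames sharing a persistent residual pattern plus
# per-frame noise (an illustrative stand-in for the IRAC frame stack).
pattern = rng.uniform(0.0, 0.5, size=(32, 32))
frames = pattern + rng.normal(0.0, 1.0, size=(50, 32, 32))

# Clip pixels deviating by more than 3 sigma from the per-pixel median,
# then median-combine the surviving values into a residual-image estimate.
med = np.median(frames, axis=0)
sig = np.std(frames, axis=0)
keep = np.abs(frames - med) < 3.0 * sig
residual_image = np.nanmedian(np.where(keep, frames, np.nan), axis=0)

# Subtract the estimated residual image from every frame.
cleaned = frames - residual_image
```

Because the pattern is common to all frames while the noise is not, the clipped median recovers the pattern without being biased by outliers.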
We used the contributed {\it Spitzer}/IRAC software {\sc IMCLEAN} \citep{imclean} to remove leftover column-pulldown artefacts from the CBCD frames. We then used the {\it Spitzer} custom software package {\sc MOPEX} to create mosaics of the 50 frames in each channel, using the default parameters and the North up, East left orientation. Before mosaicking we ran the overlap-correction module to adjust for background offsets among the CBCD frames (one number per frame). We used only the multiframe outlier rejection scheme in {\sc MOPEX} to reject outlier pixels in the input frames. Next we manually created masks of other sources (including point-like sources on the galaxy) in both channels with the custom software {\sc GIPSY} \citep{gipsy}. We then measured the sky background in five empty areas of sky close to DF44 in channels 1 and 2, and from the results we estimated an average sky background (0.00408 and 0.00415~MJy~sr$^{-1}$ in channels 1 and 2, respectively) to be subtracted at the position of DF44. Aperture photometry was then performed, applying the mask and using {\sc Astropy} Python library commands, in a $9\arcsec \times 6\arcsec$ (P.A. +$65^{\circ}$) aperture centred on the coordinates given by \citet{vandokkum2015_udg}: $\mathrm{R.A.}=13^\mathrm{h}00^\mathrm{m}58^\mathrm{s}.0$, $\mathrm{Dec.}=26^{\circ}58\arcmin35\arcsec$. We corrected the results with the appropriate aperture corrections from the IRAC Instrument Handbook. The uncertainty in the aperture photometry was estimated by performing aperture photometry at several positions in empty sky and taking the rms scatter of these measurements, which gave 0.05 and 0.10~mag in IRAC channels 1 and 2, respectively. We estimated the uncertainty due to masking by replacing the pixel values under the masks with the average pixel value within the unmasked aperture, performing the photometry again, and taking the difference between this measurement and the measurement using the masks as the uncertainty.
The masking uncertainty is thus $0.14$~mag in channel 1 and $0.18$~mag in channel 2. The sky background subtraction uncertainty was estimated by taking the maximum difference between the sky background measurements in three areas of empty sky around DF44, adding this difference to all the pixels within the photometry aperture, and summing them up. This method gives $0.01$~mag and $0.11$~mag as the sky uncertainty in channels 1 and 2, respectively. The calibration uncertainty was estimated to be $2$~per~cent in both IRAC channels, amounting to $0.02$~mag in systematic uncertainty. There is an additional uncertainty of $9$~per~cent in channel 1 and $2$~per~cent in fractional flux in channel 2 due to the uncertainty in the integrated aperture flux correction factor (the limiting case being an infinite aperture); these convert to $0.09$ and $0.02$~mag in channels 1 and 2. In addition there is the point-source subtraction uncertainty of $0.01$~mag. We list the final AB magnitudes for channels 1 and 2, with their respective uncertainties from the quadrature sum of the individual magnitude uncertainties, in Table \ref{table:mags}. \subsubsection{Gemini GMOS g- and i-Band Imaging} DF44 was observed on 2017 May 12 with the Gemini Multi-Object Spectrograph (GMOS) for a total of 3000~s in both the $g$- and $i$-bands. The observations have been described by \citet{vandokkum2016}. We flux-calibrated the images against SDSS, accounting for a $g-i$ colour term and using four SDSS-catalogued stars in our images. The data were obtained in photometric conditions, and we adopt an absolute calibration uncertainty of $3$~per~cent, amounting to $0.03$~mag in the $g$- and $i$-bands, based on \url{https://www.gemini.edu/instrumentation/gmos/calibrations}. The sky background uncertainty was calculated as above for the IRAC channels, and amounted to $0.03$~mag in the $g$-band and $0.09$~mag in the $i$-band.
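As a quick numerical check, the individual IRAC uncertainty terms quoted above combine in quadrature to the total errors listed in Table~\ref{table:mags}:

```python
import math

# IRAC uncertainty terms (mag) as quoted in the text: aperture photometry,
# masking, sky subtraction, calibration, aperture correction, and
# point-source subtraction.
ch1 = [0.05, 0.14, 0.01, 0.02, 0.09, 0.01]
ch2 = [0.10, 0.18, 0.11, 0.02, 0.02, 0.01]

def quad_sum(terms):
    """Quadrature sum of independent magnitude uncertainties."""
    return math.sqrt(sum(t * t for t in terms))

err1 = quad_sum(ch1)   # ~0.18 mag, the IRAC1 entry in the photometry table
err2 = quad_sum(ch2)   # ~0.24 mag, the IRAC2 entry
```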
Aperture photometry was performed using the coordinates from \citet{vandokkum2015_udg} and the {\sc Astropy} Python library commands. We list the final AB magnitudes for the $g$- and $i$-bands and the respective uncertainties in Table \ref{table:mags}. \subsubsection{HST/WFC3/UVIS F606W and F814W imaging} Additional optical images of DF44 were taken on 2017 April 23 with the {\it Hubble Space Telescope} using the WFC3 camera with its UVIS detector and the broadband filters F606W and F814W. \citet{vandokkum2017} reported $5\sigma$ AB depths of F606W$=28.4$ and F814W$=26.8$ for DF44. Totals of 2430~s and 2420~s were spent on the source in the F606W and F814W filters, respectively. In both filters we calculated the sky mode in five different `empty' regions of the sky, averaged these values, and subtracted the result from the images. We also manually masked out point sources in the images. We used the image headers to calculate the conversion from electrons~s$^{-1}$ to AB magnitudes and performed elliptical aperture photometry within the same apertures as mentioned above for IRAC. The uncertainties were estimated as follows: the photometric calibration offset uncertainty is $0.03$~mag, and the uncertainty due to background subtraction (estimated as above) is $0.05$~mag in F606W and $0.13$~mag in F814W. The uncertainty due to masked point sources within the aperture is estimated to be $0.03$ and $0.01$~mag in F606W and F814W, respectively. The uncertainty in performing the aperture photometry was estimated as above and results in an additional $0.05$ and $0.14$~mag in F606W and F814W. We list the final AB magnitudes for F606W and F814W and the respective uncertainties in Table \ref{table:mags}. \subsubsection{Ultraviolet} The UV data reduction and analysis were presented in \citet{lee2020}. The UV data consist of two filters observed with {\it Swift} UVOT (UVW1 at 2600~\AA\ and UVW2 at 1928~\AA), and {\it GALEX} NUV images.
The UVOT data include a correction for red leakage and scattered light, where the correction (14~per~cent) was comparable to the flux uncertainty. Again we list the final results in Table~\ref{table:mags}. \begin{table} \footnotesize \centering \caption{ DF44 Photometry. } \label{table:mags} \begin{tabular}{lrr} \hline Filter & m$_0$ (AB) & $\lambda_\mathrm{eff}$ (\AA) \\ \hline UVOT UVW1 & $23.40\pm0.19$ & 2516.7 \\ UVOT UVW2 & $24.97\pm0.41$ & 2010.4 \\ {\it GALEX} NUV & $23.67\pm0.35$ & 2271.1 \\ GMOS g\_G0301 & $20.02\pm0.14$ & 4687.6 \\ GMOS i\_G0302 & $19.33\pm0.18$ & 7751.6 \\ WFC3 F606W & $19.80\pm0.08$ & 5813.0 \\ WFC3 F814W & $19.32\pm0.19$ & 7972.9 \\ IRAC1 & $20.09\pm0.18$ & 35439.4 \\ IRAC2 & $20.45\pm0.24$ & 44840.9 \\ \hline \end{tabular} \end{table} \section{Stellar population modelling and fitting} \label{sec:fitting} \begin{table*} \footnotesize \centering \caption{Model parameters and priors. Notes: 1) Fraction of the SFR in a given time bin, where the SFH is a piece-wise constant function with $N$ parameters ($N-1$ free parameters). The prior is a Dirichlet function, controlled by the parameter \aD; see Section~\ref{sec:fitting_physical_model_sfh}. 2) Redshift, with a tight prior about the measured spectroscopic redshift, $z_\mathrm{spec}$. 3) The total stellar mass is the integral of the SFH, which includes the mass lost to outflows. To convert to the stellar mass \textit{remaining} at the time of observation we regenerate the spectral templates and subtract the mass lost as calculated by {\sc FSPS}. 4) The total stellar metallicity, where scaled-Solar $\alpha$-abundance is assumed. 5) Parameters for the two-component \citet{charlot2000} dust absorption model, with an adjustable attenuation curve slope from \citet{noll2009} and a UV bump based on \citet{kriek2013}. 6) Parameters for the \citet{draine2007} dust emission model. 7) The uncertainty on the spectra can be increased by a given factor, with a likelihood penalty for factors giving reduced $\chi^2{<}1$.
8) An outlier pixel model can increase the errors for individual pixels by a factor of 50, to accommodate poor matches between the data and the spectral templates. 9) A fourth-degree Chebyshev polynomial is fitted (via optimisation) to the residual of the normalised ratio between the observed spectrum and the proposed model spectrum, and is multiplied into the model prior to each likelihood calculation. This effectively accounts for the lack of flux calibration in the spectrum. } \label{tab:params} \begin{tabular}{p{0.03\linewidth} p{0.13\linewidth} p{0.36\linewidth} p{0.38\linewidth}} \hline Note & Parameter & Description & Prior \\ \hline & \multicolumn{2}{l}{SFH} \\ \hline 1 & $f_n$ & sSFR fraction & Dirichlet(\aD) \\ 2 & $z_\mathrm{obs}$ & Redshift & Uniform(min$=z_\mathrm{spec}-0.01$, max$=z_\mathrm{spec}+0.01$) \\ 3 & $\log\left(M_\ast/\mathrm{M}_{\sun}\right)$ & Total stellar mass formed & Uniform( min$ = 8$, max$ = 12$) \\ 4 & $\log\left(Z/\mathrm{Z}_{\sun}\right)$ & Stellar metallicity & Uniform(min$ = -2$, max$ = 0.19$) \\ \hline & \multicolumn{2}{l}{Dust attenuation} \\ \hline 5 & \dust & Diffuse dust optical depth (eq.~\ref{eqn:dust_diffuse_2}) & Uniform(min$ = 0$, max$ = 1.5$) \\ & ${\hat{\tau}_\mathrm{young}}/{\hat{\tau}_\mathrm{dust,~diffuse}}$ & Ratio of birth-cloud to diffuse dust optical depth (eq.~\ref{eqn:dust_diffuse_1}) & Clipped Normal( $\mu=1$, $\sigma=0.3$, min$ = 0$, max$ = 1.5$) \\ & $n_\mathrm{dust}$ & Diffuse dust attenuation index & Uniform(min$=-2$, max$=0.5$) \\ \hline & \multicolumn{2}{l}{Dust emission} \\ \hline 6 & $Q_\mathrm{PAH}$ & Percent mass fraction of PAHs in dust & Uniform(min$=0.5$, max$=7$) \\ & $U_\mathrm{min,dust}$ & Minimum starlight intensity to which the dust mass is exposed & Uniform(min$=0.1$, max$=25$) \\ & $\gamma_\mathrm{dust}$ & Mass fraction of dust in high radiation intensity & LogUniform(min$=0.001$, max$=0.15$) \\ \hline & \multicolumn{2}{l}{Noise model} \\ \hline 7 & spec\_jitter & Multiplicative spectral noise inflation
term & Uniform(min$ = 1$, max$ = 3$) \\ 8 & $f_\mathrm{outlier,~spec}$ & Fraction of spectral pixels considered outliers & Uniform(min$ = 10^{-5}$, max$ = 0.5$) \\ \hline & \multicolumn{2}{l}{Spectrophotometric calibration} \\ \hline 9 & $c_n$ & Chebyshev polynomial coefficients, $n=4$ & \\ \hline \end{tabular} \end{table*} In this section we describe how we fit the DF44 observations using the fully Bayesian inference code \prospector\footnote{\url{https://github.com/bd-j/prospector}} \citep[v1.0;][]{leja2017, johnson2019, johnson2021}. The photometry and spectroscopy are fitted simultaneously, incorporating the information on the stellar properties and SFH from both data sets. In Section~\ref{sec:fitting_physical_model} we describe the physical model, which includes a non-parametric SFH and a flexible dust attenuation law. We additionally include a white-noise and spectral outlier model, described in Section~\ref{sec:fitting_physical_model_noise_and_outliers}, and a spectrophotometric calibration model, which marginalises out the shape of the spectral continuum, in Section~\ref{sec:fitting_physical_model_speccal}. A summary of the parameters and priors of our physical model is given in Table~\ref{tab:params}. Section~\ref{sec:fitting_sampling} briefly describes the sampling method. \subsection{The physical model}\label{sec:fitting_physical_model} The physical models are based on the stellar population synthesis (SPS) models from the Flexible Stellar Population Synthesis library \citep[{\sc FSPS};][]{conroy2009, fsps} with the {\sc MESA} Isochrones and Stellar Tracks ({\sc MIST}; \citealt{choi2016, dotter2016}, based on the {\sc MESA} stellar evolution code; \citealt{paxton2011,paxton2013,paxton2015,paxton2018}), and {\sc MILES}\footnote{\url{http://miles.iac.es/}} spectral templates \citep{sanchez-blazquez2006}.
The dust is modelled with the two-component dust attenuation model from \citet{charlot2000}, which separates the dust into a component associated with birth clouds and a uniform diffuse screen. While we expect DF44 to have an old stellar population with very little dust content, we prefer to include a flexible dust model and marginalise over its parameters rather than assume a simplistic model. This avoids assuming that the dust attenuation law in DF44 is the same as that measured in the local Universe. The birth-cloud dust attenuates the emission only of stars younger than 10~Myr, \begin{equation}\label{eqn:dust_diffuse_1} \tau_\mathrm{dust,~birth}(\lambda) = \hat{\tau}_\mathrm{dust,~birth} \left( \frac{\lambda}{ \text{5500~\AA} }\right)^{-1} \end{equation} \noindent while the diffuse dust acts as a uniform screen with a variable attenuation curve \citep{noll2009}, \begin{equation}\label{eqn:dust_diffuse_2} \tau_\mathrm{dust,~diffuse}(\lambda) =\frac{\hat{\tau}_\mathrm{dust,~diffuse}}{4.05} \left( k^\prime(\lambda) + D(\lambda) \right) \left(\frac{ \lambda }{ \text{5500~\AA} }\right)^{n} \end{equation} \noindent where $n$ is the diffuse dust attenuation index, $k^\prime(\lambda)$ is the attenuation curve from \citet{calzetti2000}, and $D(\lambda)$ describes the UV bump based on \citet{kriek2013}. The diffuse dust optical depth is given a uniform prior (min$=0$, max$=1.5$). We note that the dust optical depth is related to the dust attenuation via $A_\lambda = 2.5~\log_{10}(e)~\tau_\lambda$, where $\tau_\lambda$ is the sum of the diffuse and birth-cloud dust components. We use a joint prior on the ratio of birth-cloud to diffuse dust, rather than a direct prior on the birth-cloud dust, to avoid degeneracies between the two parameters.
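A minimal numerical sketch of equation~(\ref{eqn:dust_diffuse_2}) is given below. It implements the \citet{calzetti2000} curve $k^\prime(\lambda)$ and, as a simplifying assumption of the sketch only, sets the UV bump $D(\lambda)=0$. By construction $k^\prime(\lambda)\approx 4.05$ at 5500~\AA, so the diffuse optical depth there reduces to $\hat{\tau}_\mathrm{dust,~diffuse}$ for any slope $n$:

```python
import numpy as np

def calzetti_k(lam_um):
    """Calzetti et al. (2000) attenuation curve k'(lambda), with lambda in
    microns (valid over roughly 0.12-2.2 microns)."""
    lam = np.asarray(lam_um, dtype=float)
    inv = 1.0 / lam
    blue = 2.659 * (-2.156 + 1.509 * inv - 0.198 * inv**2 + 0.011 * inv**3) + 4.05
    red = 2.659 * (-1.857 + 1.040 * inv) + 4.05
    return np.where(lam < 0.63, blue, red)

def tau_diffuse(lam_um, tau_hat, n):
    """Diffuse dust optical depth of eq. (2); the UV bump D(lambda) is set
    to zero here, a simplification of this sketch."""
    return (tau_hat / 4.05) * calzetti_k(lam_um) * (lam_um / 0.55) ** n

# At 5500 A (0.55 micron) the normalisation gives tau ~ tau_hat.
tau_v = tau_diffuse(0.55, tau_hat=0.24, n=-0.5)
```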
The prior for ${\hat{\tau}_\mathrm{young}}/{\hat{\tau}_\mathrm{dust,~diffuse}}$ is a clipped normal with $\mu=1$, $\sigma=0.3$, min$=0$, and max$=1.5$, which broadly follows results from the literature for massive galaxies while allowing some variation. Lastly, the prior on the attenuation index is uniform (min$=-2$, max$=0.5$). Dust emission is calculated assuming energy conservation, i.e., all the energy attenuated by dust is re-emitted at infrared wavelengths \citep{dacunha2008}. As our photometry is limited to $<4.4~\mu$m (rest-frame), there is no significant information in the SED constraining dust emission. We chose to include the full dust model and marginalise over the unconstrained parameters, rather than adopt a more simplistic model, in order to avoid biasing the result. The stellar metallicity is a free parameter; however, we assume a constant metallicity for all the stars and for the entire history of the galaxy. This single metallicity has a uniform prior in \logzsol\ (min$=-2$, max$=0.19$), where $\mathrm{Z}_\odot = 0.0142$ \citep{asplund2009}. In addition, we assume scaled-Solar abundances, which is a current limitation of the {\sc FSPS} models. Lastly, a \citet{chabrier2003} IMF is used. \subsubsection{Non-parametric SFH}\label{sec:fitting_physical_model_sfh} To characterise the SFH we use a non-parametric\footnote{\textit{Non-parametric} here means that the SFH has no specified functional form.} model in the form of a piece-wise constant function with $N=12$ time bins. The benefits of such a flexible SFH (relative to parametric functions, e.g., declining exponential or log-normal) have been well characterised by \citet{leja2019a} and \citet{lower2020}, among others. The time bins are defined in lookback time, spaced so that the first seven bins correspond to 0--30~Myr, 30--100~Myr, 100--500~Myr, 500~Myr--1~Gyr, 1.0--2.0~Gyr, 2.0--3.0~Gyr, and 3.0--4.0~Gyr.
There are four bins spaced logarithmically between 4~Gyr and 0.95$\times t_\mathrm{univ}$ (4.0--5.4~Gyr, 5.4--7.2~Gyr, 7.2--9.6~Gyr, and 9.6--12.6~Gyr), and the last bin covers 0.95$\times t_\mathrm{univ}$--$t_\mathrm{univ}$, where $t_\mathrm{univ}$ is the age of the Universe at the time of observation. Defining the time bins this way reflects the non-linear evolution of the SEDs: the narrower time bins at recent lookback times allow sufficient precision in capturing recent star formation, while the wider bins at larger lookback times reflect the modest evolution of older stellar populations. The last time bin is included to permit a maximally old population. Fitting SEDs to recover SFHs is an ill-posed problem, and prone to overfitting \citep[e.g.,][]{moultaka2000, moultaka2004, ocvirk2006a}. In order to recover a physically plausible SFH it is common to invoke `regularisation'. There are a number of ways that this can be done, which differ in technical detail. One approach is to impose Gaussian-like priors on the SFH and/or the age--metallicity relation \citep[e.g., as in the commonly used code {\sc steckmap};][]{ocvirk2006a, ocvirk2006b}, and another is to penalise sharp transitions in the SFH \citep[e.g., the continuity prior;][]{leja2019a}. In this work we use a third method, which is to control the degree of concentration of the fractional specific SFR (sSFR) between the time bins of the non-parametric function. While these approaches differ in detail, they all attempt to avoid nonphysical solutions by imposing constraints on the variability of the SFH over time. We adopt a Dirichlet prior with a concentration parameter, \aD, that controls whether the fractional sSFR is preferentially concentrated in a single bin ($\alpha_\mathrm{D}<1$) or distributed evenly between all bins ($\alpha_\mathrm{D}\geq1$). A detailed description of this prior is provided in \citet{leja2017}.
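The behaviour of the Dirichlet concentration parameter can be illustrated with random draws; the specific values $\alpha_\mathrm{D}=1.0$ (`extended') and $\alpha_\mathrm{D}=0.2$ (`concentrated') below are illustrative choices for this sketch, not necessarily those adopted in our fits:

```python
import numpy as np

rng = np.random.default_rng(2)
nbins = 12  # number of SFH time bins

# alpha_D >= 1 prefers spreading the sSFR fractions evenly over the bins;
# alpha_D < 1 prefers concentrating them in a few bins. The values used
# here are illustrative only.
extended = rng.dirichlet(np.full(nbins, 1.0), size=5000)
concentrated = rng.dirichlet(np.full(nbins, 0.2), size=5000)

# Each draw is a valid set of sSFR fractions (non-negative, summing to 1);
# the concentrated prior typically puts far more mass in its largest bin.
max_ext = float(np.median(extended.max(axis=1)))
max_con = float(np.median(concentrated.max(axis=1)))
```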
Without direct physical motivation to inform a choice of \aD, we consider both \aDo\ and \aDt\ as valid options, labelling them the `extended' and `concentrated' versions of the SFH prior. By comparing the results produced by these two choices of SFH prior, we explore the dependence of the results on the degree of regularisation. Fig.~\ref{fig:sfh_priors} shows random draws (thin lines) from priors with \aDo\ (extended) and \aDt\ (concentrated), with the time bins as defined above. The median and 68~per~cent credible regions (CRs) of the priors are shown with a thick line and shaded regions, respectively. The corresponding implicit prior on the mass-weighted age is shown in the bottom panel for reference. The mass-weighted stellar age, $t_\mathrm{age}$ (sometimes referred to as the mean stellar age), broadly describes the average formation time of the stars in a galaxy, in units of lookback time, and is calculated from the SFH: \begin{equation}\label{eqn:mwa} t_\mathrm{age} = \frac{\int_{0}^{t_\mathrm{obs}} t~\mathrm{SFR}(t)~\mathrm{d}t}{\int_{0}^{t_\mathrm{obs}} \mathrm{SFR}(t)~\mathrm{d}t} \end{equation} \noindent where $t$ is the lookback time and $t_\mathrm{obs}$ is the age of the Universe at the time of observation. The implicit age prior for an extended SFH is centred at half the age of the Universe with a 99.9~per~cent CR between 3.08--9.98~Gyr, and is thus a strong prior against both very old and very young ages. In comparison, the concentrated SFH prior also peaks around half the age of the Universe (although offset, given the varying widths of the time bins), but is not as tight (99.9~per~cent CR between 0.83--12.17~Gyr), such that old ages are less strongly disfavoured. \subsection{Noise and outlier models}\label{sec:fitting_physical_model_noise_and_outliers} A noise model is used to account for possible under- or over-estimates of the spectral uncertainties, where the noise is uniformly inflated (or deflated).
This effectively modifies the spectral uncertainty by a multiplicative factor, counterbalanced by a penalty in the likelihood calculation for larger uncertainties. This down-weights spectra where the quoted uncertainties are low but there is a mismatch between the spectrum and the models. The minimum uncertainty in the photometry is 7~per~cent, and as we expect this to be large enough to account for deviations from the template SEDs, we do not include a noise model for the photometry. A mixture model is used to identify and mask pixels in the spectra that have large deviations from the model. The purpose here is to avoid being overly sensitive to outlier pixels in the spectrum. This is again relevant where the S/N is large and significant residuals can result from regions where the model itself is inaccurate (due to differences in, for example, $\alpha$-enhancement). \prospector\ uses the mixture-model approach described in \citet{hogg2010b}. The spectral outlier model finds that less than 1~per~cent of the pixels are inconsistent with the model templates beyond the specified uncertainty. Note that the spectral white-noise model prefers to inflate the uncertainties by $\sim 1$--3~per~cent, which is not unexpected given that the S/N of the spectrum is high, 40--140 (median 96), and that the models are not flexible enough to precisely match the metallicity- and $\alpha$-abundance-sensitive spectral features (e.g., the Mg triplet). \subsection{Spectrophotometric calibration}\label{sec:fitting_physical_model_speccal} We rely on the calibration of the photometry to constrain the shape of the SED continuum. The DF44 spectrum is not flux-calibrated, so neither the normalisation nor the shape of the spectral continuum provides information about the stellar properties. In fact, the spectrum was flattened prior to fitting (see Section~\ref{sec:data_spectrum}).
For this reason we ignore the shape of the spectrum when computing the likelihood of the SED model (relative to the spectrum). We do this by following the routine provided through \prospector, which fits (via optimisation) a polynomial to the residual between the spectrum and the model; this polynomial is then multiplied into the model. We use an order $n=(\lambda_\mathrm{max}-\lambda_\mathrm{min})/100~$\AA\ $\sim 8$ Chebyshev polynomial, which is flexible enough to remove the broad continuum shape without over-fitting absorption features \citep[e.g.,][]{conroy2018}. We test our results using several different orders of the polynomial, and find that the results are generally insensitive to the choice of $n$ as long as $n>4$ (otherwise the dust attenuation pdf is skewed). \subsection{ Sampling } \label{sec:fitting_sampling} The complete model includes 19 free parameters (11 of which describe the shape of the SFH), which are summarised in Table~\ref{tab:params}. We follow the sampling procedure outlined in \citet{johnson2021} \citep[see also][]{tacchella2021}, using the dynamic nested sampling \citep{skilling2004, higson2019} algorithm {\sc dynesty}\footnote{\url{https://dynesty.readthedocs.io/en/latest/}} \citep{speagle2020} to efficiently sample the high-dimensional parameter space of the model and build posterior pdfs. This approach provides full posterior distributions of the model parameters together with their degeneracies. A useful primer on Bayesian methods can be found in \citet{vandeschoot2021}. Throughout this work we report uncertainties as the 68~per~cent CRs (corresponding to the 16\thh--84\thh\ percentile range) of the posterior pdfs, as the majority of the distributions are non-symmetric.
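The polynomial continuum-matching of Section~\ref{sec:fitting_physical_model_speccal} can be sketched with numpy's Chebyshev routines; the spectra below are synthetic, and the order $n=8$ follows the prescription given there:

```python
import numpy as np
from numpy.polynomial import chebyshev

rng = np.random.default_rng(3)
wave = np.linspace(4578.0, 5337.0, 1200)
x = 2.0 * (wave - wave.min()) / np.ptp(wave) - 1.0   # map to [-1, 1]

# Toy model spectrum (one absorption line) and an unknown smooth
# instrument response distorting the observed continuum.
model = 1.0 - 0.2 * np.exp(-0.5 * ((wave - 4861.0) / 5.0) ** 2)
response = 1.0 + 0.1 * np.sin(2.0 * x)
observed = model * response + 0.005 * rng.standard_normal(wave.size)

# Fit an order-8 Chebyshev polynomial to the observed/model ratio and
# multiply it onto the model before comparing with the data.
coef = chebyshev.chebfit(x, observed / model, deg=8)
model_calibrated = model * chebyshev.chebval(x, coef)
resid = observed - model_calibrated
```

The low-order polynomial absorbs the smooth response while leaving the narrow absorption feature to be matched by the physical model.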
\subsection{Simultaneously fitting the photometry and spectroscopy}\label{sec:fitting_physical_model_both} In fitting both the photometry and spectroscopy we consider the log-likelihood of the model, conditioned on the observation, to be the sum of the two individual likelihood functions: \begin{equation} \ln \mathcal{L}( d_s, d_p | \theta, \phi, \alpha ) = \ln \mathcal{L}( d_s | \theta, \phi, \alpha ) + \ln \mathcal{L}( d_p | \theta ) \end{equation} \noindent where $d_s$ is the spectroscopic data, $d_p$ is the photometric data, the parameters $\theta$ describe the physical model used in \prospector, the parameters $\alpha$ describe the spectroscopic noise model (Section~\ref{sec:fitting_physical_model_noise_and_outliers}), and the parameters $\phi$ include the spectro-photometric calibration (Section~\ref{sec:fitting_physical_model_speccal}). The parameters of the physical model are summarised in Table~\ref{tab:params}. We apply no relative weighting between fitting the spectroscopy and photometry in assessing the match between the observations and SEDs. The basic likelihood calculation is effectively a $\chi^2$ calculation for both the spectral and the photometric data. We alter the likelihood calculation for the spectroscopy to include the noise model and outlier model described in Section~\ref{sec:fitting_physical_model_noise_and_outliers}, following the procedure outlined in Appendix~D of \citet{johnson2021}. \section{Results}\label{sec:results} \begin{table} \footnotesize \centering \caption{Summary of results. Time-scales $t_x$ correspond to the time at which $x$ percent of stellar mass had formed, in units of Gyr since the Big Bang. Given that our SFH is a step function, we interpolate to estimate $t_{x}$. We provide the 16\thh, 50\thh, and 84\thh\ percentiles of the posterior (i.e., the 68~per~cent CR) as an estimate of the uncertainty, although this is likely an underestimate given the width of the time bins in our SFH. 
Given the observed redshift of DF44 and the adopted cosmology, the age of the Universe is 13.47~Gyr. Time-scales in units of lookback time are therefore $t_\mathrm{Lookback}=13.47~\mathrm{Gyr}-t_x$. The fractional SFH within the time bins of the non-parametric model (i.e., not interpolated) is listed in Table~\ref{tab:results_fsfh} in Appendix~\ref{app:fsfh_for_comparison}. } \label{tab:results} \begin{tabular}{ccccccc} \hline Time-scale & \multicolumn{3}{c}{Extended} & \multicolumn{3}{c}{ Concentrated} \\ (Gyr) & \multicolumn{3}{c}{SFH prior} & \multicolumn{3}{c}{ SFH prior } \\ \hline & 16 & 50 & 84 & 16 & 50 & 84 \\ \hline $t_{10}$ & 0.60 & 0.87 & 1.12 & 0.068 & 0.068 & 0.069 \\ $t_{20}$ & 1.18 & 1.39 & 1.64 & 0.135 & 0.136 & 0.137 \\ $t_{30}$ & 1.63 & 1.79 & 2.33 & 0.203 & 0.204 & 0.206 \\ $t_{40}$ & 2.07 & 2.33 & 2.92 & 0.271 & 0.272 & 0.274 \\ $t_{50}$ & 2.55 & 2.85 & 3.53 & 0.339 & 0.340 & 0.343 \\ $t_{60}$ & 2.93 & 3.41 & 4.17 & 0.406 & 0.408 & 0.411 \\ $t_{70}$ & 3.37 & 4.07 & 4.92 & 0.474 & 0.475 & 0.480 \\ $t_{80}$ & 3.86 & 5.39 & 5.70 & 0.542 & 0.543 & 0.549 \\ $t_{90}$ & 5.73 & 6.32 & 7.01 & 0.610 & 0.611 & 0.617 \\ $t_{95}$ & 6.25 & 7.57 & 7.93 & 0.643 & 0.645 & 0.652 \\ \hline \end{tabular} \end{table} Given the sensitivity of the modelled ages of old stellar populations to both the flexibility of the assumed SFH and the choice of SFH prior \citep[e.g.,][]{leja2017, leja2019a}, we present the results for two `extremes' of the SFH prior: i) an `extended' SFH, preferring an equal distribution of the fractional sSFR between the time bins (\aDo), and ii) a `concentrated' SFH, preferring an unequal distribution of the fractional sSFR between time bins (\aDt). The difference between these priors is discussed in Section~\ref{sec:fitting_physical_model_sfh}. In assuming the SFH is extended, there is a preference for ages of around half the age of the Universe and against old ages ($\gtrsim 10$~Gyr; see Fig.~\ref{fig:sfh_priors}).
However, the results of \citet{villaume2022} suggest that DF44 formed its stellar population early and quenched rapidly shortly thereafter, as inferred from its inverted stellar population gradients and its low iron metallicity for its mass. The `concentrated' SFH prior has a higher likelihood for such an SFH. Moreover, the concentrated SFH prior has an overall broader implicit prior on the mass-weighted age, as there is no preference for where the fractional sSFR is concentrated between the time bins. \vspace{0.2cm} The results from the full spectral modelling of DF44 are shown in Figures~\ref{fig:summary_fit}--\ref{fig:summary_fit_corner}, where the fits for the extended SFH prior are shown in red, and those for the concentrated SFH prior in blue. In Fig.~\ref{fig:summary_fit} the observations are shown with the `bestfit' models (the maximum a-posteriori model, i.e., the sample with the highest posterior probability) and the 68~per~cent CR of 500 random draws from the posteriors. Overall, the fits to the photometry are similar between the two SFH priors; the extended SFH model has marginally smaller residuals at NUV wavelengths. Similarly, the bestfit model spectra (multiplied by the spectrophotometric calibration polynomial) are nearly identical when compared to the spectroscopy, with differences only at the $<1$~per~cent level. Given the degeneracy between age, dust, and metallicity, these subtle differences lead to the differences in the predicted stellar population parameters. \subsection{Star formation history and stellar population parameters at z=0} \label{sec:results_sfh} Fig.~\ref{fig:summary_fit_sfhs} shows the median (solid line) and 68~per~cent CR (shaded) of the posterior pdfs for the sSFR, and the corresponding SFR and mass-assembly history. Similarly, the median (dashed line) and 68~per~cent CRs (hatched) of the explicit and implicit priors are shown (see also Fig.~\ref{fig:sfh_priors}).
The SFHs of the bestfit models (shown in Fig.~\ref{fig:summary_fit}) are indicated with open crosses. Dotted lines are drawn at the 50\thh, 70\thh, and 90\thh\ per~cent levels of the cumulative stellar mass for reference. In Table~\ref{tab:results} we summarise the two SFH results by listing the times at which different percentiles of the final stellar mass were in place. The SFH within the time bins of the non-parametric model is provided in Appendix~\ref{app:fsfh_for_comparison}. The SFHs determined from both priors suggest that DF44 formed early, having 90~per~cent of its stellar mass in place at least $\sim 7.2$~Gyr ago ($z\sim 0.9$). Using the extended SFH prior, we find that it took $\sim3.5$~Gyr for DF44 to assemble from 50 to 90~per~cent of its mass, suggesting a relatively fast transition between the star-forming and quiescent states. The SFH determined with the concentrated prior is extreme in that more than 90~per~cent of the mass formed within the earliest time bin, i.e., $\sim12.8$~Gyr ago ($z\sim 8$). During the last 5~Gyr the two results are otherwise similar, with low levels of star formation until the last 100~Myr.\footnote{Additional testing of the prior-sensitivity of the SFH showed that using $\alpha_\mathrm{D} = 0.5$ (mildly concentrated) produced parameter values between the results from \aDt\ and \aDo, as expected. The mass-weighted age was found to be 11.9~Gyr, which indicates that the very old age is not overly sensitive to the choice of the \aD\ value.} {\vspace{0.2cm}} A curious feature of both SFHs is the rise in SFR, by 1.8--2.4~dex, within the last 100~Myr (corresponding to the two most recent time bins, shaded grey in Fig.~\ref{fig:summary_fit_sfhs}). Although residual star formation appears to be common for massive early-type galaxies, where $\sim$0.5 per~cent of their mass formed within the last 2~Gyr, the fraction decreases at lower stellar masses, consistent with galaxy `downsizing' \citep[e.g.,][]{salvador-rusinol2020}.
The recent rise in DF44's SFH accounts for $\lesssim 1$~per~cent of the total stellar mass, assuming either SFH prior. While DF44 shows no indication of recent star formation from the photometry, and similarly lacks emission lines in the spectrum, it is possible that $\mathrm{H}\alpha$ emission (perhaps related to star formation ignited by a late infall into the Coma cluster) recently stopped. This is perhaps unlikely, however, given the lack of blue regions within the galaxy. \citet{lee2020} concluded, based on the difference in NUV and UVW2 bands, that the light traces older stars (on $\sim$~Gyr time-scales, as opposed to young stars which evolve on the order of $\sim$~Myr time-scales). The `recent burst' is not a consequence of an artefact in the KCWI spectrum; the same feature is apparent when fitting the MaNGA data from \citet{gu2018b}. Rather, we expect this recent star formation to be an artefact of the stellar models not being flexible enough to account for the contribution of blue horizontal branch (HB) stars (discussed in Appendix~\ref{app:sfh_biases_bhb}) or non-solar Mg-abundances. We none the less test the sensitivity of the models to the presence of a very young stellar population by re-defining the time bins of our SFH, only allowing for star formation older than 1~Gyr. This places a strong prior against recent star formation (SF) to counteract the inability of the SPS models to correctly model the influence of the blue HB stars. In excluding star formation younger than 1~Gyr, the models are better able to recover the shape of the SED, particularly in the NUV, but are marginally worse in matching the spectrum. With this revised model we recover SFHs equivalent to that of our primary results (at times $>1$~Gyr), with lower (though statistically consistent) dust and metallicity, and slightly higher stellar mass. Interestingly, with the extended SFH prior, the revised age estimate is $\sim$2.4~Gyr older. 
With the concentrated SFH prior, the revised age estimate is unchanged from that of our main result. As such, we conclude that the presence of the `recent burst' of SF does not affect our conclusion that DF44 formed and quenched very early in the history of the Universe. \vspace{0.2cm} Fig.~\ref{fig:summary_fit_corner} shows the posteriors for the normalisation of the diffuse dust attenuation curve, stellar metallicity, stellar mass, and mass-weighted age. The parameters marked with an asterisk are not directly fit in our physical model, but derived from the posterior distributions. We calculate the dust extinction following equations~(\ref{eqn:dust_diffuse_1}) and (\ref{eqn:dust_diffuse_2}) in the $V$-band, where we use $\lambda=5500~$\AA. We note that `total stellar mass formed' is a free parameter in our model, which we convert to `stellar mass' by subtracting the mass lost throughout the SFH, as calculated by {\sc FSPS}. The median and uncertainties of the marginalised posteriors for the extended (concentrated) SFH priors are: \begin{enumerate}\centering \item[] $\hat{\tau}_\mathrm{dust,~diffuse}={0.24}_{-0.05}^{+0.03}$ ~$\left({0.20}_{-0.03}^{+0.04}\right)$, \item[] *$A_\mathrm{V}={0.51}_{-0.13}^{+0.11}$ ~$\left({0.45}_{-0.08}^{+0.10}\right)$, \item[] $\log(Z_\ast/\mathrm{Z}_\odot)={-1.18}_{-0.01}^{+0.01}$ ~$\left({-1.27}_{-0.02}^{+0.03}\right)$, \item[] *$\log(M_\ast/\mathrm{M}_\odot)={8.23}_{-0.06}^{+0.02}$ ~$\left({8.33}_{-0.03}^{+0.03}\right)$, \item[] *$t_\mathrm{age}~/~ \mathrm{Gyr}={10.20}_{-0.48}^{+0.34}$ ~$\left({13.06}_{-0.04}^{+0.02}\right)$, \end{enumerate} as labelled above the one-dimensional histograms. In both cases DF44 has a very old, modestly dusty, and metal-poor stellar population. 
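The asymmetric uncertainties quoted above are the 16\thh/84\thh\ percentile offsets about the posterior median. As a minimal illustration of this convention (with synthetic placeholder samples, not our actual \prospector\ chains):

```python
import numpy as np

def summarize_posterior(samples):
    """Return (median, lower error, upper error) for the 68 per cent
    credible region of 1D posterior samples, as median_{-low}^{+up}."""
    lo, med, hi = np.percentile(samples, [16, 50, 84])
    return med, med - lo, hi - med

# Synthetic example: a Gaussian posterior recovers its own 1-sigma width.
rng = np.random.default_rng(0)
med, low, up = summarize_posterior(rng.normal(8.23, 0.05, 50_000))
```

For a Gaussian posterior both errors approach the 1$\sigma$ width; for skewed posteriors (e.g., $t_\mathrm{age}$ piling up against the age of the Universe under the concentrated prior) the two differ, as in the values listed above.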
\vspace{0.2cm} Contrary to our expectation that old \citep[e.g.,][]{peroux2020} and metal-poor \citep[e.g.,][]{galliano2018} populations are devoid of dust \citep[see also][]{barbosa2020}, DF44 appears to have a non-negligible amount: the normalisation of the diffuse-dust attenuation curve is $\hat{\tau}_\mathrm{dust,\ diffuse}\gtrsim 0.2$ and $A_\mathrm{V}\gtrsim0.5$. The origin of such dust is not clear; however, Buzzo et al. (2022, submitted) recently measured similar extinction values from optical to mid-infrared photometry for a sample of quiescent UDGs. The overall shape of the SED constrains the dust content; however, there are degeneracies with both metallicity and age. If we instead fix $\hat{\tau}_\mathrm{dust,\ diffuse}=0$ and refit DF44 (with an extended SFH prior), the posterior pdfs are statistically consistent with those of our main result, although we note that the age increases (as expected) by $\sim0.23$~Gyr. In Appendix~\ref{app:sfh_biases_need_both} we discuss the fit to just the photometry, which prefers an even dustier solution ($\hat{\tau}_\mathrm{dust,\ diffuse}\sim 0.36$ and $A_\mathrm{V}\sim0.8$), although the photometry provides no direct constraint on the metallicity, and little constraint on the age. While the spectroscopy breaks the degeneracy between dust and metallicity, the degeneracy with age remains; adding either more dust or a stellar population older than $\sim$3~Gyr lowers the flux at wavelengths $<5000$~\AA\ (see Appendix~\ref{app:old_v_dust}). Additional observations in the mid-infrared would provide better constraints on the dust content, as age and dust affect the flux in opposite directions in this wavelength range. Other than \dust, the posteriors of the dust model parameters largely reflect their priors -- which is to be expected given the lack of constraining data. 
None the less, to check that our results do not depend on the particular dust model, we also fit the data with the dust model of \citet{gordon2003} based on the SMC Bar (thought to have similar dust properties to dwarf elliptical galaxies, i.e., without a UV bump in the extinction curve), and find no change to our result. A degeneracy between the dust normalisation and stellar mass can be seen in the joint posterior in Fig.~\ref{fig:summary_fit_corner}, where an increase in dust suggests a higher stellar mass. As a point of comparison, a solid black line indicates the estimated stellar mass from \citet[][]{vandokkum2016}, and a dotted black line indicates that measured by \citet[][]{saifollahi2022}, with uncertainties reflecting the systematics of the model fitting. Both of our fits produce stellar masses lower than, and statistically inconsistent (within their 68~per~cent CRs) with, the \citet{vandokkum2016} value, but consistent with that of \citet{saifollahi2022}. Given that the photometry included in our fits is measured within an aperture, and thus does not include all of the light of the galaxy, it is not unexpected that the stellar mass we recover underestimates that from the literature. \vspace{0.2cm} There is a $\sim0.1$~dex difference in \logzsol\ between the fits with an extended or concentrated SFH prior, where the sense of the metallicity difference is consistent with that of the age difference ($\sim2.9$~Gyr) with respect to the age--metallicity degeneracy. This indicates that we are not able to fully break the age--metallicity degeneracy with the data at hand. While in Fig.~\ref{fig:summary_fit_corner} we show the stellar `isochrone' metallicity measured by \citet{villaume2022} as a black dashed line for comparison, there are several caveats to this comparison, which are discussed in the following section. At this point the dichotomy of DF44 being `old' or `very old' is subject to the choice of SFH prior. 
We remind the reader that the extended SFH prior behaves analogously to regularisation methods used throughout the literature. While the concentrated SFH prior provides more flexibility to better recover the short and early star formation expected for DF44, it is not necessarily a `good' prior; we provide no physical information for the shape of the SFH. We simply tune the prior such that it prefers to distribute the SF within fewer time bins (see Section~\ref{sec:fitting_physical_model_sfh}). \vspace{0.2cm} This prior-dependency problem is exacerbated with less complete or lower S/N data sets. As a brief example, in Fig.~\ref{fig:lit_compare_df44} we compare the stellar metallicities and ages determined through fitting both the spectrum and photometry (diamond) with those fitted to only the photometry (circle) for the extended SFH prior (points marked with an `E'). While the NUV--NIR photometry provides information on the dust in DF44 (see Appendix~\ref{app:sfh_biases}), the age estimate is more heavily weighted by the SFH prior than are the full spectrum fitting results. Accordingly, the photometry-only fit gives a median age $\sim 3.4$~Gyr younger than the fit to the spectrum and photometry together.\footnote{ If instead of the non-parametric model we assume the SFH follows a delayed exponential form (a common parametric model adopted within the literature), we find similar results. With a logarithmically uniform prior on the $e$-folding time, $\tau$, and a linearly uniform prior for the delay time, $t_\mathrm{age}$, the implicit age prior has a complex form with 16\thh, 50\thh, and 84\thh\ percentiles of 1~Gyr, 3.8~Gyr, and 8.4~Gyr respectively -- preferring younger ages than the extended SFH prior results. The implicit age prior skews even younger if instead $\tau$ is linearly sampled. Fitting the photometry of DF44 suggests the age is $\sim8.2$~Gyr, and slightly less dusty than using the extended SFH model. 
Fitting both the photometry and spectroscopy suggests the age is $\sim13.6$~Gyr, and slightly less dusty and more metal poor than our main result. We note that the photometry-only results with the delayed parametric model appear particularly sensitive to the S/N -- if we inflate the photometric uncertainties by a factor of two, the age posterior decreases by $\sim2$~Gyr. The same is not true when using the non-parametric models. } \subsection{Which SFH prior is preferred?} \label{ sec:results_compare_lit} There is little statistical evidence to decide whether the results from either SFH prior better reflect the `true' properties (or SFH) of DF44.\footnote{Comparing the Bayesian evidence of the two fits (as derived from the nested sampling described in Section~\ref{sec:fitting_sampling}) we find a strong preference (according to the Jeffreys scale, see for example \citealt{kass1995}) for the concentrated SFH prior ($\ln Z_\mathrm{concentrated} = 62590$ is much larger than $\ln Z_\mathrm{extended}=62542$), where here $Z$ is the Bayesian evidence. However, this likely reflects the fact that the old age of DF44 is disfavoured by the extended SFH prior (see Section~\ref{sec:fitting_physical_model_sfh}) rather than a preference of the data itself. } The distributions of SED models shown in Fig.~\ref{fig:summary_fit} are similar between the fits with each prior, and the models have similar residuals. There are subtle differences, however, particularly around the H$\beta$ and Mg~{\sc II} features, where the concentrated SFH gives a (statistically) lower $\chi^2$. The H$\beta$ line is sensitive to recent star formation (and to HB stars, as discussed in Section~\ref{sec:results_sfh}), while Mg~{\sc II} is sensitive to the $\alpha$-abundance of the stellar population. The FSPS models that we use are currently limited to fixed solar $\alpha$-abundance. 
However, \citet{villaume2022} found that DF44 has [Mg/Fe]$=0.11^{+0.06}_{-0.04}$ through fitting the same spectrum of DF44 as this work with the full-spectrum fitting code \alf\ \citep[][]{conroy2018}, which includes response functions to measure non-solar chemical abundance variations. Given the relationship between both features and the age of the stellar population, this points to the need to include more complex stellar population variables, e.g., $\alpha$-abundance, in models in order to break this degeneracy.\footnote{While {\sc FSPS} does include an option to set the fraction of blue HB stars, for technical reasons we cannot include it as a free parameter in our models.} Fig.~\ref{fig:lit_compare_df44} compares the stellar metallicities and ages measured for DF44 by this work, \citet{villaume2022}, and \citet{gu2018b}.\footnote{The stellar `isochrone' metallicity (distinct from that which includes the response function for individual elements) [$Z$/H] values from \citet{villaume2022} and \citet{gu2018b} were provided via private communication.\label{footnote:alf_values}} Both previous studies fitted rest-frame optical spectra of DF44 with the full-spectrum fitting code \alf. We caution that there are fundamental differences between \alf\ and \prospector\ which make their results only broadly comparable: e.g., the inclusion of non-solar abundance patterns (as mentioned above), and the fact that \alf\ fits a single-age stellar component (with a uniform prior with a minimum age of 1~Gyr) rather than an SFH. That said, the luminosity- and mass-weighted ages should be comparable given that DF44 is old. \citet{villaume2022} fitted the same KCWI spectrum as this work, while \citet{gu2018b} fitted a MaNGA spectrum which covers a broader wavelength range (including several additional age diagnostics: H$\delta$, H$\gamma$, Ca~II H and K, and the $G$-band). The MaNGA spectrum has $\mathrm{S/N}\sim 8$~\AA$^{-1}$, however, which is only $\sim 12$~per~cent of the S/N of the KCWI spectrum. 
Despite differences in data, the two studies both found the age of DF44 to be $\sim$10.5~Gyr, although the stellar metallicities are formally discrepant.\footnote{\citet{villaume2022} considered the presence of a second young population (aged 1--3~Gyr), which lowers their age estimate by 0.6~Gyr but is consistent with their fiducial fit.} Notably, \citet{gu2018b} also considered the $g-r$ colour of DF44 from Dragonfly imaging, and re-weighted their posteriors, which considerably lowers their metallicity value (and is then consistent with \citealt{villaume2022} owing to its large uncertainty). Considering that we fit DF44 in a completely independent way compared to these studies, it is at least encouraging that the results are fairly similar. Significant variation among age and metallicity measurements for the same object between different studies is not unique to DF44. In Appendix~\ref{app:sfh_biases_compare_lit} we outline two additional examples and discuss the reasons behind their differences. The comparison shown in Fig.~\ref{fig:lit_compare_df44} demonstrates the difficulty in measuring the stellar properties of old stellar populations, related to limitations of both the data and the modelling. As discussed in the previous section, a solution is within reach, as the inclusion of a variable $\alpha$-abundance or the addition of mid-IR photometry would help to break degeneracies between the stellar population properties. We conclude that DF44 has an age of $\sim$10--13~Gyr. Without clear statistical evidence to favour one SFH model over the other, throughout the remainder of this work we present both sets of results. In the next section, we discuss the implications of such a large galaxy having formed the bulk of its stellar mass very early. 
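The Bayesian-evidence comparison from the footnote in this subsection can be made concrete with a short sketch; the classification bands follow \citet{kass1995}, while the helper function itself is illustrative and not part of our pipeline:

```python
def jeffreys_strength(ln_z_a, ln_z_b):
    """Classify the evidence for model A over model B via 2*ln(Bayes factor),
    using the bands of Kass & Raftery (1995)."""
    two_ln_bf = 2.0 * (ln_z_a - ln_z_b)
    if two_ln_bf < 2.0:
        return "not worth more than a bare mention"
    if two_ln_bf < 6.0:
        return "positive"
    if two_ln_bf < 10.0:
        return "strong"
    return "very strong"

# ln-evidences quoted in the text for the two SFH priors:
print(jeffreys_strength(62590.0, 62542.0))   # 2 ln(BF) = 96 -> "very strong"
```

Taken at face value the difference is decisive; as stressed above, however, it more plausibly measures how strongly the extended prior disfavours very old ages than a genuine preference of the data.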
\section{Discussion}\label{sec:discussion} In this work, we sought to measure the detailed SFH of DF44 as a means to distinguish between UDG formation scenarios, which predict a variety of quenching times (i.e., SFHs). The consistent narrative among theoretical simulations is that UDGs are contiguous with the canonical dwarf population. However, \citet{villaume2022} established that DF44 is dissimilar to canonical dwarf galaxies with respect to its stellar population gradients, stellar metallicity, and kinematics. In measuring the SFH of DF44 we can further test this scenario. Previous analyses of DF44 found that its stellar population is old, having an age of $\sim 10$~Gyr \citep[see Fig.~\ref{fig:lit_compare_df44};][]{gu2018b, villaume2022}. In this work we have shown that DF44 formed the majority of its mass early, where we consider the galaxy `quenched' after it forms $\sim90$~per~cent of its mass. Using an extended SFH prior, we obtain a lower limit on the quenching epoch of $z \sim 0.9$ ($\sim 6.3$~Gyr after the Big Bang). Alternatively, using a concentrated SFH prior (motivated by the results of \citealt{villaume2022}), we recover an extremely early quenching epoch of $z\sim8$ ($\sim0.6$~Gyr after the Big Bang). In either case we find that DF44 is old, the distinction being that a concentrated SFH prior suggests that it is {\it very} old. Without clear statistical evidence to favour one prior over the other (see Section~\ref{ sec:results_compare_lit}) we instead focus on providing a qualitative comparison of the implications of the two results. \vspace{0.2cm} For either of our two results, the bulk formation of DF44 occurs during an epoch where the evolution of galaxies in dwarf-scale dark matter haloes ($\lesssim 10^{11}~\mathrm{M}_\odot$) significantly differs from that of galaxies in more massive haloes. 
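The correspondence between quenching redshift and cosmic time quoted above ($z\sim0.9$ at $\sim6.3$~Gyr and $z\sim8$ at $\sim0.6$~Gyr after the Big Bang) assumes a flat $\Lambda$CDM cosmology. A minimal numerical check (the Planck-like parameter values are assumptions for illustration, not necessarily those adopted in the paper):

```python
import numpy as np

def age_at_z(z, H0=67.7, Om0=0.31, zmax=2000.0, n=200_000):
    """Cosmic age (Gyr) at redshift z in flat LambdaCDM, from
    t(z) = (1/H0) * int_z^inf dz' / [(1+z') E(z')], with
    E(z) = sqrt(Om0 (1+z)^3 + 1 - Om0); radiation is neglected."""
    hubble_time_gyr = 977.8 / H0      # 1/H0 in Gyr for H0 in km/s/Mpc
    zp = np.linspace(z, zmax, n)
    integrand = 1.0 / ((1.0 + zp) * np.sqrt(Om0 * (1.0 + zp) ** 3 + 1.0 - Om0))
    dz = zp[1] - zp[0]
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dz  # trapezoid rule
    return hubble_time_gyr * integral

# age_at_z(0.9) -> ~6.3 Gyr and age_at_z(8.0) -> ~0.6 Gyr, matching the text.
```

Truncating the integral at a large `zmax` rather than infinity introduces a negligible error ($\ll 1$~Myr) for the redshifts of interest here.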
The mass assembly histories expected for average galaxies with dark matter halo masses between $10^{11}$--$10^{13}~\mathrm{M}_\odot$ are shown in Fig.~\ref{fig:mass_assembly}, from the empirical model of \citet{behroozi2019b}.\footnote{The population averages likely overestimate the star formation time-scales of galaxies in dense environments \citep[e.g.,][]{thomas2010}. Indeed, dEs and UDGs in clusters are found to be old \citep[e.g.,][]{weisz2011a,ferremateu2018,ruizlara2018}, and satellite dwarfs in simulations appear to form much earlier than central dwarfs \citep{digby2019, garrisonkimmel2019, joshi2021}. However, DF44 is likely only on its first infall into the Coma cluster (discussed further below).} The mass assembly history of DF44 (as shown in Fig.~\ref{fig:summary_fit_sfhs}) is shown for comparison. While the current stellar mass of DF44 falls within the range expected for the $z= 0$ canonical central dwarf population, and its halo mass is in the neighbourhood of $\sim10^{11}~\mathrm{M}_\odot$ (e.g., \citealt{vandokkum2016, vandokkum2019, wasserman2019}; see also \citealt{bogdan2020}), its mass assembly history is not necessarily compatible with this population. In fact, the mass of DF44 at $z\sim 8$ was typical for galaxies destined to become brightest cluster galaxies (BCGs) -- however, its mass growth was halted. This provides our first evidence that DF44 may not originate among the canonical field dwarf population. We now further investigate our results in the context of the predictions of UDG formation scenarios from theoretical work. \subsection{DF44 in tension with UDG formation scenarios} Many scenarios have come out of cosmological simulations and SAMs which satisfy the size and surface brightness constraints of the observed UDG population. 
As these scenarios all work under the same constraints of conventional galaxy evolution physics, they follow the same dichotomy: whether the significant size growth necessary to transform a canonical dwarf galaxy into a UDG occurs pre- or post-infall (which is to say, whether the cluster environment is necessary to the process) is tied to whether the galaxies are late or early infallers into that environment. Nearly\footnote{The exception is \citet{wright2021} in which a small fraction of isolated UDGs which experience major mergers are quenched at $z=0$. This scenario is discussed further below.} all of these models require infall into the dense, hot environment of a cluster to quench the UDGs, such that the quenching time is directly linked to the infall time. This dichotomy is demonstrated in Fig.~\ref{fig:lit_compare_size} where we show the relation between the effective radius and the quenching time for three different scenarios: i) what is typically expected under `normal' conditions (grey shaded regions)\footnote{The expected size growth was determined from the stellar mass assembly histories of \citet{behroozi2019b} and the size--mass relation of \citet{sales2020}.}, ii) the predictions for the `internal feedback' scenario from the FIRE simulations \citep[gold outlined circles;][]{chan2018}, and iii) the predictions from the RomulusC simulations \citep[magenta outlined circles;][]{tremmel2020}\footnote{These values were taken from Figures 2 and 11 of \citet{tremmel2020}.}. The symbols are additionally colour-coded by stellar mass. We show our quenching time results for DF44 in this figure with the size measured by \citet[][]{vandokkum2017} based on the deepest imaging available ($R_\mathrm{eff}=4.7\pm0.2$~kpc; with black outlines). However, \citet{saifollahi2022} obtained a smaller size ($R_\mathrm{eff}=3.83\pm0.4$~kpc; grey point) based on the same data. 
A hatched region covers the parameter space of the various quenching times from our results, and the two size measurements. The FIRE UDGs largely follow the expected size growth trend, the distinction being that their internal feedback causes bursts of SF which puff up the sizes of the galaxies prior to quenching, placing them at the top end of the size distribution. While \citet{chan2018} predicted that there are objects with quenching times as early as measured for DF44, these objects barely reach the nominal size of UDGs ($R_\mathrm{eff}<2$~kpc). Indeed, they could not reach the size of DF44 without significantly more time to form stars, which would then violate the stellar mass/surface brightness constraint. A formation scenario that can explain both large and early-quenched UDGs is tidal heating, where the expected size--quenching time trend is the exact opposite of the FIRE scenario, i.e., the earliest infallers/quenchers will be the largest because they have spent the longest time expanding due to the cluster environment. We see this effect demonstrated in the RomulusC points.\footnote{We note that other simulations and SAMs which invoke tidal heating \citep[e.g.,][]{carleton2019, jiang2019a, liao2019, sales2020} with slightly different prescriptions (i.e., cuspy vs. cored dark matter haloes, how early satellite infall begins) could change the exact predictions of the sizes of UDGs. We note that RomulusC appears to over-predict the sizes of cluster dwarfs by a factor of $\sim 2$ (compared to observations from \citealt{eigenthaler2018}). } While \citet{tremmel2020} show that some objects reach the nominal sizes of UDGs prior to infall into a cluster, reaching the large end of UDG sizes requires the additional effect of tidal heating from the cluster environment. With the extended SFH prior, the quenching time of DF44 is reasonably consistent with the tidally heated RomulusC UDGs. 
Certainly tidal heating is happening at some level to some galaxies in clusters; evidence of this has been observed among proto-UDGs in clusters \citep[e.g.,][]{grishin2021}. \citet{carleton2019} interpreted the radial alignment of UDGs in Coma (\citealt{yagi2016}, which includes DF44) as evidence that these galaxies have been tidally influenced. While we cannot discount the tidal heating scenario in explaining the size and quenching times of DF44, this scenario conflicts with other properties of DF44. Measurements of the kinematics and dynamics of DF44 indicate that it has not been in the cluster environment long enough to be impacted by tidal effects. Its position in phase-space points to a late infall into the Coma cluster \citep[$< 2$ Gyr ago;][]{alabi2018}. Moreover, DF44 appears to be part of a dynamically cold group that would have surely been disrupted if tidal heating had taken place \citep{vandokkum2019}, and there is no distortion in its ellipticity that would be a marker of tidal heating \citep{mowla2017}. Together with the above points, the SFH provides strong evidence that DF44 quenched prior to cluster infall. This would suggest that its progenitor was larger than a dwarf galaxy, or that a process unrelated to environment caused an expansion. This interpretation is consistent with the conclusion of \citet{saifollahi2022}, who find that the elevated GC populations at a given stellar mass ($N_\mathrm{GC}/M_\ast$) of large UDGs (including DF44) are inconsistent with scenarios which explain the sizes of UDGs via redistributing stars to larger radii (i.e., tidal interactions, stellar feedback, or high spin). \citet{villaume2022} similarly ruled out such scenarios given DF44's `inside-out' stellar population gradients. Therefore, {\it how} DF44 quenched is the crucial question to answer to understand its origins. 
From simulations, only \citet[][based on {\sc Romulus25}; \citealt{tremmel2020}]{wright2021} have proposed a scenario, `early major mergers'\footnote{We note that \citet{saifollahi2022} refer to this scenario as `lack of late mergers'.}, in which UDGs can form and quench\footnote{Less than 5~per~cent of the simulated UDGs with masses $M_\ast>10^8~\mathrm{M}_\odot$ are quenched, in the sense that they are gas poor. This population is dominated by galaxies that have had an interaction with a more massive halo and/or AGN activity.} without relying on environmental quenching mechanisms. The UDGs in {\sc Romulus25} had their star-forming gas and star formation moved outwards from the central cores of the galaxies to larger radii by major mergers $\sim$8--11~Gyr ago. For most of the simulated UDGs, star formation continued in the galaxy outskirts, while the central core passively dimmed, leading to negative radial age gradients. Considering that DF44 quenched $\gtrsim7$~Gyr ago, this may suggest that a major merger is responsible for (or at least concurrent with) its quenching -- and that there would be a flat age gradient. The central ($<0.5$~kpc) SFH predicted for {\sc Romulus25} UDGs is broadly consistent with DF44's SFH when assuming an extended SFH prior, although not when assuming a concentrated prior (which quenches much earlier). \citet{villaume2022} measured a flat-to-negative [Mg/Fe] gradient out to $\sim$2.5~kpc, which taken as a proxy for an age gradient is not strictly inconsistent with this scenario.\footnote{While \citet{villaume2022} measured a flat age gradient, they note that given the limitations of modelling granular differences in old stellar populations, the [Mg/Fe] gradient is more sensitive to age variations.} Further work is needed in order to establish whether DF44 is the product of an early major merger. 
For instance, the mechanism that quenches $\lesssim5$~per~cent of the {\sc Romulus25} UDGs is not fully described, providing no point of comparison with DF44's SFH or stellar population gradients. Moreover, when this quenching occurs, or whether the galaxies remain quenched, is unclear. While \citet{wright2021} and \citet{vannest2022} explored the predictions of `early major mergers' in differentiating average UDGs and non-UDGs, the fact that DF44 is a rare case warrants more detailed comparisons. \vspace{0.2cm} The results of this work show that DF44 has been shaped by some rare galaxy evolution process, no matter whether the `true' SFH resembles our result with an extended or concentrated SFH prior, or falls somewhere in between. As was shown in Fig.~\ref{fig:mass_assembly}, the early SFR of DF44 is more typical of normal (MW-like) star-forming galaxies at $z>3$ \citep{rinaldi2021}. The implication is that it is not the early, extreme SFH that makes DF44 unusual among $z=0$ galaxies, but rather its sudden quenching. Given the lack of galaxies like DF44 in cosmological simulations, this would imply that galaxy evolution models are not capturing the true diversity of quenching mechanisms. In fact, cosmological simulations already struggle to reconcile the opposing stellar mass--effective radius constraints for objects like DF44 in the context of the broader galaxy population. A common problem among cosmological simulations is that they do not accurately reproduce the population of normal-sized dwarfs (e.g., \citealt{chan2018, el-badry2016, lupi2017, tremmel2020, benavides2021}; see also \citealt{jiang2019a}). Since this points to issues in the implementation of star formation and related feedback, the evidence from this work and \citet{villaume2022} that there are objects like DF44 that require even more intense star formation feedback exacerbates this problem. Analytic and semi-analytic models can avoid such issues to some degree. 
With respect to size, several UDG formation scenarios apply empirical distributions \citep[e.g.,][]{carleton2019, sales2020}, but they are then subject to the likely bias of `getting out what they put in' \citep[see][]{jiang2019b}. With respect to star formation and feedback, \citet{danieli2021} analysed the large number of GC candidates hosted by NGC~5846\_UDG1 \citep{forbes2021} with a model that connects the evolution of a galaxy with its dark matter halo and GC populations \citep{trujillo-gomez2019} to show that it is plausible that clustered supernova feedback could significantly increase the mass-loading factor of gas outflows. However, these models miss an important component of galaxy evolution -- the impact of the different environments a galaxy moves through over its lifetime. DF44's very early quenching and relatively late infall into the Coma cluster raises the question of what it has been doing for the last $\sim 10$ billion years. The potential `pre-processing' by group environments or filaments, which can affect everything from the size of a galaxy's dark matter halo to its SFH and present-day GC population, makes it vital to understand this aspect of galaxy evolution in general. \subsection{DF44 in context}\label{sec:discussion_context} The prior-dependence of the SFH for old stellar populations, even with high-S/N data, means that further work is needed to understand what `good' SFH priors are for these systems. The problem is amplified at lower S/N, where the prior will have a stronger influence on the posterior pdfs (see Appendix~\ref{app:sfh_biases_priors} for an example). Consequently, it is not straightforward to compare results between studies in the literature. 
With this caveat in mind, we also show in Fig.~\ref{fig:lit_compare_size} the quenching times and sizes of UDGs from three studies \citep[][]{ferremateu2018, ruizlara2018, martinnavarro2019}, and for comparison high- and low-luminosity dwarfs in Coma \citep[squares and diamonds, respectively;][]{ferremateu2018}. Arrows attached to these points indicate that they are perhaps upper limits, given potential biases from the use of regularised SFHs (akin to the extended SFH prior used in this work; see the discussion in Appendix~\ref{app:sfh_biases_compare_lit}). We note that the UDGs from the literature are shown with effective radii from the catalogue of \citet[][]{alabi2020} when possible, where DF44 was found to have a size of $3.74\pm0.23$~kpc in the Subaru/Suprime-Cam $R$-band. Regardless of potential biases in the SFHs, there are still interesting conclusions to draw from this data set. DF44 stands out as an outlier among the largest observed UDGs with an early quenching time, for any of the discussed quenching times or sizes. On the other hand, the UDG DGSAT~I stands out with both the largest size and latest quenching time among the literature values shown in Fig.~\ref{fig:lit_compare_size}, and it is also the only non-cluster member. Unlike the rest of the UDGs, DGSAT~I is similar to a subset of the {\sc RomulusC} UDGs which follow a trend in size--quenching time in distinct disagreement with the standard expectations of tidal heating. Its size is also well outside of what is plausible for the concentrated SFH scenario, or normal expectations of size growth given its late quenching time. While it is outside the scope of this work to examine DGSAT~I in detail, it is relevant to this discussion in that it further provides evidence that multiple observed objects, all of which are `UDGs,' in fact have distinct formation pathways. 
That DF44 attained a similar stellar mass and size to the other large galaxies, but much earlier, supports the idea that it is either the product of unconventional galaxy evolution processes, or that it was interrupted from becoming a much more massive galaxy by some catastrophic quenching event. Speculation of the latter has also been drawn on the basis of the wide range of GC counts among UDGs, and the range of implied dark matter halo masses (with some having little to no dark matter). This is the first time this diversity has been shown in the SFHs of the galaxies' field star populations. \section{Summary}\label{sec:conclusion} In this work we simultaneously fit NUV to NIR photometry and high-S/N rest-frame optical spectroscopy of the UDG DF44 with an advanced physical model. Our model includes non-parametric SFHs, a flexible dust attenuation law, a white noise model, and an outlier model, which we fit to the observations in a fully Bayesian framework with \prospector. We find that DF44 formed the majority of its stellar mass ($>90$~per~cent) early, although how early is sensitive to the choice of the SFH prior and degeneracies between stellar population parameters. Using an extended SFH prior akin to similar studies in the literature (which strongly favours ages of around half the age of the Universe, and therefore disfavours very old ages) we find that DF44 formed by $z\gtrsim 0.9$. If we instead adopt prior knowledge from DF44's stellar population gradients that DF44 formed early and rapidly quenched \citep{villaume2022}, such that its SFH is concentrated within a short time-scale, we find that DF44 assembled as early as $z\sim 8$. Neither of these priors encodes physical information about the shape of the SFH based on a priori knowledge, and thus neither is necessarily a `good' prior. Further work is needed to understand what `good' SFH priors are for such old galaxies from a theoretical standpoint. 
Even with the high-S/N spectral data used in this work ($\sim 96$~\AA$^{-1}$), the data show no statistical preference for either result. Improved age constraints are possible with the inclusion of mid-infrared observations, which would pin down the dust attenuation; in the NUV, the dust attenuation is degenerate with the contribution of old stellar populations. Improvements in the models (e.g., including variable $\alpha$-abundance) to replicate old and complex stellar populations are also needed. DF44's early and short SFH determined from this work, together with previous results that DF44 is very metal poor for its mass, and that stellar population gradients indicate `inside-out' formation \citep[unlike kinematically- and morphologically-similar dwarfs;][]{villaume2022}, points towards an unusual origin, likely distinct from the canonical dwarf population. UDG formation scenarios outlined in simulations only predict the SFH and size of DF44 through invoking prolonged environmental effects, yet we conclude that DF44 quenched prior to accretion into the Coma cluster. While analysis of the {\sc Romulus25} simulation by \citet{wright2021} proposes early major mergers as a means to produce UDGs in the field, it is not yet clear if the properties of DF44 are fully consistent with this scenario. Instead, DF44 may be a `failed galaxy' with its initial size, or whatever processes that expanded it, being unrelated to its environment. In summary, early quenching and late infall taken together rule out most UDG formation scenarios except the failed-galaxy and early-major-merger scenarios (with the caveats above). Additional work is needed to explain the old quiescent UDGs from a theoretical standpoint, while reproducing the observed stellar properties beyond general size--mass trends. \section*{Acknowledgements} We thank Chris Lee for helpful discussions regarding the UV data of DF44. 
We would like to thank Meng Gu for providing the MaNGA spectrum of DF44, Josh Speagle for help with technical details in using {\sc dynesty}, and Joel Leja for help with technical details related to the SFH priors and \prospector. We thank the anonymous referee for a helpful report that improved the quality of this paper. This research is supported by the following grants: Natural Sciences and Engineering Research Council of Canada (NSERC) PGS award (KW), Discovery grants (MLB), Waterloo Centre for Astrophysics Postdoctoral Fellowship (AV). DAF thanks the ARC for financial assistance via DP220101863. AJR was supported as a Research Corporation for Science Advancement Cottrell Scholar. This work was partially supported by a NASA Keck PI Data Award, administered by the NASA Exoplanet Science Institute. The data presented herein were obtained at the W.~M.~Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.~M.~Keck Foundation. We recognise and acknowledge the significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This work made use of the following software: {\sc Astropy} \citep{astropy1, astropy2}, {\sc dynesty} \citep{speagle2020}, {\sc FSPS} \citep{conroy2009, fsps}, {\sc IPython} \citep{ipython}, {\sc matplotlib} \citep{matplotlib}, {\sc NumPy} \citep{numpy}, {\sc python-fsps} \citep{pythonfsps}, \prospector\ \citep{leja2017, johnson2019, johnson2021}, and {\sc SciPy} \citep{scipy}. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. 
\bibliographystyle{mnras} \bibliography{webb2022_df44_sfhs} % \appendix \section{The SFH of DF44} \label{app:fsfh_for_comparison} For comparison with future works, in Table~\ref{tab:results_fsfh} we provide the fraction of SF, and cumulative fraction of stellar mass formed, within the time bins of the non-parametric models. We list the 16\thh, 50\thh, and 84\thh\ percentiles of the distributions, where we note that the 50\thh\ percentiles of the fractional SFHs do not necessarily sum to unity. \begin{table*} \footnotesize \centering \caption{Summary of SFH results. The fraction of SF and the cumulative fraction of stellar mass formed are listed for each time bin of the non-parametric SFH model. The 16\thh, 50\thh, and 84\thh\ percentiles of the posterior (i.e., the 68~per~cent CR) are listed. We note that the 50\thh\ percentiles of the fractional SFH do not necessarily sum to unity. The SF time-scales listed in Table~\ref{tab:results} are interpolated from these step functions. } \label{tab:results_fsfh} \begin{tabular}{c|ccc|ccc|ccc|ccc} \hline Time bin & \multicolumn{6}{c|}{Extended SFH prior} & \multicolumn{6}{c}{ concentrated SFH prior } \\ (Gyr) & \multicolumn{3}{c}{ SF Fraction } & \multicolumn{3}{c|}{ Cumulative fraction of $M_\ast$ } & \multicolumn{3}{c}{ SF Fraction } & \multicolumn{3}{c}{ Cumulative fraction of $M_\ast$ } \\ \hline & 16 & 50 & 84 & 16 & 50 & 84 & 16 & 50 & 84 & 16 & 50 & 84 \\ \hline $10^{-9}$ -- 0.03 & 0.0703 & 0.0795 & 0.0967 & 1.0000 & 1.0000 & 1.0000 & 0.0042 & 0.0082 & 0.0099 & 1.0000 & 1.0000 & 1.0000 \\ 0.03 -- 0.10 & 0.0050 & 0.0152 & 0.0240 & 0.9986 & 0.9988 & 0.9989 & 0.0160 & 0.0181 & 0.0221 & 0.9995 & 0.9996 & 0.9998 \\ 0.10 -- 0.50 & 0.0007 & 0.0014 & 0.0061 & 0.9980 & 0.9983 & 0.9986 & 0.0000 & 0.0000 & 0.0004 & 0.9974 & 0.9977 & 0.9980 \\ 0.50 -- 1.00 & 0.0014 & 0.0031 & 0.0049 & 0.9972 & 0.9979 & 0.9983 & 0.0000 & 0.0001 & 0.0002 & 0.9974 & 0.9975 & 0.9979 \\ 1.00 -- 2.00 & 0.0003 & 0.0015 & 0.0039 & 0.9964 & 0.9970 & 
0.9975 & 0.0000 & 0.0000 & 0.0001 & 0.9973 & 0.9974 & 0.9978 \\ 2.00 -- 3.00 & 0.0014 & 0.0046 & 0.0086 & 0.9949 & 0.9962 & 0.9970 & 0.0000 & 0.0000 & 0.0001 & 0.9971 & 0.9974 & 0.9978 \\ 3.00 -- 4.01 & 0.0011 & 0.0089 & 0.0158 & 0.9917 & 0.9932 & 0.9955 & 0.0000 & 0.0000 & 0.0005 & 0.9969 & 0.9973 & 0.9977 \\ 4.01 -- 5.36 & 0.0091 & 0.0145 & 0.0435 & 0.9833 & 0.9886 & 0.9925 & 0.0000 & 0.0001 & 0.0004 & 0.9964 & 0.9971 & 0.9973 \\ 5.36 -- 7.16 & 0.0166 & 0.0702 & 0.1351 & 0.9591 & 0.9790 & 0.9847 & 0.0000 & 0.0002 & 0.0016 & 0.9943 & 0.9969 & 0.9971 \\ 7.16 -- 9.57 & 0.0722 & 0.1548 & 0.3310 & 0.8543 & 0.8993 & 0.9606 & 0.0000 & 0.0000 & 0.0022 & 0.9917 & 0.9950 & 0.9969 \\ 9.57 -- 12.80 & 0.3036 & 0.3726 & 0.5208 & 0.5596 & 0.6872 & 0.8085 & 0.0000 & 0.0000 & 0.0008 & 0.9851 & 0.9938 & 0.9958 \\ 12.80 -- 13.47 & 0.0583 & 0.1918 & 0.3111 & 0.0179 & 0.0618 & 0.1116 & 0.9678 & 0.9714 & 0.9737 & 0.9822 & 0.9916 & 0.9947 \\ \hline \end{tabular} \end{table*} \section{Systematic biases in measuring SFHs} \label{app:sfh_biases} \subsection{SFH biased by blue horizontal branch stars} \label{app:sfh_biases_bhb} The use of integrated light to reconstruct stellar populations has the caveat that multiple types of stars can share spectral signatures. This is the case for young, massive main-sequence stars and old, metal poor stars on the blue side of the horizontal branch (HB); both act to amplify the equivalent width of the Balmer lines. A population of blue HB stars produces a flux shortward of 3000~\AA\ which increases with decreasing metallicity due to a hotter main-sequence turnoff. Neglecting to include a blue HB population in models can lead to predictions of unrealistically young ages \citep[e.g.,][]{worthey1994, schiavon2004, thomas2005, schiavon2007}. 
The difficulty of distinguishing between these two stellar populations has been noted in GCs and dwarf galaxies \citep[e.g.,][]{monaco2003, schiavon2004, conroy2018, cabrera-ziri2022}, as well as in elliptical galaxies \citep{maraston2000}. In the SFH fit to DF44 (for which the primary age indicator is the H$\beta$ absorption line) we see a rise in the SFR in the two most recent time bins corresponding to the last 100~Myr (by $1.8^{+0.2}_{-0.4}$~dex for the extended SFH, $2.5^{+1.5}_{-1.1}$~dex for the concentrated SFH) -- yet there are no corresponding emission lines to suggest the presence of a young stellar population. Given the low metallicity of this UDG ($\log(Z_\ast/\mathrm{Z}_\odot)\sim -1.2$), the presence of a blue HB population would not be unexpected. While the bias between the age and the blue HB stars is well known when fitting simple stellar populations \citep[SSPs;][]{conroy2018}, it is not yet well studied for non-parametric SFHs. \citet{ocvirk2010} provided a first look at the impact of blue HB stars on linear combinations of SSP models, finding that the presence of blue HB stars can be inferred as a recent burst of star formation at $\sim 100$~Myr, contributing less than around 10~per~cent of the total stellar mass. While this provides a promising explanation for the apparent star formation bursts we observe in the SFH of DF44, we follow a similar test using the non-parametric SFH described in Section~\ref{sec:fitting_physical_model}. In order to investigate if our SFHs are affected by the presence of blue HB stars which mimic a burst of SF within the last 100~Myr, we fit the SFHs of two Galactic GCs, one with a known blue HB and the other without. We select the GCs from \citet{schiavon2005}, with metallicities similar to DF44: NGC~2808 has [Fe/H] $= -1.29$ and $(B-R)/(B+V+R)=-0.49$ (bluer HB), and NGC~6218 (M12) has [Fe/H] $= -1.32$ and $(B-R)/(B+V+R)=0.97$ (redder HB). 
Given that GCs are reasonable approximations of SSPs, we expect an early single burst of star formation only. Fits to the spectra\footnote{Downloaded from \url{http://www.noao.edu/ggclib}.} of NGC~2808 and NGC~6218, over the same wavelength range as DF44, following the procedure described in Section~\ref{sec:fitting}, are shown in Fig.~\ref{fig:compare_gcs}. The top panels summarise the comparison between the observations (black lines) and models (coloured lines). Similar to Fig.~\ref{fig:summary_fit_sfhs}, the bottom panels show the sSFR, SFR, and mass assembly histories. An extended SFH was assumed, and the total stellar mass was fixed to $10^{8}~\mathrm{M}_\odot$. In both cases we find an increase in the SFR within the last 100~Myr, although to a larger extent for the GC with the blue HB stars (by $1.5^{+0.5}_{-0.3}$~dex for NGC~6218, and by $2.6^{+0.4}_{-0.4}$~dex for NGC~2808). In addition, we see that both SFHs are early and short-lived, although there are modest levels of star formation at $>2$~Gyr, which likely result from the models being unable to precisely match the high S/N spectra ($\sim 180$ and $\sim 480$, respectively). We conclude from this comparison that some component of the recent SF burst we measure for DF44 could plausibly be related to a population of blue HB stars. \subsection{Fitting the spectroscopy and photometry together vs separately} \label{app:sfh_biases_need_both} Figs.~\ref{fig:compare_data_fits} and \ref{fig:compare_data} show the results of fitting the models to observations of DF44 with the following inputs: i) only the photometry (yellow), ii) only the spectroscopy (green), and iii) both the photometry and spectroscopy (red), in each case assuming an extended SFH prior. We note that the stellar mass is fixed \citep[to the value reported by][]{vandokkum2016} for the spectrum-only fit, as the continuum was subtracted from the spectrum. 
Similar to Fig.~\ref{fig:summary_fit} discussed in Section~\ref{sec:results}, in Fig.~\ref{fig:compare_data_fits} the observations (black lines and markers) are shown relative to the bestfit models (coloured lines and markers, where the colours denote which observations were fit). Shaded coloured regions indicate the 68~per~cent CRs from sampling the posterior pdfs, where the grey shaded region indicates the uncertainties in the spectrum. Both bestfit SED models match the photometry with reasonable $\chi_\mathrm{bestfit}$. In comparison, the UV flux is significantly overestimated when fitting only the spectroscopy. Since the UV provides information about recent star formation, and the UV to optical colours constrain the dust attenuation, we do not expect to constrain these properties from the spectrum alone. A comparison of the observed spectrum with the bestfit models is also shown in Fig.~\ref{fig:compare_data_fits}, with the $\chi_\mathrm{bestfit}$ as a function of wavelength, and the spectrophotometric calibration polynomial (see Section~\ref{sec:fitting_physical_model_speccal}). The ratio of the two bestfit models, shown flattened by dividing through by a polynomial, shows that the fits are similar at the 2~per~cent level. The only notable differences between the two bestfit models are around the H$\beta$ line and Mg~{\sc II} features at $\sim5285$~\AA~--~5305~\AA\ (observed-frame). The positive ratio of the H$\beta$ line between the spectrum-and-photometry fit over the spectrum-only fit is consistent with the UV flux being constrained for the former, such that the absorption line is preferentially shallower. The difference in the Mg~{\sc II} lines reflects the difference in metallicities predicted for each fit, as well as the inability of the (fixed scaled-solar abundance) models to be flexible to such features. 
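The flattening step described above (dividing the ratio of two bestfit models by a low-order polynomial so that only narrow, line-level differences remain) can be sketched on mock data. The wavelength grid, the polynomial degree, and the 2~per~cent feature placed near 5285~\AA\ below are illustrative assumptions, not the models fit in this work.

```python
import numpy as np

wave = np.linspace(4500.0, 5600.0, 2000)          # observed-frame grid (AA)
xs = (wave - wave.mean()) / (np.ptp(wave) / 2.0)  # rescale to [-1, 1] for fitting

# Two mock "bestfit" spectra sharing the same broad features but differing by
# a smooth continuum tilt, plus a ~2 per cent absorption-depth difference in
# one narrow feature (standing in for the H-beta / Mg differences in the text).
base = 1.0 + 0.05 * np.sin(wave / 40.0)
model_a = base * (1.0 + 2e-4 * (wave - 5000.0))
model_b = base * (1.0 - 1e-4 * (wave - 5000.0))
model_b *= 1.0 - 0.02 * np.exp(-0.5 * ((wave - 5285.0) / 5.0) ** 2)

# Flatten the ratio by dividing out a low-order Chebyshev polynomial:
# the smooth tilt is removed, while narrow differences survive.
ratio = model_a / model_b
coeffs = np.polynomial.chebyshev.chebfit(xs, ratio, deg=3)
flattened = ratio / np.polynomial.chebyshev.chebval(xs, coeffs)

off_line = np.abs(wave - 5285.0) > 60.0
print(np.max(np.abs(flattened[off_line] - 1.0)))               # near zero
print(abs(flattened[np.argmin(np.abs(wave - 5285.0))] - 1.0))  # narrow feature kept
```

Away from the narrow feature the flattened ratio is unity to well below a per cent, while the line-depth difference is preserved at its input amplitude, which is the sense in which the two bestfit models agree "at the 2~per~cent level".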
\vspace{0.2cm} Fig.~\ref{fig:compare_data} compares the basic stellar properties (normalisation of the diffuse dust attenuation curve, $V$-band extinction, stellar metallicity, stellar mass, and mass-weighted age) for the fits to the three sets of observations. This figure is akin to Fig.~\ref{fig:summary_fit_corner}, discussed in Section~\ref{sec:results}. For comparison, black lines indicate values measured in the literature: dashed lines indicate the stellar isochrone metallicity measured by \citet[][]{villaume2022}, while dotted and solid lines indicate the stellar mass measured by \citet{vandokkum2016} and \citet{saifollahi2021}, respectively. For reference, the prior on the age (which is implicit, as age is determined by the time bin widths and SFH) is shown as a black histogram. The broadband NUV to NIR photometry (yellow) and continuum normalised spectroscopy (green) carry different information about the galaxy properties. The broad (yet coarse) photometry provides a tighter constraint on the dust attenuation, while the spectroscopy constrains the metallicity. The dust attenuation cannot be determined from the spectroscopy alone because of the lack of continuum information; the spectrophotometric calibration marginalises over the continuum shape, and is degenerate with both the stellar mass and dust attenuation. On the other hand, the metallicity is tightly constrained by the spectroscopy as there is detailed information among the numerous absorption lines. Despite the formal consistency of the dust and metallicity parameters between these two fits (given the large uncertainties), the age posteriors are significantly different. The age posterior from the photometry largely traces the (implicit) prior. A tighter pdf for the stellar metallicity provides a more precise estimate of the age, as expected given the degeneracy between these two parameters. Simultaneously fitting the photometry and spectroscopy (shown in red) constrains the full set of parameters. 
In the particular case of DF44, the results are largely informed by the spectroscopy, which covers a broad range in metallicity and age features -- the inclusion of the photometry only modestly affects the posteriors. The stellar mass derived from the combined data sets is consistent with that of \citet{saifollahi2022}, while the photometry-derived posterior is skewed lower by $\sim0.23$~dex, which is likely also related to the lower estimate for the stellar metallicity. The combined result shows DF44 to be very old, metal poor, and perhaps with some small amount of dust. \subsection{SFH biased by choice of prior} \label{app:sfh_biases_priors} Fig.~\ref{fig:parameter_vs_snr} demonstrates the S/N dependence of the bias imposed by the choice of SFH prior, which in this case is an extended SFH in describing a very old stellar population. We refit the KCWI spectrum of DF44 with the extended SFH prior (\aDo), successively increasing the uncertainties of the spectrum such that the $\mathrm{S/N}_\mathrm{spec}=5$, 10, 15, and 20. The medians of the recovered pdfs are shown for the mass-weighted age (in lookback time), $t_{50}$ and $t_{90}$ (in time since the Big Bang), \logzsol, and diffuse dust, with error bars corresponding to the 68~per~cent (thick and wide) and 95~per~cent (thin and narrow) CRs. Points mark the results from fitting the spectrum and photometry simultaneously (diamonds), the spectrum alone (squares, offset vertically for clarity), and the photometry alone (circles). The prior distributions are shown in the top panels. Note that because the implicit priors for the SFH time-scales depend on the widths of the SFH time bins (a step function), the distributions are not necessarily smooth. The SFH time-scales are more heavily weighted by the SFH prior at low S/N. This is particularly true for $t_{90}$, which we use as a proxy of the quenching time. 
In contrast, neither the stellar metallicity nor the dust is significantly biased, or at least the offsets are well within the (large) uncertainties. While having a complete set of observations informs many of the galaxy properties, the choice of a `good' SFH prior is important. \subsection{Comparing results between studies -- prior and data dependence} \label{app:sfh_biases_compare_lit} Fig.~\ref{fig:lit_compare_times} shows a comparison of the star formation time-scales of UDGs (circles) and dwarfs (squares and diamonds) for observations from the literature (for Coma galaxies in almost all cases). We compare the time at which we consider the galaxy quenched, $t_{90}$, with how extended the SFH is, $t_{50}-t_{90}$. The grey shaded region denotes the parameter space where ages ($t_{50}$) are older than the Universe (e.g., OGS1 from \citealt{ruizlara2018}). We show the results from the literature as upper limits given the possible biases in SFH time-scales discussed above related to the S/N, and choice of SFH priors. Except for DF44, all the literature values were measured using the full-spectrum fitting code {\sc steckmap}. Notably {\sc steckmap} smooths the SFHs via (tuneable) regularisation akin to Gaussian priors on the SFH and age--metallicity relations (see the discussion in Section~\ref{sec:fitting_physical_model_sfh}). The details of the regularisation differ between all studies, where for example, \citet{ruiz-lara2015} present the outcome of averaging several results with various smoothing parameters. \citet{martinnavarro2019} show in their appendix~A the difference in their regularised and un-regularised results to be $\sim 1$~Gyr in $t_{50}$ and $\lesssim 0.4$~Gyr in $t_{90}$. \citet{ferremateu2018} compared their SFH time-scales derived from {\sc steckmap} with those from an alternative fitting code, {\sc starlight}, which does not impose regularisation but does require relative-flux calibrated spectra. 
Between the two fitting approaches, \citet{ferremateu2018} found consistent results in that the SFHs are extended and had similar quenching times. That said, {\sc starlight} preferred starting star formation $\sim 2$~Gyr later, such that the ages were younger and star forming time-scales were shorter. In contrast, the `burstier' prior used in this work produced earlier star formation and quenching. Because of the difficulties in determining the ages of old stellar populations, even subtle differences in data or analysis can impact results beyond the expected uncertainties. As an example, we can compare measurements for two UDGs, DF26/Yagi93 and Yagi418, both studied by \citet{ferremateu2018} and \citet{ruizlara2018}; the values are connected with dashed lines in Fig.~\ref{fig:lit_compare_times}. Each author used rest-frame optical spectroscopy (where \citealt{ruizlara2018} reported higher S/N and had a wider wavelength coverage) and they used the same code ({\sc steckmap}). However, the median mass-weighted ages differ by $\sim 1$~Gyr (uncertainties were not reported, but the luminosity weighted ages are formally consistent). In both cases the higher S/N data provided a solution shifted in the expected direction (i.e., towards older and less-extended SFHs). While DF44 appears to have (one of) the shortest SFHs and earliest quenching times, we caution that a detailed comparison should consider priors and the S/N. A poorly chosen SFH prior will have a stronger bias at a low S/N. For example, in using an extended SFH prior with the DF44 KCWI spectrum degraded to $\mathrm{S/N}=20$, we recover $t_{50}\sim 2.9\pm0.5$~Gyr and $t_{90}\sim 7.1\pm1.2$~Gyr (see Fig.~\ref{fig:parameter_vs_snr} in Appendix~\ref{app:sfh_biases_priors}), which overlaps with the lower end of UDGs in Fig.~\ref{fig:lit_compare_times}. This suggests that some of these objects could be older, and have less-extended SFHs. 
Along the same lines, we do not include photometry-derived results in Fig.~\ref{fig:lit_compare_times}, as the comparison can be misleading given the different choices (and relative contributions) of SFH priors. In the preceding sections we have shown that the photometry-derived ages are younger than the spectroscopy- or combined-derived ages. There is a similar difference between the results of \citet[][with optical to NIR photometry; not shown in Fig.~\ref{fig:lit_compare_size}]{pandya2018} and \citet[][with rest-frame optical spectroscopy, S/N$\sim 10$~\AA$^{-1}$]{martinnavarro2019}. Both studied the UDG DGSAT~I, although using different fitting methods and assuming different SFHs. \citet{pandya2018} fitted their photometry (via MCMC) to a delayed-exponential model, while \citet{martinnavarro2019} fitted their spectroscopy with {\sc steckmap}. We note that in this example the priors are considerably different. For a delayed-exponential model with linearly uniform priors with $\tau=0.1$--10~Gyr and $t_\mathrm{age}=0.1$--14~Gyr, the implicit prior on the mass-weighted age has a median of 3.2~Gyr. In comparison, a constant SFH has a median age of half the age of the Universe, $\sim 6.8$~Gyr (see also the discussion in \citealt{johnson2021}). While the luminosity-weighted ages are similar ($\sim 3$~Gyr), their mass-weighted ages are discrepant by $>1$~Gyr ($t_\mathrm{age}$ in the delayed-exponential model is the time since the onset of star formation; for $\tau>3$~Gyr this corresponds to mass-weighted ages considerably younger than $t_\mathrm{age}$). The metallicities are also discrepant by $>1$~dex, although \citet{martinnavarro2019} found that DGSAT~I is unusually $\alpha$-enhanced. Several other studies have examined UDGs from photometry alone \citep[e.g.,][]{greco2018b, barbosa2020}, and have similarly noted younger ages than spectroscopy-derived results. 
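The implicit mass-weighted age prior quoted above can be estimated with a short Monte Carlo: draw $(\tau, t_\mathrm{age})$ from the stated uniform ranges and evaluate the mass-weighted age of each delayed-exponential SFH in closed form. This is a minimal sketch of that calculation, not the exact procedure of \citet{pandya2018}; the sample size and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
tau = rng.uniform(0.1, 10.0, n)     # e-folding timescale (Gyr)
t_age = rng.uniform(0.1, 14.0, n)   # time since the onset of SF (Gyr)

# Mean formation time <t> for SFR(t) proportional to t*exp(-t/tau) on
# 0 < t < t_age, from closed forms of the truncated first/second moments:
#   int t   e^(-t/tau) dt = tau^2 * [1 - e^(-x)(x + 1)],        x = t_age/tau
#   int t^2 e^(-t/tau) dt = tau^3 * [2 - e^(-x)(x^2 + 2x + 2)]
x = t_age / tau
num = 2.0 - np.exp(-x) * (x ** 2 + 2.0 * x + 2.0)
den = 1.0 - np.exp(-x) * (x + 1.0)
mean_t = tau * num / den

mw_age = t_age - mean_t   # mass-weighted age in lookback time
med = np.median(mw_age)
print(med)                # a few Gyr, well below the ~6.8 Gyr of a constant SFH
```

The point of the sketch is that, even with "uninformative" uniform priors on $\tau$ and $t_\mathrm{age}$, the induced prior on the mass-weighted age is far from flat and is skewed young relative to a constant-SFH prior.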
We additionally note that \citet{martinnavarro2019} used a set of SSP models different from those used in this work and in the other UDG studies discussed here. Neither the choice of SSP models nor the application of regularisation would explain the significant offset between the SFHs of DGSAT~I and the other UDGs, however. \newpage \FloatBarrier \section{Degeneracy between dust attenuation and flux from old stellar populations in the NUV} \label{app:old_v_dust} The normalisation of the dust attenuation curve (\dust) and the fraction of old stars, both parameters of our physical model, are degenerate at optical and UV wavelengths. As a brief example of this degeneracy, Fig.~\ref{fig:obs_models_age_v_dust} shows the photometry for DF44 (black points) relative to three model SEDs with simple stellar populations (i.e., not the results of fitting the physical model described in Section~\ref{sec:fitting}). Taking the grey model as the `fiducial' model, slight variations in age and dust are shown by the purple and cyan models, respectively. While the 2.8~Gyr age increase and the 0.2~dex increase in diffuse dust produce an equivalent effect in the NUV, they have opposing effects at wavelengths $>1~\mu\mathrm{m}$. Coloured markers show the expected photometry in two {\it JWST} filters in the mid-infrared, with $\mathrm{S/N}\sim5$ to reflect the average uncertainty of the IRAC data. In this example, the `old' and `dusty' models are slightly distinguishable in F560W ($\Delta m_\mathrm{AB} \sim 0.6 \sigma_m$) but very different in F770W ($\Delta m_\mathrm{AB} \sim 3 \sigma_m$). The inclusion of mid-infrared data in our data set would allow us to assess whether DF44 is as dusty as our results suggest, or whether this is a product of the complex degeneracies between physical parameters (see Section~\ref{sec:results_sfh}). \bsp % \label{lastpage}
Title: Computational Fluid Dynamics with the Coupled Discrete Unified Gas Kinetic Scheme (CDUGKS)
Abstract: In this paper, we introduce our open source implementation of the Coupled Discrete Unified Gas Kinetic Scheme (CDUGKS) of this https URL, a phase space scheme capable of handling a wide range of flow regimes. We demonstrate its performance on several problems, including a number of well-known test problems from the astrophysical fluid dynamics literature such as the 1D Sod shock tube, 2D Kelvin-Helmholtz instability, 1D thermoacoustic wave, a triangular Gresho vortex, and a sine wave velocity perturbation. For these problems, we show that the code can simulate flows ranging from the inviscid/Eulerian regime to the free-streaming regime, capturing shocks and emergent diffusive processes in the appropriate regimes. We also use a variety of Prandtl numbers to demonstrate the scheme's ability to simulate different thermal conductivities at fixed viscosity. The scheme is second-order accurate in space and time and, unlike many solvers, uses a time step that is independent of the mean free path of the gas. Our code (MP-CDUGKS) is public under a CC0 1.0 Universal license and is available on this https URL
https://export.arxiv.org/pdf/2208.09132
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} Hydrodynamics -- Instabilities -- Methods: Numerical \end{keywords} \section{Introduction} Computational fluid dynamics has proven to be an indispensable tool in astrophysics. For many decades, equations derived within the Chapman-Enskog framework, including those governing Eulerian hydrodynamical and Navier-Stokes flows, have been employed to model astrophysical fluids \citep[e.g.][]{Flash, Athena, Enzo, Athena++}. These first-order techniques have been applied in flow regimes ranging from dense stellar interiors \citep{DenseStellarInterior_Meakin_2007} to rarefied voids in cosmological simulations \citep{Void_Xu_2016}. Much attention has been dedicated to ensuring these schemes have desirable properties, e.g. being stable, conservative, and total-variation diminishing \citep{Toro}. Furthermore, a significant amount of effort has gone into incorporating physics such as radiative transfer \citep[e.g.][]{RadT}, chemistry routines and cooling \citep[e.g.][]{grackle_Smith_2016}, and non-ideal equations of state \citep[e.g.][]{nonideal_Colella1985}, and into ensuring the schemes robustly capture shocks and hypersonic flows. However, comparatively little effort has gone into investigating the intrinsic error in these equations introduced by using only perfect-fluid or first-order linear-gradient constitutive relations, and how such errors manifest themselves in simulations both qualitatively and quantitatively. In this work, we employ a phase space method -- particularly, a gas kinetic scheme named CDUGKS \citep{CDUGKS} -- to identify and demonstrate flow regimes for common test problems where the Eulerian and Navier-Stokes approximations are not good descriptions. In addition, we explore the properties of flows outside of this linear-gradient constitutive regime with several new test problems. 
Put simply, the aim of this paper is to explore and demonstrate the capabilities of CDUGKS and to introduce our open source, massively parallel implementation, MP-CDUGKS\footnote{\url{ https://github.com/alvarozamora/CDUGKS}}. The equations of Eulerian hydrodynamics are employed in many astrophysical fluid solvers due to the low computational cost relative to Navier-Stokes or phase space methods. For the equations to be valid, one must assume that the mean free path of the collisional matter in any given region of a simulation is much smaller than any length scale of interest. It is at the cost of fixing the collisionality that this inviscid approximation produces a computationally feasible model of fluids that is used in e.g. Enzo \citep{Enzo}. Incorporating a linear-gradient constitutive relation (Navier-Stokes), which recovers the ability to tune the collisionality in the low Knudsen number regime, may cause a significant slowdown due to the requirement to resolve the length scale of diffusive processes (e.g. viscosity and conduction). Some workarounds, such as the STS of \cite{STSS} found in Athena++ \citep{Athena++}, can get around this constraint via an approximation that results in a significant speed-up for larger viscosities. However, the accuracy of the Navier-Stokes approximation itself breaks down for larger viscosities. The gas kinetic scheme CDUGKS recovers the ability to simulate a gas of any collisionality at a large (but fixed) computational cost by resolving the higher-dimensional phase space. There has been growing interest in gas kinetic schemes based on the BGK model \citep{BGK} in the last decade, and there now exist many different iterations of schemes based on this BGK operator in the literature \citep[e.g.][]{GKS_TAN2018214, DUGKS_Guo_2015, UGKS_MHD, CDUGKS, relativisticBGKVlasovMaxwell}. These include schemes with electromagnetism and relativity. 
The BGK collision operator appears in the PDE of the full distribution function $f(\bm x, \bm \xi, t)$ as \begin{equation} \label{eq:BGK} \frac{\partial f}{\partial t} + \bm{\xi}\cdot\nabla f + \bm{a}\cdot \nabla_\xi f =\Omega_\text{BGK}= \frac{f_\text{eq}-f}{\tau}, \end{equation} where $\bm{\xi}$ is the phase space velocity vector, $\bm{a}$ is the acceleration vector (due to an external potential, e.g. gravity), $f_\text{eq}$ is the distribution in equilibrium, and $\tau$ is a relaxation time. The BGK model has known deficiencies, such as having a fixed Prandtl number (equal to unity), and so the Ellipsoidal Statistical model (ES-Model) and the Shakhov model (S-Model) were developed to allow for flows with $\text{Pr}\neq1$ \citep{ES_S_BGK}. In this paper, we focus on CDUGKS, a double distribution function (DDF) scheme which tracks a velocity distribution function as well as an energy distribution function. This scheme in particular was chosen because, according to \citet{CDUGKS}, it has the following properties: \begin{itemize} \item It is an asymptotic-preserving (AP) scheme. CDUGKS recovers the NS/Euler solution when the relaxation time is small compared to the timescales relevant to the problem (i.e. small/vanishing Knudsen number), while also recovering the collisionless solution when the relaxation time is much larger than the timescales relevant to the problem (i.e. large Knudsen number). \item By using a total energy double-distribution function (TEDDF) model, the scheme is found to be more numerically stable than those that use an internal energy distribution function (IEDF), which require the calculation of the interfacial heat flux. Unlike some other models, which can produce negative values for the distribution functions, the use of the TEDDF model guarantees that the distribution functions will be nonnegative. \end{itemize} It is interesting to consider the ways in which viscosity manifests itself in these different fluid models. 
By construction, Navier-Stokes fluids have a viscosity that operates in a way which constrains the form of the stress tensor terms to be scalar multiples of the velocity gradients, regardless of the magnitude of the viscosity. This results in the intuitive diffusive interpretation of viscosity, in which increasing the magnitude of the kinematic viscosity $\nu$ increases the diffusive rate of momentum without bound. In BGK-based models, including the CDUGKS used in this paper, the dynamic viscosity $\mu$ appears not as a diffusive process, but rather as a mechanism that changes the local relaxation time $\tau = \mu / p$ (with pressure $p$). Increasing the viscosity does not directly increase the rate at which momentum diffuses in CDUGKS. Instead, increasing the viscosity increases the time it would take the local velocity distribution to equilibrate. As we will see in this paper, this results in a maximum momentum diffusion rate for a given set of initial conditions. This is because momentum can only diffuse in a hard-sphere gas if mass that carries momentum diffuses; you cannot have one without the other. For a given set of initial conditions for the phase space of a fluid, the unimpeded collisionless solution with the largest mean free path gives the bounded, maximal diffusion of momentum and energy. Although these two distinct forms of viscosity ostensibly manifest themselves indistinguishably at low Knudsen numbers, they of course produce vastly different results at higher Knudsen numbers. 
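The role of $\tau = \mu/p$ described above can be seen in a minimal, spatially homogeneous sketch of BGK relaxation on a discrete 1D velocity grid. This toy setup (grid extent, beam parameters, and time step are arbitrary choices, and it is not part of MP-CDUGKS) illustrates two points: collisions conserve the moments, and a larger viscosity only lengthens the relaxation toward the local Maxwellian rather than increasing any diffusion rate.

```python
import numpy as np

R = 0.5                                # gas constant, as in the text
xi = np.linspace(-10.0, 10.0, 501)     # discrete 1D velocity grid
dxi = xi[1] - xi[0]

def maxwellian(rho, u, T):
    # D = 1 case of the equilibrium distribution g_eq
    return rho / np.sqrt(2.0 * np.pi * R * T) * np.exp(-((xi - u) ** 2) / (2.0 * R * T))

def moments(g):
    rho = np.sum(g) * dxi
    u = np.sum(xi * g) * dxi / rho
    E = 0.5 * np.sum(xi ** 2 * g) * dxi
    return rho, u, E

# Far-from-equilibrium initial condition: two counter-streaming cold beams.
g0 = 0.5 * maxwellian(1.0, -2.0, 0.5) + 0.5 * maxwellian(1.0, 2.0, 0.5)
rho0, u0, E0 = moments(g0)

def relax(mu, t_end=1.0, dt=0.01):
    # Homogeneous BGK relaxation dg/dt = (g_eq - g)/tau with tau = mu/p,
    # p = rho*R*T, using the exact exponential update on each step.
    g = g0.copy()
    for _ in range(round(t_end / dt)):
        rho, u, E = moments(g)
        T = (2.0 * E / rho - u ** 2) / R      # from E = rho*(u^2 + R*T)/2 in 1D
        tau = mu / (rho * R * T)
        g = g + (1.0 - np.exp(-dt / tau)) * (maxwellian(rho, u, T) - g)
    return g

results = {}
for mu in (0.2, 2.0):
    g = relax(mu)
    rho, u, E = moments(g)
    geq = maxwellian(rho, u, (2.0 * E / rho - u ** 2) / R)
    # (mass drift, energy drift, distance from the local Maxwellian)
    results[mu] = (abs(rho - rho0), abs(E - E0), np.max(np.abs(g - geq)))
    print(mu, results[mu])
```

At fixed elapsed time, the run with the larger $\mu$ is still far from its local Maxwellian while the smaller-$\mu$ run has fully relaxed, yet both conserve mass and energy to quadrature accuracy. CDUGKS proper combines this collision step with transport along characteristics, which this homogeneous sketch omits.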
We summarize our results and discuss potential extensions of the work in \cref{sec:conclusion}. \section{The Coupled Discrete Unified Gas Kinetic Scheme (CDUGKS)} \label{sec:sec_scheme} For completeness, we present a brief overview of CDUGKS as implemented in our code. For a detailed description of the scheme and an in-depth discussion of its properties, refer to the original work by \citet{CDUGKS} (referred to herein as the CDUGKS paper). For the most part, in this paper we use the same notational conventions as in the CDUGKS paper. Consider the full distribution function, $f(\bm{x},\bm{\xi}, \bm{\eta}, \bm{\zeta}, t)$, of a 3-dimensional mass continuum constrained to vary in $D$ dimensions at the point in phase space $\bm{x},\bm{\xi}\in\mathbb{R}^D$ and time $t$. For each such point in $\mathbb{R}^D$, the distribution function $f$ captures the mass distribution across each of the $D$ kinetic degrees of freedom $\bm{\xi} = \{\xi_1, \ldots, \xi_D\}$ for the $D$ dimensions allowed to vary. It also captures the mass distribution across the components of $\bm{\eta}$, the $(3-D)$ kinetic degrees of freedom along the dimensions not allowed to vary, as well as across the components of $\bm{\zeta}$, the $K$ internal degrees of freedom. This is the distribution function that obeys equation \eqref{eq:BGK}.
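The reduction of $f$ to lower-dimensional distributions, defined next, can be checked numerically. The sketch below is illustrative only: for brevity it uses a single untracked translational degree of freedom and $K=0$, so the internal-energy contribution to the reduced energy distribution is just $RT/2$ per the factor $(3-D+K)=1$.

```python
import numpy as np

R = 0.5
rho, u, T = 1.2, 0.3, 2.0

xi = np.linspace(-12.0, 12.0, 401)   # tracked velocity component
eta = np.linspace(-12.0, 12.0, 801)  # untracked velocity component
deta = eta[1] - eta[0]
XI, ETA = np.meshgrid(xi, eta, indexing="ij")

# Maxwellian f(xi, eta) for one tracked and one untracked kinetic dof
f = rho / (2.0 * np.pi * R * T) * np.exp(-((XI - u) ** 2 + ETA ** 2) / (2.0 * R * T))

g = f.sum(axis=1) * deta                               # integrate out eta
b = 0.5 * ((XI ** 2 + ETA ** 2) * f).sum(axis=1) * deta  # total-energy reduction

# Closed-form reductions: a 1D Maxwellian, and the matching energy distribution
g_exact = rho / np.sqrt(2.0 * np.pi * R * T) * np.exp(-(xi - u) ** 2 / (2.0 * R * T))
b_exact = (xi ** 2 + R * T) / 2.0 * g_exact            # one untracked dof, K = 0

print(np.max(np.abs(g - g_exact)), np.max(np.abs(b - b_exact)))
```

The quadrature over the untracked degree of freedom reproduces the closed-form reduced distributions to near machine precision, mirroring how the scheme only ever evolves the reduced pair rather than the full $f$.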
Defining a velocity distribution function (VDF) $g$ and total energy distribution function (TEDF) $b$ as \begin{gather} \label{g}g = \int f(\bm{x},\bm{\xi}, \bm{\eta}, \bm{\zeta}, t)\ d\bm{\eta}d\bm{\zeta} \\ \label{b}b = \frac{1}{2}\int (\xi^2 + \eta^2 + \zeta^2)f(\bm{x},\bm{\xi}, \bm{\eta}, \bm{\zeta}, t)\ d\bm{\eta}d\bm{\zeta}, \end{gather} allows us to integrate out the degrees of freedom we are not interested in and results in the respective PDEs \begin{gather} \label{gPDE} \frac{\partial g}{\partial t} + \bm{\xi}\cdot\nabla g + \bm{a}\cdot \nabla_\xi g = \frac{g_\text{eq}-g}{\tau_g} \\ \label{bPDE} \frac{\partial b}{\partial t} + \bm{\xi}\cdot\nabla b + \bm{a}\cdot \nabla_\xi b = \frac{b_\text{eq}-b}{\tau_b} + \frac{Z}{\tau_{bg}}(g-g_\text{eq}) + g\bm{\xi}\cdot\bm{a}, \end{gather} for the $D$-dimensional distributions $g$ and $b$, where $\tau_g = \mu/p$, $\tau_b=\tau_g/\text{Pr}$, $\tau_{bg} = \tau_b\tau_g/(\tau_g-\tau_b)$, $Z = \bm{\xi}\cdot\bm{u}-u^2/2$, $\bm{u}$ is the local bulk (average) velocity, $\mu$ is the dynamic viscosity, $p$ is the pressure, and $\text{Pr}$ is the Prandtl number. Note that since the PDEs do not contain momentum and energy diffusion terms, the Prandtl number manifests itself as the ratio of the relaxation times, $\text{Pr} = \tau_g/\tau_b$, instead of the usual $\text{Pr}=\nu/\alpha$ for momentum diffusivity $\nu$ and thermal diffusivity $\alpha$. In this paper, we only consider ideal gases with $p = \rho RT$, where $\rho$ is the mass density, $R$ is the gas constant (equal to 0.5 throughout this paper), and $T$ is the temperature. The equilibrium distributions are given by \begin{gather} \label{geq} g_\text{eq} = \frac{\rho}{(2\pi RT)^{D/2}}\exp\bigg(-\frac{(\bm{\xi}-\bm{u})^2}{2RT}\bigg) \\ \label{beq} b_\text{eq} = \frac{\xi^2 + (3-D+K)RT}{2}g_\text{eq}. 
\end{gather} As in the CDUGKS paper, a temperature dependent model of viscosity is employed, taking the form \begin{equation} \label{eq:visc} \mu = \mu_{r} \Big(\frac{T}{T_{r}}\Big)^w \end{equation} where $w$ can be set depending on the fluid model, e.g. 0.5 for a hard-sphere model as adopted in this paper \citep{MathematicalTheoryofGases}. After specifying a reference viscosity $\mu_{r}$ at a reference temperature $T_{r}$, one can determine the viscosity at any point by evaluating \eqref{eq:visc}. The local density of mass, momentum, and energy can be found using $g$ and $b$ as \begin{align} &\rho = \int g\ d\bm\xi\\ &\rho\bm u = \int \bm\xi g\ d\bm\xi\\ &\rho E = \int b\ d\bm\xi. \end{align} With the adiabatic index \begin{equation} \gamma = \frac{K+5}{K+3} \end{equation} the temperature is calculated as \begin{equation} T = \frac{\gamma - 1}{R}\Big(E-\frac{1}{2}u^2\Big), \end{equation} and the energy density as \begin{equation} \rho E = \frac{1}{2}\rho \bm u^2 + \rho \epsilon \end{equation} with the internal energy $\epsilon = c_v T$ and $c_v = (3+K)R/2$ the specific heat capacity at constant volume. CDUGKS solves the equations for the fields $\phi = g, b$ by first discretizing the velocity space into a set of discrete velocities $\{\bm\xi_i\}$, obtaining the PDEs \begin{align} \label{eq:phiPDE} \frac{\partial\phi_i}{\partial t} + \bm \xi _i \cdot \nabla\phi_i = \Omega_{\phi,i} + S_{\phi,i} \end{align} for the fields $\phi = g,b$ at each $\bm \xi _i$, where the collision operators for each $\phi$ are defined as \begin{align} \Omega_g = \frac{g_\text{eq}-g}{\tau_g}, \\ \Omega_b = \frac{b_\text{eq}-b}{\tau_b} \end{align} with source terms \begin{align} \label{eq:gsource} &S_g = -\bm a\cdot \nabla_{\bm\xi} g ,\\ \label{eq:bsource} &S_b = \frac{Z}{\tau_{bg}}(g - g_\text{eq}) + g\bm\xi\cdot\bm a - \bm a \cdot \nabla_{\bm\xi} b.
\end{align} Note that the energy distribution source term $S_b$ is still nonzero in the case of no external acceleration field ($\bm a = 0$) if the Prandtl number is not equal to unity. This is where the DDF formulation departs from the original single-distribution BGK model. CDUGKS aims to solve the PDEs for the fields $\phi= g, b$ (\eqref{gPDE} and \eqref{bPDE}) accurately to second order in space and time. It does so via the use of a trapezoid rule for the temporal integration of the collision terms $\Omega_g$ and $\Omega_b$, the use of the midpoint rule for the temporal integration of the advection (flux) term $F$, and by performing piecewise upwind linear spatial reconstruction of the reduced distribution functions. As presented in the CDUGKS paper, a rectangle rule is used for the temporal integration of the source terms $S_g$ and $S_b$. Using these integration rules, CDUGKS uses the finite volume method, which discretizes the fluid into spatial cells and uses the cell average \begin{equation} \phi_{j,i}^n = \frac{1}{|V_j|}\int_{V_j}\phi(\bm x, \bm \xi_i, t_n)\ d\bm{x} \end{equation} where $|V_j|$ is the volume of cell $V_j$, to obtain the update rule \begin{equation} \label{eq:implicit} \phi^{n+1}_{j,i} = \phi^n_{j,i}-\frac{\Delta t}{|V_j|}F^{n+1/2}_{\phi,j,i} + \frac{\Delta t}{2}\big[\Omega^{n+1}_{j,i} + \Omega^n_{j,i}\big] + \Delta t S^n_{j,i} \end{equation} with the respective collision operator and source term for that particular $\phi$. The microflux $F^{n+1/2}_{\phi,j,i}$ is given by the sum of the interfacial fluxes \begin{align} \label{eq:flux} F^{n+1/2}_{\phi,j,i}=\sum_k \bm\xi_i \cdot \bm A^k_j \phi(\bm x^k_j, \bm \xi_i, t_{n+1/2}) \end{align} where $\bm x^k_j$ is the center of the $k$th face of cell $j$ with outward-facing normal vector $\bm A^k_j$ with area $|\bm A^k_j|$.
Here, $\phi(\bm x^k_j, \bm \xi_i, t_{n+1/2})$ is the value of the distribution function at the center of the $k$-th face at $t_{n+1/2}$, which is obtained using piecewise upwind linear interpolation with a Van Leer limiter and a quarter-step leapfrog scheme to integrate to the half-step. Note that the term ``upwind'' here does not refer to a condition on the bulk velocity $\bm u_j$ at cell $j$, but rather on the specific discrete velocity $\bm\xi_i$. In particular, piecewise upwind linear interpolation refers to interpolating from the $j$-th cell center when $\bm\xi_i \cdot \bm A^k_j > 0$ and from the appropriate adjacent cell center of the interface when $\bm\xi_i \cdot \bm A^k_j < 0$. Note that \eqref{eq:implicit} is an implicit update rule. The rule is made explicit by updating the conserved variables $\bm W$ in addition to $g$ and $b$, so that one can compute the equilibrium distributions $\phi_\text{eq}$ and relaxation times $\tau_\phi$ at $t_{n+1/2}$ that are required for the collision terms $\Omega_\phi$ at $t_{n+1}$. To obtain the half-step interfacial flux, a quarter-step leapfrog integration scheme is used. This scheme uses two auxiliary fields to mediate the calculation. One field is one quarter-step forward and the other one quarter-step backward from the current time with respect to the collisional and source terms (but not the advection): \begin{align} \label{eq:phibarplus}&\bar{\phi}^+ = \phi +\frac{h}{2}\Omega + \frac{h}{2}S = \frac{2\tau-h}{2\tau}\phi + \frac{h}{2\tau}\phi_\text{eq} + \frac{h}{2}S \\ \label{eq:phibar}&\bar{\phi}\ \ = \phi -\frac{h}{2}\Omega - \frac{h}{2}S = \frac{2\tau+h}{2\tau}\phi - \frac{h}{2\tau}\phi_\text{eq} - \frac{h}{2}S \end{align} where $h = \Delta t/2$ and $\Omega$, $S$ are the corresponding $\Omega_\phi$, $S_\phi$, for $\phi =g, b$.
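These auxiliary fields, and the inversion of $\bar\phi$ used later in the algorithm to recover $\phi$ at the interface, can be sketched as follows. The sketch follows the definitions $\bar\phi^\pm = \phi \pm \frac{h}{2}\Omega \pm \frac{h}{2}S$ with the source terms dropped for brevity; all names are illustrative.

```python
import numpy as np

# Auxiliary quarter-step fields (source terms S omitted for brevity):
#   bar_plus:  phi + (h/2) * Omega, a quarter-step ahead in the collisions,
#   bar:       phi - (h/2) * Omega, a quarter-step behind,
# with Omega = (phi_eq - phi) / tau.

def bar_plus(phi, phi_eq, tau, h):
    return (2.0 * tau - h) / (2.0 * tau) * phi + h / (2.0 * tau) * phi_eq

def bar(phi, phi_eq, tau, h):
    return (2.0 * tau + h) / (2.0 * tau) * phi - h / (2.0 * tau) * phi_eq

def phi_from_bar(phi_bar, phi_eq, tau, h):
    # Invert phi_bar = (2 tau + h)/(2 tau) phi - h/(2 tau) phi_eq for phi
    return (2.0 * tau * phi_bar + h * phi_eq) / (2.0 * tau + h)

rng = np.random.default_rng(1)
phi, phi_eq = rng.random(32), rng.random(32)
tau, h = 0.3, 0.05

# Inverting bar() recovers phi exactly
recovered = phi_from_bar(bar(phi, phi_eq, tau, h), phi_eq, tau, h)
# The two auxiliary fields straddle phi symmetrically: their mean is phi
midpoint = 0.5 * (bar_plus(phi, phi_eq, tau, h) + bar(phi, phi_eq, tau, h))
print(np.allclose(recovered, phi), np.allclose(midpoint, phi))
```

The symmetry check makes the "quarter-step ahead / quarter-step behind" picture concrete, and the inversion is exactly the operation the scheme performs once $\bar\phi$ is known at a cell face.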
One can imagine these distributions as freezing the current distribution $\phi$ in place at $\bm x$ and allowing only the particles at that location $\bm x$ to interact and lag or advance by a quarter-step in the collisional and external processes with a first-order Euler integration scheme with $\delta t= \pm h/2=\pm \Delta t/4$. These fields are used in integrating \eqref{eq:phiPDE} along a characteristic line terminating at the point $\bm x$ to second order using the midpoint rule. In doing so, one obtains the convenient relation \begin{equation} \label{eq:relation} \bar\phi (\bm x, \bm\xi_i, t_n + h) = \bar\phi^+(\bm x - h\bm\xi_i, \bm\xi_i, t_n) \end{equation} that the scheme will exploit. In short, these two alternative distributions were constructed to facilitate integration along this characteristic line. In other words: to first order, the velocity or energy distribution $\bar\phi$ at $\bm x$ (which is one quarter-step behind $\phi$ in the collisional and external processes) is the same as the velocity or energy distribution $\bar\phi^+$ (a quarter-step ahead of $\phi$) one half-step ago at $\bm x - h \bm\xi_i$. The spatial discrepancy arises because these two alternative distributions $\bar\phi,\ \bar\phi^+$ only differ from $\phi$ due to the collisional and external processes, and not due to the advection. Since we are tracking only a set of discrete velocities $\{\bm \xi_i\}$, we know that the mass $\bar\phi$ tracks at $\bm x$ with velocity $\bm\xi_i$ is the same as the mass $\bar\phi^+$ tracks at $\bm x - h\bm\xi_i$ one half-step ago. The conserved variables $\bm W_i= (\rho, \rho\bm u, \rho E)$ for the cell located at $\bm x_i$ are given by \begin{align} &\rho= \sum_k w_k g(\bm x_i,\bm\xi_k) \\ &\rho \bm u = \sum_k w_k \bm\xi_k g(\bm x_i,\bm\xi_k) \\ &\rho E = \sum_k w_k b(\bm x_i,\bm\xi_k) \end{align} where $w_k$ are integration weights (e.g. Newton-Cotes) used to numerically integrate over the velocity space.
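These moment sums can be sketched with composite fourth-order Newton-Cotes (Boole) weights, which is one standard choice consistent with the $4n+1$ grid-size requirement mentioned later in the run parameters; the specific weight construction below is an assumption for illustration, not necessarily the exact one in our code.

```python
import numpy as np

R = 0.5

def boole_weights(n_points, dx):
    """Composite 4th-order Newton-Cotes (Boole) weights for a 4n+1-point grid."""
    assert (n_points - 1) % 4 == 0
    w = np.zeros(n_points)
    for s in range(0, n_points - 1, 4):  # panels of 4 intervals each
        w[s:s + 5] += np.array([7.0, 32.0, 12.0, 32.0, 7.0]) * 2.0 * dx / 45.0
    return w

rho, u, T, K = 1.3, 0.4, 1.5, 2
xi = np.linspace(-10.0, 10.0, 1025)      # a 4n+1 grid, as in the Sod runs
w = boole_weights(xi.size, xi[1] - xi[0])

# Equilibrium g and b for D = 1 tracked dimension
g = rho / np.sqrt(2.0 * np.pi * R * T) * np.exp(-(xi - u) ** 2 / (2.0 * R * T))
b = (xi ** 2 + (2 + K) * R * T) / 2.0 * g

# Weighted sums recover rho, rho*u, and rho*E = rho*(u^2/2 + (3+K)*R*T/2)
print(np.sum(w * g), np.sum(w * xi * g), np.sum(w * b))
```

For these parameters the three sums reproduce $\rho=1.3$, $\rho u=0.52$, and $\rho E=\rho(u^2/2+(3+K)RT/2)$ to high accuracy, since the integrands are smooth and decay well inside the truncated velocity domain.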
Because the collision operator is conservative, one can also compute the conserved quantities directly from $\bar\phi$ as \begin{align} \label{eq:rhobar} &\rho= \sum_k w_k \bar g(\bm x_i,\bm\xi_k) \\ \label{eq:rhoubar} &\rho \bm u = \sum_k w_k \bm\xi_k \bar g(\bm x_i,\bm\xi_k) + \frac{h}{2}\rho\bm a \\ \label{eq:rhoEbar}&\rho E = \sum_k w_k \bar b(\bm x_i,\bm\xi_k) + \frac{h}{2}\rho\bm u \cdot \bm a. \end{align} This allows for the calculation of the conserved variables at the cell interfaces at $t_{n+1/2}$, as is done in \cref{sec:subsec_scheme}. We now have all of the components of the algorithm. \subsection{The Algorithm} \label{sec:subsec_scheme} Given some cell average state $\phi^n_{j,i}$ at time $t_n$ for velocity $\bm\xi_i$, the scheme starts by computing the corresponding quarter-step advanced state $\bar \phi^{+\ n}_{j,i}$ \eqref{eq:phibarplus}. This is the state that undergoes piecewise upwind linear reconstruction. To perform the reconstruction, the gradients $\bm\sigma_{j,i}$ of $\phi$ are needed. In MP-CDUGKS, a Van Leer limiter is applied as \begin{equation} \label{eq:VanLeer} \sigma = (\text{sgn}(\sigma_1) + \text{sgn}(\sigma_2)) \frac{|\sigma_1||\sigma_2|}{|\sigma_1| + |\sigma_2|} \end{equation} where $\sigma_1$ and $\sigma_2$ are the one-sided finite difference derivatives on either side of the interface. The scheme also requires piecewise upwind linear reconstruction of the gradients, which we call $\bm\sigma_{k,j,i}$ -- the gradient vector at the $k$-th face of cell $j$ for each discrete velocity $\bm\xi_i$. For that interpolation, the gradients of each component of the gradients are needed, which are computed in the same fashion as the gradients themselves.
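The limiter in \eqref{eq:VanLeer} is compact enough to state in a few lines. The sketch below guards against the degenerate case $|\sigma_1|+|\sigma_2|=0$, which the formula leaves undefined; that guard is our own addition for robustness.

```python
import numpy as np

def van_leer(s1, s2):
    """Van Leer slope limiter: a harmonic-mean-like average of the two
    one-sided slopes that returns zero at a local extremum (opposite signs)."""
    s1, s2 = np.asarray(s1, dtype=float), np.asarray(s2, dtype=float)
    num = (np.sign(s1) + np.sign(s2)) * np.abs(s1) * np.abs(s2)
    den = np.abs(s1) + np.abs(s2)
    return np.where(den > 0.0, num / np.where(den > 0.0, den, 1.0), 0.0)

print(van_leer(1.0, 1.0))   # equal slopes pass through: 1.0
print(van_leer(1.0, 3.0))   # biased toward the smaller slope: 1.5
print(van_leer(1.0, -2.0))  # opposite signs (extremum): limited to 0.0
```

Zeroing the slope at extrema is what suppresses the spurious oscillations mentioned in the Sod discussion, at the cost of clipping smooth maxima to first order.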
With the interface value of $\bar\phi^+$ and its gradients, the scheme computes \begin{align*} \bar\phi^+(\bm x_{k,j} - h\bm\xi_i, \bm\xi_i, t_n) = \bar\phi^+(\bm x_{k,j}, \bm\xi_i, t_n) - h\bm\xi_i\cdot\bm\sigma_{k,j,i} \end{align*} at $\bm x_{k,j}$, the $k$th face of cell $j$. Using the relation (\ref{eq:relation}), we know that the left-hand side is equal to $\bar\phi(\bm x_{k,j}, \bm\xi_i, t_{n+1/2})$. Now that $\bar\phi$ is obtained at the cell interface one half-step later, one can compute the conserved variables $\bm W_i$ using (\ref{eq:rhobar})-(\ref{eq:rhoEbar}), which are needed to compute $\tau_\phi$ at the boundary and, in turn, $\phi_\text{eq}$ at the boundary using \eqref{geq}, \eqref{beq}, and by inverting \eqref{eq:phibar}. Once $\phi_\text{eq}$ is obtained for all cell faces, the flux can be computed using \eqref{eq:flux}. After having computed the source terms using \eqref{eq:gsource} and \eqref{eq:bsource}, which were needed in the very first step to compute $\bar\phi^+$, the scheme finally updates the conserved variables via \begin{align} \label{eq:rhoupdate} & \rho^{n+1}_j = \rho^n_j - \frac{\Delta t}{|V_j|}\int F^{n+1/2}_{g,j,i}\ d\bm\xi + \Delta t \int S^n_{g,j}\ d \bm\xi \\ \label{eq:rhouupdate} & \rho\bm u^{n+1}_j = \rho\bm u^n_j - \frac{\Delta t}{|V_j|}\int \bm\xi F^{n+1/2}_{g,j,i}\ d\bm\xi + \Delta t \int\bm\xi S^n_{g,j}\ d \bm\xi \\ \label{eq:rhoEupdate} & \rho E^{n+1}_j = \rho E^n_j - \frac{\Delta t}{|V_j|}\int F^{n+1/2}_{b,j,i}\ d\bm\xi + \Delta t \int S^n_{b,j}\ d \bm\xi \end{align} where we note $\rho$ and $\rho\bm u$ are updated using the microflux $F_g$ for the VDF $g$ and $\rho E$ is updated using the microflux $F_b$ for the TEDF $b$.
Finally, the scheme computes the new $\phi_\text{eq}$ and updates $\phi$ via \begin{equation} \begin{split} \label{eq:phiupdate} \phi^{n+1}_{j,i} = & \bigg(1 + \frac{\Delta t}{2\tau^{n+1}_j}\bigg)^{-1}\bigg[\phi^n_{j,i}+\frac{\Delta t}{2}\bigg(\frac{\phi^{n+1}_{\text{eq},j,i}}{\tau^{n+1}_j} + \frac{\phi^{n}_{\text{eq},j,i}-\phi^{n}_{j,i}}{\tau^n_j}\bigg) \\ &- \frac{\Delta t}{|V_j|} F^{n+1/2}_{\phi,j,i} +\Delta t S^n_{\phi,j,i}\bigg]. \end{split} \end{equation} \section{Test Problems} \label{sec:sec_problems} We use a series of test problems to demonstrate the capability of CDUGKS in modeling fluid flow regimes in and out of the Eulerian and Navier-Stokes regimes. We explore the 1D Sod shock tube problem, the 2D Kelvin-Helmholtz instability, a 2D shearing problem, a 1D thermoacoustic wave, the 2D Gresho vortex, and a 1D sine wave perturbation problem. We compare the results either to their corresponding analytical solutions in the Eulerian or Navier-Stokes regime, or to numerical solutions obtained with the popular astrophysics/cosmology code Enzo \citep{Enzo} or with Athena++ \citep{Athena++}. We also explore the effect that the Prandtl number has on the Sod shock tube problem, the Kelvin-Helmholtz instability, the 2D shearing problem, and the thermoacoustic wave. The Prandtl number is not a fundamental physical parameter; it is defined as the ratio of the momentum diffusivity $\nu$ to the energy diffusivity $\alpha$, $Pr = \nu/\alpha$, and can therefore be varied by changing either of the two diffusivities, or both.
In this paper, the effect the Prandtl number has on fluid dynamics is illustrated by keeping the reference viscosity $\mu_r$ fixed so as to only change the thermal diffusivity of the system\footnote{Recall that there is no explicit diffusive/Laplacian term in the equations governing CDUGKS and, as such, that the Pr used in the scheme is actually the ratio of the relaxation times of the VDF and the TEDF.}. Each of these test problems is hard-coded and can be run by giving the code a different test problem ID at compile time. \subsection{Sod Shock Tube} The Sod shock tube problem is a popular test problem for astrophysical fluid codes for three main reasons. Firstly, it is a quick 1D problem with a simple analytical solution in the inviscid regime. Secondly, it tests whether a solver captures shocks and rarefactions properly and may expose a solver's oscillatory or non-conservative behavior. Thirdly, it is a very relevant test problem in astrophysics, since shocks are ubiquitous in self-gravitating collisional systems. Much effort has been put into numerical solvers so that they match the analytic shock tube result for strong shocks. However, attempts to match the analytic shock tube result with high accuracy are limited to one special flow regime, namely the inviscid regime. This can be an odd benchmark for some solvers, especially particle-based methods, which by design use particles with nonzero mean free paths. As in the CDUGKS paper, we solve the Sod shock tube problem as a simple validation of the code. The computational domain is $x\in [0,1]$. The initial conditions are $(\rho_1, u_1, P_1) = (1,0,1) $ for $x\leq0.5$ and $(\rho_2, u_2, P_2) = (1/8,0,1/10)$ for $x> 0.5$. The problem is solved with Dirichlet boundary conditions.
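In a kinetic scheme these piecewise-constant states translate into piecewise Maxwellians. A minimal sketch of that initialization, using the grid sizes and $K=2$ quoted in the run parameters and our own illustrative variable names, is:

```python
import numpy as np

R = 0.5
nx, nv, K = 1024, 1025, 2
x = (np.arange(nx) + 0.5) / nx               # cell centers on [0, 1]
xi = np.linspace(-10.0, 10.0, nv)
dxi = xi[1] - xi[0]

# Piecewise-constant Sod states (u = 0 on both sides)
rho = np.where(x <= 0.5, 1.0, 1.0 / 8.0)
P = np.where(x <= 0.5, 1.0, 1.0 / 10.0)
T = P / (rho * R)                            # ideal gas, p = rho R T

# Kinetic initialization: a Maxwellian g_eq and matching b_eq in every cell
g = rho[:, None] / np.sqrt(2.0 * np.pi * R * T[:, None]) \
    * np.exp(-xi[None, :] ** 2 / (2.0 * R * T[:, None]))
b = (xi[None, :] ** 2 + (2 + K) * R * T[:, None]) / 2.0 * g   # b_eq with D = 1

# Moments recovered by quadrature over the velocity grid
rho_rec = g.sum(axis=1) * dxi
E_rec = b.sum(axis=1) * dxi                  # rho E = rho (3+K) R T / 2 for u = 0
print(rho_rec[0], rho_rec[-1], E_rec[0])
```

Recovering $\rho=1$ and $\rho=1/8$ on the two sides (and $\rho E = 2.5$ on the left) confirms the truncated velocity grid comfortably contains the thermal spread of both states.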
The initial conditions are evolved to code time $t=0.15$, with the following simulation parameters: \begin{itemize} \item Two internal degrees of freedom ($K=2$), implying $\gamma = 7/5$. \item A viscosity exponent of $w=0.5$, in accordance with the hard-sphere model from kinetic theory. \item A Prandtl number $Pr=2/3$. \item A spatial resolution of 1024 cells. \item A 1D velocity space discretized evenly with $\xi\in [-10, 10]$, resolved with 1025 cells\footnote{The fourth-order Newton-Cotes scheme used requires a size of $4n+1$ for some integer $n\geq 1$.}. \item A CFL safety factor of 0.75. \end{itemize} The versatility of CDUGKS becomes very clear when considering a problem such as the Sod shock tube problem. Its asymptotic-preserving property allows us to recover the well-known analytic results in the inviscid regime and the collisionless regime, in addition to obtaining results in the Navier-Stokes and transitional regimes, all with the same code and for the same computational cost. Figure \ref{fig:sod_visc_results} shows the simulation results for the five runs with $\mu_r = 10^0, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-6}$ at $T_r = 1$, along with the analytic inviscid (Euler) and collisionless (CL) results. Notably, the run with negligible viscosity $\mu_r=10^{-6}$ matches the analytic Eulerian result fairly well. Similarly, the run with very large viscosity $\mu_r = 1$ matches the analytic collisionless result fairly well. For the latter run, a reference viscosity $\mu_r=1$ (mapping to $Kn \sim 3.2$) for a hard-sphere model was chosen to illustrate the point raised above about Navier-Stokes versus CDUGKS viscosity: there are infinitely many orders of magnitude above this value in parameter space for this (inverse) collisionality parameter, yet the results are already very close to the collisionless case.
Increasing the viscosity in CDUGKS to achieve $Kn\sim 10, 100$ may bring the result closer to the collisionless (CL) result, but increasing the viscosity effectively does not increase the momentum diffusion rate any further. In the Navier-Stokes model for fluids with conduction, the rate of momentum or energy diffusion would increase along with this parameter without bound (in addition to the growth rate of the respective diffusion length scales). One must be mindful about whether the microphysics in the problem considered allows for such phenomena. Since the Prandtl number is formally the ratio of the momentum diffusivity to the thermal diffusivity, the effects that the Pr has on the fluid dynamics (for values close to unity) are largest in the viscous Navier-Stokes and transitional regimes. To understand why, consider a fluid in the Eulerian regime. Changing the Pr from 2/3 to 3/2 while holding the viscosity fixed changes the thermal diffusivity from negligible to even more negligible. A similar situation arises in the large Knudsen limit. More precisely, this is due to the relaxation timescales $\tau_g$, $\tau_b$ being irrelevant if they are much larger or much smaller than the problem-specific hydrodynamical timescale (e.g. the sound-crossing time). If they are too small, then the distributions in question are effectively always in equilibrium; if too large, neither distribution relaxes much toward equilibrium at any point in space. In this paper, we demonstrate the effects of the Prandtl number for the hard-sphere model. Figure \ref{fig:sod_Pr_results} shows the results for the Sod shock tube problem with reference viscosity $\mu_r=10^{-3}$ for three different runs: Pr$=2/3, 1, 3/2$. \subsection{Kelvin-Helmholtz Instability} The Kelvin-Helmholtz instability (KHI) is a classic problem which demonstrates the rapid growth of vorticity that arises from small velocity perturbations in shearing flows of different densities.
In the inviscid limit and in the absence of gravity, such a perturbation (of any wavelength) grows exponentially at early times. In the Navier-Stokes regime, there exists some minimum perturbation wavelength for which there is rapid growth -- any perturbation on a scale smaller than this wavelength is damped by viscosity\footnote{One could say this is the low Reynolds number regime, although we caution against the use of the number since the traditional definition is not Galilean-invariant outside of the pipe setting.} \citep{KHImud, KHINSDampeningPaper}. In this work, a sinusoidal perturbation to the vertical velocity with a wavelength equal to the box size $\lambda = 1$ is chosen, with the following simulation parameters: \begin{itemize} \item Two internal degrees of freedom ($K=2$). \item A viscosity exponent of $w=0.5$, in accordance with the simple hard-sphere model from kinetic theory. \item A 2D velocity space discretized evenly with each component $\xi_i\in [-10, 10]$, resolved with 101 cells. \item A spatial resolution of $100^2$ cells. \item A CFL safety number of 0.8. \end{itemize} A ramp function is used to create smooth initial conditions whose finite difference gradients converge with increasing resolution. This avoids the well-documented ill effects of using initial conditions with a sharp interface, and provides results to compare against, since the inviscid result one obtains for the sharp interface with Enzo does not converge with increasing resolution \citep{Robertson_2010}. The ramp function used \citep{Robertson_2010} is given by \begin{equation} R(y) = \frac{1}{1 + e^{-\frac{2(y-0.25)}{\delta}}}\times\frac{1}{1 + e^{-\frac{2(0.75-y)}{\delta}}} \end{equation} with $\delta=0.05$. The horizontal velocity is ramped such that $v_x = -\frac{1}{2}v_r +R(y)v_r$ with relative velocity $v_r = 2.0$, and the density is ramped such that $\rho = \rho_0 + R(y)\delta\rho$ with $\rho_0 = 1$, $\delta\rho=1$.
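The ramped profiles can be sketched directly from the expressions above; the grid spacing and variable names below are illustrative.

```python
import numpy as np

delta = 0.05
v_r, rho0, drho = 2.0, 1.0, 1.0

def ramp(y):
    """Smooth ramp R(y) of Robertson et al. for the KHI initial conditions."""
    return 1.0 / (1.0 + np.exp(-2.0 * (y - 0.25) / delta)) \
         * 1.0 / (1.0 + np.exp(-2.0 * (0.75 - y) / delta))

y = (np.arange(100) + 0.5) / 100.0     # cell centers on [0, 1]
vx = -0.5 * v_r + ramp(y) * v_r        # shear profile: -1 outside, +1 inside
rho = rho0 + ramp(y) * drho            # density profile: 1 outside, 2 inside

print(vx[50], rho[50], vx[0], rho[0])  # mid-band vs background values
```

The double-sigmoid form makes both $v_x$ and $\rho$ infinitely differentiable across the two interfaces at $y=0.25$ and $y=0.75$, which is what makes the finite-difference gradients converge under refinement.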
The velocity perturbation is given by $v_y = \delta_y \sin(2\pi x)$ with $\delta_y=0.04$. The fluid is initialized at pressure equilibrium, with $P = 2$. A total of eight simulations were run, with $\mu_r = 10^{-2}, 10^{-3}, 10^{-4}, 10^{-6}$ at $T_r=1$ with $Pr = 1$. For each of the two values $\mu_r = 10^{-3}, 10^{-4}$, two additional simulations with Pr $= 2/3, 3/2$ were run to investigate the effect of the Prandtl number. The density at $t=1.6$ is presented in Figure \ref{fig:KHIrho}. Each row represents a different $\mu_r$, and each column a different Pr. A ninth simulation was run (in Enzo, at the same spatial resolution) for comparison. As can be seen in Figure \ref{fig:KHIrho}, the results for the near-inviscid $\mu_r = 10^{-6}$ simulation match the Eulerian result from Enzo very well. In general, increasing the Prandtl number sharpens the result, since the energy diffusion is decreased. As such, and as in the Sod shock tube, the correct Prandtl number for the monatomic gas (Pr$=2/3$) as expected from kinetic theory results in smoother fields. In addition, as the mean free path approaches the box size, the damping of the KH mode becomes stronger, as seen by the significant diffusion in the run with $\mu_r = 10^{-2}$. Such effects may be of interest in the context of mixing (e.g. heterogeneous vs homogeneous). \subsection{Shear Problem}\label{sec:shear} Consider a 2-dimensional periodic square domain $[0,1]^2$ with uniform density and temperature (and thus at pressure equilibrium), and a velocity profile \begin{equation} \bm u_0(x) = (0, \delta u \sin 2\pi x).
\end{equation} Within the Navier-Stokes framework with nontrivial constant viscosity, the evolution of $u_y$ for this problem is well-approximated by ignoring the pressure gradient and momentum flux terms, which results in the diffusion equation \begin{equation} \frac{\partial \bm u}{\partial t} = \nu \nabla^2 \bm u \end{equation} with solution \begin{equation} \label{eq:NSshear} \bm u (x, t) = (\bm u_0(x)-\Bar{\bm u}) e^{-4\pi^2 \nu t} + \Bar{\bm u} \end{equation} where for this problem the spatial average $\Bar{\bm u} = \langle \bm u_0(\bm x) \rangle =0$. The pressure gradient and momentum flux terms give rise to a thermoacoustic wave along the $x$-dimension which, although a rich problem in its own right, is not discussed here. Due to the simplicity of the problem and its emphasis on a non-KHI-exciting shear, it is a great playground for exploring the nature of shear in flows beyond the Navier-Stokes regime. We evolved these initial conditions in CDUGKS up to $t=4.0$ with the following simulation parameters: \begin{itemize} \item Two internal degrees of freedom ($K=2$). \item A viscosity exponent of $w=0.0$ (constant viscosity), to be able to compare to analytical approximations. \item A Prandtl number of Pr=$2/3$. \item A 2D velocity space discretized evenly with each component $\xi_i\in [-6, 6]$, resolved with 129 cells. \item A spatial resolution of $128\times1$ cells (there is zero $y$-gradient in any quantity of interest). \item A CFL safety number of 0.7. \end{itemize} Four simulations were run with $\mu_r = 10^0, 10^{-1}, 10^{-2}, 10^{-3}$. An important caveat must be addressed before presenting the results. Specifying initial conditions for a phase space method such as CDUGKS requires more than the first few moments of the distribution function; for simplicity, every problem studied in this paper assumes a Maxwell-Boltzmann distribution (equations \ref{geq} and \ref{beq}) initially when given the moments.
This initialization is only appropriate and uniquely defined in the Eulerian regime. Once given the freedom to deviate from equilibrium, a fluid can manifest an uncountable number of velocity and energy distributions for a given set of the typical 5 moments (mass, momentum, energy). Enforcing that the distribution give rise to a particular stress tensor, as in the Navier-Stokes system, is one way to constrain the problem and pick one out from the many. Inspecting the analytic expectation for the Navier-Stokes result for $u_y$ (equation \ref{eq:NSshear}), we see that the exponential viscous damping timescale $\tau_{\nu} = (4\pi^2\nu)^{-1} $ shrinks without bound with increasing $\nu$ for fixed initial conditions. Additionally, the analytic approximation should in principle get better with increasing $\nu$, as the viscous timescale becomes a smaller and smaller fraction of the sound-crossing time. In a phase space scheme, the only way momentum can be advected elsewhere is if mass that carries momentum is advected elsewhere. Thus, two things must be true in order to satisfy this analytic NS result for larger and larger viscosity in addition to the conservation laws within CDUGKS. Firstly, one must modify the higher moments of the initial velocity distribution so as to increase the interfacial momentum flux while fixing the temperature and mean momentum. Secondly, the equations underlying CDUGKS (i.e. equations \ref{eq:BGK}, \ref{gPDE}, \ref{bPDE}) must be able to produce and perpetuate distributions with such moments; it is not clear whether that is indeed the case. In any case, this poses a computational problem, since our implementation of CDUGKS uses Newton-Cotes quadrature to integrate a truncated velocity space, which could not feasibly integrate distributions with very high kurtosis or generally larger higher moments. Alternative initializations for the velocity distributions could be explored with this code.
The distribution \begin{equation} \label{eq:NSdist} f_1 = f_\text{eq} \bigg( 1 - \frac{1}{n}\sqrt{2RT} \bm{A}\cdot \nabla \ln T - \frac{2}{n} \bm B :\nabla \bm u\bigg) \end{equation} for some vector $\bm{A}$ and rank 2 tensor $\bm{B}$ and the equilibrium distribution $f_\text{eq}$ that arises from the first-order Chapman-Enskog expansion \citep{MathematicalTheoryofGases} was not considered for this problem, since for a small enough hard-sphere diameter $\sigma$ it takes negative values at speeds only a few times the sound speed away from the mean. In particular, using the work of \cite{ChapmanThesis} to produce a 150th order accurate expansion for $\mathscr{C}^2\hat{B}(\mathscr{C})$ for values of $\mathscr{C} = |\bm{v} - \bm{u}|/\sqrt{2 R T}\in [0,6]$ for the hard-sphere model, we find that for the shear problem with $\mu_r = 1$, one gets negative values of $f$ for some of the values of $\mathscr{C}\in [0,6]$. This is simply because the values of $\bm{B}$ at fixed $\mathscr{C}$ scale with $\sigma^{-2}$, and so the final term in \eqref{eq:NSdist} diverges for smaller $\sigma$. Thus, as expected, using this truncated distribution from the first-order Chapman-Enskog expansion is nonphysical for the simulations with large mean free paths. With no clearly good alternative for the simulations with larger mean free paths, the Maxwell-Boltzmann distribution was used for initialization in all runs\footnote{It is worth noting that, due to the non-Maxwellian initial conditions used by the Navier-Stokes approximation, one is actually implicitly considering a different set of phase space initial conditions when changing the viscosity parameter in the Navier-Stokes model for the same set of initial moments $\rho$, $\rho v$, $\rho E$.}. This problem highlights the ability of CDUGKS to model flows beyond the Navier-Stokes regime, as it shows that we are very far from equilibrium in this region of parameter space.
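The Navier-Stokes reference decay of \eqref{eq:NSshear} can be cross-checked with a small explicit finite-difference solve of the 1D diffusion equation. This sketch is illustrative only (the comparison in the paper uses Athena++); the grid, time step, and names are our own choices.

```python
import numpy as np

# Sine-profile shear decays as exp(-4 pi^2 nu t) under du/dt = nu d2u/dx2.
nu, du = 1e-2, 1.0
nx = 128
x = (np.arange(nx) + 0.5) / nx
u = du * np.sin(2.0 * np.pi * x)       # initial u_y(x)

dx = 1.0 / nx
dt = 0.25 * dx * dx / nu               # stable explicit diffusion step
t, t_end = 0.0, 1.0
while t < t_end:
    # periodic second difference (the domain is a periodic box)
    u += nu * dt * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx ** 2
    t += dt

print(u.max(), du * np.exp(-4.0 * np.pi ** 2 * nu * t))
```

The numerical maximum of $u_y$ tracks the analytic exponential closely, which is exactly the behavior that the Maxwell-Boltzmann-initialized CDUGKS runs fail to reproduce at large $\mu_r$, since no such unbounded diffusion operator exists in the kinetic equations.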
Figure \ref{fig:shear} shows the evolution of the maximum value of $v_y$. As one increases the viscosity, the Navier-Stokes result (here obtained with simulations using Athena++ \citep{Athena++} with equal values for the kinematic viscosity and thermal diffusivity, shown with dashed lines) diverges from the CDUGKS result for identical initial conserved macroscopic quantities. If the Navier-Stokes equations were valid for all Kn, our analytic approximation would become better with increasing $\nu$, since the absolute value of the diffusion term would be larger than that of all other terms in the PDE. However, we see the effect described in the previous section. Namely, without enough mass carrying enough momentum in the tails of the distribution that we track (and without a mechanism for distributions with larger higher moments to emerge), there is simply no way to match the Navier-Stokes result with a Maxwell-Boltzmann initialization. A more insightful point, mentioned previously, is that there seems to be a maximum momentum diffusion rate as predicted by CDUGKS (and other BGK-based models). This shows up in the slope of the lines in Figure \ref{fig:shear}; $\frac{d v_\text{y}}{dt}|_{t=0}$ seems to converge to some maximum value with increasing $\mu_r$. \subsection{Thermoacoustic Wave} This test problem simply aims to display how conduction is treated in the code since, as for viscosity, there is no explicit diffusion term as there is in Navier-Stokes solvers. The problem begins with an ideal gas at pressure equilibrium, with $\rho(x) = 1 - \delta \sin 4\pi x$, $RT(x) = P_0/\rho$, and $P_0 =1$, with the following simulation parameters: \begin{itemize} \item Two internal degrees of freedom ($K=2$), \item A viscosity exponent of $w=0.5$, with $\mu_r = 10^{-3}$.
\item A 1D velocity space discretized evenly with $\xi\in [-10, 10]$ resolved with 513 cells, \item A spatial resolution of $512$ cells, \item A CFL safety number of 0.8. \end{itemize} The initial conditions were evolved until code time $t=4$ for three simulations with Pr $=2/3, 1, 3/2$, all with $\delta = 0.05$. Figure \ref{fig:TA1} shows a snapshot of the simulation at $t=2$ for the three runs. Here one can see how, after roughly one and a half sound crossing times ($t_s\sim1.2$), the different Prandtl numbers (thermal diffusivities) bring about different density, velocity and temperature profiles. Since the reference viscosity was fixed, these differences are due entirely to how the energy moves around and relaxes toward equilibrium. Figure \ref{fig:TA2} shows the time evolution of density and velocity at one of the two cells that are initialized with the maximal density $\rho_\text{max} = 1 + \delta = 1.05$. Notice how the run with $Pr<1$ has a velocity (in the initially densest cell) that dips well below zero, while the density increases slightly and briefly. \begin{comment} \subsection{Blob Test (Cloud Crushing Problem)} The Blob Test, also known as the Cloud Crushing Problem, is an astrophysical problem that is actively being researched due to its important implications on gas cloud stability \citep{Agertz_2007_BlobTest}. Other works have studied the influence of e.g. cooling, magnetic fields, and cosmic ray pressure on the evolution and stability of the blob [CITE]. Here, we ignore all such effects and aim to outline the intrinsic early-time pure fluid-dynamical effects on the evolution and stability of the blob as predicted by CDUGKS, as this problem typically concerns itself with rather dilute gases such as those in the circumgalactic medium. 
Since cooling is not incorporated, and due to the high computational cost of CDUGKS, the extra space typically present in the box has been cut off and the appropriate inflow/outflow boundaries are applied on the left and right edge. Also due to the high computational cost, we limit ourselves to a 2-dimensional version of the problem with the same density contrast as the original 3-dimensional problem. The simulations are run with the following parameters: \begin{itemize} \item Zero internal degrees of freedom ($K=0$), \item A viscosity exponent of $w=0.5$, consistent with the hard-sphere model. \item A 2D $100\times100$ velocity space discretized evenly with $\xi_x\in [-15, 25]$ and $\xi_y\in [-20,20]$. \item A spatial resolution of $100\times100$ cells, \item A CFL safety number of 0.3. \end{itemize} Two simulations with $\mu_r = 10^{-3}, 10^{-5}$ were run up to $t = 1.25t_{KH}$, where $t_{KH}$ is the Kelvin-Helmholtz time equal to roughly $1.6$ times the cloud crushing time $t_{cc}$, as in \citep{Blob}. Figure \ref{fig:BT1} shows the results of the two simulations. In the less viscid fluid, two density peaks are visible at $t=1.25t_{KH}$ -- bifurcation has occurred, as in the SPH simulations in the original blob test paper by \citep{Agertz_2007_BlobTest} and others. In the more viscid fluid, only one density peak can be found. This high viscosity has thus dampened out the bifurcation mode at the cost of some self-diffusion. The corresponding mean free path is roughly $\lambda \sim 0.0032$ times the box size. Keep in mind that, due to the high computational cost, only a spatial resolution of $100^2$ was used. Whether this may be relevant to any real astrophysics problem in three dimensions is left as an exercise to the reader. \end{comment} \subsection{Gresho Vortex} The Gresho vortex problem is the cylindrically symmetric cousin of the shear problem covered in \cref{sec:shear}. 
Thus, here we overlook viscosity and conduction and only analyze the angular momentum conservation properties of CDUGKS in the near-inviscid regime. This is typically a concern for methods that discretize velocities, since the discretization (both into a spatial Cartesian grid and into a Cartesian grid in velocity space) is not rotationally symmetric for all rotation angles $\phi_R \in (0,2\pi)$, so some loss of angular momentum is expected during a rotation. The initial conditions are similar to the original triangular vortex \citep{GreshoPaper}, with the azimuthal velocity profile \begin{align} u_\phi(r) = \begin{cases} 5r & r\leq 0.2 \\ 2-5r & 0.2 < r \leq 0.4 \\ 0 & r > 0.4 \end{cases} \end{align} and complementary pressure profile \begin{align} P(r) = \begin{cases} P_0 + \frac{25}{2}r^2 & r\leq 0.2 \\ P_0 + \frac{25}{2}r^2 + 4( 1 - 5r - \log 0.2 + \log r) & 0.2 < r \leq 0.4 \\ P_0 - 2 + 4\log 2 & r > 0.4 \end{cases} \end{align} with $P_0 = \frac{\rho u_{\text{max}}^2}{\gamma M^2} = 5$, constant density $\rho = 1$, $\gamma = 7/5$, $M = 1/\sqrt{7}\approx 0.378$, and $u_{\text{max}} = 1$. Three simulations were run with the following parameters: \begin{itemize} \item Two internal degrees of freedom ($K=2$), \item A viscosity exponent of $w=0.5$, with $\mu_r = 10^{-5}$, \item A 2D velocity space discretized evenly with $\xi\in [-10, 10]$ resolved with 101 cells, \item A spatial resolution of $100\times 100$ cells, \item A CFL safety number of 0.8. \end{itemize} The first simulation was simply the original problem. The second simulation added a bulk motion $\delta v_x = 3u_\text{max}/(2\pi r_\text{max}) = 15/(2\pi)$ so that the center of mass would traverse the box three times in the time it takes the fastest ring ($u_\text{max} = 1$ at $r_\text{max} = 0.2$) to complete a rotation. 
The third simulation used the original initial conditions but with a $51 \times 51$ velocity resolution on the same velocity space domain, to explore the resolution dependence of the angular momentum conservation. For each simulation, the initial conditions were evolved for a full rotation of the fastest ring (initially at $r = 0.2$), i.e. for a total time $t_\text{sim}=2\pi/5$. The results are well summarized by Figure \ref{fig:GV1}, showing the velocity profiles from all three simulations along with the initial conditions and the residuals from the initial conditions. As expected, both the moving simulation and the simulation with only a $51\times 51$ velocity space resolution are lossier than the stationary simulation. \subsection{Sine Wave Perturbation} The sine wave perturbation problem is often explored in the context of the Vlasov-Poisson system. Here we study a similar problem with no gravity. Since this version of CDUGKS uses a representation of the phase space distribution at a fixed, discrete set of velocities, it cannot represent a perfectly cold fluid. Furthermore, it becomes computationally intensive to represent any hypersonic flow, since one must resolve the width of each stream in velocity space while also covering the full range of bulk flow velocities present in a simulation. The degree of this computational strain can be quantified by comparing the minimum and maximum characteristic thermal velocity $\sqrt{2 R T}$ to the minimum and maximum residual of the bulk velocity from the mean, $\bar{\bm u} - \bm{u}(\bm{x})$. Both the minimum and maximum of each quantity have to be compared because similar situations arise for very hot gases, which need larger velocity space domains for some bulk velocity profile to represent all of the mass to machine precision. 
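This bookkeeping can be made concrete with a short sizing sketch. The function name, the $4\sigma$ thermal margin, and the cells-per-thermal-width choice below are our own illustrative assumptions, not quantities taken from the code; a production run would typically use a far finer grid for accuracy.

```python
import numpy as np

def velocity_grid_1d(u, RT, margin=4.0, cells_per_width=8):
    """Rough sizing heuristic for a 1D velocity grid.

    The domain must span the full range of bulk velocities plus a thermal
    margin set by the hottest gas, while the cell size must resolve the
    thermal width sqrt(2*R*T) of the coldest stream.
    """
    vth_min = np.sqrt(2.0 * np.min(RT))   # narrowest stream to resolve
    vth_max = np.sqrt(2.0 * np.max(RT))   # widest stream to contain
    lo = float(np.min(u) - margin * vth_max)
    hi = float(np.max(u) + margin * vth_max)
    n_cells = int(np.ceil((hi - lo) * cells_per_width / vth_min))
    return lo, hi, n_cells

# Example: a cold gas (RT = 0.1) with bulk velocity +-1, as in the
# sine-wave problem considered in this section.
x = np.linspace(0.0, 1.0, 128, endpoint=False)
lo, hi, n = velocity_grid_1d(np.sin(2.0 * np.pi * x), 0.1 * np.ones_like(x))
```

Colder streams or faster bulk flows inflate both the domain extent and the cell count simultaneously, which is the hypersonic strain described above.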
In any case, in this paper we explore the problem with the cold (finite temperature) initial profile \begin{align} \rho(x) & = \rho_0 \\ v(x) &= v_0\sin 2\pi x \\ E(x) &= C_v T_0 + \frac{1}{2}v(x)^2 \end{align} with $\rho_0 = 1$, $v_0 = 1$, $T_0 = 0.1$, and $C_v =(3 + K)R/2 = 1$. We evolve these initial conditions forward for reference viscosities $\mu_r$ of $10^{-5}$, $10^{-3}$, $10^{-2}$, $10^{-1}$, and $10^{1}$, roughly representative of the behavior all the way from the Eulerian to the free streaming regime. We use \begin{itemize} \item Two internal degrees of freedom ($K=2$), \item A viscosity exponent of $w=0.5$, \item A 1D velocity space discretized evenly with $\xi\in [-6, 6]$ resolved with 16385 cells, \item A spatial resolution of $128$ cells, \item A CFL safety number of 0.5. \end{itemize} There are a few temporal points of interest for this problem. The mass distribution converges near the center of the grid to some peak density and then (depending on the fluid regime) can form two shocks that propagate away from the center. Figure \ref{fig:SWC_density_evolution} shows the density evolution at several points in time for the different reference viscosities. It is also quite informative to inspect the phase space evolution (Figure \ref{fig:SWC_phase_evolution}). This showcases yet again the ability of CDUGKS to simulate fluids in different regimes at fixed computational cost; the code captures the formation of shocks and non-equilibrated streams (with multi-modal velocity distributions) and propagates them. \section{Code Discussion} \label{sec:code_discussion} The code was implemented in Regent, a language featuring implicit parallelism, built on the Legion runtime \citep{Legion}. In addition to the vast physical differences between these phase space methods and the traditional 3D hydro methods used in astrophysics, there are some important algorithmic differences that affect computational performance. 
Firstly, since CDUGKS uses the discrete velocity method with static grid boundaries, there exists no mass with speeds larger than the largest speed being tracked. Thus, the CFL condition for the timestep becomes \begin{equation} \Delta t = \alpha \frac{\Delta x_\text{min}}{2|\bm\xi_{j,\text{max}}|} \end{equation} for $\alpha \in [0,1]$, where $\Delta x_\text{min}$ is the smallest spatial cell width and $|\bm\xi_{j,\text{max}}|$ the largest discrete speed tracked. Note that the original CDUGKS paper has $|\bm\xi_{j,\text{max}} + \bm{u}_{j,\text{max}}|$ in the denominator, but we simply replace the largest average bulk velocity with the largest possible microvelocity to remove the global synchronization required across all nodes. This has its pros and cons. For small problems that can be run on a single node or laptop, one is limited to time steps smaller than those of usual hydro methods, which typically only compare the values of bulk velocities to several local wave speeds (e.g. sound, Alfv\'en). However, in larger problems that require many nodes to run in a reasonable amount of time, such a CFL condition enables distributed compute without global synchronizations; with a static spatial and velocity grid, the same time step is used over and over again. The calculation of the conserved variables at the boundary, which is required to compute the equilibrium distribution $\phi^{n+1/2}_\text{eq}$ at the boundary, adds a computational strain not seen in typical 3D hydro codes -- this is what makes it a 6D code. For every spatial cell in a $D$-dimensional problem, one must perform $D$ numerical integrations over all of velocity space for the $D$ right boundaries. This can thus be thought of as using a local spatial stencil plus a global velocity space stencil. Due to these global velocity space integrals, done for every spatial cell, the parallelization is done strictly over spatial subregions. 
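The per-cell velocity integrations amount to taking discrete moments of the distribution. A minimal 1D sketch of that quadrature (our own notation, not the Regent implementation):

```python
import numpy as np

# Discrete velocity method: every spatial cell (or face) stores f(xi_j),
# and recovering the conserved variables requires a sum over the WHOLE
# velocity grid -- the "global velocity space stencil" described above.
xi = np.linspace(-10.0, 10.0, 513)  # static velocity grid
dxi = xi[1] - xi[0]

def moments(f):
    """Conserved moments (rho, rho*u, rho*E) by uniform-grid quadrature."""
    rho = dxi * np.sum(f)
    mom = dxi * np.sum(xi * f)
    erg = dxi * np.sum(0.5 * xi**2 * f)
    return rho, mom, erg

# Sanity check against a Maxwellian with rho = 1, u = 0.3, RT = 1:
# the recovered moments should be (1, 0.3, 0.5*(u**2 + RT)).
u, RT = 0.3, 1.0
f_eq = np.exp(-(xi - u)**2 / (2.0 * RT)) / np.sqrt(2.0 * np.pi * RT)
rho, mom, erg = moments(f_eq)
```

In $D$ spatial dimensions this sum runs over the full $D$-dimensional velocity grid for every cell face, which is why the parallel decomposition keeps all of velocity space local to a node.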
In other words, the grids are not broken up in velocity space. This parallelization strategy will not work for very large problems where the phase space distribution for a single spatial point does not fit in a single node's memory. However, it will work for many large problems. \section{Conclusion and Future Work}\label{sec:conclusion} In this paper, we introduced our open source implementation of CDUGKS. We explored how CDUGKS can be used to simulate gases across a wide range of mean free paths, showcasing the asymptotic preserving property, including how it can be used to test the validity of the Eulerian, Navier-Stokes, and free streaming (collisionless) approximations. We discussed how it becomes nontrivial to pick initial conditions for problems outside of the Eulerian/Navier-Stokes regimes, as well as some of the properties of non-equilibrium velocity distributions. We showcased the ability of CDUGKS to simulate gases with different Prandtl numbers, and to capture and propagate features such as shocks and even multi-modal velocity distributions (overlapping streams). There are many ways to extend this work, and here we mention some of them. The first is to use the algorithm to characterize the circumstances in which instabilities grow beyond the Navier-Stokes regime. For the Kelvin-Helmholtz instability, for example, the minimum perturbation wavelengths that grow in the presence of viscosity are known \citep{KHINSDampeningPaper}. However, it is unclear how this relationship changes in the transitional regime just beyond the Navier-Stokes regime. Another extension is to implement a 3D (1 spatial, 2 velocity) version of the code that is able to simulate spherically symmetric problems, so as to study things such as the spherically symmetric gravitational collapse of weakly interacting classical particles. Yet another way is to incorporate more physics into the code (e.g. 
simple cooling models) to extend the domain of applicability -- algorithms that include electromagnetism and relativity exist \citep[see][]{relativisticBGKVlasovMaxwell}, but there is no open source implementation of them. \section*{Acknowledgements} The Legion team was an incredibly helpful resource for Regent, Legion, and parallel programming generally and they have our thanks. In addition, this research was made possible by the Stanford Sherlock HPC Cluster, on which these simulations were run. \section*{Data Availability} All of the physics problems discussed in this paper are test problems that are available in the source code. The simulation data can be generated easily (even on a personal computer) by running the code with the proper test problem ID. Refer to the code repository for instructions on how to generate the data. Many of the plotting routines are also available in the repository. \bibliographystyle{mnras} \bibliography{example} % \bsp % \label{lastpage}
Title: JWST unveils heavily obscured (active and passive) sources up to z~13
Abstract: A wealth of extragalactic populations completely missed at UV-optical wavelengths has been identified in the last decade, combining the deepest HST and Spitzer observations. These dark sources are thought to be very dusty and star-forming systems at 3<z<5, and major contributors to the stellar mass build up. JWST is now promising to detect such objects well beyond the end of the Epoch of Reionization. In this Letter we report an investigation of the deep JWST survey in the SMACS0723 cluster, analysing NIRCam and MIRI images. We search for sources in the F444W band that are undetected in the F200W catalogues. We characterise the main properties of these sources via detailed SED modelling that account for a wide set of parameters and star formation histories, after a careful determination of their photometry. Among a robust sample of 20 candidates, we identify a mixed population of very red sources. We highlight the identification of candidate evolved systems, with stellar masses M*~10^(9-11)Msun at 8<z<13 characterized by unexpectedly important dust content at those epochs (Av up to ~5.8mag), challenging current model predictions. We further identify an extremely red source (F200W-F440W~7mag) that can be reproduced only by the spectrum of a passive, quenched galaxy of M*~10^11.8Msun at z~5, filled of dust (Av~5mag).
https://export.arxiv.org/pdf/2208.02825
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} galaxies: high-redshift -- galaxies: evolution -- infrared: galaxies \end{keywords} \section{Introduction} \label{intro} The statistical identification of galaxies at various cosmic epochs is key to understanding their formation and evolution. In the deepest extragalactic fields, multiwavelength observational efforts (from the X-ray to the radio spectral region) allowed for a reconstruction of the distinct galaxy populations and their formation history. The measurement of the star formation rate density (SFRD) is a key finding (e.g. \citealt{madaudickinson}): the SFRD peaked at $z\sim1-3$ and then declined quickly toward the present epoch. However, recent studies have suggested that the portion of the SFRD hidden by dust, and so unaccounted for by optical/UV surveys at $z > 2$, is not negligible and is likely to increase with redshift at least up to $z\sim5-6$ (e.g. \citealt{Novak_2017}, \citealt{Gruppioni_2020}). Thus, a thorough investigation of high-redshift galaxies is crucial for our comprehension of the early epochs of galaxy stellar mass growth. The more traditional method of analyzing sources at $z > 3$ relies on their broadband colors, as they allow one to identify the presence of a brightness drop (i.e. the Lyman Break or the Lyman Forest). Such objects are known as Lyman-Break Galaxies (LBGs). Although this method is relatively simple to use, it is also significantly incomplete and contaminated. In particular, the LBG selection is known to be significantly biased towards relatively young, massive and star-forming galaxies \citep[$M_*\gtrsim 10^{11}M_\odot$;][]{Giavalisco02,Shapley11,Dunlop13}, missing systems with heavy dust obscuration in the UV spectral region. As a result, LBGs are actually more likely to exclude the more massive galaxies due to their increased dust content (e.g. \citealt{Whitaker17}). 
Other independent near-IR color schemes have thus been suggested to extend the census of the high-$z$ population, in particular thanks to the {\it Spitzer} space telescope. \citet{Wang2016} provide a strategy that allows for a rather clear selection of $z>3$ galaxies. For example, the color cut $H-[4.5]>2.25$ is proposed to select old or dusty galaxies at $z>3$ (called HIEROs) that are completely missed even by the deepest {\it Hubble} Space Telescope ({\it HST}) imaging. Such UV-optically dark objects are dominated by obscured and dusty systems, including submillimeter sources \citep{wang19}. HIEROs have typical $M_*\gtrsim 10^{10}M_\odot$ and SFR $\gtrsim 200\,M_\odot\,{\rm yr^{-1}}$ at an average $z\sim4$. They exhibit low number densities and active star formation, contributing up to $\sim20\%$ of the SFRD at $z\sim3-5$ \citep[e.g. ][]{Gruppioni_2020,Sun2021,Talia2021,Enia2022}, and up to $50\%$ of the bright end of the stellar mass function \citep{Rodighiero2007}. A more extreme class of objects revealed at millimeter wavelengths includes optically dark sources undetected even in deep Spitzer images \citep[e.g.][]{Yamaguchi19,williams19,Gruppioni_2020}. However, their contribution to the SFRD is still uncertain given their low statistics. Despite the significance of these galaxies, most of their physical characteristics are still very speculative, with the exception of a few spectroscopic confirmations \citep{wang19,Casey19,Umehata20,Caputi2021,Irakashi22}. The {\it James Webb Space Telescope} ({\it JWST}) has just opened a new window on the distant Universe, allowing, through its near-to-mid-IR eyes, the detection of farther and fainter sources. In this Letter we exploit the first deep field imaging offered by the early release observations (ERO) to demonstrate the ability of {\it JWST} to identify optically and near-IR dark sources missed even by {\it Spitzer} because of their fainter luminosities. 
We present the detection and characterization of sources selected in the longest NIRCam filter, F444W, lacking a detection in the F200W band in blindly extracted catalogs (i.e. F200W dropouts). By combining the Near-Infrared Camera \citep[NIRCam;][]{Rieke2005} and Mid Infrared Instrument \citep[MIRI;][]{RiekeG2015} photometry, we investigate the properties of the most robust detections, highlighting: i) the discovery of extremely dusty, low-mass systems, filling the faint end of the HIEROs mass function; ii) the potential identification of massive quenched galaxies at $z\sim5$; iii) the confirmation of high-$z$ systems ($8<z<13$) as already probed by {\it JWST} \citep{Adams2022,Atek2022,Carnall2022,Castellano2022,Donnan2022,Finkelstein2022,Naidu2022}, but with large dust content. Throughout the paper, we consider a $\Lambda$CDM cosmology with $H_0=70\,{\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega_M=0.27$, $\Omega_\Lambda=0.73$ and a \citet{Kroupa2001} stellar Initial Mass Function; all magnitudes are in the AB system. \section{JWST OBSERVATIONS OF SMACS0723} The Reionization Lensing Cluster Survey \citep[RELICS,][]{Coe2019} dedicated 188 HST orbits and 946 Spitzer hours, observing 41 of the most massive galaxy clusters discovered by Planck at redshift $z\sim 0.2-1.0$. The relatively deep ACS and WFC3/IR imaging, spanning 0.4-1.7 $\mu$m, has been used to derive accurate magnification maps of these clusters. The cluster SMACS J0723.3$-$7327 (hereafter SMACS0723) at $z=0.39$ has two lens models (Glafic and Lenstool), publicly available in the RELICS repository\footnote{ \url{https://archive.stsci.edu/missions/hlsp/relics/smacs0723-73/models/}}. According to these maps, the magnification region at $\mu\ge100$ is relatively extended, allowing the selection of galaxies at $z\sim 6-8$ \citep{salmon}. 
The observations of SMACS0723 by {\it JWST} marked the official beginning of the science operations of this highly promising observatory \citep[][]{2022arXiv220713067P}. The first Webb images and spectra of SMACS0723, of remarkably high quality, were obtained in particular with the instruments NIRCam and MIRI. The ERO program aims at demonstrating the ability of {\it JWST} to image high-redshift galaxies, at a depth unrivaled by {\it HST} or ground-based telescopes. \subsection{NIRCam images} \label{NIRCAM} The NIRCam instrument targeted SMACS0723, pointing one detector on the cluster, and the other detector on the adjacent off-field. The NIRCam filters F090W, F150W, F200W, F277W, F356W, and F444W have been exposed for $\sim 7500$ seconds each, resulting in a 5$\sigma$ sensitivity limit of $\sim 28.5-29.6$ AB magnitude for point-like sources. These depths are equivalent to the ones obtained with WFC3/IR for the HUDF12 pointing \citep{Koekemoer}, and they are a factor of 10 more sensitive than the deepest {\it Spitzer}/IRAC imaging available at 3.6 and 4.5 $\mu$m. The SMACS J0723.3-7327 JWST observations include two NIRCam modules, each observing with a $2.2'\times2.2'$ Field of View (FoV; one centered on the cluster BCG, the other offset by $3'$). The NIRCam reduced images have been retrieved from the Mikulski Archive for Space Telescopes (MAST)\footnote{\url{https://archive.stsci.edu}.}. The official reduction has overall good quality, with slight alignment offsets between the different bands. In order to overcome this issue, we decided to construct a first catalog in each band using SExtractor \citep[][]{Bertin96}, matching a posteriori the catalogs in absolute coordinates. The matched catalog has been used in order to pre-select high-$z$ galaxy candidates with the dropout technique. As described in Section 2.3, a detailed photometric analysis has been applied to the relevant sources only. 
We adopt the calibration software version 1.5.3 and the updated NIRCam photometric calibration\footnote{\url{https://jwst-crds.stsci.edu/context_table/jwst_0942.pmap}} released on the 29th of July 2022 by the Space Telescope Science Institute. \subsection{MIRI images} The MIRI instrument observed the central region of SMACS0723 with the F770W, F1000W, F1500W and F1800W filters. The MIRI observations cover only part of the NIRCam field, given the difference in the field-of-view of the two instruments (i.e. $112.6^{\prime\prime}\times 73.5^{\prime\prime}$ for MIRI, two $2.2^\prime\times2.2^\prime$ for NIRCam). Unlike NIRCam, the MIRI fully reduced images available on MAST show the presence of strong background patterns (e.g. vertical striping and gradients). The prominence of these features varies significantly among filters. Since such a background could impact the number of detections and photometric quality of our sample, we decided to re-run the {\it JWST} pipeline\footnote{\url{https://jwst-pipeline.readthedocs.io/en/latest/index.html}.} (v. 1.6.1) adding an additional step to improve the background cleaning and homogenization. The final result is not purely cosmetic: a SExtractor run on the final image shows that we are able to minimize spurious detections while maximizing the number of real sources. Besides, the magnitude of bright sources is not affected. This ensures that the extra cleaning of the background does not affect our magnitude estimates. As for NIRCam, we use SExtractor to correct the astrometry of the MIRI images. \subsection{Sample selection} We propose to identify potential high-$z$ and/or dusty sources by selecting F200W dropout candidates in the SMACS0723 {\it JWST} deep field. Indeed, these objects could be interpreted either as: i) $z>10$ LBGs; ii) heavily extinguished galaxies; or iii) red and dead passive sources at $3<z<6$. We start from the SExtractor photometry in the different NIRCam bands (see Sect. 
\ref{NIRCAM}) and cross-match the single-band extractions to the F444W catalog adopting a 0.2 arcsec search radius. We look for sources with an F444W detection brighter than 29 mag [AB] (above the 5$\sigma$ depth) and we select a sample of F200W dropouts that lack a counterpart in the F200W band extracted from SExtractor (see Section 2.1). We visually inspected each candidate, removing all spurious or contaminated objects. We identify a robust sample of 20 sources, for which we perform a refined photometric measurement (see Section \ref{photometry}), in order to avoid biases due to local background variations in the NIRCam and MIRI maps. The coordinates and multiwavelength fluxes of the final sample are presented in Table 1 (available as online material). {\it We note that some non-detections at wavelengths shorter than F200W in the preliminary SExtractor catalogs are instead very faint detections after our detailed photometric analysis.} The nature of the sources will be investigated through SED fitting in Section \ref{res}. \subsection{Ad hoc source photometry} \label{photometry} Our photometric analysis is based on the extraction of cumulative light profiles from sky-subtracted images after the removal of contamination from point-like and extended sources. We provide a brief description of our method below, using the NIRCam F444W image of target \#15 as a working example (Fig.\,\ref{fig:photo_example}), but the same approach is used for all the other {\it JWST} images. More detailed information on the method is provided in \citet{Marasco22}. We first extract cutouts of $50\times50$ pixels, centered at the coordinates of the target. Each cutout is visually inspected for the presence of major contaminants (such as an off-centered bright galaxy, or the diffraction figure from a nearby source), which are manually masked and excluded from the analysis. 
The image is then partitioned into a `sky' and a `galaxy' region, via an ellipse (shown in red in the left panel of Fig.\,\ref{fig:photo_example}) whose size, axial ratio and orientation are manually selected after visual inspection. The image background $b$ and noise $\sigma$ are determined by modelling the 1D pixel intensity distribution in the sky region with the sum of a Gaussian function, whose mean and standard deviation correspond to $b$ and $\sigma$, and a Schechter function, which accounts for minor contaminants (such as a population of faint, unresolved sources) within this region. This procedure is illustrated in the inset of the left panel of Fig.\,\ref{fig:photo_example}. The background is subtracted from the cutout before the next analysis steps. The galaxy region is partitioned into a series of rings that are used to extract the radial profile and the growth curve (right panel in Fig.\,\ref{fig:photo_example}) by replacing masked pixels in each ring with the mean intensity computed in that ring. Profiles are truncated where the signal-to-noise ratio (SNR) in a given ring drops below unity: this corresponds to measuring fluxes using a \emph{variable} aperture, with a size that is tuned to the properties of each target. Targets with sufficiently good SNR feature a visible flattening in their growth curve, which is a key sanity check on their photometry. MIRI fluxes are corrected for aperture effects, using simulated MIRI point spread functions\footnote{\url{https://jwst-docs.stsci.edu/jwst-mid-infrared-instrument/miri-performance/miri-point-spread-functions}}. We do not implement corrections for NIRCam fluxes, given that the adopted apertures are large enough to enclose virtually all of the PSF light. Flux uncertainties are determined with a Monte-Carlo technique: we re-compute the flux $N$ times by injecting Gaussian noise into the image, and take the standard deviation of the resulting flux distribution as our fiducial uncertainty. 
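This Monte-Carlo error estimate can be sketched as follows. The snippet is a schematic with invented names, not the authors' pipeline, and it uses a fixed pixel mask rather than the variable aperture described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_flux_error(image, aperture_mask, sigma, n_trials=500):
    """Flux uncertainty by noise injection: perturb the image with
    Gaussian noise at the measured background level sigma, re-measure
    the aperture flux, and take the standard deviation over trials."""
    fluxes = [
        float(np.sum((image + rng.normal(0.0, sigma, image.shape))[aperture_mask]))
        for _ in range(n_trials)
    ]
    return float(np.std(fluxes))

# Toy check: a 10-pixel aperture with unit per-pixel noise should give
# an uncertainty close to sqrt(10) ~ 3.2.
image = np.zeros((50, 50))
mask = np.zeros_like(image, dtype=bool)
mask[20:22, 20:25] = True  # 2 x 5 = 10 pixels
err = mc_flux_error(image, mask, sigma=1.0)
```

For uncorrelated pixel noise this reproduces the analytic $\sigma\sqrt{N_{\rm pix}}$ scaling, while on real images it also folds in any correlated residual background within the aperture.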
Finally, our procedure is fairly robust against small variations in the target center. Visual inspection of our cutouts indicates that target coordinates can be kept fixed for all filters of a given instrument, but must be adjusted from MIRI to NIRCam (by typically 1.3\arcsec) to account for astrometric offsets between the two instruments. \begin{comment} \begin{table*} \resizebox{1.03\textwidth}{!}{ \centering \begin{tabular}{ccc|ccccccccccc} ID & RA & DEC & f$_{F090W}$ & f$_{F150W}$ & f$_{F200W}$ & f$_{F277W}$ & f$_{F356W}$ & f$_{F444W}$ & f$_{F770W}$ & f$_{F1000W}$ & f$_{F1500W}$ & f$_{F1800W}$ \\ & h m s & d m s & nJy & nJy & nJy & nJy & nJy & nJy & nJy & nJy & nJy & nJy\\ \hline JWST-S1 & 7:23:16.79 & -73:26:41.72 & 0.15$\pm$0.39 & 0.31$\pm$0.27 & 7.24$\pm$0.74 & 15.31$\pm$1.53 & 33.84$\pm$3.38 & 93.67$\pm$9.37 & 353.59$\pm$77.16 & 393.37$\pm$155.32 & 0.00$\pm$36.66 & 178.68$\pm$165.84 \\ % 2 & 7:22:50.39 & -73:28:17.63 & 0.29$\pm$0.42 & 0.01$\pm$0.33 & 1.03$\pm$0.29 & 148.9$\pm$14.89 & 289.31$\pm$28.93 & 581.58$\pm$58.16 & -- & -- & -- & -- & \\ % 3 & 7:22:48.73 & -73:29:05.09 & 0.38$\pm$0.38 & 3.65$\pm$0.76 & 6.4$\pm$0.73 & 40.46$\pm$4.05 & 134.17$\pm$13.42 & 241.54$\pm$24.15 & -- & -- & -- & -- \\ % 4 & 7:23:26.72 & -73:26:10.13 & 0.98$\pm$0.87 & 11.76$\pm$1.18 & 16.55$\pm$1.13 & 112.7$\pm$11.27 & 222.7$\pm$2.24 & 265.6$\pm$2.72 & 799.69$\pm$79.97 & 616.41$\pm$268.20 & 1084.28$\pm$211.57 & 28.49$\pm$176.89 \\ % 5 & 7:22:56.99 & -73:29:23.35 & 2.32$\pm$0.97 & 6.28$\pm$0.62 & 6.52$\pm$0.8 & 15.72$\pm$2.15 & 24.86$\pm$2.49 & 44.05$\pm$4.40 & -- & -- & -- & -- \\ % 6 & 7:22:39.56 & -73:30:08.24 & \textbf{-0.33}$\pm$0.59 & 12.9$\pm$1.74 & 18.24$\pm$1.82 & 33.19$\pm$3.32 & 30.77$\pm$3.08 & 36.16$\pm$3.62 & 35.27$\pm$74.49 & \textbf{-10.42}$\pm$21.31 & 55.28$\pm$38.01 & 160.7$\pm$97.6 \\ % 7 & 7:22:42.47 & -73:29:47.51 & 6.05$\pm$1.31 & 20.74$\pm$2.36 & 25.88$\pm$2.59 & 25.41$\pm$2.54 & 26.96$\pm$2.70 & 19.52$\pm$1.95 & -- & -- & -- & -- \\ % 8 & 7:23:10.87 & 
-73:27:57.67 & \textbf{0.53}$\pm$0.52 & 6.83$\pm$0.68 & 6.78$\pm$0.69 & 3.29$\pm$0.56 & 5.54$\pm$0.58 & 14.4$\pm$1.44& -- & -- & -- & -- \\ % 9 & 7:22:56.30 & -73:28:42.20 & \textbf{-0.31}$\pm$0.61 & 3.79$\pm$0.72 & 12.91$\pm$0.98 & 37.76$\pm$2.79 & 20.86$\pm$2.09 & 28.08$\pm$2.31 & -- & -- & -- & -- \\ % 10 & 7:22:46.86 & -73:28:54.08 & 5.27$\pm$0.69 & 5.85$\pm$0.62 & 7.04$\pm$0.70 & 18.3$\pm$1.83 & 18.15$\pm$1.81 & 25.83$\pm$2.58 & -- & -- & -- & -- \\ % 11 & 7:22:33.28 & -73:29:11.18 & 2.91$\pm$0.51 & 9.89$\pm$1.12 & 9.02$\pm$1.06 & 15.99$\pm$1.60 & 12.33$\pm$1.85 & 16.09$\pm$2.27 & -- & -- & -- & -- \\ % 12 & 7:23:25.60 & -73:26:12.01 & 5.91$\pm$0.95 & 6.59$\pm$0.88 & 11.76$\pm$1.17 & 30.94$\pm$3.09 & 17.12$\pm$1.71 & 38.72$\pm$3.87& -- & -- & -- & -- \\ % 13 & 7:23:05.46 & -73:26:29.98 & 0.0$\pm$0.55 & 19.98$\pm$2.00 & 4.79$\pm$0.48 & 29.93$\pm$2.99 & 33.66$\pm$3.37 & 32.37$\pm$3.24 & -- & -- & -- & -- \\ % 14 & 7:22:33.73 &-73:28:03.54 & 0.0$\pm$0.53 & 0.0$\pm$0.36 & 0.0$\pm$0.31 & 0.65$\pm$0.54 & 2.6$\pm$0.47 & 27.28$\pm$2.73 & -- & -- & -- & -- \\%old 30 15 & 7:22:44.02 &-73:29:15.86 & 0.38$\pm$0.5 & 18.17$\pm$1.82 & 15.98$\pm$1.60 & 16.08$\pm$1.61 & 20.01$\pm$2.00 & 38.57$\pm$3.86 & -- & -- & -- & -- \\%old 31 16 & 7:23:09.34 &-73:27:21.35 & 11.27$\pm$1.63 & 6.87$\pm$0.98 & 15.81$\pm$1.6 & 26.23$\pm$2.65 & 66.77$\pm$6.68 & 50.44$\pm$5.04 & -- & -- & -- & -- \\%old 32 17 & 7:22:47.69 & -73:27:47.12 & 2.26$\pm$0.94 & 0.0$\pm$0.69 & 0.0$\pm$0.53 & 12.7$\pm$1.71 & 11.57$\pm$1.38 & 27.8$\pm$2.28& -- & -- & -- & -- \\%old 33 18 & 7:22:44.02 & -73:29:15.86 & 0.49$\pm$0.34 & 2.49$\pm$0.55 & 5.2$\pm$0.66 & 47.62$\pm$4.76 & 151.68$\pm$1.52 & 239.4$\pm$23.94& -- & -- & -- & -- \\%old 34 19 & 7:23:09.34 & -73:27:21.35 & 0.0$\pm$0.48 & 0.67$\pm$0.52 & 0.87$\pm$0.4 & 0.11$\pm$0.78 & 4.65$\pm$0.76 & 14.95$\pm$1.49 & -- & -- & -- & -- \\%old 35 20 & 7:22:47.69 & -73:27:47.12 & 0.0$\pm$0.41 & 5.99$\pm$0.60 & 4.43$\pm$0.71 & 16.53$\pm$1.65 & 15.11$\pm$1.51 & 19.85$\pm$1.98& 
-- & -- & -- & -- \\%old 36 \end{tabular} } \caption{Position and photometry in the available NIRCam and MIRI filters of the 20 F200W dropouts. We applied aperture correction to the MIRI photometry, when necessary. The complete table is available online.} \label{tab:list_obj} \end{table*} \begin{table*} \centering % \resizebox{0.99\textwidth}{!}{ \begin{tabular}{c|ccccc|ccccc|c|c} ID & z & $\rm log_{ 10}(M/M_{\odot})$ & $\rm log_{10}(SFR/M_{\odot} yr^{-1})$ & A$_{V}$ & $\chi^2_{1,gal}$ & z & $\rm log_{10}(M/M_{\odot})$ & $\rm log_{10}(SFR/M_{\odot} yr^{-1})$ & A$_{V}$& $\chi^2_{2,gal}$& $\chi^2_{red,BD}$ & $\mu$\\ & \multicolumn{5}{c}{1$^{st}$ solution}& \multicolumn{5}{c}{2$^{nd}$ solution} & \\ 1 & 12.11$^{+0.12}_{-0.15}$ & 9.53$^{+0.34}_{-0.16}$& 1.55$^{+0.39}_{-0.16}$ & 2.36$^{+0.14}_{-0.12}$ & 5.76 & 3.37$^{+0.03}_{-0.02}$& 8.94$^{+0.22}_{-0.07}$& 0.88$^{+0.13}_{-0.18}$ & 4.41$^{+0.13}_{-0.30}$ & 17.80 & 1848.55 & 2.65$\pm$0.35 \\ % 2 & 5.40$^{+0.28}_{-0.28}$& 11.56$^{+0.09}_{-0.09}$& -0.56$^{+1.46}_{-4.60}$& 4.74$^{+0.32}_{-0.35}$ & 56.87 & -- & -- & -- & -- & -- & 21375.50 & -- \\ % 3 & 6.05$^{+0.59}_{-0.56}$& 10.41$^{+0.18}_{-0.17}$& 1.80$^{+0.25}_{-0.20}$& 3.18$^{+0.23}_{-0.22}$& 1.40 & 2.53$^{+0.22}_{-0.12}$& 9.48$^{+0.11}_{-0.28}$ & 0.81$^{+0.15}_{-0.20}$ & 5.17$^{+0.38}_{-0.33}$ & 7.24 & 3509.75 & -- \\ % 4 & 2.98$^{+0.05}_{-0.11}$& 9.04$^{+0.22}_{-0.15}$& 1.07$^{+0.09}_{-0.14}$& 3.59$^{+0.11}_{-0.15}$& 24.89 & 5.28$^{+0.56}_{-0.24}$ & 10.08$^{+0.08}_{-0.08}$ & 1.34$^{+0.12}_{-0.12}$ & 2.14$^{+0.16}_{-0.13}$ & 32.45 & 4866.44 & \textbf{1.4$\pm$0.1} \\ % 5 & 6.05$^{+0.34}_{-0.56}$& 8.79$^{+0.14}_{-0.18}$& 0.27$^{+0.25}_{-0.17}$& $1.00^{+0.25}_{-0.18}$& 1.58 & 1.46$^{+0.80}_{-0.05}$& 7.77$^{+0.29}_{-0.34}$& -0.86$^{+0.33}_{-0.31}$& $2.52^{+0.41}_{-0.60}$& 7.47 & 95.47 & -- \\ % 6 & 0.48$^{+0.08}_{-0.08}$ & 7.22$^{+0.18}_{-0.15}$ & -2.47$^{+0.26}_{-0.30}$ & 2.83$^{+0.55}_{-0.47}$ & 0.35 & 2.13$^{+1.50}_{-0.76}$& 8.06$^{+0.22}_{-0.37}$& 
-0.74$^{+0.46}_{-0.55}$& 0.65$^{+0.43}_{-0.33}$& 3.81 & 150.88 & -- \\ % 7 & 0.09$^{+0.05}_{-0.04}$& 5.84$^{+0.39}_{-0.55}$& -4.37$^{+0.53}_{-0.92}$& 2.60$^{+0.47}_{-0.41}$& 0.15 & -- & -- & -- & -- & -- & 28.63 & -- \\ % 8 & 0.24$^{+0.02}_{-0.01}$& 5.21$^{+0.20}_{-0.25}$& -2.87$^{+0.12}_{-0.18}$& 0.67$^{+0.27}_{-0.19}$& 19.20 & 8.76$^{+0.66}_{-0.45}$& 7.83$^{+0.15}_{-0.42}$& -0.36$^{+0.06}_{-0.15}$& 0.06$^{+0.02}_{-0.04}$& 30.43 & 29.92 & -- \\ % 9 & 0.35$^{+0.08}_{-0.06}$& 7.14$^{+0.17}_{-0.22}$& -3.79$^{+0.89}_{-4.83}$& 5.49$^{+0.68}_{-0.72}$& 17.63 & 11.91$^{+0.10}_{-0.22}$& 8.80$^{+0.09}_{-0.19}$& 0.72$^{+0.15}_{-0.14}$& 0.57$^{+0.13}_{-0.18}$& 23.73 & 231.24 & -- \\ % 10 & 5.14$^{+0.24}_{-0.13}$& 8.32$^{+0.10}_{-0.12}$& -0.15$^{+0.12}_{-0.10}$& 0.59$^{+0.14}_{-0.12}$& 0.70 & -- & -- & -- & -- & -- & 210.78 & -- \\ % 11 & 0.47$^{+0.10}_{-0.11}$& 6.76$^{+0.24}_{-0.24}$& -2.81$^{+0.31}_{-0.29}$& 1.81$^{+0.37}_{-0.49}$& 3.49 & 1.89$^{+0.33}_{-0.42}$& 7.66$^{+0.14}_{-0.14}$& -1.48$^{+0.24}_{-0.34}$ & $0.34^{+0.30}_{-0.21}$ & 4.36 & 64.77 & --\\ % 12 & 3.54$^{+0.04}_{-0.03}$& 7.82$^{+0.25}_{-0.23}$& -0.23$^{+0.10}_{-0.19}$& 1.07$^{+0.19}_{-0.27}$& 7.97 & 0.42$^{+0.04}_{-0.07}$& 6.13$^{+0.19}_{-0.18}$& -2.02$^{+0.20}_{-0.13}$ & 3.14$^{+0.26}_{-0.15}$ & 19.73 & 357.56 & 1.5$\pm$0.1 \\ % 13 & 1.44$^{+0.02}_{-0.04}$& 7.03$^{+0.09}_{-0.05}$& $-0.90^{+0.10}_{-0.06}$& 2.98$^{+0.19}_{-0.09}$& 67.37 & 5.38$^{+0.40}_{-0.36}$& 8.78$^{+0.10}_{-0.09}$& -0.06$^{+0.17}_{-0.43}$ & 0.79$^{+0.24}_{-0.49}$ & 74.03 & 275.26 & 1.7$\pm$0.2\\ % 14 & 11.06$^{+0.89}_{1.14}$& 11.29$^{+0.38}_{-0.51}$& 3.22$^{+0.35}_{-0.49}$& 5.83$^{+0.12}_{-0.22}$& 0.13 & 6.01$^{+0.27}_{-0.21}$& 9.40$^{+0.26}_{-0.29}$& 1.48$^{+0.14}_{-0.31}$ & 5.69$^{+0.21}_{-0.20}$ & 2.20 & 48.07 & -- \\ % 15 & 8.85$^{+0.22}_{-0.45}$& 8.59$^{+0.15}_{-0.21}$& 0.22$^{+0.09}_{-0.08}$& 0.12$^{+0.11}_{-0.07}$& 1.95 & 9.70$^{+0.22}_{-0.26}$& 8.73$^{+0.10}_{-0.13}$& 0.31$^{+0.11}_{-0.08}$& 0.15$^{+0.12}_{-0.10}$& 2.57 & 109.05 & 
-- \\ % 16 & 4.43$^{+0.20}_{-0.29}$& 8.63$^{+0.12}_{-0.18}$& 0.27$^{+0.18}_{-0.16}$& 1.16$^{+0.30}_{-0.22}$& 22.13 & -- & -- & -- & -- & -- & 209.19 & 12.3$\pm$3.9 \\ % 17 & 0.35$^{+0.08}_{-0.07}$& 6.07$^{+0.28}_{-0.28}$& -2.37$^{+0.23}_{-0.27}$& 3.68$^{+1.20}_{-0.66}$& 0.15 & 5.46$^{+0.46}_{-0.32}$& 8.18$^{+0.21}_{-0.18}$& 0.07$^{+0.08}_{-0.12}$ & 1.06$^{+0.48}_{-0.25}$ & 0.68 & 33.91 & -- \\ % 18 & 4.88$^{+0.19}_{-0.16}$& 10.26$^{+0.12}_{-0.12}$& 1.82$^{+0.17}_{-0.15}$ & 3.94$^{+0.34}_{-0.30}$ & 4.02 & 2.60$^{+0.19}_{-0.16}$& 9.56$^{+0.16}_{-0.12}$& 0.93$^{+0.14}_{-0.15}$& 5.47$^{+0.26}_{-0.34}$& 12.42 & 6802.54 & --\\ % 19 & 8.26$^{+2.42}_{-2.35}$& 10.09$^{+0.65}_{-0.60}$& 1.72$^{+0.86}_{-0.66}$& 4.55$^{+0.93}_{-0.92}$& 4.85 & -- & -- & -- & -- & -- & 8.19 & 39.5$\pm$33.6\\ % 20 & 5.19$^{+0.35}_{-0.15}$ & 8.28$^{+0.09}_{-0.09}$& -0.43$^{+0.15}_{-0.13}$& 0.30$^{+0.21}_{-0.14}$ & 4.36 & 2.91$^{+0.16}_{-0.19}$ & 7.65$^{+0.17}_{-0.15}$& -0.47$^{+0.06}_{-0.27}$& 1.32$^{+0.30}_{-0.18}$ & 5.53 & 128.92 & -- \\%old 36, there is a new secondary solution, low probability but similar chi2. no secondary solution, best ris sfh \end{tabular}} \color{black} \caption{Results from the SED fitting analysis. Columns 2 to 6 show the redshift, stellar mass, SFR, $A_V$ and $\chi^{2}$ of the best solution derived with BAGPIPES, while we report in columns 7 to 11 the same quantities for the secondary solution, when present. In column 12 we report the $\chi^{2}$ obtained using templates of brown dwarfs. The magnification due to gravitational lensing, {\bf derived from the updated model presented by \citep{Mahler22}} at the redshift of the first solution is listed in column 13. The magnification is not reported for galaxies outside the cluster, i.e. in the adjacent off-field, or in foreground (\#8 and \#9). 
{\bf Stellar masses and SFR should be corrected for magnification.}} \label{tab:SED_results} \end{table*} \end{comment} \section{SED FITTING} \label{SED} We derived the redshift and galaxy physical properties (e.g. stellar mass, SFR) using the Bayesian Analysis of Galaxies for Physical Inference and Parameter EStimation \citep[BAGPIPES;][]{Carnall2018}. In particular, we consider \citet{BC03} stellar population models with stellar metallicity from 0.005 solar up to solar. We allow the code to explore the redshift range $0<z<15$ and the stellar mass range up to 10$^{12.5}\,\rm M_{\odot}$. Nebular emission lines were included assuming an ionization parameter from 10$^{-4}$ to 10$^{-2}$, and we considered the same reddening law \citep[i.e.,][]{Calzetti2000} for both the stellar continuum and the nebular emission lines. We run the code twice, once with an exponentially declining (i.e. SFR$\propto e^{-(t/\tau)}$) star-formation history and once with a rising (i.e. SFR$\propto t\,e^{-(t/\tau)}$) one, both with ages ranging from 1 Myr to the age of the Universe and $\tau=$0.01 to 10 Gyr. Between these two runs, we kept the fit with the minimum $\chi^{2}$, but we highlight that redshift, stellar mass and SFR are, for the majority of cases, consistent between the two runs. We show the fits of sources 1-PENNAR, 2-KLAMA and 15-HOLLAR as examples in Figure \ref{fig:example_SED}. \par The spectral properties of a local brown dwarf can resemble the rest-frame optical observations of high-$z$ galaxies. Therefore, we also fit our candidate dropouts with L and T dwarf models available from \citet{Burrows2006}. Such templates span effective temperatures between 700 K and 2300 K, metallicities between [Fe/H]=-0.5 and 0.5 and gravities between $10^{4.5}$ and $10^{5.5}\,\rm cm\, s^{-2}$. \par We performed the SED fitting allowing the extinction parameter to span the range of values $0<A_V<6$.
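The two parametric star-formation histories explored in the fits can be sketched as follows (a minimal numerical illustration of the functional forms only, not the BAGPIPES implementation; the function names are ours):

```python
import math

def sfh_declining(t_gyr, tau_gyr):
    """Exponentially declining SFH: SFR(t) proportional to exp(-t/tau)."""
    return math.exp(-t_gyr / tau_gyr)

def sfh_delayed_rising(t_gyr, tau_gyr):
    """Rising (delayed-tau) SFH: SFR(t) proportional to t * exp(-t/tau)."""
    return t_gyr * math.exp(-t_gyr / tau_gyr)

# The declining model peaks at t = 0 and only fades, while the rising
# model grows from zero and peaks at t = tau (d/dt[t e^{-t/tau}] = 0).
```

The key practical difference is where the bulk of the star formation sits relative to the observed epoch: for young ages the rising model is still building up, so the two runs can prefer somewhat different age and $\tau$ combinations at comparable $\chi^2$.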
Values of $A_V$ exceeding $\sim6$ mag have been observed only in the central 1–2 kpc of local luminous IR galaxies \citep{Mayya,Scoville15}. In comparison, dusty submillimeter galaxies at $2<z<3$ have typical average extinction around $A_V\sim2.5$ \citep{Knudsen05,dacunha15}. Another class including heavily obscured sources is that of HIEROs. However, even in this case the reddening has typical values around $A_V < 4.0$ at $3 < z < 6$ \citep{wang19,Barrufet}. Results from this SED fitting analysis are reported in Table 2. \section{RESULTS} \label{res} \subsection{Nature of the F444W sources with a faint F200W counterpart} We rely on the SED fitting approach described in Section \ref{SED} to infer the physical properties of our sample. A summary of the photometric redshifts and basic outputs from BAGPIPES is reported in Table 2. When the posterior distribution of the physical parameters derived with BAGPIPES shows two or more separate peaks, we report the two most probable solutions. {\bf We note that a Brown Dwarf solution is never preferred by the $\chi^2$.} Fig. \ref{fig:MS} summarizes the position of the sources in the M$_*$-SFR plane at their best assigned photo-$z$ (corrected for magnification when required). The sources can be grouped in the following main classes. \begin{center} \underline{Low redshift contaminants ($z<3$)} \end{center} \begin{itemize} \item {\bf Red and dusty low-$z$ dwarf galaxies}: the SEDs of six sources are reproduced with templates of $z<0.5$ galaxies, low stellar masses log(M$_*$/M$_\odot$)$\sim$5-7, and SFR consistent with the faint end of the Main Sequence (MS) in the local Universe (Fig. \ref{fig:MS} top-left). These sources are even less massive than Low Surface Brightness (LSB) galaxies at $z=0$ \citep[e.g.][]{McGaugh}. However, the {\it JWST} dwarfs are much more extinguished than those from traditional UV selections, with $A_V$ up to $\sim5.5$\,mag (IDs 6-PRUDEGAR, 7-TULLE, 8-LAMARA, 9-BUSCAR, 11-SCHACHER, 17-MOSELE).
\item {\bf Low-mass star forming sources at Cosmic Noon}: the selection also includes normal MS galaxies at $z\sim1-3$, with log(M$_*$/M$_\odot$)$\sim$7-9, probing the faint end of the Main Sequence \citep[e.g.][ IDs 4-LAITEN, 13-BISCHOFARN, Fig. \ref{fig:MS} top-right]{Bisigello,Lucia}. \end{itemize} Having recognized the lower redshift contaminants, we are then left with higher-$z$ candidates, most of them being unreachable or unidentifiable before the {\it JWST}. \begin{center} \underline{Distant sources ($3<z<13$)} \end{center} \begin{itemize} \item {\bf $3<z<7$ dusty star-forming systems}: 40\% of the sample sits on the MS at $z\sim4-6$, part of them filling the obscured faint end of the Main Sequence missed by LBGs (similar to \cite{Barrufet}, IDs 3-STOCKE, 5-ORKENTAAL, 10-BALDE, 12-TAAL, 16-RUTZER, 18-MORAR, 20-BERGA, Fig. \ref{fig:MS} bottom-left). In particular, IDs 3-STOCKE and 18-MORAR present a color F200W-F444W$>$2.3\,mag, very close to the HIERO definition, due to their larger extinction parameter ($A_V\sim3$). We also note that half of the global sample has such red F200W-F444W colors. As discussed in Section \ref{intro}, this population was completely missed by {\it HST} and {\it Spitzer}, which lacked the sensitivity to statistically identify $[4.5]>24$\,mag optically dark sources, but it could play a major role in the stellar assembly of today's massive galaxies. We also note that the detection of very low mass galaxies at $z>4$ with large dust content, such as those inferred here (12-TAAL, 16-RUTZER but also 19-SCHBANZ at $z=8.26$), is unexpected \citep[see e.g. ][]{Whitaker17,Pope17}. \item {\bf A quenched, dusty and massive galaxy at $z\sim5$?} We identify the most extremely red object in our sample (F200W-F444W$\sim$7\,mag), which can be explained only by templates of a massive (log(M$_*$/M$_\odot$)$\sim$11.56) quenched galaxy at $z\sim5.4$ with anomalously abundant dust extinction, $A_V=4.7$\,mag (ID 2-KLAMA).
This object is even redder than the JWST source with SCUBA2 analysed by \citet{zavala}, which is at a similar redshift but is starbursting. We test for a possible less dusty solution by running BAGPIPES limiting the reddening to $A_V<2$. However, the best solution results in a $\chi^2$ that is three times larger than the solution with $A_V=4.7$. We also varied the ionization parameter (U) to test if the red colors of this galaxy could be due to the presence of nebular emission lines. Indeed, strong rest-optical emission lines at $z\sim5.5-6$ could significantly contaminate broadband photometry around 3 and 4 $\mu$m \citep[e.g.][]{Labbe13,Stark13,Alcade19}. A red F200W dropout could then masquerade as an ultra-faint source with continuum below the detection limit and the F444W flux boosted by an emission line. Typically, such objects are likely to be low mass and low metallicity sources \citep[e.g.][]{Maseda19,Maseda20}. We mitigated this possibility by leaving the U parameter free to vary down to values of log(U)=-4, and metallicities as low as $Z=0.005Z_{\odot}$. Even with this configuration, the best solution remains a galaxy around $z=5$ and with $A_V\sim5$. While passive sources at $3<z<5$ are already emerging in {\it JWST} early observations \citep{Carnall_QG}, the existence of a massive source with log(M$_*$/M$_\odot$)$\sim$11.56, dusty and quenched at $z=5$ in a small survey volume is very unexpected. The majority of quiescent galaxies studied individually and in spectroscopic detail at $z\sim2-4$ to date do not show evidence of abundant dust content \citep{Valentino20}. In fact, little evidence exists for widespread dust in quiescent galaxies out to the highest redshifts currently probed \citep[][]{Schreiber18,Whitaker21Nature}, apart from some exceptions \citep[][]{Gobat,Magdis21}, suggesting that dust likely gets rapidly destroyed.
Given the quality of the new JWST imaging products, the accurate photometric treatment and the extended range of parameters accounted for by our SED fitting, we consider 2-KLAMA as a very strong candidate for a quiescent galaxy whose dust content has yet to be destroyed, a possible indicator of recent quenching. \item {\bf Extinguished high-$z$ star-forming sources ($7<z<13$):} {\it JWST} is currently providing ever more distant candidates on an almost daily basis, up to $z\sim17$ \citep[e.g.][]{Harikane}. These sources are consistent with primordial young star forming galaxies, with a negligible dust content. Indeed, the common LBG technique used to select them favours blue, UV-bright spectral types. In our approach we include redder populations, and we do not limit the extinction parameter while fitting the observed SEDs. Surprisingly, we classify four objects (IDs 1-PENNAR, 14-KABERLABA, 15-HOLLAR, 19-SCHBANZ) at $z>8$, with mature stellar populations, log(M$_*$/M$_\odot$)$\sim$9-11, which differ from the {\it JWST} sources already detected at similar cosmic epochs in their extreme dust content ($A_V=0.4-5.8$\,mag). \cite{Fudamoto} report the ALMA $\ion{\rm C}{II}$ detection of two sources at $z\sim6-7$, providing additional evidence for the existence of obscured systems that could contribute on the order of $\sim20\%$ to the $z>6$ cosmic SFRD. Such objects are currently unexplained by theoretical models. \cite{Ferrara} provide a possible explanation based on the assumption that dust has been efficiently ejected during the early stages of galaxy formation. Our results draw attention to a potentially unexplored evolution of dust production and dust lifetime in the primeval Universe. In particular, we highlight source 1-PENNAR, the only object securely detected in two MIRI bands as well (F770W and F1000W).
The NIRCAM+MIRI photometry provides stronger constraints on the SED fitting, returning a primary solution at $z=12.1$ (see Fig.~\ref{fig:example_SED}), with the source sitting on the extrapolated MS at $z>8$ (see Fig.~\ref{fig:MS} bottom-right). Compared to the other UV-bright sources at the same cosmic epoch, \#1 has a best-fit extinction of $A_V=2.36$ mag. \end{itemize} By selecting and photometrically characterizing NIRCam F444W sources in the SMACS0723 deep field that lack a F200W counterpart, we provide only a first glimpse of the potential of the {\it JWST} to uncover new galaxy populations. We caution that their classification remains speculative until upcoming spectroscopic follow-up observations systematically constrain their distances and nature. \section*{Acknowledgements} We thank the anonymous reviewer for their comments, which improved the quality and flow of this work. GR and LB acknowledge the support from grant PRIN MIUR 2017 - 20173ML3WW 001. We thank Daniel Stark and Michael Topping for providing the stellar masses and SFR for the sample of REBELS galaxies reported in our Figure 3. We thank Andrea Ferrara and Pavel Kroupa for their feedback and comments. \section*{DATA AVAILABILITY} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras} \bibliography{example} %
Title: Deep ALMA redshift search of a z~12 GLASS-JWST galaxy candidate
Abstract: The James Webb Space Telescope (JWST) has discovered a surprising abundance of bright galaxy candidates in the very early Universe (<500Myrs after the Big Bang), calling into question current galaxy formation models. Spectroscopy is needed to confirm the primeval nature of these candidates, as well as to understand how the first galaxies form stars and grow. Here we present deep spectroscopic and continuum ALMA observations towards GHZ2/GLASS-z13, one of the brightest and most robust candidates at z>10, identified in the GLASS-JWST Early Release Science Program. While the lack of dust continuum detection supports its high-redshift nature by ruling out lower redshift dusty interlopers, we do not detect any bright emission line at the position of the target despite covering a total of 30GHz and 98% of the source's redshift probability distribution (z=11.9-13.5). A tentative emission line is identified 0.5arcsec away from the JWST position of GHZ2/GLASS-z13, which would imply a spectroscopic redshift of z=12.117+/-0.012 if associated with the [OIII]88um line. Further confirmation is, however, necessary to establish that the signal is astrophysical and associated with the target. The current constraints on the oxygen line luminosity place it along (or below) the [OIII]-SFR relation for metal-poor galaxies. The low metallicity and dust content implied by these observations are also consistent with the blue UV slope observed by JWST, which suggests negligible dust attenuation in galaxies at this early epoch. This work illustrates the synergy between JWST and ALMA and paves the way for future spectroscopic surveys of z > 10 galaxy candidates.
https://export.arxiv.org/pdf/2208.13642
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} galaxies: distances and redshifts -- galaxies: high-redshift -- galaxies: formation -- galaxies: evolution -- galaxies: ISM -- ISM: abundances -- (ISM:) dust, extinction -- techniques: spectroscopic \end{keywords} \section{Introduction} The James Webb Space Telescope (JWST) recently opened a new window to the Universe, with its unprecedented sensitivity and angular resolution at near-infrared (NIR) wavelengths. The public release of the JWST Early Release Observations (ERO) and the Director’s Discretionary Early Release Science Programs (DD-ERS) has unlocked new searches for the faintest, rarest, and most distant galaxies ever found. Notably, the high sensitivity of the NIRCam instrument (\citealt{NIRCAM}) and its wavelength coverage (reaching up to $\sim5$\micron{}) is ideal for the identification of candidate galaxies at redshifts above ten. To date, several $z>10$ galaxy candidates have been reported \citep{Adams2022a,Atek2022,Castellano2022,Donnan2022,Finkelstein2022,morishita22,Naidu2022,Yan2022} in the public extragalactic fields observed with the NIRCam camera, including the Cosmic Evolution Early Release Science (CEERS) Survey (Finkelstein et al. in prep), the GLASS-JWST survey (\citealt{Treu2022a}), and the observations around the galaxy cluster SMACS J0723.3-7327 taken as part of the JWST-ERO. The unexpected abundance of high-redshift galaxies -- particularly at bright luminosities -- has been suggested to be in tension with predictions from widely-adopted galaxy formation models \cite[e.g.,][]{Boylan-Kolchin2022,FPD2022,Finkelstein2022,Harikane2022,MTT2022}. It is important to stress, however, that {\it none} of the $z>10$ candidates discovered by JWST have been spectroscopically confirmed to date and that the robustness of some of them has been called into question (e.g. \citealt{Naidu2022b,Zavala2022a}).
Spectroscopic confirmation is thus necessary to assess the current tension between models and observations. The galaxy GHZ2/GLASS-z13 -- first reported by \citet{Castellano2022} and \citet{Naidu2022} and centered at RA = $+$00:13:59.76 DEC = $-$30:19:29.1 -- stands out as one of the best candidates at $z>10$ ever detected. Its photometric redshift ($z=11.960-12.423$ at a confidence limit of 68\%) has been confirmed by multiple independent teams \citep{Castellano2022,Donnan2022,Harikane2022,Naidu2022}, with negligible chances of being a lower-redshift contaminant thanks to its accurately measured colors and sharp break in the NIRCam photometry. The depth of this feature, associated with the Lyman break, means that the redshift identification is still robust after the recent in-flight re-calibration of the JWST instruments \citep{Rigby2022}. Here we present deep ALMA spectroscopic and continuum observations towards this galaxy. This paper is organized as follows. In Section~\ref{sec:JWST} we briefly recap the JWST observations for convenience of the reader. In Section~\ref{sec:2} we describe the ALMA observations, and present the observational results and line search in Section~\ref{sec:3}. We discuss the implications of our observations in Section~\ref{sec:4}, and provide future perspectives on the spectroscopic follow-up of JWST targets in Section~\ref{sec:5}. Finally, we summarize our results in Section~\ref{sec:6}. Throughout this paper, we assume a flat $\Lambda$CDM cosmology with $\Omega_\mathrm{m} = 0.3$, $\Omega_\mathrm{\Lambda} = 0.7$ and $h = 0.7$. \section{Summary of JWST observations} \label{sec:JWST} The GLASS-JWST program represents the deepest extragalactic survey of the ERS campaign and consists of NIRISS \citep{Roberts2022} and NIRSpec spectroscopy observations centered on the cluster A2744, with parallel NIRCam imaging offset from the cluster center.
The multi-band strategy of the NIRCam observations \citep{Merlin2022}, which includes imaging in seven wide filters (F090W, F115W, F150W, F200W, F277W, F356W, and F444W), allows the identification of $z>10$ galaxy candidates via color-color diagrams and/or SED fitting techniques. The NIRCam images used in this paper were reduced as described by \cite{Merlin2022}, who constructed a multi-band photometric catalog. High-$z$ candidates were selected by \cite{Castellano2022} using a combination of color cuts and photometric redshifts designed to minimize contamination by lower redshift interlopers. As mentioned above, GHZ2/GLASS-z13 was identified as a $z\sim12.5$ candidate by several teams using independent reductions of the GLASS data \citep{Donnan2022,Harikane2022,Naidu2022}. \citet{Santini2022} presented the physical properties of this galaxy, which we update here using the most recent photometric calibrations \citep{Rigby2022}. From our best-fit SED we constrain the following physical properties: star formation rate of $\rm SFR=20^{+15}_{-14}\,\rm M_\odot\,yr^{-1}$, $M_\star=1.2^{+7.2}_{-0.4}\times10^{8}\,\rm M_\odot$, and absolute magnitude M$_{1500}=-21.04^{+0.20}_{-0.21}\,{\rm AB}$. \section{ALMA Observations and Data reduction} \label{sec:2} ALMA observations were carried out between 2022-08-03 and 2022-08-05 as part of project 2021.A.00020.S (Bakx \& Zavala), and are summarized in Table~\ref{tab:alma_observations}. The spectral setup consists of four adjacent tunings covering a total bandwidth of $\sim30\,\rm GHz$ from 233.4 to $263.0\,\rm GHz$ in ALMA Band 6. This range covers the expected (redshifted) frequency of the [OIII]88\micron{}\ line ($\nu_{\rm rest}=3393.0062\,\rm GHz$) from $z=11.9$ to $z=13.5$, where our target was expected to be (covering $\sim98$\% of the posterior distribution function of the photometric redshift). Each of the tunings was observed for around $2.2\,$hrs on-source ($\sim4\,$hrs in total).
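The correspondence between the scanned frequency range and the targeted redshift window follows directly from $\nu_{\rm obs}=\nu_{\rm rest}/(1+z)$; a quick numerical check using the values quoted above (our own sketch, not the observatory's tooling):

```python
NU_REST_OIII88 = 3393.0062  # GHz, rest-frame frequency of [OIII]88um

def nu_obs(z, nu_rest=NU_REST_OIII88):
    """Observed frequency (GHz) of a line emitted at redshift z."""
    return nu_rest / (1.0 + z)

def z_of_nu(nu, nu_rest=NU_REST_OIII88):
    """Redshift placing the line at observed frequency nu (GHz)."""
    return nu_rest / nu - 1.0

# The 233.4-263.0 GHz scan brackets [OIII]88um over z ~ 11.9-13.5:
print(round(nu_obs(11.9), 1))    # ~263.0 GHz, the low-z edge of the scan
print(round(z_of_nu(233.4), 2))  # ~13.54, the high-z edge of the scan
```

The same relation maps the tentative 258.68 GHz feature discussed later to $z\approx12.12$.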
Data reduction was performed following the standard procedure and using the ALMA pipeline. Then, we use CASA for imaging the {\it uv} visibilities using Briggs weighting with a robust parameter of 2.0 (to maximize the depth of the observations at the expense of slightly increasing the final synthesized beam-size). This process results in a typical depth of $0.1\,$mJy\,beam$^{-1}$ in 35\,km\,s$^{-1}$ channels with a mean synthesized beam-size of $\theta\approx0.34''\times0.30''$. In addition, to have a better sensitivity to extended emission beyond the $\sim 0.3$~arcsecond beam and to broad emission lines, we explore {\it uv}-tapering at 0.3, 0.5, and 1.0~arcseconds and we create several cubes varying the velocity binning across the full frequency coverage, with 15, 50, 100, 150, 300, and 400~km/s channels. Finally, we combine the four different tunings to create a single continuum image (at a representative frequency of $\nu_{\rm obs}=248\,$GHz) adopting Briggs weighting with a robust parameter of 2.0. The final continuum image has a root-mean square of $4.6\,\mu\rm Jy\, beam^{-1}$ and a beam-size of $\approx0.34''\times0.31''$. \section{Line search and dust continuum emission} \label{sec:3} \subsection{Search for the \oiii{} line} Figure~\ref{fig:spectrum} shows the spectrum of GHZ2/GLASS-z13 extracted from an aperture centered on the JWST position with a circular size of $0\farcs35$ selected to match the average synthesized beam-size. No line emission is detected above $4\sigma$ in the spectrum. Similarly, no emission is seen in any of the resampled spectra with different velocity binnings (including velocity offsets) and taperings. \subsubsection{Upper limit on the \oiii{} luminosity} We use the standard deviation of the map at each frequency to evaluate the \oiii{} luminosity upper limit. We find no redshift dependency, although the atmospheric windows and instrumental sensitivity slightly vary across the spectral windows.
The average $5\sigma$ line luminosity upper limit across the entire window is estimated to be $1.7 \times{} 10^8$~L$_{\odot}$ assuming a line velocity width of 100~km/s and no spatially-extended emission. Assuming a wider line-width of 200~km/s would increase the derived upper limit by $\sim40\%$. \subsubsection{A tentative emission line 1.5\,kpc away from GHZ2/GLASS-z13 at $z=12.1$} \label{sec:tentativelinesection} We extend the line search to the tapered data cubes and those with wider velocity channel widths, in case the \oiii\ line is extended or very broad. We visually inspect the data cubes at positions adjacent to the source. We find a moderately-extended $\sim 5 \sigma$ feature at $0\farcs5$ north-east of the JWST source at $\sim258.7$~GHz, which would correspond to $z = 12.1$, assuming it is associated with the \oiii88\,$\mu$m line (which is within the expected range of the current photometric redshift constraints). At this redshift, the position offset (which is larger than the expected absolute astrometric accuracy of $<0\farcs1$) would correspond to a physical offset of $\sim 1.5$~kpc. The full 30\,GHz spectrum at this position is shown in Figure \ref{fig:spectrum}, while a zoomed-in version can be seen in Figure~\ref{fig:tentativeline}. The moment-zero map is represented as magenta contours in Figure~\ref{fig:stamps}. This tentative line has a relatively wide velocity width ($\sim 400$~km/s), and it spatially extends across $0\farcs4$. Using the {\sc emcee} Monte Carlo fitting tool \citep{emcee2013}, we extract the tentative line properties: the line is centered at $258.68 \pm 0.03$~GHz and has a total velocity-integrated line intensity of $0.193 \pm 0.036$~Jy~km/s, with a line full-width at half-maximum of $400 \pm 70$~km/s. If real, this tentative detection would imply a spectroscopic redshift of $z = 12.117\pm0.012$ and a line luminosity of $L_{\rm [OIII]}=9.0 \times{} 10^8$~L$_{\odot}$ (following \citealt{Solomon2005}).
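The conversion from velocity-integrated flux to line luminosity follows the standard relation of Solomon \& Vanden Bout (2005), $L_{\rm line}\,[{\rm L}_\odot] = 1.04\times10^{-3}\, S\Delta v\, \nu_{\rm obs}\, D_L^2$, with $S\Delta v$ in Jy\,km/s, $\nu_{\rm obs}$ in GHz and $D_L$ in Mpc. A sketch of that conversion under the cosmology assumed in this paper (our own simple numerical integration, not the authors' code):

```python
import math

# Flat LCDM as assumed in the paper
OMEGA_M, OMEGA_L, H0 = 0.3, 0.7, 70.0  # H0 in km/s/Mpc
C_KMS = 299792.458

def luminosity_distance_mpc(z, n=20000):
    """D_L = (1+z) * D_C for a flat universe, via trapezoidal integration."""
    dz = z / n if n else 0.0
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(OMEGA_M * (1 + zi) ** 3 + OMEGA_L)  # E(z)
        w = 0.5 if i in (0, n) else 1.0                    # trapezoid weights
        integral += w / e * dz
    return (1 + z) * (C_KMS / H0) * integral

def line_luminosity_lsun(sdv_jykms, nu_obs_ghz, z):
    """Solomon & Vanden Bout (2005): L = 1.04e-3 * S*dv * nu_obs * D_L^2."""
    dl = luminosity_distance_mpc(z)
    return 1.04e-3 * sdv_jykms * nu_obs_ghz * dl ** 2

# Tentative line: 0.193 Jy km/s at 258.68 GHz, z = 12.117
L = line_luminosity_lsun(0.193, 258.68, 12.117)
# ~8.6e8 Lsun here, consistent at the ~5% level (rounding, cosmology
# details) with the ~9e8 Lsun quoted in the text.
```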
Nevertheless, there are several reasons why this marginal detection may be spurious or not associated with our target. The first concern is the significance of the detection itself, while the others are related to the physical offset and the large velocity width. To assess the reliability of this tentative detection, we first check the emission across the three independent executions covering this frequency range (Tuning 3 in Table~\ref{tab:alma_observations}). Marginal emission is seen in all three observations (Figure \ref{fig:tentativeline}), disfavoring a false positive associated with a single noise spike. Second, we perform a blind search for similar features ($\delta v\approx400\,\rm km\,s^{-1}$ and $\sim5\sigma$ significance) across the central $10''\times10''$ area, finding only one extra detection across the whole $\sim30\,$GHz (no negative peaks are found within this area at this significance). The probability of finding a noise peak at the signal-to-noise ratio of our tentative line and within 0\farcs5 of our target is thus estimated to be less than $5\%$. This tentative emission lies, however, in the middle of an atmospheric absorption feature, which could boost the noise at the frequency of the observed line. Nevertheless, the atmospheric transmission is still very high (close to $\sim 90$~per cent, as shown in the bottom panel of Figure \ref{fig:tentativeline}) and, thus, its impact is expected to be small. Beyond the significance of the detection, there are other points to consider when assessing the reliability of this tentative line: {\bf (1)} The line is spatially offset from the JWST detection. At $z = 7 - 8$, spatial offsets between the emission lines \oiii{}, \cii{} and Ly$\alpha$, and the UV or dust continuum, have been reported both in observations \citep[e.g.,][]{carniani:2017oiii} and in simulations \citep[e.g.,][]{Pallottini2019,Arata2020}. 
These offsets are typically understood to be due to chemically evolved components, in some cases with high dust obscuration. This scenario is unlikely at $z \approx 12$, where chemical enrichment is expected to be low, and particularly for GHZ2/GLASS-z13, given its very low dust content (see \S\ref{sec:dust}). {\bf (2)} The large velocity width is in excess of what is seen in the local Universe for systems with stellar masses of $\sim10^9$~M$_{\odot}$. At 400~km/s, this emission is unlikely to be caused by the velocity dispersion of a single galaxy, and the JWST images do not show any sign of interaction \citep{Treu2022b}. {\bf (3)} The emission appears spatially more extended than the size inferred from the JWST images of GHZ2/GLASS-z13 \citep{Yang2022}. An outflow scenario or a galaxy interaction would be able to explain the large spatial offset, large line width, and spatially extended emission. In the case of a galaxy interaction, however, this would require the presence of a heavily obscured component to explain the non-detection in the NIRCam filters, and a weak dust emission contrast against the CMB to explain the non-detection of dust continuum (e.g. \citealt{daCunha2013,Zhang2016}). Although it is certainly possible that early phases of galaxy evolution are dynamically complex \citep[e.g.,][]{Arata2019}, making these scenarios conceivable, we stress that further observations are necessary to first rule out a spurious detection and then to confirm the potential association with GHZ2/GLASS-z13. Hence, in the rest of the paper we will assume the $5 \sigma$ upper limit for the \oiii\ line luminosity derived above, while also showing the implications that this tentative detection would have in case it is confirmed. 
\subsubsection{Tentative detection at $\sim254.4$~GHz} A lower-significance line ($\sim3.5\sigma$) at $\sim254.4$~GHz (corresponding to $z\approx12.3$ if associated with the \oiii88\,$\mu$m emission line) is seen in the on-source extracted spectrum of GHZ2/GLASS-z13. The moment-zero map of this line, however, extends beyond the JWST image, and the emission would thus be extended, similar to the more significant offset emission discussed above. In addition, the contamination from false positives at this level is significantly higher than at $5.5\sigma$. As such, it is unlikely to be a true line candidate. \subsubsection{Possibility of the \oiii\ line being outside of our coverage} An important consideration in our observation design was to cover as much of the photometric-redshift confidence interval as possible within a reasonable observing time. The various fitting codes produced similar redshift constraints (C.L. $z = 11.96 - 12.42$), with the {\sc prospector} \citep{Leja2017,Leja2019,Johnson2021} fit including a small probability peak near $z =13.8$. Based on the photometric redshift analysis by \citet{Castellano2022} and \citet{Naidu2022}, conducted with {\sc eazy} and {\sc zphot}, we expect only a 2\% chance that the line is redshifted below or above our observing window. There is also a non-negligible probability that the source lies within $z\approx13.5-14.5$ according to the {\sc prospector} fit of \citet{Naidu2022}, and potential systematic errors in the photo-$z$, or selection effects altering the prior distribution, could lead to underestimating this probability. However, we believe the chances of the redshift falling outside our observing window are minor, because the marginal detection in F150W and the clear photometric break tightly constrain the photometric redshift regardless of the prior and template choice. \subsection{Search for dust continuum emission} \label{sec:dust} No dust emission is seen in the collapsed (multi-frequency synthesis) continuum image, down to $13.8$~$\mu$Jy at $3 \sigma$. 
Assuming a typical dust thermal emission SED (e.g. \citealt{Casey2012}), we derive an upper limit on the dust-obscured star formation rate of $< 2 - 5$~M$_{\odot}\,\rm yr^{-1}$ at $3 \sigma$ for low-redshift interlopers ($z < 6$), depending on the galaxy model. Hence, these observations rule out the possibility of a low-redshift interloper in the form of a dusty star-forming galaxy, in which the observed break in the NIRCam photometry would instead be associated with the Balmer break combined with high dust attenuation (e.g. \citealt{Zavala2022a}). The dust non-detection is fully consistent with the blue colors and multiple JWST detections redwards of the strong Lyman break, which also rule out a $z\sim4$ quiescent galaxy. Furthermore, the compact size of $0\farcs047\pm0\farcs006$ (corresponding to $0.17\pm0.02$\,kpc at $z\sim12$; \citealt{Yang2022}) is much more compatible with a high-redshift source than with one at much lower redshift. Contamination from a dwarf star has also been ruled out, since dwarf SED templates do not provide a good fit to the NIRCam data points. Hence, these observations support the high-redshift scenario for GHZ2/GLASS-z13. \section{Discussion} \label{sec:4} \subsection{Metallicity and the \oiii{}-SFR relation} \label{sec:metallicity} Figure~\ref{fig:oiii_sfr} shows the upper limit on the \oiii{} emission against the star-formation rate estimated from the JWST observations. The limit (and the tentative line from Subsection~\ref{sec:tentativelinesection}) are compared to local starburst galaxies \citep{delooze14}, metal-poor galaxies \citep{Cormier2019,Harikane2019}, and a reference sample of $z > 6$ Lyman-break selected galaxies from \cite{Harikane2019}. The tentative line detection lies slightly above the scaling relation (although still consistent within the error bars) and, if real in spite of all the caveats given above, could suggest a further enhancement of \oiii{} emission in the early Universe. 
In contrast, the 5$\sigma$ \oiii{} line luminosity upper limit would suggest that the metal abundance of GHZ2/GLASS-z13 might not yet be as high as that seen in $z = 6 - 9.2$ Lyman Break Galaxies \citep{Harikane2019,Jones2020}. Following Equation~2 of \cite{Jones2020} and assuming the source is indeed at $z\sim12$, the 5$\sigma$ line flux limit implies an oxygen abundance of $12 + \log{\rm (O/H)} < 7.83$ (adopting SFR~$= 20$ M$_{\odot}$~yr$^{-1}$, electron temperature $T_e = 1.5 \times 10^4$~K, density $n_e = 250$~cm$^{-3}$, and an ionization correction factor of 0.17 dex from O$^{++}$ to the total O abundance). The uncertainty arising from the unknown physical conditions is of order 0.4 dex \citep{Jones2020}. This limit corresponds to $< \frac{1}{7}$ of the solar value \citep[$12 + \log{\rm (O/H)}_{\odot} =8.69$;][]{Asplund2009}, comparable to the typical metallicities inferred for luminous \oiii\ emitters at $z\sim8$ \citep{Jones2020}. Our metallicity limit implies that GHZ2/GLASS-z13 is likely to be in an early stage of chemical enrichment. From a simple closed-box chemical evolution model, assuming oxygen yields $y_O = 0.007-0.039$ from low-metallicity stars \citep{Vincenzo2016}, the metallicity of GHZ2/GLASS-z13 suggests that only $< 3$--14\% of its gas has been processed into stars (i.e. a $>86$\% gas fraction). However, gaseous inflows and outflows can permit smaller gas fractions; the low metallicity may thus indicate accretion and outflow rates comparable to or larger than the SFR. The low metallicity implied by the non-detection is in agreement with the SED fitting to the NIRCam photometry, from which we constrain the stellar metallicity to be $<0.2Z_\odot$ (with a best-fit metallicity of $0.02Z_\odot$). This is expected given that only $\sim 400$ Myr elapsed from the Big Bang to the time of observation, leaving little time to form heavy elements \citep{Maiolino2019,Ucci2021}. 
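The closed-box estimate above can be reproduced in a few lines. Note that the conversion from $12+\log(\rm O/H)$ to an oxygen mass fraction (taken here simply as $16\times$ the O/H number ratio per unit hydrogen mass) is an assumed convention; alternative choices shift the processed fraction at the tens-of-per-cent level, consistent with the range quoted in the text.

```python
import math

# Closed-box chemical evolution: Z = y * ln(1 / f_gas)  =>  f_gas = exp(-Z / y)
log_OH = 7.83                # 12 + log(O/H) upper limit from the [OIII] non-detection
OH = 10 ** (log_OH - 12.0)   # oxygen-to-hydrogen number ratio
Z_O = 16.0 * OH              # oxygen mass fraction per unit H mass (assumed convention)

# Oxygen yields for low-metallicity stars (Vincenzo et al. 2016)
f_star = {y_O: 1.0 - math.exp(-Z_O / y_O) for y_O in (0.007, 0.039)}
for y_O, f in f_star.items():
    print(f"y_O = {y_O}: fraction processed into stars < {100 * f:.0f}%")
# -> roughly 3-14% of the gas processed, i.e. a gas fraction > ~86%
```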
Our low metallicity limit further corroborates the young age based on SED fitting \citep{Santini2022}. \subsection{Lack of dust in the cosmic dawn?} The lack of a dust detection (down to a $3 \sigma$ limit of 13.8~$\mu$Jy; see Figure~\ref{fig:stamps}) implies upper limits of $1.5\times{}10^6$~$M_{\odot}$ of interstellar dust, $6.5 \times{} 10^{10}$ L$_{\odot}$ in far-infrared luminosity, and 11~M$_\odot$\,yr$^{-1}$ of dust-obscured star formation. This is consistent with the blue UV slope ($\beta_{UV} \approx -2.4$), suggesting little dust obscuration of the young ($\sim 70$~Myr; \citealt{Naidu2022}) stellar population. These limits assume a dust temperature of 50~K, although average temperatures could rise to 75~K or beyond based on the observed evolution of dust temperature with redshift reported in e.g., \cite{Bouwens2020,Bakx2021}. The dearth of dust is in line with dust production models, which typically require several tens of Myr before the supernovae of the heaviest stars produce the metals necessary for dust. Wolf-Rayet binaries are an alternative dust production pathway, in which the orbital dynamics of the two stars create a region where the colliding stellar winds are able to produce dust. We can place a relatively weak constraint on the dust production from these types of systems of $< 1.5 \times{} 10^{-3}$ M$_{\odot}$/star, in line with the models of \cite{Lau2021}. Figure~\ref{fig:IRX_beta} compares the dust-obscured emission ($\rm IRX = L_{\rm IR}/L_{\rm UV}$) against the UV slope, and shows that GHZ2/GLASS-z13 lies at the low end of dust-obscured star formation. The $\log_{10} \rm IRX$ can move upwards by 0.5 if the dust temperature is 75~K instead of 50~K, removing the source from the extremely low IRX region. Regardless, this galaxy stands in contrast to the relatively high dust-obscuration factors found at $ z\sim8$ (e.g. 
\citealt{Inami2022}), implying a very low dust content in the early Universe and negligible dust attenuation at $z\sim12$. This is consistent with the recent calculations by \citet{MTT2022} and \citet{FPD2022}, who concluded that negligible dust attenuation is necessary to explain the number of bright JWST candidates reported at $z\approx11-14$. \section{Future prospects for spectroscopy of $z>10$ galaxies} \label{sec:5} Our very deep ALMA observations strengthened the case for GHZ2/GLASS-z13 being at $z\sim12$, but could not provide a conclusive redshift identification. In this section we attempt to draw some lessons from this case study that may help guide the follow-up and spectroscopic confirmation of GHZ2/GLASS-z13 and of other high-redshift sources being found by JWST. As far as ALMA is concerned, since carbon requires nearly half a billion years to build up \citep{Maiolino2019}, oxygen, with its much shorter enrichment timescale ($\sim$50~Myr), is generally the best spectroscopic redshift indicator. Indeed, as thoroughly explored in \cite{Bouwens2021Rebels}, and initially indicated by \cite{Inoue2016}, \oiii{} is likely the brightest line in the distant Universe. Unfortunately, our $\sim16\,$hr ALMA effort obtained informative upper limits but was unable to detect a line at the location of the JWST counterpart at high significance. This is in part due to the relatively narrow bandwidth of the ALMA receivers, which prevented us from covering the full photometric redshift probability distribution with a single tuning, forcing us to divide the time between settings at the cost of substantial overhead. The development of wider-bandwidth receivers \citep{ALMAROADMAP} would significantly speed up the process of building large samples of spectroscopically-confirmed galaxies at these early epochs, and the characterization of their metallicity and dust content, which remain a major and compelling scientific goal for ALMA. 
In the near-infrared, JWST-NIRSpec should be able to provide conclusive redshift identifications for large samples of galaxy candidates at these redshifts identified by NIRCam. For targets as bright as GHZ2/GLASS-z13, just a few hours of integration with the prism would be sufficient to detect the continuum, and thus secure a redshift identification via the Lyman break at much higher spectral resolution than broad-band photometry provides. If emission lines are present, the same short prism observations would detect common emission lines such as \ion{N}{V}$\lambda 1242$, \ion{C}{IV}$\lambda 1548$, \ion{He}{II}$\lambda 1640$, \ion{O}{III}]$\lambda 1660$, \ion{C}{III}]$\lambda 1909$ -- and [\ion{O}{II}]$\lambda\lambda3726,3729$ below $z\sim13$ -- for equivalent widths as low as 5\,\AA. The detection of these lines would nicely complement the detection of, or upper limits on, [\ion{O}{III}] from ALMA in terms of metallicity measurements \citep[see the discussion by][at lower redshift]{Jones2020}. Even for candidates not as photometrically secure as GHZ2/GLASS-z13, with colors and photo-$z$ allowing for lower-redshift solutions, JWST-NIRSpec should easily distinguish the Lyman break from the most likely contaminants (galaxies with the Balmer break at the corresponding wavelength and blue rest-frame optical colors), owing to the abundant and strong lines around the Balmer break. At wavelengths between NIRSpec and ALMA, JWST-MIRI should provide a third important window into early galaxy formation, by allowing the detection of strong optical emission lines such as H$\beta$, H$\alpha$ and [\ion{O}{III}]$\lambda\lambda4959,5007$, if present and strong. We conclude that ALMA and JWST are highly synergistic, and together they should revolutionize our understanding of early galaxy formation and evolution. 
\section{Summary} \label{sec:6} We reported on an ALMA band 6 search for the spectroscopic redshift of GHZ2/GLASS-z13 through the \oiii{} emission line, covering 30\,GHz contiguously. Despite the depth of our observations ($1\sigma=0.1\,$mJy\,beam$^{-1}$ in 35\,km\,s$^{-1}$ channels), we found no obvious line emission at the central position, with only a tentative line offset by 0\farcs5 from the target. If real and associated with the source, this detection would imply a spectroscopic redshift of $z=12.117\pm0.012$. Confirmation of this tentative emission line is, nevertheless, required to rule out a spurious origin. The \oiii{} luminosity upper limit from our observations suggests a metal-poor system ($12 + \log{\rm (O/H)} < 7.83$) in the distant Universe, with a hint of a lower line luminosity compared to $z\approx6-9$ galaxies. The lack of dust emission, even in our deep observations, contrasts with lower-redshift galaxies, implying a very low dust content and negligible dust obscuration at this early epoch, potentially due to the short cosmic time elapsed. We have also discussed potential strategies for deriving spectroscopic redshifts of $z\gtrsim11$ candidates, the necessity of improving current instruments' capabilities, and the importance of combining multi-wavelength observations to constrain the physical properties of the earliest galaxies in the Universe. \section*{Acknowledgements} This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2021.A.00020.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This work is partly based on observations made with the NASA/ESA/CSA James Webb Space Telescope. 
The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program JWST-ERS-1324. We acknowledge financial support from NASA through grant JWST-ERS-1324. TB and YT acknowledge funding from NAOJ ALMA Scientific Research Grant Numbers 2018-09B and JSPS KAKENHI No. 17H06130, 22H04939. We thank Stefano Carniani and Stefano Berta for their kind and useful discussions. \section*{Data Availability} The data are publicly available through the ALMA science archive and the MAST portal managed by Space Telescope Science Institute. Other calibrated products used in this article will be shared upon request. \bibliographystyle{mnras} \bibliography{example} \appendix \section{ALMA observation table}\label{secc:appendix} In this appendix we summarise the ALMA observations in Table~\ref{tab:alma_observations}. 
\begin{table*} \centering \caption{Parameters of the ALMA observations} \label{tab:alma_observations} \begin{tabular}{cccccc} \hline UT start time & Baseline length & N$_{\rm ant}$ & Frequency & T$_{\rm int}$ & PWV \\ $[$YYYY-MM-DD hh:mm:ss$]$ & [m] & & [GHz] & [min] & [mm] \\ \hline \multicolumn{6}{c}{\textbf{Tuning 1}} \\ 2022-08-03 06:33:45 & 15 -- 1301 & 43 & 233.42--237.14 \& 248.22--251.94 & 44.30 & 0.82\\ 2022-08-03 07:48:41 & 15 -- 1301 & 43 & 233.42--237.14 \& 248.22--251.94 & 44.37 & 0.94\\ 2022-08-03 09:03:07 & 15 -- 1301 & 43 & 233.42--237.14 \& 248.22--251.94 & 44.38 & 0.97\\ \multicolumn{6}{c}{\textbf{Tuning 2}} \\ 2022-08-03 10:42:26 & 15 -- 1301 & 43 & 237.12--240.84 \& 251.92--255.64 & 43.88 & 1.03\\ 2022-08-03 12:03:52 & 15 -- 1301 & 43 & 237.12--240.84 \& 251.92--255.64 & 43.90 & 1.15\\ 2022-08-04 06:50:26 & 15 -- 1301 & 44 & 237.12--240.84 \& 251.92--255.64 & 43.83 & 0.57\\ \multicolumn{6}{c}{\textbf{Tuning 3}} \\ 2022-08-04 08:14:20 & 15 -- 1301 & 44 & 240.82--244.54 \& 255.62--259.34 & 44.87 & 0.56\\ 2022-08-04 09:32:36 & 15 -- 1301 & 44 & 240.82--244.54 \& 255.62--259.34 & 44.85 & 0.57\\ 2022-08-04 10:48:43 & 15 -- 1301 & 44 & 240.82--244.54 \& 255.62--259.34 & 44.87 & 0.62\\ \multicolumn{6}{c}{\textbf{Tuning 4}} \\ 2022-08-05 06:57:24 & 15 -- 1301 & 46 & 244.52--248.24 \& 259.32--263.04 & 44.05 & 0.48\\ 2022-08-05 08:01:01 & 15 -- 1301 & 46 & 244.52--248.24 \& 259.32--263.04 & 44.02 & 0.45 \\ \hline \end{tabular} \end{table*} \bsp % \label{lastpage}
Title: Extragalactic Observatory Science with the ASTRI Mini-Array at the Observatorio del Teide
Abstract: The ASTRI Mini-Array is a next-generation system of nine imaging atmospheric Cherenkov telescopes that is going to be built at the Observatorio del Teide site. After a first phase, in which the instrument will be operated as an experiment prioritizing a schedule of primary science cases, an observatory phase is foreseen in which other significant targets will be pointed. We focus on the observational feasibility of extragalactic sources and on astrophysical processes that best complement and expand the ASTRI Mini-Array core science, presenting the most relevant examples that are at reach of detection over long-term time scales and whose observation can provide breakthrough achievements in the very-high energy extragalactic science. Such examples cover a wide range of $\gamma$-ray emitters, including the study of AGN low states in the multi-TeV energy range, the possible detection of Seyfert galaxies with long exposures and the searches of dark matter lines above 10 TeV. Simulations of the presented objects show that the instrument performance will be competitive at multi-TeV energies with respect to current arrays of Cherenkov telescopes.
https://export.arxiv.org/pdf/2208.03176
\let\WriteBookmarks\relax \def\floatpagefraction{1} \def\textpagefraction{.001} \shorttitle{Extragalactic Science with the ASTRI Mini-Array} \shortauthors{F. G. Saturni et al.} \title[mode = title]{Extragalactic Observatory Science with the ASTRI Mini-Array at the {\itshape Observatorio del Teide}} \author[1,2]{F. G. Saturni}[type=editor,orcid=0000-0002-1946-7706] \cormark[1] \ead{francesco.saturni@inaf.it} \cortext[cor1]{Corresponding author} \author[3,4,5,6]{C. H. E. Arcaro}[orcid=0000-0002-1998-9707] \author[7]{B. Balmaverde}[orcid=0000-0002-0690-0638] \author[8,9]{J. {Becerra Gonz{\'a}lez}}[orcid=0000-0002-6729-9022] \author[10]{A. Caccianiga}[orcid=0000-0002-2339-8264] \author[11]{M. Capalbi}[orcid=0000-0002-9558-2394] \author[1]{A. Lamastra}[orcid=0000-0003-2403-913X] \author[1,2]{S. Lombardi}[orcid=0000-0002-6336-865X] \author[1,2]{F. Lucarelli}[orcid=0000-0002-6311-764X] \author[12]{R. {Alves Batista}}[orcid=0000-0003-2656-064X] \author[1,2]{L. A. Antonelli}[orcid= 0000-0002-5037-9034] \author[13]{E. M. {de Gouveia Dal Pino}}[orcid=0000-0001-8058-4752] \author[10]{R. {Della Ceca}}[orcid=0000-0001-7551-2252] \author[1,2,14]{J. G. Green}[orcid=0000-0002-1130-6692] \author[11]{A. Pagliaro}[orcid=0000-0002-6841-1362] \author[15]{C. Righi}[orcid=0000-0002-1218-9555] \author[15]{F. Tavecchio}[orcid=0000-0003-0256-0995] \author[15]{S. Vercellone}[orcid=0000-0003-1163-1396] \author[10]{A. Wolter}[orcid=0000-0001-5840-9835] \author[16]{E. Amato}[orcid=0000-0002-9881-8112] \author[1,2]{C. Bigongiari}[orcid=0000-0003-3293-8522] \author[4]{M. B{\"o}ttcher}[orcid=0000-0002-8434-5692] \author[17]{G. Brunetti}[orcid=0000-0003-4195-8613] \author[18]{P. Bruno}[orcid=0000-0003-3919-9611] \author[19]{A. Bulgarelli}[orcid=0000-0001-6347-0649] \author[20]{M. Cardillo}[orcid=0000-0001-8877-3996] \author[19]{V. Conforti}[orcid=0000-0002-0007-3520] \author[18]{A. Costa}[orcid=0000-0003-0344-8911] \author[11]{G. Cusumano}[orcid=0000-0002-8151-1990] \author[19]{V. 
Fioretti}[orcid=0000-0002-6082-5384] \author[21]{S. Germani}[orcid=0000-0002-2233-6811] \author[22]{A. Ghedina}[orcid=0000-0003-4702-5152] \author[19]{F. Gianotti}[orcid=0000-0003-4666-119X] \author[18]{V. Giordano}[orcid=0000-0001-8865-5930] \author[23]{A. Giuliani}[orcid=0000-0002-4315-1699] \author[18]{F. Incardona}[orcid=0000-0002-2568-0917] \author[11]{A. {La Barbera}}[orcid=0000-0002-5880-8913] \author[18]{G. Leto}[orcid=0000-0002-0040-5011] \author[24,25]{F. Longo}[orcid=0000-0003-2501-2270] \author[16]{G. Morlino}[orcid=0000-0002-5014-4817] \author[26]{B. Olmi}[orcid=0000-0001-6022-8216] \author[19]{N. Parmiggiani}[orcid=0000-0002-4535-5329] \author[15]{P. Romano}[orcid=0000-0003-0258-7469] \author[18]{G. Romeo}[orcid=0000-0003-3239-6057] \author[1]{A. Stamerra}[orcid=0000-0002-9430-5264] \author[15]{G. Tagliaferri}[orcid=0000-0003-0121-0723] \author[1]{V. Testa}[orcid=0000-0003-1033-1340] \author[10,21]{G. Tosti}[orcid=0000-0002-0839-4126] \author[23]{P. A. Caraveo}[orcid=0000-0003-2478-8018] \author[15]{G. Pareschi}[orcid=0000-0003-3967-403X] \address[1]{INAF -- Osservatorio Astronomico di Roma, Via Frascati 33, I-00078 Monte Porzio Catone (RM), Italy} \address[2]{ASI -- Space Science Data Center, Via del Politecnico snc, I-00133 Roma, Italy} \address[3]{INAF -- Osservatorio Astronomico di Padova, V.lo Osservatorio 5, I-35122 Padova, Italy} \address[4]{North-West University, Centre for Space Research, SA-2520 Potchefstroom, South Africa} \address[5]{Universit{\`a} di Padova, Dip. di Fisica, Via F. Marzolo 8, I-35121 Padova, Italy} \address[6]{INFN -- Sezione di Padova, Via F. Marzolo 8, I-35121 Padova, Italy} \address[7]{INAF -- Osservatorio Astrofisico di Torino, Via Osservatorio 20, I-10025 Pino Torinese (TO), Italy} \address[8]{Instituto de Astrof{\'i}sica de Canarias, C/ V{\'i}a L{\'a}ctea s/n, E-38205 La Laguna (Tenerife), Spain} \address[9]{Universidad de La Laguna, Dep.to de Astrof{\'i}sica, Av.da Astrof{\'i}sico F. 
S{\'a}nchez s/n, E-38206 La Laguna (Tenerife), Spain} \address[10]{INAF -- Osservatorio Astronomico di Brera, Via Brera 28, I-20121 Milano, Italy} \address[11]{INAF -- Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo, Via U. La Malfa 153, I-90146 Palermo, Italy} \address[12]{UAM -- CSIC, Instituto de F{\'i}sica Te{\'o}rica, C/ N. Cabrera 13-15, E-28049 Madrid, Spain} \address[13]{Univ. de S{\~a}o Paulo, Inst. de Astronomia, Geof{\'i}sica e Ci{\^e}ncias Atmosf{\'e}ricas, Cid. Universitaria, R. do Mat{\~a}o 1226, BR-05508-090 S{\~a}o Paulo (SP), Brazil} \address[14]{Max-Planck-Institut F{\"u}r Physik, F{\"o}hringer Ring 6, D-80805 M{\"u}nchen, Germany} \address[15]{INAF -- Osservatorio Astronomico di Brera, Via E. Bianchi 46, I-23807 Merate (LC), Italy} \address[16]{INAF -- Osservatorio Astrofisico di Arcetri, L.go E. Fermi 5, I-50125 Firenze, Italy} \address[17]{INAF -- Istituto di Radioastronomia, Via P. Gobetti 101, I-40129 Bologna, Italy} \address[18]{INAF -- Osservatorio Astrofisico di Catania, Via S. Sofia 78, I-95123 Catania, Italy} \address[19]{INAF -- Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via P. Gobetti 93/3, I-40129 Bologna, Italy} \address[20]{INAF -- Istituto di Astrofisica e Planetologia Spaziali di Roma, Via del Fosso del Cavaliere 100, I-00133 Roma, Italy} \address[21]{Universit{\`a} di Perugia, Dip. di Fisica e Geologia, Via G. Pascoli snc, I-06123 Perugia, Italy} \address[22]{INAF -- Fundaci{\'o}n Galileo Galilei, Rbla. J. A. Fern{\'a}ndez P{\'e}rez 7, ES-38712 San Antonio de Bre{\~n}a (TF), Spain} \address[23]{INAF -- Istituto di Astrofisica Spaziale e Fisica Cosmica di Milano, Via A. Corti 12, I-20133 Milano, Italy} \address[24]{Universit{\`a} degli Studi di Trieste, Dip. di Fisica, Via A. Valerio 2, I-34127 Trieste, Italy} \address[25]{INFN -- Sezione di Trieste, Via A. 
Valerio 2, I-34127 Trieste, Italy} \address[26]{INAF -- Osservatorio Astronomico di Palermo, P.zza del Parlamento 1, I-90134 Palermo, Italy} \begin{keywords} Telescopes \sep Cherenkov arrays \sep Gamma rays: general \sep Gamma rays: galaxies \sep Dark matter \end{keywords} \tableofcontents \section{Introduction}\label{sec:intro} Observations from Earth with arrays of imaging air Cherenkov telescopes \citep[IACTs; e.g.,][]{aha92} play a paramount role in the future development of $\gamma$-ray astronomy. In this context, the ``Astronomia con Specchi a Tecnologia Replicante Italiana'' (ASTRI) Mini-Array, a system composed of 9 ASTRI Small-Sized Telescopes (SSTs) originally proposed as a precursor for the Southern site of the Cherenkov Telescope Array \citep[CTA;][]{par16}, is now under construction at the {\itshape Observatorio del Teide} (Tenerife, Canary Islands). The ASTRI project is an international collaboration led by the Italian National Institute for Astrophysics (INAF) that involves the Instituto de Astrof{\'i}sica de Canarias (IAC, Spain) as a strategic partner, with scientific partnerships from other Italian institutes, Brazil and South Africa. It aims at the realization of an IACT array of dual-mirror SSTs with a Schwarzschild-Couder optical configuration. Such telescopes are characterized by a large field of view (FoV) of $\sim$10$^\circ$, with a spatial and energy resolution of $\lesssim$0.1$^\circ$ and $\sim$10\%, respectively, for energies $\gtrsim$1 TeV, and are equipped with Cherenkov cameras based on silicon photo-multiplier (SiPM) detectors and innovative readout electronics. The first ASTRI prototype (ASTRI-{\itshape Horn D'Arturo}) has been operating at the Serra La Nave observing station on Mt. Etna (Catania, Italy) since 2014. 
The full functionality of its optical design and camera for Cherenkov observations has recently been demonstrated through the detection of $\gamma$-ray emission from the Crab Nebula at TeV energies \citep{lom20}. We highlight that this is the fourth article of a series of papers devoted to the comprehensive description of the ASTRI Mini-Array project from the technological, managerial and scientific points of view. The full technical description of the ASTRI Mini-Array and the expected performance of the system are reported in Scuderi et al. (2022, {\bfseries Paper I} hereafter) and Vercellone et al. (2022, {\bfseries Paper II}), respectively. Since the ASTRI Mini-Array will start to operate as an experiment, it will prioritize observations of the core-science cases, which are outlined in {\bfseries Paper II}. Observations of additional sources will either be carried out simultaneously with the core-science ones, exploiting the large instrumental FoV, or performed in a subsequent observatory phase. D'A{\`i} et al. (2022, {\bfseries Paper III}) focuses on the potential science outcome from observations of Galactic targets; in this document ({\bfseries Paper IV}), we aim to highlight the scientific prospects of the ASTRI Mini-Array for observations of extragalactic sources during the observatory phase of the instrument. In view of the analysis and scientific exploitation of the ASTRI Mini-Array data, the ASTRI Comprehensive Data Challenge (ACDC) project started in 2018 with the goal of producing a representative data set of the ASTRI Mini-Array capabilities, based on a state-of-the-art model of the $\gamma$-ray sky and a realistic observing plan. Details can be found in \citet{pin20}. 
Although the simulations presented in \citet{pin20} were performed within the framework of the ASTRI Mini-Array located at the CTA Southern site (thus taking into account astrophysical objects that may be unfavorably observable from the Northern hemisphere), they nevertheless provide a useful benchmark of the capabilities of the instrument in observing high-energy processes in astrophysical sources. The paper is organized as follows: we provide an overview of extragalactic science at TeV energies in Sect. \ref{sec:targets}; we discuss the possibility of performing serendipitous observations of some extragalactic sources simultaneously with the core-science targets (see {\bfseries Paper II}) in Sect. \ref{sec:simobs}; then, we briefly describe the analysis and simulation setup adopted for each of the proposed targets and present the corresponding results in Sects. \ref{sec:results} and \ref{sec:dmctools}, also comparing the results obtained with ASTRI Mini-Array simulated observations to the existing literature and outlining potential observing strategies to improve the future scientific exploitation of the instrument; finally, we summarize our most important results in Sect. \ref{sec:conc}. Throughout the article, we evaluate the scientific prospects of observation of potential ASTRI Mini-Array targets by performing $\gamma$-ray observing simulations with the most updated versions of the public software packages {\ttfamily ctools} \citep{kno16} and {\ttfamily GammaPy} \citep{gammapy:2017}, coupled with a suitable set of ASTRI Mini-Array instrument response functions (IRFs). We refer to {\bfseries Paper II} for a detailed description of the IRF production and validation process. In such simulations, we make use of the most recent model for the extragalactic background light (EBL) by \citet{fra17} unless otherwise stated (in which cases, the adopted EBL model is indicated). We note that the adopted IRFs were produced for a fixed zenith angle (ZA) of 20$^\circ$. 
While these IRFs are appropriate for sources that are observable at low ZAs ($\sim$30$^\circ$), they may not be entirely adequate for sources whose culmination is at significantly higher ZAs. In order to avoid significant bias in our analysis, we therefore limit our panoramic view of the ASTRI Mini-Array extragalactic targets to objects that can be observed under low-to-intermediate ZAs ($\lesssim$45$^\circ$). This choice ensures that the energy threshold -- a particularly important quantity for extragalactic VHE studies, due to EBL absorption -- is at most a factor of $\sim$2 greater than that of the adopted IRFs\footnote{While awaiting the production of IRFs at intermediate (40$^\circ$) and large (60$^\circ$) ZAs, we estimate, by means of an {\itshape ad-hoc} Monte-Carlo simulation of $\gamma$-rays at various ZAs up to 60$^\circ$, that the energy threshold of the ASTRI Mini-Array can be approximated by the empirical formula $E_{\rm thr}({\rm ZA}) = E_{\rm thr}(0^\circ) \times \left[\cos({\rm ZA})\right]^{-2.5}$. Therefore, the energy threshold at ZA $= 45^\circ$ is a factor of $\sim$2 greater than that at ZA $= 20^\circ$.}, while the other performance quantities should not be much affected (thus making the impact on our results quite limited). Finally, we adopt a $\Lambda$-CDM cosmology with $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M} = 0.3$ and $\Omega_\Lambda = 0.7$. \section{Overview of the extragalactic science at TeV energies}\label{sec:targets} The scientific prospects of extragalactic astronomy with the ASTRI Mini-Array mainly rely on deep observations of active galactic nuclei \citep[AGN; e.g.,][]{lyn69} and galaxy clusters at energies $\gtrsim$1 TeV, and on cosmology and fundamental physics studies. Since the search for Lorentz invariance violation (LIV) effects, the TeV observations of and constraints on the EBL, and the tests on the existence of axion-like particles have already been presented in {\bfseries Paper II} {\bfseries (and refs.
therein)}, here we focus on the search for $\gamma$-ray signals produced by dark matter annihilation or decay into Standard Model (SM) pairs \citep[e.g.,][]{ber05} from halos around extragalactic astrophysical sources, such as the dwarf spheroidal galaxies \citep[dSphs; e.g.,][]{str08}. In the following, we therefore provide an overview of such fields of extragalactic science at very high energies (VHE), and also briefly outline additional science cases that are worth considering for future observations. \subsection{Emission of $\gamma$-rays from active galactic nuclei} AGN are among the primary $\gamma$-ray emitters located outside the Milky Way. In these objects, the gravitational energy of matter falling onto the central supermassive black hole (SMBH) through accretion processes \citep{sal64,zel65} is released in the form of radiation and/or kinetic energy powering gas outflows. An exhaustive review of $\gamma$-ray observations of AGN is given e.g. in \citet{mad16}; here, we mainly focus on the capabilities of the ASTRI Mini-Array to detect: \begin{itemize} \item the signature emission from the brightest and closest blazars Mkn 421 and Mkn 501 \citep[e.g.,][]{mar72}; \item the signal from additional (extreme) high-synchrotron-peaked blazars \citep[HSPs and EHSPs; e.g.,][]{pad95}, besides the sources mentioned above; \item the $\gamma$-ray emission from Seyfert galaxies \citep{FermiColl19}. \end{itemize} \subsubsection{TeV emission from blazars} Blazars are extragalactic sources characterized by emission of radiation covering the whole electromagnetic spectrum and usually showing flux variability, often of exceptional amplitude and, in some cases, on extremely short timescales down to a few minutes \citep[e.g.,][]{aha07}. Their spectral energy distribution (SED) is dominated by non-thermal radiation attributed to a relativistic jet of plasma pointing close to our line of sight \citep[see e.g.][]{urr95}.
In their SEDs we indeed identify a low-energy component associated with synchrotron radiation by relativistic electrons, and a component peaking at higher energy that, although widely interpreted as inverse Compton (IC) emission, could also be associated with hadronic processes involving high-energy protons or ions \citep[see e.g.][for reviews of the blazar emission mechanisms and energetics]{cel08,bot13, 2020arXiv200306587M}. Protons (or nuclei) accelerated to very high energies could indeed emit through various processes, such as synchrotron radiation \citep{aha00,mue03}, photo-meson reactions \citep{man93,mue03} or pion production through collisions with low-energy protons \citep[e.g.,][]{kel06}. Blazars are also considered possible sources of ultra-high-energy cosmic rays (UHECRs), and have recently been associated with PeV neutrinos detected by IceCube \citep{pad16,TG15,res17,ice18}. Recent VHE blazar studies within a multi-wavelength context have found evidence for a more complex blazar jet structure than assumed in classical one-zone models. The properties of VHE-emitting blazars suggest that a spine-sheath structure characterizes their jets \citep{GTC05}. Moreover, in order to reproduce the observed emission, structured jets with multiple emission regions are required \citep[see e.g.][]{2011ApJ...730L...8A, 2014A&A...567A.135A, 2019A&A...623A.175M}. In particular, the extremely fast flux variability, on timescales of a few minutes, observed in some blazars at TeV energies -- e.g., Mkn 501 \citep{Albert2007} -- suggests the existence of extremely fast and compact acceleration/emission regions which could plausibly be explained by fast magnetic reconnection involving misaligned current sheets inside the jet \citep{giannios_etal_09, 2019arXiv190308982D} that can be naturally excited by turbulence driven by kink instabilities in the underlying helical fields \citep{2016ApJ...824...48S,kad21,med21}.
In such a scenario, the observed VHE spectrum is expected to be a superposition of contributions from more than one emitting region. However, no solid detection of the expected spectral features has been possible so far using the current generation of Cherenkov telescopes. Next-generation arrays with better sensitivity and energy resolution might help in the search for such characteristic signatures. Blazars have been empirically divided into two main classes on the basis of their optical spectral properties: flat-spectrum radio quasars (FSRQs), and BL Lac objects (BL Lacs). The former show strong and broad emission lines, whereas the latter are characterized by featureless optical spectra. A further classification proposed for blazars is based on the position of the synchrotron peak in their SED, which defines low-, intermediate- or high-synchrotron-peaked (HSP) blazars, when the peak falls below 10$^{14}$ Hz, between 10$^{14}$ and 10$^{15}$ Hz, or above 10$^{15}$ Hz, respectively. Since HSPs typically have featureless optical spectra, i.e. they belong to the class of BL Lacs, HSPs are also often called high-energy peaked BL Lacs \citep[HBLs;][]{pad95}. Within the class of HSPs/HBLs there is an important minority, called ``extreme HSPs'' (EHSPs), in which the synchrotron emission peaks in the 0.1--10 keV X-ray band \citep{Costamante2001,Biteau2020}. Since the synchrotron and IC humps are usually correlated, in the class of HSPs the $\gamma$-ray hump also peaks at very high energies, typically above 100 GeV. \citet{cos18} recently recognized an even more extreme class of HSPs in which the $\gamma$-ray hump peaks above $\sim$1 TeV (the hard-TeV BL Lacs). As discussed by \citet[][see also the review in \citealt{Biteau2020}]{cos18}, the high energy of the Compton peak potentially challenges the standard one-zone leptonic model. The next generation of Cherenkov telescopes, like the ASTRI Mini-Array, will be fundamental to better understand the physics of these enigmatic sources.
To quantify the actual capabilities of the ASTRI Mini-Array to detect and study in detail VHE spectral features in normal and extreme populations of blazars, we investigate simulated observations of the two BL Lac objects Mkn 421 and Mkn 501. Such sources are the closest known VHE blazars (and HSPs/HBLs), and possibly the best sources to search for peculiar spectral features, for several reasons: (i) they are not strongly affected by the EBL absorption up to high energies; (ii) they are bright VHE sources, allowing a detection with good signal-to-noise ratio, which is crucial to search for features; and (iii) they are likely only slightly affected by internal absorption within the source. Indeed, Mkn 501 is the only blazar for which a $\sim$4$\sigma$ hint of a narrow spectral feature has been detected so far \citep{2011MNRAS.410.2556D}. Along with these targets, we also present a selection of (E)HBLs potentially detectable with the ASTRI Mini-Array. As concrete examples of how an (E)HBL should appear to the ASTRI Mini-Array, we discuss the simulation of two sources, i.e. the prototypical EHBL 1ES 0229$+$200 \citep{aharonian_etal_07} and the HSP RGB J1117$+$202 detected by {\itshape Fermi}-LAT \citep{abd10}. \subsubsection{$\gamma$-ray emission from Seyfert galaxies}\label{1068_sec} The SED of Seyfert galaxies is dominated by thermal emission in the optical-to-UV waveband produced by the accretion disc around the SMBH (Seyfert 1). In addition, a corona of hot plasma forms above the accretion disc and can IC-scatter accretion disc photons up to X-ray energies. A large fraction of the optical-UV and X-ray radiation may be obscured by interstellar gas and dust close to the accretion disc and/or in the host galaxy (Seyfert 2). The absorbed radiation is reprocessed at some other waveband, most likely in the infrared.
Seyfert galaxies also emit non-thermal radiation in the $\gamma$-ray band, as indicated by the detection of the nearby Seyfert galaxies NGC 1068, NGC 4945, and the Circinus galaxy with the {\itshape Fermi}-LAT $\gamma$-ray space telescope \citep{ack12,hay13}. These galaxies exhibit characteristics of starburst activity, AGN-driven winds, and weak misaligned jets \citep{gal96,elm98,len09,kri11,Garcia14,mel15,zsc16,hen18}. Given the existence of several possible emission mechanisms operating at high energies, the origin of the $\gamma$-ray emission is still undetermined. A potential mechanism for this emission could be the acceleration of relativistic particles by magnetic reconnection \citep{dgdp_lazarian_05} in the nuclear region of these sources, in the turbulent magnetized corona around the black hole \citep[e.g.,][]{2015ApJ...802..113K, 2015ApJ...799L..20S, 2016MNRAS.455..838K, 2019ApJ...879....6R}. Another possibility relies on the evidence of starburst activity in these systems. The standard paradigm for the origin of the $\gamma$-ray emission in star-forming galaxies is non-thermal emission from relativistic particles accelerated in the shocks produced by supernova explosions \citep[e.g.][]{per08,cea09,abd10,ack12}. Finally, a further possibility, supported by recent UHECR observations \citep{augerSBG}, is that the $\gamma$-ray emission could result from particles accelerated via other mechanisms. Regardless of how particles are accelerated in these galaxies, the $\gamma$-ray emission is predominantly hadronic, and it is produced by the decay of neutral pions created in collisions between relativistic protons and ambient protons. The detection of the nearby starburst galaxies M 82 and NGC 253 by VERITAS \citep{2009Natur.462..770V} and H.E.S.S. \citep{2009Sci...326.1080A} indicates that VHE photons can be produced in the nuclei of these galaxies.
Similarly to the shocks produced by supernova explosions, the shocks produced by the interaction of AGN-driven winds with the surrounding interstellar matter are expected to accelerate protons and electrons to relativistic energies, with an efficiency that may exceed that of supernova remnants \citep{fau12,nim15,lam16}. In this scenario, the hadronic emission from pion decays following proton-proton interactions is dominant above about 100 MeV. At lower energies, leptonic processes like IC scattering and non-thermal bremsstrahlung can significantly contribute to the $\gamma$-ray emission. Relativistic particles can also be accelerated in misaligned jets; in the leptonic AGN jet scenario, the $\gamma$-ray emission is produced by IC emission, where the high-energy electrons that are accelerated in the jet up-scatter photons produced either through synchrotron emission from those same electrons or external seed photons \citep{len10}. Any hadronic interactions which produce $\gamma$-rays through neutral pion decay will also produce neutrinos through charged pion decay. Thus, understanding the nature of the $\gamma$-ray emission in Seyfert/starburst galaxies has important implications for the neutrino signal expected from these astrophysical objects. A search for astrophysical point-like neutrino sources using 10 years of IceCube data finds an excess of neutrino events over expectations from the isotropic background 0.35 degrees away from the Seyfert galaxy NGC 1068, with a 2.9$\sigma$ statistical significance \citep{ice10}. While the estimated IceCube neutrino flux appears higher than that predicted by starburst and AGN wind models built on the measured $\gamma$-ray flux, the large uncertainty of the IceCube spectral measurement and the possible $\gamma$-ray absorption within the source have prevented a straightforward connection.
Thus, studying possible $\gamma$-ray and neutrino production mechanisms in NGC 1068 is a timely task that may provide a key clue for unveiling the origin of the cosmic diffuse neutrino background flux \citep[see e.g.][]{MAGIC19}. The VHE emission from individual Seyfert galaxies is expected to be low and has yet to be directly observed. To quantify the capabilities of the ASTRI Mini-Array to detect the VHE emission of NGC 1068, in this paper we present a dedicated simulation of an observation of the $\gamma$-ray spectrum predicted by a model that envisages $\gamma$-ray emission in the energy band covered by the instrument. A significant fraction of starburst galaxies may coexist with AGN, as indicated by observational evidence and theoretical arguments \cite[see e.g.][and references therein]{Alexander12}. The performance of the ASTRI Mini-Array in detecting the VHE emission from starburst galaxies detected by {\itshape Fermi}-LAT in the Northern Hemisphere is analyzed in {\bfseries Paper II}. This analysis indicated that the most promising target for observations with the ASTRI Mini-Array is the starburst galaxy M82, for which we performed dedicated simulations of the VHE spectrum. Here we investigate the potential discovery of the VHE emission from starburst- and AGN-driven outflows with the ASTRI Mini-Array by considering a few more examples of starburst/Seyfert galaxies which could benefit from long exposure times thanks to simultaneous observations with the extragalactic sources presented in this paper and the ASTRI Mini-Array core-science targets.
To this aim we used: \begin{itemize} \item the list of starburst galaxies detected by {\itshape Fermi}-LAT \citep{aje20}; \item a selection of starburst galaxies observed with {\itshape Fermi}-LAT in a search for $\gamma$-ray emission \citep{ack12}, extracted from a survey of the dense molecular gas tracer HCN \citep{gao04}; \item a sample of local ($d_\odot < 130$ Mpc) starburst galaxies selected in the radio and infrared bands presented in \citet[][see therein for the sample selection criteria]{Lunardini19}. \end{itemize} \subsection{VHE emission from the intra-cluster medium} During the hierarchical process of cluster formation, shocks and turbulence are generated in the intra-cluster medium (ICM) and are expected to accelerate both electrons and protons to relativistic energies, leading to a non-thermal population of cosmic rays (CRs) that are confined within the cluster magnetic fields \citep[see e.g.][for a review]{bru14}. The presence of these components is demonstrated by radio observations that detect cluster-scale radio emission from galaxy clusters in the form of radio halos and relics \citep[see e.g.][for a review]{van19}. An unavoidable consequence is the generation of high-energy emission due to the decay of $\pi^0$ generated by CR proton-proton collisions in the ICM and to IC scattering of CMB photons by primary and secondary electrons. $\gamma$-ray emission has recently been detected by {\itshape Fermi}-LAT in the vicinity of the Coma cluster \citep{xi18,ada21,bag21}, although its origin is unclear and the contribution from discrete sources may be important. Overall, current observations of nearby clusters with {\itshape Fermi}-LAT constrain the CR energy budget in these systems to less than a few percent of that of the thermal ICM \citep{fer14,zan14,ada21} and shed light on the origin of the diffuse radio emission, suggesting that radio-emitting electrons are reaccelerated in the ICM, presumably by turbulence \citep{bru12,bru17,ada21}.
Models based on turbulent reacceleration of primary and secondary particles in the ICM \citep{bru05,bru11,pin17} predict levels of $\gamma$-ray flux for a Coma-like cluster in the range $E^2 d\Phi/dE \sim 10^{-14} \div 10^{-13}$ erg s$^{-1}$ cm$^{-2}$ at $5 \div 10$ TeV, i.e. about one order of magnitude fainter than the sensitivity achievable by the ASTRI Mini-Array in 50 h of observation. An additional mechanism to produce VHE photons in galaxy clusters is IC scattering by electron--positron pairs that are generated by photo-pair and photo-pion production in the interaction between ultra-high-energy CRs and photons of the CMB. If CR protons are accelerated to EeV energies and confined in the ICM \citep[see e.g.][and refs. therein]{bru14}, the high-energy pairs that are produced should radiate IC emission peaking in the TeV energy band \citep{ino05,van11}; the combination of cosmological and Monte Carlo CR simulations indicates that clusters can contribute substantially to the diffuse $\gamma$-ray flux beyond 100 GeV observed by {\itshape Fermi}-LAT and constrained by HAWC and CASA-MIA upper limits, depending on the power-law index and the maximum energy of the injected CR spectrum. The contribution amounts to up to 100\% of the flux for a spectral index of $\sim$2 and a maximum energy around $10^{17}$ eV \citep{hus22}. Future observations with the ASTRI Mini-Array, in conjunction with other Cherenkov facilities (MAGIC, H.E.S.S., VERITAS, CTA), may allow us to obtain interesting constraints on these processes. Overall, the search for VHE emission from galaxy clusters will be the subject of forthcoming dedicated studies.
\subsection{Indirect dark matter searches with observations of extragalactic astrophysical sources}\label{indirectDM} \begin{table*}[width=17cm,align=\centering] \centering \caption{Basic properties of the most relevant DM-dominated extragalactic sources (dSphs at $d_\odot \lesssim 100$ kpc and galaxy clusters) that can be observed from the {\itshape Observatorio del Teide} site. In the dSph classification, ``cls'' stands for ``classical'' and ``uft'' for ``ultra-faint''.} \label{tab:dmnorth} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccc} \hline \hline \multicolumn{7}{l}{ }\\ Target & Class & R.A. J2000 & dec. J2000 & Min. ZA & $d_\odot$ & Notes \\ (IAU Name) & & (deg) & (deg) & (deg) & (kpc) & \\ \multicolumn{7}{l}{ }\\ \hline \multicolumn{7}{l}{ }\\ \multicolumn{7}{l}{{\itshape dSphs}}\\ Bo{\"o}tes I & dSph (uft) & 210.03 & $+$14.50 & 13.80 & $66 \pm 2$ & \\ Bo{\"o}tes II & dSph (uft) & 209.50 & $+$12.85 & 15.45 & $42 \pm 1$ & \\ Bo{\"o}tes III & dSph (uft) & 209.30 & $+$26.80 & 1.50 & $47 \pm 2$ & \\ Coma Berenices & dSph (uft) & 186.75 & $+$23.90 & 4.40 & $44 \pm 4$ & 1.6 deg distance from ON 246\\ Draco I & dSph (cls) & 260.05 & $+$57.92 & 29.62 & $76 \pm 6$ & \\ Draco II & dSph (uft) & 238.20 & $+$64.57 & 36.27 & $20 \pm 3$ & \\ Laevens 3 & dSph (uft) & 316.73 & $+$14.98 & 13.32 & $67 \pm 3$ & \\ Segue 1 & dSph (uft) & 151.77 & $+$16.08 & 12.22 & $23 \pm 2$ & \\ Segue 2 & dSph (uft) & 34.82 & $+$20.18 & 8.12 & $35 \pm 2$ & 3.2 deg distance from 1ES 0229$+$200\\ Sextans & dSph (cls) & 153.26 & $-$1.61 & 29.91 & $86 \pm 4$ & \\ Triangulum II & dSph (uft) & 33.32 & $+$36.18 & 7.88 & $30 \pm 2$ & \\ Ursa Major II & dSph (uft) & 132.88 & $+$63.13 & 34.83 & $32 \pm 4$ & \\ Ursa Minor & dSph (cls) & 227.29 & $+$67.22 & 38.92 & $76 \pm 3$ & \\ Willman 1 & dSph (uft) & 162.34 & $+$51.05 & 22.75 & $38 \pm 7$ & 1.7 deg distance from GB6 J1053$+$4930\\ \multicolumn{7}{l}{ }\\ \multicolumn{7}{l}{{\itshape Clusters}}\\ Abell 520 & Gal. 
Cluster & 73.58 & $+$2.95 & 25.35 & $9.746 \times 10^5$ & \\ Coma Berenices & Gal. Cluster & 194.95 & $+$27.98 & 0.32 & $1.007 \times 10^5$ & \\ Perseus & Gal. Cluster & 49.95 & $+$41.51 & 13.21 & $0.777 \times 10^5$ & \\ Virgo & Gal. Cluster & 187.70 & $+$12.34 & 15.96 & $0.154 \times 10^5$ & \\ NGC 5813 & Gal. Cluster & 225.30 & $+$1.70 & 26.60 & $0.281 \times 10^5$ & 1.3 deg distance from NGC 5846\\ NGC 5846 & Gal. Cluster & 226.62 & $+$1.61 & 26.69 & $0.246 \times 10^5$ & 1.3 deg distance from NGC 5813\\ \multicolumn{7}{l}{ }\\ \hline \end{tabular} } \end{table*} Dark matter \citep[DM;][]{zwi33} is the major component of the matter content of the Universe \citep[$\Omega_{\rm DM} \sim 0.24$;][]{pla16}. Its existence is so far only inferred through indirect evidence of gravitational interaction with baryonic matter, such as the dynamical stability of galaxy clusters, the flattening of spiral-galaxy rotation curves at large distances from the central bulges \citep{rub80}, the different kinematic behavior of gas reservoirs and gravitational wells in events of cluster collisions \citep{clo04}, the formation of large-scale structures and the observed distribution of fluctuations in the Cosmic Microwave Background (CMB). In past years, efforts aimed at observing the DM component in the form of baryonic matter concentrated in astrophysical objects with no or negligible luminosity at all wavelengths (the so-called Massive Compact Halo Objects, MaCHOs) have proven unfruitful \citep[e.g.,][]{tis07}. Therefore, the current frontier of the DM searches is represented by the identification of candidate elementary particles outside the Standard Model (SM).
In particular, since particle DM is compatible with a collisionless fluid of cold Weakly Interacting Massive Particles (WIMPs), it may be possible to detect $\gamma$-ray signals emitted from DM annihilation or decay into SM pairs \citep{ber97}: \begin{equation}\label{eqn:dmfluxann} \frac{d\Phi_{\rm ann}}{dE_\gamma} = B^{\rm (ann)}_{\rm F} \frac{\langle \sigma_{\rm ann} v\rangle}{8 \pi m_\chi^2} \sum_i {\rm BR}_i \frac{dN_\gamma^{(i)}}{dE_\gamma} \times J\left( \Delta\Omega \right) \end{equation} \begin{equation}\label{eqn:dmfluxdec} \frac{d\Phi_{\rm dec}}{dE_\gamma} = \frac{B^{\rm (dec)}_{\rm F}}{4 \pi m_\chi} \sum_i\Gamma_i\frac{dN_\gamma^{(i)}}{dE_\gamma} \times D\left( \Delta\Omega \right) \end{equation} where $m_\chi$ is the DM particle mass, $\langle \sigma_{\rm ann} v\rangle$ its velocity-averaged cross section in annihilation processes, $dN_\gamma^{(i)}/dE_\gamma$ the number of final-state VHE photons per unit energy produced in each SM interaction channel with branching ratio BR$_i$ and/or lifetime $\tau_i = 1/\Gamma_i$, and $B_{\rm F}$ a generalized (de)boost factor that summarizes all the processes that may enhance or quench the $\gamma$-ray emission -- e.g., for monochromatic lines we have $B_{\rm F} = \alpha^2$ because of loop-induced suppression, with $\alpha$ the fine-structure constant. Measurements of the CMB power spectrum imply $\langle \sigma_{\rm ann} v\rangle \lesssim 3 \times 10^{-26}$ cm$^3$ s$^{-1}$ for $100$ GeV $\lesssim m_\chi \lesssim$ 100 TeV, of the order of magnitude typical of SM electroweak interactions. The sensitivity to such cross-section values is within reach of the next-generation $\gamma$-ray Cherenkov telescopes \citep[e.g.,][]{pie14}, making DM-dominated astrophysical sources compelling targets for observations with these instruments.
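To make the scales in Eq. (\ref{eqn:dmfluxann}) concrete, the following sketch evaluates the annihilation prefactor $\langle \sigma_{\rm ann} v\rangle/(8\pi m_\chi^2) \times J$; all input values are illustrative assumptions (a thermal-relic cross section, a 1 TeV WIMP, a $J$-factor typical of a bright dSph), not values adopted in this paper.

```python
import math

# Illustrative (assumed) inputs, in GeV/cm units:
sigma_v = 3e-26   # cm^3 s^-1, velocity-averaged annihilation cross section
m_chi = 1e3       # GeV, DM particle mass (1 TeV)
J = 1e19          # GeV^2 cm^-5, astrophysical factor J(Delta Omega)

# Prefactor of the annihilation flux, to be multiplied by
# B_F * sum_i BR_i dN_gamma^(i)/dE_gamma (per GeV per annihilation):
# [cm^3 s^-1] / [GeV^2] * [GeV^2 cm^-5] = cm^-2 s^-1
prefactor = sigma_v / (8.0 * math.pi * m_chi**2) * J
print(f"flux prefactor ~ {prefactor:.2e} cm^-2 s^-1")
```

The resulting value, of order $10^{-14}$ cm$^{-2}$ s$^{-1}$ for these inputs, illustrates why expected DM signals from dSphs are faint and require deep exposures.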
The suitability of a source is quantified by the so-called astrophysical factors $J$ (for DM annihilation) and $D$ \citep[for DM decay;][]{eva04}, i.e. integrals of functions of the DM density profile $\rho_{\rm DM}$ along the line of sight to the target and over the projected angular size $\Delta\Omega$ of the DM halo: \begin{eqnarray}\label{eqn:jdfact} J\left( \Delta\Omega \right) = \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho^2_{\rm DM}\left( \ell, \Omega \right) d\ell \\ D\left( \Delta\Omega \right) = \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho_{\rm DM}\left( \ell, \Omega \right) d\ell \end{eqnarray} Due to the unobservability of DM with direct astronomical techniques, its spatial distribution around galaxies must be inferred from the study of the kinematic properties of the baryonic matter moving in the DM potential wells \citep[see e.g.][]{bon15a,bon15c}. A discussion of the prospects of DM searches in the Milky Way center and halo is given in {\bfseries Paper III}. Concerning the extragalactic science, the most DM-dominated sources are: \begin{itemize} \item the dwarf spheroidal galaxies \citep[dSphs; e.g.,][]{str08,mcc12}, whose relative proximity ($d_\odot \lesssim 250$ kpc) and lack of background emission\footnote{See instead \citet{cro22} for the description of a case of a non-background-free dSph.} \citep[e.g.,][]{mat98} make them among the best astrophysical targets for indirect searches of $\gamma$-ray signals from DM annihilation or decay; \item the nearest clusters of galaxies, which represent the largest gravitationally bound structures in the Universe ($M \sim 10^{15}$ M$_\odot$), composed of DM for up to $\sim$80\% of their mass \cite[e.g.,][]{jel09,pin09}. \end{itemize} In Tab. \ref{tab:dmnorth}, we report the basic properties of the dSphs within a distance $d_\odot$ of 100 kpc and the clusters of galaxies that are visible from the {\itshape Observatorio del Teide} site under a maximum ZA of 45$^\circ$.
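The structure of the $J(\Delta\Omega)$ integral in Eq. (\ref{eqn:jdfact}) can be sketched numerically; the snippet below integrates $\rho^2_{\rm DM}$ along lines of sight through an NFW halo with simple trapezoidal rules. The halo parameters (density normalization, scale radius, distance, aperture) are illustrative placeholders, loosely dSph-like, and are not taken from this paper.

```python
import math

def rho_nfw(r, rho_s, r_s):
    """NFW density profile rho_s / [(r/r_s)(1 + r/r_s)^2] (GeV cm^-3)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor(d, theta_max, rho_s, r_s, r_max, n_s=500, n_theta=100):
    """J(DO) = int dOmega int rho^2 dl via nested trapezoidal rules.
    d, r_s, r_max in cm; theta_max in rad; returns GeV^2 cm^-5."""
    j, dtheta = 0.0, theta_max / n_theta
    for i in range(n_theta + 1):
        theta = i * dtheta
        # line-of-sight integral of rho^2 at angle theta from the halo centre
        ds, los = (d + r_max) / n_s, 0.0
        for k in range(n_s + 1):
            s = k * ds
            r = math.sqrt(d * d + s * s - 2.0 * d * s * math.cos(theta))
            r = max(r, 1e-3 * r_s)  # soften the central cusp numerically
            w = 0.5 if k in (0, n_s) else 1.0
            los += w * rho_nfw(r, rho_s, r_s) ** 2 * ds
        w = 0.5 if i in (0, n_theta) else 1.0
        j += w * 2.0 * math.pi * math.sin(theta) * los * dtheta
    return j

kpc = 3.086e21  # cm
# Illustrative (assumed) dSph-like halo: rho_s = 1 GeV cm^-3, r_s = 1 kpc,
# at d = 80 kpc, integrated over a 0.5 deg aperture.
J = j_factor(d=80 * kpc, theta_max=math.radians(0.5),
             rho_s=1.0, r_s=1.0 * kpc, r_max=2.0 * kpc)
print(f"J ~ {J:.1e} GeV^2 cm^-5")
```

Doubling the distance in this sketch reduces $J$ roughly as $d_\odot^{-2}$, consistent with the scaling invoked below for the 100 kpc selection threshold; a production-quality estimate would use a finer grid or adaptive quadrature.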
The threshold distance of 100 kpc for dSphs has been chosen since the astrophysical factors $J$ and $D$ of these halos scale as $d_\odot^{-2}$ and $d_\odot^{-1}$, respectively \citep[e.g.,][]{pac19}, making their expected DM $\gamma$-ray signal very faint at larger distances. In the Northern sky, the dSphs with the highest values of $J$ and $D$ are the ``classical'' Ursa Minor \citep[UMi;][]{wil55}, the ``ultra-faint'' Coma Berenices \citep[CBe;][]{tru35} and Ursa Major II \citep[UMa II;][]{ser60}. In Sect. \ref{sec:dmctools}, we analyze the prospects of such targets for DM detection with the ASTRI Mini-Array. \section{Serendipitous observations of ancillary sources and optimized strategies for dedicated pointings of extragalactic targets}\label{sec:simobs} The scientific prospects at VHE with the ASTRI Mini-Array presented in this paper deal with the observation of $\gamma$-ray emission from several classes of extragalactic sources over time scales spanning from $\sim$10 to $\gtrsim$100~h. Such time scales are typical for $\gamma$-ray observations in the multi-TeV regime and allow us to successfully detect peculiar spectral features (emission lines, bumps, hard cut-offs) or to strengthen the constraints on the expected emission parameters. While these goals are often within reach only when the studied extragalactic sources are in high-activity states, or through dedicated long-term observing campaigns, the large FoV of the ASTRI Mini-Array can be fully exploited to perform simultaneous observations of sources located within an angular distance of up to $\sim$5$^\circ$ from a given primary target. In fact, the reduction in sensitivity by a factor of $\lesssim$2 for off-axis observations at $\sim$5$^\circ$ with respect to on-axis exposures (see {\bfseries Paper II}) is still acceptable for scientific purposes.
It is therefore clear that a considerable fraction of the observation time of extragalactic sources may be obtained almost ``for free'' when such ancillary targets are contained in the same fiducial FoV as a given primary target. This, in turn, can be seen as an effective increase of the total duty cycle of the system, since no large amount of dedicated time for ancillary targets would be needed. In the following, we thus explore in detail some of the opportunities offered by the large FoV of the ASTRI Mini-Array, both for the observations of the core-science targets described in {\bfseries Paper II}, which will be performed during the first 2-3 years of the project, and for dedicated pointings to be proposed in the observatory phase of the instrument lifetime. First, we consider the possibility of optimizing the observation strategy in order to include more than one target in a single observation of a core-science object. That will be feasible only in those specific cases where two or more interesting targets are clustered in relatively small regions of the sky. The second possibility that we want to explore is the use of pointed observations to serendipitously detect relatively faint sources, not included in the lists of selected candidates presented in this paper, that are undergoing particularly strong flares. To this end, we will present a large list of additional sources (mainly blazars and starburst/Seyfert galaxies) that fall within the FoV of the core-science targets described in {\bfseries Paper II}. Most of these sources are probably too faint to be detected under normal conditions, but the presence of an ASTRI Mini-Array pointing will allow a ``free'' monitoring that could provide a detection in the case of strong flares or unexpected conditions. Even in the case of a non-detection, the observations will nonetheless provide a potentially useful upper limit on the VHE emission from this class of objects.
Finally, we will evaluate whether a similar pointing strategy might be adopted during the observatory phase of the ASTRI Mini-Array, to simultaneously observe sky-projected groupings of some of the extragalactic candidates proposed in this paper. In this way, we will be able to further optimize the dedicated exposure time with respect to the amount needed to observe such targets individually, without losing any sources of interest. Most of the ASTRI observations will be performed in the so-called {\it wobble} observation mode \citep{fom94}. In this observational method, the primary target is typically displaced by $\sim$0.5$^\circ$ from the FoV center. However, thanks to the large FoV of the ASTRI Mini-Array and the rather flat performance up to a few degrees off-axis, a wobble angle of $\sim$1$^\circ$ may be safely adopted for regular observations. Under this assumption, the cross-search radius between the ASTRI core-science targets and the ancillary extragalactic targets can hence reach a realistic value of 4$^\circ$: this ensures that each candidate target lies within $\sim$5$^\circ$ of the FoV center, regardless of the actual position of the main target observed in wobble mode. Such an observational scheme can be immediately applied during the first phase of the ASTRI Mini-Array operations to the regions around the core-science targets. Fig. \ref{fig:aitoff_alltargets} shows the sky distribution, in Galactic coordinates and Hammer-Aitoff projection, of all the extragalactic ASTRI Mini-Array targets described in this paper (blazars of the HSP/HBL and EHSP sub-class, the sample of dSphs and galaxy clusters listed in Tab.~\ref{tab:dmnorth}, and the Seyfert galaxy NGC 1068) along with the sky positions of all the core-science targets described in {\bfseries Paper II}. To mark possible ancillary extragalactic targets within the same FoV, 4$^\circ$-radius circles have been drawn around each of the core-science targets (solid green circles).
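The 4$^\circ$ cross-search radius plus the $\sim$1$^\circ$ wobble offset budget described above can be checked with a standard great-circle separation computation. The sketch below uses the Segue 2 coordinates from Tab. \ref{tab:dmnorth}; the 1ES 0229$+$200 coordinates are assumed from public catalogues, and the recovered separation matches the $\sim$3.2$^\circ$ quoted in the table notes.

```python
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation (deg) via the Vincenty formula, which is
    numerically stable at small angles; inputs in degrees (J2000)."""
    l1, b1, l2, b2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dl = l2 - l1
    num = math.hypot(math.cos(b2) * math.sin(dl),
                     math.cos(b1) * math.sin(b2)
                     - math.sin(b1) * math.cos(b2) * math.cos(dl))
    den = math.sin(b1) * math.sin(b2) + math.cos(b1) * math.cos(b2) * math.cos(dl)
    return math.degrees(math.atan2(num, den))

# Segue 2 (Tab. 1) vs. 1ES 0229+200 (coordinates assumed from catalogues)
sep = angular_separation(34.82, 20.18, 38.20, 20.29)
wobble = 1.0  # deg, wobble offset adopted in the text
print(f"separation = {sep:.1f} deg; worst-case off-axis = {sep + wobble:.1f} deg")
```

Since the worst-case off-axis angle (separation plus wobble offset) stays below $\sim$5$^\circ$, the pair can share a single wobble pointing with only a modest sensitivity penalty.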
A visual inspection already allows us to identify three pairs of a main core-science target and a dSph/cluster within the same FoV: Segue 2/1ES 0229$+$200, Perseus Cluster/IC 310, and Virgo Cluster/M 87 (see Fig. \ref{fig:aitoff_alltargets}). A more detailed analytical cross-correlation study between the core-science targets and the dSph/cluster catalogue reveals that the three targets reported above lie within 3.2, 0.6 and 0.1 deg, respectively, of the core-science targets. Given the small angular separation between these three pairs of objects, a single wobble observation of one of these core-science targets will already allow us to gather useful exposure on each of the three DM candidates proposed here. As far as the possible simultaneous observability of HSP/HBL blazars along with the main ASTRI Mini-Array targets is concerned, no relevant objects are found within the same fiducial FoV. Thus, dedicated pointings of some of the most interesting sources among the ones listed in this paper (see Tab.~\ref{tab:hsp-ma} and Tab.~\ref{tab:ehsp-ma}) shall necessarily be performed. Nevertheless, we notice that during the observation of one of the proposed HBL targets (BZB J1217+3007, also known as TON 605 or 1ES 1215$+$303), the large ASTRI FoV will allow us to simultaneously observe and monitor two other nearby known TeV blazars, W Comae and 1ES 1218+304 (the latter classified as an EHSP), which are, respectively, about 2$^\circ$ and 0.8$^\circ$ from the suggested target (see Fig. \ref{fig:aitoff_alltargets} for the sky location of this triplet). \onecolumn \begin{scriptsize} \begin{center} \captionsetup{width=17cm} \begin{longtable}{m{0.25\textwidth}ccccccc} \caption{\small List of potentially interesting candidates to be observed simultaneously by the ASTRI Mini-Array during the observations of the main core-science targets described in {\bfseries Paper II}.
The selection of (ancillary) candidate targets has been performed using the list of blazars available from the {\it Open Universe -- Blazars} reference list \citep{gio19}, which is based on the BZCAT v5.0 \citep{mas15}, the 3HSP \citep{cha19}, and the {\itshape Fermi}-LAT 4LAC \citep{fer20} catalogues, the targets for DM searches reported in Tab. \ref{tab:dmnorth}, and a selection of starburst galaxies that are visible from the Northern hemisphere from \citet{ack12}, \citet{Lunardini19} and \citet{aje20}. For each core-science target (named in the first column) the table reports: the name of the blazar/DM-dominated object/starburst galaxy found within 4$^\circ$ from the main target (see text for discussion about the interplay between the adopted value of angular separation and the wobble angle); its celestial coordinates (J2000); the angular separation in degrees from the main target; source redshift (when available); optical classification for each object; and, for blazars, the SED classification extracted from the 4LAC catalogue.}\\ \label{tab:anc-targ}\\ \hline \hline \multicolumn{8}{c}{ }\\ Core-Science Target & Blazar/DM/Starburst Gal. Name & R.A. J2000 & dec. J2000 & Ang. Sep. & $z$ & Optical Class & SED Class \\ & (within 4$^\circ$ of the main target) & (deg) & (deg) & (deg) & & & \\ \multicolumn{8}{c}{ }\\ \hline \endfirsthead \multicolumn{8}{c}{ }\\ {\scriptsize \tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \hline \multicolumn{8}{c}{ }\\ Core-Science Target & Blazar/DM/Starburst Gal. Name & R.A. J2000 & dec. J2000 & Ang. Sep. 
& $z$ & Optical Class & SED Class \\ & (within 4$^\circ$ of the main target) & (deg) & (deg) & (deg) & & & \\ \multicolumn{8}{c}{ }\\ \hline \endhead \hline \multicolumn{4}{r}{\scriptsize \textit{Continued on next page}} \\ \endfoot \hline \endlastfoot \multicolumn{8}{c}{ }\\ \multirow{2}{3cm}{Tycho}&3FGL J0014.6+6119 & 3.70 & $+$61.30 & 3.1 & -- & BCU & LSP \\ & 3HSP J005758.3+632639.3 & 14.49 & $+$63.44 & 3.7 & 0.180 & BLL & HSP \\ \multicolumn{8}{c}{ }\\ \hline \multicolumn{8}{c}{ }\\ \multirow{1}{3cm}{eHWC 1907+063} & 3HSP J191803.6+033031.1 & 289.52 & $+$3.51 & 3.8 & 0.230 & BLL & HSP \\ \multicolumn{8}{c}{ }\\ \hline \multicolumn{8}{c}{ }\\ \multirow{3}{3cm}{$\gamma$ Cygni}& 3FGL J2000.1+4212 & 300.00 & $+$42.23 & 4.0 & -- & BCU & LSP \\ & 5BZU J2015+3710 & 303.87 & $+$37.18 & 3.7 & -- & BZU/FSRQ & LSP \\ & 3FGL J2018.5+3851 & 304.63 & $+$38.86 & 1.9 & -- & BCU & ISP \\ \multicolumn{8}{c}{ }\\ \hline \multicolumn{8}{c}{ }\\ \multirow{3}{3cm}{Crab} & 5BZB J0521+2112 & 80.44 & $+$21.21 & 3.1 & 0.108 & BLL & HSP \\ & 3FGL J0528.3+1815 & 82.12 & $+$18.28 & 4.0 & -- & BCU & -- \\ & 5BZB J0540+2507 & 85.06 & $+$25.13 & 3.4 & 0.623 & BLL & -- \\ \multicolumn{8}{c}{ }\\ \hline \multicolumn{8}{c}{ }\\ \multirow{3}{3cm}{Geminga}& 5BZB J0621+1747 & 95.45 & $+$17.79 & 2.9 & -- & BLL & -- \\ & 3FGL J0631.2+2019 & 97.75 & $+$20.35 & 2.7 & -- & BCU & -- \\ & 3HSP J064813.9+160656.5 & 102.06 & $+$16.12 & 3.8 & 0.350 & BLL & HSP \\ \multicolumn{8}{c}{ }\\ \hline \multicolumn{8}{c}{ }\\ \multirow{9}{3cm}{M 82} & 3HSP J091429.7+684508.7 & 138.62 & $+$68.75 & 3.8 & 0.450 & BLL & HSP \\ & 5BZQ J0921+7136 & 140.35 & $+$71.60 & 3.4 & 0.594 & FSRQ & -- \\ & 3HSP J092113.0+684902.2 & 140.30 & $+$68.82 & 3.2 & -- & BLL & -- \\ & 3FGL J0928.7+7300 & 142.25 & $+$72.95 & 3.9 & -- & BCU & -- \\ & 4FGL J0931.9+6737 & 142.99 & $+$67.62 & 3.0 & 0.023 & RDG & -- \\ & 3HSP J095849.8+703959.4 & 149.71 & $+$70.67 & 1.0 & 0.310 & BLL & HSP \\ & 3HSP J100313.9+705912.6 & 150.81 & $+$70.99 & 1.4 & 
-- & BLL & HSP \\ & 5BZQ J1003+6813 & 150.78 & $+$68.22 & 1.6 & 0.770 & FSRQ & -- \\ & 3HSP J102704.3+671619.0 & 156.77 & $+$67.27 & 3.7 & 0.270 & BLL & HSP \\ \multicolumn{8}{c}{ }\\ \hline \multicolumn{8}{c}{ }\\ \multirow{10}{*}{IC 310} & 3HSP J030544.1+403510.5 & 46.43 & $+$40.59 & 2.2 & 0.240 & BLL & HSP \\ & 5BZB J0310+4056 & 47.53 & $+$40.95 & 1.3 & 0.137 & BLL & -- \\ & 5BZQ J0310+3814 & 47.71 & $+$38.25 & 3.3 & 0.816 & FSRQ & LSP \\ & 5BZU J0313+4120 & 48.26 & $+$41.33 & 0.7 & 0.136 & BZU/RDG & LSP \\ & 5BZG J0313+4115 & 48.49 & $+$41.26 & 0.5 & 0.029 & BLL & -- \\ & 4FGL J0315.5+4231 & 48.86 & $+$42.55 & 1.2 & -- & BCU & -- \\ & 5BZU J0319+4130 & 49.95 & $+$41.51 & 0.6 & 0.018 & BZU/RDG & LSP \\ & 4FGL J0333.8+4007 & 53.45 & $+$40.11 & 3.5 & -- & BCU & -- \\ & 4FGL J0334.3+3920 & 53.58 & $+$39.36 & 3.9 & 0.021 & RDG & ISP \\ & Perseus & 49.95 & $+$41.51 & 0.6 & -- & Gal. Cluster & -- \\ \multicolumn{8}{c}{ }\\ \hline \multicolumn{8}{c}{ }\\ \multirow{8}{*}{M 87} & 3HSP J122307.2+110038.1 & 185.78 & $+$11.01 & 2.3 & 0.500 & BLL & HSP \\ & 3HSP J122340.1+124203.6 & 185.92 & $+$12.70 & 1.8 & 0.340 & BLL & HSP \\ & 4FGL J1223.3+1213 & 185.95 & $+$12.04 & 1.8 & -- & BLL & LSP \\ & 3HSP J122820.5+155655.1 & 187.09 & $+$15.95 & 3.6 & 0.232 & BLL & HSP \\ & 5BZB J1231+1421 & 187.85 & $+$14.36 & 2.0 & 0.256 & BLL & ISP \\ & 3HSP J123353.4+145925.7 & 188.47 & $+$14.99 & 2.7 & 0.520 & BLL & HSP \\ & Virgo & 187.70 & $+$12.34 & 0.1 & 0.004 & Gal. Cluster & -- \\ & NGC 4254 & 184.71 & $+$14.43 & 3.6 & 0.008 & Starburst Gal. 
& -- \\ \multicolumn{8}{c}{ }\\ \hline \multicolumn{8}{c}{ }\\ \multirow{9}{*}{Mkn 501} & 4FGL J1639.2+4129 & 249.82 & $+$41.48 & 3.3 & 0.691 & FSRQ & LSP \\ & 5BZQ J1642+3948 & 250.75 & $+$39.81 & 2.1 & 0.593 & FSRQ & LSP \\ & 5BZQ J1646+4059 & 251.74 & $+$40.99 & 1.8 & 0.835 & FSRQ & -- \\ & 3HSP J164702.6+385001.6 & 251.76 & $+$38.83 & 1.6 & 0.135 & BLL & -- \\ & 5BZQ J1648+4104 & 252.12 & $+$41.07 & 1.7 & 0.852 & FSRQ & -- \\ & 4FGL J1648.2+4232 & 252.13 & $+$42.56 & 3.0 & -- & BCU & -- \\ & 5BZQ J1650+4140 & 252.52 & $+$41.68 & 2.0 & 0.585 & FSRQ & \\ & 5BZB J1651+4212 & 252.79 & $+$42.22 & 2.5 & 0.269 & BLL & -- \\ & 5BZB J1652+3632 & 253.20 & $+$36.54 & 3.2 & 0.648 & BLL & -- \\ \multirow{6}{*}{Mkn 501} & 5BZB J1652+4023 & 253.21 & $+$40.39 & 0.7 & 0.240 & BLL & HSP \\ & 5BZB J1655+3723 & 253.97 & $+$37.39 & 2.4 & -- & BLL & -- \\ & 5BZQ J1659+3735 & 254.88 & $+$37.59 & 2.4 & 0.771 & FSRQ & -- \\ & 5BZB J1701+3954a & 255.29 & $+$39.91 & 1.4 & -- & BLL & -- \\ & 5BZB J1701+3954b & 255.35 & $+$39.91 & 1.5 & 0.507 & BLL & -- \\ & 3HSP J170132.2+381103.9 & 255.38 & $+$38.18 & 2.2 & 0.600 & BLL & HSP \\ \multicolumn{8}{c}{ }\\ \hline \multicolumn{8}{c}{ }\\ \multirow{15}{*}{Mkn 421} & 5BZB J1051+3943 & 162.86 & $+$39.72 & 3.0 & 0.498 & BLL & ISP \\ & 5BZG J1100+4210 & 165.09 & $+$42.18 & 4.0 & 0.323 & BLL & -- \\ & 5BZB J1100+4019 & 165.09 & $+$40.32 & 2.3 & 0.225 & BLL & -- \\ & 3FGL J1101.5+4106 & 165.35 & $+$41.06 & 2.9 & -- & BCU & -- \\ & 5BZB J1101+4108 & 165.35 & $+$41.15 & 3.0 & 0.580 & BLL & LSP \\ & 4FGL J1101.5+3904 & 165.38 & $+$39.08 & 1.0 & -- & BCU & LSP \\ & 5BZB J1102+3801 & 165.60 & $+$38.02 & 0.4 & 0.392 & BLL & \\ & 5BZG J1105+3946 & 166.47 & $+$39.78 & 1.6 & 0.099 & BLL & LSP \\ & 3HSP J110600.3+375445.6 & 166.50 & $+$37.91 & 0.4 & 0.640 & BLL & HSP \\ & 5BZB J1109+3736 & 167.41 & $+$37.60 & 1.2 & 0.398 & BLL & ISP \\ & 5BZB J1110+3539 & 167.74 & $+$35.65 & 2.9 & -- & BLL & ISP \\ & 5BZB J1111+3452 & 167.88 & $+$34.87 & 3.6 & 0.212 & BLL & -- 
\\ & 3HSP J111603.4+371036.1 & 169.01 & $+$37.18 & 2.5 & 0.269 & BLL & HSP \\ & 3HSP J111644.6+402635.8 & 169.19 & $+$40.44 & 3.3 & 0.202 & BLL & HSP \\ & Arp 148 & 165.98 & $+$40.85 & 2.6 & 0.035 & Starburst Gal. & -- \\ \multicolumn{8}{c}{ }\\ \hline \multicolumn{8}{c}{ }\\ \multirow{8}{3cm}{1ES 0229+200} & 4FGL J0226.7+2312 & 36.63 & $+$23.19 & 3.2 & -- & BCU & ISP \\ & 4FGL J0227.8+2246 & 36.94 & $+$22.81 & 2.8 & 0.428 & BCU & LSP \\ & 3HSP J023005.9+194921.0 & 37.53 & $+$19.82 & 0.8 & 0.530 & BLL & HSP \\ & 4FGL J0237.3+2000 & 39.33 & $+$20.01 & 1.1 & -- & BLL & -- \\ & 5BZB J0238+1636 & 39.66 & $+$16.62 & 3.9 & 0.940 & BLL & LSP \\ & 5BZU J0242+1742 & 40.60 & $+$17.72 & 3.4 & 0.551 & BZU & -- \\ & 3HSP J024507.8+184308.1 & 41.28 & $+$18.72 & 3.3 & 0.430 & BLL & HSP \\ & Segue 2 & 34.82 & $+$20.18 & 3.2 & * & dSph (uft) & -- \\ \multicolumn{8}{c}{ }\\ \end{longtable} \begin{flushleft} {\normalsize $^*$Distance of $35 \pm 2$ kpc (see Tab. \ref{tab:dmnorth}).} \end{flushleft} \end{center} \end{scriptsize} \twocolumn In addition to the cross-search between the core-science and the extragalactic targets presented above, we also perform a search for other potentially interesting extragalactic sources (in particular known blazars, blazar candidates and starburst galaxies) within the large ASTRI Mini-Array FoV around the core-science targets. Although such additional sources do not satisfy the criteria to be considered as main candidate targets for the observatory phase of the instrument, their observation may nevertheless provide interesting insights, such as flux upper limits (ULs) on different source classes, useful to constrain their predicted $\gamma$-ray emission. 
To this end, we cross-match the list of core-science targets with the {\it Open Universe} master list of known and candidate blazars\footnote{The v2.0 of the {\it Open Universe} blazars list is available at the following web address: {\ttfamily http://openuniverse.asi.it/OU4Blazars/MasterListV2/}.} \citep{gio19,Chang_2020}, which has been assembled by combining the 5BZCAT \citep{mas15}, the 3HSP \citep{cha19}, and the {\itshape Fermi}-LAT 4LAC catalogs \citep{fer20}, as well as with the sample of starburst galaxies presented in \citet{ack12}, \citet{Lunardini19}, and \citet[][see Sect. \ref{1068_sec} for more details]{aje20}. Tab. \ref{tab:anc-targ} reports, for each core-science target, the list of blazars and/or starburst galaxies found within 4$^\circ$ of it, along with the already identified cross-matches with the DM-dominated astrophysical targets. All blazars with $z \gtrsim 1$ have been removed from the selection. The table also reports, when available, the optical and SED classification of each object in the last two columns. Among the known BL Lac blazars found in the cross-match search (indicated as BLL in the ``Optical Class'' column), around 30\% of the objects listed in Tab. \ref{tab:anc-targ} belong to the HSP BLL sub-class. These sources, due to their high redshifts or their expected weakness as hard TeV emitters, did not pass the stringent criteria used to select the sample of HSPs and EHSPs shown in Tables \ref{tab:hsp-ma} and \ref{tab:ehsp-ma} and, thus, have to be considered only as ancillary targets of the main ASTRI Mini-Array core-science observations. Besides the considerable number of blazars within 4$^\circ$ of each of the main targets, the cross-match search with the starburst galaxy samples also returns two more objects of this source class which might be observed during the initial ASTRI Mini-Array experiment phase: Arp 148, 2$^\circ$.6 away from Mkn 421, and NGC 4254, 3$^\circ$.6 away from M 87. 
As we have seen above, the majority of the extragalactic targets presented in the paper will necessarily be observed by means of dedicated pointings, presumably during the second phase of the experiment. Nevertheless, as we stated at the beginning of the section, we can again take advantage of the large ASTRI FoV to perform a joint observation of close targets. The red dotted circles in Fig. \ref{fig:aitoff_alltargets} show some of the possible joint pointings which could allow us to optimize the observation of two or more targets at the same time, like, for example: the (projected) triplet of blazars composed of Mkn 180, 3HSP J113630.0$+$673704 and 3HSP J122514.2$+$721447 Nor\-thwards of M 82; the cluster pairing NGC 5813/NGC 5846 (separated by an angular distance of $\sim$1$^\circ$.3) at $\sim$50$^\circ$ of Galactic latitude; the two dSphs Bo{\"o}tes I and II (with an angular separation of $\sim$1$^\circ$.7), at a Galactic latitude of $\sim$70$^\circ$. It is clear that the feasibility of the presented plan of simultaneous observations critically depends on a number of collateral issues that must be preliminarily addressed. In particular, the scheduling plan of the ASTRI Mini-Array experiment phase should account for the possibility of having multiple sour\-ces of interest in the same FoV when allocating observing time for the core-science targets. In this respect, a global optimization of the pointing strategy around a given core-science target may be adopted, e.g. defining a pointing region that maximizes the number of possible targets in the FoV, while keeping the sensitivity on the primary target very close to the on-axis one. \section{Results of the simulated observations of TeV-emitting AGN}\label{sec:results} We present here a comprehensive view of the scientific prosp\-ects that can be achieved with long-term ASTRI Mini-Array observations of the VHE extragalactic sky. 
Such pros\-pects are related to challenging science cases for which the ASTRI Mini-Array can obtain breakthrough data at $E_\gamma \gtrsim 1$ TeV with exposures extending beyond the experiment phase of the instrument. For the case of $\gamma$-ray emitting AGN, we identify the science cases reported below: \begin{itemize} \item the bright and nearby ($z \sim 0.03$) BL Lac objects Mkn 421 and Mkn 501; \item two catalogues of HSPs and EHSPs that can represent potential scientific cases for the ASTRI Mini-Array; \item two representative science cases from such catalogues, namely RGB J1117+202 and 1ES 0229+200; \item the $\gamma$-ray emitting Seyfert 2 galaxy NGC 1068. \end{itemize} The catalogue and simulation studies of AGN observable at VHE with the ASTRI Mini-Array highlight that the instrument is able to detect $\gamma$-ray signals with expected exposure times ranging from $\sim$10 h (blazars) to $\sim$200 h (Seyferts and starburst galaxies), depending on the object class and activity state. In particular, telescope pointings at nearby blazars in high state or extreme $\gamma$-ray emitters may allow us to better characterize peculiar spectral features (e.g., $\gamma$-ray lines, spectral breaks and cut-offs) and systematically study populations of rare and unusual objects. To this end, the catalogues of blazars potentially within reach of the ASTRI Mini-Array presented in this paper offer a powerful tool to immediately identify the best candidates to be targeted for dedicated observations. In the following, we describe for each target the expected scientific results from ASTRI Mini-Array observations. \subsection{Mkn 421 and Mkn 501} The blazar subgroup of BL Lac objects dominates the TeV sky as observed by the current generation of Cherenkov arrays. 
At these energies we can optimally probe the emission from the most energetic electrons emitting through IC scattering \citep[e.g.,][]{1998ApJ...509..608T} and, potentially, from the by-products of hadronic reactions \citep[possibly involved in the emission of high-energy neutrinos; e.g.,][]{2016APh....80..115P}. Although the spectral characterization of the VHE emission of blazars beyond tens of TeV is already within reach of IACTs during high-activity source states and flares -- see e.g. the cases of Mkn 501 detected by HEGRA \citep{aha99} and the Mkn 421 flare detected by MAGIC in 2013 \citep{mag20b} -- the study of the most energetic part of their $\gamma$-ray spectrum during quiescent states is currently hampered by the limited sensitivity of the present generation of arrays above 10 TeV. The ASTRI Mini-Array will allow us to study in detail the emission from the most energetic particles, constraining the maximum energy attained by the acceleration process(es) and investigating the time-depend\-ent evolution. Complemented with multi-wavelength data, the spectrum record\-ed by the instrument can be used to derive tight constraints on the physical parameters of the emission region. In Tab. \ref{tab:mkn} we present the basic properties of the two closest ($z \sim 0.03$) BL Lac objects Mkn 421 and Mkn 501 for observations with the ASTRI Mini-Array. \subsubsection{Spectral characterization of low and high flux states} The goal of the proposed observations is to determine the spectrum of Mkn 421 and Mkn 501 from a few TeV up to 30 TeV, above which the EBL absorption completely suppresses the observed emission. 
We propose ASTRI Mini-Array observations of Mkn 421 and Mkn 501, the closest BL Lacs, to probe: (i) the spectral slope, the maximum energy and the dynamics (through variability) of the most energetic particles; (ii) the optical depth, a key parameter for the potential multimessenger role \citep[e.g.,][]{2019MNRAS.488.4023T}; (iii) the existence of other spectral components, related to photo-meson and/or synchrotron losses of high-energy protons \citep[e.g.,][]{2017A&A...602A..25Z}. The observations above 10 TeV, where the absorption by EBL is rather important, can also be used to test several proposals of fundamental physics (see {\bfseries Paper II}). \begin{table*}[width=17cm,align=\centering] \centering \caption{Basic properties of the BL Lac objects Mkn 421 and Mkn 501 for observations from the {\itshape Observatorio del Teide} site.} \label{tab:mkn} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccc} \hline \hline \multicolumn{7}{c}{ }\\ Target & Class & R.A. J2000 & dec. J2000 & Min. ZA & $z$ & Notes \\ (IAU Name) & & (deg) & (deg) & (deg) & & \\ \multicolumn{7}{c}{ }\\ \hline \multicolumn{6}{c}{ }\\ Mkn 421 & Blazar & 166.11 & $+$38.21 & 9.91 & 0.030 & Better suited for ToO observations of high states\\ Mkn 501 & Blazar & 253.47 & $+$39.76 & 11.46 & 0.034 & Better suited for ToO observations of high states\\ \multicolumn{7}{c}{ }\\ \hline \end{tabular} } \end{table*} In Fig. \ref{FIG:Chap7_para1_Fig01} and \ref{FIG:Chap7_para1_Fig02} we report the VHE section of the SED of the proposed targets, Mkn 421 and Mkn 501. In both cases we report representative spectra for low, high and flare states; such spectra are simulated with observing times of 200 h, 20 h and 5 h. 
It is worth noting that, although not representative of a single observing run of the source, the 200-h simulation performed here must be interpreted as resulting from the combination of multiple data sets taken over years of ASTRI Mini-Array observations, during periods in which Mkn 421 and Mkn 501 remain at quiescent flux levels. This approach is commonly adopted by current IACTs in the framework of the multi-wavelength (MWL) and multi-messenger study of the blazar emission \citep[see e.g.][]{hec22}. In particular, the better sensitivity above 10 TeV will allow the ASTRI Mini-Array to probe the potential emergence of additional spectral components (e.g., hadronic), possibly less variable than the leptonic IC component. In all cases the spectrum should be observable up to $\sim$30 TeV for both sources with moderate ($\gtrsim$20 h) exposures during intermediate states, in a similar way to the results achieved by HEG\-RA \citep{aha99}. On the other hand, for low states the detection of the steep spectrum above 10 TeV requires exposures of at least 100 h. For both sources, the best opportunities are offered by observations during (relatively frequent, especially for Mkn 421) high states. Historical records show that these states can last for several days, allowing us to easily accumulate $\gtrsim$5 hours of data during a single event. Low states can be potentially relevant in view of the possible presence of slowly-varying hadronic components. \subsubsection{Searches for very-high energy spectral features in Mkn 501} Mkn 501 displayed a historically high flux and a hard spectrum during a two-week flare detected with {\itshape Swift}-XRT in the X-rays and with the MAGIC telescopes in the VHE band from 2014 July 16 to 2014 July 31. 
On 2014 July 19 (MJD 56857.98), a narrow spectral feature centered around 3 TeV was detected at $\sim$4$\sigma$ confidence level, in coincidence with the day with the highest X-ray flux ($>$0.3~keV) measured during more than 14 years of operation of the {\itshape Swift} mission \citep{2020A&A...637A..86M}. If real, this VHE spectral feature can be interpreted within the context of three different theoretical scenarios: a two-zone emitting region, a pile-up in the electron energy distribution, or a pair cascade from electrons accelerated in a black-hole magnetospheric vacuum gap. Since the narrow spectral feature is centered at 3 TeV, within the observation window of the ASTRI Mini-Array, and since Mkn 501 is one of the targets selected to be followed with the array, simulations of the ASTRI Mini-Array response have been carried out. The starting point of the simulation is the spectral shape observed by MAGIC after correcting for the EBL absorption using the model by \citet{2011MNRAS.410.2556D}. For the MAGIC spectral fit a log-parabola (LP) and an additional strongly curved LP or a Gaussian function were used. The best-fit parameters are given in table 4 of \citet{2020A&A...637A..86M}. For the ASTRI Mini-Array simulations, the spectral points observed with the MAGIC telescopes are used as input to avoid additional uncertainties from the spectral fit. From this input, we simulate the detectable events, which are in turn used to produce a SED by splitting them into independent energy slices that correspond to independent spectral points. In order to calculate the significance of a possible spectral feature, two types of fits are used and compared with a likelihood ratio test, for both the observed spectrum and the intrinsic one after EBL correction \citep{2011MNRAS.410.2556D}. 
A broad-band LP is assumed as the null hypothesis, and two distinct functions are used to test the hypothesis of an extra spectral component: (i) a LP plus a curved LP \citep[as described in equation 6 of][]{2020A&A...637A..86M} and (ii) a LP plus a Gaussian function. Following this procedure, a single realization of a 1-h observation of Mkn 501 with the ASTRI Mini-Array has been simulated. The result is shown in Fig. \ref{fig:mrk501_bump}: while the observations with the MAGIC telescopes could only reveal the spectral feature at $\sim$4$\sigma$ confidence level, both assumptions of the narrow LP and of the Gaussian function are preferred with respect to a single broad LP fit with a significance of 5.8$\sigma$ and 5.8$\sigma$, respectively, for the observed spectrum, and 5.4$\sigma$ and 5.3$\sigma$ for the EBL-corrected (intrinsic) spectrum. In order to account for the statistical uncertainties of the spectral points observed by MAGIC, we have simulated a set of 200 realizations. At each iteration, every spectral point has been drawn from a normal distribution whose mean and standard deviation correspond to the observed spectral point and its uncertainty, respectively. The results are reported in Tab.~\ref{tab:mrk501bump}. Therefore, the presence of a narrow feature in the spectrum of Mkn 501, assuming a behaviour similar to the one observed with the MAGIC telescopes, could be confirmed with the ASTRI Mini-Array in a 1-h exposure assuming the spectral points observed by MAGIC as shown in Fig.~\ref{fig:mrk501_bump}. However, when taking into account the statistical uncertainties, in order to have at least a $\sim$50\% probability of detecting the feature, 1.5 h of observation time would be required; the probability increases up to $\sim$80\% for 2 h of exposure. This set of simulations has been performed assuming the same characteristics of the MAGIC observations. 
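The realization procedure described above (redrawing each spectral point within its uncertainty and re-running the likelihood ratio test) can be sketched as follows. This is a simplified stand-in for the actual analysis: the bump position and width are held fixed and only its amplitude is fitted, so the test has one extra free parameter and Wilks' theorem gives $\sigma \simeq \sqrt{\Delta\chi^2}$; all function names and thresholds are ours.

```python
import numpy as np

def fit_chi2(design, y, yerr):
    """Weighted linear least squares; returns the best-fit chi-square."""
    w = 1.0 / yerr
    coef, *_ = np.linalg.lstsq(design * w[:, None], y * w, rcond=None)
    return float(np.sum(((y - design @ coef) / yerr) ** 2))

def bump_significance(logE, y, yerr, mu, sig):
    """Likelihood-ratio significance of a narrow Gaussian bump (fixed
    centre mu and width sig in log10 E, free amplitude) over a broad
    log-parabola fitted to log-flux spectral points."""
    lp = np.vstack([np.ones_like(logE), logE, logE ** 2]).T
    bump = np.exp(-0.5 * ((logE - mu) / sig) ** 2)[:, None]
    chi_lp = fit_chi2(lp, y, yerr)
    chi_bp = fit_chi2(np.hstack([lp, bump]), y, yerr)
    return np.sqrt(max(chi_lp - chi_bp, 0.0))

def detection_fraction(logE, y, yerr, mu, sig, n_real=200, thresh=5.0, seed=1):
    """Fraction of Monte Carlo realizations (each point redrawn from a
    normal distribution centred on its measured value) in which the bump
    is preferred above `thresh` sigma."""
    rng = np.random.default_rng(seed)
    hits = sum(
        bump_significance(logE, rng.normal(y, yerr), yerr, mu, sig) > thresh
        for _ in range(n_real)
    )
    return hits / n_real
```

The full analysis instead frees the bump parameters and compares nested fits with a proper likelihood ratio test; the sketch only illustrates the jitter-and-count structure behind Tab.~\ref{tab:mrk501bump}.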
However, future observations will be essential to search for spectral features with potentially different characteristics and/or in different $\gamma$-ray blazars. \begin{table} \caption{Simulations of the Mkn 501 spectral feature hint. The results are reported as the percentage of the 200 realizations in which the spectral feature is detected over a broad LP above the 5$\sigma$ confidence level; the realizations account for the statistical uncertainties of the spectral points observed with the MAGIC telescopes.} \label{tab:mrk501bump} \begin{tabular}{ccccc} \hline \hline \multicolumn{5}{l}{ }\\ Obs. Time & \multicolumn{2}{c}{Observed} & \multicolumn{2}{c}{Intrinsic} \\ (h) & LP & Gaussian & LP & Gaussian \\ \multicolumn{5}{l}{ }\\ \hline \multicolumn{5}{l}{ }\\ 1.0 & 32\% & 27\% & 24\% & 19\% \\ 1.5 & 57\% & 53\% & 48\% & 44\% \\ 2.0 & 77\% & 78\% & 75\% & 70\% \\ \multicolumn{5}{l}{ }\\ \hline \end{tabular} \end{table} \subsection{Blazars beyond the local Universe} In addition to the two (very local) HSPs Mkn 421 and Mkn 501, discussed in the previous section, more blazars will likely be within reach of detection by the ASTRI Mini-Array, even if the EBL absorption is expected to significantly reduce the observed VHE flux of more distant objects. As already discussed, HSPs represent the most promising class of extragalactic so\-urces to be detected. The actual probability of detecting them depends on their global brightness, the shape of the $\gamma$-ray spectrum and their redshift. About 50 HBLs have been detected so far at TeV energies by former and current ground-based Cher\-enkov detectors\footnote{See the ASI-SSDC TeGeV web catalog ({\ttfamily https://www.ssdc.asi.it/tgevcat/}) or the TeVCat v2.0 ({\ttfamily http://tevcat2.uchicago.edu/}) for the complete list.}. 
The relatively small number of blazars currently detected at very high energies is mainly due to the limited sensitivity of the current Cher\-enkov instruments at TeV energies and to the lack of systematic searches. Observations of HSP blazars by the ASTRI Mini-Array can be interesting for several reasons: they can be used as probes for the EBL distribution and fundamental physics studies (as shown in {\bfseries Paper II}), but they can also be observed to study particle acceleration processes up to the most extreme energies and to identify the origin sites of UHECRs and cosmic neutrinos with energies beyond the PeV. Given the expected energy threshold ($\sim$1 TeV) and sensitivity (e.g., a factor of $\gtrsim$2 better than the H.E.S.S. Cherenkov array at the highest energies above $\sim$10 TeV; see Sect. 8.2 of {\bfseries Paper II}), the ASTRI Mini-Array is a suitable instrument to perform observations and spectral characterization of HBL/HSP blazars and, in particular, of the so-called {\it extreme} blazars (EHSP), with the IC component peaking in the TeV band. Moreover, the large ASTRI Mini-Array FoV can be exploited to perform HSP/HBL and EHSP blazar surveys, possibly in joint observations with the other classes of candidate and known TeV extragalactic sources (see Sect. \ref{sec:simobs}). Coupled with data from existing Cherenkov facilities (MAGIC, H.E.S.S., VERITAS, HAWC), the analysis of ASTRI Mini-Array exposures will be beneficial to the VHE astronomical community for characterizing the TeV emission of these sources in a multi-instrument framework. \begin{table*}[width=17cm,align=\centering] \centering \caption{List of HSPs potentially detectable with 50-h observations by the ASTRI Mini-Array on the basis of the method described by \citet{bal20}. 
The last column reports the value of the parameter $F$ which is proportional to the chance for the source to be detected by the ASTRI Mini-Array (see text for details).} \label{tab:hsp-ma} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccc} \hline \hline \multicolumn{8}{l}{ }\\ Target & IAU name & R.A. J2000 & dec. J2000 & Min. ZA & $z$ & {\itshape Fermi} / TeVCat & $F$ \\ & & (deg) & (deg) & (deg) & & & \\ \multicolumn{8}{l}{ }\\ \hline \multicolumn{8}{l}{ }\\ BZBJ0643+4214 & B3 0639+423 & 100.86 & $+$42.24 & 13.94 & 0.089 & N / N & 0.14 \\ BZBJ1104+3812 & Mkn 421 & 166.11 & $+$38.21 & 9.91 & 0.030 & Y / Y & 0.85 \\ BZBJ1117+2014 & RGB J1117+202 & 169.28 & $+$20.24 & 8.06 & 0.138 & Y / N & 0.14 \\ BZBJ1136+7009 & Mkn 180 & 174.11 & $+$70.16 & 41.86 & 0.045 & Y / Y & 0.48 \\ BZBJ1217+3007 & TON605 & 184.47 & $+$30.12 & 1.82 & 0.130 & Y / Y & 0.21 \\ BZBJ1428+4240 & 1ES 1426+428 & 217.14 & $+$42.67 & 14.37 & 0.129 & Y / Y & 0.12 \\ BZBJ1653+3945 & Mkn 501 & 253.47 & $+$39.76 & 11.46 & 0.033 & Y / Y & 0.41 \\ BZBJ1728+5013 & 1ES 1727+650 & 262.08 & $+$50.22 & 21.92 & 0.055 & Y / Y & 0.23 \\BZBJ1959+6508 & 1ES 1959+650 & 300.00 & $+$65.15 & 36.85 & 0.047 & Y / Y & 0.46 \\ BZBJ2123$-$1036 & RBS 1742 & 320.78 & $-$10.61 & 38.91 & 0.023 & N / N & 0.19 \\ BZBJ2347+5142 & 1ES 2344+514 & 356.77 & $+$51.70 & 23.40 & 0.044 & Y? / Y & 0.17 \\ \multicolumn{6}{l}{ }\\ \hline \end{tabular} } \end{table*} In order to select a list of HSPs potentially detectable with the ASTRI Mini-Array, we have followed two independent methods. In the first one we start from the catalog of blazars discovered so far \citep[the Roma-BZCAT Multifrequency Catalogue of Blazars;][]{mas09,mas15} to find all the HSPs (not necessarily ``extreme'') with declination $\delta \gtrsim -20^\circ$ -- which guarantees a good source visibility for Cherenkov observations under a maximum ZA of about $45^\circ$ -- that can have a VHE emission strong enough to be detected by the instrument. 
Since the BZCAT is not a complete catalog, we also present a second selection, this time based on the 3HSP catalog of EHSP candidates \citep{cha19}, specifically focused on the selection of more cases of extreme HSPs not yet detected at TeV energies. \subsubsection{HSPs from the BZCAT}\label{sec:bzcat} In order to select a reasonable list of good targets, we have first considered all the HSPs present in the BZCAT and followed the method discussed in \citet{bal20} to predict the VHE emission of each object. Unlike other methods, which are based on the extrapolation of $\gamma$-ray (typically {\itshape Fermi}-LAT) photometric points into the VHE regime, this technique is based only on low-energy data, in the radio and X-ray bands. This approach was motivated by the idea that some of the HSPs detectable by CTA could be faint at lower energies if they have a very hard $\gamma$-ray spectrum, and may not yet have been detected by {\itshape Fermi}-LAT. We now want to use the same approach for the ASTRI Mini-Array. The prediction of the VHE properties based on radio and X-ray data is possible because the synchrotron and IC humps observed in the SEDs of HSPs are mutually connected, so that radio, X-ray and $\gamma$-ray properties are significantly correlated. For example, as discussed in detail in \citet{bal20}, from the intensity of the radio emission it is possible to obtain a reasonable estimate of the intensity of the $\gamma$-ray emission in the {\itshape Fermi}-LAT energy band, while from the X-ray-to-radio flux ratio parametrized with the two-point spectral index $\alpha_{\rm RX}$ we can derive the slope of the $\gamma$-ray spectrum and the position of the synchrotron and IC peaks. The predicted VHE emission can then be folded with the EBL absorption model \citep{fra17} in order to obtain a prediction of the number of photons actually observable at VHE. 
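The EBL folding step amounts to multiplying the predicted intrinsic spectrum by $e^{-\tau(E,z)}$. A minimal sketch, in which a toy optical-depth array stands in for the actual \citet{fra17} model and all numerical values are illustrative only:

```python
import numpy as np

def fold_with_ebl(intrinsic_flux, tau):
    """Observed flux after EBL pair-production absorption:
    F_obs(E) = F_int(E) * exp(-tau(E, z)).  `tau` must be the optical
    depth evaluated on the same energy grid; here a toy array is used
    in place of a real EBL model."""
    return intrinsic_flux * np.exp(-tau)

E = np.array([1.0, 3.0, 10.0, 30.0])   # TeV
f_int = 1e-12 * E ** -2.0              # toy intrinsic power law
tau = 0.1 * E                          # toy optical depth, NOT a real EBL model
f_obs = fold_with_ebl(f_int, tau)      # suppression grows with energy
```

In a real computation `tau` would be interpolated from tabulated $\tau(E,z)$ values of the adopted EBL model at the source redshift.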
In order to take into account the large scatter in these statistical relations we produce a significant number (1000) of possible realizations of the VHE spectrum \citep[see][for more details]{bal20}. The fraction $F$ of VHE spectral realizations that lie above the 50-h ASTRI Mini-Array sensitivity curve then gives a reasonable estimate of the detection probability with this instrument. We note that the large scatter affecting the relations used in this method -- from $\sim$0.2 dex to $\sim$0.5 dex \citep{bal20} -- is not only due to the intrinsic variance of the properties of the population, but it also includes the strong variability of the sources. This means that a relatively low value of $F$, e.g. 0.1, does not necessarily mean that the object has a low (10\%) probability of being detected: if the source is variable and it is caught during a high state -- e.g., following a trigger from MWL monitoring -- then its chances of being detected could be significantly higher. At the same time, variability may lead to an overestimate of the value of $F$ under some specific circumstances. This means that a high value of $F$ will not necessarily guarantee the actual detection of the source. As a test, we have applied the same method to the HSPs observed by VERITAS: 25 HSPs have been detected so far according to \citet{ben19}, while another 43 have only ULs in \citet{arc16}. Using the 50-h VERITAS sensitivity curve, we computed the values of $F$ for all of these 68 HSPs (see Fig. \ref{fig:veritas}). As expected, the 25 detected sources have values of $F$ significantly larger than the non-detected ones, with 80\% having $F>0.1$ (compared to 30\% of the non-detected). This is a reasonable result, considering the large uncertainties of the method, the variability of the sources and the fact that different exposure times have likely been used for all these targets, while we are using only one sensitivity curve. 
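Computing $F$ then reduces to comparing each simulated spectrum with the instrument sensitivity curve. A minimal numpy sketch; the log-normal form of the scatter and the detection criterion used here (at least one energy bin above the curve) are our simplifying assumptions, not the exact prescription of \citet{bal20}:

```python
import numpy as np

def fraction_above(realizations, sensitivity, n_bins_min=1):
    """F = fraction of simulated VHE spectra with at least `n_bins_min`
    energy bins above the sensitivity curve.
    realizations: (N, n_E) array of fluxes; sensitivity: (n_E,) fluxes."""
    n_above = (realizations > sensitivity[None, :]).sum(axis=1)
    return float(np.mean(n_above >= n_bins_min))

def simulate_spectra(mean_log10_flux, scatter_dex, n=1000, seed=7):
    """Draw spectra with log-normal scatter (0.2-0.5 dex, as quoted for
    the radio/X-ray relations) around the predicted mean spectrum."""
    rng = np.random.default_rng(seed)
    return 10.0 ** rng.normal(mean_log10_flux, scatter_dex,
                              size=(n, len(mean_log10_flux)))
```

With 1000 realizations per source, $F$ is determined to a statistical precision of about $\pm0.015$ at $F \approx 0.1$, well below the systematic uncertainties of the method itself.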
Based on this result, we have decided to adopt a threshold of $F>0.1$ for the selection of a list of candidates for the ASTRI Mini-Array follow-up. This selection should maximize the detection probability on one hand, and the level of completeness on the other. For the sources without a redshift estimate but with a lower limit from the literature, we adopt the lower limit as the actual value. If no lower limits are available, we assume a tentative $z$ based on the optical magnitude. By selecting the sources with at least 10\% of VHE spectral realizations above the sensitivity curve ($F \gtrsim 0.1$) we obtain a list of 11 HSPs with $\delta \gtrsim -20$ deg (i.e. observable from the ASTRI Mini-Array site). The list includes the two brightest and closest HSPs, i.e. Mkn 421 and Mkn 501. As expected, several extreme HSPs like 1ES 1426$+$428 are present in the list, but we also find HSPs with a not-so-extreme synchrotron peak (between 10$^{16}$ and 10$^{17}$ Hz), like RGB J1117+202, that will be discussed in more detail in the next sections. All but two sources in the selected list are detected by {\itshape Fermi}-LAT, although one (BZB J2347$+$5142) only appears in the preliminary version of the 4FGL catalogue \citep{FermiColl19} and not in the final one. Finally, 8 out of 11 selected HSPs have already been detected at TeV energies with the current generation of Cherenkov telescopes (see Tab.~\ref{tab:hsp-ma}). As discussed above, we expect that most of the sources listed in Tab.~\ref{tab:hsp-ma} will actually be detected in sufficiently deep ASTRI Mini-Array pointings, although the objects with the lowest values of $F$ ($<$0.2) may require prior MWL monitoring in order to catch them during a high state.
In conclusion, while the above studies show that ASTRI can re-detect the brightest TeV emitters and possibly add individual sources to the TeV catalog, it is unlikely that the number of newly detected sources will significantly increase the currently known catalog of $\sim$50 BL Lac objects. \subsubsection{Extreme blazars}\label{sec:blazres} The second selection is focused on the ``extreme'' HSPs. Recently, the third edition of the HSP/HBL and extreme blazar catalog \citep[3HSP;][]{cha19} has been released. The catalog contains more than 2000 HSP blazar candidates, with more than 300 classified as EHSPs. Most of the catalog sources also report a redshift estimate (a photometric one whenever a spectroscopic measurement is not available) and their $\gamma$-ray counterpart, based on the cross-match with the first release of the Fourth Catalog of {\it Fermi}-LAT Sources \citep[4FGL;][]{FermiColl19}, the Second and Third Catalog of Hard {\it Fermi}-LAT Sources \citep[2FHL and 3FHL;][]{Fermi_2FHL, Fermi_3FHL}, and the First Brazil ICRANet $\gamma$-ray blazar catalog \citep[1BIGB;][]{1BIGB_CAT}. A useful quantity provided with the catalog is the figure of merit (FoM) parameter\footnote{The FoM is defined as the ratio of the flux at the synchrotron peak ($\nu_{\rm peak}{\rm f_{\nu_{\rm peak}}}$) of a given source to the peak flux of the faintest 1WHSP blazar detected in the TeV band by the current IACT arrays \citep{cha19}.}, which quantifies the potential detectability at TeV energies of each HSP object. In what follows, we have used this catalog to select the best possible EHSP targets to be observed with the ASTRI Mini-Array from the {\itshape Observatorio del Teide} site. \begin{table*}[width=17cm,align=\centering] \centering \caption{List of candidate EHSP targets extracted from the 3HSP catalogue, sorted by decreasing value of the FoM parameter (see text). We note that one source (3HSPJ064326.7+421418) is in common with the list presented in Tab.
\ref{tab:hsp-ma}.} \label{tab:ehsp-ma} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccc} \hline \hline \multicolumn{8}{l}{ }\\ Target & R.A. J2000 & dec. J2000 & Min. ZA & $z$ & {\itshape Fermi} / TeV & FoM & Min. Det. Time \\ & (deg) & (deg) & (deg) & & & & (hrs) \\ \multicolumn{8}{l}{ }\\ \hline \multicolumn{8}{l}{ }\\ 3HSPJ064007.2$-$125315 & 100.03 & $-$12.89 & 41.19 & 0.110 & Y / N & 5.01 & $<$50 \\ 3HSPJ151148.6$-$051346 & 227.95 & $-$5.23 & 33.53 & -- & Y / N & 2.51 & -- \\ 3HSPJ180408.9$+$004222 & 271.04 & $+$0.71 & 27.59 & 0.087 & N / N & 2.51 & -- \\ 3HSPJ001827.8$+$294730 & 4.62 & $+$29.79 & 1.49 & 0.100 & Y / N & 1.58 & $<$100 \\ 3HSPJ034819.9$+$603508 & 57.08 & $+$60.59 & 32.29 & -- & Y / N & 1.58 & -- \\ 3HSPJ050021.5$+$523801 & 75.09 & $+$52.63 & 24.33 & 0.150 & Y / N & 1.58 & $<$200 \\ 3HSPJ204206.0$+$242652 & 310.53 & $+$24.45 & 3.85 & 0.104 & Y / N & 1.58 & $<$200 \\ 3HSPJ044127.5$+$150455 & 70.36 & $+$15.08 & 13.22 & 0.109 & Y / N & 1.26 & $<$200 \\ 3HSPJ151041.1$+$333504 & 227.67 & $+$33.58 & 5.28 & 0.114 & N / N & 1.26 & -- \\ 3HSPJ151845.7$+$061356 & 229.69 & $+$6.23 & 22.06 & 0.102 & Y / N & 1.26 & $\gg$200 \\ 3HSPJ064326.7$+$421418 & 100.86 & $+$42.24 & 18.44 & 0.089 & N / N & 1.00 & -- \\ 3HSPJ005916.9$-$015017 & 14.82 & $-$1.84 & 30.13 & 0.114 & Y / N & 0.79 & $<$200 \\ 3HSPJ102212.6$+$512400 & 155.55 & $+$51.40 & 23.10 & 0.142 & Y / N & 0.79 & $\gg$200$^*$ \\ 3HSPJ090802.2$-$095937 & 137.01 & $-$9.99 & 38.29 & 0.054 & N / N & 0.63 & -- \\ 3HSPJ122514.2$+$721447 & 186.31 & $+$72.25 & 43.95 & 0.114 & N / N & 0.63 & -- \\ 3HSPJ190411.8$+$362658 & 286.05 & $+$36.45 & 8.15 & 0.130 & Y / N & 0.63 & $<$100 \\ \multicolumn{8}{l}{ }\\ \hline \end{tabular} } \begin{flushleft} {\scriptsize $^*$The 4FGL counterpart of this candidate target (4FGL J1021.9+5123) shows a very poor spectrum determination, and it is flagged as 2048 (highly curved spectrum) in the DR3 Catalogue \citep{Fermi4FGLDR3}.} \end{flushleft} \end{table*} We start by selecting only
objects with $\delta \gtrsim -20^\circ$ and redshift $z < 0.15$, since above $\sim$1 TeV (i.e. the nominal energy threshold of the ASTRI Mini-Array) the $\gamma$-ray absorption due to the EBL is already severe for sources above a redshift of 0.1 \citep[see e.g. figures 11 and 12 in][]{fra17}. With this first filtering, we end up with a sample of 258 3HSP objects accessible from the ASTRI Mini-Array at Teide. Applying a further selection on FoM $\gtrsim 0.5$, we eventually end up with a sample of 146 3HSP sources, which includes most of the well-known TeV-emitting HBLs, e.g. Mkn 421, Mkn 501, 1ES 1426$+$428, Mkn 180, 1ES 1959$+$650, 1ES 2344$+$514 (already included in Tab. \ref{tab:hsp-ma}) and 1ES 0229$+$200 (with $z=0.139$ and FoM $= 3.98$ from the 3HSP catalog); in particular, the latter is considered the prototype of such extreme TeV sources\footnote{1ES 0229$+$200 also appears as one of the core-science targets discussed in {\bfseries Paper II}.}. The scientific aim of targeted observations of EHSP blazars with the ASTRI Mini-Array is two-fold: first, the spectral characterization above several TeV of a few selected EHSP blazars already detected at TeV energies; secondly, the detection of new TeV sources belonging to this class of objects. In particular, we want to concentrate the observations on already known TeV-extreme blazars like 1ES 0229$+$200, plus a selected sample of EHSP candidates extracted from the 3HSP catalog and not yet detected at TeV energies. Detailed simulations of the expected spectrum to be observed with the ASTRI Mini-Array in the case of 1ES 0229$+$200, along with the estimate of the observing time needed to reach our first scientific objective (extended spectral measurements of an EHSP target), will be given in the next section. In what follows, we further process our sample of 3HSP sources visible from the ASTRI Mini-Array site in order to select the most promising candidates not yet detected at TeV energies.
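The chain of cuts described above (declination, redshift, FoM) can be sketched as a simple filter. The records below and the handling of missing redshifts are purely illustrative, not the actual 3HSP query; the only real entries are taken from the candidate table.

```python
# Hypothetical 3HSP-like records: (name, dec [deg], z, FoM);
# z is None when no redshift estimate is available.
candidates = [
    ("3HSPJ064007.2-125315", -12.89, 0.110, 5.01),
    ("3HSPJ151148.6-051346",  -5.23, None,  2.51),
    ("SouthernSource",        -45.00, 0.050, 3.00),
    ("HighZSource",            30.00, 0.300, 4.00),
    ("LowFoMSource",           10.00, 0.100, 0.20),
]

def select_targets(cands, dec_min=-20.0, z_max=0.15, fom_min=0.5):
    """Apply the declination, redshift and FoM cuts; sources without a
    redshift are retained at this stage (as in the candidate table,
    where some selected targets have no z)."""
    return [name for (name, dec, z, fom) in cands
            if dec >= dec_min and (z is None or z < z_max) and fom >= fom_min]

selected = select_targets(candidates)
```

Here only the first two entries survive all three cuts; the others fail the declination, redshift and FoM thresholds, respectively.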
Using the selection criteria described above on source declination, redshift and FoM, and further filtering the remaining sample by selecting only the extreme blazars with an estimated $\nu_{\rm peak}$ above 10$^{17}$~Hz, we finally end up with a sample of 16 EHSP sources visible from the ASTRI Mini-Array site and not yet detected at TeV energies. The complete list is reported in Tab. \ref{tab:ehsp-ma}. Then, taking advantage of the very recent Data Release 3 of the Fourth {\itshape Fermi}-LAT Catalog \citep[4FGL-DR3;][]{Fermi4FGLDR3}, covering 12 yr of observations, we have reviewed and updated the $\gamma$-ray counterparts of the selected 3HSP sample. We ended up with 11 out of 16 targets with a counterpart in the 4FGL-DR3 catalog (see the {\itshape Fermi} / TeV column in Tab. \ref{tab:ehsp-ma}). Still exploiting the 4FGL-DR3 catalog, we have performed a detectability study of the selected sample for all the sources which have both a {\itshape Fermi}-LAT counterpart and a redshift estimate (9 out of 16 targets). Most of the 4FGL counterparts are significantly detected up to 1 TeV; hence, we have estimated the intrinsic 3HSP multi-TeV source spectrum by extrapolating the power-law spectrum measured by {\itshape Fermi}-LAT, and then correcting it for EBL absorption. The expected EBL-absorbed target spectra above 1 TeV are then compared with the ASTRI sensitivity curves for different exposure times (50, 100, 200, and 500 hours). In the last column of Tab. \ref{tab:ehsp-ma}, we report the minimum number of hours required for a detection at the 5$\sigma$ level estimated in this way. The overall result of our study, although limited to the sources with a {\itshape Fermi}-LAT counterpart and known redshift, is that at least 40\% of the candidate EHSP targets listed in Tab. \ref{tab:ehsp-ma} are detectable within 200 hours of observation.
Since the full ASTRI Mini-Array will likely be operational in 2--3 years from now, some of the EHSP targets listed above might already be detected at TeV energies by the current Cherenkov detectors operating in the Northern hemisphere (MAGIC, VERITAS, HAWC). Should this be the case, the proposed EHSP targets might change accordingly. Observation priority should in any case be given to the targets with the highest FoM and the hardest $\gamma$-ray spectra (when available). Finally, the large FoV of the ASTRI Mini-Array can be exploited to perform joint observations of 3HSP blazars with different classes of possible TeV emitters (see Sect. \ref{sec:simobs}). In this way, the very long integration times needed to achieve significant results in the other fields of interest (see Sect. \ref{sec:ngc1068} and \ref{sec:dmctools}) might be used to observe more than one target at the same time. \subsubsection{Simulations of representative cases of observable blazars} In order to show the actual capability of the ASTRI Mini-Array to detect HSPs/EHSPs, we present here detailed simulations of two representative cases, namely an extreme HSP (1ES 0229$+$200) and a non-extreme one (RGB J1117$+$202). The simulated observation of RGB J1117$+$202 presented in this paper is of particular interest, since it allows us to assess the performance of the ASTRI Mini-Array for the study of similar objects in their low state. The results obtained show the improved sensitivity in the multi-TeV range of the ASTRI Mini-Array compared to current IACTs. With this new facility, we will be able to extend to higher energies the spectral study of known TeV-emitting HSPs, and also to detect in the VHE range some new HSP candidates such as those recently selected for VHE observations \citep[see e.g.][]{cos18,bal19,cha19}.
In particular, the improved performance at VHE will be a significant advantage for the study of extreme HSPs and, among them, of the so-called hard-TeV BL Lacs.\\ \begin{center} {\itshape 1ES 0229+200} \end{center} The VHE $\gamma$-ray emission of 1ES 0229$+$200 was discovered by H.E.S.S. in 2007 \citep{2007A&A...475L...9A}. Later, the source was also observed by VERITAS \citep{aliu14} and MAGIC \citep{mag19}. By correcting the observed spectrum for the EBL absorption effect, a very hard intrinsic spectrum with a photon index $\Gamma_{\rm intr} = 1.5$ is obtained. The $\gamma$-ray emission peak is thus located in the multi-TeV range. Due to its spectral characteristics, this source is classified as a hard-TeV BL Lac. To perform the simulation, we provide as input spectral model an EBL-absorbed power law obtained from the results of the VERITAS observations. The significance of the detection is estimated through the test statistic (TS). The derived TS values ensure a solid detection ($>$5$\sigma$) in each of the simulated energy bins. From this study, we expect to detect this source with observations of less than 100 h; observations of about 200 h will allow a good characterization of the spectrum. Fig. \ref{fig:1es0229_simul} shows the results of our simulation for 200-h observations with the ASTRI Mini-Array compared to existing datasets. Given the assumed input model, and since the VHE emission is suppressed by EBL absorption, the improved ASTRI Mini-Array performance at VHE is not fully exploited in this case. However, the possibility of extending to higher energies the spectral study of this class of objects is important. Since this source is also included in the ASTRI Mini-Array core science programme (see {\bfseries Paper II}), it will therefore be possible to collect data over even longer exposure times, thus enabling accurate studies of its fundamental properties.
The simulation results for this source are presented mainly as a reference for the study of possible new candidates of this class. Only a few objects with a similar VHE spectral shape have been observed so far, and thus the detection of similar sources would be very useful for several purposes. An extended sample of hard-TeV sources would allow significant progress in the investigation of fundamental physics and the testing of EBL models, discussed in {\bfseries Paper II}, and would be crucial for the study of emission processes at VHE and, more generally, for a more detailed knowledge of the blazar population. \begin{center} {\itshape RGB J1117+202} \end{center} As described in Sect. \ref{sec:bzcat}, we expect that the ASTRI Mini-Array will be able to detect not only EHSPs but also less extreme sources. Among the objects included in Tab. \ref{tab:hsp-ma}, we decided to focus on RGB J1117$+$202 ($z=0.138$), a non-extreme HSP (its synchrotron peak falls between 10$^{16}$ and 10$^{17}$ Hz) that has been observed at TeV energies with H.E.S.S. \citep{aha05b} but not yet detected. Our procedure selected it as a potential target for the ASTRI Mini-Array thanks to its bright radio flux combined with a relatively flat $\alpha_{\rm RX}$ value (i.e. a relatively high X-ray-to-radio flux ratio). Indeed, it turned out to be a strong $\gamma$-ray source, clearly detected by {\itshape Fermi}-LAT. We decided to simulate this source to show the actual capability of the ASTRI Mini-Array to detect HSPs that are not necessarily extreme and not yet observed with the current generation of Cherenkov telescopes. This will demonstrate the actual potential of the Mini-Array in improving our knowledge of the VHE properties of the blazar population.
For the simulation, we extrapolate the observed {\itshape Fermi}-LAT spectrum to higher energies using a power law with an exponential cut-off: \begin{equation} \Phi_\gamma(E_\gamma)=k_0\left(\frac{E_\gamma}{E_0}\right)^\Gamma e^{-E_\gamma/E_{\rm cut}} \end{equation} where $k_0$ is the normalization, $\Gamma$ is the power-law index, $E_0$ is the pivot energy and $E_{\rm cut}$ is the energy of the exponential cut-off. We set the cut-off at 3 TeV on the basis of the statistical relations described in \citet[][see their Eqs. 2 and 3]{bal20}, taking into account the correction for the effect of EBL absorption (see Sect. \ref{sec:intro}). The resulting spectrum given as input for our simulation is reported in Fig. \ref{fig:J1117}. Using this model, we simulate the observation of the source with the ASTRI Mini-Array considering 50 h and 200 h of exposure time, producing a single event list for photon energies $E_\gamma \gtrsim 0.8$ TeV. Then, we estimate the detection significance of the source through the TS value: we obtain a significance $\gtrsim$12$\sigma$ already for a 50-h total exposure, confirming that RGB J1117$+$202 can be clearly detected with such an amount of observing time. The simulated spectrum for a 200-h exposure is presented in Fig. \ref{fig:J1117}, showing how it allows a good characterization of the observed $\gamma$-ray emission. We note that the H.E.S.S. observation of RGB J1117$+$202 provided a UL on the integrated flux above 0.61~TeV of $1.44 \times 10^{-12}$ cm$^{-2}$ s$^{-1}$ \citep{aha05b}. This value corresponds to a flux density at 0.61~TeV that is $\sim$40\% lower than our model. Interestingly, the GeV flux density reported in the 10-yr version of the {\itshape Fermi}-LAT catalogue \citep[4FGL-DR2;][]{abd20} is also lower, by $\sim$30\%, than the 3FGL flux density that we used to constrain the model.
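The cut-off power law of Eq. (1) is straightforward to evaluate numerically; the sketch below uses illustrative parameter values (only the 3-TeV cut-off is taken from the text; the pivot energy, index and normalization are hypothetical).

```python
import math

def phi_gamma(E, k0, gamma, E0, E_cut):
    """Cut-off power law of Eq. (1): Phi(E) = k0 (E/E0)^Gamma exp(-E/E_cut).
    Gamma is negative for a falling spectrum; E, E0 and E_cut must share
    the same energy unit."""
    return k0 * (E / E0) ** gamma * math.exp(-E / E_cut)

# Illustrative values: pivot at 1 TeV, index -2, cut-off at 3 TeV.
flux_at_pivot = phi_gamma(1.0, 1e-12, -2.0, 1.0, 3.0)
```

Below the cut-off the spectrum falls as the bare power law; above it the exponential term takes over, which is why the ASTRI Mini-Array band around and beyond a few TeV is the most constraining for this model.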
This flux difference, considering the errors on the fluxes reported by these catalogues (4\%--7\%), seems to be significant, suggesting that the source has varied. This is further confirmed by the high value of the {\itshape variability index} given in the 4FGL (138.7), which is well above the threshold (18.48) used to claim that a source is variable at the 99\% confidence level \citep[see][for details]{aha05b}. However, if we renormalize the model by this 30\%--40\%, we are still able to predict a detection at $\gtrsim$8$\sigma$ significance in a 50-h observation with the ASTRI Mini-Array. This confirms that the HSPs listed in Tab.~\ref{tab:hsp-ma} with low values of $F$ (between 0.1 and $\sim$0.2) are difficult targets, but ones whose detection is nevertheless feasible (especially if the observation is carried out during a high state, see Sect. \ref{sec:bzcat}). In conclusion, our simulations demonstrate that the detection with the ASTRI Mini-Array of sources belonging to the HSP class revealed by {\itshape Fermi}-LAT, but not yet detected at higher energies by the current generation of Cherenkov telescopes, is possible even with relatively short observations (50 h), at least up to redshifts of $\sim$0.15. \subsection{NGC 1068}\label{sec:ngc1068} The prototypical Seyfert 2 galaxy NGC 1068 is a nearby galaxy ($d_\odot = 14.4$ Mpc, corresponding to $z = 0.0034$) hosting a luminous AGN \citep[$L_{\rm bol} \simeq 10^{45}$ erg s$^{-1}$;][]{rig09}. Its hard X-ray-to-[O {\scriptsize IV}] luminosity ratio is about 500 times lower than the value expected for unobscured AGN of this luminosity, implying that an extremely high column density $N_{\rm H} \gtrsim 10^{25}$ cm$^{-2}$ \citep[the so-called Compton-thick regime; e.g.,][]{com04} is blocking the line of sight to the nucleus. This source also exhibits starburst activity in its central region.
Interferometric observations in the millimetric band identified a $\sim$2 kpc-wide starburst ring that surrounds a circumnuclear disk (CND) of $\sim$100 pc diameter. A sizeable fraction of the molecular gas in the CND is observed to be involved in a massive AGN-driven wind \citep{kri11,Garcia14}, which causes shocks at the interface with the quiescent gas. In the radio band, structures similar to collimated outflows, but weaker and slower than the jets observed in blazars, have been detected \citep{gal96}. In the $\gamma$-ray domain, NGC 1068 was observed in the HE band by {\itshape Fermi}-LAT, and in the VHE band by the MAGIC telescopes and the HAWC $\gamma$-ray Observatory. NGC 1068 is the brightest of the Seyfert/starburst galaxies detected by {\itshape Fermi}-LAT. The spectral analysis based on 10 yr of {\itshape Fermi}-LAT data yields a power-law index of $-2.3$ and an energy flux integrated between 100 MeV and 100 GeV of $6.5\times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ \citep{FermiColl19,Ballet20}. The MAGIC telescopes observed NGC 1068 for 125 hours. No significant $\gamma$-ray emission was detected, and a UL at the 95\% confidence level to the $\gamma$-ray flux above 200 GeV of $<$5.1$\times 10^{-13}$ cm$^{-2}$ s$^{-1}$ was derived \citep{MAGIC19}. The origin of the $\gamma$-ray emission in NGC 1068 is still undetermined, owing to the simultaneous presence of different particle acceleration sites (starburst ring, CND, jets). Figure \ref{fig:ngc1068} shows the $\gamma$-ray spectrum of NGC 1068 in the HE and VHE bands, as well as the spectra predicted by the starburst \citep{eic16}, AGN jet \citep{len10}, and AGN wind \citep{lam16,lam19} models that have been proposed in the literature to explain the $\gamma$-ray emission. The predictions of the theoretical models differ significantly in the VHE band.
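For a pure power law, a quoted integrated energy flux fixes the spectral normalization analytically. The sketch below illustrates this standard conversion; the $6.5\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$ and index $-2.3$ are from the text, while the 1 GeV pivot and the unit conversions are illustrative assumptions.

```python
def powerlaw_energy_flux(k0, gamma, E0, E_min, E_max):
    """Energy flux S = integral of E * k0 (E/E0)^gamma dE between E_min
    and E_max, valid for gamma != -2 (integrand exponent gamma + 1)."""
    a = gamma + 2.0
    return k0 * E0 ** (-gamma) * (E_max ** a - E_min ** a) / a

def normalization_from_energy_flux(S, gamma, E0, E_min, E_max):
    """Invert the integral above to recover k0 from a quoted energy flux S."""
    return S / powerlaw_energy_flux(1.0, gamma, E0, E_min, E_max)

# 100 MeV and 100 GeV expressed in erg (1 GeV ~ 1.602e-3 erg), pivot 1 GeV:
E_min, E_max, E0 = 1.602e-4, 1.602e-1, 1.602e-3
k0 = normalization_from_energy_flux(6.5e-12, -2.3, E0, E_min, E_max)
```

The recovered $k_0$ reproduces the quoted band-integrated energy flux by construction, which is the usual starting point for extrapolating such a catalog spectrum to the VHE band.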
The leptonic AGN jet model is characterized by a sharp cutoff at energies $\sim$100 GeV, while the hadronic starburst and AGN wind models extend to the VHE band, but with different spectral slopes. MAGIC observations of NGC 1068 put stringent constraints on the AGN wind model parameters, such as the proton spectral index $p$, the cut-off energy $E_{\rm cut}$, the calorimetric fraction $F_{\rm cal}$, and the proton acceleration efficiency $\eta$. In this paper we simulated the VHE spectrum predicted by the AGN wind model taking into account the MAGIC constraints (see Figure \ref{fig:ngc1068}). Following \cite{MAGIC19} we adopted $p=2$, $E_{\rm cut}=2\times 10^6$ GeV, $\eta=0.1$, and $F_{\rm cal}=0.5$. This model predicts a $\gamma$-ray flux about an order of magnitude lower than that measured by {\it Fermi}-LAT, requiring additional mechanisms, such as the AGN jet and star formation activity, to explain the $\gamma$-rays in the HE band, but it provides a rather flat spectrum that extends into the energy band covered by the ASTRI Mini-Array. As discussed in \cite{lam19}, although NGC 1068 is a relatively nearby source, above a few tens of TeV the effects of the interaction of $\gamma$-ray photons with the EBL start to be important and lead to the absorption of a substantial fraction of the flux. The electron-positron pairs produced in this way, however, scatter off the photons of the CMB, triggering an electromagnetic cascade that reprocesses the absorbed flux. These pairs can be deflected by intergalactic magnetic fields (IGMFs), which are very uncertain \citep[see][for a review]{alvesbatista2021a}. Conversely, it is possible to use $\gamma$-ray observations to constrain IGMF intensities \citep{neronov2010a,tavecchio2010a,hess2014a,veritas2017a,fermi2018a, alvesbatista2020a,cta2021b}, as discussed in {\bfseries Paper II}.
It has been suggested that electromagnetic cascades could also be quenched by plasma instabilities \citep{broderick2012a,schlickeiser2012a}, which would ultimately cool down the pairs, hardening the spectrum at lower energies. However, the importance of this effect is disputed \citep{miniati2013a,alvesbatista2019g,alawashra2022a}. Here, given the uncertainties associated with both IGMFs and plasma instabilities, we neglect these effects and consider only EBL absorption. The $\gamma$-ray spectrum obtained in this way, which is shown with the dashed line in Figure \ref{fig:ngc1068}, is used to carry out the simulation following the procedure already used for starburst galaxies and other scientific cases presented in {\bfseries Paper II}. We simulated NGC 1068 as a point-like source, located at its known coordinates, and without considering the energy dispersion. \begin{table*}[width=17cm,align=\centering] \centering \caption{Relevant quantities (see text) for the three optimal dSphs selected in the Northern hemisphere: Ursa Minor (UMi), Coma Berenices (CBe), and Ursa Major II (UMa~II).
The astrophysical factors for DM annihilation $J$ and decay $D$ are reported for both integration angles of 0.1 deg and optimal angles $\alpha_J$ and $\alpha_D$ as defined in \citet{bon15c}.} \label{tab:dsph} \resizebox{\textwidth}{!}{ \begin{tabular}{lccccccc} \hline \hline \multicolumn{8}{l}{ }\\ Name & Type & $r_h$ & $\sigma_v$ & $M_V$ & $M/L$ & $\epsilon$ & Ref.\\ & & (pc) & (km s$^{-1}$) & (mag) & (M$_\odot$/L$_\odot$) & & \\ \multicolumn{8}{l}{ }\\ \hline \multicolumn{8}{l}{ }\\ UMi & cls & $181 \pm 27$ & $9.5 \pm 1.2$ & $-8.8 \pm 0.5$ & 34 & $0.56 \pm 0.05$ & 1\\ CBe & uft & $77 \pm 10$ & $4.6 \pm 0.8$ & $-4.1 \pm 0.5$ & 252 & $0.38 \pm 0.14$ & 1\\ UMa II & uft & $149 \pm 21$ & $6.7 \pm 1.4$ & $-4.2 \pm 0.6$ & 953 & $0.63 \pm 0.05$ & 1\\ \multicolumn{8}{l}{ }\\ \hline \hline \multicolumn{8}{l}{ }\\ Name & $\alpha_J$ & $\log{J(0^\circ .1)}$ & $\log{J(\alpha_J)}$ & $\alpha_D$ & $\log{D(0^\circ .1)}$ & $\log{D(\alpha_D)}$ & Ref.\\ & (deg) & (GeV$^2$ cm$^{-5}$) & (GeV$^2$ cm$^{-5}$) & (deg) & (GeV cm$^{-2}$) & (GeV cm$^{-2}$) & \\ \multicolumn{8}{l}{ }\\ \hline \multicolumn{8}{l}{ }\\ UMi & $0.49$ & $18.6^{+0.3}_{-0.2}$ & $19.1^{+0.1}_{-0.1}$ & $0.25$ & $17.4^{+0.1}_{-0.1}$ & $18.1^{+0.1}_{-0.1}$ & 2\\ CBe & $0.20$ & $18.7^{+0.5}_{-0.4}$ & $19.2^{+0.6}_{-0.5}$ & $0.10$ & $17.7^{+0.5}_{-0.4}$ & $17.7^{+0.5}_{-0.4}$ & 2\\ UMa II & $0.53$ & $18.9^{+0.5}_{-0.4}$ & $20.1^{+0.7}_{-0.6}$ & $0.27$ & $17.8^{+0.5}_{-0.3}$ & $18.7^{+0.5}_{-0.4}$ & 2\\ \multicolumn{8}{l}{ }\\ \hline \multicolumn{8}{l}{$^1$\citet{mcc12}.}\\ \multicolumn{8}{l}{$^2$\citet{bon15c}.}\\ \end{tabular} } \end{table*} To reduce the impact of variations between individual simulations, we performed sets of $N = 100$ statistically independent realisations (see section 3.2.1 of {\bfseries Paper II} for a detailed description of the simulation setup). Briefly, the spectrum is calculated in 6 energy bins logarithmically spaced between 0.8 and 200 TeV. 
In each bin, we first create event lists based on our input model, and then fit a power-law model by using an unbinned maximum-likelihood approach. For each realisation, the power-law best-fit spectral parameters are used to calculate 100 values of flux and TS in the given energy bin. When the mean TS value in a given bin is greater than 9, we calculate the flux value and associated uncertainty, respectively, as the mean $\overline{F_{\rm sim}}$ and the standard deviation $\sigma_{\rm sim}$ obtained from the distribution of the 100 simulated fluxes. When the mean TS value is below 9, a UL at 95\% confidence level on the flux is calculated as \citep{Bevington}: \begin{equation}\label{eqn:bevington} F_{\rm UL}=\overline{F_{\rm sim}}+1.96 \times \frac{\sigma_{\rm sim}}{\sqrt{N}} \end{equation} Such a procedure is always applied to the entire sample of simulated flux values in each energy bin, regardless of the statistical significance of the single realizations. It should be noted that, for very weak sources such as NGC 1068, more than 100 realisations may be needed to obtain a reliable average, as indicated by the fact that the simulated points lie slightly below the input model. We find in this way that, with an exposure time of 200 h, the ASTRI Mini-Array is able to measure the source spectrum in the energy bins $\sim$2--5 TeV and $\sim$5--13 TeV, though with a low detection significance of ${\rm TS} = 11$ and 12, respectively (corresponding to a combined detection significance of $\sim$4.8$\sigma$). We therefore conclude that, with $\sim$10\% more exposure time (i.e. a total of $\sim$220 h), we would be able to detect the source with a significance of 5$\sigma$. A confirmed observation of VHE emission from NGC 1068 would represent observational evidence of particle acceleration, and of the interaction of accelerated protons with the interstellar matter, in an AGN-driven outflow; this requires a dedicated long-term observing campaign.
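The per-bin decision rule and Eq. \ref{eqn:bevington} can be sketched as follows; this is a minimal illustration of the bookkeeping, not the actual analysis chain, and the TS values and fluxes used in it are hypothetical.

```python
import math
import statistics

def bin_summary(fluxes, mean_ts, ts_thresh=9.0, z95=1.96):
    """If the mean TS over the N realisations exceeds the threshold,
    report (mean flux, standard deviation); otherwise report the 95% CL
    upper limit F_UL = mean + 1.96 * sigma / sqrt(N)."""
    n = len(fluxes)
    mean = statistics.fmean(fluxes)
    sigma = statistics.stdev(fluxes)
    if mean_ts > ts_thresh:
        return ("detection", mean, sigma)
    return ("upper_limit", mean + z95 * sigma / math.sqrt(n), None)
```

Note that the UL tightens as $\sqrt{N}$ grows, which is why a larger set of realisations helps for very weak sources where the flux distribution is broad.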
This observational effort can contribute significantly to improving our understanding of AGN feedback mechanisms and of the extragalactic $\gamma$-ray and neutrino backgrounds. The $\gamma$-ray emission may represent the onset of the interaction between AGN winds and the gas in their host galaxies, which is often identified as the main mechanism responsible for suppressing star formation in AGN hosts. The resulting hadronuclear $\gamma$-ray and neutrino emission is expected to contribute to the corresponding diffuse fluxes \citep{tamborra14,wang16,wang_gamma,lam17,liu18}. \section{Dark matter in dwarf spheroidal galaxies}\label{sec:dmctools} In order to assess the capabilities of the ASTRI Mini-Array to search for DM in dSphs, in the context of the widely studied WIMP scenario (see Sect. \ref{indirectDM}), we consider three optimal targets observable from the Northern hemisphere: Ursa Minor (UMi), Coma Berenices (CBe), and Ursa Major II (UMa II). The targets were selected among the dSphs with the highest values of the astrophysical factor, as reported in \citet{bon15b}. The relevant kinematic (velocity dispersion $\sigma_v$, mass-to-light ratio $M/L$ and ellipticity $\epsilon$) and brightness properties (half-light radius $r_h$ and $V$-band absolute magnitude $M_V$) of the three selected targets are reported in Tab. \ref{tab:dsph}. In our analysis, we take advantage of the full-likelihood method presented in \citet{ale12} and implemented in the {\ttfamily ctools} analysis chain.
This method is derived from the likelihood maximization procedure commonly adopted in the analysis of $\gamma$-ray emission from astrophysical sources, and relies on the evaluation of a model-dependent Poisson likelihood function: \begin{equation}\label{eqn:fulllike} \mathcal{L}\left[ N_{\rm e}, M(\mathbf{\theta}) | N_{\rm o}, E_{1 \rightarrow N_{\rm o}} \right] = \frac{N_{\rm e}^{N_{\rm o}}}{N_{\rm o}!} e^{-N_{\rm e}} \prod_{i=1}^{N_{\rm o}} \mathcal{P}_i \end{equation} Here, $N_{\rm e}$ and $N_{\rm o}$ are the total numbers of estimated and observed events in the regions of interest (source and background), respectively, and $\mathcal{P}_i = \mathcal{P}[E_i, M(\mathbf{\theta})]$ is the value of the probability density function (PDF) associated with the $i$-th event with measured energy $E_i$ according to the DM emission model $M(\mathbf{\theta})$. Such a method is particularly well suited for DM studies, since it fully profits from the potential presence of DM spectral features in the VHE data, being able to quantitatively compare the expected and measured energy distributions rather than just the numbers of events. We include the DM spectral models for the case of annihilation of particles with masses in the range $0.55 - 100$ TeV. Each model is numerically computed for different DM interaction channels \citep{cem11,cir11,cir12}, and is characterized by the particle mass $m_\chi$ and the velocity-averaged cross section $\langle \sigma_{\rm ann} v \rangle$. We take the single-interaction photon counts for the $b\bar{b}$, $\tau^+\tau^-$, $W^+W^-$ and $\gamma\gamma$ interaction channels from \citet[see Fig. \ref{fig:dmspec}]{cia11}\footnote{Available at {\ttfamily http://www.marcocirelli.net/PPPC4DMID.html}.}, and convert them to fluxes through Eqs.
\ref{eqn:dmfluxann} and \ref{eqn:dmfluxdec}, taking into account the mass-dependent ``thermal relic'' $\langle \sigma_{\rm ann} v\rangle$ by \citet[$\langle \sigma_{\rm ann} v\rangle \simeq 2.2 \times 10^{-26}$ cm$^3$ s$^{-1}$ for $m_\chi \gtrsim 0.1$ TeV]{ste12}. We then simulate event lists for 100-h observations of UMi and CBe, and for both 100 h and 300 h in the case of UMa II; for this task, we assume the dSph halos to be point-like with respect to the ASTRI Mini-Array PSF of $\sim$0.1 deg. Although this assumption is not strictly valid for the dSphs under consideration, given their typical projected angular halo extension $\alpha_{\rm opt} \gtrsim 0.1$ deg (see Tab. \ref{tab:dsph}), it provides consistent results as long as the majority of the expected DM signal is enclosed in an integration region of $0.1 \times 0.1$ deg$^2$. We defer the analysis of dSphs as extended targets to future publications. The simulated observations of the considered dSphs provide no evidence of detection of signals from DM annihilation or decay coming from the regions of interest; therefore, we use the signal ULs to derive constraints on the interaction parameters of the DM particles -- the cross section $\langle \sigma_{\rm ann} v \rangle$ and the particle lifetime $\tau_{\rm dec}$ -- as a function of the DM particle mass $m_\chi$. We thus produce the expected ASTRI Mini-Array sensitivities to the DM parameters for each interaction channel, using the maximum-likelihood evaluation model to solve the equation: \begin{equation}\label{eqn:loglikeulim} -2\ln{\mathcal{L}} = 2.71 \end{equation} looking for the largest solution at each DM particle mass.
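The two ingredients above -- the Poisson full likelihood of Eq. \ref{eqn:fulllike} and the search for the largest solution of Eq. \ref{eqn:loglikeulim} -- can be sketched as follows. This is an illustration only: the flat per-event PDFs, the toy profile and the reading of 2.71 as a one-sided 95\% CL excursion above the best fit are assumptions of the sketch, not the {\ttfamily ctools} implementation.

```python
import math

def log_full_likelihood(n_est, pdf_values):
    """Log of the Poisson full likelihood: a Poisson term in the total
    estimated counts N_e times the product of per-event PDF values P_i
    evaluated at the measured energies under the DM model."""
    n_obs = len(pdf_values)
    log_poisson = n_obs * math.log(n_est) - n_est - math.lgamma(n_obs + 1)
    return log_poisson + sum(math.log(p) for p in pdf_values)

def upper_crossing(m2lnl, x_best, x_hi, target=2.71, tol=1e-9):
    """Bisect for the largest x in [x_best, x_hi] where the -2 ln L
    profile (relative to its minimum, assumed monotonically increasing
    above the best fit) crosses `target`."""
    lo, hi = x_best, x_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if m2lnl(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy case: 20 observed events with flat per-event PDFs. The profile
# -2 Delta ln L is minimised at n_est = 20, and its 2.71 crossing gives
# the one-sided bound on the estimated counts.
pdfs = [1.0] * 20
best = log_full_likelihood(20.0, pdfs)
bound = upper_crossing(
    lambda n: -2.0 * (log_full_likelihood(n, pdfs) - best), 20.0, 100.0)
```

In the real analysis the profiled parameter is the DM signal normalization rather than the raw counts, but the structure of the limit-setting step is the same.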
This procedure yields flux ULs on the DM signal integrated over the ASTRI Mini-Array energy range (0.65 TeV $\div$ 200 TeV) that are then converted to a minimum cross section or maximum lifetime of the DM particle at a given mass: \begin{equation}\label{eqn:xsec} \langle\sigma_{\rm ann} v\rangle_{\rm lim} = \langle\sigma_{\rm ann} v\rangle_{\rm thr} \cdot \frac{{\rm UL}\left( m_\chi \right)}{\int_{E_{\rm min}}^{E_{\rm max}} \frac{d\Phi_{\rm ann}\left( m_\chi \right)}{dE_\gamma} dE_\gamma} \end{equation} \begin{equation}\label{eqn:tdec} \tau_{\rm lim} = \frac{D\left( \Delta\Omega \right)}{4 \pi m_\chi {\rm UL}\left( m_\chi \right)} \cdot \int_{E_{\rm min}}^{E_{\rm max}} \frac{dN_\gamma\left( m_\chi \right)}{dE_\gamma} dE_\gamma \end{equation} with $\langle \sigma_{\rm ann} v \rangle_{\rm thr} \simeq 2.2 \times 10^{-26}$ cm$^3$ s$^{-1}$ for continuous spectra \citep{ste12} and $\sim$1.2$\times 10^{-30}$ cm$^3$ s$^{-1}$ for monochromatic emission lines \citep[see e.g. section 4.1.5 of][]{cta19}. For the dSph stacking analysis, we adopt as average astrophysical factors the arithmetic mean of the logarithmic values of each target measured at $0^\circ.1$, weighted by their average logarithmic errors. The associated uncertainties are then derived as the direct sum of the logarithmic intrinsic dispersion of such factors with the average logarithmic errors, yielding $\langle\log{J(0^\circ.1)}\rangle = 18.7^{+0.5}_{-0.4}$ and $\langle\log{D(0^\circ.1)}\rangle = 17.4^{+0.4}_{-0.3}$ respectively. This allows us to estimate uncertainties at the 68\% confidence level (CL) on the DM parameters, taking into account both the IRF photon statistics ($\sim$0.2 dex uncertainty) and the error on the modeling of the DM distribution. We present in Fig. 
\ref{fig:dmcomp} the final ASTRI Mini-Array sensitivity curves at 300 h to DM annihilation cross sections and decay lifetimes for the $\tau^+\tau^-$ and $\gamma\gamma$ channels, both in the case of single-target (UMa~II) and stacked observations of 3 dSph halos (UMi, CBe and UMa~II). We then outline how the foreseen prospects on DM searches with the ASTRI Mini-Array compare to current results; Fig. \ref{fig:dmcomp} shows this comparison graphically. Concerning the scenario of DM particles annihilating into SM pairs, the most stringent limits on the cross section at TeV energies have so far been established by 145-h VERITAS observations of the dSph UMa II \citep[$\langle \sigma v \rangle \lesssim 5 \times 10^{-24}$ cm$^3$ s$^{-1}$ in the $\tau^+\tau^-$ channel for $m_\chi \lesssim 10$ TeV;][]{zit17} for single-target observations, and by 354-h MAGIC combined exposures \citep[$\langle \sigma v \rangle \lesssim 2 \times10^{-25}$ cm$^3$ s$^{-1}$ in the $\gamma\gamma$ channel for $m_\chi \lesssim 10$ TeV;][]{mag22} for the case of stacking analysis. This latter result in particular may be significantly improved by the prospects of 300-h ASTRI Mini-Array observations presented here ($\langle \sigma v \rangle \lesssim 5 \times 10^{-25}$ cm$^3$ s$^{-1}$), especially at the highest masses ($\langle \sigma v \rangle \lesssim 10^{-24}$ cm$^3$ s$^{-1}$ for $m_\chi \lesssim 100$ TeV). 
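The conversions of Eqs.~(\ref{eqn:xsec}) and (\ref{eqn:tdec}) are simple rescalings of the band-integrated flux UL; a sketch (unit bookkeeping, e.g. GeV vs.\ TeV and the solid-angle convention of the $D$-factor, is left to the caller):

```python
import math

SIGMAV_THR = 2.2e-26  # cm^3 s^-1, "thermal relic" reference cross section

def sigmav_limit(flux_ul, model_flux_thr, sigmav_thr=SIGMAV_THR):
    """Eq. (xsec): rescale the reference cross section by the ratio of the
    measured flux UL to the band-integrated model flux predicted at that
    cross section (both fluxes in the same units, e.g. cm^-2 s^-1)."""
    return sigmav_thr * flux_ul / model_flux_thr

def tau_limit(d_factor, m_chi, flux_ul, n_gamma):
    """Eq. (tdec): lifetime lower limit
    tau = D(dOmega) / (4 pi m_chi UL) * integral of dN/dE,
    with n_gamma the band-integrated photon yield per decay."""
    return d_factor / (4.0 * math.pi * m_chi * flux_ul) * n_gamma
```

Doubling the flux UL at fixed model flux doubles the excluded cross section, while the lifetime limit scales inversely with the flux UL, as the equations require.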
Regarding the scenario of DM particle decay, for the case of continuous spectra the most stringent limits on the DM particle lifetime set with both single-target observations of the CBe dSph and dSph stacking analysis come from a 507-day integration with the HAWC $\gamma$-Ray Observatory \citep[$\tau \gtrsim 10^{27}$ s for $m_\chi \sim 100$ TeV in the $\tau^+\tau^-$ channel;][]{alb18}; such a value is $\gtrsim$10 times larger than the corresponding limit derived from ASTRI Mini-Array observations, albeit obtained over a much longer time span of $\gtrsim$12,000 h. No comparable lifetime limits on DM decay into monochromatic lines have been obtained through single-target observations or stacking analyses of dSph halos; such limits exist in the literature only for extended and contaminated sources like the Perseus galaxy cluster \citep{acc18}. Coupled with observations of the Galactic center and halo (see {\bfseries Paper III} for a discussion), the search for $\gamma$-ray signals at multi-TeV energies from DM-dominated extragalactic sources is a science topic to which the ASTRI Mini-Array can make valuable contributions before the CTA era, provided that long-duration exposures are carried out. Such searches will also benefit from the large ASTRI Mini-Array FoV of $\sim$10$^\circ$, which will allow the simultaneous observation of multiple targets falling in the same sky region (see Sect. \ref{sec:simobs}). In addition, the ASTRI Mini-Array angular resolution of $\lesssim$0.1 deg above 1 TeV and energy resolution of $\sim$10\% are particularly well suited to an efficient search for monochromatic $\gamma$-ray emission lines, whose expected fluxes at $m_\chi \gtrsim 10$ TeV can be enhanced by fundamental-physics mechanisms \citep[see e.g. section 4.1.5 of][and refs. therein]{cta19}. 
\section{Summary and conclusions}\label{sec:conc} In this paper, we have explored in detail the scientific prospects of extragalactic astrophysics at multi-TeV energies that are within the reach of the next-generation IACT ASTRI Mini-Array, to be deployed at the {\itshape Observatorio del Teide}. In particular, we have focused on the observing feasibility of scientifically interesting targets that will be taken into account in the observatory phase of the array, subsequent to the experiment phase in which the observation of core-science targets (see {\bfseries Paper II}) will be prioritized. The $\gamma$-ray emission properties of such observatory targets can be characterized with exposure times from $\sim$1 h to $\sim$200 h, covering a variety of sky objects (blazars, Seyfert galaxies, DM-dominated halos) and science topics (from the study of the VHE spectral emission of AGN to indirect searches for self-interacting DM). Although some of the sources potentially detectable with the ASTRI Mini-Array have already been observed for several years with previous and existing Cherenkov facilities (e.g., HEGRA, MAGIC, H.E.S.S., VERITAS and HAWC), the enhanced capabilities of this new instrument will be exploitable during the observatory phase for: \begin{itemize} \item better characterizing the spectral shape and features (such as bumps) of the multi-TeV $\gamma$-ray emission from extreme blazars (HSPs and EHSPs) with respect to current measurements, and extending the search for VHE signals up to $\sim$10$\div$20 TeV for the closest targets; \item studying the $\gamma$-ray emission from AGN- or starburst-powered outflows in Seyfert galaxies; \item obtaining new independent observations aimed at improving the constraints on the parameters of particle DM annihilating or decaying into SM products, especially in the case of two VHE photons (monochromatic emission lines), through long-term observations of dwarf spheroidal galaxies. 
\end{itemize} Such observations will greatly benefit from the large ASTRI FoV ($\sim$10$^\circ$ diameter) and almost uniform instrument response up to $\sim$5$^\circ$ off-axis (see {\bfseries Paper II}): in fact, these characteristics will allow us to obtain a relevant fraction of ``free'' observing time for those extragalactic sources located close to core-science targets already during the ASTRI Mini-Array experiment phase. In addition, the same exposures may provide useful insight into the properties of weaker ancillary sources falling in the same FoV, for which interesting flux ULs can be derived. Finally, an observing strategy optimized to take full advantage of the ASTRI Mini-Array capabilities may be foreseen to point at several extragalactic targets at once in the observatory phase, in order to increase the number of observed sources without requesting large amounts of dedicated exposure time. \section*{Acknowledgements} \noindent {\small The ASTRI project is becoming a reality thanks to Giovanni ``Nanni'' Bignami and Nicol{\`o} ``Nichi'' D'Amico, two outstanding scientists who, in their capacity as INAF Presidents, provided continuous support and invaluable guidance. While Nanni was instrumental in starting the ASTRI telescope, Nichi transformed it into the Mini-Array in Tenerife. Now the project is being built owing to the unfaltering support of Marco Tavani, the current INAF President. Paolo Vettolani and Filippo Zerbi, the past and current INAF Science Directors, as well as Massimo Cappi, the Coordinator of the High Energy branch of INAF, have also been very supportive of our work. We are very grateful to all of them. Nanni and Nichi, unfortunately, passed away but their vision is still guiding us. This work was conducted in the context of the ASTRI Project, and is supported by the Italian Ministry of Education, University, and Research (MIUR) with funds specifically assigned to the Italian National Institute of Astrophysics (INAF). 
We acknowledge support from the Brazilian Funding Agency FAPESP (Grant 2013/10559-5) and from the South African Department of Science and Technology through Funding Agreement 0227/2014 for the South African Gamma-Ray Astronomy Programme. This work has been supported by H2020-ASTERICS, a project funded by the European Commission Framework Programme Horizon 2020 Research and Innovation action under grant agreement no. 653477. IAC is supported by the Spanish Ministry of Science and Innovation (MCIU). JBG acknowledges the support of the ``Viera y Clavijo'' program, funded by ACIISI and ULL. RAB acknowledges funding by the ``la Caixa'' Foundation (ID 100010434) and the European Union's Horizon~2020 Research and Innovation Programme under the Marie Sk{\l}odowska-Curie grant agreement No. 847648, fellowship code LCF/BQ/PI21/11830030. This research made use of the {\ttfamily ctools} \citep{kno16} and {\ttfamily GammaPy} software \citep{gammapy:2017,gammapy:2019}, community-developed analysis packages for IACT data. The {\ttfamily ctools} software is based on {\ttfamily GammaLib}, a community-developed toolbox for the scientific analysis of astronomical $\gamma$-ray data \citep{kno11,kno16b}. \textcopyright\ 2022. This manuscript version is made available under the CC-BY-NC-ND 4.0 license ({\ttfamily https://creativecommons.org/licenses/by-nc-nd/4.0/}).} \bibliographystyle{cas-model2-names} \bibliography{ASTRI_extragal_biblio}
Title: Near-infrared Accretion Signatures from the Circumbinary Planetary Mass Companion Delorme 1 (AB)b
Abstract: Accretion signatures from bound brown dwarf and protoplanetary companions provide evidence for ongoing planet formation, and accreting substellar objects have enabled new avenues to study the astrophysical mechanisms controlling formation and accretion processes. Delorme 1 (AB)b, a ~30-45 Myr circumbinary planetary mass companion, was recently discovered to exhibit strong H$\alpha$ emission. This suggests ongoing accretion from a circumplanetary disk, somewhat surprising given canonical gas disk dispersal timescales of 5-10 Myr. Here, we present the first NIR detection of accretion from the companion in Pa$\beta$, Pa$\gamma$, and Br$\gamma$ emission lines from SOAR/TripleSpec 4.1, confirming and further informing its accreting nature. The companion shows strong line emission, with $L_{line} \approx 1-6 \times 10^{-8}~L_\odot$ across lines and epochs, while the binary host system shows no NIR hydrogen line emission ($L_{line} <0.32-11\times10^{-7}\ L_\odot$). Observed NIR hydrogen line ratios are more consistent with a planetary accretion shock than with local line excitation models commonly used to interpret stellar magnetospheric accretion. Using planetary accretion shock models, we derive mass accretion rate estimates of $\dot{M}_{\mathrm{pla}}\sim3$-$4\times 10^{-8}\ M_\mathrm{J}$ yr$^{-1}$, somewhat higher than expected under the standard star formation paradigm. Delorme 1 (AB)b's high accretion rate is perhaps more consistent with formation via disk fragmentation. Delorme 1 (AB)b is the first protoplanet candidate with clear (S/N$\sim$5) NIR hydrogen line emission.
https://export.arxiv.org/pdf/2208.05016
\title{Near-infrared Accretion Signatures from the Circumbinary Planetary Mass Companion Delorme~1~(AB)b \footnote{Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{o}es (MCTI/LNA) do Brasil, the US National Science Foundation’s NOIRLab, the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).} } \author[0000-0002-8667-6428]{S. K. Betti} \altaffiliation{Visiting astronomer, Cerro Tololo Inter-American Observatory at NSF’s NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. } \affiliation{Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA} \author[0000-0002-7821-0695]{K. B. Follette} \affiliation{Department of Physics and Astronomy, Amherst College, Amherst, MA 01003, USA} \author[0000-0002-4479-8291]{K. Ward-Duong} \affiliation{Department of Astronomy, Smith College, Northampton, MA, 01063, USA} \author[0000-0003-0568-9225]{Y. Aoyama} \affiliation{Institute for Advanced Study, Tsinghua University, Beijing 100084, PR China} \affiliation{Department of Astronomy, Tsinghua University, Beijing 100084, PR China} \author[0000-0002-2919-7500]{G.-D. Marleau} \affiliation{Institut f\"ur Astronomie und Astrophysik, Universit\"at T\"ubingen, Auf der Morgenstelle 10, 72076 T\"ubingen, Germany} \affiliation{Physikalisches Institut, Universit\"at Bern, Gesellschaftsstr.~6, 3012 Bern, Switzerland } \affiliation{Max-Planck-Institut f\"ur Astronomie, K\"onigstuhl 17, 69117 Heidelberg, Germany} \author{J. Bary} \affiliation{Department of Physics and Astronomy, Colgate University, Hamilton, NY, 13346, USA} \author[0000-0003-1639-510X]{C. Robinson} \affiliation{Department of Physics and Astronomy, Amherst College, Amherst, MA 01003, USA} \author[0000-0001-8345-593X]{M. 
Janson} \affiliation{Institutionen f\"or astronomi, AlbaNova universitetscentrum, Stockholms universitet, 10691 Stockholm, Sweden} \author[0000-0001-6396-8439]{W. Balmer} \affiliation{The William H. Miller III Department of Physics \& Astronomy, Johns Hopkins University, Baltimore, MD 21218, USA} \affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore MD 21218, USA} \author[0000-0003-4022-8598]{G. Chauvin} \affiliation{Laboratoire Lagrange, Université Cote d’Azur, CNRS, Observatoire de la Cote d’Azur, 06304 Nice, France} \author[0000-0002-6217-6867]{P. Palma-Bifani} \affiliation{Laboratoire Lagrange, Université Cote d’Azur, CNRS, Observatoire de la Cote d’Azur, 06304 Nice, France} \correspondingauthor{S. K. Betti} \email{sbetti@umass.edu} \section{Introduction} \label{sec:intro} The theory of magnetospheric accretion, whereby infalling inner disk material flows along stellar magnetic field lines and forms a shock in a young star’s atmosphere, is well-established and consistent with a range of observations \citep[e.g.,][]{Koenigl1991}. X-ray emission originating from the shock front is absorbed and re-radiated as excess optical/ultraviolet Balmer continuum \citep[e.g.][]{Hartmann2016, Valenti1993, Gullbring1998, Calvet1998}, while infalling gas exhibits line emission, including the Balmer, Paschen, and Brackett series hydrogen lines. The same accretion process has been assumed to extend to substellar masses \citep[e.g.][]{Muzerolle2005, Alcala2017}, and accretion signatures from planetary mass companions (PMCs) have been interpreted under the stellar paradigm. Recent discoveries of H$\alpha$ accretion signatures in substellar companions---both brown dwarfs (BD) \citep[e.g.,~SR12c;][]{Santamaria-Miranda2018,Santamaria-Miranda2019} and protoplanet candidates \citep[e.g., PDS 70 b and c and LkCa 15 b,][]{Haffert2019, Wagner2018, Sallum2015}---have provided incontrovertible evidence of accretion onto secondary objects in young systems. 
Combined with the first detections of circumplanetary disks \citep{Benisty2021}, these systems allow for direct study of planet formation processes. Recently, \citet{Eriksson2020} discovered strong hydrogen (H$\alpha$, H$\beta$) and helium emission lines suggestive of ongoing accretion from the PMC 2MASS J01033563-5515561(AB)b, also known as Delorme 1 (AB)b. Among the first imaged circumbinary PMCs, Delorme 1 (AB)b was discovered in $L'$ band by \citet{Delorme2013} at 1\farcs77 (84 au) separation, with an estimated mass of 12--14 $M_\mathrm{Jup}$, placing it at the deuterium burning limit. Its host, Delorme 1 AB, is an M5.5 binary \citep[separation of $\sim$0\farcs25 or 12 AU;][]{Delorme2013} at 47.2 $\pm$ 3 pc \citep{Riedel2014} in the Tucana-Horologium association \citep{Gange2015}, placing its age at $\sim$30--45~Myr. While the system shows evidence of youth, including an overluminous central binary \citep{Riedel2014}, red $JHK_s$ colors \citep[similar to other young bound nonaccreting companions;][]{Riedel2014}, and low surface gravity \citep{Liu2016}, ongoing PMC accretion at 30--45~Myr is possible, as lower mass objects have been found to have long disk dispersal timescales \citep{Luhman2022}\footnote{\citet{Luhman2022} found that in the 15-21 Myr Lower Centaurus Crux and Upper Centaurus Lupus association, disk fractions increase with decreasing mass, from 0.7\% to 9\%, indicating lower-mass stars can retain disks far longer than originally estimated ($\sim$10 Myr).}. In this letter, we present the first detection of near infrared (NIR) emission lines from Delorme 1 (AB)b, corroborating the claim of ongoing companion accretion, and confirming the lack of accretion in the host binary system. This is the first accreting PMC with \PaB, \PaG, and Br$\gamma$ detections, and provides a critical benchmark for future NIR accretion studies of PMCs. 
NIR line ratios provide an important probe of the physical properties of the emitting region that can inform accretion paradigms. \section{Observations and Reductions} Delorme 1 (AB)b was observed with the TripleSpec 4.1 Near-IR spectrograph \citep{Schlawin2014} on the Southern Astrophysical Research (SOAR) Telescope during two observing runs in 2021-2022 (ID: 2021B-0311). TripleSpec 4.1 covers 0.8-2.47 $\mu$m at moderate resolution (R$\sim$3500) with a fixed 1\farcs1 $\times$ 28\arcsec slit. Both observations were taken in good weather conditions, with seeing around 0\farcs95-1\farcs0, with the slit aligned to the parallactic angle. Delorme 1 (AB)b was observed on 2021 November 20 (epoch 1) at an airmass of 1.2. Sixteen 180~s exposures were taken in an ABBA cycle, for a total exposure time of 2880 s, yielding a final reduced spectrum with a mean SNR of $\sim90$ at $H$-band. On 2022 January 24 (epoch 2), we observed Delorme 1 (AB)b at an airmass of 1.27 with an identical observational strategy and total integration time, with the reduced spectrum achieving a mean SNR of $\sim60$ at $H$-band. We observed the binary Delorme 1 AB on 2022 January 23 (airmass 1.34), and on 2022 January 24 (airmass 1.65). We took eight 30-s exposures in an ABBA cycle, for a total of 240~s each night, yielding average final spectrum SNRs of 270 and 300 at $H$-band. As the seeing on January 23 was $\sim$1\farcs3, we were not able to sufficiently resolve the companion and did not attempt to observe it. Spectra were reduced using a TripleSpec 4.1 version of SpeXtool \citep{Cushing2004} following the standard procedure: subtraction of A and B frames for sky removal, order identification, spectral extraction, and wavelength calibration from arc lamps. The orders were merged and areas of significant atmospheric absorption removed. 
A spectrophotometric standard (HIP 6364, A0V) was observed before and after Delorme 1 for both telluric correction and flux calibration, following \citet{Vacca2003} using the SpeXtool \texttt{xtellcor} software. Due to its close distance \citep[$47.2 \pm 3.1$~pc;][]{Riedel2014}, Delorme 1 resides in the Local Bubble \citep[area of low interstellar extinction;][]{Sfeir1999}; therefore, we assume zero reddening. \section{Results} We detect strong \PaB, \PaG, and \BrG emission lines (Figure~\ref{fig1}) in Delorme 1 (AB)b in both epochs. Hydrogen emission lines are not detected in the host binary (see Table~\ref{tab:results} for line flux upper limits), providing strong confirmation they are unique to the companion. We compute equivalent widths (EW), fluxes (\Fline) and luminosities (\Lline) for each line and epoch (Table~\ref{tab:results}). Line fluxes are computed by integrating under a best-fit Gaussian profile after subtracting a linear fit to the local continuum. The uncertainty on the line is a function of the scatter in the continuum and the best-fit Gaussian given by \begin{equation} \sigma = \sqrt{N_\mathrm{pix}} \times F_\mathrm{noise} \times \Delta\lambda, \end{equation} where $N_\mathrm{pix}$ is the number of pixels within 3$\times$FWHM, $F_\mathrm{noise}$ is the rms of the local continuum, and $\Delta\lambda$ is the wavelength resolution per pixel at each line. EWs are obtained from the ratio of line fluxes to the average local continuum level within a 50 \AA{} window on either side of the line. We estimate EW uncertainties following \citet{Vollmann2006}. We do not detect \BrG in epoch 2, potentially due to poorer seeing conditions. For non-detected lines, we calculate \Fline upper limits as $F_\mathrm{line}^\mathrm{upp} = 3\sigma$. 
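The line-flux error recipe above can be written compactly; a sketch assuming NumPy, with the $3\sigma$ upper-limit rule and the negative-EW convention for emission lines following the text and Table~\ref{tab:results}:

```python
import numpy as np

def line_flux_uncertainty(f_noise, fwhm, dlam):
    """sigma = sqrt(N_pix) * F_noise * dlam (Eq. 1), with N_pix the
    number of pixels within 3*FWHM of the line center."""
    n_pix = 3.0 * fwhm / dlam
    return np.sqrt(n_pix) * f_noise * dlam

def equivalent_width(line_flux, cont_level):
    """EW = -F_line / F_cont: negative for emission, matching Table 1."""
    return -line_flux / cont_level

def flux_upper_limit(f_noise, fwhm, dlam):
    """3-sigma upper limit adopted for non-detected lines."""
    return 3.0 * line_flux_uncertainty(f_noise, fwhm, dlam)
```

The Gaussian profile fit itself (from which the FWHM and integrated flux come) is omitted here; any standard least-squares fitter applied to the continuum-subtracted line would supply those inputs.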
\begin{deluxetable*}{@{\extracolsep{4pt}}cccccc@{}ccc} \tablenum{1} \tablecaption{Delorme 1 (AB)b Line Characteristics\label{tab:results}} \tablewidth{0pt} \tablehead{ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \multicolumn{2}{c}{stellar scaling\tablenotemark{a}} & \multicolumn{2}{c}{planetary scaling\tablenotemark{b}} & \multicolumn{1}{||c}{} \\ \cline{5-6} \cline{7-8} \colhead{Line} & \colhead{EW} & \colhead{\Fline} & \colhead{\Lline} & \colhead{$\log(\Lacc)$} & \colhead{$\log(\Mdot)$} & \colhead{$\log(\Lacc)$} & \colhead{$\log(\Mdot)$} & \multicolumn{1}{||c}{\multirow{-1.5}{*}{\parbox{2.5cm}{\centering Delorme 1 AB \Fline}}} \\ \colhead{} & \colhead{(\AA)} & \colhead{(10$^{-16}$ erg/s/cm$^2$)} & \colhead{($10^{-8} \Lsun$)} & \colhead{($\Lsun$)} & \colhead{($ M_\mathrm{J}$ yr$^{-1}$)}& \colhead{($\Lsun$)} & \multicolumn{1}{c}{($ M_\mathrm{J}$ yr$^{-1}$)} & \multicolumn{1}{||c}{(10$^{-15}$ erg/s/cm$^2$)} } \startdata \multicolumn{8}{c}{UT 2021-11-20} & \multicolumn{1}{||c}{}\\ \hline \PaG & -1.95$\pm$0.74 & 6.82$\pm$1.33 & 4.75$\pm$1.11 & -5.50$\pm$0.53 & -8.82$\pm$0.53 & -3.94$\pm$0.30 & -7.27$\pm$0.30 & \multicolumn{1}{||c}{$<$3.67}\\ \PaB & -2.31$\pm$0.88 & 8.05$\pm$1.49 & 5.60$\pm$1.27 & -4.92$\pm$0.62 & -8.25$\pm$0.62 & -4.02$\pm$0.30 & -7.35$\pm$0.30 & \multicolumn{1}{||c}{$<$3.71} \\ \BrG & -2.08$\pm$1.11 & 1.64$\pm$0.56 & 1.14$\pm$0.42 & -5.43$\pm$0.94 & -8.75$\pm$0.96 & -3.91$\pm$0.30 & -7.24$\pm$0.30 & \multicolumn{1}{||c}{$<$1.01}\\ \hline \multicolumn{8}{c}{UT 2022-01-24} & \multicolumn{1}{||c}{}\\ \hline \PaG & -1.24$\pm$0.47 & 2.94$\pm$0.77 & 2.05$\pm$0.61 & -5.95$\pm$0.55 & -9.28$\pm$0.56 & -4.25$\pm$0.30 & -7.58$\pm$0.30 & \multicolumn{1}{||c}{$<$7.91} \\ \PaB & -1.44$\pm$0.62 & 3.49$\pm$0.85 & 2.43$\pm$0.67 & -5.31$\pm$0.64 & -8.64$\pm$0.65 & -4.33$\pm$0.30 & -7.67$\pm$0.30 & \multicolumn{1}{||c}{$<$6.56} \\ \BrG & -- & $<$0.74 & $<$0.52 & $<$-5.81 & $<$-9.17 & $<$-4.20 & $<$-7.53 & \multicolumn{1}{||c}{$<$1.86} \\ \enddata 
\tablenotetext{a}{\Lacc--\Lline scaling relation from \citet{Alcala2017}} \tablenotetext{b}{\Lacc--\Lline scaling relation from \citet{Aoyama2021}} \end{deluxetable*} During magnetospheric accretion, the infalling column of gas is heated to $\sim10^4$ K, producing broad emission lines \citep{Hartmann2016} such as \PaB, \PaG, and \BrG. The gas travels at free-fall velocity, and heats to $10^6$ K when it shocks at the stellar photosphere, becoming fully ionized and thus unable to produce hydrogen line emission. In contrast, recent simulations of accreting PMCs \citep{Aoyama2018, Aoyama2020} suggest differences in the physical conditions of the shocked region. Due to their smaller masses and lower surface gravities, the accreting gas travels at lower free-fall velocities, leading to a post-shock region that is not fully ionized. This results in shock-heated accreting gas capable of hydrogen line emission \citep{Aoyama2018}. In either case, the detection of Paschen- and Brackett-series emission from accreting objects is an unambiguous sign of accretion; however, the dominant source of line emission may be either the infalling accretion column or the post-shock region. Given this ambiguity, we estimate accretion rates for Delorme 1 (AB)b following both families of accretion models, and discuss the differences below. The mass accretion rate is given by: \begin{equation} \dot{M} = \left(1-\frac{R_\star}{R_\mathrm{in}}\right)^{-1} \frac{\Lacc R_\star}{GM_\star}, \end{equation} where $R_\mathrm{in}$ is the inner disk radius, assumed to be 5 $R_\star$ \citep[e.g.,][]{Gullbring1998, Herczeg2008, Rigliaco2012, Alcala2017}, $R_\star$ is the radius of the accreting object, $M_\star$ is its mass, and \Lacc is the estimated accretion luminosity. 
Total accretion luminosity has been found to strongly correlate with emission line luminosities in T Tauri stars \citep{Rigliaco2012, Alcala2014, Alcala2017} as \begin{equation} \log (L_\mathrm{acc}/L_\odot) = a\times\log (L_\mathrm{line}/L_\odot) + b, \end{equation} where $a$ and $b$ are the fit coefficients for each line. These relationships can be used to estimate $\dot{M}$ from a single accretion-tracing line. However, \citet{Aoyama2020, Aoyama2021} argue that the \Lline--\Lacc relationships are not valid for planetary mass objects because of the different physical conditions of the emitting region. \citet{Aoyama2021} derived new theoretical \Lacc--\Lline relationships expected for PMCs based on the \citet{Aoyama2018} shock models. For ease of distinguishing between the two, we refer to all accretion luminosities and mass accretion rates derived from the \citet{Alcala2017} \Lacc--\Lline scaling relations as ``stellar" (e.g., ~$L_\mathrm{acc, ste}$/$\dot{M}_\mathrm{ste}$) and those derived from \citet{Aoyama2021} as ``planetary" (e.g., ~$L_\mathrm{acc, pla}$/$\Mdotpla$). Following \citet{Eriksson2020}, we assume a companion mass of $M_p = 0.012\ M_\odot$ and radius $R_p = 0.163\ R_\odot$. We calculate $\Lacc$ following the \Lacc--\Lline scaling relations calibrated empirically for stars (\PaB: ($a,b$)=(1.06, 2.76), \PaG: ($a,b$)=(1.24, 3.58), \BrG: ($a,b$)=(1.19, 4.02)) by \citet{Alcala2017} and theoretically for PMCs (\PaB: ($a,b$)=(0.86, 2.21), \PaG: ($a,b$)=(0.85, 2.28), \BrG: ($a,b$)=(0.85, 2.84)) by \citet{Aoyama2021}. This allows us to directly compare our NIR-derived results to the accretion rates estimated by \citet{Eriksson2020}. Our \Mdot estimates are given in Table~\ref{tab:results} for both the ``stellar" and ``planetary" relations. \Mdot estimates are relatively consistent among lines and epochs under each scaling relation; however, the \citet{Aoyama2021} models predict \Mdot's that are systematically higher, by $\sim$1.5 orders of magnitude. 
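As an arithmetic cross-check, the two equations above can be evaluated directly. A sketch in Python (standard cgs constants; coefficients and companion parameters as quoted in the text; the resulting values agree with Table~\ref{tab:results} to within $\sim$0.1 dex, the residual reflecting rounding and constant choices):

```python
import math

# standard cgs constants
G, MSUN, RSUN, LSUN = 6.674e-8, 1.989e33, 6.957e10, 3.828e33
MJUP, YR = 1.898e30, 3.156e7

def log_lacc(log_lline, a, b):
    """log(L_acc/Lsun) = a * log(L_line/Lsun) + b (Eq. 3)."""
    return a * log_lline + b

def log_mdot(log_lacc_lsun, m_p=0.012 * MSUN, r_p=0.163 * RSUN,
             r_in_over_r=5.0):
    """Eq. 2, returned as log10(Mdot / (M_J / yr))."""
    l_acc = 10.0 ** log_lacc_lsun * LSUN                              # erg/s
    mdot = (1.0 - 1.0 / r_in_over_r) ** -1 * l_acc * r_p / (G * m_p)  # g/s
    return math.log10(mdot * YR / MJUP)

# Epoch-1 PaB line luminosity, L_line = 5.60e-8 Lsun (Table 1)
ll = math.log10(5.60e-8)
lacc_pla = log_lacc(ll, 0.86, 2.21)   # planetary coefficients
lacc_ste = log_lacc(ll, 1.06, 2.76)   # stellar coefficients
```

Evaluating these gives $\log L_\mathrm{acc,pla} \approx -4.0$ and $\log L_\mathrm{acc,ste} \approx -4.9$ for the epoch-1 \PaB\ line, matching the tabulated values.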
On average, using the stellar scaling relations of \citet{Alcala2017} we find a $\log(\dot{M}_\mathrm{ste})$ of $-8.53 \pm 0.28 \ M_\mathrm{J}\ \mathrm{yr}^{-1}$ for epoch 1 and $-8.85\pm0.28\ M_\mathrm{J}\ \mathrm{yr}^{-1}$ for epoch 2. Using the \citet{Aoyama2021} planetary shock-model relations, we find $\log(\Mdotpla) = -7.38\pm0.23\ M_\mathrm{J}\ \mathrm{yr}^{-1}$ and $-7.19\pm0.31\ M_\mathrm{J}\ \mathrm{yr}^{-1}$ for epochs 1 and 2, respectively. \section{Discussion} We have presented mass accretion rate estimates for Delorme 1 (AB)b derived from NIR hydrogen emission lines under two assumed scalings of \Lline to $\Lacc/\dot{M}$. Accretion rate estimates for individual NIR lines agree with one another within the ``planetary'' and ``stellar'' accretion paradigms, with the exception of the ``stellar'' Pa$\beta$ accretion rate, which is marginally inconsistent with the other ``stellar" accretion estimates. In Figure~\ref{fig2}, we compare our NIR observations (diamonds/stars) with the marginally-resolved H$\alpha$ observations of \citet{Eriksson2020} (gray/black circles) and convert each to \Mdot using both ``stellar" (unfilled symbols, \citealt{Alcala2017}; dark gray, \citealt{Natta2004}) and ``planetary" (filled symbols, \citealt{Aoyama2019}; light gray, \citealt{Thanathibodee2019}) scaling relations. H$\alpha$ can originate from chromospheric activity, complicating its interpretation. \citet{Eriksson2020} found that the contribution to the H$\alpha$ line profile due to chromospheric activity should be minimal at this age, pointing toward Delorme 1 (AB)b experiencing ongoing accretion. We find that our NIR \Mdot's generally agree with the \citet{Eriksson2020} estimates within uncertainties, albeit with slightly higher \Mdot values relative to the Balmer series, though our \PaB measurement is marginally inconsistent at the 1$\sigma$ level. 
Given the strength of the companion's NIR EWs relative to diagnostics measured for active low-mass stars \citep[$\sim0.04-0.05$ \AA; e.g.,][]{Schofer2019}, our results are most consistent with the presence of PMC accretion. We find agreement between $\Mdotpla$ and $\dot{M}$(H$\alpha$ 10\%); both are $\sim$1.5 dex higher than $\dot{M}_\mathrm{ste}$. As $\dot{M}$(H$\alpha$ 10\%) does not rely on scaling relationships, accurate continuum subtraction, or extinction, it is considered a robust independent measure of accretion \citep{White2003, Stelzer2007}, including for the lowest mass accreting protoplanets \citep[e.g., PDS 70b;][]{Haffert2019}. As noted by \citet{Alcala2014}, the empirical relationship between H$\alpha$ 10\% width and \Mdot \citep{Natta2004} has considerable scatter, and line luminosities should also be used when possible. However, the strong agreement between $\dot{M}$(H$\alpha$ 10\%) and $\Mdotpla$ could indicate that $\Mdotpla$ is a more accurate estimate of \Mdot for Delorme 1 (AB)b. The marginal inconsistency in $\dot{M}_\mathrm{ste}$ could be a result of applying stellar scaling relations to an object accreting under a different paradigm; this is not seen in the $\Mdotpla$s. To independently determine the accretion paradigm most consistent with Delorme 1 (AB)b without a reliance on scaling relations, line ratios can be used. NIR hydrogen lines are ideal for measuring accretion line ratios \citep[see][]{Edwards2013, Bary2008} due to their small line opacities, resulting in little blue- or redshifted absorption from winds or infalling gas \citep{Folha2001, Edwards2006}. By comparing observed line ratios to accretion model predictions, we can probe physical conditions of the emitting region such as number density, temperature, and infall velocity. Line ratios thus discriminate between physical line emission sources, as different accretion models predict different ratios. 
To this end, we consider two models: the local line excitation model of \citet{Kwan2011} and the planetary shock model of \citet{Aoyama2018}. As shown in Figure~\ref{fig3}, the predicted line ratios of post-shock gas in a planetary atmosphere \citep[planetary paradigm,][purple/yellow circles]{Aoyama2018} can differ from those predicted by local line excitation models developed to describe the infalling accretion columns of T Tauri stars \citep[stellar paradigm,][green/blue squares]{Kwan2011}. This allows us to infer which model better matches the observations, although the two grids overlap at lower densities, where we cannot distinguish between accretion paradigms. We calculate line ratios for each line pair and epoch (star/diamond symbols) over the whole emission range\footnote{In T Tauri stars, winds and outflow absorption can affect line ratios. As such, residual line profiles selected over regions with no opacity effects are used to calculate line ratios \citep[see][]{Edwards2013}. However, these effects are assumed to be negligible in PMCs.}. In panel d, we include ratios with respect to published H$\alpha$ emission for context, noting that these observations were not obtained contemporaneously with our NIR data. Line ratios may be affected by intrinsic and instrumental variability; therefore, inconsistency of the H$\alpha$ ratio with either model grid may not be indicative of variability in the physical conditions of the emitting region. For all measurements, the observed line ratios fall nearest the \citet{Aoyama2018} models and consistently diverge from the \citet{Kwan2011} models, suggesting that planetary scaling relations are likely more appropriate in this situation. Therefore, we use the \citet{Aoyama2021} models and relations for further analysis. We extract all model physical input parameters consistent with the observed line ratios within uncertainties. 
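The grid-matching step can be sketched as follows; the grid entries below are placeholder numbers for illustration, not values from the \citet{Aoyama2018} or \citet{Kwan2011} grids:

```python
def consistent_models(observed, grid):
    """Keep grid points whose predicted line ratios all fall within the
    observed 1-sigma intervals; `observed` maps ratio name -> (value, sigma)."""
    return [m for m in grid
            if all(abs(m[name] - val) <= sig
                   for name, (val, sig) in observed.items())]

# Hypothetical grid: preshock velocity v0 [km/s] and log number density
# log_n0 [cm^-3] mapped to predicted line ratios (placeholder values).
grid = [
    {"v0": 100, "log_n0": 13, "PaB/PaG": 1.2, "PaB/BrG": 4.5},
    {"v0": 150, "log_n0": 14, "PaB/PaG": 1.1, "PaB/BrG": 5.5},
    {"v0": 50,  "log_n0": 11, "PaB/PaG": 0.7, "PaB/BrG": 2.0},
]
```

The retained grid points then define the range of physical parameters quoted in the text (preshock velocity and number density consistent with the observed ratios).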
We find that the best-fitting models have preshock velocities of $70-170$ km/s and number densities of $10^{13-14}$ cm$^{-3}$. While the preshock velocity is consistent with the measured \Mdot's and assumed mass \citep[and radius; see Figure 13 of][]{Aoyama2020}, the number density is higher than expected for the measured \Mdot assuming a pure planetary shock model. This could be explained by shock emission with a low filling factor resulting from a magnetospheric accretion flow, absorption in the post-shock region \citep{Hashimoto2020}, strong accretion column extinction \citep[][though they found that the \Mdot is too low for absorption by either gas or dust in the accretion flow]{Marleau2022}, or circumplanetary disk extinction in the line of sight \citep{Aoyama2020}. High resolution (R$\sim$10,000) spectra will help disentangle the accretion flow geometry and shed light on the nature of the accretion shock, as resolved line profiles can distinguish between geometries \citep{Aoyama2020, Marleau2022}. In Figure~\ref{fig4}, we show the \Mdot--$M$ relation for all known accreting substellar objects, together with low mass stars (Betti et al., in prep.). The $\Mdotpla$ values for Delorme 1 (AB)b lie above the canonical $\Mdot\sim M^{2.1}$ T Tauri star relation \citep{Muzerolle2005}, which is consistent with formation via collapsing prestellar cores. The mass accretion rates are similar to those of other bound planetary mass companions (black squares), whose previous accretion rate estimates mostly come from H$\alpha$ line luminosity or H$\alpha$ 10\% width. The location of these bound PMCs in \Mdot--$M$ space is consistent with model predictions of PMC formation through disk fragmentation in disks with low viscosities \citep{Stamatellos2015}. These models predict higher accretion rates: companions that form in dynamically unstable systems have larger than expected gas mass reservoirs, allowing them to accrete more material for longer \citep{Stamatellos2015}. 
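The offset from the T Tauri locus can be quantified by extrapolating the $\Mdot\sim M^{2.1}$ scaling down to planetary masses and differencing in log space. The normalisation and the object values below are hypothetical, chosen only to show the arithmetic, not fitted to any sample.

```python
import numpy as np

# Extrapolate the canonical T Tauri scaling Mdot ~ M^2.1 (Muzerolle et
# al. 2005) down to planetary masses. log_norm is a hypothetical
# normalisation, log10(Mdot / Msun yr^-1) at 1 Msun; the true value
# depends on the sample fitted.
def log_mdot_ttauri(mass_mjup, log_norm=-7.9):
    mass_msun = mass_mjup / 1047.6            # Jupiter masses -> solar masses
    return log_norm + 2.1 * np.log10(mass_msun)

# An object lying well above this line at fixed mass accretes more than
# the prestellar-core scaling predicts. Illustrative numbers: a 13 M_Jup
# companion with a measured log10 Mdot of -8.0.
excess_dex = -8.0 - log_mdot_ttauri(13.0)
print(round(excess_dex, 1))                   # -> 3.9
```

A positive excess of several dex at fixed mass is the signature discussed above for companions formed by disk fragmentation.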
The high \Mdot observed for Delorme 1 (AB)b suggests that it may have formed via disk fragmentation. Its \Mdot is most consistent with the \citet{Stamatellos2015} models with low disk viscosity ($\alpha \sim$0.001), and is comparable to PMCs with similar masses such as GSC~06214-00210~b, GQ~Lup~b, and DH~Tau~b, all of which have been theorized to have formed via disk fragmentation \citep{Stamatellos2015, Zhou2014}. In summary, the strong Pa$\beta$, Pa$\gamma$, and Br$\gamma$ emission seen from Delorme 1 (AB)b indicates strong ongoing mass accretion onto the PMC. Utilizing line ratios, we find that the NIR hydrogen emission is most consistent with models of planetary shock accretion, though the high predicted number density does not exclude magnetospheric accretion from also occurring at the planetary surface. We conclude that the higher \Mdot estimates derived from planetary scaling relations are more likely to reflect the true accretion rate, and that the position of Delorme 1 (AB)b in \Mdot--$M$ space is consistent with formation via disk fragmentation. This would account for its high accretion rate, which is consistent with low disk viscosity, likely resulting in slower disk evolution and perhaps explaining why this 30--45~Myr object is still actively accreting \citep[potentially a ``Peter Pan disk";][]{Silverberg2020}. Detailed modeling of the planetary surface and disk will provide a clearer understanding of Delorme 1 (AB)b, and future observations of a wider range of line ratios will help constrain the nature of the accretion shock. Forthcoming work (Betti et al., in prep.) will present detections of NIR accretion for a comprehensive sample of accreting BDs and PMCs, as well as observational \Lacc--\Lline empirical relationships for the substellar regime, in order to help constrain substellar formation mechanisms. Delorme 1 (AB)b is a benchmark accreting PMC, with current observations and theoretical models suggesting that the nature of its emission is in the planetary regime. 
\acknowledgments We thank the anonymous referee for their careful review. We thank the SOAR support scientist, CTIO scientist Sean Points, for allocating some engineering time for our observations. S.K.B. and K.B.F. acknowledge support from NSF AST-2009816. G-DM acknowledges the support of the DFG priority program SPP 1992 ``Exploring the Diversity of Extrasolar Planets'' (MA~9185/1) and from the Swiss National Science Foundation under grant 200021\_204847 ``PlanetsInTime''. Parts of this work have been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation. \vspace{5mm} \facilities{SOAR(TripleSpec4.1)} \software{astropy \citep{2013A&A...558A..33A,2018AJ....156..123A}, specutils \citep{specutils}, Spextool \citep{Vacca2003, Cushing2004}, Matplotlib \citep{Hunter2007}} \bibliography{ref} \bibliographystyle{aasjournal}
Title: A roadmap to cosmological parameter analysis with third-order shear statistics I: Modelling and validation
Abstract: In this work, which is the first of a series to prepare a cosmological parameter analysis with third-order cosmic shear statistics, we model both the shear three-point correlation functions $\Gamma^{(i)}$ and the third-order aperture statistics $\langle\mathcal{M}_\mathrm{ap}^3\rangle$ from the BiHalofit bispectrum model and validate these statistics with a series of N-body simulations. We then investigate how to bin the shear three-point correlation functions to achieve an unbiased estimate for third-order aperture statistics in real data. Finally, we perform a cosmological parameter analysis on KiDS1000-like mock data with second- and third-order statistics. We recover all cosmological parameters with very little bias. Furthermore, we find that a joint analysis almost doubles the constraining power on $S_8$ and increases the figure-of-merit in the $\Omega_\mathrm{m}$-$\sigma_8$ plane by a factor of 5.9 with respect to an analysis with only second-order shear statistics. Our modelling pipeline is publicly available at this https URL
https://export.arxiv.org/pdf/2208.11686
\title{ A roadmap to cosmological parameter analysis with third-order shear statistics I: Modelling and validation } \titlerunning{Cosmological parameter analysis with third-order shear statistics} \author{Sven Heydenreich \inst{1}, Laila Linke \inst{1}, Pierre Burger \inst{1}, Peter Schneider \inst{1} } \authorrunning{Heydenreich et al.} \institute{ Argelander-Institut f\"ur Astronomie, Auf dem H\"ugel 71, 53121 Bonn, Germany \\ \email{sven@astro.uni-bonn.de} } \date{Version \today; received xxx, accepted yyy} % \abstract { In this work, which is the first of a series to prepare a cosmological parameter analysis with third-order cosmic shear statistics, we model both the shear three-point correlation functions $\Gamma^{(i)}$ and the third-order aperture statistics $\MapMapMap$ from the \textsc{BiHalofit} bispectrum model and validate these statistics with a series of N-body simulations. We then investigate how to bin the shear three-point correlation functions to achieve an unbiased estimate for third-order aperture statistics in real data. Finally, we perform a cosmological parameter analysis on KiDS1000-like mock data with second- and third-order statistics. We recover all cosmological parameters with very little bias. Furthermore, we find that a joint analysis almost doubles the constraining power on $S_8$ and increases the figure-of-merit in the $\Omm$-$\sigma_8$ plane by a factor of 5.9 with respect to an analysis with only second-order shear statistics. Our modelling pipeline is publicly available at \url{https://github.com/sheydenreich/threepoint/releases/}. } \keywords{gravitational lensing -- weak, cosmology -- cosmological parameters, methods -- statistical } \section{Introduction} \label{sec:introduction} The $\Lambda$ Cold Dark Matter model ($\Lambda$CDM) has been considered the standard model of cosmology for the past few decades. 
This relatively simple, six-parameter model describes a wide range of observations, from the cosmic microwave background (CMB) to the observed large-scale structure (LSS) of galaxies, with remarkable accuracy. As the reported uncertainties on cosmological parameters approach the per-cent level, a few tensions have arisen between CMB observations of the early Universe and observations of the local Universe that quantify the LSS \citep[for example, in the Hubble parameter $H_0$, see][and references therein]{diValentino:2021h}. In the past few years, the matter clustering parameter $S_8=\sigma_8\sqrt{\Omm/0.3}$ has also become subject to tension \citep[][and references therein]{Hildebrandt:2017, planck2020, joudaki2020, heymans2021,DES2021,diValentino:2021s}: the local Universe appears less clustered than observations of the CMB suggest. Assuming that these tensions are not due to unknown systematic effects, extensions to the $\Lambda$CDM model need to be explored. One of the most popular extensions is the $w$CDM model, in which the equation-of-state of dark energy differs from $w=-1$. The dark energy task force established that the weak gravitational lensing effect of the LSS, also called cosmic shear, is one of the most promising methods to constrain the equation-of-state of dark energy \citep{Albrecht:2006}. The next generation of cosmic shear surveys, like {\it Euclid} \citep{Laureijs:2011} or the Vera Rubin Observatory Legacy Survey of Space and Time \citep[LSST,][]{Ivezic:2008}, will be able to constrain potential extensions to the $\Lambda$CDM model and may help to decipher the nature of dark energy. Tight constraints on cosmological parameters are essential to discriminate between the different modifications of the $\Lambda$CDM model. So far, two-point statistics have been established as the main analysis tool for cosmic shear \citep{Schneider:1998,Troxel:2018,Hildebrandt:2017,Hikage:2019,Asgari:2020,Hildebrandt:2020}. 
These statistics capture the entire information content of a Gaussian random field. Since the initial density field of the Universe is believed to be Gaussian, two-point statistics capture a large amount of cosmological information. However, at late times, non-linear structure formation has introduced non-Gaussian features on the smaller scales of the matter distribution, whose information content cannot be captured by two-point statistics. To access this information, a variety of higher-order statistics have been introduced in recent years, including peak count statistics \citep{Martinet:2018,Harnois-Deraps:2021}, persistent homology \citep{Heydenreich:2021}, density split statistics \citep{Gruen:2018,Burger2022}, and many others. In this work, we consider third-order shear statistics, which measure the skewness of the LSS at various scales. In contrast to most of these higher-order statistics, three-point statistics can be modelled directly from a matter bispectrum, allowing for a broad range of consistency checks. Furthermore, their modelling does not require simulations adjusted to specific survey properties, which allows us to apply them easily to any data set. However, these natural extensions of two-point statistics have so far received surprisingly little attention. Several papers have reported a massive potential information gain when combining two- and three-point statistics \citep{Kilbinger:2005,Sato:2013,Kayo:2013}. \citet{Fu:2014} performed a combined analysis of two- and three-point statistics on $154\,\mathrm{deg}^2$ of CFHTLenS data, reporting a rather moderate gain in information content. \citet{Secco:2022} measured the shear three-point correlation functions and third-order aperture statistics in the third-year data release of the Dark Energy Survey \citep{Flaugher:2005,Sevilla-Noarbe:2021}, showing that they can be detected with high signal-to-noise ratio. 
Recently, \citet{Pyne:2021} showed that a combined analysis of two- and three-point statistics has an additional advantage: the two statistics react differently to observational and astrophysical systematics. A combined analysis therefore allows us to constrain nuisance parameters internally, without the need for any additional observations or simulations, yielding an additional null test and much tighter bounds on cosmological parameters (in an optimistic case, a factor of 20 in the figure-of-merit of dark energy can be achieved). This article aims to prepare a cosmological parameter analysis with cosmic shear three-point statistics by developing a pipeline to measure and model both the three-point correlation functions $\Gamma^{(i)}$ and the third-order aperture mass statistics $\MapMapMap$. We compare the estimators, computational costs, and information content of both statistics. The covariance calculation will be investigated in the second publication of this series (Linke et al., in prep.). We further show that these statistics can be measured with relative ease in a Stage-III survey, which constitutes a significant advantage over their Fourier-space counterpart, the convergence bispectrum. The aperture mass statistics have several additional advantages: they offer good data compression, are not subject to the mass-sheet degeneracy, and, most importantly, they decompose the signal into E- and B-modes, where to leading order only E-modes can be created by gravitational lensing. Our modelling and validation pipeline is summarised in the diagram in Fig.~\ref{fig:diagram_analysis_pipeline}. Our modelling algorithm is publicly available at \url{https://github.com/sheydenreich/threepoint/releases}. The paper is structured as follows: in Sect.~\ref{sec:nbody_sims} we introduce the N-body simulations we use to validate and test our modelling pipeline. 
We then present the convergence bispectrum in Sect.~\ref{sec:power_and_bispectra}, the shear three-point correlation functions in Sect.~\ref{sec:shear_3pcf} and the third-order aperture statistics in Sect.~\ref{sec:map3}. For each of these statistics, we describe their theoretical background, how we model them, how they are measured in simulations, and the validation tests we performed. We then compare their information content to that of second-order shear statistics in a mock MCMC analysis in Sect.~\ref{sec:results_mcmc} and discuss our findings in Sect.~\ref{sec:discussion}. \section{Model validation and covariance estimation with $N$-body simulations} \label{sec:nbody_sims} We use $N$-body simulations containing only dark matter to validate our model and estimate covariance matrices. One of the main advantages of third-order shear statistics is that they can be easily adapted to different survey specifications. To highlight this, we use different simulation suites with varying source redshift distributions, galaxy number densities and cosmologies for the validation and parameter estimation. In this paper, we use the full-sky gravitational lensing simulations described in \citet[][hereafter T17]{Takahashi2017}, the Millennium Simulations \citep[][hereafter MS]{Springel:2005,Hilbert:2008}, and the Scinet-LIghtCone Simulations \citep[][hereafter SLICS]{Harnois-Deraps:2018}. \subsection{T17 simulations} \label{sec:T17_description} The T17 simulations are used in this work to perform a realistic analysis of a survey that mimics the KiDS-1000 data. They are constructed from a series of nested cubic boxes with side lengths of $L, 2L, 3L, \ldots$ placed around a fixed vertex representing the observer's position, with $L=450\,\mathrm{Mpc}/h$. Each box is replicated eight times and placed around the observer using periodic boundary conditions. The gravitational evolution of $2048^3$ dark matter particles is simulated with the $N$-body code \textsc{gadget2} \citep{Springel2001}. 
Within each box, three spherical lens shells are constructed, each with a width of $150\,\mathrm{Mpc}/h$, which are then used by the public code \textsc{GRayTrix}\footnote{\url{http://th.nao.ac.jp/MEMBER/hamanatk/GRayTrix/}} to trace the light-ray trajectories from the observer to the last scattering surface\footnote{These maps are freely available for download at \url{http://cosmo.phys.hirosaki-u.ac.jp/takahasi/allsky_raytracing/}}. The cosmological parameters of the simulation are $\Omega_{\rm m}=1-\Omega_\Lambda=0.279$, $\Omega_{\rm b}=0.046$, $h=0.7$, $\sigma_8=0.82$, and $n_{\rm s}=0.97$. The matter power spectrum agrees with theoretical predictions of the revised \textsc{Halofit} \citep{Takahashi2012} within $5\%\,(10\%)$ for $k<5\,(6)\,h\,\mathrm{Mpc}^{-1}$ at $z<1$. For each of the 108 realisations, we build a noise-free convergence map by taking a weighted average of all 38 convergence shells at different redshifts, where the weights are determined by the fiducial KiDS-1000 $n(z)$ -- see Fig.~\ref{fig:nofz}. We then transform these into realistic convergence maps by adding to each pixel a Gaussian random variable with vanishing mean and standard deviation \begin{equation} \sigma_\mathrm{pix} = \frac{\sigma_\epsilon}{\sqrt{n_\mathrm{gal}A_\mathrm{pix}}} \,, \end{equation} where $A_\mathrm{pix}$ is the pixel area of the convergence grid, and the effective number density $n_\mathrm{gal}=6.17\,\mathrm{arcmin}^{-2}$ and the ellipticity dispersion $\sigma_\epsilon=0.265$ are chosen to be consistent with the combined tomographic bins 1--5 of the KiDS-1000 data. \subsection{Millennium Simulations} The MS were run with $2160^3$ particles in a $500\,h^{-1}\,\mathrm{Mpc}$ box in a flat {\LCDM} cosmology with $h=0.73$, $\sigma_8=0.9$, $\Omm=0.25$, $\Omega_\mathrm{b}=0.045$ and $n_s=1$. 
Subsequently, shear and convergence maps of 64 independent lines of sight with an area of $4\times 4\,\text{deg}^2$ each were created at 36 different redshifts \citep{Hilbert:2008,Hilbert:2009}. Each map is calculated on a grid of $4096\times 4096$ pixels. For each line of sight, we use the full shear and convergence maps at redshift $z=1$. As we use the MS solely to validate our model, we do not add any noise to the maps. \subsection{Scinet-LIghtCone Simulations} The SLICS were run with $1536^3$ particles in a $505\,h^{-1}\,\mathrm{Mpc}$ box, filling up $10\times 10\,\mathrm{deg}^2$ light-cones up to $z=3$. All SLICS were run in a flat {\LCDM} cosmology with $h=0.69$, $\sigma_8=0.83$, $\Omm=0.29$, $\Omega_\mathrm{b}=0.047$ and $n_s=0.969$. From these simulations, we use convergence maps as well as galaxy catalogues with a redshift distribution of \begin{equation} n(z)\propto z^2\exp\left[-\left(\frac{z}{z_0}\right)^\beta\right]\; , \end{equation} with $z_0=0.637$, $\beta=1.5$ and the overall proportionality constant given by normalising the distribution to $30\,\text{gal}/\text{arcmin}^2$. We use mock galaxy catalogues provided for 924 (pseudo-)independent lines of sight. The SLICS are useful for estimating the constraining power of third-order statistics, since the three-point correlation function can be calculated relatively quickly on the $100\,\mathrm{deg}^2$ fields, and the 924 lines of sight enable the determination of a stable covariance matrix. \section{Convergence power- and bispectrum} \label{sec:power_and_bispectra} In this section, we briefly recap the basics of the weak gravitational lensing formalism, focusing on shear statistics in Fourier space. More detailed reviews can be found in \citet{Bartelmann:2001,Hoekstra:2008, Munshi:2008, Bartelmann:2010}. 
We start by defining the density contrast at comoving position $\vec{x}$ and redshift $z$, $\delta(\vec{x},z) = \frac{\rho(\vec{x},z)}{\bar{\rho}(z)}-1$, where $\rho(\vec{x},z)$ is the matter density at position $\vec{x}$ and redshift $z$ and $\bar{\rho}(z)$ the average density at redshift $z$. From this density contrast, we define the convergence $\kappa$ for sources at redshift $z$ as a line-of-sight integration, weighted by the lensing efficiency, \begin{align} \kappa(\vec{\theta},z) = \frac{3\Omm H_0^2}{2 c^2}\int_0^{\chi(z)}\dd \chi'{}&{}\, \frac{f_K(\chi')\,f_K[\chi(z)-\chi']}{f_K[\chi(z)]}\nonumber\\ {}&{}\times\frac{\delta[f_K(\chi')\vec{\theta},z(\chi')]}{a(\chi')}\; , \end{align} where $f_K(\chi)$ is the comoving angular diameter distance. We note that throughout this paper, we work in a flat Universe with $\Omm+\Omega_\Lambda=1$, meaning that $f_K[\chi(z)]=\chi(z)$, where $\chi(z)$ is the comoving distance at redshift $z$. However, everything we present in this section also holds for open or closed universes. The convergence cannot be observed directly, but it can be recovered from an observed shear field \citep{Kaiser:1995, Seitz:1998, Jeffrey:2020}. The relations between shear, convergence and the matter density contrast allow us to relate all second- and third-order shear statistics to the well-understood matter power spectrum $P_\delta(k,z)$ and bispectrum $B_\delta(k_1,k_2,k_3,z)$. 
\subsection{Definition of power- and bispectrum} \label{subsec:power_and_bispectra_theory} The matter power spectrum $P_\delta(k,z)$ and bispectrum $B_\delta(k_1,k_2,k_3,z)$ can be defined as \begin{align} \expval{\hat{\delta}(\vec{k}_1,z)\hat{\delta}(\vec{k}_2,z)} = {}&{} (2\pi)^3\,\delta_\mathrm{D}(\vec{k}_1+\vec{k}_2)\,P_\delta(k_1,z) \; , \\ \expval{\hat{\delta}(\vec{k}_1,z)\hat{\delta}(\vec{k}_2,z)\hat{\delta}(\vec{k}_3,z)} = {}&{} (2\pi)^3\,\delta_\mathrm{D}(\vec{k}_1+\vec{k}_2+\vec{k}_3) \nonumber\\ {}&{}\qquad \times B_\delta(k_1,k_2,k_3,z) \; , \label{eq:defn_bispectrum} \end{align} where $\hat{\delta}$ describes the Fourier transform of $\delta$ and $\delta_{\rm D}$ is the Dirac-delta distribution. The fact that the power- and bispectrum only depend on the moduli of the $k$-vectors can be easily derived from the statistical isotropy of the Universe. The convergence power- and bispectrum can then be computed using the Limber approximation \citep{Limber:1954,Peebles:1980,Kaiser:1997,Bernardeau:1997,Schneider:1998}, \begin{align} \pkappa (\ell) = {}&{} \frac{9\Omm^2H_0^4}{4c^4}\int_0^{\chi_\mathrm{max}} \dd \chi\;\frac{g^2(\chi)}{a^2(\chi)} \nonumber \\ {}&{}\qquad\times P\left[\frac{\ell}{f_K(\chi)},z(\chi)\right] \;, \label{eq:pkappa_defn}\\ \bkappa(\ell_1,\ell_2,\ell_3) = {}&{} \frac{27H_0^6\Omm^3}{8c^6}\int_0^{\chi_\mathrm{max}}\dd \chi\; \frac{g^3(\chi)}{a^3(\chi)\,f_K(\chi)} \nonumber \\ {}&{}\times B_\delta\left[\frac{\ell_1}{f_K(\chi)},\frac{\ell_2}{f_K(\chi)},\frac{\ell_3}{f_K(\chi)},z(\chi)\right] \label{eq:bkappa_defn}\, . \end{align} Here, \begin{equation} g(\chi) = \int_\chi^{\chi_\mathrm{max}} \dd \chi'\; p(\chi')\,\frac{f_K(\chi'-\chi)}{f_K(\chi')} \label{eq:lensing_efficiency_defn} \end{equation} describes the lensing efficiency, where $p(\chi)$ is the (normalised) comoving distance probability distribution of sources. 
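The structure of the Limber projection in Eq.~\eqref{eq:pkappa_defn} can be illustrated with a toy numerical sketch. The snippet below assumes a flat universe ($f_K=\chi$), a single source plane at $\chi_s$ (so $g(\chi)$ reduces to $(\chi_s-\chi)/\chi_s$), a constant scale factor, and a power-law toy spectrum in place of a calibrated $P_\delta$; all of these are simplifying assumptions, not the actual pipeline ingredients.

```python
import numpy as np

# Toy evaluation of the Limber integral, Eq. (eq:pkappa_defn): flat
# universe (f_K = chi), a single source plane at chi_s so that the
# lensing efficiency reduces to g = (chi_s - chi)/chi_s, a(chi) = 1,
# and a power-law toy spectrum in place of a calibrated P_delta.
def limber_pkappa_toy(ell, chi_s=3000.0, omega_m=0.3,
                      h0_over_c=1.0 / 2997.9, n_steps=2000):
    chi = np.linspace(1.0, chi_s, n_steps)    # comoving distance grid [Mpc/h]
    g = (chi_s - chi) / chi_s                 # single-plane lensing efficiency
    p_toy = (ell / chi) ** -3.0               # toy P_delta(ell / f_K(chi))
    integrand = g**2 * p_toy                  # a(chi) = 1 in this sketch
    prefactor = 9.0 / 4.0 * omega_m**2 * h0_over_c**4
    dchi = chi[1] - chi[0]
    return prefactor * np.sum(integrand) * dchi

# For this toy spectrum the projection preserves the power law:
print(limber_pkappa_toy(100.0) / limber_pkappa_toy(1000.0))  # -> ~1000
```

Replacing the toy $P_\delta$ with a \textsc{Halofit}-type spectrum and the single plane with the $n(z)$-weighted $g(\chi)$ of Eq.~\eqref{eq:lensing_efficiency_defn} recovers the actual model prediction.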
We note that usually one instead measures a redshift probability distribution $p(z)$; in our modelling pipeline we therefore write Eqs.~\eqref{eq:pkappa_defn}, \eqref{eq:bkappa_defn} and \eqref{eq:lensing_efficiency_defn} as integrals over the redshift $z$. The Limber approximation breaks down for small values of $\ell$. For example, the bispectrum from the Limber approximation overestimates the truth by up to an order of magnitude for $\ell\ll 60$ (corresponding to angular scales of roughly \astroang{6;;}), depending on the source redshift distribution \citep{Deshpande:2020}. However, at these scales, the impact of non-linear structure formation is small, meaning that the matter distribution is well described by a Gaussian random field and higher-order statistics like the bispectrum are small. In this work, we only consider shear statistics up to $\sim\astroang{4;;}$; at these scales, we expect the Limber approximation to hold. Instead of the three $\ell$-values $\ell_1,\ell_2$ and $\ell_3$, we can also describe the bispectrum as a function of the two vectors $\vec{\ell}_1,\vec{\ell}_2$, or of their moduli $\ell_1,\ell_2$ and the angle $\varphi$ between them. We define \begin{equation} b(\ell_1,\ell_2,\varphi)=B_\kappa\left(\ell_1,\ell_2,\sqrt{\ell_1^2+\ell_2^2+2\ell_1\ell_2\cos\varphi}\right)\; . \end{equation} \subsection{Modelling the bispectrum} \label{subsec:modelling_bispectrum} We use the state-of-the-art \textsc{BiHalofit} algorithm \citep{Takahashi:2020} to model the dark matter bispectrum on non-linear scales. Compared to older bispectrum models \citep[e.g.][]{Gil-Marin:2012,Scoccimarro:2001}, \textsc{BiHalofit} traces non-equilateral triangle configurations much better: in a comparison with N-body simulations, \textsc{BiHalofit} retains an accuracy of $10\%$ or better, whereas the other two fitting formulae are subject to errors of more than $200\%$. 
The effects on higher-order shear statistics are substantial, as can be seen in \citet{Halder:2021}, who modelled a different third-order shear statistic using both \textsc{BiHalofit} and the bispectrum model of \citet{Gil-Marin:2012}. In a direct comparison with N-body simulations, \textsc{BiHalofit} is accurate on all tested scales, whereas the other fitting formula breaks down on scales of $\lesssim\astroang{;30;}$. Another advantage of \textsc{BiHalofit} is that it only requires a linear power spectrum as its input, whereas the models of \citet{Gil-Marin:2012} and \citet{Scoccimarro:2001} require a non-linear one. We use the fitting formula of \citet{Eisenstein:1999} to model the linear power spectrum. One of the main advantages of third-order shear statistics is that one can rigorously test each stage of the modelling pipeline. We perform such a test on our bispectrum model in App.~\ref{sec:app_testing_bispectrum} and conclude that the Limber-integrated \textsc{BiHalofit} bispectrum is consistent with the MS up to $\ell\lesssim 10^4$ and deviates by up to 40\% for larger values of $\ell$. \section{Shear three-point correlation functions} \label{sec:shear_3pcf} \subsection{Definition of the shear three-point correlation functions} \label{subsec:definition_shear_3pcf} Shear three-point correlation functions (3pcf) are the natural extension of the widely used two-point correlation functions. Let $\gamma_\mathrm{c}=\gamma_1+\mathrm{i}\gamma_2$ denote the complex shear in Cartesian coordinates. Considering a triangle of galaxies, as a first step, we project the shear $\gamma^i$ of each galaxy $i$ onto its tangential and cross components with respect to a point fixed relative to the triangle, for example one of its centres, \begin{equation} \label{eq:gamma_t_defn} \gamma\equiv\gamma_\mathrm{t} + \ii\gamma_\times = -\gamma_\mathrm{c}\ee^{-2\ii\zeta}\; , \end{equation} where $\zeta$ is the angle of the projection direction. 
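The projection in Eq.~\eqref{eq:gamma_t_defn} is simple enough to sketch directly. The snippet below rotates a Cartesian shear into tangential and cross components relative to a reference point; the positions and shear values are hypothetical, chosen only to make the sign convention concrete.

```python
import numpy as np

# Sketch of Eq. (eq:gamma_t_defn): rotate a Cartesian shear
# gamma_c = gamma_1 + i*gamma_2 into tangential/cross components with
# respect to a reference point (e.g. a triangle centre). Positions and
# shear values below are hypothetical.
def project_shear(gamma_c, galaxy_pos, ref_point):
    dx = galaxy_pos[0] - ref_point[0]
    dy = galaxy_pos[1] - ref_point[1]
    zeta = np.arctan2(dy, dx)                 # angle of the projection direction
    g = -gamma_c * np.exp(-2j * zeta)
    return g.real, g.imag                     # (gamma_t, gamma_cross)

# A shear oriented tangentially around the reference point gives
# gamma_t > 0 and gamma_cross = 0 in this convention:
gt, gx = project_shear(-0.1 + 0.0j, galaxy_pos=(1.0, 0.0), ref_point=(0.0, 0.0))
print(gt > 0, abs(gx) < 1e-12)                # -> True True
```

For the natural components defined next, the same rotation is applied at each of the three triangle vertices, with $\zeta$ set by the chosen triangle centre.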
\citet{Schneider:2003} established four \emph{natural components} of the shear 3pcf, which remain invariant under rotations of the triangle. They are defined as \begin{align} \label{eq:defn_natural_components} \Gamma^{(0)}=\expval{\gamma\gamma\gamma}\, {}&{}, \, \Gamma^{(1)}=\expval{\gamma^*\gamma\gamma}\, , \nonumber\\ \Gamma^{(2)}=\expval{\gamma\gamma^*\gamma}\, {}&{}, \, \Gamma^{(3)}=\expval{\gamma\gamma\gamma^*}\, , \end{align} where the `$^*$' denotes complex conjugation. The choice of the reference point of the projection is to some degree arbitrary. Usually one of the triangle's centres is chosen, most often the orthocenter (the intersection of its three altitudes) or the centroid (the intersection of its three medians), as shown in Fig.~\ref{fig:triangle_centers}. The natural components have the advantage that they are invariant under the choice of triangle centre up to multiplication with a complex phase factor, meaning that their moduli are invariant under the choice of triangle centre. Parametrizing the shear 3pcf by the triangle side lengths $x_1, x_2$ and $x_3$, where the indices 1, 2 and 3 are ordered in a counter-clockwise direction, the natural components behave conveniently under cyclic permutations of their arguments. While the first natural component $\Gamma^{(0)}$ is invariant under cyclic permutations, the other three components transform into each other, \begin{align} \Gamma^{(0)}(x_1,x_2,x_3) = {} &\Gamma^{(0)}(x_2,x_3,x_1)=\Gamma^{(0)}(x_3,x_1,x_2) \;,\nonumber \\ \Gamma^{(1)}(x_1,x_2,x_3) = {} &\Gamma^{(3)}(x_2,x_3,x_1)=\Gamma^{(2)}(x_3,x_1,x_2) \; . 
\label{eq:permutations_natural_components} \end{align} Similar behaviour can be observed for parity transformations: \begin{align} \Gamma^{(0)}(x_1,x_2,x_3) = {}&\Gamma^{(0)*}(x_2,x_1,x_3)\;,\nonumber\\ \Gamma^{(1)}(x_1,x_2,x_3) = {}&\Gamma^{(1)*}(x_1,x_3,x_2)\;,\nonumber\\ \Gamma^{(2)}(x_1,x_2,x_3) = {}&\Gamma^{(2)*}(x_3,x_2,x_1)\;,\\ \Gamma^{(3)}(x_1,x_2,x_3) = {}&\Gamma^{(3)*}(x_2,x_1,x_3)\;.\nonumber \end{align} \subsection{Modelling the shear three-point correlation functions} \label{subsec:modelling_shear_3pcf} We model the natural components $\left\{\Gamma^{(i)}\right\}_{i=0,1,2,3}$ of the three-point correlation functions of cosmic shear using the methods described in \citet[][hereafter S+05]{Schneider:2005}: When we project the shear of all three galaxies to the orthocenter of the triangle, the projection direction at the triangle vertex $\vec{X}_i$ is orthogonal to the orientation $\varphi_i$ of the triangle side $\vec{x}_i$. Thus, the shear transforms as \begin{equation} \label{eq:rotation_orthocenter} \gamma^{(\mathrm{o})}(\vec{X}_i)=\gamma_\mathrm{c}(\vec{X}_i)\ee^{-2\ii\varphi_i}\; . \end{equation} We now utilise the relation between convergence and shear in Fourier space \citep{Kaiser:1993}, \begin{equation} \hat{\gamma}_\mathrm{c}(\vec{\ell})=\ee^{2\ii\beta_\ell}\hat{\kappa}(\vec{\ell})\; , \end{equation} where $\beta_\ell$ is the polar angle of $\vec{\ell}$, to write \begin{align} \Gamma^{(0)}{}&{}(x_1,x_2,x_3) = \myexpval{\gammao(\vec{X}_1)\gammao(\vec{X}_2)\gammao(\vec{X}_3)} \notag\\ = {}&{} \int\frac{\dd^2\ell_1}{(2\pi)^2}\int\frac{\dd^2\ell_2}{(2\pi)^2}\int\frac{\dd^2\ell_3}{(2\pi)^2}\; \myexpval{\tilde{\kappa}(\vec{\ell}_1)\tilde{\kappa}(\vec{\ell}_2)\tilde{\kappa}(\vec{\ell}_3)} \notag\\ {}&{}\times \exp\left[-\ii(\vec{\ell}_1\cdot\vec{X}_1+\vec{\ell}_2\cdot\vec{X}_2+\vec{\ell}_3\cdot\vec{X}_3)\right]\\ {}&{}\times\exp\left[2\ii\sum_i\left(\beta_i-\varphi_i\right)\right] \; . 
\notag \end{align} This can then be transformed into equation (15) of S+05\footnote{We note that due to the different definitions of the convergence bispectrum (Eq.~\ref{eq:defn_bispectrum} vs equation 4 in S+05), we get a factor of 3 difference.}: \begin{align} \Gamma^{(0)}{}&{}(x_1,x_2,x_3) = \frac{2\pi}{3}\int_0^\infty\frac{\dd\ell_1\,\ell_1}{(2\pi)^2}\int_0^\infty\frac{\dd\ell_2\,\ell_2}{(2\pi)^2}\int_0^{2\pi}\dd{\varphi} \nonumber\\ {}&{} \times b(\ell_1,\ell_2,\varphi)\, \ee^{2\ii\bar{\beta}} \left[\ee^{\ii(\phi_1-\phi_2-6\alpha_3)}J_6(A_3) \right. \\ {}&{} \left. + \ee^{\ii(\phi_3-\phi_2-6\alpha_1)}J_6(A_1) + \ee^{i(\phi_3-\phi_1-6\alpha_2)}J_6(A_2) \right] \; , \nonumber \end{align} with \begin{align} &A_3 = \left[(\ell_1x_2)^2 + (\ell_2x_1)^2 + x_1x_2\ell_1\ell_2\cos(\varphi+\phi_3)\right]^{\frac{1}{2}} \nonumber\; , &&&\\ &|\vec{\ell}_1+\vec{\ell}_2|^2 \cos2\bar{\beta} = (\ell_1^2+\ell_2^2)\cos\varphi+2\ell_1\ell_2 \; ,&&&\nonumber\\ &|\vec{\ell}_1+\vec{\ell}_2|^2 \sin 2\bar{\beta} = (\ell_1^2-\ell_2^2)\sin\varphi\; ,&&&\\ & A_3\cos\alpha_3 = (\ell_1x_2+\ell_2x_1)\cos\left(\frac{\varphi+\phi_3}{2}\right) \; ,&&&\nonumber\\ &A_3\sin\alpha_3 = (\ell_1x_2-\ell_2x_1)\sin\left(\frac{\varphi+\phi_3}{2}\right) \; .&&& \nonumber \end{align} The quantities $A_{1,2}$ and $\alpha_{1,2}$ are obtained by cyclic permutation of indices. The angles $\phi_i$ are the interior angles of the triangle, as shown in Fig.~\ref{fig:triangle_centers}. By introducing polar coordinates $R=\sqrt{\ell_1^2+\ell_2^2}$, $\psi = \arctan(\ell_2/\ell_1)$, we get \begin{align} \Gamma^{(0)}{}&{}(x_1,x_2,x_3) = \frac{1}{3(2\pi)^3}\int_0^{2\pi}\dd{\varphi} \int_0^{\pi/2}\dd{\psi} \int_0^\infty \dd{R}\,\nonumber\\ {}&{}\times R^3\sin\psi\cos\psi\, b(R\cos\psi,R\sin\psi,\varphi)\,\ee^{2\ii\bar{\beta}} \\ {}&{}\times \left[\ee^{\ii(\phi_1-\phi_2-6\alpha_3)}J_6(R\,A_3') + \ee^{\ii(\phi_3-\phi_2-6\alpha_1)}J_6(R\,A_1') \right. 
\nonumber\\ {}&{} \quad + \left.\ee^{i(\phi_3-\phi_1-6\alpha_2)}J_6(R\,A_2') \right] \, , \nonumber \end{align} with \begin{align} &A'_3 = \frac{A_3}{R} = \left[(x_2\cos\psi)^2 + (x_1\sin\psi)^2 \right.\nonumber\\ {}&{}\qquad\qquad\qquad\left.+ x_1x_2\sin2\psi\,\cos(\varphi+\phi_3)\right]^{\frac{1}{2}} \; , \nonumber&&&\\ &\cos2\bar{\beta} = (\cos\varphi+2\cos\psi\,\sin\psi) \; , \nonumber&&&\\ &\sin 2\bar{\beta} = (\cos^2\psi-\sin^2\psi)\sin\varphi \; , &&&\\ &A'_3\cos\alpha_3 = (x_2\cos\psi + x_1\sin\psi )\cos\left(\frac{\varphi+\phi_3}{2}\right) \; , &&&\nonumber\\ &A'_3\sin\alpha_3 = (x_2\cos\psi-x_1\sin\psi )\sin\left(\frac{\varphi+\phi_3}{2}\right) \; . \nonumber &&& \end{align} Defining \begin{equation} E_3 = \ee^{\ii(\phi_1-\phi_2-6\alpha_3)} \; , \end{equation} and $E_1$ and $E_2$ via cyclic permutations of indices, we can write \begin{align} \Gamma^{(0)}(x_1,x_2,x_3) = {}&{} \frac{1}{6(2\pi)^5} \int_0^{\pi/2}\dd{\psi} \sin2\psi\int_0^{2\pi}\dd{\varphi} \nonumber \\ {}&{}\times \ee^{2\ii\bar{\beta}}\sum_{i=1}^3\frac{E_i}{A_i'^4}\int_0^\infty \dd{R}\,R^3 \label{eq:gamma0_from_bkappa}\\ & \times b\left(\frac{R}{A_i'}\cos(\psi),\frac{R}{A_i'}\sin(\psi),\varphi\right)J_6(R)\; . \nonumber \end{align} The $R$-integration filters the bispectrum with a sixth-order Bessel function, making the integral difficult to evaluate numerically, as the functional form of the bispectrum prevents the application of fast Hankel transform algorithms like FFTLog \citep{Hamilton:2000}. We thus use the method developed by \citet{Ogata:2005} to solve the $R$-integration and integrate the remaining dimensions using the \textsc{cubature} library.\footnote{\url{https://github.com/stevengj/cubature}} To model $\Gamma^{(1)}$, we apply the same transformations to equation (18) of S+05; for $\Gamma^{(2)}$ and $\Gamma^{(3)}$, we perform a cyclic permutation of the input variables as outlined in Eq.~(\ref{eq:permutations_natural_components}). 
While a triangle of galaxy positions for which we evaluate the three-point correlation function can be described by its three side lengths $x_1, x_2$ and $x_3$, these variables are ill-suited for a binning scheme; for example, for $x_1>x_2+x_3$ a triangle cannot be formed, which means that the 3pcf would be undefined for many bins. A better way to bin the triangles was introduced by \citet{Jarvis:2004}. Assuming $x_1>x_2>x_3$, they defined a triangle via the values $r\in[0,\infty)$, $u\in [0,1]$ and $v\in [-1,1]$ by \begin{equation} r = x_2,\quad u=\frac{x_3}{x_2},\quad v=\pm\frac{x_1-x_2}{x_3} \; . \end{equation} Here, $v$ is positive for triangles where $x_1,x_2$ and $x_3$ are oriented clockwise and negative for a counter-clockwise orientation. This binning choice allows us to bin the triangle size $r$ in logarithmic steps without having bins where the 3pcf is not defined. In all cases, we bin the shear 3pcf logarithmically in $r$ and linearly in $u$ and $v$. We note that Eq.~\eqref{eq:permutations_natural_components} implies that the four shear 3pcf for $x_1>x_2>x_3$ \citep[as in the binning scheme of][]{Jarvis:2004} already contain the entire information content of the third-order shear signal, as does knowledge of $\Gamma^{(0)}$ and $\Gamma^{(1)}$ for all combinations of $x_1,x_2$ and $x_3$. In a similar manner, Eq.~\eqref{eq:permutations_natural_components} implies that $\Gamma^{(i)}(r,u,v)=\Gamma^{(i)*}(r,u,-v)$ holds, where the `${}^*$' denotes complex conjugation. To ensure compatibility with results from the measured 3pcf, we transform the modelled functions from the centroid (as used in S+05) to the orthocenter \citep[as used in][compare Sect.~\ref{subsec:measuring_3pcf}]{Jarvis:2004}. For a potential cosmological parameter analysis, the three-point correlation functions face a few hurdles: assuming we bin the three-point correlation functions in 10 bins in each of $r$, $u$ and $v$, our data vector consists of $8000$ entries. 
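The $(r,u,v)$ parametrisation and its inverse can be sketched in a few lines of Python (a minimal helper for illustration, not part of the published pipeline; side lengths are assumed pre-sorted such that $x_1 \geq x_2 \geq x_3$):

```python
def sides_to_ruv(x1, x2, x3, clockwise=True):
    # Jarvis et al. (2004) triangle parametrisation; assumes x1 >= x2 >= x3.
    # The triangle inequality x1 - x2 <= x3 guarantees 0 <= u <= 1 and |v| <= 1.
    r = x2
    u = x3 / x2
    v = (x1 - x2) / x3
    return r, u, v if clockwise else -v

def ruv_to_sides(r, u, v):
    # Invert the parametrisation; the sign of v only encodes orientation.
    x2 = r
    x3 = u * r
    x1 = x2 + abs(v) * x3
    return x1, x2, x3
```

Because $r$ only sets the overall triangle size while $u$ and $v$ describe its shape, $r$ can safely be binned logarithmically.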
This makes estimating a covariance matrix using simulations practically impossible and leads to a modelling time that is unfeasible even for a non-tomographic analysis. \subsection{Measuring the shear three-point correlation functions} \label{subsec:measuring_3pcf} We use the public tree-code \textsc{treecorr} \citep{Jarvis:2004} to measure the three-point correlation functions $\Gamma_i$. This algorithm estimates the quantity \begin{equation} \widehat{\Gamma}^{(0)} = \frac{\sum_{ijk}w_i\varepsilon_{i}\,w_j\varepsilon_{j}\,w_k\varepsilon_{k}}{\sum_{ijk}w_iw_jw_k}\; , \end{equation} where the $\varepsilon_i = \varepsilon_{\mathrm{t},i}+\mathrm{i}\varepsilon_{\times,i}$ are the observed ellipticities of galaxies and $w_i$ the associated weights. The other natural components $\widehat{\Gamma}^{(1)},\dots$ are estimated in the same manner. This estimator has the advantage that it is not impacted by the survey geometry: As long as at least one galaxy triplet falls into each bin, it remains unbiased \citep{Simon:2008}.\footnote{Even if certain bins remain empty, the estimated correlation function can be rebinned with a tesselation scheme to yield unbiased values of $\Gamma^{(i)}$ for all bins, as was shown by \citet{Linke:2020} for the related galaxy-galaxy-galaxy-lensing correlation function.} The disadvantage of the estimator is that its computational complexity scales with $\mathcal{O}(\Ngal^3)$, which is not feasible to execute even for moderate values of $\Ngal\gtrsim 10^6$. That is why \textsc{treecorr} constructs a hierarchical ball tree out of the galaxy sample and calculates the correlation functions from this tree. This results in a remarkable speed-up and allows us to calculate the shear 3pcf for an ensemble of about $10^7$ source galaxies distributed over a $10\times 10\,\mathrm{deg}^2$ field in about $1\,500$ CPUh. 
A disadvantage of the tree-code is that its execution time increases steeply with the number of bins: If $b$ is the logarithmic bin size, then the run-time scales roughly with $b^{-4}$. The \textsc{treecorr} algorithm also has a \textsc{binslop} parameter, which allows balls of the tree to overlap the edges of a bin. This parameter heavily affects computation time, and while the expectation value is relatively stable between different values of \textsc{binslop}, the covariance is subject to change \citep{Secco:2022}. \subsection{Validation} We test our integration routine described in Sect.~\ref{subsec:modelling_shear_3pcf} with a lensing potential for which we can derive analytic expressions for both the convergence bispectrum and the shear 3pcf. As discussed in more detail in App.~\ref{sec:app_test_gamma_integration}, we find agreement at the sub-percent level. \label{subsec:validation_3pcf_vs_Nbody} To validate our model for the shear 3pcf, we measure the shear signal at a redshift of $z=1$ in the MS. We choose to measure the signal in $10^3$ bins (10 bins in each of $r$, $u$ and $v$), with logarithmic $r$-bins from $\astroang{;0.1;}$ to $\astroang{;120;}$. To speed up computation time, we randomly select every tenth pixel of the $4096^2$ pixel grid. As we do not include shape noise, we expect the loss of signal to be small. The results can be seen in Fig.~\ref{fig:gamma0_bihalofit_vs_MS}. We conclude that we can model the shear 3pcf reliably down to sub-arcminute scales for almost all triangle configurations. Only for almost degenerate, flattened triangle configurations ($v>0.9$) do we see that the model and simulations differ significantly. This might signify that \textsc{BiHalofit} breaks down at the corresponding triangle configurations in Fourier space. Alternatively, this might point towards a break-down of the tree-code's accuracy for these very degenerate triangles. 
As we can see in Sect.~\ref{subsubsec:validation_n_body_sims_map}, these points play a negligible role in the conversion to aperture mass statistics. Overall, we see that the agreement for the shear 3pcf is better than the one at the bispectrum level (compare Sect.~\ref{subsubsec:validation_n_body_sims_bispectrum}). We observe the same effects for the other natural components of the 3pcf (compare Fig.~\ref{fig:gamma1_bihalofit_vs_MS}). \section{Aperture mass statistics} \label{sec:map3} \subsection{Definition of aperture mass statistics} \label{subsec:background_map3} An alternative way to analyse cosmic shear is via aperture mass maps \citep{Schneider:1996,Bartelmann:2001}. Their advantage is that they can separate the signal into so-called E- and B-modes \citep{Schneider:2002}, where B-modes cannot, to leading order, be created by the weak gravitational lensing effect. Thus, the absence of B-modes provides a crucial null test in a cosmic shear analysis \citep[see, e.g.,][]{Hildebrandt:2017,Asgari:2019}. Compared to convergence maps \citep{Kaiser:1993,Seitz:2001,Gatti:2021}, aperture mass maps are constructed in such a way that they do not suffer from the well-known mass-sheet degeneracy \citep{Falco:1985,1995A&A...294..411S}. The aperture mass $\Map$ at position $\vec{\vartheta}$ and filter radius $\theta$ is defined as \begin{equation} \label{eq:definition_aperture_mass_kappa} \Map(\vec{\vartheta};\theta)=\int\dd^2\vartheta'\; U_\theta(|\vec{\vartheta}-\vec{\vartheta'}|)\, \kappa(\vec{\vartheta'}) \; ; \end{equation} here, $U_\theta(\vartheta)$ is a compensated filter (i.e. $\int \dd\vartheta\,\vartheta\, U(\vartheta) = 0$). 
Given a shear field $\gamma$, the aperture mass $\Map$ and its respective B-mode counterpart $\Mperp$ can be calculated as \begin{align} \label{eq:definition_aperture_mass_gamma} \Map(\vec{\vartheta};\theta)+\ii \Mperp(\vec{\vartheta};\theta) = {}&{} \int\dd^2\vartheta' \; Q_\theta(|\vec{\vartheta}-\vec{\vartheta'}|)\nonumber\\ {}&{}\times\left[\gamma_\mathrm{t}(\vec{\vartheta'})+\ii\gammax(\vec{\vartheta'})\right] \; , \end{align} where $\gammat$ and $\gammax$ are projected along the vector $\vec{\vartheta}'-\vec{\vartheta}$ (see Eq.~\ref{eq:gamma_t_defn}) and $Q_\theta$ is related to $U_\theta$ via \begin{equation} Q_\theta(\vartheta) = \frac{2}{\vartheta^2}\int_0^\vartheta \dd\vartheta'\;\vartheta'\, U_\theta(\vartheta')-U_\theta(\vartheta)\; . \end{equation} For simplicity of notation, we define $U_\theta(\vartheta)=\theta^{-2}u(\vartheta/\theta)$ and denote by $\hat{u}(\eta)$ the Fourier transform of $u$. In this work, we opt for the filter function introduced by \citet{Crittenden:2002}, \begin{align} u(x)= {}&{}\frac{1}{2\pi}\left(1-\frac{x^2}{2}\right)\ee^{-x^2/2},\quad \hat{u}(\eta) = \frac{\eta^2}{2}\ee^{-\eta^2/2}, \nonumber\\ Q_\theta(\vartheta) = {}&{} \frac{\vartheta^2}{4\pi\theta^4}\exp\left(-\frac{\vartheta^2}{2\theta^2}\right)\; . \end{align} While the construction of aperture mass maps has its uses \citep[see for example][]{Harnois-Deraps:2021,Heydenreich:2021}, we are not interested in the structure of an aperture mass map itself, but rather in its statistical properties. We define, for arbitrary combinations of E- and B-mode aperture mass statistics, \begin{align} {\expval{\Map^m\Mperp^n}}{}&{}(\theta_1,\ldots,\theta_{m+n}) = \left<\Map(\vec{\vartheta};\theta_1)\dots\Map(\vec{\vartheta};\theta_m)\right.\nonumber\\ {}&{}\times\left.\Mperp(\vec{\vartheta};\theta_{m+1})\dots\Mperp(\vec{\vartheta};\theta_{m+n})\right>_{\vec{\vartheta}} \; . \end{align} By construction, $\expval{\Map}(\theta)$ vanishes. 
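The relations between $u$ and $Q_\theta$ can be verified numerically; the following standalone Python sketch (numpy/scipy, not part of our pipeline) checks, for $\theta=1$, that the \citet{Crittenden:2002} filter is compensated and that the quoted closed form for $Q_\theta$ follows from the integral relation:

```python
import numpy as np
from scipy import integrate

def u_filt(x):
    # Crittenden et al. (2002) filter u(x), i.e. U_theta for theta = 1
    return (1.0 - 0.5 * x**2) * np.exp(-0.5 * x**2) / (2.0 * np.pi)

def q_from_u(x):
    # Q(x) = 2/x^2 * int_0^x dx' x' u(x')  -  u(x)
    val, _ = integrate.quad(lambda t: t * u_filt(t), 0.0, x)
    return 2.0 * val / x**2 - u_filt(x)

def q_closed(x):
    # closed-form Q for this filter (theta = 1)
    return x**2 / (4.0 * np.pi) * np.exp(-0.5 * x**2)
```

The closed form follows analytically because $\int_0^x \dd t\, t\, (1-t^2/2)\,\ee^{-t^2/2} = (x^2/2)\,\ee^{-x^2/2}$.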
In a parity-symmetric field, all odd powers of B-mode components vanish \citep{Schneider:2003b}, meaning that the relevant B-mode counterparts to $\expval{\Map^2}$ and $\MapMapMap$ are $\expval{\Mperp^2}$ and $\MapMperpMperp$, respectively. \subsection{Modelling aperture mass statistics} \label{subsec:modelling_map} Given a model for the convergence power spectrum, the second-order aperture statistics can be calculated as \begin{equation} \expval{\Map^2}(\theta) = \int\frac{\dd \ell\;\ell}{2\pi}\,P_\kappa(\ell)\,\hat{u}^2(\theta\ell)\; . \end{equation} As a model for the non-linear power spectrum, we use the revised \textsc{Halofit} model of \citet{Takahashi2012}. Equivalently, the third-order aperture statistics $\MapMapMap$ can be derived from a bispectrum model via \citep[compare][]{Jarvis:2004,Schneider:2005}\footnote{Again, due to the different definitions of the convergence bispectrum, we get a factor of 3 difference with respect to S+05. We also use the symmetry of the bispectrum to only integrate from 0 to $\pi$ in $\varphi$, introducing a prefactor of 2.} \begin{align} \MapMapMap {}&{}(\theta_1,\theta_2,\theta_3) = \frac{2}{(2\pi)^3}\int_0^\infty \dd \ell_1\,\ell_1\int_0^\infty \dd \ell_2\,\ell_2 \int_0^\pi\dd\varphi \nonumber\\ {}&{}\times\hat{u}(\theta_1\ell_1)\, \hat{u}(\theta_2\ell_2)\,\hat{u}\left(\theta_3\sqrt{\ell_1^2+\ell_2^2+2\ell_1\ell_2\cos\varphi}\right)\nonumber\\ {}&{}\times b(\ell_1,\ell_2,\varphi)\; \label{eq:map3_from_bkappa}. \end{align} We use the public \textsc{cubature} library to solve this integration. In our implementation, the integration kernel is executed on a graphics processing unit (GPU), yielding a significant speed-up over parallelisation on central processing units (CPUs). 
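A convenient unit test for any implementation of the $\expval{\Map^2}$ integral is a toy spectrum with a known answer: for the \citet{Crittenden:2002} filter and a constant $P_\kappa\equiv P_0$, the integral evaluates analytically to $P_0/(8\pi\theta^2)$, since $\int_0^\infty \dd x\, x^5 \ee^{-x^2} = 1$. A minimal Python sketch (numpy/scipy assumed, helper names illustrative):

```python
import numpy as np
from scipy import integrate

def u_hat(eta):
    # Fourier transform of the Crittenden et al. (2002) filter
    return 0.5 * eta**2 * np.exp(-0.5 * eta**2)

def map2(theta, p_kappa):
    # <Map^2>(theta) = int dl l/(2 pi) P_kappa(l) u_hat(theta l)^2
    val, _ = integrate.quad(
        lambda ell: ell / (2.0 * np.pi) * p_kappa(ell) * u_hat(theta * ell)**2,
        0.0, np.inf)
    return val
```

The same check carries over to the $\MapMapMap$ integration, where a separable toy bispectrum again yields a closed form against which the \textsc{cubature} result can be compared.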
\subsection{Measuring aperture mass statistics} \label{subsec:measuring_map} There are three main methods to estimate the third-order aperture mass statistics $\MapMapMap$: first, via the convergence field $\kappa$; second, via the shear field or, in practice, the observed galaxy ellipticities; and third, via the third-order correlation functions $\Gamma^{(i)}$. \subsubsection{Measuring aperture mass statistics directly} \label{subsec:measuring_map_direct} The most straightforward way is to measure aperture mass maps directly on a convergence field using Eq.~\eqref{eq:definition_aperture_mass_kappa}. In a real survey, this is difficult, as the convergence is not directly observable. In principle, one could compute the aperture mass statistics of a reconstructed convergence field, but this is not a good way to estimate aperture statistics, as the convergence reconstruction yields a convergence map that is necessarily smoothed and potentially affected by additional systematic effects. While not applicable to real data, this method yields a quick and unbiased way to estimate aperture statistics in lightcones from simulations, as convergence maps are readily available for them. However, one faces the issue of boundary effects when the integral in Eq.~\eqref{eq:definition_aperture_mass_kappa} extends past the simulation boundary. To avoid this issue, we cut off a slice of width $4\theta$ from the computed aperture mass maps\footnote{Both the $Q$- and the $u$-filter function have $99.9\%$ of their power within this range, meaning that boundary effects beyond this cut-off are negligible.}. 
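On a pixelised convergence map, this direct estimator amounts to one FFT convolution plus the boundary cut; a minimal numpy sketch is given below (grid units, periodic FFT boundaries, simplified with respect to our actual implementation). Since $U_\theta$ is compensated, a constant convergence must map to $\Map \approx 0$, which provides a simple sanity check.

```python
import numpy as np

def aperture_mass_map(kappa, theta):
    # kappa: square convergence map; theta: aperture radius in pixels.
    # Periodic (FFT) convolution with the Crittenden et al. (2002) filter,
    # followed by discarding a boundary strip of width 4*theta.
    n = kappa.shape[0]
    dx = np.fft.fftfreq(n)[:, None] * n   # pixel offsets in wrap-around order
    dy = np.fft.fftfreq(n)[None, :] * n
    x = np.hypot(dx, dy) / theta
    u = (1.0 - 0.5 * x**2) * np.exp(-0.5 * x**2) / (2.0 * np.pi * theta**2)
    m_ap = np.fft.ifft2(np.fft.fft2(kappa) * np.fft.fft2(u)).real
    cut = int(4 * theta)
    return m_ap[cut:-cut, cut:-cut]
```

In the periodic toy setup the cut is not strictly necessary, but it mirrors the treatment of the finite simulated fields described above.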
Another way to estimate aperture statistics is from an ensemble of observed galaxy ellipticities, using \begin{equation} \label{eq:map_discrete_estimator} {\MapEst}(\vec{\vartheta};\theta) + \ii {\MperpEst}(\vec{\vartheta};\theta) = \frac{1}{n_\mathrm{gal}}\sum_i Q_\theta(|\vec{\vartheta}-\vec{\vartheta}_i|)\left(\varepsilon_{\mathrm{t},i}+\ii\varepsilon_{\times,i}\right)\; , \end{equation} where $\varepsilon_{\mathrm{t}/\times}$ are the observed galaxy ellipticities converted into their tangential and cross components according to Eq.~\eqref{eq:gamma_t_defn}; $\vec{\vartheta}_i$ are their respective positions. Here, $n_\mathrm{gal}$ can be the global number density of galaxies \citep{Bartelmann:2001} or the number density of galaxies within the aperture radius \citep{Martinet:2018}. For this work, we define $n_\mathrm{gal}$ as the number of galaxies weighted by the $Q$-filter function: \begin{equation} n_\mathrm{gal} = \sum_i Q_\theta(|\vec{\vartheta}-\vec{\vartheta}_i|) \; . \end{equation} We tested all three definitions of $n_\mathrm{gal}$ using the SLICS and found that, for randomly distributed galaxies, setting $n_\mathrm{gal}$ as the number density within the aperture radius or the one weighted by the $Q$-filter function induces sub-percent differences in the third-order aperture masses $\MapMapMap$. However, setting $n_\mathrm{gal}$ as the global galaxy density can induce differences of about $5\%$ in $\MapMapMap$. In the following, we adopt $n_\mathrm{gal}$ to be the number of galaxies weighted by the $Q$-filter function. 
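In complex notation, this estimator takes only a few lines; the Python sketch below (unit weights $w_i=1$, illustrative variable names) combines the tangential/cross projection with the $Q$-weighted number density. For a pure tangential ellipticity pattern of amplitude $c$ around the aperture centre it returns exactly $c$, independent of the galaxy positions.

```python
import numpy as np

def map_estimator(pos, eps, center, theta):
    # Discrete Map estimator with Q-weighted number density (unit weights).
    # pos, center: complex positions x + 1j*y; eps: complex e1 + 1j*e2.
    d = pos - center
    r = np.abs(d)
    q = r**2 / (4.0 * np.pi * theta**4) * np.exp(-r**2 / (2.0 * theta**2))
    # tangential/cross projection: eps_t + i eps_x = -eps * conj(d)/d
    eps_tx = -eps * np.conj(d) / d
    return np.sum(q * eps_tx) / np.sum(q)
```

The real part of the returned value estimates $\Map$, the imaginary part $\Mperp$.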
Rewriting Eq.~\eqref{eq:gamma_t_defn} as \begin{equation} \gamma_t + \ii\gamma_\times = -\left(\gamma_1+\ii\gamma_2\right) \frac{(\vec{\vartheta}-\vec{\vartheta}')^*}{(\vec{\vartheta}-\vec{\vartheta}')}\; , \end{equation} where the vector $\vec{\vartheta}-\vec{\vartheta}'$ denotes the projection direction of the tangential shear in complex notation, we can rewrite Eqs.~\eqref{eq:definition_aperture_mass_gamma} and \eqref{eq:map_discrete_estimator} as a convolution. To calculate aperture mass fields, we distribute galaxies on a grid using a cloud-in-cell method. From this, we compute the aperture masses using a Fast Fourier Transform (FFT), allowing us to compute an aperture mass map in $\mathcal{O}(\Npix\log\Npix)$ operations. To compute second- and third-order aperture statistics, we apply the same cut-off of $4\theta_\mathrm{ap}$ to the aperture mass maps. From these aperture mass maps, we obtain estimates for the second- and third-order aperture statistics by multiplying the respective aperture mass maps pixel by pixel and then averaging over all pixel values. To extract the data vectors from the (full-sky) convergence maps of the T17 simulations, we smooth the maps with the \textsc{healpy} function \textsc{smoothing}, using a beam window function created by the function \textsc{beam$2$bl} from the corresponding $U_\theta$ filter. For each filter radius $\theta$, this yields a full-sky aperture mass map ${\MapEst}(\vec{\vartheta};\theta)$, without the need to cut off boundaries. \subsubsection{Measuring aperture statistics from three-point correlation functions} \label{subsec:measuring_map_from_3pcf} While the above-mentioned method to estimate aperture statistics is extremely fast, it cannot be applied to realistic survey data. 
Assuming a relatively large aperture radius of $\theta_\mathrm{ap}=30'$, we would have to cut off a $2\degree$-strip around every edge or mask in the survey footprint, meaning that we would disregard most of the data. While active research is being conducted to circumvent these problems \citep{Porth:2020,Porth:2021}, the arguably best method to estimate third-order aperture statistics from real data is to derive them from the measured 3pcf, as introduced in \citet{Jarvis:2004}, generalised in \citet{Schneider:2005} and applied to survey data in \citet{Fu:2014} and \citet{Secco:2022}. The shear 3pcf can be estimated straightforwardly from a survey with arbitrarily complex geometry, meaning that the converted aperture statistics are not biased by boundary effects. One caveat is that this conversion requires knowledge of the 3pcf for all triangle configurations, in particular for infinitesimally small and extremely large ones, neither of which can be measured. The incomplete knowledge of the correlation functions can lead to a mixing of E- and B-modes for the aperture statistics \citep{Kilbinger:2006}. However, for third-order aperture statistics, this effect appears not to be as severe as for their second-order counterpart \citep[at least for the diagonal part of the aperture statistics, this has been demonstrated in][we are testing this assumption for non-diagonal aperture mass statistics in Sect.~\ref{subsec:validation_binning_choice}]{Shi:2014}. 
Despite its advantages, this method comes at the price of computation time: Calculating the shear 3pcf for a realistic number of source galaxies takes orders of magnitude longer\footnote{For a $10\times 10\,\mathrm{deg}^2$ field of a Stage-IV survey, direct estimation of aperture mass statistics takes a few minutes vs.~$\sim$1500 CPUh for the estimation of the 3pcf.} than the direct estimation of aperture statistics, so calculating the shear 3pcf of an ensemble of simulations (as would be necessary to estimate a covariance matrix) comes at a prohibitively high computational cost. The method to transform shear three-point correlation functions into aperture statistics is already implemented in \textsc{treecorr}. To compute the third-order aperture statistics, the quantities \begin{align} \expval{MMM}(\theta_1,\theta_2,\theta_3) = {}&{} A_1 \int \dd y_1 \int \dd y_2 \int_0^{2\pi}\dd \psi \nonumber\\ \times {}&{} \Gamma^{(0)}_\mathrm{cen}(y_1,y_2,\psi) F_1(y_1,y_2,\psi)\;, \nonumber\\ \expval{MMM^*}(\theta_1,\theta_2;\theta_3)= {}&{} A_2 \int \dd y_1 \int \dd y_2 \int_0^{2\pi}\dd \psi\label{eq:MMM_from_gamma}\\ \times {}&{} \Gamma^{(3)}_\mathrm{cen}(y_1,y_2,\psi) F_2(y_1,y_2,\psi) \nonumber \end{align} need to be computed. Here, $A_{1,2}$ and $F_{1,2}$ are the prefactors and filter functions, which are specified in S+05 (see equations 62 and 71). The aperture statistics $\MapMapMap$ and $\MapMperpMperp$ are linear combinations of these quantities: \begin{align} \MapMapMap {}&{}(\theta_1,\theta_2,\theta_3) \nonumber\\=\,\,\,{}&{}\!\!\!\Re\left[\expval{M^2 M^*}(\theta_1,\theta_2;\theta_3) +\expval{M^2 M^*}(\theta_1,\theta_3;\theta_2) \right. \nonumber \\ &\!\left. 
+\expval{M^2 M^*}(\theta_2,\theta_3;\theta_1) +\expval{M^3}(\theta_1,\theta_2,\theta_3)\right]/4 \;,\nonumber \\ \MapMperpMperp {}&{}(\theta_1;\theta_2,\theta_3) \label{eq:map3_from_MMM} \\=\,\,\,{}&{}\!\!\!\Re\left[\expval{M^2 M^*}(\theta_1,\theta_2;\theta_3) +\expval{M^2 M^*}(\theta_1,\theta_3;\theta_2) \right. \nonumber \\ &\!\left. -\expval{M^2 M^*}(\theta_2,\theta_3;\theta_1) -\expval{M^3}(\theta_1,\theta_2,\theta_3)\right]/4 \;. \nonumber % \end{align} We will not consider the quantities $\expval{\Map^2\Mperp}$ and $\expval{\Mperp^3}$, as they vanish for any parity-symmetric field \citep{Schneider:2003b}. In \textsc{treecorr}, this method is implemented in the following way: First the $\Gamma^{(1)}$ and $\Gamma^{(2)}$ are transformed into $\Gamma^{(3)}$ via Eq.~\eqref{eq:permutations_natural_components}. For each bin $(r,u,v)$, the transformation matrix $\frac{\dd\{r,u,v\}}{\dd\{y_1,y_2,\psi\}}$ is then calculated. The value of the respective integral is computed as the sum of the values of $\Gamma^{(3)}$ multiplied by the determinant of the transformation matrix and the value of the filter functions $F_{1,2}$ at the bin centre. While numerically very cheap, this is probably not the most efficient way to compute that integral. In case the filter functions vary significantly over a bin \citep[compare Fig.~2 and 3 of][]{Schneider:2005}, it might be more appropriate to calculate the average of the filter function in a bin, for example. We tried to improve the integration results by interpolating the measured shear 3pcf and performing the same integral, achieving rather moderate improvements. We leave an optimisation of the conversion from shear 3pcf to aperture mass statistics for future work. \subsection{Validation} \subsubsection{Binning choice of three-point correlation functions} \label{subsec:validation_binning_choice} As a first step, we want to validate the conversion $\Gamma^{(i)}\dashrightarrow\MapMapMap$ performed by the \textsc{TreeCorr} algorithm. 
In particular, we investigate the number of bins necessary to achieve an unbiased estimate for third-order aperture statistics and quantify the leakage of E- and B-modes. While the latter has already been investigated by \citet{Shi:2014}, we build upon these results by using a realistic convergence bispectrum model and by taking into account non-diagonal aperture mass statistics. Our modelling pipeline enables us to test this conversion. As we start from the same convergence bispectrum $B_\kappa$, the aperture mass statistics achieved by the conversion $B_\kappa\overset{\eqref{eq:gamma0_from_bkappa}}{\to}\Gamma^{(i)}\overset{\eqref{eq:MMM_from_gamma}}{\dashrightarrow}\MapMapMap$ and by direct modelling $B_\kappa\overset{\eqref{eq:map3_from_bkappa}}{\to}\MapMapMap$ have to be consistent. Furthermore, the modelled $\Gamma^{(i)}$ are pure E-mode functions, so any B-modes $\MapMperpMperp$ that we observe have to be created by the transformation $\Gamma^{(i)}\overset{\eqref{eq:MMM_from_gamma}}{\dashrightarrow}\MapMapMap$. These tests would be unfeasible to perform with simulations due to the prohibitively high computational cost of extracting the shear 3pcf from an extensive simulation set for different bin sizes. The results of our tests can be seen in Fig.~\ref{fig:map3_from_gamma_binning}. We see that as long as the three filter radii $\theta_1$, $\theta_2$ and $\theta_3$ are similar, the conversion appears to work reasonably well. Only when we bin $\Gamma^{(i)}$ in $7^3$ bins do we get significant deviations, meaning that this is certainly not a sufficient number of bins. The conversion becomes less accurate when two filter radii are much smaller than the third one, as shown in the top-right corner of Fig.~\ref{fig:map3_from_gamma_binning}. There, the results for $10^3$ bins also show significant deviations, whereas the results for $15^3$ bins seem overall consistent with the ones from $20^3$ bins. 
We also observe a non-negligible amount of B-mode leakage for these cases, even for $15^3$ and $20^3$ bins. We are planning to exclude all combinations of filter radii from cosmological parameter analyses where we observe a B-mode leakage of more than 10\% for the 3pcf in $15^3$ bins. Inspecting Fig.~\ref{fig:gamma1_of_r_u_v}, we further note that the function $\Gamma^{(1)}(r,u,v)$ is relatively smooth and well-behaved with respect to $r$, but varies strongly as a function of $u$ and $v$, especially when $u\approx 0$ and $v\approx\pm 1$. This implies that $r$ can be binned rather coarsely, as long as $u$ and $v$ are finely binned. This is in contrast to binning choices in other studies, e.g.~\citet{Secco:2022}, who preferred a fine binning in $r$ (55 bins) and a coarser binning in $u$ and $v$ (10 bins). \subsubsection{Comparison to N-body simulations} \label{subsubsec:validation_n_body_sims_map} We compare the modelled aperture mass statistics with the ones we measure in the MS in Fig.~\ref{fig:third_order_map_model_vs_MS}. As expected from our discussions in Sect.~\ref{subsec:validation_binning_choice}, we observe that the conversion from shear 3pcf to third-order aperture masses fails when two aperture radii are small and the third one is large (as observed in the top-right panel). In these cases, we also register significant B-modes. We also note that in most cases, the uncertainties on the direct measurements are significantly larger than those from the shear 3pcf. Cutting off a stripe around the boundary in order to avoid edge effects in the estimate of $\MapMapMap$ (as discussed in Sect.~\ref{subsec:measuring_map}) leads to a loss of information compared to measuring the 3pcfs, for which all triplets of available points\footnote{Again, we only use every tenth pixel to calculate the 3pcf, but do not expect a strong loss of signal-to-noise from this.} in the field are used. 
\section{Cosmological parameter estimation} \label{sec:results_mcmc} To perform a cosmological parameter inference, we use the pipeline described above to create a model vector. For the covariance, we rely on N-body simulations, where we use the method of \cite{Percival2021} to debias the estimated covariance matrix $\Tilde{C}$. Given a data vector $\vb{d}$ and a covariance matrix $\Tilde{C}$ measured from $n_\mathrm{r}$ simulated survey realisations, the posterior distribution of a model vector $\vb{m}(\boldsymbol{\Theta})$ depending on $n_\Theta$ parameters is \begin{equation} \boldsymbol{P}\left(\vb{m}(\boldsymbol{\Theta})|\vb{d},\Tilde{C}\right) \propto |\Tilde{C}|^{-\frac{1}{2}} \left( 1 + \frac{\chi^2}{n_{\rm r}-1}\right)^{-m/2}\, , \label{eq:t_distribution} \end{equation} where \begin{equation} \chi^2 = \left[\vb{m}(\boldsymbol{\Theta})-\vb{d}\right]^{\rm T} \Tilde{C}^{-1} \left[\vb{m}(\boldsymbol{\Theta})-\vb{d}\right] \, . \label{eq:chi2} \end{equation} The power-law index $m$ is \begin{equation} m = n_\Theta+2+\frac{n_\mathrm{r}-1+B(n_\mathrm{d}-n_\Theta)}{1+B(n_\mathrm{d}-n_\Theta)} \;, \label{eq:m_exponent} \end{equation} with $n_{\rm d}$ being the number of data points and \begin{equation} B = \frac{n_\mathrm{r}-n_\mathrm{d}-2}{(n_\mathrm{r}-n_\mathrm{d}-1)(n_\mathrm{r}-n_\mathrm{d}-4)} \, . \label{eq:B} \end{equation} For $m=n_\mathrm{r}$, the formalism of \cite{Sellentin2016} is recovered. To perform a cosmological parameter analysis with third-order statistics, one normally needs to evaluate this likelihood function at many points in a high-dimensional parameter space. Creating the model vector for third-order aperture statistics with 35 combinations of aperture radii takes about one minute on an NVIDIA A40 GPU, and the likelihood needs to be evaluated at about $10^4$ points even for sampling methods like \textsc{polychord} \citep{Handley:2015}. 
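For concreteness, the debiased log-posterior (up to the parameter-independent $|\Tilde{C}|^{-1/2}$ normalisation) can be sketched as follows (illustrative Python, not our analysis code):

```python
import numpy as np

def percival_log_posterior(model, data, cov_inv, n_r, n_d, n_theta):
    # Debiased t-distribution posterior of Percival et al. (2021),
    # up to the parameter-independent |C|^(-1/2) normalisation.
    b = (n_r - n_d - 2.0) / ((n_r - n_d - 1.0) * (n_r - n_d - 4.0))
    m = n_theta + 2.0 + (n_r - 1.0 + b * (n_d - n_theta)) / (1.0 + b * (n_d - n_theta))
    diff = model - data
    chi2 = diff @ cov_inv @ diff
    return -0.5 * m * np.log1p(chi2 / (n_r - 1.0))
```

For $\chi^2 \ll n_\mathrm{r}$ this reduces to the familiar Gaussian $-\chi^2/2$ rescaled by $m/(n_\mathrm{r}-1)$.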
This means that a complete likelihood analysis with third-order aperture statistics is possible but takes a long time, whereas a complete likelihood analysis with the 3pcf, where the modelling of the 3pcf in $10^3$ bins takes two to three hours, is not feasible. To solve this issue, we use the neural network emulator \textsc{CosmoPower} \citep{COSMOPOWER2022}, which was first developed to emulate power spectra but can easily be adapted to arbitrary data vectors. Since a neural network emulator needs as many evaluation points as possible, we calculate our model at 7500 points\footnote{Due to the long modelling time, we only use 500 of the 7500 sampled points for the 3pcf.} in a four-dimensional Latin hypercube describing a flat $w$CDM cosmological model, varying the parameters $\Omm, S_8, w_0$ and $h$. We leave all other parameters fixed at the values corresponding to the SLICS, which we introduced in Sect.~\ref{sec:nbody_sims}. We then train the emulator with 6500 points and use the remaining 1000 as a validation test, as shown in Fig.~\ref{fig:emulator_acc}. This neural network-based emulator performs extraordinarily well, modelling all statistics at sub-percent accuracy. As our bispectrum model is only accurate to about 10\% \citep{Takahashi:2020}, the emulator uncertainty plays a negligible role in the modelling process. \subsection{Shear three-point correlation functions vs.~third-order aperture masses} One aspect of this work is investigating the information loss incurred by using aperture mass statistics instead of the 3pcf itself. Although the aperture mass statistics are well suited for a cosmological parameter inference due to their E-/B-mode decomposition and fast modelling times, they should not be used if the loss of information is too severe. Unfortunately, we cannot quantify the full information content of the shear 3pcf, as the data vector contains about $10^4$ entries, and therefore a reliable covariance matrix is not accessible. 
To circumvent this problem, we model the shear 3pcf in $10^3$ bins in $r,u$ and $v$, where we bin $r$ logarithmically from $\astroang{;0.1;}$ to $\astroang{;100;}$, at 500 of the 7500 training nodes. We then perform a principal component analysis (PCA) to select the 40 most relevant principal components of the 3pcf data vector. Using this PCA, we determine the covariance matrix for the shear 3pcf, which we measure from 200 $10\times 10\,\mathrm{deg}^2$ tiles of the SLICS in the same configuration as the training data. For the model vector, we again use \textsc{CosmoPower}, which is trained on the PCA components of the 500 models used in determining the principal components. In Fig.~\ref{fig:MCMC_Map3vsGamma} we compare the constraining power of the PCA analysis of the 3pcf to the $\langle \mathcal{M}_\mathrm{ap}^{3} \rangle$ analysis. The covariance for the PCA analysis of the 3pcf is measured from 200 SLICS realisations; for $\langle \mathcal{M}_\mathrm{ap}^{3} \rangle$ we use all $927$ available SLICS realisations and take into consideration all combinations of the filter radii $\astroang{;0.5;},\astroang{;1;},\astroang{;2;},\astroang{;4;},\astroang{;8;},\astroang{;16;}$, and $\astroang{;32;}$. The different numbers of realisations are accounted for via Eq.~\eqref{eq:m_exponent}. In both cases, the data vector was created by the \textsc{CosmoPower} emulator. The constraining power of the PCA analysis of the 3pcf is only slightly better than that of $\langle \mathcal{M}_\mathrm{ap}^{3} \rangle$, and this slight difference may well be explained by the use of different scales between the 3pcf and aperture mass statistics, and by the fact that the aperture mass maps are smaller due to the cut-off at the boundaries. Overall, the advantages of the $\langle \mathcal{M}_\mathrm{ap}^{3} \rangle$ justify their use, even considering their potentially slightly lower constraining power. 
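The PCA compression itself reduces to a singular value decomposition of the mean-subtracted training set; a minimal numpy sketch (helper names are illustrative, the choice of 40 components follows the text):

```python
import numpy as np

def pca_compress(training_vectors, n_components=40):
    # training_vectors: array of shape (n_models, n_data).
    # Returns a projection function, the component basis and the mean.
    mean = training_vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(training_vectors - mean, full_matrices=False)
    basis = vt[:n_components]            # (n_components, n_data)

    def project(vec):
        return basis @ (vec - mean)      # compressed data vector

    return project, basis, mean
```

The same projection is applied to the measured 3pcf and to each simulated realisation, so that the compressed covariance matrix is only $40\times 40$ and can be estimated reliably from the available realisations.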
\subsection{Combination of second- and third-order aperture masses} To assess the constraining power of third-order aperture statistics, especially when combined with second-order shear statistics, we perform a mock analysis for a non-tomographic KiDS-1000-like setup. To estimate the covariance matrix, we make use of all 108 realisations of the T17 simulations with a resolution $\textsc{nside}=4096$ (corresponding to a pixel size of $\astroang{;0.74;}^2$). From each realisation, we extract 18 HEALPix squares of size $\approx 860\,\mathrm{deg}^2$ that do not share common borders. This results in 1944 independent realisations from which the covariance matrix is estimated, such that with a data vector size of $\sim 50$, the statistical noise of the covariance matrix can be neglected. Since the area of the square patches is slightly larger than the $\approx 777.4\,\mathrm{deg}^2$ of KiDS-1000, the covariance needs to be re-scaled by a factor of $1.11$. Furthermore, in order to have a data vector as unbiased and noise-free as possible, we estimate it with one full-sky realisation with a resolution $\textsc{nside}=8192$ (corresponding to a pixel size of $0.18$\,arcmin). Lastly, we use the filter scales of $(4,8,16,32)$\,arcmin, as the model and simulations are inconsistent for smaller scales. The resulting posterior distribution is shown in the left panel of Fig.~\ref{fig:MCMC_T17}, where we first notice that the combination of second- and third-order statistics significantly increases the constraining power, especially in the $\Omega_\mathrm{m}$-$\sigma_8$ plane, due to the different degeneracy directions of the individual statistics (compare Tab.~\ref{tab:parameter_constraints}). 
Indeed, a joint analysis tightens the constraints on $S_8$ by 42\% with respect to second-order statistics; the constraints on $\Omm$ and $\sigma_8$ improve by at least 68\% and 54\%, respectively\footnote{As the constraints for second-order statistics are at least partly dominated by the prior, the true improvement is likely to be much greater.}. The figure-of-merit \citep{Albrecht:2006} in the $\Omm$-$\sigma_8$ plane increases by a factor of 5.88. Finally, we note that the true cosmology is well within $1\,\sigma$ of the expected KiDS-1000 uncertainty for all three statistics. Additionally, we investigate the constraining power if only equal-scale aperture masses $\MapMapMap(\theta,\theta,\theta)$ are used. As displayed in the right panel of Fig.~\ref{fig:MCMC_T17}, the loss of constraining power from the restriction to equal-scale aperture masses is small, although not zero. This is in stark contrast to \citet{Kilbinger:2005}, who found a strong difference in constraining power using a Fisher forecast. However, their analysis was conducted using a covariance matrix from a significantly smaller set of simulations. Our results are roughly in line with the findings of \citet{Fu:2014}, who found rather marginal differences in an MCMC. We note that these findings might change when additional parameters (either cosmological or nuisance) are introduced in the MCMC. In that case, the equal-scale aperture masses may suffer from degeneracies that the aperture mass statistics containing all filter radii might be able to break.
\begin{table}[]
\centering
\caption{Marginalised one-dimensional parameter constraints from the left part of Fig.~\ref{fig:MCMC_T17}. 
For $\expval{\Map^2}$ and $\expval{\Map^3}$ we do not quote upper limits on $\Omm$ or lower limits on $\sigma_8$, as they are dominated by the prior.}
\begin{tabular}{c|c c c}
parameter & $\expval{\Map^2}$ & $\expval{\Map^3}$ & $\expval{\Map^{2,3}}$ \\
\hline
& & & \\[-0.8em]
$\Omm$ & $0.294_{-0.059}$ & $0.229_{-0.048}$ & $0.260^{+0.041}_{-0.040}$ \\[0.1em]
$\sigma_8$ & $0.671^{+0.155}$ & $0.603^{+0.287}$ & $0.842^{+0.075}_{-0.074}$ \\[0.1em]
$S_8$ & $0.813^{+0.023}_{-0.024}$ & $0.786^{+0.022}_{-0.041}$ & $0.792^{+0.017}_{-0.019}$
\end{tabular}
\label{tab:parameter_constraints}
\end{table}
\section{Discussion} \label{sec:discussion} In this work, which is the first of a series on cosmological analysis with third-order shear statistics, we have shown that a cosmological parameter analysis with third-order aperture mass statistics is feasible and beneficial for Stage-III surveys. Both the shear 3pcf and the third-order aperture statistics can be modelled from the matter bispectrum, and we found that our models based on the \textsc{BiHalofit} bispectrum model are accurate enough for Stage-III surveys. In particular, the flat-sky and Limber approximations are valid for our selected range of scales, so the accuracy of our model is mainly limited by the accuracy of the bispectrum model. We note that we have not yet tested the impact of astrophysical or observational systematics. We developed a test for binning strategies of the three-point correlation functions in order to obtain unbiased estimates of aperture mass statistics and found that, at our selected scales, a measurement of the three-point correlation functions in $15^3$ bins yields good results. In particular, we found that the leakage between E- and B-modes is at the percent level, and the bias in the aperture mass statistics is well below the sample variance for a Stage-III survey. 
This extends upon the findings of \citet{Shi:2014}, who found a percent-level leakage for diagonal aperture mass statistics utilising a simplified bispectrum model. We emphasise that, in addition to the effect of a minimum scale in the shear 3pcf investigated by \citet{Shi:2014}, our approach also quantifies the leakage that stems from the binning choices in the shear 3pcf, which result in an inaccurate evaluation of the integral in Eq.~\eqref{eq:MMM_from_gamma}. We have tested the information loss when converting the shear three-point correlation functions to third-order aperture mass statistics by performing a principal component analysis of the three-point correlation functions and comparing the constraining power of the principal components to that of the third-order aperture statistics. We found comparable information content, suggesting that third-order aperture statistics constitute a good data compression method for the shear three-point correlation functions. In addition to being easier to model, the aperture statistics have the advantage that they cleanly separate E- and B-modes. We demonstrated that the computational load of a cosmological parameter analysis with third-order aperture statistics is manageable, particularly when utilising an emulator to speed up the MCMC. We make a \textsc{cosmoSIS}-module of our modelling algorithm publicly available at \url{https://github.com/sheydenreich/threepoint/releases}. Finally, we compared the constraining power between second-order, third-order, and joint aperture statistics analyses. While second-order aperture statistics are not being used in modern cosmological parameter analyses, we assume that all second-order shear statistics exhibit similar constraining power and parameter degeneracies \citep[compare][]{Asgari:2021}. 
We find that third-order aperture statistics alone have a lower constraining power than their second-order counterpart, but they exhibit a different degeneracy direction in the $\Omm$--$\sigma_8$-plane so that a joint analysis almost doubles the constraining power on the structure growth parameter $S_8$ and increases the figure-of-merit in the $\Omm$--$\sigma_8$-plane by a factor of 5.9. However, the information gain predicted by Fisher analyses, especially for the difference between diagonal and full third-order aperture statistics \citep[compare][]{Kilbinger:2005}, appears overly optimistic. This suggests that a Fisher forecast might not be an optimal tool to forecast parameter constraints and mock sampling methods give more realistic constraints. While we have demonstrated here that cosmological analyses with third-order shear statistics are feasible and promising, there are steps left to do before applying our methods to a concrete cosmological survey. The first of these is the development of a model for the covariance of third-order statistics, which is essential for a tomographic analysis and will be addressed in the following paper of this series (Linke et al., in prep). Additionally, as mentioned above, our model does not yet incorporate systematic and astrophysical effects, like baryonic feedback or intrinsic alignments of source galaxies \citep{Semboloni:2013,Pyne:2022}. We will develop and test strategies for treating these effects in future works of this series. \begin{acknowledgements} We thank Benjamin Joachimi and Mike Jarvis for providing valuable insights to this project. We would like to thank Joachim Harnois-D\'eraps for making public the SLICS mock data, which can be found at \url{http://slics.roe.ac.uk/}. This work was funded by the TRA Matter (University of Bonn) as part of the Excellence Strategy of the federal and state governments. This work has been supported by the Deutsche Forschungsgemeinschaft through the project SCHN 342/15-1. 
SH acknowledges support from the German Research Foundation (DFG SCHN 342/13), the International Max-Planck Research School (IMPRS) and the German Academic Scholarship Foundation.\\ \emph{Author contributions.} All authors contributed to the development and writing of this paper. SH wrote the pipeline to model the bispectrum and shear 3pcf and the methods to measure the third-order statistics. LL implemented the modelling algorithm for aperture mass statistics, the GPU-integration, and the cosmosis module, aside from making various improvements to the codes. PB was responsible for everything regarding the T17 simulations and the MCMC runs, including the \textsc{CosmoPower} emulator. PS gave countless valuable insights into third-order shear statistics. \end{acknowledgements} \bibliographystyle{aa} \bibliography{cite} \appendix \onecolumn \section{Testing the \textsc{BiHalofit} bispectrum model} \label{sec:app_testing_bispectrum} \subsection{Measuring the bispectrum} \label{subsec:measuring_bispectrum} To measure the convergence bispectrum $\bkappa$ from simulations, we adapt the estimator developed by \citet{Watkinson:2017}. While their algorithm has been presented for three-dimensional density fields, it can be adapted to two-dimensional convergence fields. Given a convergence field $\kappa(\vtheta)$ and its Fourier transform $\hat{\kappa}(\vell)$, for an $\ell$-bin $\bar{\ell}_i = [\ell_\mathrm{min},\ell_\mathrm{max}]$ we define $\hat{\kappa}(\vell;\bar{\ell}_i)$ as \begin{equation} \hat{\kappa}(\vell;\bar{\ell}_i) = \begin{cases} \hat{\kappa}(\vell) \qquad & \ell_\mathrm{min}\leq |\vell| < \ell_\mathrm{max} \\ 0 & \mathrm{otherwise} \end{cases}\; , \label{eq:defn_kappa_cut} \end{equation} and $\kappa(\vtheta;\bar{\ell}_i)$ as its inverse Fourier transform. 
We also define $I(\vtheta;\bar{\ell}_i)$ as the inverse Fourier transform of $\hat{I}(\vell;\bar{\ell}_i)$ with $\hat{I}$ defined as in Eq.~\eqref{eq:defn_kappa_cut}: \begin{equation} \hat{I}(\vell;\bar{\ell}_i) = \begin{cases} 1 \qquad & \ell_\mathrm{min}\leq |\vell| < \ell_\mathrm{max} \\ 0 & \mathrm{otherwise} \end{cases}\; . \label{eq:defn_kappa_cut_2} \end{equation} The estimator for the convergence bispectrum is then defined as \begin{equation} \bkappa(\bar{\ell}_1,\bar{\ell}_2,\bar{\ell}_3) = \frac{\Omega^2}{\Npix^3}\frac{\sum_i^{\Npix} \kappa(\vtheta_i;\bar{\ell}_1)\kappa(\vtheta_i;\bar{\ell}_2)\kappa(\vtheta_i;\bar{\ell}_3)}{\sum_i^{\Npix} I(\vtheta_i;\bar{\ell}_1)I(\vtheta_i;\bar{\ell}_2)I(\vtheta_i;\bar{\ell}_3)} \; , \end{equation} where $\Omega$ is the solid angle of the respective field and $\Npix$ is the number of pixels. The advantage of this estimator is its speed: With seven Fourier transforms, we can extract the complete averaged bispectrum of a field, where the three Fourier transforms required for the computation of $I(\vtheta;\bar{\ell_i})$ only need to be performed once, even when computing the bispectra of multiple fields. Furthermore, $\kappa(\vtheta;\bar{\ell}_i)$ can be stored for computing bispectra of different triangle configurations containing the same $\bar{\ell}$-bin. Nevertheless, the estimator suffers from one significant drawback: The Fourier transform assumes periodicity of the field $\kappa$, which is normally not given for convergence maps (in contrast to the three-dimensional $N$-body simulation cubes, which usually exhibit periodic boundary conditions). Therefore the estimator can be biased for $\ell$-scales approaching the scales of either the field or individual pixels. We, therefore, discard scales smaller than 5 pixels or larger than a third of the field size. 
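A minimal numpy sketch of this FFT-based estimator is given below. This is an illustrative sketch, not the code used in the paper; the binning and FFT normalisation conventions are simplified, and the common factors of the inverse transforms cancel between numerator and denominator.

```python
import numpy as np

def band_filter(field_ft, ell, ell_min, ell_max):
    # Zero all Fourier modes outside the annulus [ell_min, ell_max).
    return field_ft * ((ell >= ell_min) & (ell < ell_max))

def bispectrum(kappa, pix_size, bins):
    """Bin-averaged convergence bispectrum following the structure of
    the estimator above; `bins` is a list of three (ell_min, ell_max)
    tuples, pix_size is the pixel side in radians."""
    n = kappa.shape[0]
    n_pix = n * n
    omega = (n * pix_size) ** 2                  # field solid angle
    ell_1d = 2.0 * np.pi * np.fft.fftfreq(n, d=pix_size)
    lx, ly = np.meshgrid(ell_1d, ell_1d, indexing="ij")
    ell = np.hypot(lx, ly)
    kappa_ft = np.fft.fft2(kappa)
    ones_ft = np.ones_like(kappa_ft)
    num = np.ones((n, n))
    den = np.ones((n, n))
    for lmin, lmax in bins:
        # Band-filtered kappa(theta; ell_bin) and I(theta; ell_bin).
        num *= np.fft.ifft2(band_filter(kappa_ft, ell, lmin, lmax)).real
        den *= np.fft.ifft2(band_filter(ones_ft, ell, lmin, lmax)).real
    # Prefactor Omega^2 / N_pix^3 as in the estimator above.
    return omega ** 2 / n_pix ** 3 * num.sum() / den.sum()

# Example: Gaussian noise field, equilateral bin configuration.
rng = np.random.default_rng(2)
kappa_map = rng.normal(size=(64, 64))
b_eq = bispectrum(kappa_map, 1.0e-3, [(200.0, 800.0)] * 3)
```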
\subsection{Validation} \label{subsubsec:validation_n_body_sims_bispectrum} To validate our implementation of the \textsc{BiHalofit} algorithm and the Limber integration, we compare our model bispectrum with one extracted from the MS. For all 64 lines-of-sight, we take a convergence map at redshift $z=1$ and use the estimator described in Sect.~\ref{subsec:measuring_bispectrum} to extract the bispectrum for a set of triangle configurations. We compare these to our model predictions in Fig.~\ref{fig:bispectrum_MS_model_vs_theory}. We recover the bispectrum signal quite well, although we fall short of the 10--20\% accuracy reported in \citet{Takahashi:2020}. However, that may very well be due to the limited sample size provided in the MS or the smoothing inside the MS induced by the ray-tracing \citep{Hilbert:2009}. \section{Testing the integration routine for $\Gamma_i$} \label{sec:app_test_gamma_integration} When we define the three-point correlation function of the deflection potential as \begin{equation} \myexpval{\psi(\vec{X})\psi(\vec{Y})\psi(\vec{Z})} = \frac{1}{8\alpha^3}\ee^{-\alpha\left[(\vec{X}-\vec{Y})^2+(\vec{Y}-\vec{Z})^2+(\vec{Z}-\vec{X})^2\right]}\; , \end{equation} we can analytically compute both the three-point correlation functions and the bispectrum. 
For this, we define \begin{equation} \partial_X = \partial_{X_1} + \ii \partial_{X_2} \; , \quad \nabla^2_X = \partial_X\partial^*_X \; , \end{equation} and use the relations \begin{align} \myexpval{ \kappa(\vec{X})\kappa(\vec{Y})\kappa(\vec{Z}) }= & \left(\frac{1}{2}\nabla^2_X\right)\left(\frac{1}{2}\nabla^2_Y\right)\left(\frac{1}{2}\nabla^2_Z\right) \myexpval{ \psi(\vec{X})\psi(\vec{Y})\psi(\vec{Z})} \; ,\\ \myexpval{ \gamma(\vec{X})\gamma(\vec{Y})\gamma(\vec{Z})} = & \left(\frac{1}{2}\partial^2_X\right)\left(\frac{1}{2}\partial^2_Y\right)\left(\frac{1}{2}\partial^2_Z\right) \myexpval{ \psi(\vec{X})\psi(\vec{Y})\psi(\vec{Z})} \; ,\\ \myexpval{ \gamma(\vec{X})\gamma(\vec{Y})\gamma^*(\vec{Z})} = & \left(\frac{1}{2}\partial^2_X\right)\left(\frac{1}{2}\partial^2_Y\right)\left(\frac{1}{2}\partial^{*2}_Z\right) \myexpval{ \psi(\vec{X})\psi(\vec{Y})\psi(\vec{Z})} \; . \end{align} Defining $\vec{x} = \vec{X - Z}$ and $\vec{y} = \vec{Y - Z}$, the following equations hold: \begin{align} \myexpval{ \hat{\kappa}\hat{\kappa}\hat{\kappa}} (\vec{\ell_1}, \vec{\ell_2}, \vec{\ell_3}) = {}&{} -\frac{\pi^4}{6\alpha^5}\ell_1^2\ell_2^2\ell_3^2\,\diracd(\vec{\ell_1}+\vec{\ell_2}+\vec{\ell_3})\ee^{-(\ell_1^2+\ell_2^2+\ell_3^2)/12\alpha} \nonumber\\ = {}&{} -\frac{\pi^4}{6\alpha^5}\ell_1^2\ell_2^2(\ell_1^2+\ell_2^2+2\vec{\ell_1}\cdot\vec{\ell_2})\,\diracd(\vec{\ell_1}+\vec{\ell_2}+\vec{\ell_3}) \ee^{-(\ell_1^2+\ell_2^2+\vec{\ell_1}\cdot\vec{\ell_2})/6\alpha}\\ = {}&{} (2\pi)^2B_\kappa(\vec{\ell}_1,\vec{\ell}_2,\vec{\ell}_3) \diracd(\vec{\ell_1}+\vec{\ell_2}+\vec{\ell_3})\; ,\\ \myexpval{ \gamma\gamma\gamma} (\vec{x},\vec{y}) \overset{(*)}{=} {}&{} \alpha^3 \left[(\vec{x}-2\vec{y})(\vec{x}+\vec{y})(\vec{y}-2\vec{x})\right]^2\, \ee^{-2\alpha(x^2+y^2-\vec{x}\cdot \vec{y})}\; . 
\label{eq:gammagammagamma_analytic_model} \end{align} In the last equation, marked by $(*)$, the variables $\vec{x},\vec{y}$ are interpreted as complex numbers $\vec{x}=x_1+\ii x_2$; for their scalar product, $\vec{x}\cdot\vec{y}=x_1y_1+x_2y_2$ holds. The equation for $\langle\gamma\gamma\gamma^*\rangle$ was computed via \textsc{Mathematica} and is too long to denote here. We can now set \begin{equation} b(\ell_1,\ell_2,\varphi) = -\frac{\pi^2\ell_1^2\ell_2^2(\ell_1^2+\ell_2^2+2\ell_1\ell_2\cos\varphi)}{24\alpha^5}\ee^{-(\ell_1^2+\ell_2^2+\ell_1\ell_2\cos(\varphi))/6\alpha}\, . \label{eq:bkappa_analytic_model} \end{equation} We then transform the Cartesian shears in Eq.~\eqref{eq:gammagammagamma_analytic_model} to the orthocenters using Eq.~\eqref{eq:rotation_orthocenter}, and use the fact that $\Gamma^{(0)}=\expval{\gamma\gamma\gamma}$ and $\Gamma^{(3)}=\expval{\gamma\gamma\gamma^*}$ (see Eq.~\ref{eq:defn_natural_components}), to test our integration routine using an analytic model. As can be seen in Fig.~\ref{fig:shear_3pcf_analytic_model}, the integration routine is accurate to the sub-percent level. \section{Additional figures}
Title: Analytic models of dust temperature in high-redshift galaxies
Abstract: We investigate physical reasons for high dust temperatures ($T_\mathrm{dust}\gtrsim 40$ K) observed in some high-redshift ($z>5$) galaxies using analytic models. We consider two models that can be treated analytically: the radiative transfer (RT) model, {where a broad distribution of values for $T_\mathrm{dust}$ is considered}, and the one-temperature (one-$T$) model, which assumes {uniform $T_\mathrm{dust}$}. These two extremes {serve to bracket the most realistic scenario}. We adopt the Kennicutt--Schmidt (KS) law to relate stellar radiation field to gas surface density, and vary the dust-to-gas ratio. As a consequence, our model is capable of predicting the relation between the surface density of star formation rate ($\Sigma_\mathrm{SFR}$) or dust mass ($\Sigma_\mathrm{dust}$) and $T_\mathrm{dust}$. We show that the high $T_\mathrm{dust}$ observed at $z\gtrsim 5$ favour low dust-to-gas ratios ($\lesssim 10^{-3}$). An enhanced star formation compared with the KS law gives an alternative explanation for the high $T_\mathrm{dust}$. The dust temperatures are similar between the two (RT and one-$T$) models as long as we use ALMA Bands 6--8. We also examine the relation among $\Sigma_\mathrm{SFR}$, $\Sigma_\mathrm{dust}$ and $T_\mathrm{dust}$ without assuming the KS law, and confirm the consistency with the actual observational data at $z>5$. In the one-$T$ model, we also examine a clumpy dust distribution, which predicts lower $T_\mathrm{dust}$ because of the leakage of stellar radiation. This enhances the requirement of low dust abundance or high star formation efficiency to explain the observed high $T_\mathrm{dust}$.
https://export.arxiv.org/pdf/2208.04546
\label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \begin{keywords} dust, extinction -- galaxies: evolution -- galaxies: high-redshift -- galaxies: ISM -- submillimetre: galaxies -- radiative transfer. \end{keywords} \section{Introduction}\label{sec:intro} The interstellar medium (ISM) of galaxies usually contains dust grains, which play an important role in various physical processes on galactic or sub-galactic scales. Dust absorbs and scatters the radiation from stars, and reradiates it at infrared (IR)--submillimetre (submm) wavelengths \citep[e.g.][]{Buat:1996aa,Calzetti:2000aa}. In this way, dust strongly modifies the spectral energy distribution (SED) of the interstellar radiation field and that of galaxy emission \citep[e.g.][]{Silva:1998aa,Takagi:2003aa,Takeuchi:2005aa}. This means that part of the star formation activity in a galaxy can only be traced in the IR--submm \citep[e.g.][]{Kennicutt:1998ab,Inoue:2000aa}, and that when we extract galaxy properties (stellar mass, age, etc.) from SED fitting, it is crucial to appropriately consider dust extinction and reemission \citep[e.g.][]{da-Cunha:2008aa,Boquien:2019aa,Abdurrouf:2021aa,Ferrara:2022aa}. When dust absorbs ultraviolet (UV) light, it emits photoelectrons, contributing to the heating of the ISM \citep[e.g.][]{Tielens:2005aa}. Dust surfaces are reaction sites for molecular hydrogen formation \citep[e.g.][]{Gould:1963aa,Cazaux:2004aa}. This makes cold star-forming regions rich in H$_2$ molecules \citep{Yamasawa:2011aa,Chen:2018aa,Romano:2022aa}. Dust also induces fragmentation in the star formation process, determining a characteristic stellar mass. Through this process, dust also affects the stellar initial mass function (IMF; e.g.\ \citealt{Omukai:2005aa,Schneider:2006aa}). The above effects of dust may have already been important at high redshift ($z$) since some galaxies are already dusty at $z>5$ \citep[e.g.][]{Capak:2015aa,Burgarella:2020aa,Fudamoto:2021aa}. 
The redshift frontier of dust observation has been expanded to $z>5$ \citep[e.g.][]{Dayal:2018aa} because of the high capability of the Atacama Large Millimetre/submillimetre Array (ALMA). For a 'typical' population of high-redshift galaxies, Lyman break galaxies (LBGs), dust emission has been detected even at $z>7$ \citep[e.g.][]{Watson:2015aa,Laporte:2017aa,Tamura:2019aa,Hashimoto:2019aa,Schouws:2022aa,Inami:2022aa}, although we should also note that most LBGs at such high redshift have too weak dust emission to be detected by ALMA with a limited integration time \citep[e.g.][]{Bouwens:2016aa,Fudamoto:2020aa}. Observationally, correctly estimating the dust mass is of fundamental importance. The dust masses derived from ALMA observations of high-redshift galaxies are highly uncertain because it is difficult to obtain precise dust temperatures. Some studies succeeded in obtaining dust temperatures in LBGs at $z\gtrsim 5$ from multi-wavelength ALMA data. A1689-zD1, detected with ALMA in Band~6 (1300~$\micron$; \citealt{Watson:2015aa}), was later followed up in Band~7 (870~$\micron$) by \citet{Knudsen:2017aa}, who obtained a dust temperature of 35--45~K, higher than the values in nearby spiral galaxies ($\sim 20$--25 K; e.g.\ \citealt{Draine:2007aa}). The high dust temperature of this object was confirmed by further detections in Band 8 (730~$\micron$; \citealt{Inoue:2020aa}) and Band 9 (430 $\micron$; \citealt{Bakx:2021aa}). Including such short wavelengths may be important to trace LBGs with high dust temperatures \citep{Chen:2022aa}, {as also demonstrated for the above object by \citet{Bakx:2021aa}}. \citet{Burgarella:2020aa} compiled ALMA detections of LBGs at various $z(>5)$, which enabled them to statistically trace dust emission SEDs at different restframe wavelengths \citep[see also][]{Nanni:2020aa,Burgarella:2022aa}, and obtained dust temperatures of 40--70 K. 
\citet{Faisst:2020aa} estimated dust temperatures of four LBGs at $z\sim 5.5$ as 30--43 K \citep[see also][]{Faisst:2017aa}. \citet{Bakx:2020aa} obtained an even higher dust temperature for a LBG at $z=8.31$ ($>80$ K). \citet{Sommovigo:2021aa,Sommovigo:2022aa} indirectly derived dust temperatures by utilizing some empirical relations involving [C \textsc{ii}] 158 $\micron$ emission, obtaining similar dust temperatures to the above ($\sim 30$--70 K) for a sample of $z>5$ galaxies. These values imply not only systematically warmer dust than in nearby galaxies but also a large variety in dust temperature at $z\gtrsim 5$. The physical reason for high dust temperature is worth clarifying because it may give us a clue to the evolution of star formation activities and dust properties. In fact, a tendency of increasing dust temperature with redshift is observed at $z\lesssim 4$ \citep{Bethermin:2015aa,Schreiber:2018aa,Bethermin:2020aa,Bouwens:2020aa,Faisst:2020aa,Viero:2022aa}, although we need to be careful about the selection effect \citep{Lim:2020aa}. Cosmological simulations also predict high dust temperature at high redshift \citep{Behrens:2018aa,Aoyama:2019aa,Ma:2019aa,Liang:2019aa,Vijayan:2022aa,Pallottini:2022aa}. The tendency of higher dust temperature at higher redshift could be related to increasing star formation efficiencies (or equivalently decreasing gas-depletion time-scales; \citealt{Magnelli:2014aa,Sommovigo:2022aa}). High dust temperature could also be realized if star-forming regions have concentrated, compact morphologies \citep{Ferrara:2017aa,Behrens:2018aa,Liang:2019aa,Sommovigo:2020aa,Pallottini:2022aa}. 
\citet{Sommovigo:2022aa} also considered the effect of dust mass (as taken into account by other theoretical studies; e.g.\ \citealt{Hirashita:2002aa}) in determining the dust temperature, which effectively includes {shielding of stellar light}; that is, as the dust mass increases, the dust shields the stellar radiation and lowers the dust heating per dust mass. This means that low dust abundance, as well as high stellar radiation intensity, is important for raising the dust temperature towards high redshift. Since the above conclusions are derived in different contexts, we aim here to focus on the possible essential quantities -- dust abundance and stellar radiation field -- that affect the dust temperature. This serves to clarify the physical conditions that could explain the observed high dust temperatures at high redshift. We formulate the problem by focusing on the physical processes that determine the dust temperature -- heating by stellar radiation and dust radiative cooling. The balance between these two processes is treated by an equilibrium condition {as in \citet{Ferrara:2017aa} and \citet{Sommovigo:2020aa}}. In other words, this paper investigates how the equilibrium condition is affected by the star formation activities and dust properties. To make the physical processes transparent, we treat the problem analytically, which is complementary to the numerical simulations mentioned above. The transparency of our approach is also useful to {examine the dust shielding effects with a variety of dust distribution geometries and dust properties (grain sizes and compositions),} further serving to examine how robustly dust abundance and stellar radiation field affect the dust temperature. Utilizing the developed analytic models, we also address the effects of grain compositions and grain size distribution, which are suggested to influence the observational properties of galaxies at UV and IR wavelengths \citep{Yajima:2014aa}. 
In this paper, we focus on $z>5$, where the current redshift frontier of dust observation is located. Nevertheless, we emphasize that the physical processes treated in this paper are common to all redshifts. Thus, the conclusions drawn in this paper are qualitatively applicable to galaxies at $z<5$. In particular, we plan a separate study for local galaxies by using the framework developed in this paper to further test our theoretical predictions (Chiang et al., in preparation). Focusing on a certain range of redshift is useful to minimize the variation in redshift-dependent physical processes, such as the redshift evolution of the gas-depletion time \citep{Sommovigo:2022aa}, and systematic differences in stellar populations. This paper is organized as follows. We explain the models for dust temperature in Section~\ref{sec:model}. We show the results in Section~\ref{sec:result}. We discuss some further issues, especially the dependence on various parameters, in Section \ref{sec:discussion}. Section \ref{sec:conclusion} concludes this paper. We adopt the following cosmological parameters: $\Omega_\Lambda=0.7$, $\Omega_\mathrm{M}=0.3$, and $H_{0}= 70$ km s$^{-1}$ Mpc$^{-1}$. \section{Model}\label{sec:model} In this paper, we develop analytic models for the dust temperature in a galaxy. To make an analytic treatment possible, we consider the following two extremes, which simplify the problem but still capture the essential physical factors affecting the dust temperature: {(i) one is the case where we consider a distribution of values for the dust temperature, while (ii) the other assumes a single dust temperature value.} We refer to these models as the (i) \textit{radiative transfer (RT) model} and (ii) \textit{one-temperature (one-$T$) model}, respectively. The first model solves radiative transfer {in a simple dust--stars geometry}, while the second can treat another complexity -- the dust distribution geometry. 
The two models are complementary, and {capture} different physical aspects that vary the dust temperature under a fixed star formation activity in a galaxy. In this paper we do not explicitly consider the effect of the cosmic microwave background (CMB) on the dust temperature \citep[e.g.][]{da-Cunha:2013aa} since it is redshift-dependent. Practically, the CMB sets a floor for the dust temperature; thus, any dust temperature below the CMB temperature, $2.73(1+z)$ K, is not physically permitted. However, since we are mainly interested in {galaxies whose observed dust temperatures are} significantly higher than the CMB temperature, the CMB does not affect our discussions and conclusions. Thus, we do not apply the redshift-dependent correction for the CMB temperature, so that we do not have to specify the redshift for each result. Note that the background (including the CMB) is already subtracted from the observational data used for comparison. \subsection{Basic setup} \subsubsection{Galaxy properties}\label{subsubsec:galaxy} We represent the masses of dust, gas, and stars by their surface densities, denoted as $\Sigma_\mathrm{dust}$, $\Sigma_\mathrm{gas}$ and $\Sigma_\star$, respectively. Quantities per surface area are convenient since the dust temperature is determined by the radiation intensity, which has the same dimension as the surface brightness (luminosity per surface area). For simplicity, we assume that the dust, gas and stars are distributed in a uniform disc, so that the above three quantities represent the galaxy properties. Although our formulation implicitly assumes disc geometry, we expect that our results are not strictly limited to discs, because the dust heating radiation in a galaxy has on average an intensity on the order of $\sim L/(\upi R^2)$, where $L$ and $R$ represent the stellar luminosity and the optical galaxy size, respectively. 
In particular, the geometry factor causes an uncertainty of at most factor 4 ($4\pi$ instead of $\pi$ in spherical shell geometry; e.g.\ \citealt{Inoue:2020aa}), which affects the dust temperature only by a factor of $\sim 4^{1/6}\sim 1.26$ at most. Thus, our results could be applied to any geometry with a 20--30 per cent uncertainty in the dust temperature. Since we are interested in normal LBGs, we neglect the contribution from AGN heating \citep[see e.g.][for the effect of AGN heating in high-redshift galaxies]{DiMascia:2021aa}. We also neglect small-scale inhomogeneity that could not be included in our treatment of smooth surface densities (as commented in Section~\ref{subsec:complex}). It is empirically established that the SFR is tightly related to the gas mass. This relation is described by the Kennicutt--Schmidt (KS) law as \citep{Kennicutt:1998aa} \begin{align} \left(\frac{\Sigma_\mathrm{SFR}}{\mathrm{M_{\sun}~yr^{-1}~kpc^{-2}}}\right)=1.0\times 10^{-12} \kappa_\mathrm{s}\left(\frac{\Sigma_\mathrm{gas}}{\mathrm{M_{\sun}~kpc^{-2}}}\right)^{1.4}, \label{eq:KS} \end{align} where $\Sigma_\mathrm{SFR}$ is the surface density of the SFR, and $\kappa_\mathrm{s}$ is the burstiness parameter. Following {\citet{Ferrara:2019aa} and} \citet{Sommovigo:2021aa}, we include the correction factor $\kappa_\mathrm{s}$ explicitly, and we adopt $\kappa_\mathrm{s}=1$ for the default KS law. We also define the formed stellar mass, $\Sigma_{\star ,0}$, as \begin{align} \Sigma_{\star ,0}=\Sigma_\mathrm{SFR}\tau_\star ,\label{eq:Mstar} \end{align} where $\tau_\star$ is the age of the star formation activity. For simplicity, we assume a constant SFR within the duration $\tau_\star$. 
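In code, the two relations above are a direct transcription of equations~(\ref{eq:KS}) and (\ref{eq:Mstar}), with units as given in the text:

```python
import numpy as np

def sigma_sfr(sigma_gas, kappa_s=1.0):
    """Kennicutt-Schmidt law, equation (eq:KS): Sigma_SFR
    [Msun yr^-1 kpc^-2] from Sigma_gas [Msun kpc^-2], with
    burstiness parameter kappa_s (kappa_s = 1: default KS law)."""
    return 1.0e-12 * kappa_s * sigma_gas**1.4

def sigma_star0(sigma_gas, tau_star=1.0e8, kappa_s=1.0):
    """Formed stellar mass surface density, equation (eq:Mstar),
    for a constant SFR over the age tau_star [yr]."""
    return sigma_sfr(sigma_gas, kappa_s) * tau_star

# Example: Sigma_gas = 1e8 Msun/kpc^2 under the default KS law
# gives Sigma_SFR = 1e-12 * (1e8)^1.4 ~ 0.16 Msun/yr/kpc^2.
sfr = sigma_sfr(1.0e8)
```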
The surface luminosity density (stellar luminosity per surface area per frequency $\nu$), $\mathcal{I}_{\star\nu}$, is calculated as \begin{align} \mathcal{I}_{\star\nu}=\Sigma_{\star, 0}\ell_\nu , \end{align} where $\ell_\nu$ is the luminosity density per formed stellar mass, and is calculated using a spectral synthesis model. To calculate the SED per stellar mass ($\ell_\nu$), we use \textsc{starburst99}\footnote{\url{https://www.stsci.edu/science/starburst99/docs/default.htm}} \citep{Leitherer:1999aa} with a constant SFR and an age of $\tau_\star$. For simplicity, we fix the stellar SED and assume $\tau_\star\sim 10^8$~yr, which is roughly a typical stellar age for high-redshift LBGs \citep[][and references therein]{Liu:2019aa}. Since UV radiation, which saturates in $\sim 10^8$ yr for a constant SFR, is the dominant source of dust heating \citep{Buat:1996aa}, the resulting dust temperature is not sensitive to the adopted age as long as $\tau_\star\gtrsim 10^8$ yr. Since we are interested in an early phase of metal enrichment, we set the stellar metallicity to a sub-solar value, 0.008 ($\sim 1/2$~Z$_\odot$). {Although we consider a wide range in the dust-to-gas ratio (note that the gas-phase metallicity is not used in our model), we fix the stellar metallicity, since it has much less impact than other parameters (e.g.\ dust-to-gas ratio) on the dust temperature.} We adopt the Kroupa initial mass function \citep{Kroupa:2002aa} with a stellar mass range of 0.1--100 M$_{\sun}$. \subsubsection{Dust properties}\label{subsubsec:dust_properties} To make the problem analytically tractable, we neglect scattering, and only consider absorption by dust. Scattering could raise the chance of absorption because it effectively increases the path length of the photons. However, the cross-section for scattering is at most comparable to that of absorption, so that the absorbed energy increases by a factor of $\sim$2 at most. 
The dust temperature, which depends on the absorbed energy to the power $\sim 1/6$, does not change significantly. Changing other parameters such as dust-to-gas ratio, which increases the dust opacity proportionally, has a larger impact on the dust temperature. Thus, neglecting scattering does not influence our conclusions in this paper. The mass absorption coefficient (absorption cross-section per gas mass), $\kappa_\mathrm{g,abs}(\nu )$, is evaluated, assuming compact spherical grains, as \begin{align} \kappa_\mathrm{g,abs}(\nu )=\mathcal{D}\, \frac{\int_0^\infty\pi a^2Q_\mathrm{abs}(a,\,\nu )n(a)\,\mathrm{d}a} {\int_0^\infty\frac{4}{3}\pi a^3sn(a)\,\mathrm{d}a},\label{eq:kappa} \end{align} where $\mathcal{D}$ is the dust-to-gas ratio, $a$ is the grain radius, $Q_\mathrm{abs}(a,\,\nu )$ is the ratio of absorption to geometrical cross-sections, $s$ is the dust material density, and $n(a)$ is the grain size distribution, which is defined such that $n(a)\,\mathrm{d}a$ is the number density of grains in the radius range from $a$ to $a+\mathrm{d}a$. The absorption cross-section, specifically $Q_\mathrm{abs}(a,\,\nu )$, is calculated using the Mie theory \citep{Bohren:1983aa} with silicate or graphite properties given in \citet{Weingartner:2001aa}. We adopt $s=3.5$ and 2.24 g cm$^{-3}$ for silicate and graphite, respectively \citep{Weingartner:2001aa}. We consider the following power law form for the grain size distribution with index $(-p)$: \begin{align} n(a)= \begin{cases} Ca^{-p} & \text{if $a_\mathrm{min}\leq a\leq a_\mathrm{max}$}, \\ 0 & \text{otherwise}, \end{cases} \end{align} where $C$ is the normalizing constant. In this paper, it is not necessary to determine $C$, because it is cancelled out in $\kappa_\mathrm{g,abs}$ (equation \ref{eq:kappa}). Since we are not interested in the detailed grain size distribution, we fix $a_\mathrm{min}=0.001~\micron$ and $a_\mathrm{max}=0.25~\micron$ and only move $p$. 
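The cancellation of the normalising constant $C$ in equation~(\ref{eq:kappa}) can be made explicit numerically. The sketch below uses a toy absorption efficiency instead of the Mie calculations in the text and assumes consistent units throughout; in the limit $Q_\mathrm{abs}\propto a$ (grains small compared with the wavelength), the ratio reduces to $3\mathcal{D}/(4s)$ times the proportionality constant, independent of the size distribution.

```python
import numpy as np

def kappa_g_abs(dust_to_gas, p, q_abs, s, a_min=1.0e-3, a_max=0.25,
                n_grid=100_000):
    """Mass absorption coefficient of equation (eq:kappa) for a
    power-law size distribution n(a) ~ a^{-p}; the constant C is
    omitted because it cancels in the numerator/denominator ratio.
    q_abs(a) is a toy absorption efficiency (consistent units assumed)."""
    a = np.linspace(a_min, a_max, n_grid)
    weight = a**(-p)                    # n(a) up to the constant C
    da = a[1] - a[0]
    num = np.sum(np.pi * a**2 * q_abs(a) * weight) * da
    den = np.sum((4.0 / 3.0) * np.pi * a**3 * s * weight) * da
    return dust_to_gas * num / den

# With Q_abs proportional to a, the result is 3*D/(4*s) for any p.
k1 = kappa_g_abs(0.01, 3.5, lambda a: a, s=3.5)
```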
The Milky Way extinction curve can be fitted with $p=3.5$ by mixing silicate and graphite \citep[][hereafter MRN]{Mathis:1977aa}. Since dust properties at high redshift are uncertain, we examine silicate and graphite separately. In addition to $p=3.5$, we also examine $p=2.5$ and 4.5. The shallower (steeper) power $p=2.5$ (4.5) represents a case where large (small) grains dominate both dust mass and surface area. Each of these two values corresponds to an extreme case where small grains are efficiently destroyed by sputtering \citep{Hirashita:2015aa} or produced by shattering \citep{Hirashita:2013ab}. A larger value of $p$ leads to a steeper rise of $\kappa_\mathrm{g,abs}(\nu )$ towards short wavelengths because of a higher abundance of small grains. Graphite has a similar mass absorption coefficient to silicate at $\lambda\lesssim 0.15~\micron$ (where $\lambda$ is the rest wavelength), but has larger values at longer wavelengths. {The above variations, especially those in $p$, already include extreme cases for extinction curves, since they produce a larger variety in the steepness of extinction curves than examined by \citet{Weingartner:2001aa} for nearby galaxies. Moreover, we will later show that even with these variations, the grain properties have a minor influence on the dust temperature (Section \ref{subsec:dust_properties}).} \subsection{Two models} \subsubsection{RT model}\label{subsubsec:RT} In this model, we {consider} the multi-temperature effect realized by dust shielding of stellar light. As mentioned above, we assume homogeneity in the directions parallel to the disc plane {and that the disc thickness is much smaller than the radial extension of the disc}. We use the coordinate $\zeta$ in the vertical direction with $\zeta =0$ corresponding to the disc mid-plane.
{To examine the shielding effect of dust under the plane-parallel geometry}, we assume that all the stars are located in the mid-plane (i.e.\ at $\zeta =0$) and that the dust is distributed as `screens' symmetrically at $\zeta <0$ and $>0$. {In our treatment, each `layer' of dust has a different dust temperature, so that a multi-dust-temperature structure emerges.} As mentioned in Section \ref{subsubsec:dust_properties}, we neglect scattering by dust. With the above setup, we derive the intensity at $\zeta$ on a light path whose direction has an angle of $\theta$ from the vertical {(positive $\zeta$)} direction. This intensity (as a function of frequency) is denoted as $I_\nu =I_\nu (\zeta ,\,\mu )$, where $\mu\equiv\cos\theta$. First, we consider UV--optical wavelengths where stellar emission is dominant (dust emission is negligible). The radiative transfer equation including dust absorption and stellar emission is written as \begin{align} \mu\frac{\mathrm{d}I_\nu}{\mathrm{d}\zeta}=-\kappa_\mathrm{g,abs}(\nu)\rho (\zeta ) I_\nu+\frac{1}{4\pi}\mathcal{I}_{\star\nu}\delta (\zeta ),\label{eq:radtr} \end{align} where $\rho (\zeta )$ is the gas density at $\zeta$, and $\delta (\zeta )$ is Dirac's delta function. The above equation can be solved as \citep[see also][]{Hirashita:2019ab} \begin{align} I_\nu (\zeta ,\,\mu )=\frac{\mathcal{I}_{\star\nu}}{4\pi\mu}\exp\left( -\frac{1}{\mu}\kappa_\mathrm{g,abs}(\nu )\,\tilde{\Sigma}_\mathrm{gas} (\zeta )\right) , \end{align} where \begin{align} \tilde{\Sigma}_\mathrm{gas}(\zeta )= \int_0^\zeta\rho (\zeta' )\,\mathrm{d}\zeta' ,\label{eq:Sigma_gas} \end{align} is the {surface density measured from the disc mid-plane up to} height $\zeta$. In practice, we use $\tilde{\Sigma}_\mathrm{gas}$ instead of $\zeta$ for {the integration variable}.
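As a quick numerical check of the solution above, the mean intensity $J_\nu =\frac{1}{2}\int_{-1}^{1}I_\nu\,\mathrm{d}\mu$ can be evaluated by direct $\mu$-quadrature; for a mid-plane source sheet only outgoing rays ($\mu >0$) carry radiation at $\tilde{\Sigma}_\mathrm{gas}>0$, and the $\mu$-integral reduces to the first exponential integral $E_1$, which we use as a cross-check. The Python sketch below uses arbitrary consistent units and is illustrative only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

def mean_intensity(kappa_abs, sigma_tilde, i_star=1.0):
    """Mean intensity J at gas column sigma_tilde (> 0) above the mid-plane.
    Only upward rays (mu > 0) are present above a delta-function stellar
    sheet, so J = (1/2) int_0^1 [i_star/(4 pi mu)] exp(-kappa*sigma/mu) dmu.
    Requires kappa_abs * sigma_tilde > 0 (J diverges at the source sheet)."""
    tau = kappa_abs * sigma_tilde
    val, _ = quad(lambda mu: np.exp(-tau / mu) / mu, 0.0, 1.0)
    return i_star / (8.0 * np.pi) * val
```

For this slab geometry the quadrature equals $i_\star E_1(\tau )/(8\upi )$ with $\tau =\kappa_\mathrm{g,abs}\tilde{\Sigma}_\mathrm{gas}$, so the radiation field decays faster than a simple exponential with increasing column.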
{Since $\mathrm{d}\tilde{\Sigma}_\mathrm{gas}=\rho (\zeta )\,\mathrm{d}\zeta$ (equation \ref{eq:Sigma_gas}), we do not need to specify the profile of $\rho (\zeta )$ if we use $\tilde{\Sigma}_\mathrm{gas}$ for the integration variable. Thus, we hereafter use $\tilde{\Sigma}_\mathrm{gas}$ not $\zeta$ to indicate the vertical coordinate.} We also note that $\tilde{\Sigma}_\mathrm{gas}$ always appears together with the mass absorption coefficient, so that we could also treat $\tilde{\Sigma}_\mathrm{dust}\equiv\mathcal{D}\tilde{\Sigma}_\mathrm{gas}$ as {an integration variable} once we give $\mathcal{D}$, which is treated as a constant parameter in this paper. {The integration is performed up to a point where $\tilde{\Sigma}_\mathrm{gas}=\Sigma_\mathrm{gas}/2$ (the total surface density in the upper plane) is reached.} The dust temperature at $\tilde{\Sigma}_\mathrm{gas}(\zeta )$, denoted as $T_\mathrm{dust}(\tilde{\Sigma}_\mathrm{gas})$ is estimated from the radiative equilibrium: \begin{align} \int_{912~\text{\AA}}^\infty\kappa_\mathrm{g,abs}(\nu )J_\nu (\tilde{\Sigma}_\mathrm{gas}) \,\mathrm{d}\nu = \int_0^\infty\kappa_\mathrm{g,abs}(\nu )B_\nu [T_\mathrm{dust}(\tilde{\Sigma}_\mathrm{gas})]\,\mathrm{d}\nu,\label{eq:radeq} \end{align} where $J_\nu (\tilde{\Sigma}_\mathrm{gas})$ is the intensity averaged for the solid angle as a function of $\tilde{\Sigma}_\mathrm{gas}$, and $B_\nu (T_\mathrm{dust})$ is the Planck function at frequency $\nu$ and dust temperature $T_\mathrm{dust}$. {The lower limit of the integration range on the left-hand side is set to 912 \AA, since radiation at shorter wavelengths is mostly absorbed by hydrogen.} The mean intensity is given by \begin{align} J_\nu (\tilde{\Sigma}_\mathrm{gas})=\frac{1}{2}\int_{-1}^1I_\nu (\tilde{\Sigma}_\mathrm{gas},\,\mu )\,\mathrm{d}\mu . 
\end{align} {The main contribution to the integration on the left-hand side of equation (\ref{eq:radeq}) comes from $\lambda\lesssim 4000$ \AA\ \citep[see also][]{Buat:1996aa}, while that on the right-hand side comes from IR wavelengths. The numerical integrations are executed in sufficiently wide wavelength ranges that cover the relevant wavelengths. We also note that} $\kappa_\mathrm{g,abs}$ is insensitive to the grain radius {(or equivalently to the grain size distribution)} at {IR} wavelengths {since the grain radii are much smaller than the wavelengths}. Thus, in the actual calculation, we use the values and wavelength dependence derived by \citet{Hirashita:2014aa} when we evaluate the right-hand side of equation (\ref{eq:radeq}) to save the computational cost; that is, $\kappa_\mathrm{g,abs}(\nu )=\mathcal{D}\kappa_{158}(\nu /\nu_{158})^\beta$ with $(\kappa_{158},\,\beta )=(13.2~\mathrm{cm^2~g^{-1}},\, 2)$, $(20.9~\mathrm{cm^2~g^{-1}},\, 2)$ for silicate and graphite, respectively ($\kappa_{158}$ is the dust mass absorption coefficient at $\lambda =158~\micron$, $\nu_{158}$ is the frequency corresponding to $\lambda =158~\micron$, and $\beta$ is the emissivity index). {This power-law approximation holds for the wavelength range of interest for dust emission ($\lambda\gtrsim 40~\micron$).} Note that the factor $\mathcal{D}$ converts this to an absorption coefficient per gas mass. We solve equation (\ref{eq:radeq}) for $T_\mathrm{dust}$ as a function of $\tilde{\Sigma}_\mathrm{gas}$. Finally, the dust emission at each layer is superposed to obtain the observed dust SED. We assume that the dust emission is optically thin, which holds for the {surface} density range we are interested in (see below).
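As an illustrative numerical sketch, the radiative-equilibrium equation (\ref{eq:radeq}) can be solved for $T_\mathrm{dust}$ by root finding once the absorbed power (the left-hand side) is given as a number. Substituting $x=h\nu /(k_\mathrm{B}T_\mathrm{dust})$ in the emission integral with the power-law opacity above makes the $T_\mathrm{dust}^{4+\beta}$ scaling explicit, which is the origin of the weak ($\sim 1/6$ power for $\beta =2$) dependence of the dust temperature on the absorbed energy noted earlier. The Python sketch below uses the silicate parameters quoted in the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # cgs: h, c, k_B
NU158 = C / 158.0e-4                         # frequency at 158 micron

def emitted_integral(T, D=1e-3, kappa158=13.2, beta=2.0):
    """RHS of the radiative-equilibrium equation:
    int_0^inf kappa_g,abs(nu) B_nu(T) dnu, with
    kappa_g,abs = D * kappa158 * (nu/nu158)^beta (cgs).  The substitution
    x = h*nu/(kB*T) isolates a T^(4+beta) prefactor times a fixed integral."""
    xint, _ = quad(lambda x: x**(3.0 + beta) / np.expm1(x), 1e-8, 60.0)
    return (D * kappa158 * (2.0 * H / C**2)
            * (KB * T / H)**(4.0 + beta) * NU158**(-beta) * xint)

def dust_temperature(absorbed, **kw):
    """Solve emitted_integral(T) = absorbed (the LHS of the equation) for T."""
    return brentq(lambda T: emitted_integral(T, **kw) - absorbed, 1.0, 1.0e4)
```

For $\beta =2$, doubling the absorbed power raises $T_\mathrm{dust}$ by only a factor $2^{1/6}\approx 1.12$.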
We calculate the output dust SED per surface area, $\mathcal{I}_\mathrm{dust}^\mathrm{RT}(\nu )$, for the RT model as \begin{align} \mathcal{I}_\mathrm{dust}^\mathrm{RT}(\nu )= 2\int_0^{\Sigma_\mathrm{gas}/2}4\upi\kappa_\mathrm{g,abs}(\nu )B_\nu [T_\mathrm{dust}(\tilde{\Sigma}_\mathrm{gas})] \,\mathrm{d}\tilde{\Sigma}_\mathrm{gas}, \end{align} where the integration is multiplied by 2 to account for the lower half of the disc. To confirm the optically thin assumption for dust emission, we estimate the optical depth in the far-IR (FIR), $\tau_\mathrm{FIR}(\lambda )$, as $\tau_\mathrm{FIR}(\lambda )=0.069(\kappa_{158}/13.2~\mathrm{cm^2~g^{-1}}) (\lambda /100~\micron )^{-2} (\Sigma_\mathrm{dust}/10^7~\mathrm{M_{\sun}~kpc^{-2}})$. Since we are interested in the wavelength range $\lambda\gtrsim 100~\micron$ and dust {surface} density $\sim 10^7~\mathrm{M_{\sun}~kpc^{-2}}$, the optically thin assumption holds. We do not discuss {surface} densities $\Sigma_\mathrm{dust}>10^8~\mathrm{M_{\sun}~kpc^{-2}}$ and leave such a high optical depth regime for future work, since it requires a fully numerical iterative framework of energy balance and radiative transfer. \subsubsection{One-$T$ model}\label{subsubsec:oneT} In the one-$T$ model, we assume that the radiation field is uniform. This is the opposite extreme to the RT model, in which non-uniformity of the dust temperature naturally emerges. We basically follow the treatment described by \citet{Inoue:2020aa} {\citep[see also][for a recent application to high-$z$ galaxies]{Fudamoto:2022aa}}. Because of the uniformity, we assume that the stars and dust are mixed homogeneously on a galactic scale. We consider two cases: one is the \textit{homogeneous geometry}, in which the distribution of dust is smooth and homogeneous, and the other is the \textit{clumpy geometry}, which allows for a clumpy distribution of dust (but the spherical clumps, which have the same radius and density, are distributed homogeneously).
The stars are assumed to be distributed uniformly in both geometries. To evaluate the dust temperature, we use equation (\ref{eq:radeq}), but we adopt the following estimate for the radiation field, $J_\nu =J_\nu^\mathrm{one}$ (note that $J_\nu$ does not depend on the position in the galaxy by assumption). We denote the escape fraction of the stellar radiation at frequency $\nu$ as $P_\mathrm{esc}(\tau_\nu )$, where $\tau_\nu$ is the effective optical depth of the dust at $\nu$. Since the stellar radiation that does not escape from the galaxy is absorbed by dust, we can relate the escape fraction and $\Sigma_\mathrm{gas}\kappa_\mathrm{g,abs}(\nu )J_\nu^\mathrm{one}$ (absorbed stellar radiation luminosity per surface area) as \begin{align} 4\upi\Sigma_\mathrm{gas}\kappa_\mathrm{g,abs}(\nu )J_\nu^\mathrm{one}= \left[ 1-P_\mathrm{esc}(\tau_\nu )\right]\mathcal{I}_{\star\nu} .\label{eq:radeq_one} \end{align} We evaluate $P_\mathrm{esc}(\tau_\nu )$ using the averaged escape fraction in a plane-parallel disc, where we use the optical depth in the vertical direction for $\tau_\nu$ (evaluated below). The escape fraction of a plane-parallel disc, averaged over the source position, in a direction at an angle $\theta$ ($0\leq\theta <\upi /2$) from the vertical direction is $(1-\mathrm{e}^{-\tau_\nu /\mu})/(\tau_\nu /\mu )$. Thus, averaging over the full solid angle, we obtain the escape fraction as \begin{align} P_\mathrm{esc}(\tau_\nu )=\int_0^1\frac{1-\mathrm{e}^{-\tau_\nu /\mu}}{\tau_\nu /\mu}\, \mathrm{d}\mu .\label{eq:Pesc} \end{align} {The optical depth $\tau_\nu$ is estimated in different ways for the homogeneous and clumpy geometries as explained below.} \paragraph*{Homogeneous geometry} For the homogeneous geometry, the effective optical depth $\tau_\nu =\tau_\nu^\mathrm{hom}$ is simply evaluated as \begin{align} \tau_\nu^\mathrm{hom}=\kappa_\mathrm{g,abs}(\nu )\Sigma_\mathrm{gas}.
\end{align} Note that we evaluate the optical depth for plane-parallel geometry (while \citealt{Inoue:2020aa} adopted spherical geometry). \paragraph*{Clumpy geometry} For the clumpy geometry, $\tau_\nu =\tau_\nu^\mathrm{cl}$, which is given below. We adopt the high-contrast limit, in which the clumps are much denser than the interclump medium, since the case with low contrast is similar to the homogeneous case. As shown by \citet{Inoue:2020aa}, \begin{align} \tau_\nu^\mathrm{cl}=\tau_\nu^\mathrm{hom}P_\mathrm{esc}^\mathrm{sp} (\tau_\mathrm{c,\nu}), \end{align} where $\tau_\mathrm{c,\nu }$ is the radial optical depth of a single clump at $\nu$, and $P_\mathrm{esc}^\mathrm{sp}(\tau_\mathrm{c,\nu})$ is the escape fraction of a homogeneous sphere of radial optical depth $\tau_\mathrm{c,\nu}$ given by \citep{Varosi:1999aa,Osterbrock:2006aa} \begin{align} P_\mathrm{esc}^\mathrm{sp}(\tau_\mathrm{c,\nu})=\frac{3}{4\tau_\mathrm{c,\nu}} \left[ 1-\frac{1}{2{\tau_\mathrm{c,\nu}}^2}+ \left(\frac{1}{\tau_\mathrm{c,\nu}}+\frac{1}{2{\tau_\mathrm{c,\nu}}^2}\right) \mathrm{e}^{-2\tau_\mathrm{c,\nu}}\right] . \end{align} In the high-contrast approximation, $\tau_\mathrm{c,\nu}\simeq\tau_\nu^\mathrm{hom}\xi_\mathrm{cl}$, where $\xi_\mathrm{cl}$ is the clumpiness parameter: The limit $\xi_\mathrm{cl}\to 0$ corresponds to an infinite number of infinitely compact clumps (reduced to the homogeneous case), while the opposite limit $\xi_\mathrm{cl}\to\infty$ corresponds to a small number of extremely opaque clumps (reduced to no absorption; \citealt{Inoue:2020aa}). An intermediate value of $\xi_\mathrm{cl}$ is thus the interesting regime for the clumpy geometry. \medskip The overall procedure is summarized as follows. We adopt $\tau_\nu =\tau_\nu^\mathrm{hom}$ for the homogeneous geometry or $\tau_\nu =\tau_\nu^\mathrm{cl}$ for the clumpy geometry to evaluate the escape fraction, $P_\mathrm{esc}(\tau_\nu )$ in equation~(\ref{eq:Pesc}).
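The escape fractions entering the two geometries are a one-dimensional quadrature and a closed-form expression; the Python sketch below (illustrative only) evaluates both, together with the high-contrast clumpy optical depth $\tau_\nu^\mathrm{cl}=\tau_\nu^\mathrm{hom}P_\mathrm{esc}^\mathrm{sp}(\tau_\nu^\mathrm{hom}\xi_\mathrm{cl})$.

```python
import numpy as np
from scipy.integrate import quad

def p_esc_slab(tau):
    """Angle-averaged escape fraction of a plane-parallel disc:
    int_0^1 (1 - exp(-tau/mu)) / (tau/mu) dmu."""
    if tau == 0.0:
        return 1.0
    val, _ = quad(lambda mu: (1.0 - np.exp(-tau / mu)) * mu / tau, 0.0, 1.0)
    return val

def p_esc_sphere(tau_c):
    """Escape fraction of a homogeneous sphere of radial optical depth tau_c."""
    if tau_c == 0.0:
        return 1.0
    return (0.75 / tau_c) * (1.0 - 0.5 / tau_c**2
            + (1.0 / tau_c + 0.5 / tau_c**2) * np.exp(-2.0 * tau_c))

def tau_clumpy(tau_hom, xi_cl):
    """Effective optical depth in the high-contrast clumpy geometry."""
    return tau_hom * p_esc_sphere(tau_hom * xi_cl)
```

The limits quoted above follow directly: $\xi_\mathrm{cl}\to 0$ recovers $\tau_\nu^\mathrm{cl}=\tau_\nu^\mathrm{hom}$, while large $\xi_\mathrm{cl}$ drives $\tau_\nu^\mathrm{cl}\to 0$ (no absorption).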
Using $P_\mathrm{esc}$, we obtain $J_\nu^\mathrm{one}$ in equation (\ref{eq:radeq_one}). This $J_\nu^\mathrm{one}$ is then used in equation~(\ref{eq:radeq}) by replacing $J_\nu (\tilde{\Sigma}_\mathrm{gas} )$ with $J_\nu^\mathrm{one}$. Solving this equation for $T_\mathrm{dust}$, we obtain the dust temperature in the one-$T$ model. We simply denote the dust temperature in the one-$T$ model as $T_\mathrm{dust}$. Assuming optically thin emission, the dust emission SED per surface area, $\mathcal{I}_\mathrm{dust}^\text{one-$T$}(\nu )$, in this model is estimated as \begin{align} \mathcal{I}_\mathrm{dust}^\text{one-$T$}(\nu )=4\upi\kappa_\mathrm{g,abs}(\nu )\Sigma_\mathrm{gas} B_\nu (T_\mathrm{dust}) . \end{align} \subsection{Colour temperature}\label{subsec:clr} Observationally, dust temperature is derived from multi-wavelength observations at rest-frame FIR wavelengths. Thus, observationally estimated dust temperatures are basically colour temperatures. The colour temperature is defined using the intensities at two wavelengths, $\lambda_1$ and $\lambda_2$ (corresponding frequencies $\nu_1$ and $\nu_2$, respectively), and is denoted as $T_\mathrm{clr}(\lambda_1,\,\lambda_2)$. For the RT model, the colour temperature is obtained by solving the following equation for $T_\mathrm{clr}$ \citep[e.g.][]{Krugel:2003aa}: \begin{align} \frac{\kappa_\mathrm{g,abs}(\nu_2)B_{\nu_2} [T_\mathrm{clr}(\lambda_1,\,\lambda_2)]} {\kappa_\mathrm{g,abs}(\nu_1)B_{\nu_1} [T_\mathrm{clr}(\lambda_1,\,\lambda_2)]}= \frac{\mathcal{I}_\mathrm{dust}^\mathrm{RT}(\nu_2)}{\mathcal{I}_\mathrm{dust}^\mathrm{RT}(\nu_1)}. \end{align} The colour temperature depends on the choice of $\lambda_1$ and $\lambda_2$. Strictly speaking, this colour temperature is not really the one derived from the observation, since we do not know the real frequency dependence of $\kappa_\mathrm{g,abs}$ in the observed galaxy. 
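In practice, $T_\mathrm{clr}$ is obtained by root finding on the intensity ratio. The Python sketch below is illustrative only; it assumes the same power-law opacity $\kappa_\mathrm{g,abs}\propto\nu^\beta$ as adopted for the emission side in Section \ref{subsubsec:RT}, so only the opacity ratio between the two frequencies enters.

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # cgs: h, c, k_B

def planck(nu, T):
    """Planck function B_nu(T) in cgs."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def colour_temperature(lam1_um, lam2_um, ratio_21, beta=2.0):
    """Solve kappa2*B_nu2(T) / (kappa1*B_nu1(T)) = ratio_21 for T, where
    ratio_21 = I_dust(nu2) / I_dust(nu1) and kappa ~ nu^beta."""
    nu1, nu2 = C / (lam1_um * 1e-4), C / (lam2_um * 1e-4)
    def f(T):
        return (nu2 / nu1)**beta * planck(nu2, T) / planck(nu1, T) - ratio_21
    return brentq(f, 1.0, 1.0e4)
```

Feeding it the intensity ratio of a single-temperature SED returns the input temperature, which is the statement that $T_\mathrm{clr}=T_\mathrm{dust}$ in the one-$T$ model; for a multi-temperature SED (the RT model) the recovered $T_\mathrm{clr}$ is an emission-weighted compromise.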
We need to keep in mind a larger uncertainty caused by the unknown frequency dependence of the mass absorption coefficient in actual observations, but we neglect it in this paper. We basically take $\lambda_1=100~\micron$ and $\lambda_2=200~\micron$, so that the two wavelengths are close to ALMA bands (Bands 6, 7, and 8) for galaxies at $z>5$. {A frequently adopted choice, $\lambda_1 =88~\micron$ and $\lambda_2=158~\micron$, tuned to the [O \textsc{iii}] and [C \textsc{ii}] emission lines, respectively, yields similar results, so the conclusions below also hold for that choice. We further} examine the effects of wavelength choice in Section \ref{subsec:radtr_vs_oneT}. We also add a caveat there about using the 450 $\micron$ band (Band 9) for high-redshift galaxies. For the one-$T$ model, the dust temperature has only a single value. Thus, $T_\mathrm{clr}(\lambda_1,\,\lambda_2)=T_\mathrm{dust}$ always holds in the one-$T$ model. \subsection{Observational data for comparison}\label{subsec:sample} \begin{table*} \caption{High-redshift galaxies used for comparison.} \label{tab:sample} \begin{tabular}{lccccccccc} \hline Name & $z$ & $T_\mathrm{dust}$ & $L_\mathrm{UV}$ & $L_\mathrm{IR}$ & $M_\mathrm{dust}$ & $\sqrt{ab}/2$ & $\Sigma_\mathrm{SFR}$ & $\Sigma_\mathrm{dust}$ & ref.$^a$\\ & & (K) & ($10^{11}$\,L$_{\sun}$)& ($10^{11}$\,L$_{\sun}$) & ($10^{7}$\,M$_{\sun}$) & (kpc) & (M$_{\sun}$\,yr$^{-1}$\,kpc$^{-2}$) & ($10^7$\,M$_{\sun}$\,kpc$^{-2}$) & \\ \hline {A2744\_YD4} & 8.38 & $>55$ & 0.25 & $>1.8$ & $<0.18$ & 0.50$^b$ & $>36$ & $<0.23$ & 1, 2\\ MACS0416\_Y1 & 8.31 & $>80^b$ & 0.45 & $>11.1^b$ & $<0.035^c$ & 0.45 & $>250$ & $<0.056$ & 3, 4\\ B14-65666 & 7.15 & 30--80$^d$ & 2.0 & 3--30$^c$ & 0.5--30$^d$ & $0.87\pm 0.30$ & $65^{+109}_{-38}$ & $1.7^{+11.6}_{-1.5}$ & 5, 6\rule[0mm]{0mm}{3mm}\\ A1689-zD1 & 7.13 & $43^{+13}_{-7}$ & 0.18 & $1.9^{+0.5}_{-0.4}$ & $1.7^{+1.3}_{-0.7}$ & $0.77\pm 0.10$ & $14^{+3}_{-3}$ & $0.91^{+0.70}_{-0.38}$ & 7, 8\rule[0mm]{0mm}{3mm}\\ J1211-0118 &
6.03 & $38^{+34}_{-12}$ & 2.7 & $3.2^{+18.7}_{-1.7}$ & $3.0^{+10.5}_{-2.3}$ & 2.0$^e$ & $7.3^{+19.4}_{-1.8}$ & $0.24^{+0.84}_{-0.18}$ & 9\rule[0mm]{0mm}{3mm}\\ J0217-0208 & 6.20 & $25^{+19}_{-5}$ & 4.3 & $1.4^{+2.5}_{-0.3}$ & $19^{+735}_{-16}$ & 2.0$^e$ & $8.5^{+2.7}_{-0.3}$ & $1.57^{+60.8}_{-1.32}$ & 9\rule[0mm]{0mm}{3mm}\\ HZ4 & 5.54 & $57^{+67}_{-17}$ & 1.8 & $8.1^{+10.9}_{-7.1}$ & $1.1^{+1.2}_{-0.8}$$^f$ & 0.72$^g$ & $81^{+87}_{-57}$ & $0.65^{+0.76}_{-0.47}$ & 10, 11\rule[0mm]{0mm}{3mm}\\ HZ6 & 5.29 & $41^{+18}_{-7}$ & 2.1 & $5.4^{+3.5}_{-2.9}$ & $4.6^{+3.2}_{-2.5}$$^f$ & 3.36$^g$ & $3.0^{+1.3}_{-1.1}$ & $0.13^{+0.09}_{-0.07}$ & 10, 11\rule[0mm]{0mm}{3mm}\\ HZ9 & 5.54 & $49^{+29}_{-11}$ & 0.85 & $14^{+8.6}_{-8.9}$ & $4.3^{+3.5}_{-2.6}$$^f$ & 0.95$^g$ & $62^{+39}_{-41}$ & $1.5^{+1.2}_{-0.9}$ & 10, 11\rule[0mm]{0mm}{3mm}\\ HZ10 & 5.66 & $46^{+16}_{-9}$ & 2.3 & $31^{+13}_{-14}$ & $14^{+9}_{-7}$ & $1.50\pm 0.44$ & $57^{+23}_{-25}$ & $2.0^{+1.3}_{-0.9}$ & 10, 11\rule[0mm]{0mm}{3mm}\\ \hline \end{tabular} \\ \begin{minipage}{1.9\columnwidth} {Note -- corrected for lensing for A2744\_YD4, MACS0416\_Y1, and A1689-zD1 with correction factor $\mu=1.8$, 1.4, and 9.3, respectively.}\\ $^a$References: {1) \citet{Laporte:2017aa}; 2) \citet{Laporte:2019aa};} 3) \citet{Bakx:2020aa}; 4) \citet{Tamura:2019aa}; 5) \citet{Sugahara:2021aa}; 6) \citet{Hashimoto:2019aa}; 7) \citet{Inoue:2020aa}; 8) \citet{Bakx:2021aa}; 9) \citet{Harikane:2020aa}; 10) \citet{Capak:2015aa}; 11) \citet{Faisst:2020aa}.\\ {$^b$We assumed $0\farcs 5\times0\farcs 3$ from fig.\ 1 of \citet{Laporte:2017aa}, and corrected for lensing.}\\ $^c$The results for $\beta =2$ are adopted.\\ $^d$The results for modified blackbody fitting with $\beta =2$ are adopted.\\ $^e$We adopt the diameter ($0\farcs 7$) used to measure the flux.\\ $^f$Estimated from the ALMA Band 7 flux using the dust temperature given in the literature (also listed in this table) and the silicate mass absorption coefficient used in this paper.
We estimate the error based on the uncertainty in the dust temperature, which is dominant in the error budget.\\ $^g$Radius in the rest-frame UV (not spatially resolved by ALMA). \end{minipage} \end{table*} We compare the results with observational data at $z>5$, which are listed in Table \ref{tab:sample}. We selected galaxies with dust temperature measurements from multi-wavelength ALMA {observations}, as compiled by \citet[][see their fig.~4]{Bakx:2021aa}. We do not include indirect measurements through [C \textsc{ii}] 158 $\micron$ emission \citep{Sommovigo:2022aa} or with the help of UV optical depth \citep{Ferrara:2022aa}; these methods show a consistent range of dust temperature ($\sim 40$--60~K). The SFR is evaluated based on the UV and IR luminosities, which trace unobscured and obscured star formation activity, respectively. The IR luminosity, $L_\mathrm{IR}$, is the integrated luminosity in the wavelength range 3--1000 $\micron$, and the UV luminosity, $L_\mathrm{UV}$, is estimated by $\nu L_\nu$ (luminosity density multiplied by the frequency) at a rest-frame wavelength in the range of 0.15--0.2 $\micron$ (in this range the exact choice of $\lambda$ does not affect the results significantly). We obtain the SFR as $\mathrm{SFR}=C_\mathrm{UV}L_\mathrm{UV}+C_\mathrm{IR}L_\mathrm{IR}$, where we adopt conversion coefficients {$C_\mathrm{UV}=2.0\times 10^{-10}$ M$_{\sun}$ yr$^{-1}$ L$_{\sun}^{-1}$ and $C_\mathrm{IR}=1.3\times 10^{-10}$ M$_{\sun}$ yr$^{-1}$ L$_{\sun}^{-1}$ derived from the stellar SED at $t=10^8$ yr adopted in this paper (Section \ref{subsubsec:galaxy}) based on the method described by \citet{Hirashita:2003aa}.
These coefficients may change at most by a factor of 2 if a different stellar age is adopted (30--300 Myr); however, the change is smaller than the errors in $L_\mathrm{IR}$.} We evaluate the error in the SFR using the uncertainty in $L_\mathrm{IR}$, which is dominant over that in $L_\mathrm{UV}$ mainly because of the uncertainty in the dust temperature. For the dust mass ($M_\mathrm{dust}$), we confirmed that the adopted mass absorption coefficient in the literature is consistent with our silicate value within the uncertainty caused by the dust temperature. To derive surface densities, we also need the surface area, which is evaluated by $\upi ab$, where $a$ and $b$ are the semi-major and semi-minor axes of the physical size, respectively. We list $\sqrt{ab}/2$ for the mean radius in the table. Unless otherwise stated in the note, we adopt the size measurements of $\sqrt{ab}$ from ALMA. The SFR and $M_\mathrm{dust}$ are divided by $\upi ab$ to obtain $\Sigma_\mathrm{SFR}$ and $\Sigma_\mathrm{dust}$. We note that the surface densities in the models ($\Sigma_\mathrm{dust}$ and $\Sigma_\mathrm{SFR}$) are measured in the vertical direction of the disc, whereas the observational data are not corrected for the inclination. However, the correction factor is $\sim 2$ on average (based on the average of $\cos\theta$ over the solid angle), while the error bars are even larger. Moreover, a factor of 2 shift of the observational data does not change the discussions and conclusions in this paper. Therefore, we neglect the inclination effects in the comparison with the observational data. \section{Results}\label{sec:result} We show how the dust temperatures in the two (RT and one-$T$) models are affected by various galaxy parameters. We display the dust (colour) temperature as a function of surface densities, for which we adopt $\Sigma_\mathrm{SFR}$ and $\Sigma_\mathrm{dust}$.
The first quantity regulates the radiation field intensity, while the second directly reflects the effect of dust optical depth. It is useful to remind the reader that these two surface densities are related by the KS law (equation \ref{eq:KS}) as \begin{align} \left(\frac{\Sigma_\mathrm{SFR}}{\mathrm{M_{\sun}~yr^{-1}~kpc^{-2}}}\right)=1.0\times 10^{-12} \kappa_\mathrm{s}\mathcal{D}^{-1.4}\left(\frac{\Sigma_\mathrm{dust}}{\mathrm{M_{\sun}~kpc^{-2}}}\right)^{1.4}.\label{eq:KS_dust} \end{align} Therefore, $\kappa_\mathrm{s}$ and $\mathcal{D}$ are degenerate in such a way that the same value of $\kappa_\mathrm{s}\mathcal{D}^{-1.4}$ produces the same result. For example, lowering $\mathcal{D}$ has the same effect as raising $\kappa_\mathrm{s}$. Since the gas mass is difficult to obtain for high-redshift galaxies, we do not use $\Sigma_\mathrm{gas}$. However, since $\Sigma_\mathrm{gas}$ has a simple scaling with $\Sigma_\mathrm{SFR}$, it is easy to convert $\Sigma_\mathrm{SFR}$ to $\Sigma_\mathrm{gas}$ using equation (\ref{eq:KS}). In this section, we adopt silicate with $p=3.5$ and focus on the variation in the dust abundance ($\mathcal{D}$), and leave the discussion on the variation of dust properties to Section \ref{subsec:dust_properties}. In the one-$T$ model, we concentrate on the homogeneous geometry and separately discuss the comparison with the clumpy geometry in Section \ref{subsec:dust_properties}. \subsection{Relation between SFR and dust temperature}\label{subsec:SFR_Tdust} We show the dust temperature as a function of SFR surface density. Note that high $\Sigma_\mathrm{SFR}$ implicitly indicates high dust {surface} density because of the KS law (equation \ref{eq:KS_dust}). As mentioned in Section \ref{subsec:clr}, we take the colour temperature at rest-frame 100 and 200 $\micron$ in the RT model. We also vary $\mathcal{D}=\Sigma_\mathrm{dust}/\Sigma_\mathrm{gas}=10^{-4}$, $10^{-3}$ and $10^{-2}$ to examine the effect of dust abundance.
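Equation (\ref{eq:KS_dust}) and the $\kappa_\mathrm{s}$--$\mathcal{D}$ degeneracy can be verified with a one-line Python function (units as in the equation):

```python
def sigma_sfr(sigma_dust, kappa_s=1.0, D=0.01):
    """Sigma_SFR (Msun yr^-1 kpc^-2) from Sigma_dust (Msun kpc^-2) via the
    KS law rewritten in terms of the dust surface density."""
    return 1.0e-12 * kappa_s * D ** (-1.4) * sigma_dust ** 1.4
```

For example, $\mathcal{D}=10^{-4}$ and $\Sigma_\mathrm{dust}=10^{7}$ M$_{\sun}$ kpc$^{-2}$ give $\Sigma_\mathrm{SFR}\approx 2.5\times 10^{3}$ M$_{\sun}$ yr$^{-1}$ kpc$^{-2}$, and pairs $(\kappa_\mathrm{s},\,\mathcal{D})$ with equal $\kappa_\mathrm{s}\mathcal{D}^{-1.4}$ give identical results.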
We assume the KS law with $\kappa_\mathrm{s}=1$ by default, and also examine bursty star formation with $\kappa_\mathrm{s}=10$. In Fig.\ \ref{fig:clr_SFR}a, we show the colour temperature at 100 and 200 $\micron$ as a function of $\Sigma_\mathrm{SFR}$ for the RT model. The colour temperature rises as the SFR surface density increases, although high $\Sigma_\mathrm{SFR}$ also indicates high dust {surface} density (equation \ref{eq:KS_dust}). This is explained by the following scaling arguments: The KS law indicates that $\Sigma_\mathrm{dust}\propto\Sigma_\mathrm{SFR}^{1/1.4}$, while the radiation field increases in proportion to $\Sigma_\mathrm{SFR}$. Thus, the increase of the radiation field is more significant than that of the dust {surface} density, which means that the dust temperature rises as $\Sigma_\mathrm{SFR}$ becomes higher. We also observe that the colour temperature is sensitive to the dust-to-gas ratio, especially at high $\Sigma_\mathrm{SFR}$. The rise is steeper for smaller $\mathcal{D}$. The trend of higher dust temperature for lower $\mathcal{D}$ is {due to the fact that in a dust-poor environment, at fixed $\Sigma_\mathrm{SFR}$, the dust is exposed to a less shielded UV field, thus being more efficiently heated \citep[see also][]{Sommovigo:2022aa}.} We show the results of the one-$T$ model in Fig.~\ref{fig:clr_SFR}b. Recall that the dust temperature is the same as the colour temperature because of the single-temperature assumption. Overall, we find similar dust temperatures to those shown for the RT model. The difference between the two models is much smaller than the variation caused by different dust-to-gas ratios. We also examine a burst mode of star formation, which is realized by raising $\kappa_\mathrm{s}$ (equation \ref{eq:KS}) in our model.
For a burst mode, we examine $\kappa_\mathrm{s}=10$, which is {inferred} for some high-redshift starbursts \citep[e.g.][]{Vallini:2020aa,Vallini:2021aa,Sommovigo:2021aa,Ferrara:2022aa} and is expected from simulations \citep{Pallottini:2022aa}. We observe in Fig.\ \ref{fig:clr_SFR} that the dust temperatures are raised by the burst (i.e.\ higher radiation field). In this sense, raising $\kappa_\mathrm{s}$ has a similar effect to decreasing $\mathcal{D}$. This is because of the degeneracy mentioned above (equation \ref{eq:KS_dust}): The same result is obtained for the same value of $\kappa_\mathrm{s}\mathcal{D}^{-1.4}$. We also plot the observational data (Section \ref{subsec:sample}; Table \ref{tab:sample}) in Fig.~\ref{fig:clr_SFR}. We find that the data points favour low $\mathcal{D}$ and/or high $\kappa_\mathrm{s}$, although the large error bars make it difficult to obtain a firm constraint on these parameters. If the dust-to-gas ratio is comparable to the value seen in the Milky Way and nearby solar-metallicity galaxies ($\mathcal{D}\sim 0.01$), the dust temperature does not exceed 50 K even with $\kappa_\mathrm{s}=10$ even at high $\Sigma_\mathrm{SFR}$. Thus, if the dust temperature is higher than $\sim$50 K {as observationally indicated for some $z>5$ galaxies}, it is highly probable that the dust-to-gas ratio is significantly lower than the Milky Way value. It is also interesting to point out that there is a hint of a positive correlation between $\Sigma_\mathrm{SFR}$ and dust temperature in the observational data, although the errors are admittedly large. This positive correlation is consistent with stronger dust-heating radiation in more actively star-forming galaxies. \subsection{Relation between dust mass and dust temperature}\label{subsec:Sigma_dust} Dust {surface} density has a direct impact on dust temperature through the shielding of stellar radiation. Thus, we expect that it is useful to examine the relation between dust surface density and dust temperature.
We show this relation in Fig.\ \ref{fig:clr_dust}. Note that we only present $\Sigma_\mathrm{dust}$ up to $10^8$ M$_{\sun}$ kpc$^{-2}$, beyond which the optically thin assumption for the FIR radiation breaks down (Section \ref{subsubsec:RT}). We confirm that the observational sample mostly has $\Sigma_\mathrm{dust}<10^8$ M$_{\sun}$ kpc$^{-2}$. In Fig.\ \ref{fig:clr_dust}, we observe that the difference in dust temperature among various dust-to-gas ratios and burstiness parameters is clear at all $\Sigma_\mathrm{dust}$. This is because, with a fixed value of $\Sigma_\mathrm{dust}$, $\Sigma_\mathrm{SFR}$ is higher for lower $\mathcal{D}$ (equation \ref{eq:KS_dust}). A high value of the burstiness parameter ($\kappa_\mathrm{s}$) also raises the dust temperature; thus, as noted above, a high burstiness parameter has the same effect as a low $\mathcal{D}$. Both the RT and one-$T$ models predict similar dust temperatures at $\Sigma_\mathrm{dust}\lesssim 10^7$ M$_{\sun}$ kpc$^{-2}$. In the RT model, the rise of the dust temperature is saturated at $\Sigma_\mathrm{dust}\sim 10^7$ M$_{\sun}$ kpc$^{-2}$, beyond which the contribution of shielded low-temperature dust to the emission lowers the colour temperature.\footnote{{Although the dust temperature in the shielded layer could become lower than the CMB temperature (i.e.\ the CMB heating is important), such cold dust has a negligible impact on the colour temperature.}} In the one-$T$ model, in contrast, the dust temperature rises monotonically even at high $\Sigma_\mathrm{dust}$ {because of the increase in $\Sigma_\mathrm{SFR}$ (equation \ref{eq:KS_dust})}. We also plot the observational data of the same galaxy sample as above (Table \ref{tab:sample}) in Fig.\ \ref{fig:clr_dust}. As already noted in Section \ref{subsec:SFR_Tdust}, lower dust-to-gas ratios or bursty star formation is preferred by the data.
In particular, the upper left object in this figure (MACS0416\_Y1) is likely to be dust-poor with $\mathcal{D}\sim 10^{-4}$ {\citep[see also][]{Sommovigo:2022aa}}. Since this object hosts intense star formation activity (as shown by its high $\Sigma_\mathrm{SFR}$; Fig.\ \ref{fig:clr_SFR}) and a low dust abundance, the dust is efficiently heated with little shielding. Using both Figs.\ \ref{fig:clr_SFR} and \ref{fig:clr_dust}, we could roughly infer the typical dust-to-gas ratio of the sample. We exclude MACS0416\_Y1, already discussed above. If the KS law holds for high-redshift galaxies, $\mathcal{D}\lesssim 10^{-4}$ is not accepted because $\Sigma_\mathrm{SFR}$ becomes too high to be consistent with the observed SFR surface densities. For example, if $\mathcal{D}=10^{-4}$ and $\Sigma_\mathrm{dust}\sim 10^7$ M$_{\sun}$ kpc$^{-2}$, where $T_\mathrm{clr}$ peaks in Fig.~\ref{fig:clr_dust}a, the gas surface density is $\Sigma_\mathrm{gas}\sim 10^{11}$ M$_{\sun}$ kpc$^{-2}$, leading to $\Sigma_\mathrm{SFR}\sim 2.5\times 10^3$ M$_{\sun}$ yr$^{-1}$ kpc$^{-2}$ from the KS law (equation \ref{eq:KS}). This high value is beyond the range of observed SFR surface densities (Fig.\ \ref{fig:clr_SFR}), except for MACS0416\_Y1. This is why the peak of $T_\mathrm{clr}$ does not appear in the range of $\Sigma_\mathrm{SFR}$ plotted in Fig.\ \ref{fig:clr_SFR}. Therefore, the sample (except MACS0416\_Y1) should have $\mathcal{D}$ higher than $10^{-4}$ if the KS law holds. On the other hand, a high value of $\mathcal{D}\gtrsim 10^{-2}$ has difficulty in explaining the observed high dust temperatures, as argued above. These arguments imply that the dust-to-gas ratios in $z>5$ LBGs are typically significantly lower than $10^{-2}$ but higher than $10^{-4}$; that is, of the order of $\mathcal{D}\sim 10^{-3}$.
{This value implies a low metallicity: if we use the $\mathcal{D}$--$Z$ relation in nearby galaxies \citep{Remy-Ruyer:2014aa}, the above value of $\mathcal{D}$ roughly corresponds to $Z\sim 0.2$ Z$_{\sun}$.} Note that the above results and arguments assumed the KS law, which is poorly confirmed for galaxies at $z\gtrsim 5$. To avoid the conclusions being strongly affected by the assumed KS law, we also examine the relations that do not assume a star formation law in Section \ref{subsec:wo_KS}. \subsection{Relations without assuming the KS law}\label{subsec:wo_KS} In this subsection, we examine how the surface densities of SFR and dust mass determine the dust temperature without assuming the KS law (without equation \ref{eq:KS_dust}). That is, we treat $\Sigma_\mathrm{SFR}$ and $\Sigma_\mathrm{dust}$ as independent parameters. In this case, we do not need to specify $\mathcal{D}$ or $\kappa_\mathrm{s}$. In Fig.\ \ref{fig:clr_dust_freeSFR}, we show the relation between dust temperature and $\Sigma_\mathrm{dust}$ for various values of $\Sigma_\mathrm{SFR}$. The dust temperature becomes lower as the dust surface density increases because of the shielding effect. We observe a large difference in the dust temperature ($\sim 30$--80 K) even at high dust surface density ($\Sigma_\mathrm{dust}\sim 10^7$ M$_{\sun}$ kpc$^{-2}$) for the range of $\Sigma_\mathrm{SFR}$ actually observed for high-redshift galaxies. The range of dust temperature is consistent with the variety in $T_\mathrm{dust}$ observed for the galaxies at $z\gtrsim 7$. We also compare the results with the observational data in Fig.~\ref{fig:clr_dust_freeSFR}. These data are roughly explained by $\Sigma_\mathrm{SFR}\sim 1$--$10^3$ M$_{\sun}$ yr$^{-1}$ kpc$^{-2}$. This range is broadly consistent with the actually observed SFR surface densities (Fig.\ \ref{fig:clr_SFR}).
For comparison, we colour-code the observational data according to the range of $\Sigma_\mathrm{SFR}$ ($<10$, 10--70, and $>70$ M$_{\sun}$ yr$^{-1}$ kpc$^{-2}$, in blue, green, and orange, respectively). The blue-green-orange trend in the observational data is indeed consistent with the tendency of theoretical predictions with rising $\Sigma_\mathrm{SFR}$ (also shown by blue, green and orange lines). Although the large error bars hinder drawing a firm conclusion, the overall consistency in the $T_\mathrm{dust}$--$\Sigma_\mathrm{dust}$--$\Sigma_\mathrm{SFR}$ relation indicates the success of our models. Comparing the two panels in Fig.\ \ref{fig:clr_dust_freeSFR}, we find that the two (RT and one-$T$) models are similar. The largest difference between the two models appears at high dust surface density, as also found in Fig.\ \ref{fig:clr_dust}: in the one-$T$ model, all dust has a single temperature, which monotonically drops as the dust mass increases. In the RT model, in contrast, the drop of dust temperature is saturated at high $\Sigma_\mathrm{dust}$ and low $\Sigma_\mathrm{SFR}$ because the shielded portion of dust has too low a temperature to contribute significantly to the luminosity at $\lambda\leq 200~\micron$. {We note that CMB heating, which we neglected (Section \ref{sec:model}), should be included if one is interested in the temperature drop at high $\Sigma_\mathrm{dust}$. In particular, for the one-$T$ model, the dust temperature would not continue to drop below the CMB temperature towards high $\Sigma_\mathrm{dust}$.} The result shown in Fig.\ \ref{fig:clr_dust_freeSFR} also indicates that, if we obtain two of the three quantities ($T_\mathrm{dust}$, $\Sigma_\mathrm{dust}$, and $\Sigma_\mathrm{SFR}$), we can estimate the third using our model. 
An interesting application would be to obtain $T_\mathrm{dust}$ from $\Sigma_\mathrm{dust}$ and $\Sigma_\mathrm{SFR}$, both of which could be estimated from rest-frame UV data as well (through the UV spectral index and the UV luminosity; see section 2 of \citealt{Ferrara:2017aa}). \citet{Ferrara:2022aa} estimated the dust temperature basically in this way. Another application would be to derive $\Sigma_\mathrm{SFR}$ from $T_\mathrm{dust}$ and $\Sigma_\mathrm{dust}$. If the obtained $\Sigma_\mathrm{SFR}$ is converted to $\Sigma_\mathrm{gas}$ using the KS law, we could estimate the dust-to-gas ratio ($\Sigma_\mathrm{dust}/\Sigma_\mathrm{gas}$). We already constrained the dust-to-gas ratio in this way in Section \ref{subsec:Sigma_dust}, and argued that the typical dust-to-gas ratio is of the order of $\sim 10^{-3}$. \section{Discussion}\label{sec:discussion} \subsection{Radiative transfer and one-$T$ models}\label{subsec:radtr_vs_oneT} In the above, we have shown that the RT and one-$T$ models overall predict similar dust temperatures. However, the difference between the two models appears at $\Sigma_\mathrm{dust}\gtrsim 10^7$ M$_{\sun}$ kpc$^{-2}$, where the RT model shows a saturation or decrease of the dust temperature (colour temperature) because of shielding (Fig.\ \ref{fig:clr_dust}; Section \ref{subsec:Sigma_dust}). In the one-$T$ model, in contrast, the dust temperature always continues to increase even if the dust surface density increases beyond $\Sigma_\mathrm{dust}\sim 10^7$ M$_{\sun}$ kpc$^{-2}$. Thus, the dust--stars distribution geometry, which affects shielding of dust-heating radiation, has a significant impact on the dust temperature at $\Sigma_\mathrm{dust}\gtrsim 10^7$ M$_{\sun}$ kpc$^{-2}$. This in turn means that the geometry of dust and star distributions only has a minor influence on the dust temperature at lower dust surface densities. 
From the above results, we expect that the SED shapes of dust emission are different between the two (RT and one-$T$) models at high dust {surface} densities. To visualize this expectation, we present in Fig.~\ref{fig:SED_comp} the SEDs for various $\Sigma_\mathrm{dust}$ with a fixed $\Sigma_\mathrm{gas}$ (so a fixed $\Sigma_\mathrm{SFR}$, whose value is determined by the KS law). We choose the case of $\Sigma_\mathrm{SFR}=30$ M$_{\sun}$ yr$^{-1}$ kpc$^{-2}$ ($\Sigma_\mathrm{gas}\simeq 4.2\times 10^9$ M$_{\sun}$ kpc$^{-2}$), which is roughly in the middle of the observational sample we adopted (Fig.~\ref{fig:clr_SFR}). We examine $\Sigma_\mathrm{dust}=4.2\times 10^5$, $4.2\times 10^6$, and $4.2\times 10^7$ M$_{\sun}$ kpc$^{-2}$, which correspond to $\mathcal{D}=10^{-4}$, $10^{-3}$, and $10^{-2}$, respectively. Note that, in reality, the emission at wavelengths well below the SED peak position is not precisely predicted in our model, since stochastically heated very small grains \citep[e.g.][]{Draine:1985aa}, which are not included in our model, contribute to the emission significantly. {Moreover, complex dust--stars geometries on small spatial scales, which are not included in our models (see Section \ref{subsec:complex}), would also lead to hot dust components located in the vicinity of compact, actively star-forming regions. Such hot dust components contribute to emission at short wavelengths \citep{Sommovigo:2020aa}, which is missing in our prediction. Such compact regions could enhance the optical depth, possibly making them optically thick for short-wavelength dust emission.} Thus, we do not discuss the difference in SED shape on the Wien side, but focus on the wavelengths around the SED peak and on the Rayleigh--Jeans side. We observe in Fig.\ \ref{fig:SED_comp} that the two (RT and one-$T$) models show different trends with increasing $\Sigma_\mathrm{dust}$. 
In the RT model, the SED extends to longer wavelengths as the dust abundance becomes larger with the luminosity at the shortest wavelengths fixed. This is because cold layers of dust are added if we increase $\Sigma_\mathrm{dust}$ with a fixed value of $\Sigma_\mathrm{SFR}$ (i.e.\ a fixed stellar luminosity). In contrast, in the one-$T$ model, the SED shifts towards longer wavelengths as $\Sigma_\mathrm{dust}$ increases, reflecting the drop of dust temperature. This is because the stellar radiation received per dust mass decreases as the dust mass increases. In both models, we see a slight rise of the SED peak with $\Sigma_\mathrm{dust}$ simply because of the increase in the energy absorbed by dust. At low $\Sigma_\mathrm{dust}$, the difference between the two models is small. This means that the dust temperature is well approximated with a single value in the RT model because {shielding} is weak. In contrast, at high $\Sigma_\mathrm{dust}$, the SEDs are different between the two models. Therefore, if the dust surface density is as high as $\Sigma_\mathrm{dust}\gtrsim 10^7$ M$_{\sun}$ kpc$^{-2}$, the dust temperature structure created by RT effects is important for the detailed SED shape. From the difference in SED shape between the two models, we expect that the colour temperature in the RT model depends on the selected wavelengths at high $\Sigma_\mathrm{dust}$. In Fig.\ \ref{fig:SED_comp}b, we show the colour temperature as a function of wavelength. We fix one of the wavelengths to 100 $\micron$ and move the other freely, and present $T_\mathrm{clr}(100~\micron ,\,\lambda )$. We observe that the colour temperature monotonically decreases as $\lambda$ increases because we selectively observe lower-temperature dust at longer wavelengths. If we focus on long (rest-frame) wavelengths ($\lambda\gtrsim 100~\micron$), which are often used by ALMA observations (Bands 6--8) of high-redshift galaxies, the colour temperature is not sensitive to the selected wavelengths. 
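The colour temperature $T_\mathrm{clr}(\lambda_1,\,\lambda_2)$ used here can be recovered from two band fluxes by inverting the flux ratio of an optically thin modified blackbody. A minimal sketch, assuming an emissivity index $\beta=2$ and a simple bisection solver (neither is necessarily what is used in the actual analysis):

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def mbb(lam, T, beta=2.0):
    """Optically thin modified blackbody, B_nu(T) * nu**beta (arbitrary norm)."""
    nu = C / lam
    return (2*H*nu**3/C**2) / math.expm1(H*nu/(KB*T)) * nu**beta

def t_clr(f1, f2, lam1=100e-6, lam2=200e-6, beta=2.0):
    """Colour temperature: T such that mbb(lam1,T)/mbb(lam2,T) = f1/f2."""
    lo, hi = 3.0, 300.0
    for _ in range(80):
        mid = 0.5*(lo + hi)
        # the short-to-long wavelength ratio rises monotonically with T
        if mbb(lam1, mid, beta)/mbb(lam2, mid, beta) < f1/f2:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

# round trip: band fluxes drawn from a 50 K modified blackbody recover 50 K
f1, f2 = mbb(100e-6, 50.0), mbb(200e-6, 50.0)
print(round(t_clr(f1, f2), 2))  # 50.0
```

The monotonicity of the flux ratio with $T$ is what makes the two-band colour temperature well defined.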
When we use Band 9 (450 $\micron$) for galaxies at $z\gtrsim 7$ (i.e.\ $\lambda\lesssim 60~\micron$), the colour temperature is systematically high {in the RT model} since high-temperature layers dominate the emission at such a short wavelength. Thus, dust temperature estimates including Band 9 need to be carefully interpreted by noting {the possibility of} multi-$T_\mathrm{dust}$ structures. At long wavelengths, such a multi-temperature effect is not important; indeed, the colour temperature is almost constant at $\lambda\gtrsim 200~\micron$, and is similar to the dust temperature in the one-$T$ model. Thus, the above predictions on the colour temperature are not altered significantly as long as we focus on ALMA Bands 6--8. {If we include a Band 9 observation in the SED analysis, it is safer to include multiple bands from Bands 6--8 as well in order to examine the multi-$T_\mathrm{dust}$ effect on the SED.} \subsection{Effects of dust properties}\label{subsec:dust_properties} We now examine variations in the dust properties: we vary the dust composition and the grain size distribution (Section \ref{subsubsec:dust_properties}), fixing $\mathcal{D}=10^{-3}$. We show the relation between dust temperature and $\Sigma_\mathrm{dust}$ for silicate and graphite with $p=3.5$, and for $p=2.5$ and 4.5 with silicate in Fig.\ \ref{fig:dust_properties}. The burstiness parameter is fixed to $\kappa_\mathrm{s}=1$. We observe in Fig.\ \ref{fig:dust_properties} that the dust temperature is insensitive to $p$ in both RT and one-$T$ models. While a large $p$ indicates more efficient absorption of UV radiation (because of more small grains), it also means more efficient shielding. These two effects counteract each other. The difference between silicate and graphite is more apparent, especially at high $\Sigma_\mathrm{dust}$. This is not only due to more efficient shielding, but also because of more efficient emission (higher mass absorption coefficient) for graphite. 
More efficient emission leads to a lower equilibrium dust temperature. Comparing the RT and one-$T$ models, we confirm a significant difference at high $\Sigma_\mathrm{dust}$ as pointed out above. In both models, graphite predicts lower dust temperatures than silicate at dust {surface} densities ($\sim 10^7$ M$_{\sun}$ kpc$^{-2}$) appropriate for the $z>7$ sample above, but the difference is only $\sim 10$ per cent. In the one-$T$ model (Fig.\ \ref{fig:dust_properties}b), we also show the clumpy geometry with $\xi_\mathrm{cl}=3$ (for silicate with $p=3.5$), which shows significantly lower dust temperature than the homogeneous geometry at high $\Sigma_\mathrm{dust}$. This is because, as explained in Section \ref{subsubsec:oneT}, dust only covers the galaxy surface with a fraction of $1/\xi_\mathrm{cl}$. This means that dust can only absorb a part of stellar light even if the dust abundance is high. Since the total emission energy of dust scales as $T_\mathrm{dust}^{4+\beta}$, we obtain a scaling of $T_\mathrm{dust}\propto\xi_\mathrm{cl}^{-1/(4+\beta)}$ ($\xi_\mathrm{cl}>1$) at high dust surface density. This scaling is useful to infer the dust temperature with different values of $\xi_\mathrm{cl}>1$ (recall again that the result is similar to the homogeneous case with $\xi_\mathrm{cl}\lesssim 1$) at high dust surface densities. At low dust surface densities, the dust is optically thin for UV radiation; in this case, the total radiation energy absorbed by dust is determined by the total dust mass, and is not sensitive to dust distribution geometry. Thus, the clumpy geometry gives a conservative (low) dust temperature, which means that the requirement for low $\mathcal{D}$ or higher $\kappa_\mathrm{s}$ is pronounced if we aim to explain the high dust temperatures with a clumpy geometry rather than the uniform one. 
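The clumpy-geometry scaling quoted above follows from the one-$T$ energy balance: the absorbed luminosity is reduced by the covering fraction $1/\xi_\mathrm{cl}$, while the emitted power scales as $T_\mathrm{dust}^{4+\beta}$. A quick numeric check, assuming $\beta=2$:

```python
# One-T energy balance with a clumpy dust geometry at high Sigma_dust:
# absorbed power ~ L_star / xi_cl (covering fraction), emitted power ~ T**(4+beta),
# hence T_dust is proportional to xi_cl**(-1/(4+beta)) for xi_cl > 1.
beta = 2.0                                # emissivity index (assumed)
xi_cl = 3.0
ratio = xi_cl**(-1.0/(4.0 + beta))
print(f"T(clumpy)/T(homogeneous) ~ {ratio:.2f}")  # ~0.83, i.e. ~17 per cent lower
```

So the $\xi_\mathrm{cl}=3$ case shown in Fig.\ \ref{fig:dust_properties}b should sit roughly 17 per cent below the homogeneous curve in the optically thick regime, consistent with the $\sim 10$ per cent level differences discussed above.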
\subsection{Further complexities}\label{subsec:complex} Although the geometries of dust--stars distribution in real galaxies are complex, we still expect that our theory based on surface densities is applicable to a variety of galaxies. This is because the radiation field is also a `surface' quantity in the sense that it has the same physical dimension as the surface luminosity (Section \ref{subsubsec:galaxy}). Our predictions are further tested against observations of nearby galaxies (Chiang et al., in preparation). Numerical simulations with more complex dust--stars geometries could also be used to examine the robustness of our predictions. However, the effect of local intense radiation sources may not be included in our simple treatment. Some analytic and numerical studies showed that a dust component concentrated near an intensely star-forming region could have a large contribution to the FIR luminosity of the galaxy \citep{Behrens:2018aa,Sommovigo:2020aa,Pallottini:2022aa}. Complex dust--stars geometries have also been revealed observationally for galaxies at $z\sim 7$ \citep{Willott:2015aa,Bowler:2022aa}. Therefore, our simple analytic treatments should be carefully applied to real high-redshift galaxies, and studies focusing on small-scale structures should supplement our understanding of what regulates the dust temperature in high-redshift galaxies. Interestingly, the necessity of low dust abundance in explaining the high dust temperature is common between our work and some studies that included small-scale or complicated geometries \citep{Liang:2019aa,Ma:2019aa,Sommovigo:2022aa}. \section{Conclusions}\label{sec:conclusion} For the purpose of interpreting observed dust temperatures at high redshift ($z>5$), we construct analytic models to calculate the dust temperature for a given star formation activity and given dust properties (especially dust abundance). 
The models are described by the surface densities of gas mass and SFR since these surface quantities determine the radiation field intensity and the dust optical depth. We develop the following two models that can be treated analytically: (i) RT and (ii) one-$T$ models. In the first model, we {consider} the multi-temperature (or dust shielding) effect {within the framework of a plane-parallel treatment} by putting the stars in the midplane of the disc and the dust in a screen geometry. The dust temperature in this model is defined by the colour temperature at two selected wavelengths (100 and 200 $\micron$ by default). In the second model, we consider the opposite extreme by assuming that dust and stars are well mixed, so that the dust has a single temperature. In this model, the dust temperature is determined by the global balance between the energy absorbed and radiated by the dust. {These two extremes serve to bracket the most realistic scenario}. We particularly focus on the dust temperature as a function of SFR surface density ($\Sigma_\mathrm{SFR}$) and dust surface density ($\Sigma_\mathrm{dust}$). As expected, the dust temperature rises with increasing $\Sigma_\mathrm{SFR}$ and $\Sigma_\mathrm{dust}$ (which has a positive relation with $\Sigma_\mathrm{SFR}$ because of the KS law). However, these relations depend on the dust-to-gas ratio ($\mathcal{D}$), since it affects the relation between $\Sigma_\mathrm{SFR}$ and $\Sigma_\mathrm{dust}$ (equation \ref{eq:KS_dust}). Lower values of $\mathcal{D}$ predict higher dust temperatures. Thus, a low dust abundance ($\lesssim 10^{-3}$) can be a reason for the observed high dust temperatures {($T_\mathrm{dust}\gtrsim 40$ K)} in high-redshift galaxies. Another reason could be a burst of star formation (i.e.\ high $\kappa_\mathrm{s}$). The grain size distribution and the dust composition have less impact on the dust temperature than $\mathcal{D}$ and $\kappa_\mathrm{s}$. 
The RT and one-$T$ models predict similar dust temperatures except at high dust surface density ($\Sigma_\mathrm{dust}>10^7$ M$_{\sun}$ kpc$^{-2}$). Some ALMA-detected galaxies at $z>5$ may be located in this high-$\Sigma_\mathrm{dust}$ regime, which means that a careful radiative transfer treatment is necessary to predict a precise dust temperature. However, the differences among different values of $\mathcal{D}$ and $\kappa_\mathrm{s}$ remain significant, and the conclusion that high-redshift LBGs favour low $\mathcal{D}\lesssim 10^{-3}$ (if $\kappa_\mathrm{s}\lesssim 10$) is not altered. We also examine the relation between dust temperature and $\Sigma_\mathrm{dust}$ without assuming the KS law; that is, we treat $\Sigma_\mathrm{SFR}$ as an independent parameter. Overall, higher $\Sigma_\mathrm{SFR}$ indicates a higher dust temperature, and we predict $T_\mathrm{dust}\sim 30$--80 K for the range of $\Sigma_\mathrm{SFR}$ and $\Sigma_\mathrm{dust}$ appropriate for high-redshift ($z>5$) LBGs. The observational data ($\Sigma_\mathrm{SFR}$, $\Sigma_\mathrm{dust}$, and $T_\mathrm{dust}$) of $z>5$ LBGs are consistent with the calculation results. Interestingly, we also find a trend that LBGs with higher $\Sigma_\mathrm{SFR}$ and lower $\Sigma_\mathrm{dust}$ tend to have higher $T_\mathrm{dust}$, which is consistent with our prediction (Fig.\ \ref{fig:clr_dust_freeSFR}). The difference between the two (RT and one-$T$) models is further examined. We observe a significant difference in SED shape between the two models at $\Sigma_\mathrm{dust}\gtrsim 10^7$ M$_{\sun}$ kpc$^{-2}$ since the superposition of layers with various dust temperatures is important at such a high dust surface density in the RT model. Thus, if the dust surface density is higher than $\sim 10^7$ M$_{\sun}$ kpc$^{-2}$, a detailed radiative transfer calculation is necessary to discuss the detailed shape of dust emission SED. 
We, however, find that, in the range of dust surface density appropriate for high-redshift LBGs, the colour temperature in the RT model is similar to the dust temperature in the one-$T$ model as long as we use $\lambda\sim 100$--200 $\micron$. Thus, the dust temperature measured in ALMA Bands 6--8 ($\sim 650$--1,200 $\micron$) for $z\gtrsim 5$ galaxies is not sensitive to the detailed radiative transfer effects. Note that Band 9 ($\sim 450~\micron$) may selectively observe high-dust-temperature layers {at $\Sigma_\mathrm{dust}\gtrsim 10^7$ M$_{\sun}$ kpc$^{-2}$, if the multi-$T_\mathrm{dust}$ structure is as significant as realized in the RT model.} In the one-$T$ model, we also investigate the effect of clumpiness in dust distribution geometry. We only consider the case where the density contrast between the clumps and the diffuse medium is large since otherwise the resulting dust temperature is similar to the homogeneous geometry. At low $\Sigma_\mathrm{dust}$, the clumpy geometry predicts almost the same dust temperature as the homogeneous geometry. However, at high $\Sigma_\mathrm{dust}\gtrsim 10^7$ M$_{\sun}$ kpc$^{-2}$, the dust temperature is lower in the clumpy case because the dust effectively covers only a certain fraction of the galaxy surface. This strengthens the requirement of a low dust-to-gas ratio and/or high $\kappa_\mathrm{s}$ to achieve a high dust temperature. From the above, we conclude that the high dust temperatures {($T_\mathrm{dust}\gtrsim 40$ K)} in {some} ALMA-detected $z\gtrsim 7$ galaxies are caused by a low dust-to-gas ratio ($\mathcal{D}\lesssim 10^{-3}$) if the KS law holds in high-redshift galaxies. A burst-like star formation with $\kappa_\mathrm{s}\gtrsim 10$ could give another explanation for the high dust temperatures. These conclusions are not sensitive to the dust properties (dust composition and grain size distribution) and detailed radiative transfer effect. 
In the companion paper (Chiang et al., in preparation), we test our model using spatially resolved observations of nearby star-forming galaxies. \section*{Acknowledgements} {We are grateful to the anonymous referee for useful comments.} HH thanks the Ministry of Science and Technology (MOST) for support through grant MOST 108-2112-M-001-007-MY3, and the Academia Sinica for Investigator Award AS-IA-109-M02. \section*{Data Availability} Data related to this publication and its figures are available on request from the corresponding author. \bibliographystyle{mnras} \bibliography{/Users/hirashita/bibdata/hirashita} \bsp % \label{lastpage}
Title: Hyper-Eddington Black Hole Growth in Star-Forming Molecular Clouds and Galactic Nuclei: Can It Happen?
Abstract: Formation of supermassive black holes (BHs) remains a theoretical challenge. In many models, especially beginning from stellar relic "seeds," this requires sustained super-Eddington accretion. While studies have shown BHs can violate the Eddington limit on accretion disk scales given sufficient "fueling" from larger scales, what remains unclear is whether or not BHs can actually capture sufficient gas from their surrounding ISM. We explore this in a suite of multi-physics high-resolution simulations of BH growth in magnetized, star-forming dense gas complexes including dynamical stellar feedback from radiation, stellar mass-loss, and supernovae, exploring populations of seeds with masses $\sim 1-10^{4}\,M_{\odot}$. In this initial study, we neglect feedback from the BHs: so this sets a strong upper limit to the accretion rates seeds can sustain. We show that stellar feedback plays a key role. Complexes with gravitational pressure/surface density below $\sim 10^{3}\,M_{\odot}\,{\rm pc^{-2}}$ are disrupted with low star formation efficiencies so provide poor environments for BH growth. But in denser cloud complexes, early stellar feedback does not rapidly destroy the clouds but does generate strong shocks and dense clumps, allowing $\sim 1\%$ of randomly-initialized seeds to encounter a dense clump with low relative velocity and produce runaway, hyper-Eddington accretion (growing by orders of magnitude). Remarkably, mass growth under these conditions is almost independent of initial BH mass, allowing rapid IMBH formation even for stellar-mass seeds. This defines a necessary (but perhaps not sufficient) set of criteria for runaway BH growth: we provide analytic estimates for the probability of runaway growth under different ISM conditions.
https://export.arxiv.org/pdf/2208.05025
\vspace{-0.5cm} \begin{keywords} black hole physics -- accretion, accretion discs -- quasars: supermassive black holes -- galaxies: star formation \end{keywords} \vspace{-0.3cm} \section{Introduction} \label{sec:intro} Observations have demonstrated the existence of supermassive black holes (BHs) with masses $M_{\rm bh} \sim 10^9 M_\odot$ in quasars at very high redshift ($z \gtrsim 7$) when the Universe was less than a billion years old \cite[e.g.,][]{FanNarayanan2001,WangYang2021}, which implies that these BHs must accrete rapidly from their ``seeds'' \citep{InayoshiVisbal2020}. The physical origin of these seeds remains deeply uncertain, but popular models include direct collapse of super-massive stars with masses $\sim 10^{4}-10^{6}\,M_{\odot}$ \cite[e.g.,][]{BegelmanVolonteri2006,ReganVisbal2017,CorbettMoranGrudic2018,ChonOmukai2020}; other channels, such as runaway mergers in globular clusters \cite[e.g.,][]{PortegiesZwartBaumgardt2004,2020ApJ...891...94B,2020MNRAS.493.2352A,KremerSpera2020,RizzutoNaab2021,ShiGrudic2021,FragioneKocsis2021,DasSchleicher2021}, remnants from Population III stars \cite[e.g.,][]{MadauRees2001,RyuTanaka2016}, and relics of ``standard'' stellar evolution (e.g.\ Population II) stars, generally produce seeds with masses $\ll 10^{4}\,M_{\odot}$. 
Given that the $e$-folding time of a BH growing at the Eddington limit\footnote{Throughout, we will follow standard convention and define the Eddington {\em luminosity} as the usual $L_{\rm Edd} = 3.2\times10^{4}\,L_{\odot}\,(M_{\rm bh}/M_{\odot})$, and the ``Eddington mass-accretion rate'' as the accretion rate which would produce $L_{\rm Edd}$ given a canonical reference radiative efficiency $\epsilon_{r} = 0.1$ ($L = \epsilon_{r}\,\dot{M}\,c^{2}$), so $\dot{M}_{\rm Edd} \approx M_{\rm bh}/(45\,{\rm Myr})$.} with a canonical radiative efficiency of $\sim 0.1$ is $\sim 50\,$Myr, almost all of these models require a sustained period of super or hyper-Eddington accretion in the early Universe to be viable \cite[e.g.,][]{PezzulliValiante2016}. This is especially important at masses $\ll 10^{5}\,M_{\odot}$, as various studies have shown that once larger ``super-massive'' mass scales are reached, the gravity of the BH can capture gas from larger radii and lead to runaway growth \citep{LiHernquist2007,DiMatteoColberg2008,2012MNRAS.424.1461L,2013ApJ...771..116J,WeinbergerSpringel2018,2018MNRAS.478.5063H,ZhuLi2020,2021ApJ...917...53A}. 
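The round numbers in the footnote follow directly from the Thomson-scattering Eddington luminosity, $L_{\rm Edd}=4\pi G M m_p c/\sigma_T$, with $\epsilon_r=0.1$; a quick check (constants rounded, so the last digit is approximate):

```python
import math

G, C = 6.674e-11, 2.998e8                 # SI units
M_P, SIGMA_T = 1.673e-27, 6.652e-29       # proton mass, Thomson cross-section
MSUN, LSUN, MYR = 1.989e30, 3.828e26, 3.156e13

L_edd = 4*math.pi*G*MSUN*M_P*C/SIGMA_T    # W, for M_bh = 1 Msun
print(f"L_Edd ~ {L_edd/LSUN:.2e} Lsun per Msun")     # ~3.2e4 Lsun

eps_r = 0.1
mdot_edd = L_edd/(eps_r*C**2)             # Eddington accretion rate, kg/s
print(f"e-folding time ~ {MSUN/mdot_edd/MYR:.0f} Myr")  # ~45 Myr
```

Both of the footnote's conventions, $L_{\rm Edd}\approx 3.2\times 10^4\,L_\odot\,(M_{\rm bh}/M_\odot)$ and $\dot{M}_{\rm Edd}\approx M_{\rm bh}/(45\,{\rm Myr})$, are recovered.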
But unless one invokes exotic formation mechanisms, a sustained rapid accretion phase is necessary to grow BHs from the stellar ($\sim 10-100\,M_{\odot}$) to super-massive ($\gg 10^{4}\,M_{\odot}$) mass scale \citep{2012MNRAS.424.1461L,2016MNRAS.457.3356V}. There is a well-established and rapidly-growing body of work demonstrating that compact objects can, in fact, exceed the naive ``Eddington accretion rate'' $\dot{M}_{\rm Edd}$ by large factors (up to $\gtrsim 1000$) on scales of the accretion disk itself (recently, see e.g.\ theoretical arguments by \citealt{2016MNRAS.459.3738I,2019ApJ...880...67J,2020ApJ...905...92P,2021PASJ...73..450K,2022PASJ..tmp...17B}, empirical arguments in \citealt{2021A&A...645A..78B,2022MNRAS.509.3599T}, or for reviews, \citealt{PezzulliValiante2016,2019ffbh.book..195M,2019ConPh..60..111S,2019BAAS...51c.352B} and references therein). But these studies generally assume a constant hyper-Eddington ($\sim 10^{3}\,\dot{M}_{\rm Edd}$) influx of gas from larger scales onto the accretion disk as their ``outer boundary condition.'' What remains deeply unclear is whether or not a seed BH -- especially at stellar mass scales -- could actually capture gas from the interstellar medium at a sufficient rate to sustain this accretion, and for long enough that the total mass supplied would be able to grow the BH by many $e$-foldings. There has been some theoretical work on the topic, but it has generally either considered idealized models where the gas around the seed sits in a common potential well and accretes instead of being multi-phase, turbulent, and rapidly star-forming \citep[see e.g.][]{2020MNRAS.497..302T,2022arXiv220106584P}, or considered only galactic ($\gg$\,pc) scales \citep[e.g.][]{2022arXiv220108766M} where, especially with BHs already $\gg 10^{4}\,M_{\odot}$, sustaining super-Eddington inflow to a nuclear region at least appears viable \citep{2018MNRAS.478.5063H,ReganDownes2019,ZhuLi2020,2021ApJ...917...53A}. 
The problem is that in the realistic ISM, order-of-magnitude estimates such as those in \citet{2013ApJ...771..116J} suggest that the rate of gravitational capture of gas from the surrounding ISM -- the Bondi-Hoyle rate \citep{HoyleLyttleton1939,Bondi1952} -- should be extremely small unless the seed is already super-massive. Consider the standard expression \begin{align} \dot{M}_{\rm Bondi} \approx \frac{4\pi\,G^2\,M_{\rm bh}^2\rho}{(c_{\rm s}^2+\delta V^2)^{3/2}}. \label{equ:bondi-hoyle-rate} \end{align} where $\rho$, $c_{\rm s}$, and $\delta V$ are the density, sound speed, and gas-BH relative velocity. In the diffuse/warm ISM, this gives $\dot{M}_{\rm Bondi}/\dot{M}_{\rm Edd} \sim 10^{-6}\,(M_{\rm bh}/10\,M_{\odot})\,(n/{\rm cm^{-3}})$ -- vastly sub-Eddington. In dense ($n \gtrsim 100\,{\rm cm^{-3}}$) cold molecular gas (sound speed $\sim 0.1\,{\rm km\,s^{-1}}$), $\dot{M}_{\rm Bondi}$ would be much larger {\em if the gas were laminar and the BH stationary} -- this is akin to the idealized non-turbulent models above. The problem is that realistic cold molecular gas in the ISM is clumpy and dynamical and turbulent, with star formation and stellar feedback generating large random motions -- i.e.\ large $\delta V$ \citep{larson:gmc.scalings,goodman:1998.lws.dependence.on.tracers,evans:1999.sf.gmc.review,stanimirovic:1999.smc.hi.pwrspectrum,elmegreen:2004.obs.ism.turb.review}. As we show below, assuming relative velocities are of order typical gravitational/virial velocities in the cloud then gives $\dot{M}_{\rm Bondi}/\dot{M}_{\rm Edd} \sim 10^{-4}\,(\langle n_{\rm cl} \rangle / 100\,{\rm cm^{-3}})^{1/2}\,(M_{\rm bh}/10\,M_{\odot})\,(10^{6}\,M_{\odot}/M_{\rm cl})$ -- once again, vastly sub-Eddington. 
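The diffuse-ISM estimate above can be reproduced from equation (\ref{equ:bondi-hoyle-rate}); a sketch assuming a hydrogen-only gas and an effective speed $(c_{\rm s}^2+\delta V^2)^{1/2}\approx 10\,{\rm km\,s^{-1}}$ for the warm ISM (the exact prefactor depends on this choice):

```python
import math

G = 6.674e-11                    # SI units
MSUN, M_P = 1.989e30, 1.673e-27

def mdot_bondi(m_bh_msun, n_cm3, v_eff_kms):
    """Bondi-Hoyle rate in kg/s, with v_eff = sqrt(c_s^2 + dV^2)."""
    m = m_bh_msun*MSUN
    rho = n_cm3*1e6*M_P          # hydrogen-only mass density, kg/m^3
    return 4*math.pi*G**2*m**2*rho / (v_eff_kms*1e3)**3

def mdot_edd(m_bh_msun):
    """Eddington accretion rate in kg/s, using M_bh/(45 Myr)."""
    return m_bh_msun*MSUN/(45*3.156e13)

# warm/diffuse ISM: 10 Msun seed, n = 1 cm^-3, effective speed ~10 km/s
ratio = mdot_bondi(10, 1.0, 10.0)/mdot_edd(10)
print(f"Mdot_Bondi/Mdot_Edd ~ {ratio:.1e}")   # a few times 1e-6
```

This recovers the order-of-magnitude scaling $\dot{M}_{\rm Bondi}/\dot{M}_{\rm Edd}\sim 10^{-6}\,(M_{\rm bh}/10\,M_\odot)\,(n/{\rm cm^{-3}})$ quoted in the text.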
Previous analytic and simulation models of this ``turbulent Bondi-Hoyle problem'' in idealized driven turbulence have argued that vorticity and turbulent magnetic fields will suppress the {\em average} accretion rates even relative to this (pessimistic) result \citep{KrumholzMcKee2006,BurleighMcKee2017}. However, it is also clear from many studies of star formation that turbulence in dense gas also promotes the existence of extremely dense shocks and clumps in the gas \citep[see e.g.][]{klessen:2000.pdf.supersonic.turb,elmegreen:sf.review,vazquez-semadeni:2003.turb.reg.sfr,MacLowKlessen2004,federrath:2008.density.pdf.vs.forcingtype,goodman:2009.dendrogram.sims,federrath:2010.obs.vs.sim.turb.compare,hopkins:2012.intermittent.turb.density.pdfs,squire.hopkins:turb.density.pdf}, which can have low internal velocity dispersions and play a crucial role in turbulent fragmentation and star formation \citep{mckee:2007.sf.theory.review,hennebelle:2008.imf.presschechter,hopkins:excursion.ism,hopkins:excursion.imf,hopkins:excursion.clustering,hopkins:excursion.imf.variation,hopkins:frag.theory,guszejnov:gmc.to.protostar.semi.analytic,murray:2017.turb.collapse}. So it is possible that a more realistic model might allow for hyper-Eddington accretion in rare (but not impossible) cases in these environments. In this study, we therefore extend the series of simulations of dense, star forming environments used previously to study star and star cluster formation in \cite{GrudicHopkins2018,guszejnov:2018.isothermal.nocutoff,GrudicGuszejnov2018,grudic:max.surface.density,grudic:sfe.gmcs.vs.obs,grudic:2019.imf.sampling.fx.on.gmc.destruction,ShiGrudic2021}, to explore BH seed growth in dynamic, star-forming environments akin to dense giant molecular clouds (GMCs) and galactic nuclei. In this first study, we neglect feedback from the accreting BHs themselves. 
This is obviously a major simplification, especially for BHs accreting above the Eddington limit -- however, the form and strength of feedback from BHs in this regime remain highly uncertain (see references above), and we wish to identify whether or not sustaining hyper-Eddington accretion is even remotely possible on these scales. Clearly, accretion {\em without} BH feedback represents a relatively strong upper limit to the maximum possible BH seed growth. We can then use the conditions identified here as necessary for such accretion in future simulations that include BH feedback, with various parameterizations. In \S~\ref{sec:simulations}, we describe our simulation methods. Then in \S~\ref{sec:results} we present results, including BH mass evolution in different clouds and its dependence on different initial conditions (ICs). In \S~\ref{sec:discussions}, we analyze the effects of different physics and simulation ICs, give simple analytic formulae for the conditions required for runaway accretion, and discuss some major caveats of our work in \S~\ref{sec:caveats}. Finally, we conclude in \S~\ref{sec:conclusions}. \begin{footnotesize} \ctable[caption={{\normalsize Initial conditions (ICs) of our ``fiducial'' reference simulations. Here we show three groups of simulations with low, medium, and high initial mean surface density ($\Bar{\Sigma}_0$). In each group, the clouds have radii ($R_{\rm cl}$) of 5, 50, and 500 pc. 
Subsequent columns give the approximate initial total cloud mass ($M_{\rm cl}$), initial free-fall time ($t_{\rm ff,\,c}$), gas cell mass/resolution ($m_{\rm gas}$), Plummer-equivalent force softening for star particles ($\epsilon_{\rm soft}^{\rm star}$), and for BHs ($\epsilon_{\rm soft}^{\rm bh}$), and additional notes.}}\label{tab:clouds},center,star, ]{llllllll}{ }{ \toprule $\Bar{\Sigma}_0$ [$M_{\odot}$/pc$^2$] & $R_{\rm cl}$ [pc] & $M_{\rm cl}$ [$M_{\odot}$] & $t_{\rm ff,\,c}$ [Myr] & $m_{\rm gas}$ [$M_{\odot}$] & $\epsilon_{\rm soft}^{\rm star}$ [pc] & $\epsilon_{\rm soft}^{\rm bh}$ [pc] & Notes \\ \midrule 130 & 5 & $10^{4}$ & 2 & 0.005 & 0.04 & 0.04 & \\ 130 & 5 & $10^{4}$ & 2 & 0.04 & 0.09 & 0.09 & No-feedback (low-resolution) variant\\ 130 & 50 & $10^{6}$ & 6 & 0.5 & 0.21 & 0.21 & \\ 130 & 500 & $10^{8}$ & 20 & 50 & 0.96 & 0.31 & \\ \midrule 1300 & 5 & $10^{5}$ & 0.6 & 0.4 & 0.19 & 0.19 & \\ 1300 & 50 & $10^{7}$ & 2 & 40 & 0.89 & 0.31 & \\ 1300 & 500 & $10^{9}$ & 6 & 500 & 2.06 & 0.31 & \\ \midrule 13000 & 5 & $10^{6}$ & 0.2 & 0.5 & 0.21 & 0.21 & \\ 13000 & 5 & $10^{6}$ & 0.2 & 4 & 0.41 & 0.31 & Varied metallicity test series\\ 13000 & 50 & $10^{8}$ & 0.6 & 6 & 0.48 & 0.31 & Highest resolution; $M_{\rm bh}\in(10, 100)\,M_\odot$\\ 13000 & 50 & $10^{8}$ & 0.6 & 50 & 0.96 & 0.31 & \\ 13000 & 50 & $10^{8}$ & 0.6 & 400 & 1.91 & 0.31 & Varied BH seed number test series\\ 13000 & 500 & $10^{10}$ & 2 & 40000 & 8.89 & 0.31 & \\ \bottomrule } \end{footnotesize} \section{Simulations} \label{sec:simulations} Our simulation numerical methods are identical to those described and tested fully in \citet{GrudicHopkins2018,GrudicGuszejnov2018,HopkinsWetzel2018,GrudicGuszejnov2021,GrudicKruijssen2021}, modulo the addition of BH seeds described below, so we briefly summarize here. 
We use the code \textsc{GIZMO}\footnote{A public version of {\small GIZMO} is available at \gizmourl} \cite[]{Hopkins2015} in Meshless Finite Mass (MFM) mode, with magnetohydrodynamics (MHD) solved as in \citet{hopkins:mhd.gizmo,hopkins:cg.mhd.gizmo}, self-gravity with adaptive Lagrangian force-softening, radiative cooling from $1-10^{10}$\,K (including molecular, metal-line, fine-structure, photo-electric, ionization and other processes), star formation in dense, locally-self-gravitating gas \citep{hopkins:virial.sf,GrudicHopkins2018}, and stellar feedback following the FIRE-2 implementation of the Feedback In Realistic Environments (FIRE\footnote{\FIREurl}) physics \citep{HopkinsWetzel2018,hopkins:fire3.methods}. In these models ``star particles'' each represent IMF-averaged ensembles of stars (rather than resolving individual stars and proto-stars as in \citealt{grudic:starforge.methods,guszejnov:2020.starforge.jets}), which evolve along standard stellar evolution models to return mass, metals, momentum, and energy to the ISM in the form of supernovae and O/B and AGB winds \citep{hopkins:sne.methods}, as well as acting on the gas via radiative heating, photo-ionization, and radiation pressure \citep{hopkins:radiation.methods}. Simulations with these methods have been previously used to study many properties of GMCs, galactic nuclei, and star clusters, including their observed star formation efficiencies, cluster dynamics and mass profiles, young massive cluster internal structure, globular cluster demographics, and gas emission properties \citep[see references above and e.g.][]{GrudicHopkins2018,GrudicGuszejnov2018,GrudicGuszejnov2021,GrudicKruijssen2021,FukushimaYajima2021}. We extend these simulations by adding a population of ``seed'' BHs (sink particles) to the ICs, which can accrete gas from the surrounding medium, but otherwise feel only gravitational dynamics (we do not model BH feedback or BH-BH mergers).
\subsection{Black Hole Accretion} \label{sec:simulations:bh-accretion} Our BH seed/sink-particle prescription is a simplified version of that presented in \citet{grudic:starforge.methods}. Gas is accreted onto a sink if it meets three criteria: \begin{enumerate} \item It is within the sink radius $r_{\rm sink}$ of the BH: $r=|\mathbf{r}_{\rm gas}-\mathbf{r}_{\rm bh}|< r_{\rm sink}$. \item It is bound to the BH, including kinetic, thermal, and magnetic support: $u_{\rm thermal} + (1/2)\,v_{\rm A}^{2} + (1/2)\,\delta V^{2} < G\,M_{\rm sink}/r$, where $u_{\rm thermal}$ is the specific thermal energy, $v_{\rm A}$ the \Alf\ speed, and $\delta V^{2} \equiv |\mathbf{v}_{\rm gas} - \mathbf{v}_{\rm bh}|^{2}$. \item Its angular momentum is insufficient to support a circular orbit with radius larger than $r_{\rm sink}$ \cite[]{BateBonnell1995}, i.e.\ $j_{\rm gas} <\sqrt{G\,M_{\rm sink}\,r_{\rm sink}}$ where $j_{\rm gas}$ is the specific angular momentum of the gas cell (evaluated at its center-of-mass location). \end{enumerate} If a gas cell meets all these criteria for two BHs simultaneously, it accretes onto whichever is closer. We must choose $r_{\rm sink}$ in each simulation. This is usually set to something like the simulation resolution (typical inter-cell separation $\delta r$), and would ideally resolve the Bondi radius, $R_{\rm Bondi} \sim G\,M_{\rm bh}/(c_{\rm s}^{2} + \delta V^{2})$, i.e.\ $r_{\rm sink} \sim R_{\rm Bondi} \gtrsim \delta r$. But in our Lagrangian, dynamical simulations (1) the spatial resolution is not fixed, but scales as $\delta r \sim (m_{\rm gas}/\rho)^{1/3}$, and (2) the Bondi radius fluctuates dramatically (as we will show), and varies between seeds.
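The three accretion criteria above can be sketched compactly as follows (a minimal illustration in units of pc, km\,s$^{-1}$, and $M_\odot$; this is our own sketch, not the actual \textsc{GIZMO} implementation, and the function and variable names are ours):

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def accretes(r_gas, v_gas, r_bh, v_bh, u_thermal, v_alfven, M_sink, r_sink):
    """Apply the three sink-accretion checks to a single gas cell.
    Energies are specific, in (km/s)^2; positions in pc; velocities in km/s."""
    dr = np.asarray(r_gas, float) - np.asarray(r_bh, float)   # pc
    dv = np.asarray(v_gas, float) - np.asarray(v_bh, float)   # km/s
    r = np.linalg.norm(dr)
    # (i) inside the sink radius
    if r >= r_sink:
        return False
    # (ii) bound: thermal + Alfven + kinetic energy < G M_sink / r
    if u_thermal + 0.5 * v_alfven**2 + 0.5 * float(dv @ dv) >= G * M_sink / r:
        return False
    # (iii) specific angular momentum below that of a circular orbit at r_sink
    j = np.linalg.norm(np.cross(dr, dv))
    return bool(j < np.sqrt(G * M_sink * r_sink))
```

For example, a cold, slowly moving cell 0.1 pc from a $100\,M_\odot$ sink with $r_{\rm sink}=0.3$ pc passes all three checks, while the same cell moving at 5 km\,s$^{-1}$ fails the boundedness criterion.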
In the ``worst case'' scenario, assume accretion is coming from the low-density diffuse intra-cloud medium (density $\rho \sim \langle \rho \rangle \sim 3\,M_{\rm cl}/4\pi\,R_{\rm cl}^{3}$) with virial or free-fall level relative velocities $\delta V \sim v_{\rm cl} \sim (G\,M_{\rm cl}/R_{\rm cl})^{1/2} \gg c_{s}$. This would give $R_{\rm Bondi} \sim (M_{\rm bh}/M_{\rm cl})\,R_{\rm cl}$, so resolving the Bondi radius ($\delta r \lesssim R_{\rm Bondi}$ in the same diffuse mean-density gas) would require a prohibitive number of cells $N_{\rm cells} \sim (R_{\rm cl}/\delta r)^{3} \gtrsim (M_{\rm cl}/M_{\rm bh})^{3}$. However, as we noted above and will show more rigorously below, the accretion rates from such diffuse gas are orders-of-magnitude below Eddington, and (even if well-resolved) would contribute essentially nothing to the total BH accretion in our simulations. Therefore consider instead the ``best-case'' scenario for accretion: since the turbulence in the molecular clouds has rms Mach numbers $\mathcal{M}_{\rm cl} \sim v_{\rm cl}/c_{s} \sim 10-100$, radiative shocks can produce regions with very high densities $\rho \sim \langle \rho \rangle\,\mathcal{M}_{\rm cl}^{2}$, and low relative velocities $\delta V \lesssim c_{s}$ \citep{Vazquez-Semadeni1994,PadoanNordlund1997,MacLowKlessen2004}. Under these conditions, the Bondi radii will be well-resolved ($\delta r \lesssim R_{\rm Bondi}$) so long as $N \gtrsim \mathcal{M}^{-8}\,(M_{\rm cl}/M_{\rm bh})^{3}$ -- a huge relief ($\propto \mathcal{M}^{8}$) in resolution requirements (which would be easily satisfied by every simulation in this paper). As we will show, regions akin to this idealized example dominate the actual accretion in the simulations. 
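The two resolution requirements just derived can be compared numerically (a rough arithmetic sketch; the mass ratio and Mach number below are illustrative values, not quantities measured from the simulations):

```python
def cells_to_resolve_bondi(mcl_over_mbh, mach=None):
    """Minimum number of gas cells N so that delta r <~ R_Bondi.
    Diffuse limit (delta V ~ v_cl):  N ~ (M_cl/M_bh)^3.
    Shocked-clump limit (rho ~ <rho> Mach^2, delta V <~ c_s):
                                     N ~ Mach^-8 (M_cl/M_bh)^3."""
    n = mcl_over_mbh**3
    return n if mach is None else n / mach**8
```

For $M_{\rm cl}/M_{\rm bh}=10^4$, the diffuse limit demands $\sim 10^{12}$ cells, while in a Mach $\sim 30$ shocked clump a handful of cells suffices -- easily satisfied by our $128^3 \approx 2\times 10^6$ cell runs.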
In practice, we choose a sink radius by estimating a ``characteristic'' Bondi radius $b_{\rm c}$ by assuming $M_{\rm bh,\,c}=100\,M_\odot$, and considering two limits: $\delta V \lesssim c_{\rm s}$ (assuming a mean temperature of $100\,$K, typical in our simulations) so {$b_1 = G\,M_{\rm bh,\,c}/c_{\rm s}^{2}$}, and $\delta V \sim v_{\rm cl} \gg c_{\rm s}$ so $b_2 \approx (M_{\rm bh}/M_{\rm cl}) R_{\rm cl}$, and then take $b_{\rm c}=\min (b_1, b_2)$. We have verified in post-processing that in all cases which produce ``interesting'' runaway BH growth, the Bondi radii {\em during the phase where the BH actually accretes rapidly} are at least marginally resolved, as expected from the argument above. {We remind the reader that the mass ``accreted'' in the simulation is not necessarily the actual mass swallowed by the BH, owing to feedback effects on unresolved scales (for details see Sec.~\ref{sec:caveats}). The sink radius $r_{\rm sink}$ is the effective resolution limit for BH accretion: the physics between $r_{\rm sink}$ and the Schwarzschild radius is not resolved in these simulations, but this does not affect the science goals of this article. For completeness, an estimate accounting for BH radiative feedback, based on previous analytic work \cite[]{2016MNRAS.459.3738I}, is included in Sec.~\ref{sec:discussions:hyper-Eddington}. } \subsection{Initial Conditions} \label{sec:simulations:ics-setups} We sample spherical, turbulent, and non-rotating molecular clouds or cloud complexes with different initial mean surface density ($\bar{\Sigma}_0 \equiv M_{\rm cl}/\pi\,R_{\rm cl}^{2} \approx 100, 10^3, 10^4\,M_\odot/{\rm pc}^2$) and initial radius ($R_{\rm cl}=5, 50, 500\,{\rm pc}$) following the setup and results of \citetalias{GrudicHopkins2018}, where each group with the same surface density was shown to have similar star formation efficiency.
Note that these parameters are motivated by massive, dense star-forming cloud and ``clump'' complexes seen in high-redshift galaxies and starburst galaxy nuclei, with only the smaller and lowest-$\bar{\Sigma}_{0}$ clouds analogous to massive GMCs in the Milky Way. Each initial cloud is uniformly magnetized; we set $E_{\rm turb}/|E_{\rm grav}|=1$ and $E_{\rm mag}/E_{\rm grav}=0.1$, where $E_{\rm turb}$, $E_{\rm mag}$, and $E_{\rm grav}$ are the turbulent (kinetic) energy, magnetic energy, and gravitational binding energy, respectively. The clouds serve as the mass reservoirs for BH accretion. We then insert an ensemble of BH seeds into the IC. Typically, the seed masses span $1~M_\odot \le M_{\rm bh} \le 10^4~M_{\odot}$ and are uniformly distributed in $\log M_{\rm bh}$. The initial positions of the seeds are sampled randomly but statistically uniformly within the cloud. The initial velocity is sampled such that each component is uniformly distributed in $[-V_{\rm circ},V_{\rm circ}]$, while the total magnitude is capped below $V_{\rm circ}$ to ensure the seeds are bound to the cloud, where $V_{\rm circ}^2 = GM_{\rm cl}(<r)/r$ is the local circular velocity at radius $r$ (assuming a uniform mass distribution). We resample seeds which would begin within a small distance of the cloud ``edge'' with an outward radial velocity, since these would trivially escape without interesting dynamics. Rather than simulating only a few BH seeds in one cloud, we include a large number of seeds in every IC so that we can sample many different seed masses, positions, and kinematics. However, to avoid significant interactions among the BHs and heavy computational costs, the number of BH seeds is capped at 10000, or at the number such that the total BH mass does not exceed $5\%$ of the initial cloud mass $M_{\rm cl}$, whichever is smaller.
For low-mass clouds, we decrease the lower and upper bounds of the BH seed mass sampling to ensure a sufficient number of BH seeds, which also helps ensure the Bondi radii are resolved (e.g., for $M_{\rm cl}= 10^{4}\,M_{\odot}$, $1\,M_\odot \le M_{\rm bh} \le 100 \,M_\odot$; for $M_{\rm cl}= 10^{5}\,M_{\odot}$, $10\,M_\odot \le M_{\rm bh} \le 10^3 \,M_\odot$; for $M_{\rm cl} \gtrsim 10^{6}\,M_{\odot}$, $10^{2}\,M_\odot \le M_{\rm bh} \le 10^{4} \,M_\odot$). We use adaptive force softening to avoid divergences in our gravity evaluation or extremely small time steps. For the newly formed stars, which have the same mass as gas cells, the minimum softening length is $r_{\rm soft}^{\rm star} \sim ( m_{\rm gas}/\rho_{\rm sf})^{1/3}$, where $m_{\rm gas}$ is the mass resolution of the cloud and $\rho_{\rm sf}$ is the numerical minimum density for star formation ($1000\,{\rm cm}^{-3}$ in the simulation). For BHs, the softening radius is set to $r_{\rm soft}^{\rm bh} = \min (r_{\rm soft}^{\rm star}, b_{\rm c})$, where $b_{\rm c}$ is the characteristic Bondi-Hoyle accretion radius (introduced in \S~\ref{sec:simulations:bh-accretion}). In the simulation $r_{\rm sink}=r_{\rm soft}^{\rm bh}$, so this setup ensures the code resolves the Bondi-Hoyle accretion radius and that the BHs interact reasonably with star particles. For reference, we list the initial conditions in Table~\ref{tab:clouds}. The clouds are divided into three groups with different initial mean surface density $\Bar{\Sigma}_0 = M_{\rm cl}/(\pi R_{\rm cl}^2)$, while within each group the clouds span the same set of initial radii. The fiducial resolution (number of initial gas cells) of our simulations is $128^3$, while a few low ($64^3$) and high ($256^3$) resolution runs of a subset of clouds are also included for comparison.
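As a cross-check of Table~\ref{tab:clouds}, the listed free-fall times follow directly from $t_{\rm ff}=\pi\sqrt{R_{\rm cl}^3/(8GM_{\rm cl})}$ (a quick numerical sketch in pc/$M_\odot$/km\,s$^{-1}$ units; function names are ours):

```python
import math

G = 4.301e-3                # gravitational constant in pc (km/s)^2 / Msun
MYR_PER_PC_PER_KMS = 0.978  # 1 pc / (1 km/s) expressed in Myr

def t_ff_myr(M_cl, R_cl):
    """Initial cloud free-fall time t_ff = pi * sqrt(R_cl^3 / (8 G M_cl)),
    with M_cl in Msun and R_cl in pc, returned in Myr."""
    return math.pi * math.sqrt(R_cl**3 / (8.0 * G * M_cl)) * MYR_PER_PC_PER_KMS
```

For example, $t_{\rm ff}(10^4\,M_\odot,\,5\,{\rm pc})\approx 1.9$ Myr and $t_{\rm ff}(10^8\,M_\odot,\,50\,{\rm pc})\approx 0.6$ Myr, matching the corresponding table entries.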
For each fiducial simulation, the termination time is $2\, t_{\rm ff}$, where $t_{\rm ff}=\pi\sqrt{R_{\rm cl}^3/(8G M_{\rm cl})}$ is the initial free-fall time of the cloud, while for low-resolution runs the termination time is $5\,t_{\rm ff}$. Finally, BH mergers are disabled, since the expected event rate is negligible. \section{Results} \label{sec:results} In this section we present the major results of the simulations. As a first illustration, we show the morphology of one example GMC in Fig.~\ref{fig:visualization}, which has $M_{\rm cl}=10^8\,M_\odot$, $R_{\rm cl}=50\,{\rm pc}$, and a resolution of $256^3$. After $0.55\,t_{\rm ff}$ of evolution, the GMC has become quite turbulent. We also show the 5 BHs that exhibit the most significant mass growth during this period, which are generally located near the center of the GMC. For the BH that grows most rapidly during this period (the orange star in Fig.~\ref{fig:visualization}), we show the zoomed-in distribution of gas\footnote{For illustration purposes, the color is scaled nonlinearly with the density field so as to better illustrate its morphology.} and its velocity field in the middle and right-hand panels. Near the BH's sink radius (0.313 pc), there is a dense gas clump which has very low velocity compared to the gas at the edge of the view ($\sim 50\,{\rm km/s}$); this is rapidly accreted in the time between the snapshot shown and the next simulation snapshot. This essentially fits our expectations from Bondi-Hoyle theory, applied {\em locally} at scales of order the Bondi radius: high gas density and low relative velocity between the BH and nearby gas create the ideal conditions for growth. \subsection{Seed growth in different clouds} \label{sec:growth.vs.cloud} As described in the previous section, in each cloud we sample a large number of BH seeds to study their mass growth.
In Fig.~\ref{fig:mass-growth} we present the mass evolution of (up to) 5 BHs in each simulation that show the most significant mass growth. As we show below, these are {\em not} necessarily the most massive seeds in the ICs. For clouds with low initial surface density ($\Bar{\Sigma}_0=127\,M_{\odot}/{\rm pc^2}$) the mass growth is modest: essentially no BHs grow by more than a factor $\sim 2-3$, and in general even the most-rapidly growing only increase in mass by tens of percent. At the larger surface densities we sample, the mass of the most-rapidly-growing BHs typically increases by at least two orders of magnitude. Ignoring the low surface density complexes, if we consider clouds with fixed $\Bar{\Sigma}_{0}$ but different sizes $R_{\rm cl}$ (or equivalently, masses $M_{\rm cl} \equiv \pi\,\Bar{\Sigma}_{0}\,R_{\rm cl}^{2}$), we see that the final masses of the single most-rapidly-growing BHs increase with the total cloud mass, reaching as high as $\sim 3-10\%$ of the total cloud mass. Interestingly, for the lower-mass complexes, we often see several BHs exhibiting similar growth, while for the most massive complexes ($R_{\rm cl} = 500\,{\rm pc}$), one BH runs away early and then proceeds to accrete a huge amount of gas, ``stealing'' the gas supply from other seeds in the cloud. From the same plot, we also see that the BHs typically grow their mass quickly in a short range of time ($\Delta t \lesssim t_{\rm ff}$) starting at some time near $t \sim t_{\rm ff}$. However, for clouds with higher surface density, we see the time range becomes slightly longer; the BHs in those clouds also start to grow somewhat earlier. Moreover, as we will show below in more detail in some illustrative examples, BH growth always {\em follows} the formation of a significant mass of stars. All these features inspire us to study the effect of star formation and stellar feedback in different clouds, which is discussed in \S~\ref{sec:discussions:feeback}. 
As a different way to study the probability of mass growth, we show the cumulative distribution of the final-minus-initial mass difference for all the BHs in Fig.~\ref{fig:cdf}. Most of the BHs show no significant mass growth, except for a small fraction ($\lesssim 10\%$) of them, which we will discuss in more detail below. \subsection{Dependence on ICs} In Fig.~\ref{fig:initial-mass} we show the dependence of mass growth ($\Delta M_{\rm bh}$) on the initial mass of the BH seeds. As we showed above, most seeds did not grow significantly. But more strikingly -- and perhaps surprisingly, given the strong dependence of the Bondi-Hoyle rate on $M_{\rm bh}$ -- we see that there is almost no correlation between the initial seed mass and BH mass growth. The particular simulation here considers seeds from $10^{2}-10^{4}\,M_{\odot}$, but we find the same (in the extended tests described below) for initial seed masses down to $\sim 10\,M_{\odot}$. In Fig.~\ref{fig:position-velocity}, we present the initial velocity magnitude (relative to the cloud center-of-mass), initial position, and mass growth for all BHs in the simulation. As we can see, there is no strong dependence on either the initial position or velocity magnitude, {\em provided} the BH is (a) reasonably bound to the cloud (initial velocity not larger than $\sim v_{\rm cl} \sim (G\,M_{\rm cl}/R_{\rm cl})^{1/2}$), and (b) the BH does not begin too close to the edge of the cloud with a velocity directed away from the (irregular) centers of collapse (in which case the BH tends to drift away from the dense regions, rather than interact with them). Another factor that could change the result is the initial metallicity $Z$, which self-consistently alters the cooling physics, stellar evolution tracks, and radiative feedback (opacities) in the simulations.
We test this by simulating GMCs with $M_{\rm cl}=10^6\,M_\odot$ and $R_{\rm cl}=5\,{\rm pc}$ (from the high surface density group) with varying initial $Z$ in Fig.~\ref{fig:metallicity}. By comparing the distribution functions of the BH final-minus-initial mass difference, we see that all these clouds produce statistically similar results for runaway BH accretion, independent of $Z$. We note that there are caveats regarding uncertainties in stellar evolution and the treatment of molecular chemistry at extremely low metallicities in these models -- these are reviewed in detail in \citet{GrudicHopkins2018} -- but for all but truly metal-free clouds (where Pop-III evolution may be quite different) we regard this as robust. We also note that \citet{CorbettMoranGrudic2018} showed that the fragmentation and turbulent clumping in even metal-free clouds under high-density conditions like those of interest here are quite similar, independent of the different molecular chemistry networks used for the thermochemistry. {This result applies only to the larger-scale accretion flow onto the BH+disk system; it does not mean that metallicity is unimportant for the actual accretion flow onto the BH itself. Physics below our resolution limit ($r_{\rm sink}$) may still depend on metallicity: for example, high metallicity may enhance the radiative force from BH feedback and thus suppress accretion \cite[]{YajimaRicotti2017,ToyouchiHosokawa2019}. } We also vary the number of BH seeds in the ICs ($N_{\rm bh,tot}$) and count the seeds that undergo significant mass growth, in Fig.~\ref{fig:bh-number-dependence}. Here we use different criteria to define ``significant'': the final-to-initial mass ratio of the BH must exceed a threshold $r$, for $r=100$, $500$, and $2500$. If we simulate an initial number of seeds $N_{\rm bh,\,ini} \lesssim 64$, it becomes unlikely to see even a single seed undergo runaway growth, while for $N_{\rm bh,\,ini} \gtrsim 100$, we are essentially guaranteed that at least one seed will experience runaway growth.
We find the same applying a more limited version of this test to other clouds. Thus a randomly-drawn seed appears to have a $\sim 1\%$ probability of undergoing runaway growth. However, if we increase the number of seeds further, the absolute number of BHs undergoing runaway accretion clearly saturates at a finite value, of $\sim$\,a few to ten with factor $10-100$ growth and $\sim 1$ with extreme runaway growth. Thus a given cloud can only support at most a few runaway BHs. \section{Discussion} \label{sec:discussions} \subsection{Effects of stellar feedback \&\ global cloud properties} \label{sec:discussions:feeback} Intuitively, stellar feedback can alter BH accretion in two ways. i) Feedback expels gas, which makes it harder for BHs to capture that gas. ii) Feedback can make the cloud more turbulent and create more dense regions. As an example, we include low-resolution simulations with and without feedback physics for the same ICs in Fig.~\ref{fig:feedback_effects}. As we see, for this low-surface-density cloud, feedback effectively blows gas away after two free-fall times. As a result, feedback suppresses both BH accretion and star formation -- BH growth in particular is suppressed by more than an order of magnitude, the difference between there being a few versus essentially no ``runaway'' BHs. Star formation and stellar feedback in GMCs have been well studied in previous simulations related to this work (e.g.\ \citealt{GrudicHopkins2018}), as well as in similar studies with different numerical methods which have reached very similar conclusions for star formation (e.g.\ \citealt{2019MNRAS.487..364L}). One important conclusion of these studies (as well as more analytic ones like \citealt{Larson2010}) is that the integrated star formation efficiency, and the effects of feedback, depend strongly on $\bar{\Sigma}_{0}$.
Briefly: a young stellar population of mass $M_{\ast}$ produces a net feedback momentum flux (from the combination of radiation and stellar winds) $\dot{P} \sim \langle \dot{p}/m_{\ast} \rangle\,M_{\rm cl,\,\ast} \sim 10^{-7}\,{\rm cm\,s^{-2}}\,M_{\rm cl,\,\ast}$, while the characteristic gravitational force of the cloud on its gas is $F_{\rm grav} \sim G\,M_{\rm cl,\,tot}\,M_{\rm cl,\,gas} / R_{\rm cl}^{2} \sim G\,M_{\rm cl,\,gas}\,\bar{\Sigma}_{0}$. So the gas reservoir of a cloud is rapidly unbound and ejected when its stellar mass exceeds $M_{\rm cl,\,\ast}/M_{\rm cl,\,gas} \gtrsim G\,\bar{\Sigma}_{0} / \langle \dot{p}/m_{\ast} \rangle \sim \bar{\Sigma}_{0} / (1000\,M_{\odot}\,{\rm pc^{-2}})$. So for our low-$\bar{\Sigma}_{0}$ clouds, almost all of the gas is rapidly unbound after just a small fraction of the GMC forms into stars, preventing it from being accreted by BHs. We can see this reflected in Fig.~\ref{fig:velocity-different-clouds}, which shows the gas rms bulk velocity $\langle |\mathbf{v}_{\rm gas}|^{2} \rangle^{1/2}$, gas sound speed $\langle c_{\rm s} \rangle$, and BH rms velocity $\langle |\mathbf{v}_{\rm bh}|^{2} \rangle^{1/2}$ as a function of time in different ICs, in units of the characteristic cloud gravitational velocity $v_{\rm cl} \sim (G\,M_{\rm cl}/R_{\rm cl})^{1/2}$. Not surprisingly, the rms velocity of the BHs remains of order the gravitational velocity. The gas bulk velocities are dominated by gravity at first so remain of order $v_{\rm cl}$, but when feedback begins to disrupt the cloud they increase in magnitude by a factor $\sim 10$. This effect depends primarily on $\bar{\Sigma}_{0}$, as expected from the argument above.
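The dimensional estimate above is easy to reproduce (a sketch in cgs units; $\langle \dot{p}/m_\ast\rangle \sim 10^{-7}\,{\rm cm\,s^{-2}}$ is the value quoted in the text, and the result recovers the quoted $\sim \bar{\Sigma}_0/(1000\,M_\odot\,{\rm pc^{-2}})$ scaling only to within an order-of-magnitude prefactor, as appropriate for this argument):

```python
G_CGS = 6.674e-8             # gravitational constant, cm^3 g^-1 s^-2
MSUN_PER_PC2_TO_CGS = 2.09e-4  # 1 Msun/pc^2 in g/cm^2
PDOT_PER_MSTAR = 1e-7        # feedback momentum flux per stellar mass, cm s^-2

def critical_stellar_fraction(sigma0):
    """Stellar-to-gas mass ratio M_*/M_gas at which feedback momentum
    balances gravity, ~ G Sigma_0 / <pdot/m_*>; sigma0 in Msun/pc^2.
    Order-of-magnitude only -- all order-unity prefactors are dropped."""
    return G_CGS * sigma0 * MSUN_PER_PC2_TO_CGS / PDOT_PER_MSTAR
```

The scaling is the point here: the low-$\bar{\Sigma}_0$ clouds ($130\,M_\odot\,{\rm pc^{-2}}$) are unbound after turning a percent-level fraction of their gas into stars, while the highest-$\bar{\Sigma}_0$ clouds require an order-unity stellar fraction.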
Similarly, the sound speed of the clouds initially drops extremely quickly owing to cooling until it reaches molecular temperatures (we arbitrarily start from a higher temperature, but this has no effect on our results), with $c_{\rm s} \ll v_{\rm cl}$, but then rises once feedback disrupts the cloud, owing to a combination of (1) photo-ionization, (2) shocks from stellar wind bubbles, and (3) lower gas densities increasing the cooling time. Since e.g.\ the characteristic photo-ionized $c_{\rm s} \sim 10\,{\rm km\,s^{-1}}$ is roughly constant, the importance of this effect depends primarily on $v_{\rm cl}$, which ranges from $\sim 3\,{\rm km\,s^{-1}}$ in our lowest-mass, lowest-density simulation, to $\sim 300\,{\rm km\,s^{-1}}$ in our highest-mass, highest-density simulation. In our low-density, low-mass clouds, we see disruption occurs very early (less than $\sim 2\,t_{\rm ff}$), with the gas bulk velocities and sound speeds reaching $\gg v_{\rm cl}$. This makes gravitational capture of gas by seeds nearly impossible. For the intermediate-density clouds, we see the disruption is significantly delayed, and the magnitude of the post-disruption velocities is reduced (with $c_{\rm s} \lesssim v_{\rm cl}$ even during disruption). For the highest-density clouds, there is no real disruption but just dispersal of some residual gas after star formation completes. We have also considered the impact of stellar feedback on the volume and mass fraction in dense clumps (Fig.~\ref{fig:density-different-clouds}). Specifically, we calculated the volume and mass (in units of the initial cloud volume and total mass) of regions/clumps within the cloud that satisfy $\rho > 100\,\langle \rho \rangle_{0}$ (where $\langle \rho \rangle_{0} \equiv 3\,M_{\rm cl}/(4\pi\,R_{\rm cl}^{3})$ is the initial mean cloud density). The volume/mass of dense clumps increases rapidly in all cases at early times as the cloud collapses, but in the low-density clouds it is rapidly truncated by feedback.
In contrast, we see in the higher-density clouds a sustained ``lifetime'' of dense gas: this is driven by shocks and turbulence from the feedback of the stars that have formed, which has insufficient power to completely disrupt the cloud. In the highest-density case we even see dense clumps re-emerge several times after $t\gtrsim 3\,t_{\rm ff}$ due to large-scale stellar feedback events -- these correspond to large wind/HII region shells colliding to form dense regions (see e.g.\ \citealt{MaGrudic2020} for more detailed discussion). Another obvious requirement for runaway BH growth to ``interesting'' IMBH or even SMBH masses is that the total cloud mass is sufficiently large, such that the dense ``clumps'' accreted are themselves sufficiently massive. As we show below, the characteristic gas clump masses at high densities which meet the conditions for runaway BH growth are typically $\sim 1\%$ of the cloud mass, neatly explaining the maximum final BH masses seen in \S~\ref{sec:growth.vs.cloud}. This requires a total cloud mass $\gtrsim 10^{5}-10^{6}\,M_{\odot}$ for growth to true IMBH (let alone SMBH) scales. Interestingly, since $v_{\rm cl} \sim 15\,{\rm km\,s^{-1}}\,(M_{\rm cl}/10^{6}\,M_{\odot})^{1/4}\,(\bar{\Sigma}_{0}/10^{3}\,{\rm M_{\odot}\,pc^{-2}})^{1/4}$, this plus the surface density condition above simultaneously ensure that complexes are not over-heated or disrupted by photo-ionized gas. \subsection{How Does Runaway Growth Occur?} \label{sec:runaway} We now consider the local conditions for runaway growth. In a small ``patch'' of cloud on scales $\sim r_{\rm sink}$ (small compared to the cloud but large compared to the BH accretion disk), it is not unreasonable to approximate the rate of gravitational capture of gas by a sink via the Bondi formula (Eq.~\eqref{equ:bondi-hoyle-rate}), given the local values of $\rho$, $\delta V$, and $c_{\rm s}$.
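Evaluating the Bondi formula with the ``diffuse-medium'' values of the cloud (mean density and virial-level relative velocity) quantifies just how little a typical seed gains; a sketch dropping order-unity prefactors exactly as in the analytic estimate in the text (function names are ours):

```python
import math

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def diffuse_growth_per_tff(M_bh, M_cl, R_cl):
    """Mass (Msun) gained in one cloud free-fall time from Bondi accretion
    of the diffuse medium: Mdot ~ G^2 M_bh^2 rho / v_cl^3, with
    rho = <rho>_0 = 3 M_cl / (4 pi R_cl^3) and delta V ~ v_cl >> c_s.
    Analytically this reduces to (3/4pi) * M_bh^2 / M_cl."""
    rho0 = 3.0 * M_cl / (4.0 * math.pi * R_cl**3)
    v_cl = math.sqrt(G * M_cl / R_cl)
    t_ff = math.sqrt(R_cl**3 / (G * M_cl))
    return (G**2 * M_bh**2 * rho0 / v_cl**3) * t_ff
```

Note that $G$ and $R_{\rm cl}$ cancel: a $100\,M_\odot$ seed in a $10^6\,M_\odot$ cloud gains only $\sim 2\times 10^{-3}\,M_\odot$ per free-fall time, i.e.\ a fractional growth of order $M_{\rm bh}/M_{\rm cl}$.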
Recall, from \S~\ref{sec:intro} and \S~\ref{sec:simulations:bh-accretion}, that if we consider the typical or diffuse/volume-filling conditions within the cloud, i.e.\ $\rho \sim \langle \rho\rangle_{0} \sim 3\,M_{\rm cl}/4\pi\,R_{\rm cl}^{3}$ and $\delta V \sim v_{\rm cl} \gg c_{\rm s}$, we would obtain \begin{align} \langle \dot{M}_{\rm bh} \rangle_{\rm diffuse}& \sim \frac{G^2 M_{\rm bh}^2 \rho}{\delta V^3} \sim \frac{G^{1/2}\,M_{\rm bh}^{2}}{M_{\rm cl}^{1/2}\,R_{\rm cl}^{3/2}} \sim \left(\frac{\langle n_{\rm cl} \rangle}{{\rm cm^{-3}}}\right)\,\left( \frac{M_{\rm bh}}{M_{\rm cl}} \right)\,\dot{M}_{\rm Edd} \end{align} where $\langle n_{\rm cl}\rangle = \langle \rho \rangle_{0}/m_{p}$. If we further assume that the timescale for accretion $\Delta t$ is of order the cloud lifetime, $\sim t_{\rm ff} \sim \sqrt{R_{\rm cl}^3/GM_{\rm cl}}$, then the total mass accreted $\sim \dot{M}_{\rm bh}\,\Delta t$ would be \begin{align} \Delta M_{\rm bh} \sim \langle \dot{M}_{\rm bh} \rangle_{\rm diffuse}\,t_{\rm ff} \sim \frac{M_{\rm bh}}{M_{\rm cl}}\,M_{\rm bh} \end{align} In other words, unless the ``seed'' is already a large fraction of the entire GMC complex mass (i.e.\ is not really a seed in any sense), the diffuse accretion will be highly sub-Eddington and the BH will grow only by a tiny fractional amount. This immediately explains why {\em most} of the seeds we simulate indeed grow negligibly. However, in a highly turbulent cloud, we argued above that two effects may boost the mass growth: i) dense clumps appear with $\rho \gg \langle \rho\rangle_{0}$, and ii) the turbulent velocity contributes to the relative velocity $\delta \mathbf{V}$, so low $\delta V$ is possible locally. In Fig.~\ref{fig:bondi-hoyle-check} we follow one particular but representative example of a sink which undergoes runaway growth, considering how the relevant factors in the local Bondi rate evolve in the immediate vicinity of the sink.
The thermal sound speed is negligible at basically all times in the cold molecular phases compared to $\delta V$, as expected. Runaway growth therefore occurs when the BH happens to encounter a region which simultaneously features a strong local density enhancement, $\rho \sim 10^{3}-10^{4}\,\langle \rho\rangle_{0}$, and low relative velocity $\delta V \lesssim 0.1\,v_{\rm cl}$, below the escape velocity of gas from the sink radius (so it is indeed captured). This boosts the local Bondi accretion rate by a factor of $\sim 10^{7}$, compared to our estimate above for the ``diffuse'' cloud medium. Visual examination shows this resembles Fig.~\ref{fig:visualization} -- the BH happens (essentially by chance) to be moving with a very low relative velocity with respect to a dense clump created nearby by intersecting shock fronts (with Mach $\sim 30-100$ shocks, i.e.\ $v_{\rm shock} \sim 10\,{\rm km\,s^{-1}}$, producing the large density enhancement), and begins accreting it extremely rapidly. Since the Bondi rate scales as $\propto M_{\rm sink}^{2}$, this runs away: most of the clump mass ($\sim 10^{5}\,M_{\odot}$) is rapidly accreted, with the clump tidally disrupted and captured before it fragments internally to form stars. Examination shows this pattern is typical of the seeds which experience runaway accretion in the simulations. Analytically, therefore, let us assume that during its evolution, a BH encounters a dense clump with local density $\rho_{\rm c}$, clump radius $r_{\rm c}$, mass $\delta M_{\rm c}$, at relative velocity $\delta V_{\rm c}$ (and define $C^{2} = \delta V_{\rm c}^{2} + c_{\rm s}^{2}$, where we can generally assume $C \sim \delta V_{\rm c} \gtrsim c_{\rm s}$, even in regions where $\delta V_{\rm c}$ is relatively low), and accretes in Bondi-Hoyle-like fashion. Fig.~\ref{fig:analytic-scaling} summarizes the resulting accretion for various assumptions.
Integrating the Bondi accretion rate for some time $\Delta t$ (assuming the background is constant), we have \begin{align} \label{equ:runaway-accretion} \frac{1}{M_{\rm bh,0}} - \frac{1}{M_{\rm bh,final}} = \frac{1}{M_{\rm bh,0}} - \frac{1}{M_{\rm bh,0} + \Delta M_{\rm bh}} & \sim \frac{4\pi\,G^2 \rho_{\rm c}}{C^{3}} \Delta t \end{align} where $M_{\rm bh,\,0}$ is the ``initial'' BH mass. This diverges (formally $\Delta M_{\rm bh} \rightarrow \infty$), so in practice the entire clump will be accreted ($\Delta M_{\rm bh} \rightarrow \delta M_{\rm c}$), in a finite time $\Delta t \rightarrow t_{\rm acc} \sim C^{3}/(4\pi\,G^{2}\,\rho_{\rm c}\,M_{\rm bh,0})$. In reality, the time $\Delta t$ will be limited by the shortest of the dense clump lifetime (usually not shorter than its free-fall time $t_{\rm ff,\,c} \sim 1/\sqrt{G\,\rho_{\rm c}}$), the timescale for the clump to fragment and form stars (also no shorter than $t_{\rm ff,\,c}$), and the crossing time $t_{\rm cross} \sim r_{\rm c} / \delta V_{\rm c}$ for the mutual interaction. A simple calculation shows that the ratio $t_{\rm cross}/t_{\rm ff,\,c} \sim (\delta M_{\rm c}/M_{\rm cl})^{1/3}\,(\rho_{\rm c}/\langle \rho\rangle_{0})^{1/6}\,(v_{\rm cl}/\delta V_{\rm c})$. Inserting numbers, or considering Fig.~\ref{fig:analytic-scaling}, shows that for the conditions of greatest interest for rapid accretion ($\delta V_{\rm c} \ll v_{\rm cl}$, $\rho_{\rm c} \gg \langle \rho\rangle_{0}$, and clump masses $\delta M_{\rm c}$ not incredibly small compared to the mass of the cloud, so large BH growth is possible), we usually expect $t_{\rm cross} \gtrsim t_{\rm ff,\,c}$.
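The integrated accretion equation can be inverted to give the mass as an explicit function of time (a sketch in pc/km\,s$^{-1}$/$M_\odot$ units; the closed form below is just the solution of $\dot{M} = 4\pi G^2 \rho_{\rm c} M^2 / C^3$ for a constant background, with the formal divergence at $t_{\rm acc}$):

```python
import math

G = 4.301e-3  # pc (km/s)^2 / Msun; the time unit pc/(km/s) is ~0.98 Myr

def bondi_runaway_mass(M0, rho_c, C, t):
    """Solve dM/dt = 4 pi G^2 rho_c M^2 / C^3 for a constant background:
    M(t) = 1 / (1/M0 - k t), with k = 4 pi G^2 rho_c / C^3.
    Returns (M(t), t_acc); the mass formally diverges at t_acc = 1/(k M0),
    so in practice growth is capped by the clump mass."""
    k = 4.0 * math.pi * G**2 * rho_c / C**3
    t_acc = 1.0 / (k * M0)
    if t >= t_acc:
        return math.inf, t_acc
    return 1.0 / (1.0 / M0 - k * t), t_acc
```

A useful property of this solution is that the mass exactly doubles by $t = t_{\rm acc}/2$, after which the growth accelerates without bound; physically the clump mass $\delta M_{\rm c}$ terminates the runaway.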
So considering a ``worst-case'' scenario, accretion can run away and consume the entire clump when $t_{\rm acc} \lesssim t_{\rm ff,\,c}$, which in turn requires: \begin{align} \label{eqn:critical.condition} \frac{\delta V_{\rm c}}{v_{\rm cl}} \lesssim 0.1\,\left( \frac{M_{\rm bh,\,0}}{10^{-5}\,M_{\rm cl}} \right)^{1/3}\,\left( \frac{\rho_{\rm c}}{100\,\langle \rho \rangle_{0}} \right)^{1/6} \end{align} This corresponds reasonably well to the conditions where, in the simulations, we indeed see runaway growth -- regions with enhanced $\rho_{\rm c}$, and (crucially here) low local $\delta V_{\rm c}$. This also naturally explains why we see only a very weak dependence on initial BH mass -- provided this condition is met (which does not depend strongly on $M_{\rm bh,\,0}$), the ``growth limiter'' is not the BH mass or Bondi rate (which depends strongly on $M_{\rm bh}$), but the mass of the clump $\delta M_{\rm c}$ (which is, of course, entirely independent of the mass of the BH). Moreover, accretion events can occur sequentially, so once a BH ``jumps'' in mass by accreting a clump, its ``effective'' mass will be larger, making it easier to accrete subsequent clumps (in an extreme form of competitive accretion). Still, if the BHs were truly extremely small (e.g.\ $\ll 10\,M_{\odot}$), or the clouds extremely massive, then the probability of such an event would rapidly become small -- this may explain, in part, why for the most massive complexes we see fewer BHs grow (but those that do grow, grow to even larger masses). Finally, in Appendix~\ref{app:bh_accretion_rate_estimation}, we use this to make an order-of-magnitude estimate of the probability of a seed encountering a ``patch'' (i.e.\ clump) of gas meeting the criteria above.
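The critical condition in Eq.~\eqref{eqn:critical.condition} is simple to evaluate for any seed and clump (our direct encoding of the quoted scaling, with the same fiducial normalizations as in the equation):

```python
def max_dv_over_vcl(mbh0_over_mcl, rhoc_over_mean):
    """Maximum delta V_c / v_cl allowing runaway accretion (t_acc <~ t_ff,c):
    ~ 0.1 * (M_bh,0 / 1e-5 M_cl)^(1/3) * (rho_c / 100 <rho>_0)^(1/6).
    Inputs are the seed-to-cloud mass ratio and the clump overdensity."""
    return 0.1 * (mbh0_over_mcl / 1e-5) ** (1.0 / 3.0) \
               * (rhoc_over_mean / 100.0) ** (1.0 / 6.0)
```

For example, a $10^{-4}\,M_{\rm cl}$ seed in a $10^{4}\,\langle\rho\rangle_0$ clump tolerates $\delta V_{\rm c} \lesssim 0.46\,v_{\rm cl}$, while a $10^{-7}\,M_{\rm cl}$ seed in a $100\,\langle\rho\rangle_0$ clump needs $\delta V_{\rm c} \lesssim 0.02\,v_{\rm cl}$ -- illustrating the weak ($\propto M_{\rm bh,\,0}^{1/3}$) mass dependence discussed in the text.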
Assuming e.g.\ uncorrelated Gaussian velocity fields and lognormal density fields, we estimate that the probability of seeds encountering dense clumps is not low, but the probability of such an encounter also having a relative velocity low enough to meet the condition above is, giving a net probability in the range $\sim 0.001-0.01$. This is remarkably similar (given the simplicity of these assumptions) to our estimate of $\sim 0.01$ from the simulations where we varied the number of seeds systematically, as discussed above. { \subsection{Hyper-Eddington accretion} Here we assess whether the mass accretion onto BHs is hyper-Eddington. In these simulations without feedback, the mass flow onto BHs should already be enormous, which is a sufficient (though not necessary) condition for hyper-Eddington accretion. We first check this condition in Fig.~\ref{fig:Edd-CDF}. For each BH in the simulation, we estimate its average mass accretion rate $\langle \dot{M}_{\rm BH} \rangle$ from neighboring snapshots. We define the Eddington ratio as $f_{\rm Edd} \equiv \langle \dot{M}_{\rm BH} \rangle / \dot{M}_{\rm Edd}$, and record the maximum $f_{\rm Edd}$ over each BH's history. We then show the distribution of this maximum $f_{\rm Edd}$ for all BHs. A fraction of the simulated BHs undergo hyper-Eddington accretion (e.g., the fraction of BHs with $f_{\rm Edd}\gtrsim 10^3$ is $\sim 2\%$ for GMCs with $\bar{\Sigma}_0 = 13000\,M_\odot/{\rm pc^2}$, and $\sim 0.5\%$ for those with $\bar{\Sigma}_0 = 1300\,M_\odot/{\rm pc^2}$). For BHs in GMCs with higher initial surface density, the probability of hyper-Eddington accretion is also higher, in the same way as discussed in \S~\ref{sec:results}. Feedback from black holes, especially radiative feedback, will suppress mass accretion (discussed further in \S~\ref{sec:caveats}). Although BH feedback is not included in this study, we can infer from theoretical studies whether hyper-Eddington accretion would remain possible. 
\citet{2016MNRAS.459.3738I} predicted that the critical density for hyper-Eddington accretion (with accretion rates $\gtrsim 5000\, L_{\rm Edd}/c^2$) is \begin{align*} n_\infty \gtrsim 10^5 \left(\frac{M_{\rm BH}}{10^4\,M_\odot}\right)^{-1}\left(\frac{T_\infty}{10^4\,{\rm K}}\right)^{3/2} {\rm cm}^{-3}. \end{align*} Here $n_\infty$ and $T_\infty$ are the density and temperature near the BH. For each BH with mass accretion in our simulation, we identify the time in its history when it reaches its fastest accretion rate, and measure the nearby gas density and temperature. We compare our simulation data with \citet{2016MNRAS.459.3738I} in Fig.~\ref{fig:critical_density}. We find that for most BHs the nearby density is above the critical density that allows hyper-Eddington accretion even in the presence of radiative feedback. For GMCs with higher surface density, the fraction of BHs above the critical density is also higher. } \subsection{{Effects of numerical parameters}} \label{sec:discussions:hyper-Eddington} To close the discussion, we examine the effects of several numerical parameters in the simulations. \subsubsection{Mass \&\ Force Resolution} Resolution could influence our results both directly and indirectly. Ideally we would wish to ensure $m_{\rm gas} \ll M_{\rm bh}$, and that the Bondi radii are resolved (see \S~\ref{sec:simulations:bh-accretion}), but there is of course physics we cannot hope to resolve in our larger cloud complexes, such as the formation of individual stars (e.g.\ predicting the IMF itself). Nonetheless, we have tested our results for several clouds at different resolution levels: $64^3$, $128^3$, and $256^{3}$ (see Appendix~\ref{app:resolution-convergence}). For most clouds (especially those with high initial mean surface density), we see no significant difference in our statistical/qualitative conclusions across these resolution levels. 
Similarly, we have re-run two of our clouds (one low and one high density) with factor $\sim 3$ different force-softening values for the collisionless (star and BH) particles, and see no significant effect. Thus our conclusions appear robust, but we caution again that qualitatively new physical processes would need to be simulated if we went to higher resolution (which are represented here by our sub-grid models for e.g.\ IMF-averaged stellar evolution). \subsubsection{BH Sink Radii} As noted above, in the simulation the sink radius for BH accretion is set as the smaller value of the Bondi-Hoyle accretion radius with either $\delta V \sim c_{\rm s}$ or $\delta V \sim v_{\rm cl}$. Analysis of our simulation shows that runaway accretion almost always occurs in regions where $\delta V \sim c_{\rm s} \lesssim 0.1\,v_{\rm cl}$, with enhanced densities $\gtrsim 100\,(3\,M_{\rm cl}/4\pi\,R_{\rm cl}^{3})$, where (as noted above) the Bondi radii are orders-of-magnitude larger than one would calculate for the diffuse GMC gas with low relative velocities. As a result the simulations relatively easily resolve the Bondi radius where accretion is relevant. Nonetheless we have re-run a small subset varying $r_{\rm sink}$ by more than an order of magnitude in either direction. If we make $r_{\rm sink}$ much too small -- significantly smaller than the rms inter-cell separation (spatial resolution in the gas) at the highest cloud densities ($\gtrsim 100\,(3\,M_{\rm cl}/4\pi\,R_{\rm cl}^{3})$), then we artificially see suppressed accretion (simply because we require the separation between BH and gas cell center-of-mass be $<r_{\rm sink}$ for capture). If we make the sink radius more than an order of magnitude larger than our default value, we see spurious accretion, usually in the very early timesteps of the simulations (before turbulence is fully-developed), where diffuse gas with low relative velocities is rapidly accreted. 
But for more reasonable variations in $r_{\rm sink}$, even spanning an order of magnitude, our results are robust. And as we show below, the accretion corresponds fairly well with analytic post-processing expectations, further lending confidence to this estimate. \subsubsection{Initial BH Velocities} {We have considered some limited experiments where we add a systematic ``boost'' or arbitrarily increase the initial velocities of the BH seeds in the initial conditions (for details see Appendix~\ref{app:initial_velocity_dependece})}. As expected, if the seeds are moving well in excess of the escape velocity relative to the cloud, they escape rapidly and never capture a large amount of gas. So the rms velocities of ``interesting'' BH seeds can only exceed $v_{\rm cl}$ by a modest (at most order-one) factor. On the other hand, reducing the BH seed velocities to zero has very little effect (other than introducing some spurious transient accretion in the first few timesteps when there is no relative gas-BH motion), because they quickly acquire random velocities of order the gravitational velocity $v_{\rm cl}$ from the fragmenting cloud potential. \subsection{{Connections with observations}} Given that this is really a theoretical ``proof of concept'' and we do not yet include these crucial physics (which we expect may change the key conclusions), we hesitate to make specific observational predictions. Nonetheless, even if BH feedback did nothing to further suppress runaway BH growth, there are some important conclusions we can draw regarding observations of both active (star-forming) clouds and ``relic'' star clusters. \begin{enumerate} \item Runaway accretion would not occur in Milky Way/Local Group GMCs or cloud complexes: the necessary conditions much more closely resemble starburst galaxy nuclei and the most massive dense star-forming clumps observed in high-redshift massive galaxies. 
\item As a result, the ``relic'' star clusters from regions which could produce runaway accretion will not be typical star clusters or globular clusters. They are much more akin to nuclear star clusters (at the low-mass end) and dense galactic bulges (at the high-mass end). Even if the high-redshift clumps are off-center, these complexes would quickly spiral to the center of the galaxies to form proto-bulges \citep{noguchi:1999.clumpy.disk.bulge.formation,dekel:2009.clumpy.disk.evolution.toymodel}, which is important for SMBH seed formation mechanisms as it is almost impossible for seeds of the masses we predict here to ``sink'' to galaxy centers via dynamical friction in a Hubble time at high redshift, if they are not ``carried'' by more massive star cluster complexes \citep{ma:2021.seed.sink.inefficient.fire}. \item Regardless of which clusters could have hosted this runaway process, we again find the probability is low on a ``per seed'' basis. Therefore, whether we expect an IMBH/SMBH ``relic'' in the descendants depends fundamentally on the population of seeds and their dynamics. While we find stellar-mass seeds are viable, it is not obvious if these could come from the stars forming in the cloud itself (e.g. from the relics of the stars formed during the process). Most stellar-mass seeds form relatively late after star formation ($\gtrsim 30\,$Myr), in explosions (which could disrupt the cloud), and have large natal kicks (excessive relative velocity). It is possible, if kicks were somehow avoided, that the most massive stars which reach the end of the main sequence more rapidly (at $\sim 3\,$Myr) and collapse directly to BHs could be viable seeds, but then these are much more rare. Alternatively, the seeds could come from the ``pre-existing'' background of stars, as especially in e.g.\ galactic nuclei or $\sim$\,kpc-scale clump complexes in massive galaxies we expect a very large population of background stellar-mass BHs. 
The key then is their kinematics (i.e.\ whether a sufficient number can be ``captured'' to locally interact as we model). \item Almost by definition, the required conditions make it very difficult to observe this process ``in action.'' Complexes which meet the criteria above are, by definition, Compton-thick (and since the accretion occurs in over-dense sub-regions, these are even more heavily obscured). Moreover, if the maximum luminosity of accreting BHs (even if they are undergoing hyper-Eddington accretion) is not much larger than the traditional Eddington limit (as most models predict; see \S~\ref{sec:intro}), then the bolometric and even X-ray luminosities of the clouds/complexes will be dominated by the stars (not the runaway accreting BHs), unless the BH accretes an enormous fraction ($\sim 10\%$) of the entire cloud mass. \item Even if such enormous accretion were to occur (or if the luminosity could exceed Eddington), by the time the BH luminosity could ``outshine'' even a fraction of the stellar luminosity of the complex, its luminosity would be so large that it would not be a ULX-type source. Rather (especially, again, recalling that the complexes of interest are generally in or around distant galactic nuclei), it would much more closely resemble an off-center, obscured AGN (or a dual AGN, if the galaxy already has an accreting SMBH). Large populations of such AGN sources are, of course, well-known, and there are much more mundane ways to produce them (via galaxy mergers or irregular kinematics), but it is perhaps conceivable that a small fraction of them could be systems like what we simulate here. \end{enumerate} \section{Caveats} \label{sec:caveats} \subsection{Feedback from Accreting Black Holes} The most important caveat of this study is that we did not include any ``sub-grid'' model for BH accretion or feedback in the simulations. 
So ``BH accretion rate'' here should really be understood to be ``rate of gravitational capture of gas by the BH-disk system'' (akin to ``Bondi-Hoyle-like mass inflow rate'') and ``BH mass'' or ``sink mass'' represents a sum of the actual BH mass and its bound/captured material (whether that material has actually formed an accretion disk is itself another question). This is not, of course, because we expect feedback to be unimportant for the BHs which rapidly capture gas: indeed, models of super-Eddington accretion disks (models whose ``outer boundary condition'' is something like the sink radii or ``inner boundary condition'' of our simulations) predict both strong radiative (luminosities near or somewhat above the Eddington luminosity) and kinetic (broad-angle MHD outflows from the disk) feedback (see references in \S~\ref{sec:intro}). While it is conceivable that under sufficiently-dense conditions, the surrounding material could continue to accrete (see e.g.\ \citealt{QuataertGruzinov2000,TakeoInayoshi2018,ReganDownes2019}), this could also completely shut down BH growth and even star formation in the surrounding cloud \citep{SchawinskiKhochfar2006}. However, crucial details of such accretion and feedback processes remain deeply uncertain. This includes (i) the rate at which material can go from being ``gravitationally captured'' to actually accreted onto the BH (which determines the luminosity and other feedback); (ii) whether star formation and/or fragmentation occurs in the captured disk material if too much mass is captured; and (iii) for a given accretion rate, the radiated spectrum and energy, and the energy and momentum and mass and opening angle of accretion-disk winds. Our intention here is therefore to first identify a set of {\em necessary}, but perhaps not sufficient, pre-conditions for runaway hyper-Eddington seed growth on ISM scales. 
Clearly, if a BH cannot sustain super-Eddington gravitational capture rates of sufficient total mass in the first place, then it is unlikely that adding feedback would do anything other than further decrease the (already minimal) accretion. This allows us to already rule out large segments of parameter space as viable locations for hyper-Eddington accretion (e.g.\ Milky Way-like low-density or low-mass clouds, systems with insufficient statistical sampling of ``seeds,'' highly-unbound seeds). In future work (in preparation), we will use this study as the basis for a follow-up set of simulations which do include BH feedback, systematically surveying parameterized models for the major uncertainties described above, but using the results of this study to specifically begin from conditions where we know that, {\em absent} BH feedback, rapid accretion is possible. \subsection{Other Caveats} There are also of course other caveats in our simulations themselves. While we survey a factor of $\sim 100$ in mass resolution and see robust results, we are certainly not able to ``converge'' in a strict sense, especially given some ISM micro-physics where the relevant dynamics occur on sub-au scales. We cannot resolve or predict the IMF or stellar evolution tracks of individual stars, let alone their late-stage evolution and potential collapse into BH relics. This is especially unfortunate as one might imagine one source of ``seed'' BHs bound to the cloud would be extremely massive stars that form in that cloud with very short lifetimes that might implode or collapse directly to massive BH remnants, rather than exploding as SNe. A new generation of simulations like STARFORGE might be able to address some of these, but the resolution required has thus far limited their explicit simulations to precisely the low-density, low-mass clouds of least interest here \citep{GrudicGuszejnov2021,GrudicKruijssen2021}. It is also possible that physics we neglect plays an important role. 
For example, on galactic scales, cosmic rays can influence the ISM significantly, although many have argued that because of their diffusive nature (smooth CR density gradients), they play little dynamical role (except perhaps via ionization) in the dense ISM clouds of interest here \citep{farber:decoupled.crs.in.neutral.gas,hopkins:cr.mhd.fire2,hopkins:cr.multibin.mw.comparison,bustard:2020.crs.multiphase.ism.accel.confinement}. More realistic initial conditions and boundary conditions for clouds (embedded in a full ISM, for example) could also be important \citep{lane:2022.turbcloud}. This is perhaps especially relevant for our most massive complexes. When we simulate regions with $R_{\rm cl} \sim 500\,$pc and $\bar{\Sigma}_{0} \sim 10^{4}\,{\rm M_{\odot}\, pc^{-2}}$ -- i.e.\ gas masses as large as $\sim 10^{10}\,M_{\odot}$, these are not really ``clouds'' as we think of GMCs in the Milky Way. Rather, these values are motivated by typical sizes and densities observed in systems like starburst and/or ultra-luminous infrared galaxy nuclei \citep[see e.g.][]{kennicutt98,gao:2004.hcn.compilation,narayanan:2008.sfr.densegas.corr,bigiel:2008.mol.kennicutt.on.sub.kpc.scales}, and seen in the common massive clump-complexes or nuclei of high-redshift massive galaxies \citep{tacconi:high.molecular.gf.highz,krumholz:2010.highz.clump.survival,narayanan:2011.z2.kslaw,orr:ks.law}. But under these conditions, there is usually also a large pre-existing stellar population and dark matter halo, defining the potential of the nuclear gas -- properly simulating these regimes would really require full galaxy-formation simulations. It is likely that this added potential would make the starburst even less able to disrupt, leaving behind a dense nuclear bulge \citep[e.g.][]{sanders88:quasars,tacconi:ulirgs.sb.profiles,rj:profiles,hopkins:cusps.mergers,hopkins:cusps.fp,hopkins:cusps.evol,hopkins:cusps.ell,hopkins:cusp.slopes}. 
\section{Conclusions} \label{sec:conclusions} We have simulated populations of dynamic, accreting BH seeds with masses $\sim 10^{1}-10^{4}\,M_{\odot}$ in massive cloud complexes (meant to resemble the most massive GMCs, and high-redshift and starburst galaxy nuclei), with self-gravity, realistic cooling, and detailed models for star formation and stellar feedback in the form of radiation, outflows, and supernovae, but neglecting the feedback from the BHs themselves. Our goal is to identify whether, under any conditions, such seeds can capture gas from the dense, cold ISM at the rates required to power hyper-Eddington accretion, and whether this can be sustained for long enough periods that such BHs could conceivably grow to IMBH or even SMBH mass scales. This forms a necessary, but not sufficient, condition for hyper-Eddington growth models for the origin of IMBHs and SMBHs. Based on our analysis above, we can draw the following conclusions (again, absent BH feedback): \begin{enumerate} \item Sustained hyper-Eddington gravitational capture from the ISM can occur, under specific conditions (detailed below). This occurs when BH seeds coincidentally find themselves in regions with locally enhanced densities (local densities well in excess of $\sim 100$ times the complex mean), with (by chance) very low relative velocities (less than $\sim 10\%$ of the characteristic gravitational velocity of the complex). The dense clump is then captured extremely quickly (in less than its internal dynamical time), which can set off a ``runaway'' of competitive accretion by which the seed grows even more massive (reaching up to $\sim 1\%$ of the complex gas mass). \item Provided the right conditions are met, this process is only very weakly dependent on the initial seed mass, even for stellar-mass seeds in the $\sim 10-100\,M_{\odot}$ range. Thus, the ``seed'' does not need to already be an IMBH. \item Much like with star formation, stellar feedback plays a dual role. 
Stellar feedback overall suppresses star formation and unbinds gas, suppressing BH growth (especially in lower-density clouds). But in higher-density, more-massive complexes, feedback produces regions like colliding shocks/shells which promote exactly the conditions needed for runaway BH growth. \end{enumerate} For this runaway accretion to occur, we show that there are several necessary ``global'' criteria the molecular complex must meet, including: \begin{enumerate} \item The complex must have a high surface density/gravitational pressure, $\bar{\Sigma}_{0} \gtrsim 1000\,{\rm M_{\odot}\,pc^{-2}}$. Otherwise, stellar feedback disrupts the medium too efficiently, both reducing the time available for accretion and unbinding dense gas instead of allowing it to remain trapped where it could attain low relative velocities. \item The complex must also be sufficiently massive, $M_{\rm cl} \gtrsim 10^{6}\,M_{\odot}$. This ensures both that there is sufficient total mass supply that, if hyper-Eddington accretion occurs, the final mass is ``interesting'' (reaching IMBH, let alone SMBH, mass scales), and, together with the $\bar{\Sigma}_{0}$ criterion, that the escape velocity of the cloud is large enough that ionizing radiation does not rapidly unbind material or disrupt the complex and prevent accretion. \item The BH seeds must be ``trapped'' by the complex, with systematic relative velocities not significantly larger than the characteristic gravitational velocity of the cloud. This means, for example, that a BH moving isotropically in the background galaxy bulge, intersecting a cloud, would be unable to accrete, while BHs with small relative velocities to the cloud are viable. \item We require at least $\sim 100$ seeds, in complexes meeting all the criteria above, to have an order-unity probability of one showing sustained hyper-Eddington accretion. 
Thus even when all the criteria are met, the conditions are ``rare'' on a per-seed basis. Once the number of seeds is sufficiently large, the finite number of locations where runaway can occur, plus the competitive accretion dynamics noted above, mean that the number which actually do experience runaway growth saturates at one to a few. \end{enumerate} In future work, we will use this preliminary study to inform a more focused study which does include BH feedback, systematically exploring the uncertainties in BH feedback models but focusing on cloud conditions where -- at least in the absence of said feedback -- we find runaway growth is possible. \acknowledgments{We thank Xiangcheng Ma and Linhao Ma for useful discussions and revisions of this draft. Support for the authors was provided by NSF Research Grants 1911233, 20009234, 2108318, NSF CAREER grant 1455342, NASA grants 80NSSC18K0562, HST-AR-15800. Numerical calculations were run on the Caltech compute cluster ``Wheeler,'' allocations AST21010 and AST20016 supported by the NSF and TACC, and NASA HEC SMD-16-7592.} \datastatement{The data supporting the plots within this article are available on reasonable request to the corresponding author. A public version of the GIZMO code is available at \gizmourl.} \bibliography{bib} \begin{appendix} \section{Estimating the Probability of a Runaway Accretion Event} \label{app:bh_accretion_rate_estimation} Based on the arguments in \S~\ref{sec:discussions}, here we estimate the probability of a runaway BH accretion event. Specifically, from \S~\ref{sec:runaway}, we want to estimate the probability of a random BH seed encountering a ``clump'' in a turbulent cloud complex which meets the conditions defined in Eq.~\ref{eqn:critical.condition}. 
For simplicity (although somewhat motivated by simulations, see e.g.\ \citet{burkhart:2009.mhd.turb.density.stats}) we will assume uncorrelated density and velocity fields, with Gaussian velocity statistics and lognormal density statistics as is usually assumed in supersonic turbulence. First consider the probability of a seed encountering a clump which is overdense in the manner of Figs.~\ref{fig:visualization} \&\ \ref{fig:bondi-hoyle-check}, or Eq.~\ref{eqn:critical.condition}. Assume the seeds have an rms velocity dispersion $\langle \mathbf{v}_{\rm bh}^{2} \rangle^{1/2}$ of order the gravitational velocity of the complex $v_{\rm cl}$, as does the gas, and that the complex is filled with some number density of clumps $n_{\rm c}$ with effective cross-section $\sigma_{\rm c} \sim {\rm Vol}_{\rm c} / r_{\rm c}$, where ${\rm Vol}_{\rm c}$ is the volume of a typical clump. Assume the seeds randomly move through the complex (uniformly sampling the volume) over its lifetime $\Delta t = \tau\,t_{\rm ff,\,cl}$ (where for the massive complexes of interest, $\tau \sim$\,a few, and $t_{\rm ff,\,cl} \equiv R_{\rm cl}/v_{\rm cl}$). The average number of dense clumps encountered is therefore $\langle N_{\rm cl} \rangle \sim \tau\,n_{\rm c}\,\sigma_{\rm c}\,v_{\rm cl}\,t_{\rm ff,\,cl} \sim \tau\,(R_{\rm cl}/r_{\rm c})\,f_{\rm V,\,c}$. We are interested in dense clumps or shocks, which simulations show tend to have a characteristic size/width of order the sonic scale, $r_{\rm c} \sim R_{\rm cl}/\mathcal{M}_{\rm comp}^{2}$ (where $\mathcal{M}_{\rm comp}$ is the compressive Mach number of the cloud; see e.g.\ \citealt{passot:1988.proof.lognormal,Vazquez-Semadeni1994,scalo:1998.turb.density.pdf}). 
We can also estimate $f_{\rm V,\,c}$ by integrating the standard volume-weighted lognormal density PDF for supersonic turbulence (Gaussian in $\ln{(\rho / \langle \rho\rangle_{0})}$, with mean $=-S/2$ required by continuity, and variance $S \approx \ln{[1 + \mathcal{M}_{\rm comp}^{2}]}$) above some critical $\rho \gtrsim \rho_{\rm c} \sim 100\,\langle \rho \rangle_{0}$. Plugging in numbers, we can see that $\langle N_{\rm cl} \rangle \gtrsim 1$ so long as $\mathcal{M}_{\rm comp} \gtrsim 10$, which is easily satisfied in cold molecular gas for the massive, high-density complexes of interest. So it is not particularly rare for a BH to encounter a dense clump over a duration of several free-fall times in the massive, dense complexes of interest. What is less common is for such an encounter to feature a low relative velocity $\delta V = |\mathbf{v}_{\rm bh} - \mathbf{v}_{\rm gas}|$. Let us assume, similar to the above, that the BH velocity is drawn from an isotropic Gaussian distribution with 1D dispersion $\sigma_{\rm v,\,bh} \sim v_{\rm cl}/\sqrt{3}$ in each dimension, and the gas or clump velocity is drawn from an independent isotropic Gaussian with similar 1D dispersion in each dimension $\sigma_{\rm v,\,gas} = \alpha_{v}\,\sigma_{\rm v,\,bh}$ (where $\alpha_{v}$ is an arbitrary order-unity constant). The velocity difference $\delta \mathbf{V}$ is therefore also Gaussian-distributed, and integrating we obtain the probability \begin{align} P_{v}(\delta V < \epsilon\,v_{\rm cl}) &= {\rm erf}\left(\frac{q}{\sqrt{2}}\right)-\sqrt{\frac{2}{\pi}}\,q\, \exp\left(-\frac{q^2}{2}\right)\ , \\ q &\equiv \frac{\sqrt{3}\,\epsilon}{\sqrt{1+\alpha_{v}^{2}}} \end{align} Assuming $\alpha_{v} \sim 1$, and $\epsilon \sim 0.1$ from Eq.~\ref{eqn:critical.condition}, we obtain $P_{v} \sim 5\times10^{-4}$; multiplying by $\langle N_{\rm cl} \rangle \sim$\,a few (for our massive cloud complexes), this gives a probability of $\sim 10^{-3} - 10^{-2}$ of an ``interesting'' event, per seed. 
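These estimates can be reproduced with a brief numerical sketch; the compressive Mach number $\mathcal{M}_{\rm comp} = 30$, lifetime $\tau = 3\,t_{\rm ff,\,cl}$, $\alpha_{v} = 1$, and $\epsilon = 0.1$ adopted below are assumed, illustrative values for the massive complexes of interest:

```python
import math

def f_V(mach_c, x_crit=100.0):
    """Volume fraction above x_crit times the mean density, for the standard
    volume-weighted lognormal PDF: mean -S/2, variance S = ln(1 + mach_c^2)."""
    S = math.log(1.0 + mach_c**2)
    z = (math.log(x_crit) + 0.5 * S) / math.sqrt(2.0 * S)
    return 0.5 * math.erfc(z)

def P_v(eps, alpha_v=1.0):
    """P(|v_bh - v_gas| < eps * v_cl) for independent isotropic Gaussians with
    1D dispersions v_cl/sqrt(3) (BH) and alpha_v * v_cl/sqrt(3) (gas)."""
    q = math.sqrt(3.0) * eps / math.sqrt(1.0 + alpha_v**2)
    return math.erf(q / math.sqrt(2.0)) - math.sqrt(2.0 / math.pi) * q * math.exp(-0.5 * q**2)

mach_c, tau, eps = 30.0, 3.0, 0.1
N_cl = tau * mach_c**2 * f_V(mach_c)   # expected dense-clump encounters per seed
p_net = N_cl * P_v(eps)                # net probability of a low-delta-V encounter
```

With these assumptions, $P_{v} \approx 5\times10^{-4}$ and $\langle N_{\rm cl}\rangle \sim$ a few, giving a net probability $\sim 10^{-3}$, within the quoted $\sim 10^{-3}-10^{-2}$ range.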
We stress that this is only intended as a guide for intuition -- we have ignored a wide range of effects which modify the statistics: the fact that the density and velocity statistics are probably correlated \citep[see e.g.][]{konstandin:2012.lagrangian.structfn.turb,squire.hopkins:turb.density.pdf}, the fact that strong shocks and feedback tend to produce large local deviations from Gaussianity \citep{hopkins:2012.intermittent.turb.density.pdfs,beattie:2021.turb.intermittency.mhd.subalfvenic}, gravitational focusing (which probably significantly increases the rate of ``coincidences'' in velocity-density space), the size spectrum of different density structures \citep{guszejnov:gmc.to.protostar.semi.analytic,guszejnov:universal.scalings}, and more. Ultimately, capturing all of these effects is only possible in the full simulations, but the simple arguments here can provide some very approximate guide to the typical behaviors in the simulations. \section{Resolution convergence} \label{app:resolution-convergence} In Fig.~\ref{fig:resolution-test} we show the cumulative distribution of the BH final-to-initial mass difference (same as Fig.~\ref{fig:cdf}) for simulations at different resolution levels. Here we choose clouds with $R_{\rm cl}\le 50\,{\rm pc}$, such that all clouds have masses $M_{\rm cl} \le 10^8\,M_\odot$ and the condition $m_{\rm gas} \ll M_{\rm bh}$ is more likely to be satisfied. We see that for low (\texttt{M1e4R5}\footnote{In this section, we use the template \texttt{M\%eR\%d} to denote a cloud by its mass (in $M_\odot$) and radius (in pc). } and \texttt{M1e6R50}) and high (\texttt{M1e6R5} and \texttt{M1e8R50}) surface density clouds, there is good agreement between the CDFs at high (\texttt{Res128}) and low (\texttt{Res64}) resolution. The resolution convergence is worse for the medium surface density group: for \texttt{M1e7R50} we see the same cut-off but a different span in the CDFs, while for \texttt{M1e7R50} there is a systematic difference. 
One possible reason is the uncertainties in these near-breakup clouds. In summary, we find good quantitative resolution convergence for most clouds, especially dense ones where there is significant BH accretion (\texttt{M1e6R5} and \texttt{M1e8R50}). There are indeed some clouds with systematic offsets between resolutions, but they fall into a ``less-interesting'' category in terms of BH accretion and do not qualitatively change our conclusions. { \section{Initial velocity dependence} \label{app:initial_velocity_dependece} To limit computational cost, we restricted the initial velocity distribution of the seeds so as to capture BH accretion events more efficiently in a limited suite of simulations. In the main text the fiducial choice is to have all BH initial velocities $v_{\rm ini}$ below the circular velocity $v_{\rm cl}\equiv (GM_{\rm cl}/R_{\rm cl})^{1/2}$. Inevitably this will miss some BHs that are still bound to the GMC, and possibly overestimate (or underestimate) the probability of runaway accretion in a more general BH seed population. In this section we inspect the issue with a test. In Fig.~\ref{fig:new_velocity} we show the initial-velocity dependence and cumulative distributions of BH accretion (characterized by the mass accreted by each BH, $\Delta M_{\rm BH}$), for three cutoffs in the initial velocity magnitude: fiducial ($v_{\rm ini} \le v_{\rm cl}$), ``critically bound'' ($v_{\rm ini} \le \sqrt{2}\,v_{\rm cl}$, so some BHs are only marginally bound), and ``unbound'' (allowing $v_{\rm ini} > \sqrt{2}\,v_{\rm cl}$). For each test, other quantities like the initial position and velocity direction of each BH are the same. Compared with the fiducial case, a few ($\sim 5$) BHs with initial velocities above $v_{\rm cl}$ accrete at least one gas cell, in both the ``critically bound'' and ``unbound'' groups. 
We also note that no BH with $v_{\rm ini}$ above $\sqrt{2}\,v_{\rm cl}$ accretes more than one gas cell, i.e.\ none undergoes runaway accretion. From the cutoff of the CDF, we see that the fraction of BH seeds with accretion events in the ``critically bound'' and ``unbound'' groups is lower, but of the same order of magnitude. The maximum accreted mass is also similar across the three tests: although the cutoff of the orange line is higher, the trends of the three lines at the high-mass end are very close once the single ``lucky'' BH is excluded. In conclusion, we find that most BH accretion events happen when $v_{\rm ini} \le v_{\rm cl}$. Our fiducial BH population in the main text therefore somewhat overestimates the frequency of these events, but only within the same order of magnitude. } \end{appendix}
Title: Testing the Momentum-driven Supernova Feedback Paradigm in M31
Abstract: Momentum feedback from isolated supernova remnants (SNRs) have been increasingly recognized by modern cosmological simulations as a resolution-independent means to implement the effects of feedback in galaxies, such as turbulence and winds. However, the integrated momentum yield from SNRs is uncertain due to the effects of SN clustering and interstellar medium (ISM) inhomogeneities. In this paper, we use spatially-resolved observations of the prominent 10-kpc star-forming ring of M31 to test models of mass-weighted ISM turbulence driven by momentum feedback from isolated, non-overlapping SNRs. We use a detailed stellar-age distribution (SAD) map from the Panchromatic Hubble Andromeda Treasury (PHAT) survey, observationally-constrained SN delay-time distributions, and maps of the atomic and molecular hydrogen to estimate the mass-weighted velocity dispersion using the Martizzi et al. ISM turbulence model. Our estimates are within a factor of 2 of the observed mass-weighted velocity dispersion in most of the ring, but exceed observations at densities $\lesssim 0.2$ cm$^{-3}$ and SN rates $>2.1\times 10^{-4}$ SN yr$^{-1}$ kpc$^{-2}$, even after accounting for plausible variations in stellar-age distribution models and ISM scale height assumptions. We conclude that at high SN rates the momentum deposited is most likely suppressed by the non-linear effects of SN clustering, while at low densities, SNRs reach pressure equilibrium before the cooling phase. These corrections should be introduced in models of momentum-driven feedback and ISM turbulence.
https://export.arxiv.org/pdf/2208.11055
\usepackage{amsmath} \usepackage{gensymb} \usepackage{subfigure} \usepackage{amssymb} \newcommand{\vdag}{(v)^\dagger} \newcommand\aastex{AAS\TeX} \newcommand\latex{La\TeX} \newcommand{\redpen}[1]{{\textbf{\textcolor{red}{[#1]}}} } \newcommand{\correction}[1]{{\textbf{\textcolor{black}{#1}}} } \newcommand{\newtext}[1]{{\textcolor{magenta}{#1}}} \shorttitle{Feedback from observations} \shortauthors{Sarbadhicary et al.} \begin{document} \title{Testing the Momentum-driven Supernova Feedback Paradigm in M31} \correspondingauthor{Sumit K. Sarbadhicary} \email{sarbadhicary.1@osu.edu} \author[0000-0002-0786-7307]{Sumit K. Sarbadhicary} \affil{Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA} \affil{Center for Cosmology and AstroParticle Physics (CCAPP), Ohio State University, Columbus, OH 43210, USA} \author{Davide Martizzi} \affiliation{DARK, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen, Denmark} \author{Enrico Ramirez-Ruiz} \affiliation{Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064, USA} \affiliation{DARK, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen, Denmark} \author{Eric Koch} \affiliation{University of Alberta, Department of Physics, 4-183 CCIS, Edmonton AB T6G 2E1, Canada} \affiliation{Center for Astrophysics $\mid$ Harvard \& Smithsonian, 60 Garden St., Cambridge, MA 02138, USA} \author{Katie Auchettl} \affiliation{School of Physics, The University of Melbourne, Parkville, VIC 3010, Australia} \affiliation{ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D)} \affiliation{Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064, USA} \author{Carles Badenes} \affiliation{Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics and Cosmology Center (PITT PACC), University of Pittsburgh, 3941 O’Hara Street, Pittsburgh, PA
15260, USA} \author{Laura Chomiuk} \affil{Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA} \keywords{editorials, notices --- miscellaneous --- catalogs --- surveys} \section{Introduction} \label{sec:intro} Supernova (SN) feedback plays a critical role in galaxy formation by regulating the phase structure of the interstellar medium (ISM; \citealt{MO77, MacLow2004, Joung2006}), launching galactic winds \citep{Strickland2009, Heckman2017, Zhang2018}, accelerating cosmic rays \citep{Drury1994, Socrates2008, Caprioli2011, Girichidis2016}, and enriching the intergalactic and circumgalactic medium with metals \citep{Andrews2017, Weinberg2017, Telford2018}. Through a combination of these phenomena, feedback regulates the global star-formation efficiency of galaxies \citep{Ostriker2011, Hopkins2012}. Simulations imply that, without feedback, galaxies would rapidly convert cold gas into stars, resulting in up to a factor of 100 overproduction of stars compared to what is observed \citep{Navarro1991, Hopkins2011}. Unfortunately, current state-of-the-art cosmological simulations that study the evolution of the galaxy population over cosmic time cannot resolve the spatial scales on which supernova remnants (SNRs) interact with the ISM. Even modern `zoom-in' simulations of isolated galaxies can only marginally resolve SNRs \citep{Hopkins2014, Hopkins2018}, and properly resolved SNRs can only be obtained in simulations of smaller regions of the ISM disk \citep{Gatto2017, Kim2017,2020ApJ...896...66K}. This limitation motivated the development of subgrid models of SN feedback at the resolution limit of simulations. Initial efforts to implement SN feedback in the form of thermal energy deposition were ineffective due to efficient radiative cooling in high-density star-forming regions \citep{Katz1992}.
The quest to limit plasma cooling and runaway star formation spawned a variety of subgrid models, which employed techniques like delayed gas cooling \citep{Stinson2006, Governato2007, Governato2010}, stochastic thermal feedback \citep{Dalla2012}, and an effective equation of state for a pressure-supported multi-phase ISM with hydrodynamically decoupled wind particles \citep{Springel2000, Springel2003, Oppenheimer2006, Vogelsberger2014}. These techniques ranged from being unphysical in nature to being inaccurate in the details of the SN-ISM coupling \citep{Martizzi2015, Rosdahl2017, Smith2018}. More recent cosmological simulations \citep[e.g., FIRE,][]{Hopkins2018, Hopkins2018b} have explored subgrid models that deposit momentum, which, unlike thermal energy, cannot be radiated away before impacting ambient gas \citep{Murray2005, Socrates2008, Agertz2013}. During the Sedov-Taylor phase of SNRs, the blast wave increases its momentum yield by a factor of 10--30 as it sweeps up ambient ISM. Later, it transitions into a cold, dense, momentum-conserving shell that ultimately merges with the ISM \citep{Chevalier1974, Cioffi1998, Thornton1998, Martizzi2015, Kim2015, 2020ApJ...896...66K}. This momentum budget per SN has been quantified by several realistic models of the ISM \citep[e.g.,][]{Martizzi2015, Kim2015, Li2015, Walch2015,2020ApJ...896...66K}. It has been shown to effectively drive turbulence and winds, and to reproduce key features of galaxies such as the Kennicutt-Schmidt relation and galactic morphologies \citep{Martizzi2016, Smith2018, Hopkins2018}.
More recent studies, however, have shown that the momentum deposition per SN depends sensitively on effects like SN clustering \citep{Sharma2014, Gentry2017}, entrainment of cold clouds \citep{Pittard2019}, the abundance pattern of the ISM \citep{2020ApJ...896...66K}, thermal conduction \citep{Badry2019}, enhanced cooling due to fluid-instability-driven mixing across the contact discontinuity \citep{Gentry2019}, the SN delay-time distribution (DTD) model \citep{Gentry2017, Keller2020}, and pre-SN feedback via winds, photoionization and radiation pressure \citep{Fierlinger2016, Smith2020}. Observations that are specifically sensitive to SN feedback can help identify a reliable subgrid model. Generally, cosmological simulations calibrate subgrid models to reproduce bulk properties of the galaxy population such as the stellar mass function and the stellar mass to halo mass relation, but this necessarily limits the predictive power of the simulations \citep{Schaye2015}. Extragalactic multi-wavelength surveys have served as useful references for setting subgrid model components such as SN rates \citep[e.g.][]{Mannucci2006}, the efficiency of SN energy in driving ISM turbulence \citep[e.g.][]{Tamburro2009, Stilp2013} and mass-loading in supersonic winds \citep[e.g.][]{Martin1999, Veilleux2005, Strickland2009}. However, the main source of uncertainty in modern subgrid models stems from a poor understanding of the SN-ISM interaction physics that originates on scales of 10--100 pc, which is beyond the reach of most distant surveys. In this respect, the resolved environments of Local Group galaxies provide detailed information on stellar populations, ISM distribution and kinematics, and SNRs at the highest affordable spatial resolution. They are the ideal testing grounds for SN feedback models.
In this work, we test models of mass-weighted ISM turbulence predicted by SN momentum feedback models against observations of turbulence in M31, with a focus on the long-lived, prominent 10-kpc star-forming ring \citep{Lewis2015, Williams2015}. The proximity of M31 pushes the frontier of turbulence studies to $<100$ pc, where the effects of feedback are spatially-resolved, complete with detailed maps of the atomic ISM distribution \citep[e.g.][]{Braun1991,Nieten2006,Braun2009}, and spatially-resolved stellar age distribution (SAD) measurements with sensitivity down to masses $\approx 1.5$ M$_{\odot}$ obtained by the Panchromatic Hubble Andromeda Treasury survey \citep[PHAT;][]{Dalcanton2012, Williams2017}. We can use these SADs to estimate SN rates by taking into account currently known constraints on the efficiencies of the different progenitor channels of core-collapse and Type Ia SNe, expressed in the form of SN delay-time distributions (DTDs) \citep{Maoz2014, Zapartas17, Eldridge2017}. The SADs of the older stellar populations allow us to quantify the Type Ia SN rate as a function of location, which is important for a `green-valley' galaxy like M31 \citep{Mutch2011, Davidge2012}, since this rate lacks correlation with conventional star-formation rate tracers. This paper is organized as follows. In Section \ref{sec:model} we describe our analytical momentum-driven ISM turbulence model, and how we use stellar population and ISM data to constrain ISM densities, SN rates and velocity dispersions in the M31 ring. Section \ref{sec:results} describes the results of our analysis and checks on potential systematics, and in Section \ref{sec:disc} we discuss the implications of these results for the subgrid models of feedback used in cosmological simulations.
\section{Modeling ISM Velocity Dispersion in M31} \label{sec:model} Here, we compare the observed non-thermal velocity dispersion in M31's neutral (\hi) and cold ISM with the turbulent velocity dispersion predicted by the SN momentum-driven ISM turbulence model of \cite{Martizzi2016}. Our calculations are supplemented by SN rates measured from the SAD of the PHAT survey and known forms of the SN DTD. We describe these efforts below. For all measurements, we assume that the distance to M31 is 785 kpc \citep{McConnachie2005}, so that 1\arcsec = 3.78 pc. We restrict our analysis to the 10 kpc star-forming ring of M31 (Figure \ref{fig:maps}). We expect the main source of turbulence here to be star formation, the mechanism we are interested in testing, as opposed to other sources of turbulence observed in galaxies such as galactic spiral arms and magnetorotational instabilities \citep{Tamburro2009,Koch2018,utomo2019}. Both atomic and molecular gas are most abundantly located and detected at high signal-to-noise along the ring \citep{Braun2009, Nieten2006}. Additionally, since the gas scale height in pressure-supported star-forming disks can vary with radius \citep{utomo2019}, staying within the ring helps justify the use of a constant scale height in Eqs.\ \eqref{eq:nh1} and \eqref{eq:nh2}. We do, however, assess the impact of variable scale heights later in Section \ref{sec:modelvsobsveldisp}. In the following sub-sections, we describe our methodology for modeling the ISM velocity dispersion using momentum-driven turbulence from SNe, and the observations we use for comparison. \subsection{Momentum Injection by Supernovae} \label{sec:predsigma} Following \cite{Martizzi2015} and \cite{Martizzi2016} -- hereafter, M15 and M16 respectively -- we assume that the non-thermal velocity dispersion in \hi is a result of SNR momentum-driven turbulence, injected on spatial scales comparable to the radius at which the SNR merges with the ISM, i.e.
the shock velocity becomes of the order of the velocity dispersion in the ISM. M15 quantified the final momentum (\pfin) driven by an isolated SNR well past its shell-formation stage in a turbulent ISM as \begin{equation} \label{eq:pfin_nh} \frac{p_{\rm fin}}{m_*} = 1110\ \mathrm{km/s} \left(\frac{Z}{Z_{\odot}}\right)^{-0.114} \left(\frac{n_h}{100\ \mathrm{cm^{-3}}}\right)^{-0.19} \end{equation} where \pfin/$m_*$ is the momentum deposited per mass of stellar population (we set $m_* = 100$ M$_{\odot}$, following M15), $Z$ is the metallicity and $n_h$ is the ISM density\footnote{The reader is referred to \citet{2020ApJ...896...66K} for revised prescriptions at low $Z$.}. We note that this form of \pfin is similar to that found by other independent high-resolution studies of momentum deposition by SNRs in an inhomogeneous ISM \citep[e.g.][]{Kim2015}. M16 used this subgrid model of momentum feedback to simulate the SN-driven ISM at 2--4 pc spatial resolution, and showed that the resulting velocity dispersion in a steady-state Milky Way-like ISM can be described by an analytical equation in which the energy injection rate of SN momentum-driven turbulence is balanced by its corresponding rate of decay. The resulting mass-weighted velocity dispersion ($\sigma_p$) is given by Eq.\ 22 in M15, which we repeat here for convenience, \begin{equation} \label{eq:sigma} \sigma_{p} = \dfrac{3}{4\pi} \left(\dfrac{32\pi^2}{9}\right)^{3/7} \left(\dfrac{p_{\rm fin}}{\rho}\right)^{4/7} \left(f\, \dot{n}_{SN}\right)^{3/7} \end{equation} where $\rho$ is the density of gas, $\dot{n}_{SN}$ is the SN rate per unit volume, and $f$ is a factor that accounts for momentum cancellation when multiple blast waves interact. We set $f=1$ for our fiducial runs, then revisit the issue of $f$ in Section \ref{sec:disc}. We will use this predicted $\sigma_p$ in different regions of M31's ring, as a function of the measured $\rho$ and $\dot{n}_{SN}$, for comparison with observations in the subsequent sections.
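For concreteness, Eqs.\ \eqref{eq:pfin_nh} and \eqref{eq:sigma} can be evaluated numerically as in the following Python sketch (our own illustrative code, not from M15/M16; the CGS unit constants are standard values):

```python
import numpy as np

# Standard CGS constants (assumed values, not from the paper)
MSUN = 1.989e33   # g
KM   = 1.0e5      # cm

def p_fin_per_mstar(n_h, Z=1.0):
    """Eq. (1): final SNR momentum per m_* = 100 Msun of stars, in km/s.
    n_h is the ambient hydrogen density in cm^-3, Z in solar units."""
    return 1110.0 * Z**(-0.114) * (n_h / 100.0)**(-0.19)

def sigma_p(n_h, rho, n_dot_sn, Z=1.0, f=1.0):
    """Eq. (2): mass-weighted turbulent velocity dispersion in km/s.
    rho is the gas mass density (g cm^-3); n_dot_sn is the SN rate per
    unit volume (SN cm^-3 s^-1); f is the momentum-cancellation factor."""
    p_fin = p_fin_per_mstar(n_h, Z) * KM * 100.0 * MSUN   # g cm s^-1 per SN
    sig = (3.0 / (4.0 * np.pi)) * (32.0 * np.pi**2 / 9.0)**(3.0 / 7.0) \
          * (p_fin / rho)**(4.0 / 7.0) * (f * n_dot_sn)**(3.0 / 7.0)
    return sig / KM
```

For a Milky Way-like midplane ($n_h \approx 1$ cm$^{-3}$ and $\sim$1 SN per 50 yr spread over the disk volume), this yields $\sigma_p$ of order 10 km/s, comparable to observed \hi dispersions.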
\subsection{Measurement of SN rate density ($\dot{n}_{SN}$)} \label{sec:ratesmodel} We set the SN rate per unit volume, $\dot{n}_{SN}$, using the detailed SAD maps from the PHAT survey and the known properties of the SN DTD. We use the \cite{Williams2017} SAD map of the northern third of the disk of M31, spanning a total area of about 0.8 deg$^2$ (Figure \ref{fig:maps}). An SAD is measured in each of 826 spatial cells, each 83$^{\prime\prime}$ wide. Each cell contains the stellar mass formed per look-back time ($M_{ij}$, where $i$ is the cell and $j$ is the age bin), estimated by comparing resolved color-magnitude diagrams of the stars in the region with stellar isochrone models. We then convert these SAD maps into maps of the SN rate per cell using observationally constrained DTDs. The DTD is defined as the SN rate versus time elapsed after a hypothetical brief burst of star formation. When convolved with the SAD maps described above, the DTD provides the current SN rate in each region of the galaxy \citep{MaozMan2012, Maoz2014} in the following way: \begin{equation} \label{eq:r_i} R_i = \sum_{j=1}^{N} M_{ij} \Psi_j \end{equation} where $M_{ij}$ is the stellar mass formed in cell $i$ in the age-interval $j$ given by the SAD map, and $\Psi_j$ is the DTD value in the age bin $j$. We use the form of the core-collapse DTD given by Eq.\ A.2 of \cite{Zapartas17}, which accounts for the effects of binary stellar interactions at $Z_{\odot}$. For Type Ia SNe, we assume the parametric form of the Type Ia DTD from \citet{MaozMan2012}, based on the compilation of all observational constraints to date, \begin{equation} \label{eq:psiIa} \Psi_{Ia}(t) = (4 \times 10^{-13} \mathrm{\ SN\ yr^{-1}\ M_{\odot}^{-1}}) \left(\frac{t}{1 \ \mathrm{Gyr}}\right)^{-1} \end{equation} where $t$ is the delay-time between star formation and SN. The forms of the Type Ia and core-collapse SN DTDs are shown in Figure \ref{fig:dtd}.
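As a concrete illustration, the convolution in Eq.\ \eqref{eq:r_i} with the Type Ia DTD of Eq.\ \eqref{eq:psiIa} can be sketched in a few lines of Python (illustrative code with hypothetical age bins; the real inputs are the 826 SAD cells of \citealt{Williams2017}):

```python
import numpy as np

def psi_ia(t_gyr):
    """Type Ia DTD (Eq. 4): SN yr^-1 Msun^-1 at delay time t in Gyr."""
    return 4e-13 * (np.asarray(t_gyr) / 1.0)**(-1)

def sn_rate_per_cell(mass_formed, bin_edges_gyr, dtd):
    """Eq. (3): R_i = sum_j M_ij * Psi_j, evaluating the DTD at bin centres.
    mass_formed: stellar mass (Msun) formed in each look-back age bin."""
    centres = 0.5 * (bin_edges_gyr[:-1] + bin_edges_gyr[1:])
    return float(np.sum(np.asarray(mass_formed) * dtd(centres)))
```

For example, $10^6$ M$_{\odot}$ formed 1--2 Gyr ago contributes a Type Ia rate of $10^6 \times 4\times10^{-13}/1.5 \approx 2.7\times10^{-7}$ SN yr$^{-1}$.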
The volumetric SN rate in cell $i$ ($\dot{n}_{SN, i}$, for use in Eq.\ \eqref{eq:sigma}) can then be estimated from $R_i$ as \begin{equation} \label{eq:dotsn} \dot{n}_{SN, i} = \frac{R_{i} \mathrm{cos}(i)}{2 A_{i} z_{sn}} \end{equation} where $A_i$ is the cell size of each SAD region ($\approx 83^{\prime\prime} \times83^{\prime\prime}$ or $310 \times 310$ pc$^2$) and $z_{sn}$ is the scale height of the vertical distribution of SNe. The factor $\mathrm{cos}(i)$ accounts for the extended line of sight through the disk as a result of the galaxy's inclination angle $i = 77\degree$ \citep{Corbelli2010}, so $z_{sn} \rightarrow z_{sn}/\mathrm{cos}(i)$. We assume $z_{sn}=150$ pc for core-collapse SNe and $z_{sn}=600$ pc for Type Ia SNe, as explained in Section \ref{sec:scaleheights}. \subsection{Measurements of ISM density and velocity dispersion} \label{sec:measurementofismdensity} Most of the ISM mass in star-forming regions is in the atomic (\hi) and molecular phases, so we use maps of the 21 cm line of \hi \citep{Braun2009} and the 115 GHz $^{12}$CO(J=1--0) line \citep{Nieten2006} in M31. The data cubes of \cite{Braun2009} were obtained using the Westerbork Synthesis Radio Telescope (WSRT) and the Green Bank Telescope (GBT), with a spatial resolution of 30\arcsec (or 113 pc at the distance of M31). The \hi column density ($N_{HI}$) and non-thermal velocity dispersion ($\sigma_{HI}$) were measured by \cite{Braun2009} from the 21 cm emission along each line of sight, assuming a model of an isothermal, turbulence-broadened line profile. We note that evidence for opacity-corrected \hi features in 21 cm is somewhat inconclusive in more recent observations of M31 and M33 \citep{Koch2018, Koch2021}, so we use the opacity-uncorrected map of \cite{Braun2009} (their Fig. 15). The difference in the predicted velocity dispersion from the two versions of the density maps is $\approx 12\%$, which does not affect our conclusions later.
Molecular hydrogen column densities were obtained from the $^{12}$CO(J=1--0) emission map of \cite{Nieten2006}, taken with the single-dish IRAM 30m telescope. The survey covered $2^{\degree} \times 0.5^{\degree}$ of the M31 disk, yielding a map of CO-line intensity at a final angular resolution of 23$^{\prime\prime}$ (spatial resolution $\approx 87$ pc at the distance of M31). The CO-line intensities were converted into H$_2$ column densities ($N_{H2}$) using the conversion factor $\mathrm{X_{CO}} = 1.9 \times 10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ assumed by \cite{Nieten2006}. The total mass of H$_2$ is about 14$\%$ that of \hi in the M31 ring. For convenience, we use the \hi velocity dispersion as a proxy for the H$_2$ velocity dispersion via the radius-independent ratio $\sigma_{HI}/\sigma_{H2} = 1.4$ measured by the HERACLES CO and THINGS \hi surveys of nearby galaxies \citep{Mogotsi2016}, as well as in M33 \citep{Koch2019}. We note that we include H$_2$ measurements to account for the velocity dispersion of the `mass-weighted' ISM, in order to be consistent with the M15 and M16 models, which also predict the mass-weighted turbulent velocity dispersion. We combine the density and velocity dispersion of the atomic and molecular phases into an effective mass-weighted ISM. The total mass-weighted non-thermal velocity dispersion in the \hi and molecular phases is then $\sigma_{obs} = \sqrt{(N_{HI}/N_{tot})\sigma_{HI}^2 + (N_{H2}/N_{tot})\sigma_{H2}^2}$, where $N_{tot}=N_{HI}+N_{H2}$. We assume the vertical distribution of the ISM in M31 is centered on the disk midplane, and is approximately Gaussian for the molecular phase and exponential for \hi, consistent with observations of our Galaxy \citep{Dickey1990, Ferriere2001}. Each phase is characterized by an `effective' scale height, which we discuss further in Section \ref{sec:scaleheights}.
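The column-density weighting above amounts to a short helper; the following Python sketch (illustrative, in our own notation) makes the bookkeeping explicit:

```python
import numpy as np

def sigma_obs(N_hi, N_h2, sigma_hi, ratio=1.4):
    """Mass-weighted non-thermal velocity dispersion of the HI + H2 ISM.
    Uses sigma_H2 = sigma_HI / ratio (Mogotsi et al. 2016); column
    densities in cm^-2, dispersions in km/s."""
    sigma_h2 = sigma_hi / ratio
    N_tot = N_hi + N_h2
    return np.sqrt((N_hi * sigma_hi**2 + N_h2 * sigma_h2**2) / N_tot)
```

With no molecular gas the expression reduces to $\sigma_{HI}$; a 14\% H$_2$ column fraction lowers the combined dispersion only slightly.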
For each SAD cell with \hi column density $N_{HI}$, the \hi density along the line of sight $z$ is \begin{equation} \label{eq:nh1} n_{HI}(z) = \frac{N_{HI}\, \mathrm{cos}(i)}{2 z_{HI}}\mathrm{exp}\left(- \left|\frac{z}{z_{HI}}\right|\right) \end{equation} As in Eq.\ \ref{eq:dotsn}, the scale height has been corrected for the inclination of M31 with the factor $\mathrm{cos}(i)$. Similarly, for each SAD cell with H$_2$ column density $N_{H2}$, the corresponding H$_2$ volume density is \begin{equation} \label{eq:nh2} n_{H2}(z) = \frac{N_{H2}\, \mathrm{cos}(i)}{\sqrt{2\pi z_{H2}^2}} \mathrm{exp}\left(-\frac{z^2}{2 z_{H2}^2}\right) \end{equation} For ease of interpretation (given our simplified ISM model), we will compare the observed velocity dispersions with the \emph{minimum} velocity dispersion predicted by the models. We enforce this by assuming all SNe explode at the mid-plane density, i.e. $n_h = n_h(z=0)$. This is approximately a lower limit on $\sigma_p$ (we call it $\sigma^{min}_p$) per SAD cell, since SNe exploding away from the midplane but still within the scale height of the gas would interact with lower densities than at the midplane, deposit greater momenta (Eq.\ \ref{eq:pfin_nh}), and contribute to an effectively higher $\sigma_p$ per SAD cell. This lower limit is also conservative because we neglect all other sources of stellar feedback (e.g. winds, cosmic rays) that could add to the momentum budget per SAD cell depending on the environment. Comparing this minimum feedback from SNe with observations leads to some interesting insights, as we show in Section \ref{sec:disc}. For each value of ($N_{HI}$, $N_{H2}$), we derive ($n_{HI}$, $n_{H2}$) using Eqs.\ \eqref{eq:nh1} and \eqref{eq:nh2}, convert to a total hydrogen mass density $\rho = m_p (n_{HI} + 2n_{H2})/X_H$ in units of g cm$^{-3}$ (where $m_p = 1.67 \times 10^{-24}$ g and $X_H = 0.76$ is the mass fraction of hydrogen), and feed it into Eq.\ \eqref{eq:sigma}.
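A minimal sketch of this midplane-density step, in Python (our fiducial scale heights are taken as defaults; the constants are standard CGS values):

```python
import numpy as np

PC = 3.086e18                        # cm per parsec
COS_I = np.cos(np.deg2rad(77.0))     # inclination correction, i = 77 deg

def midplane_densities(N_hi, N_h2, z_hi_pc=350.0, z_h2_pc=117.0):
    """Eqs. (6)-(7) evaluated at z = 0: midplane HI and H2 number
    densities (cm^-3) from column densities (cm^-2)."""
    n_hi0 = N_hi * COS_I / (2.0 * z_hi_pc * PC)                 # exponential HI
    n_h20 = N_h2 * COS_I / np.sqrt(2.0 * np.pi) / (z_h2_pc * PC)  # Gaussian H2
    return n_hi0, n_h20

def mass_density(n_hi0, n_h20, X_H=0.76):
    """Total hydrogen mass density rho = m_p (n_HI + 2 n_H2) / X_H, g cm^-3."""
    m_p = 1.67e-24
    return m_p * (n_hi0 + 2.0 * n_h20) / X_H
```

For instance, a purely atomic cell with $N_{HI}=10^{21}$ cm$^{-2}$ gives a midplane density of roughly 0.1 cm$^{-3}$ after the $\cos(i)$ correction.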
We also take the total number density of hydrogen, in units of atoms cm$^{-3}$, as $n_h = n_{HI} + 2 n_{H2}$, for use in Eq.\ \ref{eq:pfin_nh}. \subsection{Galactocentric radii of SAD cells} \label{subsec:ring} We first calculate the deprojected distance of each cell from the center of M31 using the method in \cite{Hakobyan2009}. Let $\left(\alpha,\delta \right)$ be the sky-projected location of each cell centroid, and $\left(\Delta \alpha, \Delta \delta\right)$ be the sky-projected angular offset from the M31 center (located at $\alpha_{M31} = 00^h42^m44.3^s$, $\delta_{M31} = +41\degree16^{\prime}9^{\prime\prime}$)\footnote{\url{http://ned.ipac.caltech.edu/}}. Assuming a position angle of M31's disk, $\theta_p$= 38$\degree$ \citep{Corbelli2010}, the location ($u,v$) of each SAD cell in M31's coordinate system is \begin{align*} u &= \Delta \alpha\ \mathrm{sin} \theta_p + \Delta \delta\ \mathrm{cos} \theta_p \\ v &= \Delta \alpha\ \mathrm{cos} \theta_p - \Delta \delta\ \mathrm{sin} \theta_p \end{align*} The radial distance of each cell in the plane of M31 from the M31 center, corrected for M31's inclination ($i = 77\degree$; \citealt{Corbelli2010}), is \begin{equation} d^2 = u^2 + \left(\frac{v}{\mathrm{cos}\, i}\right)^2 \end{equation} where $d$ is the angular distance from the center in arcseconds. We identify the ``ring'' as SAD cells with deprojected radii of 10--13 kpc, as shown by the shaded region in Figure \ref{fig:maps}. \subsection{Assumptions about SN and ISM scale heights} \label{sec:scaleheights} In this section, we describe plausible ranges and fiducial values for our free parameters: the atomic and molecular scale heights ($z_{HI}$ and $z_{H2}$) and the SN scale heights $z_{sn}$ (from here on, we specify the separate scale heights of SNe Ia and CC as $z_{Ia}$ and $z_{cc}$ respectively). Core-collapse SNe generally occur at lower effective scale heights than SNe Ia \citep{Hakobyan2017}.
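The deprojection of Section \ref{subsec:ring} can be written compactly; the Python sketch below (illustrative code; the angles follow \citealt{Corbelli2010}) returns the in-plane distance for a sky-projected offset:

```python
import numpy as np

THETA_P = np.deg2rad(38.0)   # position angle of M31's disk (Corbelli et al. 2010)
INCL = np.deg2rad(77.0)      # inclination of M31

def deprojected_distance(d_alpha, d_delta):
    """Distance of a cell from the M31 centre in the plane of the disk,
    given sky-projected offsets (any angular unit; output in the same unit)."""
    u = d_alpha * np.sin(THETA_P) + d_delta * np.cos(THETA_P)   # along major axis
    v = d_alpha * np.cos(THETA_P) - d_delta * np.sin(THETA_P)   # along minor axis
    return np.hypot(u, v / np.cos(INCL))
```

An offset along the major axis is unchanged, while a minor-axis offset is stretched by $1/\cos i \approx 4.4$.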
In the Milky Way, open clusters younger than 100 Myr are all situated within 200 pc of the midplane, with an effective scale height of 60--80 pc \citep{Joshi2016, Soubiran2018}. Since their age and velocity distributions nicely follow those of field stars \citep{Baumgardt2013, Soubiran2018}, we can assume that the general population of core-collapse SN progenitors in the Milky Way also has a scale height of 60--80 pc. However, the disk of M31 is kinematically hotter and more extended than that of the Milky Way \citep{Ivezic2008, Collins2011}. Based on the ratio of scale heights to scale lengths observed in edge-on disk galaxies \citep{Yoachim2006, Yoachim2008}, \cite{Collins2011} proposed that the M31 disk could be 2--3 times thicker than the Milky Way's (although this may be an over-estimate, as the galaxies in the \citealt{Yoachim2006} sample are different from and less massive than M31). We therefore assume that in M31, 60 pc $< z_{cc} < 200$ pc is a plausible range for the scale height of core-collapse SNe. Older ($\sim$Gyr) stars are mostly concentrated in the thin and thick disks, with measured scale heights in the Milky Way of 140--300 pc and 500--1100 pc respectively \citep{Li2018, Mateu2018}. The thin disk is slightly younger, with ages in the range of $7-9$ Gyr compared to the thick disk's age of $\sim 10$ Gyr \citep{Kilic2017}. The measured shape of the Type Ia SN DTD suggests that progenitors younger than 10 Gyr will produce the majority of Type Ia SNe \citep{Maoz2010}, so we assume Type Ia SN progenitors are roughly distributed at the same scale height as the thin disk, $\sim$300 pc. This is also consistent with the scale height of SDSS white dwarfs \citep{deGennaro2008, Kepler2017, Gentile2019} and is about 4 times the scale height of young core-collapse progenitors, so for simplicity we assume that in M31, $z_{Ia} \approx 4 z_{cc}$, with $z_{cc}$ in the range mentioned previously.
\hi scale heights in M31 were measured by \cite{Braun1991} to be in the range $z_{HI} = 275-470$ pc between radii of 10--13 kpc. We are not aware of any scale height measurements of the molecular phase in M31, but the Milky Way can provide some supplementary information. Studies of the H$_2$ profiles traced by CO in the Milky Way have measured a half-width at half-maximum scale height of 50--80 pc (consistent with being a bit smaller than $z_{cc}$), which is about a factor of 3 lower than the scale height of \hi in the Milky Way \citep{Marasco2017}. Given these constraints, we can assume that $z_{cc}$ is always less than $z_{Ia}$, $z_{H2}$ is always less than $z_{HI}$, $z_{Ia} \gtrsim 4 z_{cc}$ and $z_{H2} \approx z_{HI}/3$. Given the range of values allowed by observations, we first analyze our results for a fiducial model where $z_{cc} = 150$ pc and $z_{HI}=350$ pc, giving $z_{Ia} = 600$ pc and $z_{H2}=117$ pc. We then vary these parameters and their ratios within the plausible ranges discussed previously to assess the impact of these assumptions in Section \ref{sec:modelvsobsveldisp}. \section{Results} \label{sec:results} In this section, we show the distribution of SN rates across the M31 ring as measured from our SAD map and DTDs, and compare the velocity dispersions predicted by these rates with the observed values along the ring. \subsection{Distribution of SN Rates} \label{subsec:snrates} Figure \ref{fig:snrate} shows our SN rate distribution estimated from the DTDs and SADs as described in Section \ref{sec:ratesmodel} (Eq.\ \ref{eq:r_i}). The integrated SN rate in the region we identify as M31's 10 kpc ring is $1.74 \times 10^{-3}$ \snyr, with roughly 39$\%$ contributed by SNe Ia and $61\%$ by core-collapse SNe. The relative fractions of Type Ia versus core-collapse SNe are shown in Figure \ref{fig:snfrac}. About 75$\%$ of the ring has a higher core-collapse rate than Type Ia.
These regions coincide well with young star-forming regions identified in UV and IR images of M31 \citep{Lewis2017}, and are mostly concentrated in the inner parts of the ring. Regions with the highest core-collapse rates, exceeding that of Type Ia by more than a factor of 3, coincide with the well-known star-forming region OB54, with nearly $3.8 \times 10^{5}$ M$_{\odot}$ of stars younger than 300 Myr \citep[][also see Figure \ref{fig:overpredictedcells} in this paper]{Johnson2016}. SNe Ia generally dominate the total SN rate near the edges of our ring region, coinciding with the inter-arm region as seen in Figure \ref{fig:snfrac}, and exceeding the core-collapse rate by up to a factor of 3 in some SAD cells. As evidence of the high characteristic SN rate of the ring, we also show the distribution of optically-selected SNRs in M31 from \cite{Lee2014} in Figure \ref{fig:snfrac}. The majority of SNRs are concentrated along the M31 ring, and are particularly associated with regions of higher core-collapse fraction. A more quantitative test of whether the observed SNR distribution is consistent with the SN rates will be the subject of a future paper, since it requires a more rigorous analysis of the poorly understood completeness of SNR catalogs (particularly at optical wavelengths). \subsection{Comparison of model and observed velocity dispersion} \label{sec:modelvsobsveldisp} We compare the observed ($\sigma_{obs}$) and minimum predicted ($\sigma^{min}_p$) velocity dispersions in the mass-weighted ISM in Figure \ref{fig:modelvsobsvel}. The observed velocity dispersion exhibits a range of values spanning 4--12 km/s, whereas the predicted values extend up to 20 km/s or higher. On average, we find that for our fiducial model described in Section \ref{sec:scaleheights}, $\sigma^{min}_p$ mostly exceeds the observed values $\sigma_{obs}$, but lies within a factor of two of them for 84$\%$ of the SAD pixels in the ring.
To understand why our velocity dispersion model over-estimates the observed values in Figure \ref{fig:modelvsobsvel}, we checked the ratio of predicted to observed velocity dispersion, \sigrat, against the column density and SN rate, the two fundamental parameters in our model, in Figure \ref{fig:sigrationh}. We find a hint of a negative correlation of \sigrat with $N_H$ and a positive correlation with SN rate. In particular, SAD pixels with Log $(N_{tot}/\mathrm{cm}^{-2})<21.3$ and Log $(R_i/\mathrm{yr}^{-1})>-4.7$ mostly have \sigrat$>1$. We examine this more closely in Section \ref{sec:disc}. We briefly discuss the impact of model uncertainties that are certain to alter the \sigrat measurements. The SN rates can vary by an average of 15$\%$ (maximum of 50$\%$), depending on the isochrone model used for constructing the SADs \citep{Williams2017}, but this has a relatively small impact on our result. For example, using the MIST SAD solutions (our default), about 82$\%$ of SAD pixels have \sigrat$>1$, whereas using the PARSEC SAD solutions results in 74$\%$ of SAD cells having \sigrat$>1$, and the correlations with density and SN rate remain. Assumptions about the scale heights of gas and SNe directly affect the midplane densities and SN rate densities, which have a larger effect on the \sigrat measurements. We therefore assess the impact of varying scale heights on the \sigrat values, as shown in Figure \ref{fig:checkz}. For smaller gas scale heights and larger core-collapse scale heights, \sigrat decreases. This is because smaller gas scale heights imply a higher volume density of ISM for a given column density (Eqs.\ \ref{eq:nh1}, \ref{eq:nh2}), which reduces the momentum deposition and turbulence driving based on Eq.\ \ref{eq:pfin_nh}. For larger SN scale heights, the SN rate per unit volume is smaller (Eq.\ \ref{eq:dotsn}), which likewise reduces the momentum deposition rate (Eq.\ \ref{eq:pfin_nh}).
Within the plausible range of scale heights discussed in Section \ref{sec:scaleheights}, marked as a box in Figure \ref{fig:checkz}, about $60\%$ of SAD pixels still have over-predicted velocity dispersions. The part of the parameter space in Figure \ref{fig:checkz} where the fraction of over-predicted cells is below $10-20\%$ involves $z_{cc}>z_{HI}$, which is unlikely given the close association of core-collapse SNe with gas-rich star-forming regions. \section{DISCUSSION} \label{sec:disc} \subsection{Insight Into Momentum feedback efficiency} Our analysis has shown that simple models of ISM turbulence driven by isolated non-overlapping SNRs are consistent with observations to within a factor of 2 for most of the star-forming/ISM environment of the M31 ring. Some of the discrepancy can be explained by variations in model parameters (e.g. the scale heights of stars and gas), as explained in Section \ref{sec:modelvsobsveldisp}, but in this discussion we particularly focus on cells with Log $(N_{tot}/\mathrm{cm}^{-2})<21.3$ and Log $(R_i/\mathrm{yr}^{-1})>-4.7$, since all the cells in this range have \sigrat$\gtrsim 1$ even after accounting for the plausible variations in SN rates and scale heights in Section \ref{sec:modelvsobsveldisp}. Regions with \sigrat$>1$ are interesting in the sense that there are multiple sources of stellar feedback, such as stellar winds, radiation pressure, photoionization and cosmic rays, yet here the hydrodynamical momentum from SN blast waves alone over-predicts the observed ISM turbulence. One reason behind these over-predicted cells could be that $\sigma_{obs}$ was underestimated in our maps, but this is unlikely. More recent, sensitive VLA-based \hi surveys \citep[e.g.][]{Koch2018, Koch2021} suggest that a clean separation of the thermal and non-thermal components of the 21 cm line is non-trivial.
It is likely that the assumption by \cite{Braun2009} of an isothermal \hi component along the line of sight results in some residual thermal contribution to the non-thermal velocity dispersion. Thus the non-thermal velocity dispersion in M31 that we use from \cite{Braun2009} may be an upper limit on the actual turbulent contribution. The regions with \sigrat$>1$, especially the cells with Log $(N_{tot}/\mathrm{cm}^{-2})<21.3$ and Log $(R_i/\mathrm{yr}^{-1})>-4.7$, may therefore indicate a drawback of the SN momentum-driven turbulence model, so we investigate these regions visually in Figure \ref{fig:overpredictedcells}. From here on, we couch the column density and SN rate cutoffs in terms of a volume density and a SN rate surface density, to be consistent with values used in simulations. The low column density cutoff corresponds to $n_h < 0.2$ cm$^{-3}$ for our fiducial scale heights, and the SN rate cutoff corresponds to a SN rate surface density $\Sigma_{SN}>2.1\times10^{-4}$ \snyrperA. The low-density cells are situated at the upper and lower edges of the star-forming ring, as shown in Figure \ref{fig:overpredictedcells}. Comparison with Figure \ref{fig:snfrac} shows that these regions also have a higher rate of SN Ia than CC, with an average SN Ia/CC ratio $\approx 1.67$ in these cells. M16 noted that at low densities SNRs have longer cooling timescales, and may come into pressure equilibrium with the ISM before cooling or depositing a significant amount of momentum \citep{MO77}. This missing physics in our model is likely the reason for the over-predicted $\sigma^{min}_p$. SAD cells with Log $(R_i/\mathrm{yr}^{-1})>-4.7$ are spatially correlated with the prominent star-forming region OB54, as mentioned in Section \ref{subsec:snrates}. 
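The SN-rate unit conversion quoted above can be checked directly. Assuming a SAD cell scale of $\approx$310 pc on a side (a hypothetical reading of the cell size quoted in the conclusions), a per-cell rate of Log $(R_i/\mathrm{yr}^{-1})=-4.7$ reproduces the quoted surface-density cutoff:

```python
CELL_SIDE_KPC = 0.31  # assumed: SAD cells ~310 pc on a side

def rate_to_surface_density(log_rate_per_cell_yr):
    """Convert a per-cell SN rate, Log(R_i / yr^-1), to SN yr^-1 kpc^-2."""
    return 10.0**log_rate_per_cell_yr / CELL_SIDE_KPC**2

sigma_sn = rate_to_surface_density(-4.7)  # ~2.1e-4 SN/yr/kpc^2, the quoted cutoff
```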
One possibility, raised by M15 and M16, is that in regions of intense star formation, overlapping shocks from close-proximity SNRs might cancel some of the outgoing momentum (parameterized by $f$ in Eq.\ \ref{eq:sigma}, which we had set to 1). For example, a reduction of more than a factor of 2 in $\sigma^{min}_p$ is achieved with $f<0.2$, which is consistent with, though slightly lower than, the $f=0.3-0.4$ assumed in the SN-driven turbulent ISM simulations of M16. Another possibility is that a non-negligible fraction of the cold ISM mass is driven out by clustered SNe driving a hot, over-pressurized outflow \citep{Sharma2014, Gentry2017}. This explanation is plausible given the detection of X-ray emission in this region by \cite{Kavanagh2020}, and was also invoked by previous energy-balance studies that similarly observed an excess of SN energy over the measured ISM turbulent energy in the central high star-forming regions of galaxies \citep{Tamburro2009, Stilp2013, Koch2018, utomo2019}. A remaining possibility is that a non-negligible ISM mass in these regions is sustained in warmer phases, invisible to 21 cm or CO-line maps, due to the cumulative heating by SNe and pre-SN processes like winds and photoionization. The results above indicate that fiducial models of momentum feedback from SNe used by most cosmological simulations, which generally assume non-overlapping, non-clustered SNRs, may require adjustment at low densities and at high SN rates due to the aforementioned non-linear effects of clustering and of SNR evolution at low densities. This can be quantified by a suppression factor $f(n_h,\, \Sigma_{SN})<1$ for $n_h \lesssim 0.2$ cm$^{-3}$ and $\Sigma_{SN}>2.1\times10^{-4}$ \snyrperA, although a more precise form of this relation will be explored in a subsequent paper where we account for the energy and momentum carried away by any high-velocity outflows or warm diffuse gas from these regions. 
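The numbers quoted above are consistent with the predicted dispersion scaling roughly as the square root of the momentum retention factor ($\sigma^{min}_p \propto \sqrt{f}$ gives a factor $1/\sqrt{0.2}\approx 2.2$ reduction for $f=0.2$). A minimal sketch of such a suppression factor follows; the step-function form is purely illustrative, not the relation deferred to the subsequent paper:

```python
import math

def suppression_factor(n_h_cm3, sigma_sn_kpc2_yr, f_clustered=0.2):
    """Illustrative (not the paper's) f(n_h, Sigma_SN): full momentum
    retention except in the low-density / high-SN-rate regime flagged
    in the text."""
    low_density = n_h_cm3 < 0.2          # cm^-3 cutoff from the text
    high_sn_rate = sigma_sn_kpc2_yr > 2.1e-4  # SN/yr/kpc^2 cutoff from the text
    return f_clustered if (low_density or high_sn_rate) else 1.0

def sigma_scaled(sigma_fiducial, f):
    """Predicted dispersion if it scales as sqrt(f), as the quoted
    factor-of-2 numbers suggest."""
    return sigma_fiducial * math.sqrt(f)
```

With `f = 0.2` the predicted dispersion drops by a factor $\approx 2.2$, matching the "more than a factor of 2" reduction quoted above.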
The result also highlights the role of Type Ia SN feedback in low-density regions of the ISM, where Type Ia rates can exceed core-collapse SN rates \citep{Li2020a, Li2020b}. The energetics of Type Ia SNe are particularly pronounced in the central few kpc of M31 (though not explored in this paper), where they are likely responsible for the bright X-ray halo emission and depleted metal abundances in the region \citep{Tang2009, Telford2018}. \subsection{Comparison with previous studies of ISM energy balance} As the molecular ISM in our data is only $\sim 14\%$ of the atomic ISM, our results primarily probe feedback in the atomic ISM, and it is therefore interesting to compare our work with previous studies of energy balance in the ISM traced by atomic hydrogen. \cite{Tamburro2009} showed that SN energy alone can drive turbulence in atomic gas within the optical radius of nearby galaxies, with an approximate coupling efficiency of $\sim 10\%$ \citep{Thornton1998, MacLow2004}. Similar results were obtained by \cite{Stilp2013} with globally-averaged \hi observations. More recently, \cite{Koch2018} and \cite{utomo2019} extended these techniques to M33, with the latter study allowing the coupling efficiency to vary with radius. A key difference between our work and previous ones is that we examine spatially-resolved \hi line profiles along different lines of sight, as opposed to globally or radially-stacked \hi profiles. This allows us to compare the observed ISM turbulence with the local properties of the star-forming and ISM environment. A few methodological differences between our work and previous studies are also worth mentioning. 
Similar to \cite{utomo2019}, we account for turbulence driven by the momentum-conserving phase of isolated SNRs, and constrain the efficiency of this momentum feedback in driving the observed turbulence, as opposed to previous studies that considered the efficiency with which the initial SN energy ($10^{51}$ erg) goes into ISM turbulence (the majority of which is radiated away without impacting the gas). The M15 and M16 models also assume that turbulence is driven at the radius where the SNR dissolves into the ISM (i.e. when $v_s \sim \sigma$), which strongly depends on the ISM density. This is different from the assumption of a constant driving scale (equal to the scale height) or decay timescale in \cite{Tamburro2009} and \cite{Koch2018}. These assumptions affect the predicted SN feedback. For example, \cite{utomo2019} showed that a spatially-varying decay timescale allowed SNe to drive turbulence in M33 out to 7 kpc instead of 4 kpc, by which point the star-formation rate and gas densities decrease by an order of magnitude compared to the central region. \cite{Bacchini2020} similarly showed that a variable decay timescale makes SNe efficient enough to drive turbulence throughout the THINGS galaxies, as opposed to just within the optical radius \citep{Tamburro2009}. Despite the differences in methodology, our work agrees with previous studies that SN energy driving is inefficient, particularly in regions characterized by high star-formation rates. The higher spatial resolution offered by a more nearby galaxy like M31 reveals that regions where our models disagree with observations also correlate with regions of clustered star formation, signifying the importance of taking clustering effects into account in SN feedback models. A direct comparison with previous studies is complicated given the differences in methodology, but our work highlights the importance of spatially-resolved observations of Local Group galaxies in the study of SN feedback. 
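The energy-balance bookkeeping used by these earlier studies can be sketched as follows. The routine balances the dissipation of the 3D turbulent kinetic energy against SN energy injection scaled by an efficiency $\epsilon$; the input numbers are hypothetical round values chosen only to show that ring-like conditions yield $\epsilon$ of order 10\%, not measurements from this paper.

```python
MSUN_G = 1.989e33   # g per solar mass
KMS_CMS = 1.0e5     # cm/s per km/s

def coupling_efficiency(m_gas_msun, sigma_1d_kms, sn_rate_yr, tau_decay_yr,
                        e_sn_erg=1e51):
    """Efficiency eps needed for eps * R_SN * E_SN to offset dissipation of
    the 3D turbulent kinetic energy (3/2) * M * sigma_1d^2 over tau_decay."""
    e_turb = 1.5 * m_gas_msun * MSUN_G * (sigma_1d_kms * KMS_CMS) ** 2  # erg
    injected = sn_rate_yr * e_sn_erg * tau_decay_yr                     # erg
    return e_turb / injected

# Hypothetical ring-like inputs: 1e9 Msun of gas at sigma ~ 8 km/s,
# ~1.7e-3 SN/yr, dissipation over ~10 Myr.
eps = coupling_efficiency(1e9, 8.0, 1.7e-3, 1e7)  # -> ~0.11, i.e. order 10%
```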
\section{Conclusion} In this paper, we have tested the paradigm of SN momentum-driven ISM turbulence developed by recent high-resolution vertical disk simulations. We compare model prescriptions with resolved observations of stellar populations and the ISM in the prominent 10-kpc star-forming ring of M31, where stellar feedback is expected to be the main source of turbulence. The spatially-resolved PHAT stellar photometry in the northern third of the disk provides detailed stellar-age distributions (SADs) in $\approx 310$ pc$^2$ cells, which we convolved with known forms of the SN delay-time distribution to predict the core-collapse and Type Ia rates across M31's ring. We used ISM densities of the neutral atomic gas (traced by 21 cm \hi line maps) and molecular gas (traced by $^{12}$CO(1-0) line maps) alongside the SN rates to predict the steady-state mass-weighted turbulent velocity dispersion, using the feedback prescriptions of \cite{Martizzi2015} and \cite{Martizzi2016}. We compared these model estimates against the scaled turbulent velocity dispersion obtained from \hi and CO maps of M31. We assumed all SNe explode in the galaxy midplane where the line-of-sight density is highest, effectively providing a lower limit on the predicted velocity dispersion. We summarize the key results of our work as follows: \begin{enumerate} \item We find an integrated rate of $\approx 1.7 \times 10^{-3}$ SN yr$^{-1}$ in the ring covered by PHAT, with a 61$\%$ contribution from core-collapse SNe. Regions with a dominant core-collapse contribution coincide with known star-forming regions as expected, while regions with a dominant Type Ia contribution fall near the edges of the ring. \item We found that the minimum predicted velocity dispersion exceeds the observed values in 84$\%$ of the ring covered by PHAT for our fiducial model, though generally within a factor of 2. 
Some of the discrepancy can be explained by varying the assumptions regarding SADs and ISM/SN scale heights within plausible limits, but for densities $\lesssim 0.2$ cm$^{-3}$ and SN rates $>2.1 \times 10^{-4}$ \snyrperA, the discrepancy appears to increase. \item SAD cells with SN rates $>2.1 \times 10^{-4}$ \snyrperA where the velocity dispersion is over-predicted are spatially correlated with a dense concentration of young clusters embedded in a bright thermal X-ray region. This supports the possibility of clustering of SNe in this regime, which is not captured in our momentum feedback model. Clustering of SNe can lower the momentum deposited per SN and the mass-weighted turbulence in the ISM as a result of converging shocks from adjacent explosions, mass-loaded outflows, or a higher mass fraction in warmer ISM phases due to the cumulative action of stellar winds and SNe. \item The low-density ($\lesssim 0.2$ cm$^{-3}$) regions where the velocity dispersion is over-predicted coincide with the edges of our ring region, where Type Ia SNe dominate the injection rate by nearly a factor of 2. However, given the overall low SN rate in these regions, it is likely that the discrepancy is due to isolated SNRs coming into pressure equilibrium with the ISM before a significant amount of cooling and momentum deposition takes place---another effect not included in our models. \end{enumerate} Our results provide observational support for including adjustments in fiducial subgrid models of momentum feedback to account for SNR evolution in clustered and in low-density environments. The work underscores the importance of resolved stellar photometry and cloud-scale atomic and molecular ISM observations for assessing feedback models and ISM turbulence. Newer, more sensitive observations at high spectral resolution, such as the VLA maps of \cite{Koch2021}, can provide a more detailed characterization of turbulence in atomic clouds in M31. 
Preliminary comparisons have shown that the second-moment line widths of the VLA maps are within a factor of 2 of the \cite{Braun2009} non-thermal values, although the former do not yet cover the full PHAT area or the M31 ring. We will expand our present analysis in future papers with data from the ongoing Local Group L-Band Survey\footnote{\url{https://www.lglbs.org/home}}, which will cover all of M31, as well as M33 and four Local Group dwarfs, providing \hi maps of unprecedented sensitivity at a wide range of spatial resolutions. This extension will allow us to test feedback models with spatially resolved observations across a wide range of star-forming conditions, and to empirically obtain corrections to the fiducial models to be included in cosmological simulations. \acknowledgements S.K.S is grateful to Robert Braun for sharing the WSRT 21 cm maps of \hi density and non-thermal velocity dispersion in M31. SKS and LC are grateful for support from NSF grants AST-1412549, AST-1412980 and AST-1907790. Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. We acknowledge support from the Packard Foundation. E.R-R is supported by the Heising-Simons Foundation and the Danish National Research Foundation (DNRF132). \software{numpy \citep{numpy}, scipy \citep{scipy}, matplotlib \citep{matplotlib}, astropy \citep{astropy1, astropy2}} \bibliography{Feedback_letter_revised}
Title: Revisiting stellar properties of star-forming galaxies with stellar and nebular spectral modelling
Abstract: Spectral synthesis is a powerful tool for interpreting the physical properties of galaxies by decomposing their spectral energy distributions into the main luminosity contributors (e.g. stellar populations or ionised gas). However, the impact nebular emission has on the inferred properties of star-forming (SF) galaxies has been largely overlooked over the years. The objective of this work is to estimate the relations between stellar properties of SF galaxies from SDSS DR7 by simultaneously fitting the stellar and nebular continua with FADO and comparing them to the results derived using STARLIGHT, a representative of purely stellar population synthesis codes. Differences between codes regarding average mass, mean age and mean metallicity values can go as high as $\sim$0.06 dex for the overall population of galaxies and $\sim$0.12 dex for SF galaxies (galaxies with EW(H$\alpha$)>3 \AA), with the most prominent difference between both codes in the light-weighted mean stellar age. A closer look into the average light- and mass-weighted star formation histories of intensively SF galaxies (EW(H$\alpha$)>75 \AA) suggests that STARLIGHT is underestimating the average light-weighted age of intensively SF galaxies by up to $\sim$0.17 dex and overestimating the light-weighted metallicity by up to $\sim$0.13 dex compared to FADO (or vice versa). The comparison between the average stellar properties of passive, SF and intensively SF galaxy samples also reveals that differences between codes increase with increasing EW(H$\alpha$) and decreasing total stellar mass. This work finds indirect evidence that a purely stellar population synthesis approach negatively impacts the inferred stellar properties of galaxies with relatively high star formation rates. In turn, this can bias interpretations of fundamental relations such as the mass-age or mass-metallicity.
https://export.arxiv.org/pdf/2208.14036
\title{Revisiting stellar properties of star-forming galaxies with stellar and nebular spectral modelling} \author{ Leandro S. M. Cardoso \inst{\ref{adress1}} \and Jean Michel Gomes \inst{\ref{adress1}} \and Polychronis Papaderos\inst{\ref{adress1},\ref{adress2},\ref{adress3}} \and Ciro Pappalardo\inst{\ref{adress2},\ref{adress3}} \and Henrique Miranda\inst{\ref{adress2},\ref{adress3}} \and Ana Paulino-Afonso\inst{\ref{adress1}} \and José Afonso\inst{\ref{adress2},\ref{adress3}} \and Patricio Lagos\inst{\ref{adress1}} } \institute{ Instituto de Astrof\' isica e Ci\^ encias do Espa\c co, Universidade do Porto, CAUP, Rua das Estrelas, PT4150-762 Porto, Portugal\label{adress1} \and Instituto de Astrof\' isica e Ci\^ encias do Espa\c co, Universidade de Lisboa, OAL, Tapada da Ajuda, PT1349-018 Lisboa, Portugal\label{adress2} \and Departamento de Física, Faculdade de Ciências da Universidade de Lisboa, Edifício C8, Campo Grande, PT1749-016 Lisboa, Portugal\label{adress3} } \date{Received ?? / Accepted ??} \abstract {Spectral synthesis is a powerful tool for interpreting the physical properties of galaxies by decomposing their spectral energy distributions (SEDs) into the main luminosity contributors (e.g. stellar populations of distinct age and metallicity or ionised gas). However, the impact nebular emission has on the inferred properties of star-forming (SF) galaxies has been largely overlooked over the years, with unknown ramifications to the current understanding of galaxy evolution.} { The objective of this work is to estimate the relations between stellar properties (e.g. total mass, mean age, and mean metallicity) of SF galaxies by simultaneously fitting the stellar and nebular continua and comparing them to the results derived through the more common purely stellar spectral synthesis approach. 
} { The main galaxy sample from SDSS DR7 was analysed with two distinct population synthesis codes: \Fado, which estimates self-consistently both the stellar and nebular contributions to the SED, and the original version of \SL, as representative of purely stellar population synthesis codes. } { Differences between codes regarding average mass, mean age and mean metallicity values can go as high as $\sim$0.06 dex for the overall population of galaxies and $\sim$0.12 dex for SF galaxies (galaxies with EW(H$\alpha$)>3 \AA), with the most prominent difference between both codes in the two populations being in the light-weighted mean stellar age. \Fado\ presents a broader range of mean stellar ages and metallicities for SF galaxies than \SL, with the latter code preferring metallicity solutions around the solar value ($Z_{\odot} = 0.02$). A closer look into the average light- and mass-weighted star formation histories of intensively SF galaxies (EW(H$\alpha$)>75 \AA) reveals that the light contributions of simple stellar populations (SSPs) younger than $10^7$ ($10^9$) years in \SL\ are higher by $\sim$5.41\% (9.11\%) compared to \Fado. Moreover, \Fado\ presents higher light contributions from SSPs with metallicity $\leq Z_\odot / 200$ ($Z_\odot / 50$) of around 8.05\% (13.51\%) when compared with \SL. This suggests that \SL\ is underestimating the average light-weighted age of intensively SF galaxies by up to $\sim$0.17 dex and overestimating the light-weighted metallicity by up to $\sim$0.13 dex compared to \Fado\ (or vice versa). The comparison between the average stellar properties of passive, SF and intensively SF galaxy samples also reveals that differences between codes increase with increasing EW(H$\alpha$) and decreasing total stellar mass. 
Moreover, comparing SF results from \Fado\ in a purely stellar mode with the previous results qualitatively suggests that differences between codes are primarily due to mathematical and statistical differences and secondarily due to the impact of the nebular continuum modelling approach (or lack thereof). However, it is challenging to adequately quantify the relative role of each factor since they are likely interconnected. } { This work finds indirect evidence that a purely stellar population synthesis approach negatively impacts the inferred stellar properties (e.g. mean age and mean metallicity) of galaxies with relatively high star formation rates (e.g. dwarf spirals, `green peas', and starburst galaxies). In turn, this can bias interpretations of fundamental relations such as the mass-age or mass-metallicity, which are factors worth bearing in mind in light of future high-resolution spectroscopic surveys at higher redshifts (e.g. MOONS and 4MOST-4HS). } \keywords{galaxies: evolution - galaxies: starburst - galaxies: ISM - galaxies: fundamental parameters - galaxies: stellar content - methods: numerical} \section{Introduction}\label{Section_-_Introduction} Understanding the complex physical processes behind galaxy formation and evolution can be a daunting endeavour, especially with the increasing technical difficulties when looking further back in time. However, several keystone results have been obtained over the past decades: (a) high-mass galaxies assemble most of their stellar content early in their lives, whereas low-mass galaxies display relatively high specific star-formation rates (sSFRs) at a late cosmic epoch (e.g. \citealt{Brinchmann_etal_2004, Noeske_etal_2007a}), a phenomenon known as galaxy downsizing (e.g. \citealt{Cowie_etal_1996, Heavens_etal_2004, Cimatti_Daddi_Renzini_2006}), (b) there is a relatively tight correlation between the stellar mass and gas-phase metallicity (e.g. 
\citealt{Lequeux_etal_1979, Tremonti_etal_2004}), (c) galaxies display a bimodal distribution based on their colour, in which they can either be large, concentrated, red and quiescent \emph{or} small, less concentrated, blue and presenting signs of recent star formation (e.g. \citealt{Gladders_Yee_2000, Strateva_etal_2001, Blanton_etal_2003, Kauffmann_etal_2003b, Baldry_etal_2004, Mateus_etal_2006}), and (d) the seemingly linear correlation between the stellar mass and SFR in star-forming (SF) galaxies, an observable that is usually referred to as the SF main sequence of galaxies (e.g. \citealt{Brinchmann_etal_2004, Noeske_etal_2007a}). Notwithstanding these insights, the overall picture of galaxy formation and evolution is far from complete. For instance, a persistent issue in interpreting the physical characteristics of SF galaxies from spectral synthesis has been the relative contribution of nebular emission and its potential impact on the estimated stellar and overall galaxy properties. Indeed, several studies have shown that nebular emission (continuum and lines) can account for up to $\sim$60\% of the monochromatic luminosity at $\sim$5000 \AA\ in AGN and starburst galaxies (e.g. \citealt{Krueger_etal_1995, Papaderos_etal_1998, Zackrisson_etal_2001, Zackrisson_Bergvall_Leitet_2008, Schaerer_deBarros_2009, Papaderos_Ostlin_2012}) and in optical broadband photometry, especially at low metallicities (\citealt{Anders_FritzeAlvensleben_2003}). For instance, \cite{Krueger_etal_1995} noted that strong star-formation activity can lead the nebular continuum to contribute $\sim$30--70\% to the total optical and near-infrared emission, whereas emission lines alone account for up to $\sim$45\% in optical broadband photometry. 
From a different perspective, \cite{Reines_etal_2010} carried out spectral modelling of two young massive star clusters in NGC 4449 and showed that both the nebular continuum and line emission have a major impact on the estimated magnitudes and colours of young clusters ($\lesssim$ 5 Myr), thus also affecting their age, mass and extinction estimates. Moreover, \cite{Izotov_Guseva_Thuan_2011} studied a sample of 803 SF luminous compact galaxies (i.e. `green peas') from the SDSS (\citealt{York_etal_2000}) and warned that a purely stellar spectral modelling of such objects can lead to the overestimation of the relative contribution of old stellar populations by as much as a factor of four. As the authors noted, there are two reasons for this overestimation: (a) the nebular continuum increases the galaxy luminosity and (b) the nebular continuum is flatter than the stellar one, thus making the overall continuum redder than it would be if it were purely stellar. \cite{Papaderos_Ostlin_2012} also showed that nebular emission can introduce strong photometric biases of 0.4-1 mag in galaxies with high specific SFRs, whereas \cite{Pacifici_etal_2015} found that SFRs can be overestimated by up to $\sim$0.12 dex if nebular emission is neglected in spectral modelling. The impact of these biases on the stellar mass and gas-phase metallicity relation (e.g. \citealt{Tremonti_etal_2004}), the SF main sequence (e.g. \citealt{Brinchmann_etal_2004, Noeske_etal_2007a}) or other scaling relations involving stellar properties is still largely unexplored. In fact, it is important to note that most spectral modelling of large-scale surveys using population synthesis has been carried out assuming a purely stellar modelling approach, regardless of whether it is applied to passive, SF or even active galaxies (e.g. 
\citealt{Kauffmann_etal_2003a, Panter_Heavens_Jimenez_2003, CidFernandes_etal_2005, Asari_etal_2007, Tojeiro_etal_2009, Zhong_etal_2010, TorresPapaqui_etal_2012, Perez_etal_2013, SanchezBlazquez_etal_2014, LopezFernandez_etal_2016, Rosa_etal_2018, Kuzmicz_Czerny_Wildy_2019, Cai_etal_2020, Cai_Zhao_Bai_2021}). Although nebular continuum and line emission have been adopted in several evolutionary synthesis models (e.g. \citealt{Leitherer_etal_1999, Schaerer_2002, Molla_GarciaVargas_Bressan_2009, MartinManjon_etal_2010}), no consistent nebular prescription had been implemented in inversion population synthesis codes until recently. Indeed, \cite{Gomes_Papaderos_2017, Gomes_Papaderos_2018} presented in \Fado\ the first population synthesis code to fit self-consistently both the stellar and nebular spectral components while adopting a genetic optimisation framework. Tests revealed that this code estimates the main stellar population properties of SF galaxies (e.g. mass, mean age, and mean metallicity) within an accuracy of $\sim$0.2 dex (\citealt{Gomes_Papaderos_2017, Gomes_Papaderos_2018, Cardoso_Gomes_Papaderos_2019, Pappalardo_etal_2021}). For comparison, the typical level of deviations between input and inferred stellar properties when applying purely stellar population synthesis codes to evolved stellar populations with faint or absent nebular emission is $\sim$0.15-0.2 dex (e.g. \citealt{CidFernandes_etal_2005, CidFernandes_etal_2014, Ocvirk_etal_2006a, Ocvirk_etal_2006b, Tojeiro_etal_2007, Tojeiro_etal_2009, Koleva_etal_2009}). Moreover, \cite{Cardoso_Gomes_Papaderos_2019} (hereafter CGP19) compared \Fado\ with a purely stellar population synthesis code using synthetic galaxy models for different star formation histories (SFHs; e.g. instantaneous burst, continuous, and exponentially declining) and different fitting configurations (e.g. 
with or without emission lines and including or excluding the Balmer and Paschen continuum discontinuities at 3646 and 8207 \AA, respectively). This work showed that applying the public version of the purely stellar population synthesis code \SL\footnote{The same version adopted in this work. Not to be confused with the one introduced by \cite{LopezFernandez_etal_2016}, which combines UV photometry with spectral fitting in the optical.} (\citealt{CidFernandes_etal_2005}) to spectra with a relatively high nebular continuum can lead to the overestimation of the total stellar mass by as much as $\sim$2 dex and of the mass-weighted mean stellar age by up to $\sim$4 dex, whereas the mean metallicity and light-weighted mean stellar age can both be underestimated by up to $\sim$0.6 dex. Moreover, it was found that these stellar properties can still be recovered with \Fado\ within $\sim$0.2 dex in evolutionary stages with severe nebular contamination. \cite{Pappalardo_etal_2021} (hereafter P21) continued this line of inquiry by adding the non-parametric purely stellar code \Steckmap\ (\citealt{Ocvirk_etal_2006a, Ocvirk_etal_2006b}) to the code comparison while also exploring the impact of varying spectral quality on the derived physical properties, finding similar results and trends. It is important to note that (a) these tests were carried out using the same evolutionary models (\citealt{Bruzual_Charlot_2003}) both in the creation of the synthetic spectra and in the spectral modelling and (b) the input synthetic composite stellar populations were built with a constant solar metallicity ($Z_\odot=0.02$). Thus, these uncertainties should be viewed as upper limits and might not necessarily translate directly into biases affecting observations. 
Also recently, \cite{Gunawardhana_etal_2020} studied the stellar and nebular characteristics of massive stellar populations in the Antennae galaxy using an updated version of \textsc{Platefit} (\citealt{Tremonti_etal_2004, Brinchmann_etal_2004}) capable of self-consistent modelling of the stellar and nebular continua. Spectral fitting provides estimates of the stellar and gas metallicities, stellar ages, and electron temperature $T_e$ and density $n_e$ by taking as reference model libraries of HII regions built using the evolutionary synthesis code \textsc{Starburst99} (\citealt{Leitherer_etal_1999}) and the photoionisation code \textsc{Cloudy} (\citealt{Ferland_etal_1998,Ferland_etal_2013}). This work found the stellar and gas metallicities of the starbursts to be near solar, and the star-forming gas in the loop of NGC 4038 to be slightly more metal-rich than the rest of the galaxy. Using a different approach, \cite{LopezFernandez_etal_2016} presented a new version of the population synthesis code \SL\ (\citealt{CidFernandes_etal_2005}) that combines optical spectroscopy with UV photometry. This work used a mixture of simulated and real CALIFA data (\citealt{Sanchez_etal_2012}) and found that the additional UV constraints have a low impact on the inferred stellar mass and dust optical depth. Although the mean age and metallicity of most galaxies remain unaffected by the additional UV spectral information, this work also showed that stellar populations of low-mass late-type galaxies are older and less chemically enriched than in purely-optical modelling. \cite{Werle_etal_2019} pursued this line of inquiry further by combining GALEX photometry with SDSS spectroscopy and found that the UV constraints lead to an increase in simple stellar populations (SSPs) with ages between $\sim$10$^7$ and 10$^8$ years to the detriment of the relative contribution of younger and older populations, leading to slightly older mean stellar ages when weighted by mass. 
This redistribution of the SFH is particularly noticeable in galaxies at the low-mass end of the blue cloud. Later, \cite{Werle_etal_2020} adopted a similar approach to study early-type galaxies in the same sample and found that the UV constraints broaden the attenuation, mean stellar age, and metallicity distributions. Moreover, galaxies with young stellar populations have larger H$\alpha$ equivalent widths (EWs) and larger attenuations, with the metallicity of these populations being increasingly lower for larger stellar masses. Although these three works successfully use UV spectral information to constrain the contribution of young stellar populations, one wonders whether the nebular continuum in galaxies with relatively high specific SFRs is still being fitted by intermediate-to-old stellar populations in purely stellar spectral synthesis, in which case the lack of a nebular continuum treatment would still affect the inferred stellar properties. Taking all these factors into consideration, it is possible that the current understanding of galaxy evolution, specifically that of SF galaxies in the local universe based on large-scale survey analysis, has been affected by the lack of an adequate nebular modelling prescription in previous spectral synthesis codes. To address this subject, this work aims to revisit the relations between key stellar properties (e.g. mass, mean age, and mean metallicity) of SF galaxies by comparing the results obtained with \Fado\ and a representative of purely stellar population synthesis codes (i.e. \SL) when applied to a well-studied large-scale survey such as SDSS (\citealt{York_etal_2000}). This paper is organised as follows. Section \ref{Section_-_Methodology} details the methodology adopted to extract and analyse the SDSS DR7 spectra using spectral synthesis. 
The main results of this work are presented in Section \ref{Section_-_Results}, which is particularly focussed on the relations between the main stellar properties of SF galaxies when adopting two different spectral modelling approaches. Finally, in Sections \ref{Section_-_Discussion} and \ref{Section_-_Conclusions} the findings of this work are discussed and summarised, respectively. Unless stated otherwise, this work assumes $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.3$ and $\Omega_\Lambda=0.7$. \section{Methodology}\label{Section_-_Methodology} The population synthesis codes \Fado\ and \SL\ were applied to the galaxy sample from SDSS Data Release 7 (\citealt{Abazajian_etal_2009}). This data set contains multi-band spectrophotometric data for 926246 objects, each with a single-fibre integrated spectrum covering the wavelength range 3800-9200 \AA\ at a resolution of $R\sim 1800$--2200, with the fibre covering $\sim$5.5 kpc of the central region of an object at $z \! \sim \! 0.1$. There are several reasons to utilise this dataset: (a) photometric completeness, (b) relatively wide redshift coverage ($0.02 \! \lesssim \! z \! \lesssim \! 0.6$), (c) uniform spectral calibration, and (d) the wide range of topics related to galaxy formation and evolution that have been tackled with these data (e.g. mass-metallicity relation, \citealt{Tremonti_etal_2004}; stellar mass assembly, \citealt{Kauffmann_etal_2003a}; galaxy bimodality, \citealt{Strateva_etal_2001, Blanton_etal_2003, Kauffmann_etal_2003b, Baldry_etal_2004}; host-AGN relation, \citealt{Kauffmann_etal_2003c, Hao_etal_2005}; environmental effects on the colour-magnitude relation, \citealt{Hogg_etal_2004, Cooper_etal_2010}). 
The raw spectra were corrected for extinction using the colour excess correction factor $\mathrm{E}(B-V)$ computed based on the \cite{Schlegel_Finkbeiner_Davis_1998} dust maps at the object's sky position (using the right ascension and declination provided by the survey), combined with the $\mathrm{E}(B-V)$ recalibration factor of \cite{Schlafly_Finkbeiner_2011}. The full correction equation reads, \begin{equation} F_{\lambda}^{corrected} = F_{\lambda}^{raw} \cdot 10^{0.4 \, \cdot \, \mathrm{E}(B-V) \cdot \left( \frac{A_\lambda}{A_V} \right) \cdot R_V}, \end{equation} \noindent where $F_{\lambda}^{corrected}$ and $F_{\lambda}^{raw}$ are the corrected and raw fluxes as a function of wavelength $\lambda$, respectively, and $A_V$ is the extinction in the \emph{V}-band. The adopted extinction curve (also known as the reddening law) was that of \cite{Cardelli_Clayton_Mathis_1989}, with $R_V = 3.1$. The spectra were then converted to the rest frame and rebinned so that $\Delta\lambda = 1$ \AA. As detailed in \cite{Gomes_Papaderos_2017}, the main task of the population synthesis code \Fado\ is to reproduce the observed spectral energy distribution (SED) through a linear combination of spectral components (e.g.
individual stellar spectra or SSPs) as expressed by: \begin{multline}\label{Equation_-_FADO} F_\lambda = \sum_{i=1}^{N_\star} M_{i,\lambda_0} \cdot L_{i,\lambda} \cdot 10^{-0.4 \cdot A_V \cdot q_\lambda} \otimes S( v_\star, \sigma_\star ) \\ + \Gamma_\lambda(n_e,T_e) \cdot 10^{-0.4 \cdot A_V^{neb} \cdot q_\lambda} \otimes N(v_\eta,\sigma_\eta) , \end{multline} \noindent where $F_\lambda$ is the flux of the observed spectrum, $N_\star$ is the number of unique spectral components in the adopted base library, $M_{i,\lambda_0}$ is the stellar mass of the $i^{\mathrm{th}}$ spectral component at the normalisation wavelength $\lambda_0$, $L_{i,\lambda}$ is the luminosity contribution of the $i^{\mathrm{th}}$ spectral component, $A_V$ is the \emph{V}-band extinction, $q_\lambda$ is the ratio of $A_\lambda$ over $A_V$, $S( v_\star, \sigma_\star )$ denotes a Gaussian kernel simulating the effect of stellar kinematics on the spectrum, with $v_\star$ and $\sigma_\star$ representing the stellar shift and dispersion velocities, respectively, $\Gamma_\lambda(n_e,T_e)$ is the nebular continuum computed assuming that all stellar photons with $\lambda \leq 911.76$ \AA\ are absorbed and reprocessed into nebular emission, under the supposition that case B recombination applies, $A_V^{neb}$ is the nebular \emph{V}-band extinction, and $N(v_\eta,\sigma_\eta)$ denotes the nebular kinematics kernel, with $v_\eta$ and $\sigma_\eta$ representing the nebular shift and dispersion velocities, respectively. It is important to note that all publicly available population synthesis codes before \Fado\ aimed to reconstruct the observed continuum using only the first term on the right-hand side of Equation \ref{Equation_-_FADO}, meaning, using a purely stellar scheme to reconstruct the overall observed SED (more details in \citealt{Gomes_Papaderos_2017, Gomes_Papaderos_2018}).
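Schematically, and ignoring the kinematic convolutions, Equation \ref{Equation_-_FADO} is a dust-attenuated linear combination of SSP spectra plus an attenuated nebular continuum. A minimal sketch of such an evaluation (illustrative arrays only, not \Fado\ internals) could read:

```python
import numpy as np

def model_flux(L_ssp, weights, a_v, q_lambda, gamma_neb, a_v_neb):
    """Toy version of a FADO-like model: a dust-attenuated linear
    combination of SSP spectra plus an attenuated nebular continuum.
    The kinematic kernels S(v, sigma) and N(v, sigma) are omitted.

    L_ssp     : (N_ssp, N_lambda) SSP spectra
    weights   : (N_ssp,) mass coefficients M_i at lambda_0
    q_lambda  : (N_lambda,) ratio A_lambda / A_V
    gamma_neb : (N_lambda,) nebular continuum Gamma_lambda(n_e, T_e)
    """
    att_star = 10.0**(-0.4 * a_v * q_lambda)          # stellar attenuation
    att_neb = 10.0**(-0.4 * a_v_neb * q_lambda)       # nebular attenuation
    stellar = (weights[:, None] * L_ssp).sum(axis=0)  # sum over SSPs
    return stellar * att_star + gamma_neb * att_neb
```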
Indeed, one of the innovative features of \Fado\ is the inclusion of the second term, which represents the relative spectral contribution of the nebular continuum inferred from the Lyman continuum production rate of the young stellar component and scaling with the intensity of prominent Balmer lines. This makes \Fado\ a tool specially designed to self-consistently model the stellar and nebular continuum components of SF and starburst galaxies. Subsequently, spectral fitting was carried out with both \Fado\footnote{Version 1b: \url{http://www.spectralsynthesis.org}} and \SL\footnote{Version 4: \url{http://www.starlight.ufsc.br}} between 3400 and 8900 \AA\ using a base library with 150 SSPs from \cite{Bruzual_Charlot_2003} with a \cite{Chabrier_2003} IMF and Padova 1994 evolutionary tracks (\citealt{Alongi_etal_1993, Bressan_etal_1993, Fagotto_etal_1994a, Fagotto_etal_1994b, Girardi_etal_1996}). This stellar library contains 25 ages (between 1 Myr and 15 Gyr) and six metallicities ($Z = 0.0001; 0.0004; 0.004; 0.008; 0.02; 0.05 = Z_\odot \times \{1/200; 1/50; 1/5; 2/5; 1; 2.5\}$) and corresponds to an expanded version of the base adopted in CGP19, with the addition of $Z_\odot / 200$ and $Z_\odot / 50$ at the lower end of the metallicity range. The spectra were modelled assuming a \cite{Calzetti_etal_2000} extinction law, which was originally constructed taking into consideration integrated observations of SF galaxies. Moreover, the \emph{V}-band extinction, stellar velocity shift and dispersion free parameters were allowed to vary in both codes within the ranges of $A_V = 0$--4 mag, $v_\star = -500$--500 km/s and $\sigma_\star = 0$--500 km/s, respectively. Identical parameter ranges for the velocity shift and dispersion were adopted for the nebular component in the \Fado\ runs. Furthermore, two complete runs of the whole sample were performed with \SL\ while changing the initial guesses in the parameter space in order to evaluate the formal errors (e.g.
\citealt{Ribeiro_etal_2016, Cardoso_Gomes_Papaderos_2016, Cardoso_Gomes_Papaderos_2017, Cardoso_Gomes_Papaderos_2019}). Moreover, a pre-fitting analysis was carried out with \SL\ using a smaller base library with 45 SSPs for 15 ages (between 1 Myr and 13 Gyr) and three metallicities ($Z = \{ 0.004; 0.02; 0.05 \} = Z_\odot \times \{ 1/5; 1; 2.5 \}$) that comes with the distribution package of \SL\ (cf. \citealt{CidFernandes_etal_2005}). The results of this preliminary step were used to create individual spectral masks in order to exclude the most prominent emission lines in the main fitting. This was achieved by subtracting the continuum of the best-fit model from the observation and, from the resulting residual spectrum, by fitting Gaussians to the lines. The masking of these spectral regions ensures that the \SL\ results are more robust when it comes to SF galaxies, a procedure similar to that adopted by \cite{Asari_etal_2007} and \cite{Ribeiro_etal_2016}. At the same time, \Fado\ performs the masking of the most prominent emission lines as part of one of its built-in pre-fitting routines, as detailed in \cite{Gomes_Papaderos_2017}. Three additional spectral regions, associated with small flaws in the adopted evolutionary models, are masked in both codes: 6845--6945, 7165--7210 and 7550--7725 \AA\ (cf. \citealt{Bruzual_Charlot_2003} for more details). In addition to masking, spectral modelling was carried out while taking into consideration several spectral flags provided by the SDSS survey that are included in the mask array of each `\emph{fit}' file. These flags mark spectral regions with a wide range of potential problems (e.g. no observation, poor calibration, bad pixel, or sky lines) and, therefore, must be excluded from spectral synthesis.
The following individual flags provided by the survey were adopted in this work: `Bad pixel within 3 pixels of trace', `Pixel fully rejected in extraction', `Sky level > flux + 10*(flux error)', and `Emission line detected here' (with the latter being adopted only for \SL, for reasons previously mentioned). Choosing which individual flags to include is an exercise in compromise, since every pixel matters in a pixel-by-pixel code. On the one hand, masking spectral regions affected by bad pixels or sky noise leads to more accurate estimates of the physical properties inferred through spectral synthesis. On the other hand, using all spectral flags computed by an automatic pipeline in any large-scale survey can lead to severe over-flagging and, thus, the removal of important spectral features. The chosen flags reflect this balancing act. Figure \ref{Fig_-_text_fit_example} shows an example of spectral fits from both codes for the object `52912-1424-151', a particularly noisy spectrum with $\mathrm{S/N}({\lambda_0}) = 4.4$. Black, red, and blue lines on the main panel represent the input, \SL\ and \Fado\ best-fit spectra, respectively. The right-hand side panels illustrate the best-fit SFHs based on the luminosity contribution $L_{4020}$ of the selected SSPs at the normalisation wavelength $\lambda_0 = 4020$ \AA\ (top panels) and on their corresponding mass contributions (bottom panels). Although the mass distributions obtained with both codes differ very little, with most mass coming from SSPs older than 1 Gyr, the \SL\ best-fit solution includes more SSPs younger than 10 Myr than that of \Fado\ when it comes to the light distribution. Given the relatively noisy nature of the observation, such variations are not necessarily indicative of a strong divergence between the codes.
More reliable trends are found when grouping galaxies in populations with similar spectral characteristics, as explored in Sections \ref{Section_-_Results} and \ref{Section_-_Discussion}. \section{Results}\label{Section_-_Results} The following analysis is a comparison between the results from \Fado\ and \SL, particularly regarding the stellar properties of SF galaxies (e.g. mass, mean age, and mean metallicity). Although these results could be compared to the vast number of previous works that analysed the SDSS (e.g. \citealt{Brinchmann_etal_2004, CidFernandes_etal_2005, Tojeiro_etal_2009}), this endeavour is outside the scope of this work for several reasons. For one, this work is a continuation of CGP19 and P21, following similar methodologies and focussing specifically on how (not) modelling the nebular continuum in SF galaxies can impact their inferred physical properties. Moreover, it is notably difficult to ascertain the specific details regarding how previous works dealt with spectral extraction and modelling (e.g. dust treatment, spectral masking, and evolutionary ingredients). This obstacle makes direct comparisons difficult and potentially misleading, since no clear route is available to quantify the assumptions behind the different methodologies. Last but not least, there is also the issue of aperture effects and corrections. Several works have issued cautionary remarks when attempting to apply photometric-based aperture corrections to physical properties inferred through spectral synthesis (e.g. \citealt{Richards_etal_2016, Gomes_etal_2016a, Gomes_etal_2016b, Green_etal_2017}), whilst others claim that aperture-free properties such as SFR can still be inferred (e.g. \citealt{DuartePuertas_etal_2017}). Since the main objective is to compare results between two different modelling approaches, no aperture corrections are necessary.
Therefore, the following discussion focuses only on the galaxy surface area covered by the spectroscopic fibre. \subsection{Sample selection}\label{SubSection_-_Sample_Selection} The analysis of the results is based on four main samples of galaxies. The {Main Sample (MS)} is defined by a cut in apparent magnitude in the $r-$band of $14.5 \leq m_r \leq 17.77$ and in redshift of $0.04 \leq z \leq 0.4$. The first criterion takes into consideration the photometric completeness of the survey, whereas the second aims to remove low-redshift interlopers while also assuring that the most prominent optical emission lines (e.g. H$\beta\lambda$4861; [OIII]$\lambda$5007; [OI]$\lambda$6300; H$\alpha\lambda$6563; [NII]$\lambda$6583; [SII]$\lambda\lambda$6717,6731) fall within the observed wavelength range. Duplicates were removed by selecting objects classified as galaxies (`$\mathtt{specClass} = 2$') from the `$\mathtt{SpecObj}$' table created by the survey, which lists fibres categorised by `$\mathtt{SciencePrimary}$' (i.e. primary observation of the object). These criteria lead to the selection of 613592 objects ($\sim$66\% of the DR7 galaxy sample). The {Star-Forming Sample (SFS)} is defined by the criteria of the MS in combination with: $\mathrm{EW(H\alpha)} \geq 3$ \AA , signal-to-noise at the normalisation wavelength of $\mathrm{S/N}(\lambda_{0}) \geq 3$, and the \cite{Kauffmann_etal_2003c} demarcation line. Similar to the criteria adopted by \cite{Asari_etal_2007}, these conditions select emission-line galaxies with relatively good spectral quality located in the `SF' locus in the [NII]$\lambda$6583/H$\alpha$ and [OIII]$\lambda$5007/H$\beta$ emission-line diagnostic diagram (\citealt{Baldwin_Phillips_Terlevich_1981}). This selects 195479 objects ($\sim$31.9\% of the MS). Moreover, the {Intensively Star-Forming Sample (ISFS)} is defined by the criteria of the SFS with a cut of $\mathrm{EW(H\alpha)} \geq 75$ \AA.
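In terms of implementation, the cuts defining these samples (together with the Passive Sample described next) amount to simple boolean masks over per-object catalogue arrays. A minimal sketch, with illustrative array names and a `below_kauffmann` placeholder standing in for the \cite{Kauffmann_etal_2003c} demarcation:

```python
import numpy as np

def build_samples(m_r, z, ew_ha, sn_4020, below_kauffmann):
    """Boolean masks for the Main, Star-Forming, Intensively SF and
    Passive samples, following the cuts listed in the text.
    All inputs are per-object NumPy arrays (names are illustrative)."""
    ms = (m_r >= 14.5) & (m_r <= 17.77) & (z >= 0.04) & (z <= 0.4)
    sfs = ms & (ew_ha >= 3.0) & (sn_4020 >= 3.0) & below_kauffmann
    isfs = sfs & (ew_ha >= 75.0)                 # intensively star-forming
    ps = ms & (ew_ha <= 0.5) & (sn_4020 >= 3.0)  # passive
    return ms, sfs, isfs, ps
```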
This isolates galaxies in which the nebular continuum contribution is particularly high. This selects 8051 objects ($\sim$1.3\% of MS). Finally, the {Passive Sample (PS)} is defined by the criteria of MS in combination with: $\mathrm{EW(H\alpha)} \leq 0.5$ \AA\ and $\mathrm{S/N}(\lambda_{0}) \geq 3$. Similarly, \cite{Mateus_etal_2006} defined `passive' or `lineless' galaxies based on \EWha\ and $\mathrm{EW(H\beta)} \leq 1$ \AA, whereas \cite{CidFernandes_etal_2011} defined `passive' as galaxies with \EWha\ and $\mathrm{EW(NII)} \leq 0.5$ \AA. This selects 103510 objects ($\sim$16.9\% of MS). Figure \ref{Fig_-_SN_F_EW} shows, for the different samples, the distributions of the signal-to-noise at the normalisation wavelength S/N(4020 \AA), the flux of the H$\alpha$ emission line F(H$\alpha$) and the H$\alpha$ equivalent width \EWha. Moreover, Figure \ref{Fig_-_BPT_selection} displays the location of the four samples in the \cite{Baldwin_Phillips_Terlevich_1981} diagnostic diagram, whereas Figures \ref{Fig_-_MS_-_fit_example_1}--\ref{Fig_-_PS_-_fit_example_2} show \Fado\ and \SL\ fit results for two randomly selected galaxies from each sample in the same format adopted in Figure \ref{Fig_-_text_fit_example}. \subsection{Total stellar mass, mean age and mean metallicity distributions}\label{SubSection_-_MtZ_Distributions} Figure \ref{Fig_-_MtZ_Histograms_-_MS_SFS} shows on the left-hand side panels the distributions of the total stellar mass ever-formed $M_\star^{ever}$ (top) and presently available $M_\star^{present}$ (bottom) (i.e. after correcting for mass loss through regular stellar evolution) for the MS and SFS galaxy populations. Results show that both codes yield very similar mass distributions. Indeed, the difference between the average total stellar mass from \SL\ and \Fado\ is $\sim$0.02 and 0.01 dex for both masses when it comes to the MS and SFS, respectively.
Figure \ref{Fig_-_MtZ_Histograms_-_PS_ISFS} compares the ISFS and PS distributions and shows similar trends, with the difference between the average total stellar mass from \SL\ and \Fado\ being $\sim$0.04 and 0.03 dex, respectively. The age and metallicity distributions of the stellar populations contributing to the best-fit solution can be summarised in terms of their mean values, which can be interpreted as a first-order moment of the overall stellar age and metallicity of a given galaxy. Following \cite{CidFernandes_etal_2005}, the mean logarithmic stellar age weighted by light or mass can be defined as, respectively, \begin{equation} \langle \log t_\star \rangle _L = \sum_{i=1}^{N_\star} \gamma_i \cdot \log t_i , \end{equation} \begin{equation} \langle \log t_\star \rangle _M = \sum_{i=1}^{N_\star} \mu_i \cdot \log t_i , \end{equation} \noindent where $t_i$ is the age of the $i^{\mathrm{th}}$ stellar element (i.e. SSP, in this case) in the base library, and $\gamma_i$ and $\mu_i$ are its light and mass relative contributions, respectively. The term $\mu_i$ refers to the mass fraction that takes into consideration the amount of stellar matter returned to the interstellar medium through stellar evolution, thus being related to $M_{\star}^{present}$. Analogously, assuming that $Z_i$ represents the relative metallicity contribution of the $i^{\mathrm{th}}$ SSP to the best-fit solution, then the mean logarithmic stellar metallicity weighted by light and mass can be defined as, respectively, \begin{equation} \log \langle Z_\star \rangle _L = \log \sum_{i=1}^{N_\star} \gamma_i \cdot Z_i , \end{equation} \begin{equation} \log \langle Z_\star \rangle _M = \log \sum_{i=1}^{N_\star} \mu_i \cdot Z_i .
\end{equation} Applying these definitions, Figure \ref{Fig_-_MtZ_Histograms_-_MS_SFS} shows the distribution of the mean stellar age $\langle \log t_\star \rangle$ (central panels) and mean stellar metallicity $\log \langle Z_\star \rangle$ (right-hand side panels) estimated with \Fado\ (blue) and \SL\ (red). The MS distributions show that, overall, the \Fado\ and \SL\ results are once more rather similar, with a few noticeable differences: (a) the metallicity distributions of \Fado\ are broader than those of \SL, (b) the $\langle \log t_\star \rangle_M$ absolute maxima differ by $\sim$0.1 dex between \Fado\ and \SL, and (c) the $\log \langle Z_\star \rangle_L$ and $\log \langle Z_\star \rangle_M$ absolute maxima differ by $\sim$0.1 dex between codes. Results for the SFS are relatively clearer: (a) the absolute maxima in the $\langle \log t_\star \rangle_L$ differ by $\sim$0.1 dex between \Fado\ and \SL, with the former estimating slightly higher mean ages, (b) \Fado\ displays a broader $\langle \log t_\star \rangle_M$ distribution than \SL, and (c) \Fado\ also displays broader metallicity distributions than \SL, with \SL\ preferring values closer to solar metallicity. It is worth noting that the overall MS mass and age distributions from \SL\ illustrated in Figure \ref{Fig_-_MtZ_Histograms_-_MS_SFS} are qualitatively in agreement with \cite{Mateus_etal_2006}, which analysed the SDSS with \SL. Moreover, the broadening of the age and metallicity distributions going from \SL\ to \Fado\ in the MS is also qualitatively similar to that reported by \cite{Werle_etal_2020} when going from purely-optical spectroscopy to UV photometry plus optical spectroscopy using \SL, even though that work is focussed on early-type galaxies.
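For a given best-fit population vector, the four weighted means defined above reduce to simple dot products over the SSP light and mass fractions. A minimal sketch (array names are illustrative):

```python
import numpy as np

def weighted_means(gamma, mu, t_yr, z_met):
    """Light- and mass-weighted mean ages and metallicities of a best-fit
    population vector, following the definitions in the text.

    gamma, mu : light and mass fractions of each SSP (each summing to 1)
    t_yr      : SSP ages [yr]
    z_met     : SSP metallicities (mass fractions, e.g. 0.02 for solar)
    """
    age_l = np.sum(gamma * np.log10(t_yr))   # <log t>_L
    age_m = np.sum(mu * np.log10(t_yr))      # <log t>_M
    z_l = np.log10(np.sum(gamma * z_met))    # log<Z>_L
    z_m = np.log10(np.sum(mu * z_met))       # log<Z>_M
    return age_l, age_m, z_l, z_m
```

Note that the ages are averaged in log space, whereas the metallicities are averaged linearly before taking the logarithm, matching the definitions above.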
Comparing results from the ISFS and PS in Figure \ref{Fig_-_MtZ_Histograms_-_PS_ISFS} shows that: (a) the \Fado\ and \SL\ mean age and metallicity distributions for the PS are very similar, the main difference being that the mass-weighted age distribution of \Fado\ is narrower and peaks higher than that of \SL, (b) the \SL\ light-weighted age (metallicity) distribution peaks at a lower (higher) value than \Fado\ when it comes to the ISFS, and (c) the mass-weighted age and metallicity ISFS distributions in both codes are much wider than their light-weighted counterparts. Interestingly, the light-weighted age distributions for the MS in Figure \ref{Fig_-_MtZ_Histograms_-_MS_SFS} show two peaks around $\sim$8.7 and 9.7 dex for both codes, which seem to visually match their SFS and PS distributions, respectively. This could be interpreted at first glance as evidence for the bimodal distribution of galaxies (e.g. \citealt{Gladders_Yee_2000, Strateva_etal_2001, Blanton_etal_2003, Kauffmann_etal_2003b, Baldry_etal_2004, Mateus_etal_2006}). Furthermore, the MS mass-weighted age distribution from \SL\ displays two peaks around $\sim$9.8 and 10.1 dex which, once again, match its SFS and PS distributions. However, the MS mass-weighted age distribution from \Fado\ shows only one prominent peak around $\sim$10.1 dex, which is congruent with its PS distribution. In addition, the SFS mass-weighted age distribution from \Fado\ (a) is much broader than that of \SL\ and (b) displays a conspicuous relative maximum at $\sim$9.1 dex of unknown origin, also present in the MS distribution. At least two factors could be behind the differences between the codes regarding these particular features: (a) the different mathematical (e.g. noise treatment and minimisation procedure) and physical approaches of the two codes (e.g. nebular continuum treatment) and (b) the intrinsic degeneracies between the SSPs in the adopted library (e.g. \citealt{Faber_1972, Worthey_1994, CidFernandes_etal_2005}).
In fact, it is worth considering that CGP19 found that nebular contamination in synthetic SF galaxies leads \SL\ to favour best-fit solutions where most of the light comes from a combination of very young and very old SSPs (i.e. bimodal best-fit solutions). A dedicated study exploring, for instance, how variations of the SSPs in the stellar library (e.g. \citealt{Leitherer_etal_1999, Bruzual_Charlot_2003, SanchezBlazquez_etal_2006, Molla_GarciaVargas_Bressan_2009}) impact the distributions of these stellar properties would help to clarify the sources behind the MS distribution features observed in each code. To put the results of Figures \ref{Fig_-_MtZ_Histograms_-_MS_SFS} and \ref{Fig_-_MtZ_Histograms_-_PS_ISFS} into context, it is also worth noting that \cite{CidFernandes_etal_2005} showed that the original version of \SL\ can recover the stellar mass, mean age and mean metallicity within $\sim$0.1 and $\sim$0.2 dex when modelling purely stellar synthetic spectra with S/N$\sim$10. Later, \cite{CidFernandes_etal_2014} used simulations of CALIFA data (\citealt{Sanchez_etal_2012}) and found that the same estimated stellar properties can be affected by uncertainties between $\sim$0.1 and 0.15 dex related to noise alone and $\sim$0.15 and 0.25 dex when it comes to changes in the base models. Similar accuracy values were found in CGP19 regarding both \Fado\ and \SL. Among other factors, these biases can be attributed to the adopted stellar ingredients not adequately representing the wide range of stellar types and their evolutionary stages (e.g. \citealt{GonzalezDelgado_CidFernandes_2010, Ge_etal_2019}) or possible variations on the stellar initial mass function (e.g. \citealt{Conroy_Gunn_White_2009, Fontanot_2014, Barber_Schaye_Crain_2019}). 
\begin{table*}
\begin{center}
\caption{Average stellar properties of the galaxy samples.}
\begin{tabular}{ccccccccc}
 & \multicolumn{2}{c|}{Main Sample} & \multicolumn{2}{c|}{Star-Forming Sample} & \multicolumn{2}{c|}{Intensively SF Sample} & \multicolumn{2}{c}{Passive Sample}\\
\cline{2-9}
$\mu\pm\sigma$ & \Fado & \multicolumn{1}{c|}{\SL} & \Fado & \multicolumn{1}{c|}{\SL} & \Fado & \multicolumn{1}{c|}{\SL} & \Fado & \SL\\
\cline{1-9}
${\log M_{\star}^{ever}}$ & 10.64$\pm$0.55 & \multicolumn{1}{c|}{10.62$\pm$0.54} & 10.15$\pm$0.51 & \multicolumn{1}{c|}{10.14$\pm$0.52} & 9.85$\pm$0.51 & \multicolumn{1}{c|}{9.81$\pm$0.54} & 10.97$\pm$0.39 & 10.94$\pm$0.38\\
${\log M_{\star}^{present}}$ & 10.35$\pm$0.54 & \multicolumn{1}{c|}{10.33$\pm$0.53} & 9.87$\pm$0.50 & \multicolumn{1}{c|}{9.86$\pm$0.51} & 9.59$\pm$0.50 & \multicolumn{1}{c|}{9.55$\pm$0.52} & 10.68$\pm$0.38 & 10.65$\pm$0.38\\
${\langle \log t_\star \rangle_L}$ & 9.24$\pm$0.53 & \multicolumn{1}{c|}{9.18$\pm$0.57} & 8.71$\pm$0.41 & \multicolumn{1}{c|}{8.59$\pm$0.41} & 8.07$\pm$0.35 & \multicolumn{1}{c|}{7.90$\pm$0.36} & 9.66$\pm$0.22 & 9.66$\pm$0.21\\
${\langle \log t_\star \rangle_M}$ & 9.89$\pm$0.28 & \multicolumn{1}{c|}{9.87$\pm$0.22} & 9.70$\pm$0.30 & \multicolumn{1}{c|}{9.70$\pm$0.21} & 9.47$\pm$0.43 & \multicolumn{1}{c|}{9.47$\pm$0.36} & 10.05$\pm$0.15 & 10.00$\pm$0.13\\
${\log \langle Z_\star \rangle_L}$ & -0.01$\pm$0.24 & \multicolumn{1}{c|}{0.03$\pm$0.18} & -0.20$\pm$0.26 & \multicolumn{1}{c|}{-0.12$\pm$0.19} & -0.42$\pm$0.21 & \multicolumn{1}{c|}{-0.29$\pm$0.16} & 0.12$\pm$0.14 & 0.13$\pm$0.11\\
${\log \langle Z_\star \rangle_M}$ & 0.06$\pm$0.21 & \multicolumn{1}{c|}{0.05$\pm$0.17} & -0.05$\pm$0.30 & \multicolumn{1}{c|}{-0.04$\pm$0.24} & -0.22$\pm$0.36 & \multicolumn{1}{c|}{-0.24$\pm$0.32} & 0.10$\pm$0.10 & 0.09$\pm$0.09
\end{tabular}
\end{center}
{ \textbf{Notes.} Average values $\mu$ and corresponding standard deviations $\sigma$ of the total stellar mass (in solar masses $M_{\odot}$) ever-formed
$M_{\star}^{ever}$ and presently available $M_{\star}^{present}$, mean stellar age (in years) weighted by light $\langle \log t_\star \rangle_L$ and mass $\langle \log t_\star \rangle_M$, and mean stellar metallicity (in solar units $Z_{\odot}=0.02$) weighted by light $\log \langle Z_\star \rangle_L$ and mass $\log \langle Z_\star \rangle_M$ for the Main, Star-Forming, Intensively Star-Forming and Passive Samples, as defined in Subsection \ref{SubSection_-_Sample_Selection}. } \label{Table_-_MS_and_SFS} \end{table*} The average MS, SFS, ISFS and PS population values for mass, age and metallicity are gathered in Table \ref{Table_-_MS_and_SFS} for each code. The differences between codes for total mass, mean age and mean metallicity reach up to $\sim$0.06 dex for the MS and $\sim$0.12 dex for the SFS. The most significant difference in the MS relates to the average light-weighted age, whereas for the SFS the most prominent discrepancies between codes rest on the light-weighted age and metallicity and amount to $\sim$0.12 and 0.08 dex, respectively. Moreover, the ISFS and PS results in this table show stellar property differences between codes in the ranges of $\sim$0.02--0.17 dex for the ISFS and $\sim$0--0.05 dex for the PS. Although the main discrepancy in the ISFS falls again on the average light-weighted age ($\sim$0.17 dex), the main difference between codes for the PS has to do with the average mass-weighted age ($\sim$0.05 dex). The possible origins for these differences are explored in Subsection \ref{SubSection_-_Light_and_Mass_Distributions}. Finally, for the remainder of this work the working definition of stellar mass is $M_\star^{present}$.
\subsection{Light and mass distributions}\label{SubSection_-_Light_and_Mass_Distributions} As the previous results have shown, the mean age $\langle \log t_\star \rangle$ and mean metallicity $\log \langle Z_\star \rangle$ parameters are powerful tools to evaluate the stellar properties of galaxies, particularly when grouped in populations of the same type. However, these parameters also lack the detailed information required to understand some of the differences between \Fado\ and \SL\ reported in the previous subsection. The same would be true for any ad hoc binning scheme that can be adopted to downscale the best-fit population vector (PV) into more easily manageable parameters. Indeed, a better strategy to study the code differences for each galaxy population is to look into the full information encoded in the best-fit PVs regarding the light and mass contributions of each stellar element. With this in mind, Figure \ref{Fig_-_SFH_vs_age} shows the light (top panels) and mass relative contributions (bottom panels) of each stellar element (i.e. SSP) in the adopted base library as a function of age. The relative contributions of the six metallicities in the base are summed up along each age step (represented by the black squares). These results can be viewed as the light and mass SFHs of each galaxy population, with Figure \ref{Fig_-_SFH_vs_age_-_cumulative} showing their cumulative versions. Several results are worth discussing. Firstly, the relative light contributions of young-to-intermediate SSPs with $t \! < \! 10^8$ yr increase with increasing \EWha\ in both codes, following the PS$\rightarrow$MS$\rightarrow$SFS$\rightarrow$ISFS sequence. The opposite trend is seen in old SSPs with $t \! > \! 10^9$ yr. This is to be expected, since young bright stellar populations dominate their older counterparts in terms of light output in ISFS and SFS galaxies, whereas the bulk of stellar content in the PS is rather old and considerably less luminous.
Secondly, the light distribution results show a peak around $t \! \sim \! 10^9$ yr in both codes, ranging between $\sim$5\% (ISFS) and 25\% (PS). \cite{Asari_etal_2007} reported a similar hump around 1 Gyr in the SFR when displayed as a function of stellar age. The authors carried out tests with \SL\ and found that the hump disappears after changing the stellar library of SSPs from STELIB (\citealt{Bruzual_Charlot_2003}), as in this work, to MILES (\citealt{SanchezBlazquez_etal_2006}). The comparison between \Fado\ and \SL\ is particularly revealing when it comes to the relative contribution of young and intermediate-aged SSPs. Results show that SSPs with ages $t \! < \! 10^8$ yr contribute more light in \SL\ than in \Fado, which is increasingly noticeable with increasing \EWha\ (PS$\rightarrow$MS$\rightarrow$SFS$\rightarrow$ISFS). Indeed, Figure \ref{Fig_-_SFH_vs_age_-_cumulative} shows that the light contribution in \SL\ is 5.41\% greater than in \Fado\ at $t \! = \! 10^7$ yr for the ISFS, reaching 9.11\% around $t \! = \! 10^9$ yr. This means that \SL\ overestimates the relative light contribution of young stellar populations in relation to \Fado\ (or vice versa), thus accounting for the $\sim$0.12 and 0.17 dex differences in $\langle \log t_\star \rangle_L$ between the codes for the SFS and ISFS populations, respectively (Table \ref{Table_-_MS_and_SFS}). When it comes to the mass distributions, the differences between the codes are considerably less pronounced. The main reason for this lies in the fact that, in the local universe, most stellar mass is locked into older stellar populations. This is best exemplified by the increasing importance of $t \! > \! 10^{10}$ yr SSPs with decreasing \EWha, following the ISFS$\rightarrow$SFS$\rightarrow$MS$\rightarrow$PS sequence. Apart from the peak around $t \! \sim \! 10^9$ yr already noted, it is worth observing that the \SL\ results in the ISFS show higher mass contributions from SSPs at $t \sim 3 \cdot 10^8$ yr than \Fado.
This seems to be offset by an excess of $\sim$5\% of SSPs with $t > 10^{10}$ yr, which could explain why the average mean stellar age of \SL\ for the SFS ($\sim$9.7 dex) and ISFS ($\sim$9.47 dex) documented in Table \ref{Table_-_MS_and_SFS} is coincidentally identical to that of \Fado. In general, these differences between \Fado\ and \SL\ when it comes to both the light and mass distributions are somewhat similar to those reported by \cite{Werle_etal_2019} when adopting a new version of \SL\ that incorporates UV photometry. Exploring further the information encoded in the PVs, Figure \ref{Fig_-_SFH_vs_Z} shows the light (top panels) and mass relative contributions (bottom panels) of each stellar element in the base as a function of metallicity. In contrast to Figure \ref{Fig_-_SFH_vs_age}, the relative contributions of the 25 age steps in the base are now summed along the six metallicities in the base library (represented by the black squares). All other plot details are as in Figure \ref{Fig_-_SFH_vs_age}. The light distributions displayed on the top panels show for both codes that the relative contributions of the lowest metallicities increase with increasing \EWha\ following PS$\rightarrow$MS$\rightarrow$SFS$\rightarrow$ISFS, with the opposite trend at the highest metallicities. The inversion point occurs somewhere between $2 Z_\odot / 5$ and $Z_\odot$ (i.e. $\log ( Z_\star/Z_\odot) = -0.398$ and 0, respectively). Moreover, the best-fit solutions from \Fado\ have higher light contributions from the two lowest metallicities than \SL, with SSPs with $Z_\odot / 200$ and $Z_\odot / 50$ (i.e. $\log ( Z_\star/Z_\odot) \simeq -2.301$ and -1.699, respectively) contributing $\sim$20-25\% in \Fado\ and $\sim$15-20\% in \SL.
This is best displayed in Figure \ref{Fig_-_SFH_vs_Z_-_cumulative}, which shows that the difference in the relative contribution between \Fado\ and \SL\ is already 8.05\% at $Z_\odot / 200$ and reaches 13.51\% by $Z_\odot / 50$ for the ISFS. This indicates that \SL\ overestimates the metallicity of SF galaxies in comparison to \Fado\ (or vice versa). However, this interpretation is strongly tempered by the particularly large standard deviations observed for the SFS and ISFS populations. As a side note, CGP19 found using synthetic galaxy spectra that \SL\ systematically underestimates the light-weighted mean metallicity by up to $\sim$0.6 dex with decreasing age of the galaxy (i.e. increasing \EWha) for $t<10^9$ yr, whereas its mass-weighted counterpart can be underestimated for $10^7<t<10^9$ yr by up to $\sim$0.6 dex or overestimated by up to $\sim$0.4 dex for $t<10^7$ yr. Similar results were found in P21. Although the details of these trends depend on the SFH of the models (e.g. instantaneous or continuous), the adopted fitting methodology (e.g. including or excluding the emission lines and the Balmer and Paschen discontinuities) and the S/N, the important fact to bear in mind is that these tests were carried out assuming a constant solar stellar metallicity of $Z_\odot=0.02$. Therefore, the metallicity results detailed in Figures \ref{Fig_-_MtZ_Histograms_-_MS_SFS} and \ref{Fig_-_SFH_vs_Z} cannot be easily compared to the results of CGP19 or P21. In contrast, the overall mean age trends documented in this work are compatible with those found in CGP19 or P21. Finally, the bottom panels of Figure \ref{Fig_-_SFH_vs_Z} show that the mass distributions differ very little between codes, in a similar vein to those in Figure \ref{Fig_-_SFH_vs_age}. However, it is interesting to note the negligible contribution of the two lowest metallicities to the mass when it comes to the PS galaxies, in contrast to both the SFS and ISFS.
Coupled with the trends observed in the mass distributions as a function of age, these results point again to the idea of a bimodal distribution of galaxies (e.g. \citealt{Gladders_Yee_2000, Strateva_etal_2001, Blanton_etal_2003, Kauffmann_etal_2003b, Baldry_etal_2004}). \subsection{Relations between mass, mean age and mean metallicity in star-forming galaxies}\label{SubSection_-_MtZ_Relations} Figures \ref{Fig_-_SFS_-_M_vs_Z}, \ref{Fig_-_SFS_-_M_vs_t} and \ref{Fig_-_SFS_-_t_vs_Z} show as contour lines the relations between the total stellar mass $M_\star$, the mean stellar age $\langle \log t_\star \rangle$ and the mean stellar metallicity $\log \langle Z_\star \rangle$ for the SFS galaxy population (Figures \ref{Fig_-_MS_-_M_vs_Z}--\ref{Fig_-_MS_-_t_vs_Z} show similar plots for the MS), with special symbols representing average values for the ISFS, SFS and PS populations.
%
The mass-metallicity relation displayed in Figure \ref{Fig_-_SFS_-_M_vs_Z} shows an increasing divergence between the codes in light-weighted metallicity as \EWha\ increases and metallicity decreases. This is illustrated by the somewhat steeper linear regression of \Fado\ in comparison with \SL\ and by the difference between the average values for the ISFS, SFS and PS populations. In contrast, both codes show similar results when it comes to the mass-weighted metallicity. Both trends are in keeping with the discussion of Figure \ref{Fig_-_SFH_vs_Z}. The mass-age relation in Figure \ref{Fig_-_SFS_-_M_vs_t} also shows interesting results. For instance, \SL\ shows a systematic light-weighted age underestimation in comparison with \Fado\ with increasing \EWha, as shown by the increasing divergence between codes following the sequence PS$\rightarrow$SFS$\rightarrow$ISFS. This is particularly interesting since the age distributions from both codes are rather similar for the SFS. 
At the same time, \Fado\ shows a mass-weighted mean age distribution that is broader and with a lower absolute maximum when compared to \SL, even though the average mean stellar ages of the ISFS and SFS samples for each code are very similar, as noted in Table \ref{Table_-_MS_and_SFS} and in Subsection \ref{SubSection_-_MtZ_Distributions}. Finally, the age-metallicity relation in Figure \ref{Fig_-_SFS_-_t_vs_Z} gives another perspective on the age and metallicity differences between codes seen in Figures \ref{Fig_-_SFS_-_M_vs_Z} and \ref{Fig_-_SFS_-_M_vs_t}. The contour lines alone clearly illustrate the wider age and metallicity distributions from \Fado\ in comparison with \SL. The mass-weighted linear regression lines indicate that the overall SFS distributions are intrinsically different between codes, even though the average values for each population are rather similar. The \Fado\ results in Figure \ref{Fig_-_SFS_-_t_vs_Z} are particularly interesting in this regard, since the light-weighted metallicity increases with age whereas its mass-weighted counterpart decreases with its corresponding age. Indeed, the distributions of $\langle \log t_\star \rangle_M$ and $\log \langle Z_\star \rangle_M$ from \Fado\ are significantly broader than those from \SL\ (as illustrated in Figure \ref{Fig_-_MtZ_Histograms_-_MS_SFS}), with relative maxima at $\langle \log t_\star \! \rangle_M \! \sim 9.1$ and $\log \langle Z_\star \rangle_M \! \sim \! -2.1$, pointing towards an anti-correlation between age and metallicity. This anti-correlation can be attributed, at least partly, to the well-known age-metallicity degeneracy, with young metal-rich stellar populations being indistinguishable from old metal-poor populations from the point of view of spectral synthesis (e.g. \citealt{Faber_1972, OConnell_1980, Bressan_Chiosi_Tantalo_1996, Pelat_1997, Pelat_1998, CidFernandes_etal_2005}). 
Although this is especially noticeable when it comes to the SFS, the mass-weighted age-metallicity relation presented in Figure \ref{Fig_-_MS_-_t_vs_Z} suggests that this trend is also present in the MS. \section{Discussion}\label{Section_-_Discussion} \subsection{Impact of the nebular continuum}\label{Subsection_-_Impact_of_the_Nebular_Continuum} An important factor to consider in the interpretation of the results presented in Section \ref{Section_-_Results} is the potential impact that the nebular continuum modelling approach of each code (or the lack thereof) has on the estimated stellar properties of SF galaxies. Moreover, one wonders if this effect can be distinguished from (a) the intrinsic uncertainties (e.g. adequacy of age and metallicity coverages) and degeneracies associated with the adopted physical ingredients (e.g. SSPs, extinction, and kinematics) and (b) the different mathematical methods adopted in each code (e.g. the Metropolis algorithm coupled with simulated annealing in \SL\ and genetic differential evolution optimisation in \Fado). With this in mind, one of the main objectives of the methodology presented in Section \ref{Section_-_Methodology} was to focus on the impact of nebular continuum modelling in SF galaxies by reducing the number of variables in the code comparison, following a fitting strategy similar to that of CGP19+P21. However, there are important methodological differences between this study and CGP19+P21 that prevent a straightforward comparison between works, such as: (a) the synthetic galaxies analysed in CGP19+P21 have constant solar stellar metallicity, (b) the current work includes two extra sub-solar metallicities in the adopted stellar library, and (c) the most prominent emission lines were masked in \SL\ using individual spectral masks in this work, whereas CGP19+P21 adopted a general mask built from \SL\ tests using SDSS observations (\citealt{CidFernandes_etal_2005}). 
Notwithstanding, the question regarding the impact of the nebular continuum modelling approach on the inferred stellar properties remains. In order to address this issue from a different angle, \Fado\ was applied to the SFS and ISFS galaxies in a purely stellar mode (cf. \citealt{Gomes_Papaderos_2017}) using the same input spectra and spectral fitting setup as in \SL. The objective is to model the spectra using only a combination of stellar components, similarly to \SL, and compare the outcome with the previous \Fado\ results in which the stellar and nebular spectral continua were fitted self-consistently. Figure \ref{Fig_-_MtZ_Histograms_-_PurelyStellar} compares the distributions of the stellar properties using \Fado\ in `purely stellar mode' ($\mathtt{ST}$mode) with the results presented in Section \ref{Section_-_Results} for \Fado\ in `full-consistency mode' ($\mathtt{FC}$mode) and \SL. Several interesting results are worth noting: (a) the \FadoSTmode\ mass distributions are very similar to those of \FadoFCmode, (b) the \FadoSTmode\ light-weighted mean age distribution is close to that of \SL\ for SFS and falls between \SL\ and \FadoFCmode\ for ISFS, and (c) the \FadoSTmode\ mass-weighted metallicity distributions are slightly more skewed to higher values than those of \FadoFCmode\ for both SFS and ISFS, a trend similar to that observed in the \SL\ results. On the one hand, the fact that the \FadoSTmode\ results are in general closer to \FadoFCmode\ than to \SL\ suggests that the code differences documented in Section \ref{Section_-_Results} are dominated by the fundamental mathematical and statistical differences between codes (e.g. different minimisation procedures), with different physical ingredients and methods (e.g. different nebular continuum modelling approaches and emission-line masking) playing a secondary role. However, it is reasonable to think that these two factors are likely interconnected, if not mutually dependent. 
On the other hand, the \FadoSTmode\ light-weighted age distributions for SFS and ISFS seem to skew from the \FadoFCmode\ distributions towards those of \SL. This is more clearly illustrated in Figure \ref{Fig_-_SFH_vs_age_&_Z_-_PurelyStellar_-_cumulative}, which compares the light and mass distributions of the PVs of \FadoSTmode\ with those of \SL\ and \FadoFCmode. This figure shows that \FadoSTmode\ overestimates for ISFS the contribution of SSPs younger than $10^7$ yr ($10^9$ yr) by $\sim$5.74\% (0.88\%) in relation to \FadoFCmode, while also overestimating the contribution of SSPs with metallicities $\leq Z_\odot / 200$ ($Z_\odot / 50$) by $\sim$9.68\% (8.7\%). These results show that the nebular continuum modelling approach impacts the \Fado\ results similarly to the light-weighted age trends presented in Subsection \ref{SubSection_-_Light_and_Mass_Distributions} and, therefore, indirectly suggest that the SFS \SL\ results are likely affected by the lack of a nebular continuum modelling recipe (even if its impact is mixed with other uncertainties inherent to population synthesis). One way to rigorously quantify this impact would be to apply \SL\ in `self-consistent mode' to these samples of SF galaxies, which obviously is not an option, even considering the code version first presented in \cite{LopezFernandez_etal_2016}. These results highlight an obvious yet elusive idea worth considering during the development of the next generation of spectral synthesis codes. The comparison of new codes, with ever more relevant physical ingredients (e.g. self-consistent dust or AGN treatment), with their older counterparts will necessarily be increasingly complex due to the introduction of more statistical uncertainties and degeneracies between the physical ingredients. Parallel tests with correspondingly increasing physical complexity using synthetic spectral data (e.g. CGP19; P21) will continue to be essential in such an endeavour. 
\subsection{Potential implications}\label{Subsection_-_Implications} Assuming that purely stellar modelling in fact leads to an overestimation of the mean stellar metallicity with decreasing mass or increasing \EWha, the mass-metallicity relation presented in Figure \ref{Fig_-_SFS_-_M_vs_Z} could become steeper with increasing redshift as gas reservoirs become both increasingly more abundant and less chemically enriched. This raises the further question of how well synthesis models can mimic the stellar content of galaxies at large distances. Most commonly adopted evolutionary models are based on stellar libraries of stars in the solar vicinity (e.g. \citealt{Leitherer_etal_1999, Bruzual_Charlot_2003, Vazquez_Leitherer_2005, SanchezBlazquez_etal_2006, Molla_GarciaVargas_Bressan_2009, Rock_etal_2016}), which might not be representative of the stellar populations at high redshift, especially when it comes to extremely low metallicities and Population III stars (e.g. \citealt{Schaerer_2002}). The fact that AGN and nebular continua dilute absorption features (e.g. \citealt{Koski_1978, CidFernandes_StorchiBergmann_Schmitt_1998, Moultaka_Pelat_2000, Kauffmann_etal_2003c, Vega_etal_2009, Cardoso_Gomes_Papaderos_2016, Cardoso_Gomes_Papaderos_2017}) further exacerbates this problem from the point of view of spectral synthesis. However, this concern is counterbalanced by the increasingly sophisticated theoretical synthesis models (e.g. \citealt{Coelho_etal_2007, Molla_GarciaVargas_Bressan_2009, Leitherer_etal_2010, Coelho_2014, Stanway_Eldridge_2019, Coelho_Bruzual_Charlot_2020}) which can help bridge the gap towards better hybrid evolutionary models. Moreover, a systematic mean stellar age underestimation when adopting a purely stellar modelling approach has repercussions for the current interpretation of the physical properties of young stellar populations. As noted by \cite{Reines_etal_2010}, estimating the physical properties (e.g. 
age and metallicity) of star clusters through population synthesis can only be reliably accomplished when modelling both the nebular continuum and emission lines. The same is true for galaxies with relatively high specific SFRs in the local universe, such as dwarfs (e.g. \citealt{Papaderos_etal_1998, Papaderos_Ostlin_2012}) and `green peas' (e.g. \citealt{Izotov_Guseva_Thuan_2011, Amorin_etal_2012}), or at higher redshifts (e.g. \citealt{Zackrisson_Bergvall_Leitet_2008, Schaerer_deBarros_2009}). From a different perspective, variations in the emission-line measurements could impact SFR estimations and, thus, indirectly affect estimations of the cosmic SF history (\citealt{Panter_etal_2007}). The fact that \Fado\ measures emission lines after subtracting the nebular continuum means that the EWs of the Balmer lines will always be greater than those based on a purely stellar modelling approach. At the same time, SFR estimators based on emission-line fluxes are not expected to change, since the modelling (or not) of the nebular continuum does not impact the measured fluxes. \section{Summary and conclusions}\label{Section_-_Conclusions} The population synthesis code \Fado\ (\citealt{Gomes_Papaderos_2017, Gomes_Papaderos_2018}) was applied to the main sample of galaxies from the SDSS (\citealt{York_etal_2000}) DR7 (\citealt{Abazajian_etal_2009}) with the aim of re-evaluating the relations between the main stellar properties of galaxies (e.g. mass, mean age, and mean metallicity), particularly those of SF galaxies. The main reason for such a re-analysis is the fact that \Fado\ is the first publicly available population synthesis tool to self-consistently model both the stellar and nebular continua. In fact, previous studies adopted purely stellar population synthesis codes (e.g. \SL, \citealt{CidFernandes_etal_2005}) to infer the physical properties of galaxies, regardless of their spectral and morphological type (e.g. 
\citealt{Kauffmann_etal_2003a, Panter_Heavens_Jimenez_2003, CidFernandes_etal_2005, Asari_etal_2007, Tojeiro_etal_2009}). Comparing the physical properties inferred by the population synthesis codes \Fado\ and \SL\ for four distinct galaxy samples, namely the Main Sample (i.e. the general population of galaxies), Star-Forming (EW(H$\alpha$)$>$3~\AA), Intensively Star-Forming (EW(H$\alpha$)$>$75~\AA), and Passive (EW(H$\alpha$)$<$0.5~\AA) samples, shows that: \begin{itemize} \item[$\bullet$] Mass distributions for the different galaxy samples are similar between \Fado\ and \SL. \item[$\bullet$] Mean age and mean metallicity distributions in the SFS from \Fado\ are broader than those of \SL, especially when weighted by mass. Moreover, the average light-weighted age of \SL\ is lower by $\sim$0.17 dex than that of \Fado\ for ISFS galaxies, whereas the light-weighted metallicity of \SL\ is $\sim$0.13 dex higher than that of \Fado. \item[$\bullet$] Even though both codes show very similar mass, age and metallicity distributions for the PS population of galaxies, \Fado\ displays a narrower and higher mass-weighted age peak than \SL. \item[$\bullet$] Cumulative light distributions of the best-fit PVs as a function of age show for ISFS that the contribution of SSPs with $t < 10^7$ yr is greater by 5.41\% in \SL\ than in \Fado, reaching 9.11\% for SSPs younger than $t < 10^9$ yr, which means that \SL\ overestimates the relative light contribution of young stellar populations in comparison with \Fado\ (or vice versa). Moreover, cumulative light distributions as a function of metallicity indicate that the light difference between \Fado\ and \SL\ for ISFS is $\sim$8.05\% and 13.51\% for $Z_\star \leq Z_\odot/200$ and $Z_\odot/50$ (i.e. the lowest metallicities in the adopted stellar library), respectively. This means that \Fado\ underestimates the metallicity when compared to \SL\ (or vice versa). 
Meanwhile, mass distributions as a function of age and metallicity from both codes are very similar for all samples, except that \SL\ once again fits more SSPs with ages $t<10^9$ yr than \Fado\ for both SFS and ISFS. \item[$\bullet$] Comparing the different permutations of the relations between total mass, mean age and mean metallicity recovers all previous results in a more general way: (a) light-weighted age and metallicity differences between codes increase with increasing EW(H$\alpha$) and decreasing total mass, (b) mass-weighted age and metallicity differences between codes are minimal, except for the age underestimation of \SL\ in relation to \Fado\ when it comes specifically to PS galaxies, and (c) the SFS age and metallicity distributions of \Fado\ are broader than those of \SL. \end{itemize} These results indicate that the nebular continuum modelling approach significantly impacts the inferred stellar properties of SF galaxies, even if the negative effects of a purely stellar modelling approach are mixed with other uncertainties and degeneracies associated with population synthesis. For instance, this work found that the modelling of the nebular continuum with \Fado\ yields a steeper light-weighted mass-metallicity correlation and a flatter light-weighted mass-age correlation when compared to \SL, a purely stellar population synthesis code. Among other potential implications, this means that the stellar populations of low-mass galaxies in the local universe with relatively high specific SFRs are both more metal-poor and older than previously thought. These results are particularly relevant in light of future high-resolution spectroscopic surveys at higher redshifts, such as 4MOST-4HS (\citealt{deJong_etal_2019}) and MOONS (\citealt{Cirasuolo_etal_2011, Cirasuolo_etal_2020, Maiolino_etal_2020}), for which the fraction of intensively SF galaxies is expected to be higher. 
\begin{acknowledgements} The authors thank the anonymous referee for valuable comments and suggestions, and colleagues Andrew Humphrey, Israel Matute and Tom Scott (Instituto de Astrofísica e Ciências do Espaço; IA) for engaging scientific discussions. This work was supported by Fundação para a Ciência e a Tecnologia (FCT) through the research grants UID/FIS/04434/2019, UIDB/04434/2020 and UIDP/04434/2020. L.S.M.C. acknowledges support by the project `Enabling Green E-science for the SKA Research Infrastructure (ENGAGE SKA)' (reference POCI-01-0145-FEDER-022217) funded by COMPETE 2020 and FCT. J.M.G. is supported by the DL 57/2016/CP1364/CT0003 contract and acknowledges the previous support by the fellowships CIAAUP-04/2016-BPD in the context of the FCT project UID/FIS/04434/2013 and POCI-01-0145-FEDER-007672, and SFRH/BPD/66958/2009 funded by FCT and POPH/FSE (EC). P.P. is supported by the project `Identifying the Earliest Supermassive Black Holes with ALMA (IdEaS with ALMA)' (PTDC/FIS-AST/29245/2017). C.P. acknowledges support from DL 57/2016 (P2460) from the `Departamento de Física, Faculdade de Ciências da Universidade de Lisboa'. A.P.A. acknowledges support from the Fundação para a Ciência e a Tecnologia (FCT) through the work Contract No. 2020.03946.CEECIND, and through the FCT project EXPL/FIS-AST/1085/2021. J.A. acknowledges financial support from the Science and Technology Foundation (FCT, Portugal) through research grants UIDB/04434/2020 and UIDP/04434/2020. P.L. is supported by the DL 57/2016/CP1364/CT0010 contract. Funding for the SDSS and SDSS-II has been provided by the Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is \url{http://www.sdss.org/}. 
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. \end{acknowledgements} \normalsize \clearpage \begin{appendix} \section{Supplements}\label{Appendix_-_Supplements} Figures \ref{Fig_-_MS_-_fit_example_1}--\ref{Fig_-_PS_-_fit_example_2} show the results of the \Fado\ and \SL\ spectral fits for two randomly selected spectra from each galaxy population defined in Subsection \ref{SubSection_-_Sample_Selection}: Main Sample, Star-Forming Sample, Intensively Star-Forming Sample and Passive Sample, respectively. The light-weighted SFH results for the SFS (Figures \ref{Fig_-_SFS_-_fit_example_1} \& \ref{Fig_-_SFS_-_fit_example_2}) and ISFS spectra (Figures \ref{Fig_-_ISFS_-_fit_example_1} \& \ref{Fig_-_ISFS_-_fit_example_2}) show in particular the differences between \Fado\ and \SL, with the relative contribution of SSPs younger than 100 Myr in \SL\ being much more prominent than those in the \Fado\ results. 
Figures \ref{Fig_-_SFH_vs_age_-_cumulative} and \ref{Fig_-_SFH_vs_Z_-_cumulative} show the cumulative versions of Figures \ref{Fig_-_SFH_vs_age} and \ref{Fig_-_SFH_vs_Z}, with the light and mass contributions being summed with increasing age and metallicity, respectively. The fact that the assembly histories for the PS are practically identical between codes is an important sign that both codes estimate similar SFHs for galaxies with negligible or no nebular contribution to the continuum (\EWha <0.5 \AA). Figures \ref{Fig_-_MS_-_M_vs_Z}, \ref{Fig_-_MS_-_M_vs_t} and \ref{Fig_-_MS_-_t_vs_Z} show the relations between the total stellar mass, mean stellar age and mean stellar metallicity for the MS of galaxies for both codes. The contours and histograms show that the distributions of the three stellar properties from both codes are similar, the only interesting difference being that \Fado\ displays a slightly broader light-weighted mean metallicity distribution than \SL. \end{appendix}
Title: Evolution of the Hub-filament Structures in IC 5146 in the Context of the Energy Balance of Gravity, Turbulence, and Magnetic Field
Abstract: We present the results of 850 $\mu$m polarization and C$^{18}$O (3-2) line observations toward the western hub-filament structure (W-HFS) of the dark Streamer in IC 5146 using the James Clerk Maxwell Telescope (JCMT) SCUBA-2/POL-2 and HARP instruments. We aim to investigate how the relative importance of the magnetic field, gravity, and turbulence affects core formation in HFSs by comparing the energy budget of this region. We identified four 850 $\mu$m cores and estimated the magnetic field strengths ($B_{\rm pos}$) of the cores, the hub, and the filament using the Davis-Chandrasekhar-Fermi method. The estimated $B_{\rm pos}$ ranges from $\sim$80 to 1200 $\mu$G. From Wang et al., $B_{\rm pos}$ of E-47, a core in the eastern hub (E-hub), and of E-hub were re-estimated with the same method to be 500 and 320 $\mu$G, respectively. We measured the gravitational ($E_{\rm G}$), kinematic ($E_{\rm K}$), and magnetic energies ($E_{\rm B}$) in the filament and hubs and compared their relative importance. We found that the $E_{\rm B}$-dominant filament shows the $aligned$ fragmentation type, while the $E_{\rm G}$-dominant hubs show the $no$ and $clustered$ fragmentation types. In the $E_{\rm G}$-dominant hubs, the portion of $E_{\rm K}$ seems to determine whether a hub develops the $clustered$ (portion of $E_{\rm K}\sim20\%$) or the $no$ fragmentation type ($\sim10\%$). We propose an evolutionary scenario for the E- and W-HFSs, in which the HFS first forms by the collision of turbulent flows, and the hubs and filaments then undergo various types of fragmentation depending on their energy balance of gravity, turbulence, and magnetic field.
https://export.arxiv.org/pdf/2208.07891
\title{Evolution of the Hub-filament Structures in IC~5146 in the Context of the Energy Balance of Gravity, Turbulence, and Magnetic Field} \author{Eun Jung Chung} \affiliation{Department of Astronomy and Space Science, Chungnam National University, Daejeon, Republic of Korea} \author{Chang Won Lee} \affiliation{Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea} \affiliation{University of Science and Technology, Korea (UST), 217 Gajeong-ro, Yuseong-gu, Daejeon 34113, Republic of Korea} \author{Woojin Kwon} \affiliation{Department of Earth Science Education, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea} \affiliation{SNU Astronomy Research Center, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea} \author{Hyunju Yoo} \affiliation{Department of Astronomy and Space Science, Chungnam National University, Daejeon, Republic of Korea} \author{Archana Soam} \affiliation{Indian Institute of Astrophysics, Kormangala (IIA), Bangalore 560034, India} \author{Jungyeon Cho} \affiliation{Department of Astronomy and Space Science, Chungnam National University, Daejeon, Republic of Korea} \keywords{Interstellar magnetic fields (845), Interstellar medium (847), Polarimetry (1278), Submillimeter astronomy (1647), Star forming regions (1565)} \section{Introduction} Stars are known to mainly form in the dense clumps/cores which have developed in filamentary molecular clouds \citep[e.g.,][]{andre2010}. Hence, painstaking efforts have been made to understand how filaments and dense cores form and evolve \citep[e.g.,][]{arzoumanian2019,chung2019}. Though the details are still under debate, it is suggested that molecular filaments first form by the dissipation of large-scale turbulence and that dense clumps/cores are then generated in the gravitationally supercritical filaments via fragmentation \citep{andre2014}. 
The magnetic field is generally considered to play a key role in star formation. In molecular filaments on pc scales, striations are observed to be parallel to the magnetic field, and it is suggested that the magnetic field acts as a conduit for material to flow onto the main filaments until the filaments accrete sufficient mass to collapse under gravity and form cores \citep[e.g.,][]{palmeirim2013}. In the stage of gravitational collapse, the magnetic field is theoretically expected to provide significant support against collapse under self-gravity, explaining the longer lifetimes of molecular clouds compared with their free-fall collapse times \citep[e.g.,][and references therein]{mckee2007}. However, an hourglass morphology of the magnetic field is frequently found in massive star-forming clouds, indicating that the magnetic field can be modified by gravity or outflows on sub-parsec scales \citep[e.g.,][]{wang2019,lyo2021}. Besides, shocks from outflows, stellar feedback from the expanding ionization fronts of {\sc H~ii} regions, and gas flows driven by gravity are considered to cause magnetic field distortions \citep[e.g.,][]{hull2017b,pillai2020,arzoumanian2021,eswar2021}. Hence, the significance of the magnetic field may change from time to time as well as from cloud to cloud. In this paper, we recast the question of how stars form as the question of how gravity, turbulence, and the magnetic field play their roles in forming stars, especially in the process of fragmentation from cloud to clumps/cores. The precise roles of gravity, turbulence, and the magnetic field are still unclear, especially at the different evolutionary stages of filaments and dense cores. Cloud and star formation models suggest that the magnetic field and turbulence may have different importance at different evolutionary stages \citep[e.g.,][]{crutcher2012}. 
Moreover, a subtle difference in the relative significance among gravity, turbulence, and the magnetic field can lead to different evolution from the clump to the core scale \citep[e.g.,][]{hennebelle2011,tang2019}.
%
In this paper, we investigate the roles of gravity, turbulence, and the magnetic field in the western hub-filament structure (W-HFS) of the dark Streamer in IC~5146. Hub-filament structures (HFSs), consisting of a central hub with relatively high column density ($> 10^{22} ~\rm cm^{-2}$) and several filaments extending from the hub with relatively low column density and high aspect ratio, are easily found in nearby star-forming molecular clouds and in more distant infrared dark clouds \citep{myers2009}. The central hub of an HFS is frequently observed to be associated with stars and stellar clusters and is thus likely a birthplace of stellar clusters \citep[e.g.,][]{gutermuth2009,myers2009,kumar2020}. Therefore, the HFSs of nearby star-forming molecular clouds may be among the best laboratories to test the initial conditions of the formation of stars and stellar clusters. The dark Streamer of IC~5146 is located northwest of the Cocoon Nebula, in the constellation Cygnus. It has a long filamentary shape, and two prominent HFSs are located in its eastern and western parts, as shown in Figure~\ref{fig:obsregion}. The properties of filaments and dense cores in IC~5146 have been studied with various molecular lines as a part of the ``TRAO survey of the nearby Filamentary molecular clouds, the Universal Nursery of Stars" \citep[TRAO FUNS;][]{chung2021}. Velocity coherent filaments in the IC~5146 region were identified using the 3-dimensional information of the $\ceo~(1-0)$ line data, which have 49$^{\as}$ spatial resolution (corresponding to 0.14~pc at the distance of 600~pc) and 0.1~$\kms$ channel width. 
It was found that there is a velocity coherent filament, referred to as F4 hereafter, over the dark Streamer of IC~5146, and that smaller filaments and clumps with velocities different from that of F4 overlap along the line of sight. F4 is gravitationally supercritical, and interestingly it has two hubs at its eastern and western ends, named E-hub and W-hub, respectively. Dense cores were identified with the $\nthp~(1-0)$ data, for which the spatial resolution and velocity channel width are 52$^{\as}$ and 0.06~$\kms$, respectively. The E- and W-hub regions are found to have one dense core each. The $\ceo~(1-0)$ line which traces the filament gas material reveals that the two hubs are supersonic ($\sigma_{\rm NT} / c_{\rm s} \sim 3$), but the $\nthp~(1-0)$ line which traces the dense cores shows that the core material is less turbulent than the filament gas ($\sigma_{\rm NT} / c_{\rm s} \sim 2$) \citep{chung2021}. Polarization observations of IC~5146 made with Planck show that the magnetic field is nearly uniform and prefers an orientation perpendicular to the gas column density contours \citep{planck35}. The optical and near-infrared (NIR) polarization observations toward the whole IC~5146 region present uniform magnetic field vectors perpendicular to the dark Streamer \citep{wang2017}. Submillimeter polarization observations were made toward the E-hub as a part of the BISTRO survey \citep[][W2019 hereafter]{wang2019}, showing a curved magnetic field morphology which implies a possible modification of the magnetic field by gravitational contraction in the hub. We adopt a distance to IC~5146 of 600$\pm$100~pc in this study, as measured by \citet{wang2020a} using GAIA DR2. The paper is organized as follows. In Section~\ref{sec:obsdr}, we describe the observations and data reduction. In Section~\ref{sec:results}, we present the results of the observations and the measured magnetic field strengths. 
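The angular-to-physical scale conversions quoted here and in the following section all follow from the adopted distance via the small-angle approximation; a minimal sketch (the function name is illustrative):

```python
ARCSEC_PER_RAD = 206265.0  # arcseconds in one radian

def physical_scale_pc(theta_arcsec: float, distance_pc: float) -> float:
    """Physical size subtended by a small angle at a given distance."""
    return theta_arcsec / ARCSEC_PER_RAD * distance_pc

# TRAO C18O (1-0) beam: 49" at 600 pc -> ~0.14 pc
print(round(physical_scale_pc(49.0, 600.0), 2))   # 0.14

# JCMT/POL-2 850 um effective beam: 14.1" at 600 pc -> ~0.041 pc
print(round(physical_scale_pc(14.1, 600.0), 3))   # 0.041
```

The $\pm$100~pc distance uncertainty propagates linearly, i.e. roughly a 17\% uncertainty on all physical scales.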
We analyze and discuss the implications of our results in Sections~\ref{sec:anal} and \ref{sec:disc}, respectively, and summarize the main results of our study in Section~\ref{sec:summ}. \\ \section{Observations} \label{sec:obsdr} \subsection{Polarization Observations} The western HFS of IC~5146 was observed with the SCUBA-2/POL-2 instrument on the James Clerk Maxwell Telescope (JCMT) between 2020 June 17 and July 8. The region was observed 21 times, and each data set has an average integration time of 41 minutes in weather band~2 ($0.05 < \tau_{\rm 225~GHz} \leq 0.08$). The observations were made in the POL-2 daisy map mode, which covers a circular region of 11$^\prime$ diameter with the best sensitivity coverage in the central 3$^\prime$ of the map. SCUBA-2/POL-2 simultaneously obtains data at 450~$\mu$m and 850~$\mu$m with effective beam sizes of 9.6$^{\as}$ and 14.1$^{\as}$ (0.028 and 0.041~pc at a distance of 600~pc), respectively. We present the results from the 850~$\mu$m data only in this paper. The 850~$\mu$m data were reduced using the STARLINK/SMURF package {\tt pol2map}. The data reduction process follows three main steps. In the first step, the raw bolometer time streams for each observation are converted into separate Stokes {\em I}, {\em Q}, and {\em U} time streams using the process {\it calcqu}. Then, an initial Stokes {\em I} map is created from all observations via the iterative map-making process {\it makemap}. In the second step, with the initial {\em I} map, a mask is iteratively determined based on the signal-to-noise ratio, and the background pixels defined by the mask are set to zero at the end of each iteration within {\it makemap}. The use of a mask produces an improved individual {\em I} map by preventing the growth of gradients and any artificial large-scale structure, and by protecting the evaluation of the various noise models from bright sources. The final {\em I} map is produced by co-adding the improved individual {\em I} maps. 
In the final step, {\em Q} and {\em U} maps are created from the {\em Q} and {\em U} time streams with the same masks used in the previous step. The instrumental polarization is corrected with the final improved {\em I} map using the `August 2019' IP model \citep{friberg2018}. The final {\em I}, {\em Q}, and {\em U} maps are then produced with a pixel size of 4$^{\as}$, and the final debiased polarization vector catalog is provided with a bin size of 12$^{\as}$, which is close to the beam size of JCMT/POL-2 at 850~$\mu$m, to increase the signal-to-noise ratios of the polarization data. The Stokes {\em I} parameter is the total intensity of the incoming light, and the Stokes {\em Q} and {\em U} parameters are defined as: \begin{equation} Q = I \times P \times \rm cos(2\phi) \label{eq:q} \end{equation} and \begin{equation} U = I \times P \times \rm sin(2 \phi), \label{eq:u} \end{equation} where $P$ is the polarization fraction and $\phi$ is the polarization angle. Because the polarized intensity {\em PI} is the quadratic sum of {\em Q} and {\em U}, $PI=\sqrt{Q^{2} + U^{2}}$, the noise in {\em Q} and {\em U} always makes a positive contribution to the polarized intensity \citep[e.g.,][]{vaillancourt2006}. The debiased polarized intensity is estimated with the modified asymptotic estimator \citep{plaszczynski2014} as follows: \begin{equation} PI = \sqrt{Q^{2} + U^{2}} - \sigma^{2} \frac{1 - e^{-(Q^{2} + U^{2})/\sigma^{2}}}{2\sqrt{Q^{2} + U^{2}}}, \end{equation} \noindent where $\sigma^{2}$ is the weighted mean of the variances of {\em Q} and {\em U}: \begin{equation} \sigma^{2} = \frac{Q^{2} \sigma_{Q}^{2} + U^{2} \sigma_{U}^{2}}{Q^{2} + U^{2}}, \end{equation} and $\sigma_{Q}$ and $\sigma_{U}$ are the standard errors of {\em Q} and {\em U}, respectively.
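As an illustration, the modified asymptotic estimator above can be sketched in a few lines of Python (a minimal sketch with a hypothetical helper name; the actual debiasing is performed inside {\tt pol2map}):

```python
import math

def debiased_pi(Q, U, sigma_Q, sigma_U):
    """Modified asymptotic estimator for the polarized intensity
    (Plaszczynski et al. 2014): naive PI minus a noise-dependent bias."""
    pi_naive = math.hypot(Q, U)
    # weighted mean of the Q and U variances
    sigma2 = (Q**2 * sigma_Q**2 + U**2 * sigma_U**2) / (Q**2 + U**2)
    bias = sigma2 * (1.0 - math.exp(-pi_naive**2 / sigma2)) / (2.0 * pi_naive)
    return pi_naive - bias
```

At high signal-to-noise the correction tends to $\sigma^{2}/(2\,PI)$, so the debiased value is always slightly below the naive quadratic sum.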
The debiased polarization fraction $P$ and its corresponding uncertainty are calculated as \begin{equation} P = \frac{PI}{I} \label{eq:p} \end{equation} and \begin{equation} \sigma_{P} = \sqrt{\frac{\sigma^{2}}{I^{2}} + \frac{\sigma_{I}^{2}(Q^{2} + U^{2})}{I^{4}}} , \end{equation} where $\sigma_{I}$ is the standard error of {\em I}. We selected polarization measurements with the criteria that (1) the signal-to-noise ratio (S/N) of the total intensity is larger than 10 ($I / \sigma_{I} > 10$) and (2) the polarization fraction is larger than twice its uncertainty ($P / \sigma_{P} > 2$). A flux conversion factor (FCF) of 668~Jy~beam$^{-1}$~pW$^{-1}$ is used for the Stokes {\em I}, {\em Q}, and {\em U} data at 850~$\mu$m in this paper. The FCF is determined by multiplying the standard 850~$\mu$m SCUBA-2 flux conversion factor of 495~Jy~beam$^{-1}$~pW$^{-1}$ by 1.35 to correct for the additional losses from POL-2 \citep{mairs2021}. The rms noise values in the {\em I}, {\em Q}, and {\em U} data binned to a pixel size of 12$\as$ are 3.1, 2.9, and 2.8~mJy~beam$^{-1}$, respectively. \\ \subsection{The $\ceo~(3-2)$ Line Observations} \label{sec:c18o} \noindent To estimate the velocity dispersion of the cores in the W-HFS, we carried out $\ceo~(3-2)$ line observations toward the $9^{\prime} \times 9^{\prime}$ area of the W-HFS with the Heterodyne Array Receiver Programme (HARP) on the JCMT \citep{buckle2009}. The observations were conducted over six nights between 2020 August 28 and September 20 in weather band~2 ($0.05 < \tau_{\rm 225~GHz} \leq 0.08$). The data were taken in the raster mode at the default sample spacing of 7.3~arcsec. The total observing time for the $\ceo~(3-2)$ line is about 10 hours. The spatial resolution is $\sim 14^{\as}$ at 330~GHz, the same as that of the JCMT/POL-2 850~$\mu$m data. The data were reduced using the ORAC Data Reduction (ORAC-DR) pipeline in the {\sc starlink} software \citep{buckle2012}.
The mean rms level is $\sim 0.06$~K in the final data cube with a 7.3~arcsec pixel size and a 0.15~$\kms$ channel width. \\ \section{Results} \label{sec:results} \subsection{Identification of the 850~$\mu$m Cores} \label{ssec:coreid} Figure~\ref{fig:850cores} shows the 850~$\mu$m Stokes {\em I} contours on the Herschel 250~$\mu$m image. The 250~$\mu$m emission clearly delineates the structure of the W-HFS, where several elongated structures are connected to the hub. The guide lines of the filaments identified with the $\ceo~(1-0)$ emission \citep{chung2021} are drawn with white dashed lines. To briefly introduce the structure of the W-HFS revealed by the three-dimensional $\ceo~(1-0)$ data cube: four filaments converge onto the central hub. However, the northern filament (N-filament hereafter) is well separated from the W-hub in the velocity dimension ($\Delta v \sim 2~\kms$), while the other three filaments are connected in position-position-velocity space. Hence, the W-HFS consists of the W-hub and the three southern filaments, while the N-filament appears to be physically well separated from the W-hub. \begin{deluxetable*}{lccccccccccc} \input{tbl_850coreswJ} \end{deluxetable*} The 850~$\mu$m emission appears to trace the dense cores on the filaments. The W-hub has dust emission of $\sim 10 - 400$~mJy~beam$^{-1}$, similar to that of the E-hub shown in W2019. We identified dense cores by applying the \textsc{FellWalker} clump-finding algorithm \citep{berry2015} to the 850~$\mu$m emission. Pixels with intensities $> 3 \sigma$ are used to find cores, and an object having a peak intensity higher than 10$\sigma$ and a size larger than twice the beam size of 14.1$^{\as}$ is identified as a real dense core. Two neighboring peaks are considered separate cores if the difference between each peak value and the minimum (dip) value between the peaks is larger than 2$\sigma$.
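The acceptance criteria above can be written as two small predicates (hypothetical helper names, mirroring the text rather than the actual \textsc{FellWalker} configuration):

```python
def is_real_core(peak_snr, size_arcsec, beam_arcsec=14.1):
    """A clump is kept as a real dense core when its peak exceeds
    10 sigma and its size exceeds twice the 14.1-arcsec beam."""
    return peak_snr > 10.0 and size_arcsec > 2.0 * beam_arcsec

def are_separate_cores(peak1_snr, peak2_snr, dip_snr):
    """Two neighboring peaks are kept as distinct cores only when
    each peak rises more than 2 sigma above the dip between them."""
    return (peak1_snr - dip_snr > 2.0) and (peak2_snr - dip_snr > 2.0)
```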
We found a few tens of dense cores in the circle of 11$^\prime$ diameter and 3 dense cores in the W-hub. In this paper, we analyzed the central four dense cores, named C1, C2, C3, and C4 from north to south. C1 and C2 are found to contain Young Stellar Objects (YSOs) from Spitzer data \citep{harvey2008}, while C3 and C4 are more likely starless. C1 has one Class~I YSO, while C2 has multiple YSOs, two Class~I and one Class~II. In particular, one YSO in C2, IRAS 21429+4726, shows a prominent outflow \citep{dobashi2001}, suggesting that C2 is a more active star-forming region than C1. The blue- and red-shifted lobes driven by IRAS 21429+4726 are indicated with two colored arrows in Figure~\ref{fig:850cores}. We calculated the masses of the 850~$\mu$m cores with the following equation \citep[e.g.,][]{hildebrand1983}: \begin{equation} {\it M} = \frac{S_{\nu} ~d^{2}}{\kappa_{\nu}~ B_{\nu}(T_{\rm d})}, \label{eq:m850} \end{equation} where $S_{\nu}$, $\kappa_{\nu}$, $B_{\nu}$, $T_{\rm d}$, and $d$ are the integrated flux density, the opacity, the Planck function at the wavelength of 850~$\mu$m, the dust temperature, and the distance, respectively. The dust opacity is obtained from the relation $\kappa_{\nu} = 0.1(\nu / 10^{12} \rm Hz)^{\beta} \rm cm^{2}~g^{-1}$, assuming a dust-to-gas ratio of 1:100 \citep{beckwith1991} and a dust emissivity index of $\beta=2$ \citep{draine1984}. The dust temperature was taken from Herschel data \citep{andre2010,arzoumanian2011} after convolution to the JCMT resolution of 14.1$^{\as}$ using a Gaussian convolution kernel. The masses of the cores range between $\sim$2 and 9~$M_{\odot}$. The W-hub and N-filament are also identified with the {\sc FellWalker} algorithm. Pixels with intensity $> 0.5 \sigma$ are used, and the resulting extents of the W-hub and N-filament are presented with white polygons in Figure~\ref{fig:850cores}.
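The mass estimate of Equation~\ref{eq:m850} can be sketched in Python as follows (a minimal illustration with assumed cgs constant values, not the authors' code):

```python
import math

# physical constants (cgs)
H = 6.626e-27    # erg s
C = 2.998e10     # cm/s
KB = 1.381e-16   # erg/K
MSUN = 1.989e33  # g
PC = 3.086e18    # cm

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * T))

def core_mass(S_jy, T_dust, d_pc=600.0, lam_um=850.0, beta=2.0):
    """Core mass (solar masses) from the integrated 850-um flux,
    M = S_nu d^2 / (kappa_nu B_nu(T_d)), with
    kappa_nu = 0.1 (nu / 1e12 Hz)^beta cm^2 g^-1 (gas-to-dust 100)."""
    nu = C / (lam_um * 1e-4)               # Hz
    kappa = 0.1 * (nu / 1e12)**beta        # cm^2 g^-1
    S_cgs = S_jy * 1e-23                   # Jy -> erg s^-1 cm^-2 Hz^-1
    d = d_pc * PC
    return S_cgs * d**2 / (kappa * planck(nu, T_dust)) / MSUN
```

A 1-Jy integrated flux at $T_{\rm d} = 10$~K and 600~pc corresponds to roughly 9.5~$M_{\odot}$; warmer dust yields a smaller mass for the same flux.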
It shows that the W-hub covers the area of C2, C3, and C4, and the N-filament includes three cores, i.e., C1, another core to the north of C1 (N-C1), and a third to the west of C1 (W-C1). Their masses are calculated using Equation~\ref{eq:m850}. The length and width of the N-filament are derived with the {\sc filfinder} algorithm \citep{koch2015}. Figure~\ref{af:c18oprof} shows the $\ceo~(3-2)$ integrated intensity map and the averaged spectra of the N-filament, W-hub, and cores. The moment~0 map is integrated over the velocity range from 0.5 to 6.0~$\kms$. As shown in the figure, the spectrum of C3 appears to have a single Gaussian component, and its peak velocity (4.12~$\kms$) is consistent with the core velocity derived with $\nthp~(1-0)$ \citep[4.20~$\kms$;][]{chung2021}. However, the spectra of some cores show a secondary velocity component. Some filaments are known to have multiple velocity substructures called fibers \citep[see][and references therein]{pineda2022}, and the $\ceo~(1-0)$ observations of this region found overlaps of different velocity components in the plane of the sky \citep{chung2021}. Hence, to estimate the $\ceo~(3-2)$ line widths associated with the W-hub, N-filament, and cores, we performed a multicomponent Gaussian fit to the averaged spectra extracted over the area of the W-hub, N-filament, and each core. The fitting results are presented with red and blue lines on the observed spectra in Figure~\ref{af:c18oprof}. We compared the resulting peak velocities of the Gaussian decomposed components with the velocities of the core material derived from the $\nthp~(1-0)$ line data \citep{chung2021}. The major components of C1 (1.63~$\kms$), C2 (4.28~$\kms$), and C4 (4.38~$\kms$) agree with the $\nthp~(1-0)$ peak velocities of the cores (1.70, 4.20, and 4.20~$\kms$, respectively). Hence, we used the line width of the major component as representative of the gas velocity dispersion of these three cores.
The velocity dispersions ($\sigma_{\rm obs}$) are 0.25$\pm$0.09, 0.42$\pm$0.10, 0.40$\pm$0.10, and 0.11$\pm$0.08~$\kms$ for C1, C2, C3, and C4, respectively. The major components of the W-hub (4.24~$\kms$) and N-filament (1.60~$\kms$) are well matched to those of the cores they contain, and $\sigma_{\rm obs}$ of the W-hub and N-filament are 0.34$\pm$0.12 and 0.23$\pm$0.08~$\kms$, respectively. The quoted uncertainty of the velocity dispersion is the standard deviation of the velocity dispersions of the spectra in each core. The total one-dimensional velocity dispersion ($\sigma_{\rm tot}$) is the quadrature sum of the non-thermal ($\sigma_{\rm NT}$) and thermal ($\sigma_{\rm T}$) components, $\sigma_{\rm tot}^{2} = \sigma_{\rm NT}^{2} + \sigma_{\rm T}^{2}$ \citep{myers1983}. The thermal velocity dispersion of the observed molecule is \begin{equation} \sigma_{\rm T, obs} = \sqrt{\frac{k_{\rm B} T}{\mu_{\rm obs} m_{\rm H}}}, \end{equation} where $k_{\rm B}$, $T$, $\mu_{\rm obs}$, and $m_{\rm H}$ are the Boltzmann constant, the gas temperature, the atomic weight of the observed molecule, and the hydrogen mass, respectively. The non-thermal velocity dispersion can then be calculated by subtracting the thermal velocity dispersion from the observed velocity dispersion in quadrature: \begin{equation} \sigma_{\rm NT} = \sqrt{\sigma_{\rm obs}^{2} - \frac{k_{\rm B} T}{\mu_{\rm obs} m_{\rm H}}}, \label{eq:sigmaNT} \end{equation} where $\sigma_{\rm obs}$ is the velocity dispersion obtained from the line width of the observed spectrum ($\sigma_{\rm obs} = \Delta v / \sqrt{8 \ln 2}$). We used $\sigma_{\rm obs}$ of the $\ceo~(3-2)$ line. The estimated properties of the W-hub, N-filament, and cores are listed in Table~\ref{tab:ppcore}. \\ \subsection{Polarization Properties} \label{ssec:pp} Figure~\ref{fig:i_pi_lsq} shows the polarization fraction ($P$) as a function of the total intensity ($I$).
The dependence of $P$ on $I$ is described by a power law, $P \propto I^{- \alpha}$, where the index $\alpha$ is closely related to the grain alignment efficiency. If the dust grains align in the same fashion at all optical depths, $\alpha$ would equal zero, whereas if the grain alignment decreases linearly with increasing optical depth, $\alpha$ would be 0.5. A value of $\alpha$ equal to unity indicates that the grains align only in a thin layer at the surface of the cloud, while grains at higher densities do not align in any preferred direction. In the W-HFS of IC~5146, the polarization fraction decreases toward higher-intensity regions ($\alpha=0.86$), suggesting a higher degree of depolarization in the higher-density regions. The depolarization in the denser regions is also well represented in the relationship between the polarized intensity and the total intensity shown in the right panel of Figure~\ref{fig:i_pi_lsq}. The polarization fractions at $I \lesssim 40 \rm ~mJy~beam^{-1}$ are in the range of 5 to 20\%, while those at $I > 40 \rm ~mJy~beam^{-1}$ are less than 5\%. The polarization vectors are presented on the 850~$\mu$m image in the left panel of Figure~\ref{fig:pvectors}. Polarized emission is found in the less dense filaments as well as in the denser hub. As in Figure~\ref{fig:i_pi_lsq}, Figure~\ref{fig:pvectors} also shows that the polarization fraction decreases in the dense core regions. \\ \subsection{Magnetic Field Morphology and Strength} \subsubsection{Magnetic Field Morphology} \label{sss:bmor} Magnetic field orientations can be obtained by rotating the submillimeter polarization vectors by 90 degrees. The right panel of Figure~\ref{fig:pvectors} shows the magnetic field vectors in the W-HFS. Around C1, the main direction of the magnetic field is southeast-northwest, nearly perpendicular to the northeast-southwest direction of the filament.
In contrast, the magnetic field morphology around the cores in the hub is much more complex. The main direction is likely south-north, but magnetic field vectors with an east-west direction are found as well. There are two main features of the magnetic fields in the W-hub. One is the abrupt change of the magnetic field vectors around C2, and the other is the curved magnetic field in the near vicinity of C3. The sudden change of the magnetic field orientations near the center of C2 is seemingly related to the bipolar outflows observed in CO \citep{dobashi2001}. The outflow is along the east-west direction. The magnetic field vectors show a slightly curved, hourglass morphology in the northern and western regions of C2. Hence, the magnetic field lines near C2 are possibly being modified by the outflows. The magnetic fields near C3 appear to have a pinched shape, as presented with the green lines in Figure~\ref{fig:pvectors}. This hourglass morphology of the magnetic field can be the result of the gravitational contraction of C3 \citep[e.g.,][]{pattle2017,wang2019}. Another possibility is that active gas motions such as infall and accretion flows, which are observed in the W-hub, modify the magnetic field. Infall signatures are observed around the cores C3 and C4 in the W-hub in the $\hcop~(1-0)$ molecular line observations, and velocity gradients of $\ceo~(1-0)$ are found in the W-HFS, implying the existence of possible accretion flows from the filaments to the hub \citep[][]{chung2021}. Numerous observations suggest that infall motions and accretion flows can modify the magnetic field \citep[e.g.,][]{pillai2020}. The curved magnetic field lines following the elongated filamentary structure in the W-hub may therefore be evidence of the modification of the magnetic fields by the gas motions of infall and accretion flows.
\\ \subsubsection{Magnetic Field Strength} We measured the magnetic field strengths of the cores C1, C2, and C3 using the Davis-Chandrasekhar-Fermi (DCF) method \citep{davis1951,chandrasekhar1953}. The magnetic field strength of C4 is not calculated since the number of magnetic field vectors is too small. The total magnetic fields of the W-hub and N-filament are also estimated. Assuming that the underlying magnetic field is uniform but distorted by turbulence, the DCF method estimates the magnetic field strength in the plane of the sky ($B_{\rm pos}$) in $\mu$G from the magnetic field angular dispersion ($\delta \phi$), the velocity dispersion of the gas ($\sigma$), and the gas density ($\rho$) using the equation \citep{crutcher2004etal}: \begin{align} B_{\rm pos} &= Q_{\rm c} \sqrt{4 \pi \rho} \frac{\sigma}{\delta \phi} \nonumber \\ &\approx 9.3 \sqrt{\bar{n}_{\rm H_{2}}} \frac{\Delta v}{\delta \phi} , \end{align} \noindent where $Q_{\rm c}$ is the correction factor, adopted as 0.5 from \citet{ostriker2001}, which accounts for the underestimation of the angular dispersion in the polarization map due to the beam integration effect and hence the overestimation of the magnetic field strength. $\bar{n}_{\rm H_{2}}$ is the mean volume density of molecular hydrogen in cm$^{-3}$, and $\Delta v = \sigma_{\rm NT} \sqrt{8 \ln 2}$ in $\kms$. We applied an unsharp-masking method to remove the underlying ordered magnetic field structure and then measured the magnetic field angular dispersion \citep{pattle2017}. First, we estimated the large-scale, background magnetic field structure by smoothing the magnetic field map with a 3$\times$3 pixel boxcar filter ($36^{\as} \times 36^{\as}$)\footnote{The size of the boxcar filter is chosen to remove the underlying curved magnetic fields that seem to be caused by gas motions such as outflow, infall, and accretion flows, as mentioned in Section~\ref{sss:bmor}.
\citet{pattle2017} reported that the use of filter sizes larger than 3$\times$3 pixels can cause an overestimation of the angular dispersion even for shallow field curvature. When applying a 5$\times$5 pixel boxcar filter to our data, we failed to reproduce the curved magnetic field shapes in the core regions, especially around C2, and the resulting angular dispersions are found to be larger than those measured with the 3$\times$3 pixel boxcar filter.}. Then, the smoothed map was subtracted from the observed magnetic field map. Finally, the angular dispersion is measured from the residual map. The standard deviation of the polarization angle error is given as the uncertainty of the estimated angular dispersion in Table~\ref{tab:bfield_hf}. The angular dispersions of the regions range from $\sim$7 to 20 degrees, and thus the DCF method is well applicable. The mean H$_{2}$ volume densities ($\bar{n}_{\rm H_{2}}$) of the cores and W-hub are estimated from the total mass and an ellipsoidal volume, assuming that the thickness is equal to the geometric mean of the observed major and minor axes obtained from the 2D Gaussian fit. The volume of the assumed spheroid is equal to that of a sphere having a radius equal to the geometric mean of the observed semi-major and semi-minor sizes; it is smaller and larger than the volumes of the corresponding oblate and prolate spheroids, respectively. The mean of the volume differences between the assumed spheroid and the oblate and prolate spheroids is used in the propagated error of $\bar{n}_{\rm H_{2}}$. $\bar{n}_{\rm H_{2}}$ of the N-filament is estimated from the total mass and a cylindrical volume whose radius equals half of the filament's width. $\Delta v$ was estimated from the non-thermal velocity dispersion obtained from Equation~\ref{eq:sigmaNT} using the line width of the averaged $\ceo~(3-2)$ spectrum. The adopted $\bar{n}_{\rm H_{2}}$ and $\Delta v$ and the measured magnetic field strengths are tabulated in Table~\ref{tab:bfield_hf}.
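Putting the ingredients together, the path from the observed $\ceo~(3-2)$ FWHM line width to $B_{\rm pos}$ can be sketched as follows (a minimal illustration with assumed cgs constant values; $\mu_{\rm obs}=30$ for C$^{18}$O):

```python
import math

KB = 1.381e-16   # Boltzmann constant, erg/K
M_H = 1.673e-24  # hydrogen mass, g

def sigma_nt(fwhm_kms, T_k, mu_obs=30.0):
    """Non-thermal velocity dispersion (km/s): subtract the thermal
    dispersion of the observed molecule from sigma_obs in quadrature."""
    sigma_obs = fwhm_kms / math.sqrt(8.0 * math.log(2.0))
    sigma_T = math.sqrt(KB * T_k / (mu_obs * M_H)) / 1e5  # cm/s -> km/s
    return math.sqrt(sigma_obs**2 - sigma_T**2)

def b_pos_dcf(n_h2, sigma_nt_kms, dphi_deg):
    """Plane-of-sky field strength (microgauss) from the DCF relation
    B_pos ~ 9.3 sqrt(n_H2) dv / dphi, with dv = sigma_NT sqrt(8 ln 2)
    in km/s, dphi in degrees, and Q_c = 0.5 folded into the 9.3."""
    dv = sigma_nt_kms * math.sqrt(8.0 * math.log(2.0))
    return 9.3 * math.sqrt(n_h2) * dv / dphi_deg
```

Note the square-root density scaling: quadrupling $\bar{n}_{\rm H_{2}}$ only doubles the inferred field strength, so $B_{\rm pos}$ is relatively insensitive to the assumed core geometry.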
The measured magnetic field strengths of the N-filament and W-hub are 80 and 600~$\mu$G, respectively, and those of the cores in the W-HFS are found to range between 0.8 and 1.2~mG, which lies within the range of magnetic field strengths measured in Gould Belt molecular clouds by the JCMT BISTRO survey (120$\pm 60~\mu$G in Perseus B1 by \citet{coude2019}; 6.6$\pm 4.7$~mG in OMC~1 by \citet{pattle2017}). \\ \begin{deluxetable*}{lcccccccc} \label{tab:bfield_hf} \input{tbl_bfield4_hf} \end{deluxetable*} \vspace{5mm} \section{Analysis} \label{sec:anal} \subsection{Magnetic field strength versus Gravity} \label{sec:bvsg} The observed mass-to-magnetic flux ratio is compared with the critical mass-to-magnetic flux ratio to discuss the relative importance of magnetic fields and gravity. The observed mass-to-magnetic flux ratio in units of the critical ratio ($\lambda_{\rm obs}$) is written as \citep{crutcher2004etal}: \begin{equation} \lambda_{\rm obs} = \frac{(M/\Phi)_{\rm obs}}{(M/\Phi)_{\rm crit}}. \end{equation} The observed mass-to-magnetic flux ratio is \begin{equation} (M/\Phi)_{\rm obs} = \frac{\mu m_{\rm H} N_{\rm H_{2}}}{B_{\rm pos}}, \end{equation} where $\mu$ is the mean molecular weight of 2.8 and $N_{\rm H_{2}}$ is the H$_{2}$ column density, and the critical mass-to-magnetic flux ratio is \begin{equation} (M/\Phi)_{\rm crit} = \frac{1}{2 \pi \sqrt{G}}. \end{equation} \noindent \citet{crutcher2004} proposed $\lambda_{\rm obs} = 7.6 \times 10^{-21} N_{\rm H_{2}} / B_{\rm pos}$ with $N_{\rm H_{2}}$ in cm$^{-2}$ and $B_{\rm pos}$ in $\mu$G. The real mass-to-magnetic flux ratio can be estimated using a statistical mean correction factor of one third, accounting for random inclinations of an oblate spheroidal core flattened perpendicular to the orientation of the magnetic field \citep[$\lambda = \lambda_{\rm obs} / 3$;][]{crutcher2004etal}.
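The mass-to-flux criticality test above reduces to two one-line functions (a sketch with hypothetical names; units as in the text):

```python
def lambda_obs(N_h2_cm2, B_pos_uG):
    """Observed mass-to-flux ratio in units of the critical value
    (Crutcher 2004): 7.6e-21 N(H2)/B_pos, with N(H2) in cm^-2 and
    B_pos in microgauss."""
    return 7.6e-21 * N_h2_cm2 / B_pos_uG

def lambda_corrected(N_h2_cm2, B_pos_uG):
    """Statistical correction of 1/3 for random inclinations of an
    oblate spheroid flattened perpendicular to the field."""
    return lambda_obs(N_h2_cm2, B_pos_uG) / 3.0
```

A region is magnetically supercritical (gravity wins) when the corrected ratio exceeds unity and subcritical otherwise.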
The estimated mass-to-magnetic flux ratios are given in Table~\ref{tab:bfield_hf}. $\lambda$ of the N-filament and W-hub are 0.2$\pm$0.1 and 1.2$\pm$0.9, making them magnetically subcritical and supercritical, respectively. $\lambda$ of C1, C2, and C3 are 0.5$\pm$0.3, 1.5$\pm$0.8, and 0.8$\pm$0.6, respectively, implying that C1 is magnetically subcritical while C2 and C3 are marginally supercritical within the uncertainties. The magnetic field strength measured from the DCF method tends to be overestimated because the limited angular resolution and the possible smoothing effect along the line of sight cause an underestimation of the angular dispersion \citep[e.g.,][]{heitsch2001,ostriker2001,crutcher2012}. Meanwhile, the non-thermal velocity dispersion used in the estimation of the magnetic field strength may be overestimated because motions such as mass flows, infall, and outflows can be added to the turbulent motion. Hence, the magnetic field strengths of the cores presented here would be upper limits on the true values, and the mass-to-magnetic flux ratios would be lower limits. \\ \subsection{Magnetic field strength versus Turbulence} \label{sec:bvst} The importance of magnetic fields with respect to the kinetic energy is investigated using the Alfv\'enic Mach number ($M_{\rm A}$): \begin{equation} M_{\rm A} = \frac{\sigma_{\rm NT}}{V_{\rm A}}, \end{equation} where $\sigma_{\rm NT}$ is the non-thermal velocity dispersion and $V_{\rm A}$ is the Alfv\'en velocity. The Alfv\'en velocity is estimated by \begin{equation} V_{\rm A} = \frac{B}{\sqrt{4 \pi \bar{\rho}}} \end{equation} where $\bar{\rho}$ is the mean density ($\mu m_{\rm H} \bar{n}_{\rm H_{2}}$). For the total magnetic field strength $B$, the statistical average value of $B_{\rm pos}$, $(4/\pi)B_{\rm pos}$, is used \citep{crutcher2004etal}. The calculated Alfv\'en velocities and Mach numbers are presented in Table~\ref{tab:bfield_hf}.
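For illustration, the Alfv\'en velocity and Mach number computation can be sketched as (assumed cgs constants; illustrative only, not the authors' code):

```python
import math

MU = 2.8         # mean molecular weight
M_H = 1.673e-24  # hydrogen mass, g

def alfven_mach(sigma_nt_kms, B_pos_uG, n_h2):
    """Alfvenic Mach number M_A = sigma_NT / V_A, using the statistical
    mean total field B = (4/pi) B_pos and rho = mu m_H n(H2)."""
    B = (4.0 / math.pi) * B_pos_uG * 1e-6           # microgauss -> G
    rho = MU * M_H * n_h2                            # g cm^-3
    v_a = B / math.sqrt(4.0 * math.pi * rho) / 1e5   # cm/s -> km/s
    return sigma_nt_kms / v_a
```

Values below unity are sub-Alfv\'enic, i.e., the magnetic field dominates the turbulent motions.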
$M_{\rm A}$ of the N-filament, W-hub, and cores are found to range from 0.2 to 0.5, i.e., sub-Alfv\'enic. Therefore, the magnetic field dominates turbulence in the W-HFS. \\ \subsection{Energy balance} \label{ssec:energies} We estimated the gravitational, kinematic, and magnetic energies of the N-filament, W-hub, and cores in the W-HFS following \citet{mckee2007}. The virial theorem used to discuss the relative importance of the gravitational, kinematic, and magnetic energies in a molecular cloud can be written as \begin{equation} \frac{1}{2} \ddot{I} = 2E_{\rm K} + E_{\rm B} + E_{\rm G} , \end{equation} where $E_{\rm K}$, $E_{\rm B}$, and $E_{\rm G}$ are the total kinematic energy, the magnetic energy, and the gravitational potential energy, respectively. The quantity $I$ is proportional to the inertia tensor of the cloud, and hence positive and negative $\ddot{I}$ correspond to accelerating expansion and contraction of the cloud, respectively. The total kinematic energies of the W-hub and cores are estimated as \begin{equation} E_{\rm K}^{\rm sphere} = \frac{3}{2} M \sigma_{\rm tot}^{2} , \end{equation} and that of the N-filament is derived as \begin{equation} E_{\rm K}^{\rm cylinder} = M \sigma_{\rm tot}^{2} , \end{equation} where $\sigma_{\rm tot}$ is the observed total velocity dispersion \citep[e.g.][]{fiege2000i}. The observed total velocity dispersion is estimated with the mean molecular weight per free particle, $\mu_{\rm p}$=2.37 \citep{kauffmann2008}, from: \begin{equation} \sigma_{\rm tot} = \sqrt{\sigma_{\rm NT}^{2} + \frac{k_{\rm B} T}{\mu_{\rm p} m_{\rm H}}} . \end{equation} The magnetic energy is calculated as \begin{equation} E_{\rm B} = \frac{1}{2} M V_{\rm A}^{2}. \end{equation} The gravitational energies of the W-hub and cores are estimated from \begin{equation} E_{\rm G}^{\rm sphere} = - \frac{3}{5} \frac{GM^{2}}{R}.
\label{eq:eqegs} \end{equation} \noindent The geometric mean of the semi-major and semi-minor sizes of the W-hub and cores is used for $R$. The gravitational energy of the N-filament is calculated from \begin{equation} E_{\rm G}^{\rm cylinder} = - \frac{GM^{2}}{L} \label{eq:eqegf}, \end{equation} where $L$ is the length of the filament \citep{fiege2000i}. In these energy calculations, the surface kinetic energy is ignored, and thus the estimated total kinematic energy should be treated only in terms of the self-stability of the enclosed region \citep{wang2020b}. Since the sign of $\ddot{I}$ indicates expansion (positive) or contraction (negative) of the core, $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}|$ can be used as an indicator of expansion ($> 1$) or contraction ($< 1$). The estimated quantities are tabulated in Table~\ref{tab:bfield_hf}. \\ \section{Discussion} \label{sec:disc} \subsection{HFSs at both ends of the Streamer} \label{ssec:compew} The dark Streamer of IC~5146 has two prominent hubs, one at the east end and the other at the west end. The column density map from Herschel data shows that the E-hub has a higher column density than the W-hub \citep{andre2010,arzoumanian2011}. The E- and W-hubs contain ten and four YSO candidates, respectively \citep{harvey2008,poglitsch2010,chung2021}. \citet{chung2021} made various molecular line observations toward IC~5146 and revealed that the two hubs, located in the velocity-coherent filament F4 (see the peak velocity map of F4 in Figure~\ref{fig:ewhubs}), have similar mass accretion rates and nonthermal velocity dispersions. The estimated mass accretion rates from the filaments to the dense cores in the E- and W-hubs are $26\pm14$ and $35\pm11~\rm M_{\odot}~ Myr^{-1}$, respectively. The nonthermal velocity dispersions of the two hubs are about three times the sound speed.
Meanwhile, the 850~$\mu$m images of the E- and W-hubs show slightly different fragmentation features. The W-hub has two cores with similar masses and one less massive core, while the E-hub has a dominant clump and minor cores. \begin{sidewaysfigure*} \begin{center} \vspace{-3.3in} \includegraphics[width=\textwidth,keepaspectratio]{ewhubs4land3.key.pdf} \end{center} \caption{The eastern-HFS (E-HFS) and western-HFS (W-HFS) regions of the IC~5146 Streamer observed with JCMT/POL-2. The hub regions (E- and W-hubs) are indicated with white dashed circles of 3$^{\prime}$ diameter on the 850~$\mu$m images of the E- and W-HFSs in the left and right panels. The white circles at the top right corners of the left and right panels show the POL-2 850~$\mu$m beam size of 14.1~arcsec. The white solid curves are the guide lines of the velocity-coherent filament F4, and the dashed curves are those of filaments having velocities different from that of F4. The red lines in the left and right panels are drawn to show the curved magnetic fields in the hubs. {\it Left}: The magnetic field orientation map of the E-HFS of IC~5146 taken from W2019. {\it Middle}: The schematic figure of the edge-driven collapse and fragmentation scenario from W2019 and the $\ceo~(1-0)$ peak velocity map of the Streamer from the TRAO FUNS \citep{chung2021}. The velocity-coherent filament F4, where the E- and W-hubs are located, is outlined in red. The filaments and clumps with velocities different from that of F4 overlap along the line of sight; their velocity fields are drawn as images in the background and their outlines as thin solid lines in the foreground of F4. The two navy circles indicate the E- and W-hub regions observed with JCMT/POL-2. The smaller dashed circles represent the inner 3$^{\prime}$ region with the best sensitivity. {\it Right}: The magnetic field orientation map of the W-HFS (this study).
\label{fig:ewhubs}} \end{sidewaysfigure*} The E-hub of IC~5146 has been investigated by W2019 as a part of the BISTRO survey \citep{wardthompson2017}. They suggested the edge-driven collapse and fragmentation scenario for the formation of the two hubs in the Streamer, based on the Streamer's aspect ratio being larger than 5, for which edge-driven collapse happens efficiently, and on the curved magnetic field shape in the hub, possibly caused by gravitational contraction. According to the edge-driven collapse and fragmentation scenario, the magnetic fields of the E- and W-hubs should have `)' and `(' shapes, respectively, as presented in the middle top panel of Figure~\ref{fig:ewhubs}. With this in mind, we examined the magnetic field orientations in the E- and W-hubs. As shown in Figure~\ref{fig:ewhubs}, the magnetic fields in the E-hub tend to be well ordered, i.e., perpendicular to the large-scale filament guided with white lines. The shape of the bending magnetic field in the E-hub agrees with the edge-driven collapse and fragmentation scenario. In the W-hub, however, the observed shape of the magnetic field near C3 does not exactly match the expectation of the scenario. The magnetic field in the W-hub is curved in the vicinity of C3, but it has a pinched shape, guided by the red lines in the right panel of Figure~\ref{fig:ewhubs}, and is almost parallel to the direction of the filament rather than forming a `(' shape perpendicular to the filament direction. The magnetic fields in the W-hub change their directions, and they are also connected to the elongated filamentary structures, a feature that can be seen in the E-hub, too. In Section~\ref{sss:bmor}, we proposed two possibilities: gravitational contraction and the gas motions of accretion flows. Moreover, the accretion rates from the filaments to the cores at the E-hub are revealed to be similar to those at the W-hub \citep{chung2021}.
Hence, the curved magnetic fields in the E- and W-hubs may be the result of modification by the accretion flows in addition to the gravitational contraction proposed by W2019. As a whole, the magnetic fields in the W-hub are more complex than those in the E-hub. This is seemingly related to the more complicated structure of the W-hub. The E-hub has a major clump at the center (Clump Number~47, E-47 hereafter, in the top panel of Figure~\ref{fig:ewhubs}) with minor cores in its surroundings (Clump Numbers~46, 48, 52, etc.). However, the W-hub has three cores (C2, C3, and C4), of which C2 and C3 have similar masses, implying that the magnetic fields can be locally modified by the gravitational contraction among the cores in the W-hub. We estimated the magnetic field strength of E-47 using the angular dispersion of 17.4~degrees and the velocity dispersion of 0.37~$\kms$ given in W2019, and the H$_{2}$ number density re-estimated with the mass and size given in \citet{johnstone2017} using the distance of 600~pc. The magnetic field strength of the E-hub is measured with the same $\delta \phi$ and $\Delta v$ as those of E-47, but with $\bar n_{\rm H_{2}}$ calculated from the mass and size of the E-hub given in Table~\ref{tab:ppcore}. The mass-to-flux ratio, Alfv\'enic Mach number, $E_{\rm G}$, $E_{\rm K}$, and $E_{\rm B}$ of E-47 and the E-hub are also measured, and the results are tabulated in Table~\ref{tab:bfield_hf}. The magnetic field strength of E-47 is $500 \pm 100$~$\mu$G, which is consistent with that of W2019. The mass-to-magnetic flux ratio is $2.2 \pm 0.7$, and the Alfv\'enic Mach number is $0.5 \pm 0.2$. W2019 reported that gravity and magnetic field are currently of comparable importance in the E-hub and that turbulence is less important. The mass-to-magnetic flux ratio recalculated in this study shows that E-47 is magnetically supercritical and sub-Alfv\'enic.
\\ \subsection{Energy balance and fragmentation types of hubs and filaments into cores} \label{ssec:ebfrac} This section discusses the fragmentation types of the HFSs of IC 5146 into cores with the help of the energy budget. The cores are likely the fragmentation products of their natal hub and filament, and their current distributions could be determined by the fragmentation type set by the balance of gravitational, magnetic, and turbulent energies in the hub and filament. At the same time, the cores may (or may not) fragment into smaller substructures and form protostars in the future, and their present energy balance probably holds the key to their future evolution. Here we discuss the past fragmentation from filament/hub to cores. The possible future formation of protostars from the cores is discussed in the next section in the context of the evolution of HFSs (\S~\ref{ssec:evol}). Recently, \citet{tang2019} proposed that the differences in the relative importance of gravitational, magnetic, and turbulent energies can determine the fragmentation types of molecular clumps. They suggested three types of fragmentation: $clustered$ fragmentation, where the magnetic field is not so dominant that turbulence can produce scattered cores which then collapse; $aligned$ fragmentation, in which the magnetic field dominates turbulence and matter mainly collapses along the field lines; and $no$ fragmentation, where gravity dominates both the magnetic field and turbulence. The N-filament of the W-HFS has three cores: C1, another to the north of C1 (N-C1), and a third to the west of C1 (W-C1). The N-filament appears to be a case of $aligned$ fragmentation. The E- and W-hubs of IC~5146 both host multiple cores, but they show slightly different fragmentation features. As shown in Figure~\ref{fig:ewhubs}, the E-hub has a dominantly large and massive core, E-47, at its center and several small cores around it.
The mass of E-47 is about 10 to 100 times larger than those of the other minor cores in the E-hub. More interestingly, the minor cores around E-47 are located at the points where the E-hub overlaps with a filament having a different velocity from that of the E-hub. In contrast to the E-hub, the W-hub has three cores but no dominant one; rather, two of the three cores (C2 and C3) have similar sizes and masses. Hence, the E-hub and W-hub appear to be close to the $no$ and $clustered$ fragmentation types, respectively. The gravitational, kinematic, and magnetic energies are given in Table~\ref{tab:bfield_hf}. This table shows that the E-hub has larger gravitational, kinematic, and magnetic energies than the N-filament and W-hub, mainly because of its larger mass. Figure~\ref{fig:energies1} shows the relative distribution of $E_{\rm G}$, $E_{\rm K}$, and $E_{\rm B}$ for the N-filament, W-hub, and E-hub. The N-filament has its largest portion, 64\%, in $E_{\rm B}$, while the W- and E-hubs have their largest portions, 57\% and 76\%, in $E_{\rm G}$. In the case of the N-filament, its energy budget and fragmentation type agree with the proposal of \citet{tang2019}, in which the B-field dominates turbulence, local gravitational collapse happens along the B-field lines, and fragments line up perpendicular to the B-field. In the case of the W- and E-hubs, $E_{\rm G}$ dominates both $E_{\rm K}$ and $E_{\rm B}$. However, the W- and E-hubs show $clustered$ and $no$ fragmentation modes, respectively. As to why the two hubs have different fragmentation shapes even though both are gravity dominated, we note that the W-hub has an $E_{\rm K}$ portion of 18\% while the E-hub has only 11\%. In the proposal of \citet{tang2019}, the $clustered$ fragmentation type requires turbulence of sufficient importance to scatter the material irregularly.
Although $E_{\rm G}$ in the W-hub dominates $E_{\rm K}$ and $E_{\rm B}$, its portion (57\%) is smaller than that of the E-hub (76\%). Moreover, the portion of $E_{\rm K}$ in the W-hub is about twice as large as that in the E-hub. Hence, we presume that the different portions of $E_{\rm K}$ in the W- and E-hubs give them different fragmentation types. The value of $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}|$ indicates whether interstellar clouds and/or cores are contracting ($< 1$) or expanding ($> 1$). The N-filament, W-hub, and E-hub have $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}| = 18 \pm 11$, 1.0$\pm$0.6, and 0.5$\pm$0.3, respectively. These values represent the current fragmentation shapes of the filament and hubs well: the N-filament, with $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}| > 1$, has dispersed and formed aligned cores, while the E-hub, with $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}| < 1$, has contracted and formed one massive core. The W-hub, with $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}| \sim 1$, lies on the border between contraction and dispersion and has clustered cores of similar masses. \\ \subsection{Evolution of HFSs in the Streamer} \label{ssec:evol} A hub-filament structure is recognized as a birthplace of stars, especially high-mass stars and stellar clusters. It has been suggested that an HFS forms by layer fragmentation, in which fragmentation occurs in gas layers threaded by the magnetic field \citep[e.g.,][]{myers2009}. W2019 proposed a scenario for core-scale HFS formation in the Streamer of IC~5146, according to which the E- and W-hubs form first by edge-driven fragmentation in a long filament with a strong magnetic field, and further fragmentation then occurs in the dense hubs, modifying the local magnetic field morphology. However, the magnetic field orientations of the W-hub are found to bend in a slightly different direction from that expected by the scenario (see \S~\ref{ssec:compew}).
Meanwhile, cloud-cloud collision has also been suggested as a possible HFS formation mechanism \citep[e.g.,][]{kumar2020}, and \citet{chung2021} suggested that the different nonthermal velocity dispersions between the hubs and the dense cores in them are evidence that the E- and W-HFSs formed by the collision of turbulent flows. In this section, we propose a formation and evolution scenario for the E- and W-HFSs in the Streamer based on the findings of our previous and present studies. As shown in Figure~\ref{fig:hubevol}, the scenario consists of a turbulence-driven stage followed by a fragmentation stage, each characterized by the relative importance of gravity, magnetic field, and turbulence. At the turbulence-driven stage, hubs are formed by the collision of turbulent flows. In the colliding model, filaments form by the collision of turbulent flows and cores then form at the turbulence-dissipated stagnation points \citep{ballesteros1999,padoan2002}. \citet{chung2021} showed that the E- and W-hubs in IC~5146 are supersonic while the cores forming in the two hubs are transonic, which is consistent with the expectation of the colliding model. At the fragmentation stage, the different balances of gravity, magnetic field, and turbulence cause the hubs to fragment differently. As discussed in the previous section, the E-hub may have such dominant $E_{\rm G}$ and low $E_{\rm K}$ that it evolves into the $no$ fragmentation mode. The small cores around E-47 may have formed at the very early stage of the HFS, when the turbulent flows collided, because their positions coincide with the regions where the E-hub and filaments overlap (see the clumps and filaments in Figure~\ref{fig:ewhubs}). Hence, at the fragmentation stage, the E-hub has not fragmented further but has instead formed a single massive core, E-47. In the W-hub, turbulence dissipates slowly and remains important enough to disturb the region and produce a number of irregular cores.
The turbulence of the N-filament may have dissipated quickly, placing the filament in the $aligned$ fragmentation mode because of its dominant magnetic field energy. We can anticipate how the cores will evolve from their energy balances and $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}|$ values. The gravitational, kinematic, and magnetic field energies of the cores are tabulated in Table~\ref{tab:bfield_hf}, and the relative portions of the energies are presented in Figure~\ref{fig:energies2}. In every core, the dominant energy is found to be consistent with that of its natal filament or hub, although the relative portions differ slightly from those of the filament or hub. In E-47, $E_{\rm G}$ has the largest portion, as in the E-hub. Moreover, E-47 has $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}| = 0.4 \pm 0.2$, so it will continue to contract, possibly without further fragmentation. C2 has dominant $E_{\rm G}$ as the W-hub does, and C2 and the W-hub show almost the same relative energy portions as well as $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}|$ values (0.9$\pm$0.6 and 1.0$\pm$0.6, respectively), both close to gravitational equilibrium. Hence, C2 is expected to fragment into smaller structures with a $clustered$ distribution like its natal cloud, the W-hub. C3 has $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}|$ slightly larger than 1 (2$\pm$1). Though C3 has its largest portion, 43\%, in $E_{\rm G}$, it is hard to say that $E_{\rm G}$ significantly dominates $E_{\rm B}$, whose portion is 39\%. C1 has dominant $E_{\rm B}$ (64\%), as the N-filament does. In fact, the smallest $\delta \phi$ of C1 given in Table~\ref{tab:bfield_hf} already indicated that $E_{\rm B}$ is dominant in C1. The $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}|$ value of C1 is larger than 1 (3$\pm$2), and C1 is thus expected to disperse in the future. According to the energy balance, C1 is expected to form smaller fragments lined up along the filament.
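The contraction/dispersion diagnostic used throughout this section is simple to tabulate. A short Python sketch using the quoted energy percentages; the $E_{\rm B}$ shares are inferred here as the remainder to 100\%, which is an assumption where the text does not quote them explicitly:

```python
def virial_ratio(e_g, e_k, e_b):
    """|(2 E_K + E_B) / E_G|: < 1 suggests contraction, > 1 dispersion."""
    return abs((2.0 * e_k + e_b) / e_g)

# Fractional energy budgets in percent (E_G, E_K, E_B); the E_B shares
# are inferred as 100% minus the quoted E_G and E_K portions.
regions = {"E-hub": (76, 11, 13), "W-hub": (57, 18, 25)}
for name, (eg, ek, eb) in regions.items():
    print(f"{name}: |(2E_K + E_B)/E_G| = {virial_ratio(eg, ek, eb):.2f}")
```

The resulting values, about 0.5 for the E-hub and about 1.1 for the W-hub, are consistent with the tabulated $0.5\pm0.3$ and $1.0\pm0.6$.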
\\ \section{Summary} \label{sec:summ} To study the roles of the magnetic field, turbulence, and gravity in the evolution of HFSs, we have carried out polarimetry and $\ceo~(3-2)$ observations with JCMT SCUBA-2/POL-2 and HARP toward the western HFS (W-HFS) of IC~5146. \begin{enumerate} \item We identified a few tens of cores in the 850~$\mu$m emission, and analyzed the four 850~$\mu$m cores in the central $\sim 3^{\prime}$ area of the observation, where the sensitivity is best. Among the four cores, one (C1) lies on a northern filament (N-filament) of the W-HFS and three (C2, C3, and C4) lie in the W-hub. The magnetic field geometry of C1 is perpendicular to the filament axis, while that in the W-hub is quite complex. The plane-of-the-sky magnetic field strengths ($B_{\rm pos}$) were estimated for the N-filament, W-hub, and cores. $B_{\rm pos}$ of C4 was not measured because the number of detected magnetic field vectors in C4 was insufficient. The magnetic field strengths are in the range of $\sim$80 to 1200~$\mu$G. The mass-to-magnetic flux ratios show that C1 is magnetically subcritical and C2 and C3 are marginally supercritical within the uncertainties. The estimated Alfv\'enic Mach numbers show that all the cores are in sub-Alfv\'enic motion and have larger magnetic energy than kinematic energy. \item We investigated the magnetic field morphologies of the W-hub and eastern hub (E-hub) to probe the edge-driven collapse and fragmentation scenario for the HFS formation in the Streamer. The magnetic field geometry of the E-hub agrees with the expectation of the scenario, but that of the W-hub does not. The curved B-fields of both hubs are likely modified by the accretion flows and/or gravitational contraction in the hubs.
The magnetic field strength of E-47, the dominant central clump of the E-hub, was re-estimated using the same method as for the cores in the W-HFS, with the magnetic field angular dispersion and velocity dispersion given in W2019; the result, 500$\pm 100~\mu$G, is consistent with that of W2019. \item Following the suggestion for the fragmentation types of HFSs by \citet{tang2019}, we discussed the fragmentation scenario in the E-HFS and W-HFS using the relative importance of the gravitational, kinematic, and magnetic energies ($E_{\rm G}$, $E_{\rm K}$, and $E_{\rm B}$) of the N-filament and W-hub as well as of the E-hub. Based on the distribution of cores in them, we classified the N-filament of the W-HFS as $aligned$, the W-hub as $clustered$, and the E-hub as $no$ fragmentation. The relative portions of $E_{\rm G}$, $E_{\rm K}$, and $E_{\rm B}$ of the filament and hubs were examined: the N-filament has dominant $E_{\rm B}$ (64\%) and the E-hub has dominant $E_{\rm G}$ (76\%), matching the suggestion by \citet{tang2019} well. The W-hub is dominated by $E_{\rm G}$ (57\%), but it has a relatively larger portion of $E_{\rm K}$ ($\sim$20\%) than the E-hub ($\sim$10\%). We argue that this slightly higher portion of kinematic energy might cause the $clustered$ fragmentation in the W-hub. \item We propose an evolutionary scenario for the E- and W-HFSs in the dark Streamer of IC~5146. From the turbulence properties of the HFSs and the cores in them \citep{chung2021}, both the E- and W-HFSs formed first by the collision of turbulent flows. In the E-hub, turbulence dissipated quickly, leading to $no$ fragmentation due to the dominant $E_{\rm G}$ and low $E_{\rm K}$. The N-filament of the W-HFS, which has dominant $E_{\rm B}$, fragmented into $aligned$ cores.
The current energy balance of the W-hub indicates that turbulence dissipation in the region has progressed slowly, and turbulence has acted to disturb the material, driving the $clustered$ fragmentation of the W-hub into C2, C3, and C4. From the current energy balances and the values of $|(2E_{\rm K}+E_{\rm B})/E_{\rm G}|$ of the cores, we expect that E-47 may continue contracting without any further fragmentation, while C1 and C2 may fragment into $aligned$ and $clustered$ substructures, respectively. \\ \end{enumerate} The results support the proposal of \citet{tang2019} that subtly different balances of gravitational, kinematic, and magnetic energy can cause different types of fragmentation in a cloud. We tentatively propose that, in $E_{\rm G}$-dominant clouds, an $E_{\rm K}$ portion of $\sim$20\% can scatter the material and cause $clustered$ fragmentation, while an $E_{\rm K}$ portion of $\sim$10\% cannot disturb the material, leaving the cloud with $no$ fragmentation. To refine these energy-balance criteria, we will carry out further investigations toward various types of clouds, including multi-scale observations. \\ \acknowledgments The authors would like to acknowledge the anonymous referee for valuable comments that improved the quality of the paper. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2019R1I1A1A01042480) and the National R \& D Program through the National Research Foundation of Korea Grants funded by the Korean Government (NRF-2016R1A5A1013277 and NRF-2016R1D1A1B02015014). E.J.C. is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2022R1I1A1A01053862). C.W.L.
is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2019R1A2C1010851), and by the Korea Astronomy and Space Science Institute grant funded by the Korea government (MSIT) (Project No. 2022-1-840-05). W.K. was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2021R1F1A1061794). H.Y. was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A6A3A01087238). \makeatletter \renewcommand\@biblabel[1]{} \makeatother
Title: Dark matter effects on tidal deformabilities and moment of inertia in a hadronic model with short-range correlations
Abstract: In this work we study the outcomes related to dimensionless tidal deformability $(\Lambda)$ obtained through a relativistic mean-field (RMF) hadronic model including short-range correlations (SRC) and dark matter (DM) content [Phys. Rev. D 105, 023008 (2022)]. As a dark particle candidate, we use the lightest neutralino interacting with nucleons through the Higgs boson exchange. In particular, we test the model against the constraints regarding the observation of gravitational waves from the binary neutron star merger GW170817 event provided by LIGO and Virgo collaboration (LVC). We show that $\Lambda$ decreases as the dark particle Fermi momentum ($k_F^{DM}$) increases. This feature favors the RMF-SRC-DM model used here to satisfy the limits of $\Lambda_{1.4}=190^{+390}_{-120}$ ($\Lambda$ of a $1.4M_\odot$ neutron star), and $\tilde{\Lambda}=300^{+420}_{-230}$ given by the LVC. We also show that as $k_F^{DM}$ increases, $\Lambda_1$ and $\Lambda_2$, namely, tidal deformabilities of the binary system, are also moved to the direction of the GW170817 observational data. Finally, we verify that the inclusion of DM in the system does not destroy the \mbox{$I$-Love} relation (correlation between $\Lambda$ and dimensionless moment of inertia, $\bar{I}$). The observation data for $\bar{I}_\star\equiv\bar{I}(M_\star)=11.10^{+3.68}_{-2.28}$, with $M_\star=1.338M_\odot$, is attained by the RMF-SRC-DM model.
https://export.arxiv.org/pdf/2208.06067
\title{Dark matter effects on tidal deformabilities and moment of inertia in a hadronic model with short-range correlations} \author{O. Louren\c{c}o, C. H. Lenzi, T. Frederico, and M. Dutra} \affiliation{Departamento de F\'isica, Instituto Tecnol\'ogico de Aeron\'autica, DCTA, 12228-900, S\~ao Jos\'e dos Campos, SP, Brazil} \date{\today} \section{Introduction} The study of strongly interacting matter in the high-density regime, i.e., in regions whose densities reach several times the nuclear matter saturation density ($\rho_0$), can be performed through the analysis of astrophysical compact objects such as neutron stars (NSs). In recent years, a huge amount of data related to these systems has been provided by the LIGO/Virgo Collaboration (LVC) since their first detection of gravitational waves (GW), a phenomenon predicted by Albert Einstein in 1916 after the formulation of General Relativity~\cite{einstein1,einstein2}. LVC published in Ref.~\cite{bholes1} their results regarding the GW produced by two colliding black holes, detected in 2015 in an event called GW150914. In 2017 the event GW170817, also detected by LVC~\cite{ligo17}, was confirmed to have produced GW from the merger of two NSs in a binary system. This latter event gave rise to constraints on the tidal deformabilities of each companion star, an effect analogous to the tides observed on our planet due to the presence of the Moon. Neutron stars can also be used as a source for the study of dark matter (DM)~\cite{dmrev,zwicky,oort,lensing}. Although the interaction between dark particles and luminous matter is extremely weak (otherwise DM would be easily detected), the gravitational force can bind this exotic matter to the ordinary matter present in these massive astrophysical systems.
In that direction, many investigations have been performed in which DM is coupled to hadronic relativistic mean-field (RMF) models; see Refs.~\cite{rmfdm1,rmfdm2,rmfdm3,rmfdm4,rmfdm5,rmfdm6,rmfdm7,rmfdm8,rmfdm9,rmfdm10,rmfdm11,rmfdm12} for instance. In most of these studies, the lightest neutralino, belonging to the class of weakly interacting massive particles (WIMPs)~\cite{cand1,cand2}, is used as the dark particle candidate, but there are other candidates, namely, gravitinos, axinos, axions, sterile neutrinos, WIMPzillas, supersymmetric Q-balls, and mirror matter~\cite{cand1,cand2}. In Ref.~\cite{dmnosso} the lightest neutralino was implemented as the dark particle in an RMF model with short-range correlations (SRC)~\cite{nature,hen2017,Duer2019,rev3,cai,baoanlicai} included. This kind of correlation is observed in electron-induced quasielastic proton knockout experiments, in which nonindependent nucleons are found to correlate with each other in pairs with high relative momentum. Probes of this phenomenon were performed in experiments at the Thomas Jefferson National Accelerator Facility (JLab), where it was found that most of the emerging pairs are deuteron-like~\cite{orhen}, around $90\%$ in measurements of the \mbox{$^{12}$C nucleus}~\cite{subedi2008}, for instance. Based on this \mbox{RMF-SRC} hadronic model, it was shown in Ref.~\cite{dmnosso} that it is possible to describe NSs with DM content presenting masses within the limits given in Ref.~\cite{cromartie}, namely, $M=2.14^{+0.10}_{-0.09}M_{\odot}$ ($68.3\%$ credible level), and simultaneously in agreement with the recent observational data provided by NASA's Neutron star Interior Composition Explorer (NICER) mission, which provided constraints on the mass-radius profiles~\cite{nicer1,nicer2,nicer3,nicer4}.
The ``best'' parametrizations of this \mbox{RMF-SRC-DM} model were constructed by taking into account the variation of the symmetry energy slope in a range compatible with the results reported by the updated lead radius experiment~(PREX-2)~\cite{piekaprex2,prex2}, and also overlapping with the boundaries obtained from the analysis of charged pion spectra~\cite{pions}. The results provided in Ref.~\cite{dmnosso} are also compatible with the more recent data given in Ref.~\cite{cromartie-apj} regarding the most massive neutron star known, namely, $M=2.08^{+0.07}_{-0.07}M_{\odot}$ ($68.3\%$ credibility). In this work, we investigate whether it is also possible to describe the constraints related to the dimensionless tidal deformabilities from the GW170817 event by using the \mbox{RMF-SRC-DM} model of Ref.~\cite{dmnosso}. In particular, we verify that the system with DM content is able to reproduce the limits of $\Lambda_{1.4}=190^{+390}_{-120}$~\cite{ligo18} (dimensionless tidal deformability of a $1.4M_\odot$ NS), the range of $\tilde{\Lambda}=300^{+420}_{-230}$~\cite{ligo19} (a quantity related to the dimensionless deformabilities of the binary system stars, $\Lambda_1$ and $\Lambda_2$), and the $\Lambda_1\times\Lambda_2$ regions. Furthermore, we show that the \mbox{$I$-Love} relation is preserved even in the system with DM. Moreover, we find that the model also satisfies the indirect observational data related to the dimensionless moment of inertia, namely, $\bar{I}(M_\star)=11.10^{+3.68}_{-2.28}$, with $M_\star=1.338M_\odot$. Regarding this last quantity, we remark that the obtained range does not come from a directly measured observable; it is a derived quantity under certain assumptions, as we make clear later on. We organize these findings as follows. In Sec.~\ref{form}, we address the main equations of the \mbox{RMF-SRC-DM} model.
The predictions of the model concerning the GW170817 constraints on the tidal deformabilities and moment of inertia are given in Sec.~\ref{stellar}. Our summary and concluding remarks are presented in Sec.~\ref{summ}. \section{Hadronic model with SRC and DM} \label{form} We start by presenting the model that describes the hadronic matter considered here, defined by its Lagrangian density. It reads \begin{align} &\mathcal{L}_{\mbox{\tiny HAD}} = \overline{\psi}(i\gamma^\mu\partial_\mu - M_{\mbox{\tiny nuc}})\psi + g_\sigma\sigma\overline{\psi}\psi - g_\omega\overline{\psi}\gamma^\mu\omega_\mu\psi \nonumber \\ &- \frac{g_\rho}{2}\overline{\psi}\gamma^\mu\vec{\rho}_\mu\vec{\tau}\psi +\frac{1}{2}(\partial^\mu \sigma \partial_\mu \sigma - m^2_\sigma\sigma^2) - \frac{A}{3}\sigma^3 - \frac{B}{4}\sigma^4 \nonumber\\ &-\frac{1}{4}F^{\mu\nu}F_{\mu\nu} + \frac{1}{2}m^2_\omega\omega_\mu\omega^\mu + \frac{C}{4}(g_\omega^2\omega_\mu\omega^\mu)^2 -\frac{1}{4}\vec{B}^{\mu\nu}\vec{B}_{\mu\nu} \nonumber \\ &+ \frac{1}{2}\alpha'_3g_\omega^2 g_\rho^2\omega_\mu\omega^\mu \vec{\rho}_\mu\vec{\rho}^\mu + \frac{1}{2}m^2_\rho\vec{\rho}_\mu\vec{\rho}^\mu. \label{dlag} \end{align} In this expression $\psi$ represents the nucleon field, and $\sigma$, $\omega^\mu$, and $\vec{\rho}_\mu$ are the scalar, vector, and isovector-vector fields related to the mesons $\sigma$, $\omega$, and $\rho$, respectively, with tensors $F_{\mu\nu}$ and $\vec{B}_{\mu\nu}$ given by $F_{\mu\nu}=\partial_\nu\omega_\mu-\partial_\mu\omega_\nu$ and $\vec{B}_{\mu\nu}=\partial_\nu\vec{\rho}_\mu-\partial_\mu\vec{\rho}_\nu$. The meson masses are $m_\sigma$, $m_\omega$, and $m_\rho$, and $M_{\mbox{\tiny nuc}}$ is the nucleon rest mass. Regarding the inclusion of the dark matter content, we proceed as in Ref.~\cite{dmnosso} and consider a dark fermion (mass $M_\chi$, Dirac field $\chi$) interacting with nucleons through the exchange of the Higgs boson (mass $m_h$, scalar field $h$).
In this perspective, the Lagrangian density of the total system becomes \begin{align} \mathcal{L} &= \overline{\chi}(i\gamma^\mu\partial_\mu - M_\chi)\chi + \xi h\overline{\chi}\chi +\frac{1}{2}(\partial^\mu h \partial_\mu h - m^2_h h^2) \nonumber\\ &+ f\frac{M_{\mbox{\tiny nuc}}}{v}h\overline{\psi}\psi + \mathcal{L}_{\mbox{\tiny HAD}}, \label{dlagtotal} \end{align} with $fM_{\mbox{\tiny nuc}}/v$ being the Higgs-nucleon coupling ($v=246$~GeV is the Higgs vacuum expectation value). The constant $\xi$ is the strength of the Higgs-dark particle coupling. Applying the mean-field approximation to the fields, $\sigma\rightarrow \left<\sigma\right>\equiv\sigma$, $\omega_\mu\rightarrow \left<\omega_\mu\right>\equiv\omega_0$, $ \vec{\rho}_\mu\rightarrow \left<\vec{\rho}_\mu\right>\equiv \bar{\rho}_{0(3)}$, $h\rightarrow \left<h\right>\equiv h$, leads to the following field equations: \begin{align} m^2_\sigma\,\sigma &= g_\sigma\rho_s - A\sigma^2 - B\sigma^3, \\ m_\omega^2\,\omega_0 &= g_\omega\rho - Cg_\omega(g_\omega \omega_0)^3 - \alpha_3'g_\omega^2 g_\rho^2\bar{\rho}_{0(3)}^2\omega_0, \\ m_\rho^2\,\bar{\rho}_{0(3)} &= \frac{g_\rho}{2}\rho_3 -\alpha_3'g_\omega^2 g_\rho^2\bar{\rho}_{0(3)}\omega_0^2, \\ [\gamma^\mu (&i\partial_\mu - g_\omega\omega_0 - g_\rho\bar{\rho}_{0(3)}\tau_3/2) - M^*]\psi = 0, \\ m^2_h\,h &= \xi\rho_s^{\mbox{\tiny DM}} + f\frac{M_{\mbox{\tiny nuc}}}{v}\rho_s, \\ (\gamma^\mu &i\partial_\mu - M_\chi^*)\chi = 0, \end{align} with $\tau_3=1$ for protons and $\tau_3=-1$ for neutrons. The effective nucleon and dark particle masses are $M^* = M_{\mbox{\tiny nuc}} - g_\sigma\sigma - f\frac{M_{\mbox{\tiny nuc}}}{v}h$ and $M^*_\chi = M_\chi - \xi h$, respectively. Here we use $\xi=0.01$ and $M_{\chi}=200$~GeV (lightest neutralino). Concerning $f$, we use the central value obtained in Ref.~\cite{cline}, namely, $f=0.3$.
Such a combination of values gives a spin-independent scattering cross-section of around $10^{-47}$~cm$^2$~\cite{rmfdm4}, compatible with experimental data from the PandaX-II~\cite{pandaxII}, LUX~\cite{lux}, and DarkSide~\cite{darkside} collaborations. Furthermore, the densities are given by $\rho_s=\left<\overline{\psi}\psi\right>={\rho_s}_p+{\rho_s}_n$, $\rho=\left<\overline{\psi}\gamma^0\psi\right> = \rho_p + \rho_n$, $\rho_3=\left<\overline{\psi}\gamma^0{\tau}_3\psi\right> = \rho_p - \rho_n=(2y_p-1)\rho$, and $\rho_s^{\mbox{\tiny DM}} = \left<\overline{\chi}\chi\right>$, where \begin{eqnarray} \rho_s^{\mbox{\tiny DM}} &=& \frac{\gamma M^*_\chi}{2\pi^2}\int_0^{k_F^{\mbox{\tiny DM}}} \hspace{-0.5cm}\frac{k^2dk}{(k^2+M^{*2}_\chi)^{1/2}}. \end{eqnarray} Here the indices $p$ and $n$ denote protons and neutrons, and $\gamma=2$ is the degeneracy factor. The proton fraction is $y_p=\rho_p/\rho$, with proton/neutron densities $\rho_{p,n}=\gamma{k_F^3}_{p,n}/(6\pi^2)$. ${k_F}_{p,n}$ and $k_F^{\mbox{\tiny DM}}$ are the Fermi momenta of the protons/neutrons and of the dark particle, respectively. The thermodynamics of the entire system composed of hadrons and dark matter is determined from the energy density and the pressure, both obtained through the energy-momentum tensor $T^{\mu\nu}$ as $\mathcal{E}=\left<T_{00}\right>$ and $P=\left<T_{ii}\right>/3$.
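The dark-sector scalar density above has a closed form, which is convenient for checking a numerical implementation. A minimal sketch in Python; the Fermi momentum value is illustrative:

```python
import math

def rho_s_dm_closed(kf, m_eff, gamma=2.0):
    """Closed form of rho_s^DM = gamma M*/(2 pi^2) Int_0^kF k^2 dk / E(k),
    with E(k) = sqrt(k^2 + M*^2)."""
    ef = math.sqrt(kf * kf + m_eff * m_eff)
    integral = 0.5 * (kf * ef - m_eff * m_eff * math.asinh(kf / m_eff))
    return gamma * m_eff / (2.0 * math.pi ** 2) * integral

def rho_s_dm_numeric(kf, m_eff, gamma=2.0, n=20000):
    """Same integral evaluated with the midpoint rule, as a cross-check."""
    h = kf / n
    s = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        s += k * k / math.sqrt(k * k + m_eff * m_eff)
    return gamma * m_eff / (2.0 * math.pi ** 2) * s * h

# Illustrative values (GeV units): M*_chi near 200 GeV, small Fermi momentum.
print(rho_s_dm_closed(0.06, 200.0))
```

For $k_F^{\rm DM} \ll M^*_\chi$ the density approaches the nonrelativistic limit $\gamma k_F^3/(6\pi^2)$, a useful sanity check on either routine.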
In our case such quantities are given by \begin{align} &\mathcal{E} = \frac{m_{\sigma}^{2} \sigma^{2}}{2} +\frac{A\sigma^{3}}{3} +\frac{B\sigma^{4}}{4} -\frac{m_{\omega}^{2} \omega_{0}^{2}}{2} - \frac{Cg_{\omega}^4\omega_{0}^4}{4} - \frac{m_{\rho}^{2} \bar{\rho}_{0(3)}^{2}}{2} \nonumber\\ &+ g_{\omega} \omega_{0} \rho + \frac{g_{\rho}}{2} \bar{\rho}_{0(3)} \rho_{3} -\frac{1}{2} \alpha'_3 g_{\omega}^{2} g_{\rho}^{2} \omega_{0}^{2} \bar{\rho}_{0(3)}^{2} + \mathcal{E}_{\mathrm{kin}}^{p} + \mathcal{E}_{\mathrm{kin}}^{n} \nonumber\\ &+ \frac{m_h^2h^2}{2} + \mathcal{E}_{\mathrm{kin}}^{\mbox{\tiny DM}}, \label{eden} \end{align} and \begin{align} &P = -\frac{m_{\sigma}^{2} \sigma^{2}}{2} - \frac{A\sigma^{3}}{3} - \frac{B\sigma^{4}}{4} + \frac{m_{\omega}^{2} \omega_{0}^{2}}{2} + \frac{Cg_{\omega}^4\omega_0^4}{4} \nonumber\\ &+ \frac{m_{\rho}^{2} \bar{\rho}_{0(3)}^{2}}{2} + \frac{1}{2} \alpha'_3 g_{\omega}^{2} g_{\rho}^{2} \omega_{0}^{2} \bar{\rho}_{0(3)}^{2} + P_{\mathrm{kin}}^{p} + P_{\mathrm{kin}}^{n} - \frac{m_h^2h^2}{2} \nonumber\\ &+ P_{\mathrm{kin}}^{\mbox{\tiny DM}}, \label{press} \end{align} with the kinetic terms of the dark particle written as \begin{eqnarray} \mathcal{E}_{\mbox{\tiny kin}}^{\mbox{\tiny DM}} &=& \frac{\gamma}{2\pi^2}\int_0^{k_F^{\mbox{\tiny DM}}}\hspace{-0.3cm}k^2(k^2+M^{*2}_\chi)^{1/2}dk, \label{ekindm} \\ P_{\mbox{\tiny kin}}^{\mbox{\tiny DM}} &=& \frac{\gamma}{6\pi^2}\int_0^{{k_F^{\mbox{\tiny DM}}}}\hspace{-0.5cm}\frac{k^4dk}{(k^2+M^{*2}_\chi)^{1/2}}. 
\label{pkindm} \end{eqnarray} On the hadronic side of the system, the implementation of the SRC implies the replacement of the usual step functions in the kinetic terms by a momentum distribution including a high-momentum tail~\cite{cai,lucas}, namely, $n_{n,p}(k) = \Delta_{n,p}$ for $0<k<k_{F\,{n,p}}$ and $n_{n,p}(k) = C_{n,p}\,(k_{F\,{n,p}}/k)^4$ for $k_{F\,{n,p}}<k<\phi_{n,p} \,k_{F\,{n,p}}$, in which $\Delta_{n,p}=1 - 3C_{n,p}(1-1/\phi_{n,p})$, $C_p=C_0[1 - C_1(1-2y_p)]$, $C_n=C_0[1 + C_1(1-2y_p)]$, $\phi_p=\phi_0[1 - \phi_1(1-2y_p)]$, and $\phi_n=\phi_0[1 + \phi_1(1-2y_p)]$. Here we use $C_0=0.161$, $C_1=-0.25$, $\phi_0 = 2.38$, and $\phi_1=-0.56$~\cite{cai,lucas}. This change leads to modified expressions for the kinetic terms, namely, \begin{eqnarray} \mathcal{E}_{\text {kin }}^{n,p} &=& \frac{\gamma \Delta_{n,p}}{2\pi^2} \int_0^{{k_{F\,{n,p}}}} k^2dk({k^{2}+M^{* 2}})^{1/2} \nonumber\\ &+& \frac{\gamma C_{n,p}}{2\pi^2} \int_{k_{F\,{n,p}}}^{\phi_{n,p}\, {k_{F\,{n,p}}}} \frac{{k_F}_{n,p}^4}{k^2}\, dk({k^{2}+M^{* 2}})^{1/2}, \nonumber \\ P_{\text {kin }}^{n,p} &=& \frac{\gamma \Delta_{n,p}}{6\pi^2} \int_0^{k_{F\,{n,p}}} \frac{k^4dk}{\left({k^{2}+M^{*2}}\right)^{1/2}} \nonumber\\ &+& \frac{\gamma C_{n,p}}{6\pi^2} \int_{k_{F\,{n,p}}}^{\phi_{n,p}\, {k_{F\,{n,p}}}} \frac{{k_F}_{n,p}^4dk}{\left({k^{2}+M^{*2}}\right)^{1/2}}, \end{eqnarray} and \begin{align} &{\rho_s}_{n,p} = \frac{\gamma M^*\Delta_{n,p}}{2\pi^2} \int_0^{k_{F\,{n,p}}} \frac{k^2dk}{\left({k^{2}+M^{*2}}\right)^{1/2}} \nonumber\\ &+ \frac{\gamma M^*C_{n,p}}{2\pi^2} \int_{k_{F\,{n,p}}}^{\phi_{n,p}\, {k_{F\,{n,p}}}} \frac{{k_F}_{n,p}^4}{k^2} \frac{dk}{\left({k^{2}+M^{*2}}\right)^{1/2}}. \end{align} This last quantity is the scalar density of protons and neutrons.
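The depletion factor $\Delta_{n,p}$ is defined precisely so that the SRC distribution conserves the baryon density, i.e., $\gamma/(2\pi^2)\int n_{n,p}(k)\,k^2dk = \gamma k_{F\,n,p}^3/(6\pi^2)$. A quick numerical check of this normalization for symmetric matter ($y_p=0.5$, where the $C_1$ and $\phi_1$ corrections vanish):

```python
import math

C0, PHI0 = 0.161, 2.38   # tail parameters at y_p = 0.5
DELTA = 1.0 - 3.0 * C0 * (1.0 - 1.0 / PHI0)

def n_k(k, kf):
    """SRC momentum distribution: depleted step plus (kF/k)^4 tail."""
    if k < kf:
        return DELTA
    if k < PHI0 * kf:
        return C0 * (kf / k) ** 4
    return 0.0

def density(kf, gamma=2.0, n=200000):
    """rho = gamma/(2 pi^2) Int n(k) k^2 dk, via the midpoint rule."""
    h = PHI0 * kf / n
    s = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        s += n_k(k, kf) * k * k
    return gamma / (2.0 * math.pi ** 2) * s * h

kf = 1.3                                         # fm^-1, near saturation
rho_free = 2.0 * kf ** 3 / (6.0 * math.pi ** 2)  # plain step function
print(density(kf), rho_free)  # normalization makes these agree
```

The agreement is exact analytically, since the tail contributes $C_{n,p}k_F^3(1-1/\phi_{n,p})$, which is exactly what $\Delta_{n,p}$ removes from the step.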
\section{Stellar matter: analysis of the GW170817 constraints} \label{stellar} To describe a NS of mass~$M$, one needs to solve the well-known TOV equations~\cite{tov39,tov39a}, given by $dp(r)/dr=-[\epsilon(r) + p(r)][m(r) + 4\pi r^3p(r)]/[r^2g(r)]$ and $dm(r)/dr=4\pi r^2\epsilon(r)$, where $g(r)=1-2m(r)/r$, whose solution is constrained by $p(0)=p_c$ (central pressure) and $m(0) = 0$. The conditions $p(R) = 0$ and $m(R)=M$ are satisfied at the star's surface, with $R$ defining the NS radius. For the equation of state (EoS) of the matter in the NS core we use the hadronic model with SRC and DM content included. For the NS crust we consider two regions, namely, the outer and the inner crust. For the former we use the EoS proposed by Baym, Pethick and Sutherland (BPS)~\cite{bps} in the density region $6.3\times10^{-12}\,\mbox{fm}^{-3}\leqslant\rho_{\mbox{\tiny outer}}\leqslant2.5\times10^{-4}\,\mbox{fm}^ {-3}$. For the latter, we follow previous literature~\cite{poly0,poly1,poly2,gogny2,cc2,gogny1,kubis04} and use a polytropic EoS of the form $p(\epsilon)=\mathcal{A}+\mathcal{B}\epsilon^{4/3}$ from $2.5\times10^{-4}\,\mbox{fm}^ {-3}$ to the transition density. The constants $\mathcal{A}$ and $\mathcal{B}$ are found by matching this polytropic formula to the BPS EoS at the interface between the outer and the inner crust, and to the EoS of the homogeneous core at the core-crust transition determined through the thermodynamical method~\cite{gogny1,cc2,kubis04, gonzalez19}. In systems composed of binary NSs, tidal forces originating from the gravitational field induce tidal deformations in each companion object. The deformations due to the quadrupole moment produce gravitational waves (GW) whose phase depends on the tidal deformability~\cite{tanj10,read,pozzo}.
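The TOV system can be integrated with any standard ODE stepper once an EoS is supplied. A minimal sketch in geometrized units ($G=c=1$, lengths in km) with a toy $p=K\epsilon^2$ polytrope standing in for the RMF-SRC-DM EoS; the polytropic constant and central density are illustrative, not the paper's:

```python
import math

K = 100.0  # toy polytropic constant [km^2] (assumed, not the paper's EoS)

def eps_of_p(p):
    """Invert the toy EoS p = K eps^2 (energy density from pressure)."""
    return math.sqrt(p / K) if p > 0.0 else 0.0

def tov_rhs(r, p, m):
    """Right-hand sides dp/dr, dm/dr of the TOV equations (G = c = 1)."""
    eps = eps_of_p(p)
    g = 1.0 - 2.0 * m / r
    dpdr = -(eps + p) * (m + 4.0 * math.pi * r ** 3 * p) / (r ** 2 * g)
    dmdr = 4.0 * math.pi * r ** 2 * eps
    return dpdr, dmdr

def solve_tov(eps_c, dr=1.0e-3):
    """RK4 integration outward from the center until p(R) -> 0."""
    r, p, m = 1.0e-6, K * eps_c ** 2, 0.0
    while p > 1.0e-14:
        k1p, k1m = tov_rhs(r, p, m)
        k2p, k2m = tov_rhs(r + dr / 2, p + dr * k1p / 2, m + dr * k1m / 2)
        k3p, k3m = tov_rhs(r + dr / 2, p + dr * k2p / 2, m + dr * k2m / 2)
        k4p, k4m = tov_rhs(r + dr, p + dr * k3p, m + dr * k3m)
        p += dr * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
        m += dr * (k1m + 2 * k2m + 2 * k3m + k4m) / 6.0
        r += dr
    return r, m  # radius [km], mass [km]; 1 Msun ~ 1.4766 km

R, M = solve_tov(1.0e-3)  # illustrative central energy density [km^-2]
print(f"R = {R:.1f} km, M = {M / 1.4766:.2f} Msun")
```

In the actual calculation the tabulated EoS (core plus BPS and polytropic crust) replaces the toy polytrope, and the mass-radius relation is obtained by sweeping the central pressure.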
The first GW measurement from a binary NS system, the so-called GW170817 event, is due to the LIGO/Virgo Collaboration~\cite{ligo17}. Based on the study of these new data, the LVC established constraints on the dimensionless tidal deformabilities $\Lambda_1$ and $\Lambda_2$ of each companion star of the binary system, as well as on the tidal deformability related to a star of $M=1.4 M_\odot$ ($\Lambda_{1.4}$). An updated version of these constraints was published in Refs.~\cite{ligo18,ligo19}. Here we test the capability of the hadronic model with SRC and DM included to satisfy the constraints provided by the LVC. In order to do that, we calculate the dimensionless tidal deformability as $\Lambda = 2k_2/(3C^5)$, with $C=M/R$ (compactness). The second Love number is given by \begin{eqnarray} &k_2& = \frac{8C^5}{5}(1-2C)^2[2+2C(y_R-1)-y_R]\nonumber\\ &\times&\Big\{2C [6-3y_R+3C(5y_R-8)] \nonumber\\ &+& 4C^3[13-11y_R+C(3y_R-2) + 2C^2(1+y_R)]\nonumber\\ &+& 3(1-2C)^2[2-y_R+2C(y_R-1)]{\rm ln}(1-2C)\Big\}^{-1},\qquad \label{k2} \end{eqnarray} with $y_R\equiv y(R)$, where $y(r)$ is obtained from the solution of $r(dy/dr) + y^2 + yF(r) + r^2Q(r)=0$, solved as part of a coupled system also containing the TOV equations. The quantities $F(r)$ and $Q(r)$ read \begin{eqnarray} F(r) &=& \frac{1 - 4\pi r^2[\epsilon(r) - p(r)]}{g(r)} , \\ Q(r)&=&\frac{4\pi}{g(r)}\left[5\epsilon(r) + 9p(r) + \frac{\epsilon(r)+p(r)}{v_s^2(r)}- \frac{6}{4\pi r^2}\right] \nonumber\\ &-& 4\left[ \frac{m(r)+4\pi r^3 p(r)}{r^2g(r)} \right]^2, \label{qr} \end{eqnarray} where the squared sound velocity is $v_s^2(r)=\partial p(r)/\partial\epsilon(r)$. Detailed derivations can be found in Refs.~\cite{tanj10,new,hind08,damour,tayl09}. The input for the TOV equations coupled to the equation for $y(r)$ is the total equation of state of a system under charge neutrality and $\beta$-equilibrium.
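Once the coupled TOV and $y(r)$ system has been integrated, evaluating Eq.~(\ref{k2}) and $\Lambda = 2k_2/(3C^5)$ is a direct transcription. A minimal sketch follows; any sample values of $C$ and $y_R$ used with it are hypothetical, chosen only to exercise the formula, not outputs of the paper's model.

```python
import math

def love_number_k2(C, yR):
    """Second Love number k2 as a function of compactness C = M/R and y_R = y(R)."""
    num = (8.0 * C**5 / 5.0) * (1.0 - 2.0 * C) ** 2 \
        * (2.0 + 2.0 * C * (yR - 1.0) - yR)
    den = (
        2.0 * C * (6.0 - 3.0 * yR + 3.0 * C * (5.0 * yR - 8.0))
        + 4.0 * C**3 * (13.0 - 11.0 * yR + C * (3.0 * yR - 2.0)
                        + 2.0 * C**2 * (1.0 + yR))
        + 3.0 * (1.0 - 2.0 * C) ** 2
        * (2.0 - yR + 2.0 * C * (yR - 1.0)) * math.log(1.0 - 2.0 * C)
    )
    return num / den

def dimensionless_tidal_deformability(C, yR):
    """Lambda = 2 * k2 / (3 * C**5)."""
    return 2.0 * love_number_k2(C, yR) / (3.0 * C**5)
```

Note that the denominator is $\mathcal{O}(C^2)$ at small compactness (the leading terms cancel), so the expression should be evaluated in double precision.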
In our case, we consider a system composed of protons, neutrons, electrons, muons and dark matter. The total energy density and pressure are given by $\epsilon=\mathcal{E}+\sum_l\epsilon_l$ and $p=P + \sum_lp_l$, with $\mathcal{E}$ and $P$ given in Eqs.~(\ref{eden}) and~(\ref{press}), respectively. The index $l$ refers to the leptons (electrons and muons). The equations are solved by taking into account the following conditions: $\mu_n - \mu_p = \mu_e=\mu_\mu$ and $\rho_p - \rho_e = \rho_\mu$, where \mbox{$\rho_l=[(\mu_l^2 - m_l^2)^{3/2}]/(3\pi^2)$} for $l=e, \mu$ (we use $m_e=0$ and $m_\mu=105.7$~MeV). The chemical potentials of protons, neutrons, electrons and muons are given, respectively, by $\mu_p$, $\mu_n$, $\mu_e$, and $\mu_\mu$. Electron and muon densities are $\rho_e$ and $\rho_\mu$. In the case of the hadronic model with SRC included, $\mu_p$ and $\mu_n$ are given by \begin{eqnarray} &\mu_{p,n}& = 3 C_{p,n} \left[ \mu^{p,n}_{\mathrm{kin}} - \frac{\left({\phi_{p,n}^2 {k^2_F}_{p,n} + M^{*2}}\right)^{1/2}}{\phi_{p,n}} \right] \nonumber\\ &+& {4}C_{p,n} {k_F}_{p,n} \ln\left[\frac{\phi_{p,n} {k_F}_{p,n} + \left(\phi_{p,n}^2{k_F^2}_{p,n}+M^{*2}\right)^{1/2} }{ {k_F}_{p,n} + \left( {k^2_F}_{p,n} + M^{*2}\right)^{1/2}}\right] \nonumber\\ &+& \Delta_{p,n}\mu^{p,n}_{\mathrm{kin}} + g_{\omega} \omega_{0} \pm \frac{g_\rho}{2}\bar{\rho}_{0_{(3)}}, \end{eqnarray} with $\mu^{p,n}_{\mathrm{kin}}=({k^2_F}_{p,n}+M^{*2})^{1/2}$, where we have used the definition $\mu_{p,n}=\partial\mathcal{E}/\partial\rho_{p,n}$. As in Ref.~\cite{dmnosso}, we use on the hadronic side of the model the updated version of the FSU2R parametrization~\cite{fsu2r-new}, with the following bulk parameters at the saturation density of symmetric nuclear matter: $\rho_0=0.15$~fm$^{-3}$, $B_0=-16.27$~MeV (binding energy), $M^*_0=556.8$~MeV (effective nucleon mass at $\rho_0$), and $K_0=237.7$~MeV (incompressibility at $\rho_0$).
We also use $C=0.004$, $M_{\mbox{\tiny nuc}}=939$~MeV, $m_\sigma=497.479$~MeV, $m_\omega=782.5$~MeV, and $m_\rho=763$~MeV. In Ref.~\cite{dmnosso} the authors also considered uncertainties in $M_0^*$, $K_0$ and $L_0$ (symmetry energy slope at $\rho_0$). It was verified that changes in $L_0$ produce parametrizations whose mass-radius profiles agree with astrophysical observations, such as the boundaries of $M=2.14^{+0.10}_{-0.09}M_{\odot}$~\cite{cromartie}, simultaneously with recent data obtained by the NICER mission~\cite{nicer1,nicer2,nicer3,nicer4}. Here we focus on the variation of this specific isovector quantity. In particular we use~\cite{piekaprex2} \begin{eqnarray} L_0=(106\pm37)~\mbox{MeV}, \label{sloperange} \end{eqnarray} a range compatible with the updated results provided by the \mbox{PREX-2} collaboration concerning neutron skin thickness measurements of $^{208}\rm Pb$~\cite{prex2}, and also overlapping with the limits determined from the analysis of charged-pion spectra~\cite{pions}. For each value of $L_0$ chosen in this variation, we fix the value of the symmetry energy at $\rho=2\rho_0/3$ to $\tilde{J}=25.68$~MeV (FSU2R parametrization). This value is consistent with the findings presented in Refs.~\cite{piekaprex2,pieka2001}. Through this procedure, we impose on the hadronic part of the model the linear correlation between $L_0$ and the symmetry energy at the saturation density, $J$. This particular relationship has been verified in the literature; see for instance Refs.~\cite{drischler,baoanli,bianca,wei}. We start by showing in Fig.~\ref{def} the dimensionless tidal deformability generated by the \mbox{RMF-SRC} model with DM included. In Fig.~\ref{def}{\color{blue}a} we present $\Lambda$ as a function of the NS mass in units of $M_\odot$. Each band represents the set of parametrizations generated by the variation of $L_0$ given in Eq.~(\ref{sloperange}).
The dark matter content is defined by the dark Fermi momentum, taken here as~$0$, $0.02$~GeV and $0.03$~GeV. As shown in Ref.~\cite{dmnosso}, $k_F^{\rm DM}=0$ represents the system without dark matter. It is clear that in this case (gray band) the parametrizations obtained by using Eq.~(\ref{sloperange}) do not satisfy the constraint of $\Lambda_{1.4}=190^{+390}_{-120}$~\cite{ligo18}. However, the inclusion of DM helps the system become compatible with the limit provided by the LVC. In particular, for $k_F^{\rm DM}=0.03$~GeV (blue band) all parametrizations constructed through Eq.~(\ref{sloperange}) lie completely inside the range of $\Lambda_{1.4}$. This value of $k_F^{\rm DM}$ was shown in Ref.~\cite{dmnosso} to produce NS's in agreement with the recent observational data regarding the mass-radius diagram. Here we confirm that the system with this amount of DM is also consistent with the LVC constraint on $\Lambda_{1.4}$. In Fig.~\ref{def}{\color{blue}b} we show the tidal deformabilities $\Lambda_1$ and $\Lambda_2$ of the binary NS system related to the GW170817 event, with component masses $M_1$, in the range $1.37\leqslant M_1/M_\odot \leqslant 1.60$~\cite{ligo17}, and $M_2<M_1$. The diagonal dotted line corresponds to the $\Lambda_1=\Lambda_2$ case, in which $M_1=M_2$. The mass of the companion star is calculated through the relationship between $M_1$, $M_2$ and the chirp mass $\mathcal{M} = (M_1M_2)^{3/5}/(M_1+M_2)^{1/5}=1.188M_\odot$~\cite{ligo17}, i.e., $1.17 \leqslant M_2/M_\odot \leqslant 1.36$~\cite{ligo17,ligo18}. The upper and lower orange lines in the figure correspond to the 90\% and 50\% confidence limits, respectively, also obtained from the GW170817 event~\cite{ligo18}. From this figure, we also verify that the inclusion of DM in the system moves the bands in the direction of satisfying the LVC constraints.
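The quoted companion-mass range follows from inverting the chirp-mass relation at fixed $\mathcal{M}=1.188M_\odot$: for each $M_1$ one solves $\mathcal{M}(M_1,M_2)=1.188M_\odot$ for $M_2\leqslant M_1$. A bisection sketch (the lower bracketing value is an arbitrary choice):

```python
def chirp_mass(m1, m2):
    """Chirp mass (m1*m2)**(3/5) / (m1 + m2)**(1/5), masses in solar masses."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def companion_mass(m1, mc, lo=0.1):
    """Solve chirp_mass(m1, m2) = mc for the companion m2 <= m1 by bisection.

    Assumes mc lies between chirp_mass(m1, lo) and chirp_mass(m1, m1)."""
    hi = m1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # the chirp mass grows monotonically with m2 at fixed m1
        if chirp_mass(m1, mid) > mc:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For $M_1=1.60M_\odot$ this returns $M_2\approx1.17M_\odot$, the lower end of the quoted range.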
Notice that the system with $k_F^{\rm DM}=0.03$~GeV is totally compatible with the 90\% region for all values chosen for $L_0$ in the range of Eq.~(\ref{sloperange}). These general features presented in Fig.~\ref{def} are also observed in Refs.~\cite{rmfdm3,eftdm1}, where other RMF-DM models are used (without SRC), including a chiral effective hadronic model. Therefore, our results point to a particular pattern of RMF models with DM included, at least concerning the tidal deformability. However, it is important to mention that increasing the amount of DM can also enhance~$\Lambda$. This is the case for some models in which a dark matter halo~\cite{nelson,ellis,sagun} is generated. In Ref.~\cite{nelson}, for instance, an increase of $\Lambda$ is verified for a total dark matter TOV mass~($M_{\mbox{\tiny DM}}$) exceeding $10^{-5}M_\odot$. In the analysis performed in Ref.~\cite{sagun}, bosonic self-interacting dark matter is coupled to a hadronic model through a two-fluid formalism (different from that used in this work). It is shown that for DM particle masses smaller than~$\sim 300$~MeV, $\Lambda$ increases with the DM fraction (here we fix the fermionic DM particle mass at $200$~GeV). Finally, in Ref.~\cite{ellis} the authors show that in the case of formation of a dark matter halo, the tidal deformability increases with~$M_{\mbox{\tiny DM}}$. The opposite situation is verified in the case of a neutron star with a DM core. For the neutron stars described in the aforementioned works, a more sophisticated treatment of the dark matter contribution is performed, namely, the DM Fermi momentum is not taken as constant along the star radius. In our work, we implement the simpler case of fixing $k_F^{\mbox{\tiny DM}}$, as also done in Refs.~\cite{rmfdm3,eftdm1}, for instance.
This latter treatment is nevertheless appropriate for the purposes of the present study, namely, the investigation of tidal deformabilities and their relation with the moment of inertia. We also performed an additional analysis taking into account those \mbox{RMF-SRC-DM} parametrizations with a different range for the symmetry energy slope, namely, $40\mbox{ MeV}\leqslant L_0 \leqslant 60\mbox{ MeV}$, a range often predicted by some hadronic models. We verified that these specific parametrizations are also compatible with the LIGO/Virgo predictions presented in Fig.~\ref{def}. In order to identify, from another perspective, the effect on $\Lambda_{1.4}$ of the DM content of the parametrizations generated from Eq.~(\ref{sloperange}), we show in Fig.~\ref{def14} how $\Lambda_{1.4}$ correlates with the isovector quantities $L_0$ and $J$ for different values of the dark particle Fermi momentum. In Fig.~\ref{def14}{\color{blue}a} we see that $\Lambda_{1.4}$ decreases as $k_F^{\rm DM}$ increases, regardless of the value of $L_0$. The same occurs in Fig.~\ref{def14}{\color{blue}b}, now with respect to the symmetry energy at the saturation density. Notice that the dependence of $\Lambda_{1.4}$ on $L_0$ and $J$ reinforces the existence of a linear correlation between these two isovector quantities. Concerning $\Lambda_{1.4}\times L_0$, we remark that this pattern is also observed in the study of Ref.~\cite{lucas}, in which a hadronic model with SRC but without DM was analyzed. However, notice that the inclusion of DM content reduces the increase of $\Lambda_{1.4}$ as a function of $L_0$, since we have $\Delta\Lambda_{1.4}\equiv \Lambda_{1.4}(143)-\Lambda_{1.4}(69)$ given by $272$, $216$ and $138$, respectively, for $k_F^{\rm DM}=0$, $0.02$~GeV and $0.03$~GeV. We also remark that $\Lambda_{1.4}$ being an increasing function of $J$ in our analysis is totally different from the correlation exhibited in Ref.~\cite{lucas}.
In that study, the authors considered independent variations of $J$ and $L_0$ and observed a decrease of $\Lambda_{1.4}$ with increasing $J$. Here, the opposite behavior is verified due to the linear correlation between $L_0$ and $J$. This relationship emerges since we are forcing a crossing point in the density dependence of the symmetry energy. As mentioned above, we impose a value of $25.68$~MeV for the symmetry energy at $\rho\simeq 0.1$~fm$^{-3}$. We address the reader to Ref.~\cite{bianca} for a detailed study concerning crossing points and linear correlations of nuclear matter bulk parameters. Moreover, we emphasize that the relationship between $J$, $L$ and tidal deformabilities has been the subject of investigation in many other works, such as those pointed out in Refs.~\cite{baoanli,baoanli2,baoanli3,malik19,cpc,fanji,sinha,ptep,liu,angli}. A quantity directly related to the tidal deformabilities of a binary NS system is the coefficient $\tilde{\Lambda}$ defined as \begin{eqnarray} {\tilde{\Lambda}} = {16\over{13}}{{(M_{1}+12M_{2})M_{1}^{4}\Lambda_{1} + (M_{2}+12M_{1}) M_{2}^{4}\Lambda_{2}} \over {(M_{1}+M_{2})^{5}}},\qquad \label{tilde} \end{eqnarray} where $\Lambda_{1}$ and $\Lambda_{2}$ are the dimensionless tidal deformabilities of each star. In the final inspiral phase of a colliding binary NS system, periodic gravitational waves are emitted. The phase of these waves can be expressed in terms of a post-Newtonian expansion, yielding a term proportional to $\tilde{\Lambda}$ at the lowest order~\cite{FH}. This result is used to investigate the response of the stellar system to the tidal field, being extracted directly from the observed waveform. In Fig.~\ref{deftilde} we show the plots of $\tilde{\Lambda}\times L_0$ and $\tilde{\Lambda}\times J$ generated through the \mbox{RMF-SRC} model with different DM contents.
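Eq.~(\ref{tilde}) is simple to transcribe, and the equal-mass limit provides a useful consistency check: for $M_1=M_2$ and $\Lambda_1=\Lambda_2=\Lambda$ the prefactors combine to give $\tilde{\Lambda}=\Lambda$ exactly.

```python
def lambda_tilde(m1, m2, lam1, lam2):
    """Binary tidal deformability of Eq. (tilde); masses in solar masses."""
    num = (m1 + 12.0 * m2) * m1**4 * lam1 + (m2 + 12.0 * m1) * m2**4 * lam2
    return (16.0 / 13.0) * num / (m1 + m2) ** 5

# Equal-mass check: (13 m**5 Lam * 2) / (32 m**5) * 16/13 = Lam.
symmetric = lambda_tilde(1.4, 1.4, 300.0, 300.0)
```
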
In this figure, $\tilde{\Lambda}$ is calculated as a function of the mass of one of the stars of the binary system, i.e., $\tilde{\Lambda}=\tilde{\Lambda}(M_1)$ or $\tilde{\Lambda}=\tilde{\Lambda}(M_2)$. As $M_1$, or $M_2$, is defined within a particular range according to the GW170817 event, each parametrization presenting a specific value of $L_0$, or $J$, produces a range for $\tilde{\Lambda}$. We compare the results obtained for the model with $k_F^{\rm DM}=0$, $0.02$~GeV, and $0.03$~GeV with the constraint $\tilde{\Lambda}=300^{+420}_{-230}$ provided by the LVC~\cite{ligo19}. Once again, we notice that the inclusion of DM in the system favors agreement with the observational data from the GW170817 event. The decrease of $\tilde{\Lambda}$ as a function of $k_F^{\rm DM}$ is also observed. Furthermore, similarly to the behavior between $\Lambda_{1.4}$ and $L_0$ depicted in Fig.~\ref{def14}, there is also a strong correlation between $\tilde{\Lambda}$ and $L_0$. The same is true for the relationship between $\tilde{\Lambda}$ and $J$. As a last result, we show in Fig.~\ref{inertia} the dimensionless moment of inertia, $\bar{I}= I/M^3$, calculated from the \mbox{RMF-SRC} model for different values of $k_F^{\rm DM}$. This quantity is determined from the solution of Hartle's slow-rotation equation~\cite{land18,hartle,yagi13}, a differential equation for one of the metric decomposition functions~\cite{yagi13}, $\omega(r)$, coupled to the TOV equations. The moment of inertia is defined in terms of $\omega_R\equiv \omega(R)$, the frame-dragging function evaluated at the star's surface~\cite{land18}, as $I=R^3(1-\omega_R)/2$. The authors of Refs.~\cite{science,yagi13} showed that the relation between $\bar{I}$ and $\Lambda$ is independent of the neutron/quark star structure in the case of slowly rotating stars. In Ref.~\cite{land18} the same result was obtained for a set of~$53$ Skyrme and RMF parametrizations.
In Fig.~\ref{inertia}{\color{blue}a} it is clear that the parametrizations generated by the variation in Eq.~(\ref{sloperange}) are indistinguishable regardless of the value of $k_F^{\rm DM}$. Therefore, we can conclude that the universal relation between $\bar{I}$ and $\Lambda$, called the \mbox{$I$-Love} relation, is preserved even with the inclusion of dark matter in the system. The dashed line in Fig.~\ref{inertia}{\color{blue}a} represents the fitting curve determined in Ref.~\cite{land18}. We see that the model with DM studied here is compatible with this fitting. The authors of Ref.~\cite{land18} also determined a range for $\bar{I}$ related to the primary component of the pulsar \mbox{PSR J0737-3039}, namely, $\bar{I}_\star\equiv\bar{I}(M_\star)=11.10^{+3.68}_{-2.28}$, with $M_\star=1.338M_\odot$. This range was determined by using Skyrme and RMF parametrizations. Initially, a relation between $\Lambda_\star$ ($\Lambda$ related to $M_\star$) and $\Lambda_{1.4}$ (the \mbox{binary-Love} relation) was verified. Then, a fitting of the $\Lambda_\star\times \Lambda_{1.4}$ curve was used together with the \mbox{$I$-Love} relation in order to determine $\bar{I}_\star$ as a function of~$\Lambda_\star$. Lastly, the observational range $\Lambda_{1.4}=190^{+390}_{-120}$ from the LVC was used to establish the limits on $\Lambda_\star$ and, consequently, the range $\bar{I}_\star=11.10^{+3.68}_{-2.28}$. In Fig.~\ref{inertia}{\color{blue}b} we verify that the increase of $k_F^{\rm DM}$ produces a decrease of $\bar{I}$. We also find that the system with $k_F^{\rm DM}=0.03$~GeV is completely inside the limits for the moment of inertia of the pulsar \mbox{PSR J0737-3039A} predicted in Ref.~\cite{land18}. Furthermore, we verify that parametrizations generated by the \mbox{RMF-SRC} model, i.e., with no dark matter, are in agreement with the mass-radius diagrams obtained from chiral effective theory calculations performed in Refs.~\cite{hebeler,kruger,drischler} only for $R\lesssim 14$~km.
On the other hand, the compatibility is fully attained with the inclusion of dark matter content, specifically for $k_F^{\rm DM}=0.03$~GeV. In summary, this specific content of dark matter, implemented in the RMF model with short-range correlations, is compatible with all constraints derived from the GW170817 event concerning tidal deformabilities and the moment of inertia. \section{Summary and concluding remarks} \label{summ} In this work, we investigate the capability of a hadronic relativistic model, with short-range correlations and dark matter content included~\cite{dmnosso}, of reproducing the observational data provided by the LIGO and Virgo Collaboration regarding the binary neutron star system of the GW170817 event, i.e., the one in which gravitational waves emitted from a neutron star merger were detected. We use the lightest neutralino, interacting with nucleons through the exchange of the Higgs boson, as the dark particle. In Ref.~\cite{dmnosso} it was already shown that this model also reproduces the recent observational data obtained by the NICER mission~\cite{nicer1,nicer2,nicer3,nicer4}. We show that the dimensionless tidal deformability~$\Lambda$ decreases as the Fermi momentum of the dark particle increases. In particular, this feature helps the model satisfy the constraints of $\Lambda_{1.4}=190^{+390}_{-120}$ and $\tilde{\Lambda}=300^{+420}_{-230}$. Furthermore, a clear correlation between $\Lambda_{1.4}$ and the symmetry energy slope, $L_0$, and between $\tilde{\Lambda}$ and $L_0$, is verified for different values of $k_F^{\rm DM}$. Specifically, we use the variation of $L_0=(106\pm37)$~MeV~\cite{piekaprex2}, compatible with the updated results provided by the \mbox{PREX-2} collaboration concerning neutron skin thickness measurements of $^{208}\rm Pb$~\cite{prex2}, and also overlapping with the range found from the analysis of charged-pion spectra~\cite{pions}.
We also show that the $\Lambda_1\times\Lambda_2$ curves move toward the GW170817 observational data. Finally, we verify that the \mbox{$I$-Love} relation, namely, the relationship between $\Lambda$ and the dimensionless moment of inertia, $\bar{I}$, is preserved even with the inclusion of dark matter in the system. The constraint of $\bar{I}_\star\equiv\bar{I}(M_\star)=11.10^{+3.68}_{-2.28}$, with $M_\star=1.338M_\odot$, is also satisfied for the system with $k_F^{\rm DM}=0.03$~GeV. \section*{ACKNOWLEDGMENTS} This work is a part of the project INCT-FNA proc. No. 464898/2014-5. It is also supported by Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq) under Grants No. 312410/2020-4 (O.L.), No. 308528/2021-2 (M.D.), and No. 308486/2015-3 (T.F.). We also acknowledge Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP) under Thematic Project 2017/05660-0 and Grant No. 2020/05238-9 (O.L., C.H.L, M.D.).
Title: Accelerating Tests of General Relativity with Gravitational-Wave Signals using Hybrid Sampling
Abstract: The Advanced LIGO/Virgo interferometers have observed $\sim 100$ gravitational-wave transients enabling new questions to be answered about relativity, astrophysics, and cosmology. However, many of our current procedures for computing these constraints will not scale well with the increased size of future transient catalogs. We introduce a novel hybrid sampling method in order to more efficiently perform parameterized tests of general relativity with gravitational-wave signals. Applying our method to the binary black hole merger GW150914 and simulated signals we find that our method is approximately an order of magnitude more efficient than the current method with conservative settings for our hybrid analysis. While we have focused on the specific problem of measuring potential deviations from relativity, our method is of much wider applicability to any problem that can be decomposed into a simple and more complex model(s).
https://export.arxiv.org/pdf/2208.12872
\title{Accelerating Tests of General Relativity with Gravitational-Wave Signals using Hybrid Sampling} \author{Noah E.~Wolfe} \email{noah.wolfe@ligo.org} \affiliation{Department of Physics, North Carolina State University, Raleigh NC 27695, USA} \affiliation{Department of Mathematics, North Carolina State University, Raleigh NC 27695, USA} \author{Colm Talbot} \email{colm.talbot@ligo.org} \affiliation{LIGO Laboratory, Massachusetts Institute of Technology, 185 Albany St, Cambridge, MA 02139, USA} \affiliation{Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA} \author{Jacob Golomb} \affiliation{LIGO Laboratory, California Institute of Technology, Pasadena, CA 91125, USA} \date{\today} \section{Introduction} General relativity (GR) is currently our most successful theory of gravity. Previous observations of sources within our solar system, including the Gravity Probe B experiment and time-delay measurements with the Cassini space probe, have placed constraints on deviations from general relativity in the non-dynamical, weak-field regime~\citep{gr-tests_will_2014}. These tests have been replicated and expanded with radio observations of pulsars, which probe similarly slow-motion, but strong-field gravitational physics \citep{pulsar-timing-tgr_ingrid_2003, pulsar-tgr_wex_2014}, from measurements of the orbital decay rate of the first discovered binary pulsar system~\cite{hulse-taylor-tgr_damour_1974} to modern constraints on dipolar gravitational-wave emission constructed with multiple such systems \cite{binary-pulsar-tgr_zhao_2022}. Simultaneously, probes of large-scale cosmological structure, including weak gravitational lensing and the Cosmic Microwave Background, have provided complementary weak-field tests of general relativity across cosmic epochs and length scales \cite{cosmo-tgr-review_ferreira_2019}.
Over the past decade new observations have unlocked the strong-field regime for tests of general relativity, including measurements of two supermassive black hole shadows \citep{EHT_M87_gr-test, EHT_SgrA_metric, EHT_SgrA_shadow} and gravitational waves from stellar-mass compact object mergers observed by Advanced LIGO~\cite{LIGO} and Advanced Virgo~\cite{Virgo}, using both single observations \cite{single-BBH_gr-tests_LIGO_2016, single-NSNS_gr-tests_LIGO_2019} and the burgeoning population of gravitational wave transients \cite{O1_gr-tests_LIGO_2016, GWTC1_gr-tests_LIGO_2019, GWTC2_gr-tests_LIGO_2020, GWTC3_gr-tests_LIGO_2021}. To date, none of these experiments have found significant disagreement with the predictions of general relativity. However, alternative theories of gravity that could emerge in the strong-field regime may be relevant to constructing unified field theories or the understanding of unexplained phenomena like dark energy (e.g. in scalar-tensor theories, among others \cite{scalar-tensor-review}). Modern developments in theoretical physics have generated testable predictions of modifications to gravitational-wave emission from compact binary coalescence under alternative formulations of gravity, both analytically \citep{ppE_Yunes_2009, analytical-scalar-tensor_lang_2014, analytical-scalar-tensor_lang_2015, analytical-scalar-tensor-gw_sennett_2016, ppe-waveforms_tahura_2018, ppE_bonilla_2022, edgb-analytical_shiralilou_2022} and numerically~\citep{bns-scalar-tensor_barausse_2013, bns-scalar-tensor_shibata_2014, dCS_nr_okounkova_2017, dCS-nr-gw150914_okounkova_2020, edgb-nr-gw150914_okounkova_2020, eft-gravity_cayuso_2020, edgb-nr_east_2021, edgb-nr-bbh_east_2021}, further enabling tests of general relativity in the most extreme gravitational environments yet accessible to us. The number of observed mergers will only continue to grow, further enhancing our resolution on potential deviations from general relativity. 
However, this also necessitates that our statistical and computational techniques improve to support larger and more complex analyses. Since the first observation of gravitational waves from a compact binary merger, the ``TIGER'' formalism and related methods have been some of the flagship analyses performed by the LIGO-Virgo-KAGRA scientific collaborations~\citep{Agathos2014,O1_gr-tests_LIGO_2016,single-NSNS_gr-tests_LIGO_2019,GWTC1_gr-tests_LIGO_2019, GWTC2_gr-tests_LIGO_2020, GWTC3_gr-tests_LIGO_2021,Mehta2022,meidam2018}. These methods require performing many independent, but largely identical, analyses: for each potential deviation from relativity, the parameters describing the GR signal must be inferred from scratch. The goal of this work is to improve this analysis procedure with a new method for parameter estimation: hybrid sampling. This method is more computationally efficient, allowing us to scale our analysis as the population of observed mergers grows and to further constrain deviations from the gravitational-wave signals predicted by general relativity. The remainder of the paper is structured as follows. In Section \ref{sec:stat_methods}, we provide relevant background and introduce our hybrid sampling method. After this, we provide a demonstration of our method on a simple toy model in Section~\ref{sec:demonstration}. We then describe our model for observed gravitational waves according to general relativity and the parameterized deviations we consider in Section~\ref{sec:astro_methods}. In Section~\ref{sec:hybrid-sampling_gws}, we apply our hybrid sampling method to simulated and real gravitational-wave signals. Specifically, we demonstrate that our method returns equivalent results to the previous method at a fraction of the computational cost, and we introduce an extension to the previous method. Finally, we provide closing thoughts in Section~\ref{sec:conclusions}.
\section{Methods} \label{sec:stat_methods} \subsection{Bayesian Inference for Gravitational-Wave Transients} We begin with a brief review of Bayesian inference in the context of gravitational-wave astronomy. In Bayesian inference, we wish to infer a set of parameters $\theta$ of a model $M$ given some data $d$; formally, we want to construct the posterior distribution $p(\theta | d, M)$. For example, in this work, we will have a set of parameters that includes properties of binary black hole systems (e.g.~mass and spin), with additional parameters to denote deviations from the predictions of general relativity that we wish to infer from observations of gravitational-wave transients. For additional details, see, e.g.,~\cite{bayesian_inference_gws_thrane_2019}. Bayesian inference allows us to construct the posterior distribution via Bayes' theorem, \begin{equation} \label{eq:bayes} p(\theta | d, M) = \frac{ \mathcal{L}(d|\theta, M) \pi(\theta | M) }{ \mathcal{Z}(d | M) } \end{equation} where $\mathcal{L}(d|\theta, M)$ is the likelihood of observing the data given parameter values, and $\pi(\theta | M)$ is the prior distribution, which encodes our assumptions about the Universe before considering the data. The normalization factor $\mathcal{Z}(d | M)$ is known as the evidence and is the probability of observing the data given the parametric model we choose \begin{equation} \label{eq:evidence} \mathcal{Z}(d | M) \equiv \int d\theta \mathcal{L}(d | \theta, M) \pi(\theta | M). \end{equation} We may suppress the model $M$ in subsequent expressions; however, everything remains conditioned on a model. When analyzing gravitational-wave transients, we assume that the noise in each of our interferometers is a stationary Gaussian process described by a power spectral density $S$ in the frequency domain.
Additionally, our analysis is triggered by matched-filter search pipelines that tell us that a coherent non-Gaussian transient, most likely an astrophysical signal, is present in the data. To model this, we use the Whittle likelihood approximation~\cite{Romano2017} for the residual noise after subtracting the response of each detector to our template $h$ for the astrophysical signal \begin{equation} {\cal L}(d | \theta) = \prod_{i,j} \frac{1}{2\pi S_{ij}} \exp\left( -\frac{4}{T}\frac{|d_{ij} - h(\theta)_{ij}|^2}{S_{ij}} \right). \end{equation} Here, the products run over the interferometers in the network and over the frequencies of the data in each interferometer, which are (generally) assumed to be uncorrelated. We note that our parameters only describe the astrophysical template and the response of the detector; however, it is also possible to construct parameterized models for the power spectrum. The quantity $T$ is the duration of data being analyzed and is the inverse of the frequency resolution. We observe that $p(\theta | d)$ provides a distribution on the entire (multi-dimensional) set of parameters $\theta$. To extract information on specific parameters of interest $\theta_i$, we must ``marginalize'', i.e.~integrate, over the rest of the parameters: \begin{equation} p(\theta_i | d) = \int \left( \prod_{k \neq i} d \theta_{k} \right) p(\theta | d). \end{equation} This integration may be difficult to compute through standard numerical methods, especially in a high-dimensional parameter space. One common method to approximate this integral is Markov chain Monte Carlo (MCMC), wherein a ``walker'' explores the parameter space of $\theta$ under rules such that, given enough iterations, the combined steps along its path form a representative sample of the posterior distribution.
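As a toy illustration of this idea (not the sampler configuration used later in this work), a random-walk Metropolis chain on a two-parameter Gaussian posterior shows how marginalization becomes trivial once samples are available: one simply ignores the columns of the chain corresponding to the nuisance parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_post(theta):
    """Toy 2D log-posterior: two independent unit Gaussians (a stand-in for
    p(theta | d), not the gravitational-wave likelihood)."""
    return -0.5 * float(np.sum(theta**2))

def metropolis(n_steps, step=0.8):
    """Random-walk Metropolis sampler over the toy posterior."""
    theta = np.zeros(2)
    lp = log_post(theta)
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(2)
        lp_prop = log_post(proposal)
        # accept with probability min(1, p(proposal) / p(theta))
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(20000)
# Marginalizing over the second parameter amounts to ignoring that column.
marginal = chain[:, 0]
```
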
Another, more recent method is nested sampling \citep{skilling_2004, skilling_2006}, which instead focuses on estimating the evidence $\mathcal{Z}$, from which the posterior distribution can then be calculated. In this work, we will utilize both of these approaches and, in turn, detail specific implementations of these methods in the following subsections. \subsection{Nested Sampling} \label{sec:dynesty} Nested sampling, as developed in \citep{skilling_2004, skilling_2006}, is an algorithm to estimate the evidence $\mathcal{Z}$ and posterior probability density by climbing up discrete contours on the likelihood surface; it has been widely adopted in astrophysics, including gravitational-wave astronomy~\cite{Veitch2015, Ashton2019, speagle_2020_dynesty, RomeroShaw2020}. We direct interested readers to~\cite{Ashton2022} for a recent review. The core insight of nested sampling is that the high-dimensional integral to compute the evidence ${\cal Z}$ can be approximated as a one-dimensional integral over a quantity known as the ``prior mass'' $X$. The prior mass corresponding to a likelihood value $\lambda$ is the fraction of the prior volume that has a likelihood greater than $\lambda$ \begin{equation} X(\lambda) = \int_{\mathcal{L}(\theta) > \lambda} \pi(\theta) d\theta. \end{equation} If the mapping from $\theta \rightarrow X$ can be found, the evidence (Eq.~\ref{eq:evidence}) can be rewritten as \begin{equation} \mathcal{Z} = \int_0^{1} \mathcal{L}(X) dX. \end{equation} The nested sampling algorithm constructs this mapping numerically by gradually climbing the likelihood surface, and we approximate $\mathcal{Z}$ as a weighted sum of values $\mathcal{L}(X)$, e.g., \begin{equation} \label{eq:nested_z_approx} \mathcal{Z} \approx \sum_{i=1}^{N} w_i \mathcal{L}_i \end{equation} for some number of samples $N$, where each weight $w_i$ is the prior mass associated with the likelihood value $\mathcal{L}_i$.
For this work, we use the implementation of nested sampling in \code{dynesty}~\cite{speagle_2020_dynesty}. Another widely used feature of nested sampling is that the terms in the sum in Eq.~\ref{eq:nested_z_approx}, once normalized, are the posterior weights of the nested samples. We can therefore generate samples from the posterior distribution by weighting the nested samples according to \begin{equation} \label{eq:posterior_weights} p_i = \frac{\mathcal{L}_i w_i}{\mathcal{Z}}. \end{equation} We note that after a sufficient number of iterations, nested sampling produces essentially no additional posterior samples. This is because the algorithm continually climbs the likelihood surface and eventually the reduction in prior volume overcomes the increase in the likelihood, so the posterior weights begin to decrease. This means that the number of posterior samples generated by a nested sampling analysis is completely determined by the shape of the likelihood surface. We note that the values of the ${\cal L}_{i}$ are irrelevant to the nested sampling algorithm; only their order matters~\footnote{Strictly speaking, this is only true if one has an infinite chain of nested samples.}. Therefore, we are free to perform any monotonic operation on the likelihood and can then trivially recompute the evidence and generate samples from the posterior distribution. In this work, we will focus on a specific family of operations that change the effective inverse ``temperature'' $\beta_{T}$ of the posterior distribution~\footnote{The temperature, in this case, is defined in analogy with statistical physics, which historically shares strong links with Monte Carlo analyses.} \begin{align} {\cal{L}} &\rightarrow {\cal L}^{\beta_{T}} \\ p_{i,\beta_{T}} &= \frac{\mathcal{L}^{\beta_{T}}_i w_i}{\mathcal{Z}_{\beta_{T}}}.
\end{align} This ``athermal'' property of nested sampling has been known since the first introduction of the algorithm but has not been widely utilized. In Figure~\ref{fig:temperature_ladder}, we show the posterior weight $p_{i,\beta_{T}}$ as a function of iteration for various temperatures for a simple model described in Section~\ref{sec:demonstration}. We note that for the $\beta_{T} = 1$ case, we recover the usual posterior weights. As we increase the temperature (decreasing $\beta_{T}$), the posterior weights peak earlier in the nested sampling chain. \subsection{Parallel-Tempered Markov Chain Monte Carlo} \label{sec:ptemcee} In contrast with nested sampling, MCMC methods directly explore the posterior and can be run as long as necessary, continually generating additional samples from the posterior distribution. Ensemble MCMC methods build upon traditional single-walker approaches~\citep{metropolis1953, hastings1970} by using an ensemble of walkers that explore the parameter space in parallel, e.g.,~\cite{goodman2010_ensemble}. A key feature of this approach is that we can reduce the number of iterations needed to accurately resolve the posterior distribution, as ensembles of walkers have a far shorter auto-correlation length (the number of steps over which successive samples remain correlated) than single walkers \citep{goodman2010_ensemble, emcee_Foreman-Mackey_2013}. Additionally, at any step after convergence, the state of the ensemble is a representative estimate of the posterior distribution. For a recent review, we direct the readers to~\cite{Hogg2018}. Further, ensemble MCMC methods can be parallel-tempered to explore the posterior distribution at arbitrary temperatures. A parallel-tempered ensemble MCMC method then uses many walkers, in parallel, each exploring a tempered posterior surface $p_{\beta_T}$.
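The tempered reweighting of Eq.~\ref{eq:posterior_weights} amounts to a one-line operation on stored nested samples; a minimal sketch, where `log_l` and `weights` stand for the $\ln {\cal L}_i$ and $w_i$ of a completed nested run:

```python
import numpy as np

def tempered_posterior_weights(log_l, weights, beta_t):
    """Posterior weights p_i at inverse temperature beta_t from nested samples."""
    log_p = beta_t * np.asarray(log_l) + np.log(weights)
    log_p -= np.logaddexp.reduce(log_p)   # normalize by Z_beta in log space
    return np.exp(log_p)

# beta_t = 1 recovers the usual posterior weights;
# beta_t = 0 returns the (normalized) prior-mass weights w_i.
```

Working in log space avoids overflow when the likelihood spans many orders of magnitude, as it does for gravitational-wave signals.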
For higher temperatures (smaller $\beta_{T}$), the ensemble can more easily explore the full prior volume, and by allowing walkers to jump between different temperature ensembles, the convergence time of the $\beta_{T}=1$ ensemble is greatly reduced. In this work, we use the \code{ptemcee} implementation of parallel-tempered ensemble MCMC \cite{emcee_Foreman-Mackey_2013, Vousden_2015_ptemcee}. While MCMC methods may not always be as computationally efficient as nested sampling methods, once they have reached a stationary state they generate additional samples very efficiently; i.e.~we can continually ask our MCMC sampler for more samples, increasing our resolution of the posterior as much as we desire. Therefore, if we can initialize our MCMC ensembles to closely approximate the target distributions, we can achieve very large efficiencies. \subsection{Hybrid Sampling} \label{sec:hybrid-sampling} Motivated by the properties of nested sampling and parallel-tempered ensemble MCMC, we propose a hybrid sampling scheme to explore high-dimensional, degenerate parameter spaces. Our hybrid sampling method is a two-step procedure that uses the exponential compression of the prior mass provided by nested sampling to optimally (or at least efficiently) seed a set of parallel-tempered MCMC ensembles that can be used to generate an arbitrary number of samples from the target distribution. Seeding the MCMC ensembles from the tempered posteriors obtained with nested sampling provides an optimal starting state, as the walkers at every temperature begin from a realization of their target distribution. We go beyond this simple use case and demonstrate that our method can be applied to more complex cases where the MCMC evolution explores an extended parameter space compared to the nested sampling analysis. We define two models $M_{1}$ and $M_{2}$ described by parameter sets $\theta_{1} \subseteq \theta_{2}$ with likelihoods ${\cal L}_{1}$ and ${\cal L}_{2}$.
By construction, $M_{2}$ contains $M_{1}$. We first perform a nested sampling analysis for model $M_{1}$. The posterior distribution for $\theta_{1}$ can then be used to efficiently initialize the MCMC ensembles for $\theta_{1}$. Since $M_{1}$ is contained within $M_{2}$, there must exist some value of the parameters $\theta_{2} \backslash \theta_{1}$ for which $M_{2}$ reduces to $M_{1}$. We, therefore, initialize those parameters to be clustered around the corresponding value for the sub-model $M_{1}$. \section{Hybrid Sampling with a Generalized Gaussian Distribution}\label{sec:demonstration} As a demonstration of our hybrid sampling framework, we use a toy model where the reduced model in the first step is the standard Gaussian distribution characterized by mean $\mu$ and standard deviation $\sigma$, and the complex model in the second step is a generalized Gaussian distribution characterized by mean $\mu$, scale $\alpha$, and shape $\beta$. For comparison, the probability density function of the standard Gaussian is \begin{equation} P(x | \mu, \alpha) = \frac{1}{\sqrt{\pi \alpha^2}} e^{-(x - \mu)^2 / \alpha^2} \end{equation} while that of the generalized Gaussian we employ is \begin{equation} P(x | \mu, \alpha, \beta) = \frac{\beta}{2 \alpha \Gamma(1 / \beta)} e^{-(|x - \mu|/\alpha)^{\beta}} \end{equation} where $\Gamma$ is the Gamma function. For consistency with the generalized model, we have parameterized the Gaussian distribution with the scale $\alpha = \sqrt{2} \sigma$. When the shape parameter $\beta = 1$, we recover the Laplace distribution, while as $\beta \rightarrow \infty$, we recover a tophat distribution on $(\mu - \alpha, \mu + \alpha)$. When $\beta = 2$, we recover the standard Gaussian distribution. As an example of these two distributions, we show the data used in the two examples considered in this section in Figure~\ref{fig:toy-model-data}.
In blue we show samples from the Gaussian distribution, while the orange shows samples from the generalized distribution with $\beta = 8$. Because the first step of hybrid sampling assumes $\beta = 2$, if the data follow a distribution with $\beta \neq 2$, the value of $\alpha$ will be incorrectly estimated in that step. In the remainder of this section, we verify that our hybrid sampling method can recover $\mu$, $\alpha$, and $\beta$ both when we correctly assume that the data follow a Gaussian distribution during the first step of hybrid sampling and when the underlying data do not follow a Gaussian distribution. For our analyses, we use prior distributions as described in Table~\ref{tbl:gaussian_priors}. \begin{center} \begin{table} \begin{tabular}{|c c|} \hline Parameter & Distribution \\ \hline $\mu$ & ${\cal U}(0, 5)$ \\ $\alpha$ & ${\cal U}(0, 10 \sqrt{2})$ \\ $\beta$ & ${\cal U}(0, 10)$ \\ \hline \end{tabular} \caption{ Prior distributions for the parameters of the generalized Gaussian model. We denote a uniform distribution over $[a, b]$ as ${\cal U}(a, b)$. We note that the initial phase of the hybrid sampling fixes $\beta=2$. }\label{tbl:gaussian_priors} \end{table} \end{center} \subsection{Well-Specified Initial Model} \label{sec:toy_well-specified} First, we verify that hybrid sampling can recover model parameters when the data follow a Gaussian distribution with $\mu=3$ and $\alpha=5$. Our data $d$ are $N=10000$ random samples from this distribution. In the first step of hybrid sampling, we use \code{dynesty} to sample in $\{ \mu, \alpha \}$, assuming that the data follow a Gaussian distribution, using 500 live points and generating a posterior distribution we denote $p_1(\mu, \alpha | d)$. Before the second step of hybrid sampling, we prepare initial points $\{ \mu_0, \alpha_0 \}$ for an ensemble of 200 walkers at seven temperatures as described in Section~\ref{sec:ptemcee}.
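The synthetic data and the two likelihood models for this toy problem can be written down directly with SciPy; this sketch assumes \code{scipy.stats.gennorm} shares the $(\mu, \alpha, \beta)$ parameterization above, with $\beta = 2$ and scale $\alpha = \sqrt{2}\sigma$ recovering the normal distribution:

```python
import numpy as np
from scipy.stats import gennorm, norm

rng = np.random.default_rng(0)
mu, alpha = 3.0, 5.0

# data for the well-specified test: a normal with sigma = alpha / sqrt(2)
data = rng.normal(mu, alpha / np.sqrt(2), size=10000)

def log_like_gaussian(d, mu, alpha):
    # first-step model: generalized Gaussian with the shape fixed to beta = 2
    return gennorm.logpdf(d, 2.0, loc=mu, scale=alpha).sum()

def log_like_generalized(d, mu, alpha, beta):
    # second-step model: the shape beta is free
    return gennorm.logpdf(d, beta, loc=mu, scale=alpha).sum()

# beta = 2 reduces the generalized model to the Gaussian one...
assert np.isclose(log_like_gaussian(data, mu, alpha),
                  log_like_generalized(data, mu, alpha, 2.0))
# ...and matches the normal density with sigma = alpha / sqrt(2)
assert np.isclose(gennorm.pdf(0.0, 2.0, scale=alpha),
                  norm.pdf(0.0, scale=alpha / np.sqrt(2)))
```

The misspecified test of the next subsection only changes the data line, drawing from `gennorm.rvs(8.0, loc=mu, scale=alpha, size=10000)` instead.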
We generate initial values of the shape parameter $\beta_0$ by sampling from a Gaussian distribution with standard deviation $0.01$ centered on the value of $\beta$ assumed in the first hybrid step, $\beta = 2$. We then evolve the ensemble using \code{ptemcee} for 128 iterations. For comparison, we also analyze the data under the generalized model with \code{dynesty} directly. In Figure \ref{fig:well-specified_comparison}, we compare the posterior generated by \code{dynesty} alone (blue) to the one generated by hybrid sampling (purple). The dashed black lines show the true values of the three parameters. We see that both methods recover equivalent posterior distributions, indicating that the hybrid sampling method is well converged. \subsection{Misspecified Initial Model} \label{sec:toy_misspecified} Next, we verify that hybrid sampling can recover model parameters when the model likelihood used in the first step has been inappropriately specified for the data. We repeat the first step of hybrid sampling as in the previous section. However, the input data we generate are not Gaussian. Instead, we generate $N = 10000$ random samples from a generalized Gaussian distribution with $\mu = 3$, $\alpha = 5$, and $\beta = 8$, shaped approximately like a tophat function. We perform the same analyses as in Section~\ref{sec:toy_well-specified}. In Figure~\ref{fig:misspecified_comparison}, we show the posterior distributions estimated using our two methods. Once again, we see that both methods recover equivalent results. We are additionally interested in understanding the performance of the hybrid sampling stage. In Figure~\ref{fig:alpha-beta_evolution} we provide two-dimensional snapshots from the evolution. The top and left-hand panels show the one-dimensional marginal distributions for $\beta$ and $\alpha$ at each iteration. The colors are consistent between the panels and darker shades correspond to later iterations of the evolution.
We note that $\alpha$ and $\beta$ are strongly correlated, specifically the variance of the distribution \begin{equation}\label{eq:sigma} \sigma^2 = \alpha^2 \frac{\Gamma\left(\frac{3}{\beta}\right)}{\Gamma\left(\frac{1}{\beta}\right)} \end{equation} is approximately conserved. The orange curve is the curve of constant $\sigma$ that intersects the true values of $\alpha$ and $\beta$. We observe that as the ensemble evolves, it follows this curve of constant $\sigma$, i.e., the evolution proceeds along the direction of the correlation. This suggests that ensemble sampling can readily explore problems where the extended parameter space is strongly correlated with the initial parameter space. In Figure~\ref{fig:misspecified_trace}, we show trace plots for $\mu$, $\alpha$, $\beta$, and $\sigma$ from the second step of hybrid sampling. At each iteration, we show the current state of the $\beta_{T}=1$ ensemble. The color at each iteration matches the colors in Figure~\ref{fig:alpha-beta_evolution}. The dashed lines indicate the true value of each parameter. We see that in this case, the ensemble evolves from the initial state to the correct distribution within $<100$ iterations. \section{Gravitational Wave Source Model} \label{sec:astro_methods} \subsection{Modeling Gravitational Waveforms from Black Hole Mergers} To infer the properties of the source of a gravitational-wave signal, we require a model for the gravitational waveform. A quasi-circular binary black hole (BBH) merger can be described by 15 source parameters, divided between eight ``intrinsic'' parameters (the masses and spins of the component black holes) and seven ``extrinsic'' parameters (the location and orientation of the source with respect to an observer). As these parameters describe the signal predicted by general relativity, we refer to them as ``GR parameters'', later denoted $\theta_{\mathrm{GR}}$.
When modeling the signal from a binary black hole merger, the coalescence is typically broken down into three temporally-distinct regimes: the \textit{inspiral}, an \textit{intermediate} phase and the \textit{merger-ringdown} \cite{GWTC2_gr-tests_LIGO_2020,IMRPhenomPv2-II_khan2016,Agathos2014}. The inspiral begins when the black holes have formed a binary system; however, we typically only model the waveform after the emission frequency has risen above the low-frequency sensitivity limit of our instruments (typically $20$~Hz for current detectors). This regime is typically characterized using the post-Newtonian expansion with higher-order corrections tuned to numerical relativity simulations. During the intermediate regime, the orbital frequency of the binary increases to a point where the post-Newtonian expansion breaks down; the binary then ``plunges'' and the horizons merge. This regime can only be accurately described through numerical methods and is usually modeled using a fit to numerical relativity simulations. Finally, after the merger, the remnant black hole undergoes a ``ringdown'' phase, in which gravitational-wave emission is modeled via quasi-normal modes. This regime is well-described by analytical models and provides strong tests of the ``no-hair'' theorem~\cite{gw150914-ringdown-tgr_isi_2019} and the black hole area law~\cite{Isi2021}. In this work, we use \code{IMRPhenomPv2}, a computationally efficient, phenomenological model of the gravitational waveform~\citep{IMRPhenomPv2-I_husa2016, IMRPhenomPv2-II_khan2016, HannamIMRPhenomP, Bohe2016}. For a given set of BBH source parameters, \code{IMRPhenomPv2} returns a frequency-domain representation of the gravitational wave signal, taking the form \begin{equation} \tilde{h}(f) = A(f) e^{-i \Psi(f)} \end{equation} where $\tilde{h}(f)$ is the gravitational-wave strain as a function of frequency, $A(f)$ is the amplitude, and $\Psi(f)$ is the phase of the signal.
Both $A$ and $\Psi$ depend on the intrinsic and extrinsic parameters of the BBH, although in general, the intrinsic parameters have a larger impact on the phase, while the extrinsic parameters primarily determine the amplitude. In this work, we focus on modifications to the phase $\Psi$ as the first test of our hybrid sampling method for gravitational-wave signals, as current detectors are more sensitive to the phase of the signal~\citep{GWTC3_gr-tests_LIGO_2021}. During the inspiral regime, $\Psi$ is approximated as a modified version of the post-Newtonian expansion: % \begin{align} \Psi_{\mathrm{ins}}(f) &= 2\pi f t_{c} - \phi_{c} -\frac{\pi}{4} + \frac{3}{128 \eta v^5} \sum_{i=0}^{7} \left(\varphi_i + \varphi_{iL}\ln v \right) v^i \\ &+ \frac{1}{\eta} \left( \sigma_{0} + \sigma_{1} f + \frac{3}{4}\sigma_{2} f^{4/3} + \frac{3}{5}\sigma_{3} f^{5/3} + \frac{1}{2}\sigma_{4} f^{2} \right). \nonumber \end{align} Here $\eta = m_1 m_2 / (m_1 + m_2)^2$ is the symmetric mass ratio of the binary, $v = \left( \pi M f G c^{-3} \right)^{1/3}$ is the dimensionless post-Newtonian expansion parameter, with $M$ the total mass of the system, the phase coefficients $\varphi_i$ are determined by the post-Newtonian expansion, and the $\sigma_{j}$ are tuned to numerical relativity waveforms. The terms $\varphi_{iL}$ are the post-Newtonian coefficients that multiply $\ln{v}$ at order $i$. Both $\varphi_{i}$ and $\sigma_{j}$ depend on the intrinsic parameters of the source. The parameters $t_{c}$ and $\phi_{c}$ are the coalescence time and the orbital phase at coalescence respectively. In the intermediate phase, \code{IMRPhenomPv2} adopts the following form for $\Psi$, \begin{equation} \Psi_{\mathrm{int}}(f) = \frac{1}{\eta} \left( \beta_0 + \beta_1 f + \beta_2 \log(f) - \frac{\beta_3}{3} f^{-3} \right) \end{equation} where $\beta_0$ and $\beta_1$ are chosen so that $\Psi$ continues smoothly, in both coalescence time and phase, from the inspiral to the intermediate regime.
The parameters $\beta_2$ and $\beta_3$ depend on the intrinsic parameters of the source. Finally, in the merger-ringdown phase, \code{IMRPhenomPv2} adopts another parameterized form for $\Psi$, \begin{equation} \begin{aligned} \Psi_{\mathrm{MR}} = \frac{1}{\eta} \biggl[ \alpha_0 + \alpha_1 f - \alpha_2 f^{-1} + \frac{4}{3} \alpha_3 f^{3/4} \\ + \alpha_4 \tan^{-1}\left( \frac{f - f_{\mathrm{RD}}}{f_{\mathrm{damp}}} \right) \biggr]. \end{aligned} \end{equation} As with the intermediate phase, $\alpha_0$ and $\alpha_1$ are chosen so that $\Psi$ continues smoothly from the intermediate phase to the merger-ringdown phase, and $\alpha_{2-4}$ depend on the intrinsic parameters of the source. The frequencies $f_{\mathrm{RD}}$ and $f_{\mathrm{damp}}$ describe the complex ringdown frequency and are computed from the mass and spin of the remnant black hole~\citep{Bohe2016}. We note that the above discussion applies to the aligned-spin \code{IMRPhenomD} model; the \code{IMRPhenomPv2} phasing is obtained by ``twisting-up'' the \code{IMRPhenomD} phasing to account for precession of the orbital plane as described in~\cite{Bohe2016}. \subsection{Generating Beyond-GR Waveforms} \label{sec:bgr-waveforms} Following~\cite{Agathos2014, single-BBH_gr-tests_LIGO_2016, GWTC1_gr-tests_LIGO_2019, GWTC2_gr-tests_LIGO_2020}, we model deviations from general relativity as fractional corrections to the parameters described above; specifically, we define \begin{equation} p^{\rm BGR} = (1 + \delta p) p^{\rm GR} \end{equation} for $p^{\rm GR} \in \{ \varphi_{0-4}, \varphi_{5L,6L}, \varphi_{6,7}, \alpha_{2-4}, \beta_{2,3} \}$. Since under GR $\varphi_1=0$, we model $\delta \varphi_{1}$ as an absolute rather than fractional deviation. We note that the parameters describing global phase and time shifts are not modified, as any such modification is degenerate with a shift of the coalescence phase or time.
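The deviation rule above is simple to express in code; a sketch (the function name is ours, and the absolute treatment of the $\varphi_1$ coefficient follows the text):

```python
def apply_deviation(p_gr, delta_p, absolute=False):
    """Return the beyond-GR phase coefficient for a GR coefficient p_gr.

    Fractional by default: p_bgr = (1 + delta_p) * p_gr.
    For coefficients that vanish in GR (phi_1), the deviation is
    applied absolutely instead: p_bgr = p_gr + delta_p.
    """
    if absolute:
        return p_gr + delta_p
    return (1.0 + delta_p) * p_gr

# fractional deviation on a non-zero coefficient
phi_2_bgr = apply_deviation(10.0, 0.5)            # (1 + 0.5) * 10
# absolute deviation on phi_1, which is zero in GR
phi_1_bgr = apply_deviation(0.0, 0.3, absolute=True)
```

A fractional deviation on a coefficient that is exactly zero would have no effect, which is why the absolute form is needed for $\varphi_1$.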
In principle, any combination of these parameters could have non-zero deviations; however, most previous analyses modify just a single parameter at a time. We use the implementation of \code{IMRPhenomPv2} provided by \code{LALSuite} \cite{lalsuite}, which allows us to apply these fractional deviations when generating our waveforms. Although these fractional deviations can take any real value, we do not expect them to be large, as at worst we expect the general relativistic description of gravitational wave emission to be ``mostly'' correct. Thus, we restrict the space of allowed deviations by bounding the difference between a beyond-GR waveform generated with $p^{\rm BGR}_{i}$ and the associated waveform generated with $p^{\rm GR}_{i}$. We measure this difference between waveforms via the overlap $\mathcal{O}$, \begin{equation} \mathcal{O} = \max_{\phi_{c}} \frac{\left< \tilde{h}_{\mathrm{GR}}(f), \tilde{h}_{\mathrm{BGR}}(f) \right>}{ \sqrt{ \left< \tilde{h}_{\mathrm{GR}}(f), \tilde{h}_{\mathrm{GR}}(f) \right> \left< \tilde{h}_{\mathrm{BGR}}(f), \tilde{h}_{\mathrm{BGR}}(f) \right> } }. \end{equation} Here $\tilde{h}_{\mathrm{GR}}$ and $\tilde{h}_{\mathrm{BGR}}$ are the GR frequency-domain waveform and associated beyond-GR waveform with the same intrinsic parameters. We maximize over the merger phase of the signal by taking the absolute value of the overlap (e.g.,~\cite{Allen2012}). One can similarly maximize over the merger time. However, as detailed in Appendix~\ref{app:time-maximization}, we find that an overlap cut maximized over both the merger phase and time introduces sufficient flexibility that the GR parameters can deviate significantly from the values recovered with no beyond-GR deviation.
Finally, $\langle \cdot, \cdot \rangle$ denotes a discrete inner product between the frequency-domain waveforms, weighted by the detector power spectral density, \begin{equation} \left< \tilde{h}_{1}(f), \tilde{h}_{2}(f) \right> = \frac{4}{T} \sum_{i=1}^{N} \frac{ \tilde{h}_{1,i} \tilde{h}^{*}_{2,i} }{ S_i }, \end{equation} between two generic frequency-domain waveforms $\tilde{h}_{1}$ and $\tilde{h}_{2}$, where $i$ enumerates $N$ discrete sampling frequencies spaced by $1 / T$. In practice, we use the plus polarization of the waveform for computing the overlaps. The quantity $S$ is the harmonic sum of the power spectral densities for each of the interferometers in the network. In this work, for some cases of hybrid sampling, we impose a cut on the priors of $\delta p_i$ by requiring that all beyond-GR waveforms we generate have an overlap $\mathcal{O} > 0.9$ with their associated GR waveform. \section{Hybrid Sampling in Gravitational Wave Signals} \label{sec:hybrid-sampling_gws} We now apply our method to real and simulated gravitational-wave signals. We follow the procedure described in Section~\ref{sec:stat_methods} to jointly infer $\theta_{\rm GR}$ and each of the $\delta p$ parameters. For each analyzed signal, we first analyze the data using \code{dynesty} under the GR model.\footnote{We note that, in practice, this analysis is typically performed by the LIGO/Virgo/KAGRA collaboration and so would not be required in production scenarios if the nested samples are released for future analyses.} Unless otherwise specified, we then perform 28 subsequent analyses with \code{ptemcee}: two for each of the $\delta p$ parameters, one applying the condition that ${\cal O} > 0.9$ and one applying no overlap cut. For all analyses, we numerically marginalize the likelihood over distance and the coalescence phase using standard methods~\cite{bayesian_inference_gws_thrane_2019}.
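The inner product and the phase-maximized overlap translate directly into code; a minimal sketch for a single effective PSD (names are illustrative):

```python
import numpy as np

def inner_product(h1, h2, psd, duration):
    """Noise-weighted inner product <h1, h2> = (4/T) sum_i h1_i h2_i* / S_i."""
    return 4.0 / duration * np.sum(h1 * np.conj(h2) / psd)

def overlap(h_gr, h_bgr, psd, duration):
    """Normalized overlap between a GR and a beyond-GR waveform, maximized
    over the merger phase by taking the absolute value of the inner product."""
    num = np.abs(inner_product(h_gr, h_bgr, psd, duration))
    den = np.sqrt(np.abs(inner_product(h_gr, h_gr, psd, duration))
                  * np.abs(inner_product(h_bgr, h_bgr, psd, duration)))
    return num / den
```

Because the absolute value absorbs any constant phase factor, `overlap(h, exp(1j * phi) * h, ...)` returns 1 for any $\phi$, which is exactly the phase maximization described above.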
Full details of the sampler configurations can be found in Appendix~\ref{sec:sampler_settings}. The prior distribution we use for $\theta_{\rm GR}$ is given in Table~\ref{tbl:gw150914_gr_priors}. We note that throughout we work with detector-frame mass quantities, which differ from the source-frame masses by a distance-dependent factor due to cosmological redshifting. For the \code{ptemcee} stage, we initialize the $\theta_{\rm GR}$ from the tempered posterior distribution obtained with \code{dynesty}. The prior and initialization distributions for the $\delta p$ are shown in Table~\ref{tbl:gw150914_dpi_distributions}. \begin{center} \begin{table} \begin{tabular}{|c c c|} \hline Parameter & Distribution & Unit \\ \hline $m_{1}, m_{2}$ & ${\cal U}(1, 1000)$ & $ M_{\odot}$ \\ $\mathcal{M}$ & $[21.418, 41.974]$ & $M_{\odot}$ \\ $q$ & $[0.05, 1.0]$ & - \\ $a_{1}$, $a_2$ & $\mathcal{U}(0, 0.99)$ & - \\ $\theta_{1}, \theta_{2}, \theta_{\rm JN}, \kappa$ & Sin & rad\\ $\phi_{12}, \phi_{jl}, \phi_{c}, \epsilon$ & $\mathcal{U}(0, 2\pi)$ & rad \\ $\psi$ & $\mathcal{U}(0, \pi)$ & rad \\ $t_c$ & ${\cal U}(t_{0} - 0.1, t_{0} + 0.1)$ & $s$ \\ $d_L$ & ${\cal P}(2, 10, 10^4)$ & Mpc \\ \hline \end{tabular} \caption{ Prior distributions for $\theta_{\rm GR}$ used in both steps of hybrid sampling to estimate the source properties of the gravitational-wave signals we consider. We denote a uniform distribution over $[a, b]$ as ${\cal U}(a, b)$; ${\cal P}(\alpha, a, b)$ is a power-law distribution with spectral index $\alpha$ over the same domain. The sine distribution for a quantity $x$ is equivalent to a uniform distribution in $\cos(x)$. The notation $[a,b]$ denotes a parameter that is constrained to lie within that interval with the functional form defined in terms of other parameters. The prior for the coalescence time is centered on either the trigger time from the matched-filter search pipelines for GW150914 or the known injection time for simulated signals.
Parameter definitions follow~\cite{RomeroShaw2020}. }\label{tbl:gw150914_gr_priors} \end{table} \end{center} \begin{center} \begin{table} \begin{tabular}{|p{3cm} p{2cm} p{2cm}|} \hline Parameter & Prior & Initialization \\ \hline $\delta \varphi_0$ & $\mathcal{U}(-1, 1)$ & $\mathcal{N}(0, 10^{-2})$ \\ $\delta \varphi_1$ & $\mathcal{U}(-2, 2)$ & $\mathcal{N}(0, 10^{-1})$ \\ $\delta \varphi_2, \delta \varphi_3, \delta \varphi_4, \delta \varphi_{5l}$ & $\mathcal{U}(-5, 5)$ & $\mathcal{N}(0, 1)$ \\ $\delta \varphi_6$ & $\mathcal{U}(-10, 10)$ & $\mathcal{N}(0, 1)$ \\ $\delta \varphi_{6l}, \delta \varphi_7$ & $\mathcal{U}(-30, 30)$ & $\mathcal{N}(0, 5)$ \\ $\delta \alpha_2, \delta \alpha_3, \delta \alpha_4$ & $\mathcal{U}(-5,5)$ & $\mathcal{N}(0, 1)$ \\ $\delta \beta_2, \delta \beta_3$ & $\mathcal{U}(-5, 5)$ & $\mathcal{N}(0, 1)$\\ \hline \end{tabular} \caption{ Prior (center) and initialization (right) distributions for the post-Newtonian deviation parameters $\delta p_i$ used in the \code{ptemcee} step of hybrid sampling for GW150914. The prior distributions were chosen to fully include the $\delta p_i$ posteriors for GW150914 in \cite{single-BBH_gr-tests_LIGO_2016}. The initialization distributions were chosen to be narrower than the expected posterior distributions. Here, ${\cal U}(a, b)$ denotes a uniform distribution in $[a, b]$ and ${\cal N}(\mu, \sigma)$ a normal distribution with mean $\mu$ and standard deviation $\sigma$. }\label{tbl:gw150914_dpi_distributions} \end{table} \end{center} \subsection{Analysis of a Real Signal - GW150914} First, we apply our hybrid sampling method on GW150914, the first observed gravitational-wave signal \cite{gw150914_properties}. This signal was produced by the coalescence of a binary black hole system with a detector-frame chirp mass of $\mathcal{M} \sim 30 M_{\odot}$ and a network signal-to-noise ratio of $\sim 25$. 
These properties mean it is still one of the highest-SNR signals to date and it also lies at the mode of the observed binary black hole mass distribution~\cite{O3bPop}, making it an excellent representative test case. Following~\citep{gw150914_properties}, we analyze $8$~s of data ending $2$~s after the trigger time produced by matched-filter search pipelines for both of the Advanced LIGO interferometers. We use the power spectral densities and calibration envelopes used in the LIGO/Virgo collaboration analyses available at~\cite{GWTC1Data}. Marginalizing over uncertainty in the detector calibration adds 40 free parameters to the analysis, and we use the same prior distribution for those parameters as~\cite{GWTC1}. We downsample the data to $2048$~Hz and analyze the data from $20$--$1024$~Hz. In Figure~\ref{fig:gw150914_violinplot}, we show the posterior distributions for each of the $\delta p$ obtained with (purple) and without (magenta) a cut on the GR vs.\ non-GR overlap. We also overlay, in blue, the results from the LIGO/Virgo collaboration analysis obtained with \code{LALInference}~\cite{Veitch2015,GWTC1_gr-tests_LIGO_2019}. The differences between the blue and magenta results are likely due to sampler differences. We note that for the inspiral deviation parameters, the requirement that ${\cal O} > 0.9$ imposes a significant constraint compared to the constraining power of the data. This is because of the strong degeneracy between the inspiral deviation parameters and the intrinsic GR parameters, as can be seen in Figure~\ref{fig:gw150914_dphi2_overlap-comparison}. However, for the $\delta \alpha$ and $\delta \beta$ parameters, the posteriors are unaffected by the requirement that ${\cal O} > 0.9$. This is because these parameters are not strongly correlated with the GR parameters, and so an equivalent waveform cannot be obtained by changing, e.g., the black hole masses and $\delta \alpha_{2}$.
In Figure~\ref{fig:gw150914_dphi2_overlap-comparison}, we show joint posterior distributions for the beyond-GR deviation parameter $\delta \varphi_2$, the intrinsic parameters chirp mass $\mathcal{M}$ and mass ratio $q$, and the extrinsic sky parameters right ascension and declination, obtained when enforcing an overlap cut of $\mathcal{O} > 0.9$ (purple) and when enforcing no overlap cut (magenta). We also compare these distributions to the posteriors in $\mathcal{M}$, $q$, right ascension, and declination generated during the first step of hybrid sampling, where we do not yet sample in deviations from general relativity. From the construction of the post-Newtonian inspiral phase coefficients, we expect deviations in $\varphi_2$ to be correlated with changes in the mass parameters, particularly $\mathcal{M}$, and we observe this correlation in both results. Since the extrinsic parameters do not affect the phase evolution of the signal, we do not expect a correlation between $\delta \varphi_{2}$ and the extrinsic parameters. As expected, we do not see a correlation between $\delta \varphi_2$ and the extrinsic sky parameters. We also observe the effect of the overlap cut, which prevents our ensemble from exploring far away from the GR solution for the mass parameters. In Figure~\ref{fig:gw150914_evolution_no-overlap}, we examine the evolution of the ensemble sampler for our analysis allowing $\delta \varphi_{2}$ to vary with no minimum allowed overlap. We show the distribution of $\mathcal{M}$ and $\delta \varphi_2$ at various iterations of the \code{ptemcee} analysis. As in Section~\ref{sec:toy_misspecified}, the hybrid analysis method is able to capture the correlation between chirp mass and the new parameter added in the second stage of our hybrid analysis. We find that, by iteration 1024, the ensemble has converged to the correct solution.
In Appendix~\ref{app:150914-traces}, we provide additional plots showing the evolution of the $\beta_{T}=1$ ensemble for each of the analyses of GW150914. In general, we find $\sim 1000$ iterations are sufficient to ensure convergence of the algorithm. \begin{center} \begin{table} \begin{tabular}{|c | r|} \hline Parameter & $n_{\rm likelihood}$ \\ \hline GR & 23,200,000 \\ $\delta \varphi_0$ & 2,955,000 \\ $\delta \varphi_1$ & 2,952,500 \\ $\delta \varphi_2$ & 2,952,500 \\ $\delta \varphi_3$ & 2,952,500 \\ $\delta \varphi_4$ & 2,911,250 \\ $\delta \varphi_{5l}$ & 2,951,250 \\ $\delta \varphi_6$ & 2,953,750 \\ $\delta \varphi_{6l}$ & 2,951,250 \\ $\delta \varphi_7$ & 2,490,000 \\ $\delta \alpha_2$ & 2,951,250 \\ $\delta \alpha_3$ & 2,952,500 \\ $\delta \alpha_4$ & 2,951,250 \\ $\delta \beta_2$ & 2,951,250 \\ $\delta \beta_3$ & 2,952,500 \\ \hline \end{tabular} \caption{ The number of likelihood evaluations required to estimate $\delta p_i$ in GW150914 using hybrid sampling. For reference, we include the number of likelihood evaluations required for the initial GR analysis. With the (optimistic) assumption that performing nested sampling to infer the $\delta p_{i}$ requires the same number of likelihood evaluations as with the GR model, our method is $\sim 8\times$ more efficient. } \label{tbl:gw150914_cost} \end{table} \end{center} We now assess whether our hybrid sampling method is more computationally efficient than the previously employed direct sampling method. To do this, we compare the number of likelihood evaluations needed to produce well-converged results. The computational cost for hybrid sampling scales linearly with the number of extensions to the base model. A fixed number of likelihood evaluations are necessary for the first step of sampling with \code{dynesty}, followed by additional evaluations for each second step analysis performed with \code{ptemcee}. 
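The scaling described here can be made concrete with the evaluation counts from Table~\ref{tbl:gw150914_cost}; this short calculation is illustrative and assumes, as noted in the table caption, that a direct nested-sampling analysis of each deviation parameter would cost about as much as the GR run:

```python
# approximate per-analysis costs, in likelihood evaluations (Table values)
cost_nested = 23_200_000   # initial GR run with dynesty
cost_mcmc = 3_000_000      # upper bound on each second-step ptemcee run

def hybrid_cost(n_params):
    # one shared nested-sampling run, then one MCMC run per deviation parameter
    return cost_nested + n_params * cost_mcmc

def direct_cost(n_params):
    # one full nested-sampling run per deviation parameter
    return n_params * cost_nested

# hybrid sampling breaks even once more than one deviation parameter is studied
print(hybrid_cost(1) > direct_cost(1))   # single parameter: direct is cheaper
print(hybrid_cost(2) < direct_cost(2))   # two or more: hybrid wins
```

The per-parameter ratio `cost_nested / cost_mcmc` of roughly 8 is the efficiency quoted in the table caption; it applies when the initial GR run is already available, as is typically the case for events analyzed by the detector collaborations.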
Using \code{dynesty} alone in a ``standard'' methodology also scales linearly without an initial fixed cost but, in general, each analysis with \code{dynesty} is more expensive than the same second-step hybrid analysis. Thus, if we only seek to estimate a small number of $\delta p_i$, using \code{dynesty} alone may be more efficient, but we expect hybrid sampling to be more efficient after some break-even number of deviation parameter estimations. We summarize the computational cost of each of the analyses we performed for our analysis of GW150914 in Table~\ref{tbl:gw150914_cost}. The initial GR-only inference required 23.2 million likelihood evaluations, and each \code{ptemcee} analysis required $< 3$ million likelihood evaluations. We do not have access to the computational cost of the LIGO/Virgo analysis; however, we can conservatively estimate that direct \code{dynesty} sampling for each non-GR parameter will be at least as expensive as the GR-only analysis. We can therefore estimate that our hybrid sampling scheme is roughly an order of magnitude ($\sim 8\times$) more efficient than the direct sampling method for this event. \subsection{Simulated Non-GR Signals} Analyses of real gravitational-wave transients have not revealed significant deviations from relativity; however, it is important to test whether our method will be sensitive to such effects if they are present. To accomplish this, we analyze a simulated signal with a non-zero value of $\delta \varphi_{2}$ using our hybrid method; the specific injection parameters are described in Table~\ref{tbl:injected_parameters}. We add this signal to the Advanced LIGO Livingston and Hanford interferometers assuming their design sensitivities~\cite{observing_scenarios}, resulting in an injection with a network signal-to-noise ratio ${\rm SNR} \approx 370$. 
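For intuition, the injected chirp mass and mass ratio in Table~\ref{tbl:injected_parameters} can be converted to component masses using the standard definitions $\mathcal{M}=(m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$ and $q=m_2/m_1 \leq 1$; the snippet below is a reader-facing sketch, not part of the analysis code.

```python
# Invert (chirp mass, mass ratio) -> (m1, m2) with m1 >= m2, using
# Mc = (m1*m2)**(3/5) / (m1+m2)**(1/5) and q = m2/m1.

def component_masses(chirp_mass, q):
    """Component masses for a given chirp mass and mass ratio."""
    m1 = chirp_mass * (1 + q) ** 0.2 / q ** 0.6
    return m1, q * m1

m1, m2 = component_masses(30.0, 0.8)
print(f"m1 = {m1:.1f} Msun, m2 = {m2:.1f} Msun")  # ~38.6 and ~30.9 solar masses
```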
\begin{center} \begin{table} \begin{tabular}{|c | c c|} \hline Parameter & Value & Unit \\ \hline $\mathcal{M}$ & $30$ & $M_{\odot}$ \\ $q$ & 0.8 & - \\ $a_1, a_2$ & 0 & - \\ $\theta_1$, $\theta_2, \phi_{12}, \phi_{jl}, \theta_{\rm JN}, \phi_c, \psi$ & 0 & rad \\ right ascension & 1.35 & rad \\ declination & -1.21 & rad \\ $\delta \varphi_2$ & 0.2 & - \\ other $\delta p$ & 0 & - \\ $t_c$ & 0 & s \\ $d_L$ & 100 & Mpc \\ \hline \end{tabular} \caption{ Parameters of the simulated signal injected into the Advanced LIGO gravitational wave detector network. } \label{tbl:injected_parameters} \end{table} \end{center} We follow the same hybrid sampling procedure as in our analysis of GW150914, and with the sampler settings found in Appendix \ref{sec:sampler_settings}, Table \ref{tbl:injected_sampler_kwargs}. We also perform the beyond-GR analyses with \code{dynesty} without imposing an overlap cut, to compare the results between the two methods. In Figure~\ref{fig:highsnr_corner} we show the one- and two-dimensional marginal posterior probability distributions for three parameters for this simulated signal. From left to right (top to bottom) these are a non-GR deviation parameter $\delta \varphi_{2}$ and two intrinsic binary parameters, the chirp mass and mass ratio. We note again that the deviation parameter is correlated with the intrinsic parameters. In Figure~\ref{fig:highsnr_evolution}, we consider snapshots of the $\beta_{T}=1$ ensemble at various stages of the \code{ptemcee} analysis. For this analysis, the injected chirp mass is strongly excluded from the posterior distribution obtained after the first, GR-only, analysis due to correlations between ${\cal M}$ and $\delta \varphi_{2}$. As in Figure~\ref{fig:alpha-beta_evolution}, the ensemble of walkers evolves to explore the extended parameter space and converge on the correct solution. As with our analysis of GW150914, $\sim 1000$ iterations are required until the ensemble converges. 
In Figure \ref{fig:highsnr_violinplot}, we show the posteriors for $\delta p_i$ for our simulated signal. Despite the injected deviation being non-zero only for $\delta \varphi_{2}$, the posterior distributions for all of the inspiral and intermediate deviation parameters are inconsistent with zero at high significance. For the merger-phase deviation parameters, the deviations from zero are less pronounced. This is consistent with previous work that has demonstrated that deviations at one post-Newtonian order can be identified with other deviation parameters~\cite{meidam2018, pi_haster2020} due to correlations between the parameters~\cite{Saleem2022}. Additionally, in Figure \ref{fig:chirp-mass_violinplot}, we note that the posterior distributions for chirp mass we obtain while estimating $\delta p_i$ are \textit{only} consistent with the injected value when allowing one of the inspiral phase deviation coefficients $\delta \varphi_i$ to vary. In general, corrections at similar post-Newtonian orders are more strongly correlated, as is visible in our results. Thus, if we observe a signal whose phase evolution is inconsistent with general relativity, we cannot trust our estimate of the chirp mass, and a model with additional degrees of freedom is required to capture the mass accurately. Comparing the number of likelihood evaluations for each analysis, we find that each \code{dynesty} analysis requires $\sim 10^7$ likelihood evaluations and each \code{ptemcee} analysis requires $\sim 3 \times 10^6$ likelihood evaluations. For each \code{dynesty} analysis we use only 500 live points in this case, compared to 2000 for our analysis of GW150914, and so we expect each \code{dynesty} analysis to require a factor of four fewer likelihood evaluations. Taking this into account, we see a comparable (or even larger) computational saving with our hybrid method as for our analysis of GW150914. 
\section{Conclusions} \label{sec:conclusions} In this work, we introduced a novel hybrid sampling method for exploring models that can be described as extensions of a simpler underlying model. By seeding a parallel-tempered ensemble MCMC with initial posterior estimates generated by performing nested sampling on a base model, hybrid sampling efficiently explores the extended parameter space of a more complex model. While previous methods have employed similar hybrid sampling methods, e.g.,~\cite{Miller2019,Psaltis2021}, we exploit the athermal property of the nested sampling algorithm to optimally seed the ensembles of walkers at each temperature. First, we demonstrated our framework with a toy model, using hybrid sampling to estimate the parameters of a generalized Gaussian distribution. We showed that we can successfully recover the parameters of the true model, even when the base model is misspecified and the parameters of the extended model are correlated with those of the base model. Following this, we applied our method to a widely performed test of general relativity with gravitational-wave transients: parameterized deviations from the waveform predicted by general relativity. Using our method, we accurately reproduced the tests of general relativity on GW150914 as performed by the LIGO/Virgo scientific collaborations, and estimated that our method is approximately an order of magnitude more efficient than the current direct sampling method \cite{single-BBH_gr-tests_LIGO_2016}. Finally, we analyzed a simulated signal with a measurable deviation from the prediction of relativity. We found that the efficiency of our hybrid sampling method is still far superior to direct sampling in this case. Previous analyses have suffered from large computational costs because the parameters describing the waveform predicted by relativity are strongly correlated with the deviation parameters. 
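The seeding step described above can be sketched in a few lines; this is our reconstruction of the idea with a toy Gaussian likelihood over a uniform prior (the actual implementation works with \code{dynesty} output inside \code{Bilby}). Because nested sampling is athermal, a single run can be reweighted to the posterior at any inverse temperature $\beta_T$.

```python
import numpy as np

# Sketch: nested sampling yields samples with prior-volume weights log_w and
# log-likelihoods log_l; the posterior at inverse temperature beta has weights
# proportional to w_i * L_i**beta, so one run seeds walkers at every temperature.

rng = np.random.default_rng(42)

def seed_walkers(samples, log_w, log_l, beta, n_walkers):
    """Draw n_walkers seeds from the beta-tempered posterior."""
    log_weights = log_w + beta * log_l
    log_weights -= log_weights.max()   # guard against overflow
    p = np.exp(log_weights)
    p /= p.sum()
    idx = rng.choice(len(samples), size=n_walkers, p=p)
    return samples[idx]

# toy example: unit Gaussian likelihood over a uniform prior on [-10, 10]
samples = rng.uniform(-10, 10, size=(5000, 1))
log_l = -0.5 * samples[:, 0] ** 2
log_w = np.zeros(len(samples))         # equal prior-volume weights in this toy
cold = seed_walkers(samples, log_w, log_l, beta=1.0, n_walkers=250)
hot = seed_walkers(samples, log_w, log_l, beta=0.1, n_walkers=250)
print(cold.std(), hot.std())           # the hotter ensemble is more spread out
```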
In order to mitigate this, we introduced a ``closeness'' criterion between the non-GR waveform being considered and the corresponding GR signal. Specifically, this is implemented as a minimum overlap threshold between the two signals. This acts as an additional prior constraint that the signal must be similar to the GR prediction, given the previous success of relativity. This is particularly beneficial for lower signal-to-noise ratio systems where the data are less informative. While we have focused on a narrow application of measuring single additional parameters describing deviations from relativity, the method presented here can be used for more exploratory analyses that allow multiple non-GR parameters to vary simultaneously, which would otherwise have prohibitive computational costs due to the number of possible parameter combinations. More generically, this method can be applied to any case where importance sampling to include a more physically realistic, but expensive, model breaks down. Examples include measuring eccentricity in compact binary mergers~\cite{RomeroShaw2021}, estimating the impact of calibration uncertainty on inference~\cite{Payne2020}, and analyzing pairs of potentially gravitationally-lensed events~\cite{Janquart2022}. \section{Acknowledgements} We thank Sylvia Biscoveanu, Max Isi, Nathan Johnson-McDaniel, Ralph Smith, Salvatore Vitale, and Alan Weinstein for helpful discussions and comments. CT is supported by an MKI Kavli Fellowship. JG is supported by PHY-1764464. NW acknowledges support from the National Science Foundation (NSF) and the Park Scholarships program at NC State. We are grateful to the LIGO Caltech SURF program, where this project began, which is supported by the NSF REU program. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. 
The authors are grateful for computing resources provided by the California Institute of Technology and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. The analysis in this work made use of data available from the Gravitational Wave Open Science Center (gw-openscience.org)~\cite{gwosc}. This analysis used the following software: \code{numpy} \cite{numpy}, \code{scipy} \cite{scipy}, \code{matplotlib} \cite{matplotlib}, \code{corner.py} \cite{corner}, \code{pandas} \cite{reback2020pandas, mckinney-proc-scipy-2010}, \code{LALSimulation} \cite{lalsuite}, \code{bilby} \cite{Ashton2019, RomeroShaw2020}, \code{dynesty} \cite{speagle_2020_dynesty}, \code{ptemcee} \cite{Vousden_2015_ptemcee}. We provide analysis scripts, notebooks, and some data at \url{https://github.com/noahewolfe/tgr-hybrid-sampling}. \bibliography{refs}{} \appendix \section{Sampler Settings} \label{sec:sampler_settings} To enable reproducibility of our results, we provide the settings used during each stage of our hybrid sampling algorithm. All of the parameters are as defined in the \code{Bilby} implementation of the respective sampling code. Additionally, configuration files can be found at \url{https://github.com/noahewolfe/tgr-hybrid-sampling}. We note that for our analysis of GW150914, we used \code{nlive}$ = 2000$ rather than 500 as for the simulated signals. \begin{center} \begin{table} \begin{tabular}{|c c|} \hline Sampling Argument & Value \\ \hline \multicolumn{2}{|c|}{\code{dynesty}} \\ \hline \code{nlive} & $500$ \\ \code{sample} & \code{`rwalk'} \\ \code{walks} & 50 \\ \code{nact} & 10 \\ \hline \multicolumn{2}{|c|}{\code{ptemcee}} \\ \hline \code{ntemps} & 5 \\ \code{nwalkers} & 250 \\ \code{burn\_in\_fixed\_discard} & 2000 \\ \hline \end{tabular} \caption{Sampler arguments for hybrid sampling used in our analysis of injected signals as defined for the \code{Bilby} implementations of \code{dynesty} and \code{ptemcee}. 
For the \code{dynesty}-only analyses of injected signals, we also use the \code{dynesty} sampler settings in this table.} \label{tbl:injected_sampler_kwargs} \end{table} \end{center} \section{Further results for GW150914 Analysis}\label{app:150914-traces} In this appendix, we provide trace plots showing the evolution of the $\beta_{T}=1$ ensemble for the deviation parameters $\delta p$ for our analysis of GW150914. These trace plots show the results of analyses without (left) and with (right) the requirement that ${\cal O} \geq 0.9$. In general, the sampler converges to a steady state after $\sim 1000$ iterations, and always within 2000 iterations. We note that in most cases implementing our overlap condition reduces the number of iterations required for the ensemble to converge to a steady state. \section{Effect of Time-Maximized Overlap Cut} \label{app:time-maximization} In Section~\ref{sec:bgr-waveforms}, we introduced the overlap $\mathcal{O}$ to measure the deviation in a waveform induced by a beyond-GR deviation, maximized over the merger phase of the signal. Changing the merger time $t_c$ introduces a frequency-dependent shift in the phase of the signal that is degenerate with a beyond-GR deviation; thus, some parametric tests of general relativity also maximize over $t_c$ when calculating the overlap (see, for example, \cite{ppE_bonilla_2022}). In Figure~\ref{fig:gw150914_dphi2_overlap-comparison_time-maximized}, we present posterior distributions on the chirp mass, mass ratio, and the inspiral deviation parameter $\delta \varphi_2$ during our estimation of $\delta \varphi_2$ in GW150914, similar to the results presented in Figure~\ref{fig:gw150914_dphi2_overlap-comparison}. Here, however, we have maximized $\mathcal{O}$ over both the merger phase and time. 
This loosens the cut imposed on the prior for $\delta \varphi_2$ by the requirement $\mathcal{O} > 0.9$, since a larger range of deviations in the waveform induced by $\delta \varphi_2$ can be absorbed by varying the merger time. In turn, time maximization allows the mass parameters, in particular the chirp mass, to vary more widely as well, reducing the efficacy of the overlap cut.
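To make the time maximization concrete, here is a minimal sketch of a time- and phase-maximized overlap between two frequency-domain waveforms, assuming a flat (white) noise spectrum for brevity; a real analysis would weight the inner product by the detector noise power spectral density.

```python
import numpy as np

# A merger-time shift t0 multiplies the frequency-domain waveform by
# exp(-2*pi*i*f*t0), so the inner product at *every* time shift is obtained
# from a single inverse FFT; taking |.| also maximizes over the merger phase.

def overlap_max_time_phase(h1, h2, df):
    inner_t = np.fft.ifft(h1 * np.conj(h2)) * len(h1) * df  # <h1,h2> vs. time shift
    norm = np.sqrt(np.sum(np.abs(h1) ** 2) * np.sum(np.abs(h2) ** 2)) * df
    return np.abs(inner_t).max() / norm

f = np.arange(1024) * 0.5                  # toy frequency grid, df = 0.5 Hz
h = np.exp(2j * np.pi * f * 0.1)           # toy unit-amplitude "waveform"
t0 = 10 / (len(f) * 0.5)                   # shift by an integer number of time bins
h_shifted = h * np.exp(-2j * np.pi * f * t0)
print(overlap_max_time_phase(h, h_shifted, 0.5))  # 1.0: the shift is fully absorbed
```

This illustrates why time maximization weakens the cut: waveform changes that mimic a shift of $t_c$ no longer reduce $\mathcal{O}$.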
Title: Gradient corrections to the one-loop effective action
Abstract: We derive the one-loop effective action to second order in gradients. This expansion of the effective action is useful to study problems in cosmological settings where spatial or time gradients are important, such as bubble nucleation in strong first-order phase transitions. Assuming space-time dependent background fields, we work in Wigner space and perform a midpoint gradient expansion, which is consistent with the equation of motion satisfied by the propagator. In particular, we consider the fact that the propagator is non-trivially constrained by an additional equation of motion, obtained from symmetry requirements. We show the calculations for the case of a single scalar field and then generalise the result to the multi-field case. While we find a vanishing result in the single field case, the one-loop second-order gradient corrections can be significant when considering multiple fields. Finally, we apply our result to a simple toy model of two scalar fields with canonical kinetic terms and mass mixing at tree-level.
https://export.arxiv.org/pdf/2208.12142
\title{Gradient corrections to the one-loop effective action} \author{Sofia Canevarolo} \email[]{s.canevarolo@uu.nl} \affiliation{Institute for Theoretical Physics, Spinoza Institute \& $\rm EMME\Phi$, Faculty of Science, Utrecht University, Postbus 80.195, 3508 TD Utrecht, The Netherlands \looseness=-1} \author{Tomislav Prokopec} \email[]{t.prokopec@uu.nl} \affiliation{Institute for Theoretical Physics, Spinoza Institute \& $\rm EMME\Phi$, Faculty of Science, Utrecht University, Postbus 80.195, 3508 TD Utrecht, The Netherlands \looseness=-1} \tableofcontents \section{Introduction} In order to address unresolved questions regarding the Early Universe, it is often crucial to go beyond the classical description of the phenomena and account for their quantum physical nature. To do so, the quantum effective action is an important tool to consider. Indeed, it is often used to study phenomena such as spontaneous symmetry breaking mechanisms, as it allows one to account for quantum corrections to the classical action while respecting all the symmetries of the theory. The effective action is the generating functional of the one-particle irreducible correlation functions, and thus it encodes the full quantum physics of a theory. It is a functional of the background fields, and its minima correspond to the vacuum expectation values of the quantum fields. It is therefore regarded as a powerful tool to estimate the quantum-corrected stationary points of the theory. On the other hand, since the quantum effective action cannot be calculated exactly, we have to resort to perturbation theory, where it can be evaluated by functional methods in a loop expansion~\cite{Coleman-Weinberg,Jackiw,Buchbinder:1992rb}. \\ When the background fields are assumed to be space-time independent, the effective action reduces to the effective potential, which at tree-level is simply given by all the non-derivative terms in the Lagrangian. 
The effective potential is widely used to study spontaneous symmetry breaking in the semi-classical approximation. Indeed, it allows one to probe the structure of the minima of a theory, accounting for radiative corrections up to the desired order in the loop expansion. This is of particular interest for theories such as the conformal extensions of the Standard Model (SM) \cite{Rezacek, Carone, Englert,Mohamadnejad}. These models are well-motivated candidates in which a conformal symmetry prohibits any dimensionful mass term in the tree-level Lagrangian. Therefore, the negative mass term present in the scalar sector of the SM is absent in the Lagrangian of the conformal models. The breaking of the conformal symmetry is then realized by radiative corrections via the Coleman-Weinberg mechanism \cite{Coleman-Weinberg}. In these models, the effective potential is essential to find the non-vanishing expectation value of the scalar fields produced by the quantum corrections, and, in turn, the mass terms of the other particles.\\ It is therefore clear that the effective potential is a powerful tool which, nonetheless, is not enough to describe all the dynamical phenomena in Cosmology. Indeed, in an evolving Universe the assumption of constant background fields often proves too stringent, as spatial or time gradients are crucial to achieve a complete description of the dynamics. It is therefore of outstanding importance to compute quantum corrections to the kinetic terms and understand in which cases these corrections are relevant. \\ An important example of application is provided by the Electroweak Phase Transition, where the effective action is used to calculate the tunneling rate from the false to the true vacuum state of the theory. In this scenario, quantum corrections to the effective potential are necessary to correctly estimate the minima of the potential. However, this alone is not sufficient to compute the tunneling path of the fields. 
Indeed, the full effective action is needed to obtain the correct field configuration, known as the bounce, that describes the tunneling dynamics. Similarly, we expect a wide range of instances within the history of the Universe when the gradient terms of the scalar fields cannot be neglected: sphaleron and instanton calculations~\cite{Klinkhamer,Klinkhamer2}, baryogenesis~\cite{Sakharov}, $\alpha-$attractor models~\cite{Kallosh-Linde-Roest,Galante-Linde-Roest,Carrasco-Linde-Roest}, and multi-field inflationary models~\cite{Dvali,Liu-Prokopec,Barnaveli} are some other important examples of applications.\\ To properly account for these quantum corrections, a derivative expansion of the effective action can be performed. This is valid when the background fields depend weakly on the space-time coordinates or, equivalently, for soft momenta flowing into the vertices. In the literature, attempts in this direction are already present~\cite{Fraser,Aitchison-Fraser,Aitchison-Fraser2,Chan,Iliopoulos-Itzykson-Martin}. Using a different technique, we perform a gradient expansion of the effective action at one-loop order. We expand the scalar propagator in Wigner space using the midpoint and relative coordinates and check that both of the symmetric propagator equations of motion are satisfied. This allows us to obtain the one-loop second-order gradient corrections to the scalar kinetic terms and check whether they can compete with the quantum corrections to the effective potential at the same loop order. In Section II, we start with the simple case of a single real scalar field and show that at one loop we have vanishing kinetic corrections. In Section III, we then generalise the calculations to a theory with a generic number of real scalar fields with canonical kinetic terms but with mass-mixing interactions in the potential. Moreover, we investigate the renormalisation scale dependence of our results. 
The one-loop corrections to the effective potential are divergent and ought to be renormalised, breaking scaling symmetry and introducing an unphysical scale dependence in the effective potential. It is therefore interesting to understand whether a similar behaviour is also found when considering the kinetic corrections. Finally, we apply the results to a simple toy model composed of two real scalar fields. Our discussion and conclusion are in Section IV. \subsection*{Quantum effective action} To compute the one-loop contribution to the effective action $\Gamma^{(1)}[\phi_{a}]$, we start from the following expression, valid for $n$ real scalar fields $\phi_a(x)\;(a=1,2,\cdots n)$~\footnote{ Here we consider the tree-level action to be a functional of $n$ real scalar fields $\Phi_a(x)$. The background field method posits that the effective action $\Gamma[\phi_a]$ is obtained by inserting into the classical action, $\Phi_a(x)=\phi_a(x) +\varphi_a(x)\; (a=1,2,\cdots n)$, and integrating over the fluctuating fields $\varphi_a(x)$ according to, $\exp(i\Gamma[\phi_a]) = \int {\cal D}\varphi_a {\exp}(iS[\phi_a+\varphi_a])$. The thus-obtained effective action is a nonlocal functional of the background fields $\phi_a(x)$, and it is not known how to obtain it for general $\phi_a(x)$. In this work, we construct a framework which can be used to construct an approximate quantum effective action by utilising the standard perturbative loop expansion and a gradient expansion, which is valid for slowly varying background fields.}: \begin{eqnarray} \Gamma[\phi_{a}]\!&=&\! S[\phi_{a}]+\Gamma^{(1)}[\phi_{a}]+ {\cal O}(\hbar ^2) \label{effective action}\\ \Gamma^{(1)}[\phi_{a}] \!&=&\! \hbar\frac{i}{2} \text{Tr} \log\big[ \mathcal{D}_{ab}[\phi_c](x;x')\big] \,. \label{1 loop effective action} \end{eqnarray} $S[\phi_{a}]$ is the tree-level (classical) action, expressed in terms of the background fields $\phi_{a}$. 
The tree-level two-point vertex function $\mathcal{D}_{ab}$ in equation~(\ref{1 loop effective action}) is obtained by varying the tree-level action, \begin{equation} \mathcal{D}_{ab}[\phi_c](x;x') =\frac{\delta^2 S[\Phi_c]}{\delta \Phi_a(x) \delta \Phi_b (x')} \bigg|_{\Phi_{a}=\phi_{a}}, \end{equation} where we stress that the tree-level two-point vertex function is evaluated on the background fields. The tree-level propagator $i\Delta_{cb}$ is defined as the operator inverse of the tree-level two-point vertex function, \begin{equation} \int d^Dy\; \mathcal{D}_{ac}[\phi_e](x;y) i\Delta_{cb}[\phi_e](y;x') =\delta_{ab} i \delta^{D} (x\!-\!x') \,, \label{prop eq of motion} \end{equation} where $D$ is the number of dimensions and summation over repeated field indices is assumed. With this, equation~(\ref{1 loop effective action}) can be recast as, \begin{equation} \Gamma^{(1)}[\phi_{a}] = - \frac{i}{2} \text{Tr} \log\big[ \Delta_{ab}[\phi_c](x;x')\big] \,, \label{1 loop effective action 2} \end{equation} where we set $\hbar=1$. The Feynman propagator is a two-point function with a time-ordered product of fields, and therefore it satisfies the following symmetry requirement: \begin{equation} i\Delta_{ac}[\phi_e](x;x')=i\Delta_{ca}[\phi_e](x';x) \,. \label{propagator: symmetry requirement} \end{equation} In the multifield case, the propagator is invariant under the simultaneous exchange of points $x \leftrightarrow x'$ and transposition in flavour $a \leftrightarrow c$~\footnote{ Needless to say, in the single field case, the propagator is invariant under the exchange of the points.}. This directly translates into another equation of motion for the propagator: \begin{equation} \int d^Dy \; i\Delta_{bc}[\phi_e](x';y)\overleftarrow{\mathcal{D}}_{ca}[\phi_e](y;x) =\delta_{ab} i\delta^{D}(x\!-\!x') \,. 
\label{symmetric prop eq of motion} \end{equation} Taking the Lagrangian density to be a local function of the fields and their derivatives, the vertex function is diagonal in position space, $ \mathcal{D}_{ac}[\phi_e](x;y)\rightarrow \mathcal{D}_{ac}[\phi_e](x)\delta^D(x\!-\!y)$, and the equations of motion~(\ref{prop eq of motion}) and~(\ref{symmetric prop eq of motion}) simplify to, \begin{eqnarray} \mathcal{D}_{ac}[\phi_e](x)i\Delta_{cb}[\phi_e](x;x') &=& \delta_{ab} i \delta^{D} (x\!-\!x'),\\ i\Delta_{bc}[\phi_e](x;x')\overleftarrow{\mathcal{D}}_{ca}[\phi_e](x') &=& \delta_{ab} i \delta^{D} (x\!-\!x'). \label{integrated prop eq of motion} \end{eqnarray} It is worth stressing that the Feynman propagator satisfies both of the symmetric equations of motion. In the following, when computing the gradient expansion of the propagator, we will verify that this symmetry requirement remains satisfied.\\ As we have already mentioned, the one-loop corrections to the effective action cannot be calculated exactly, and one has to resort to an expansion in the number of derivatives of the fields. Considering for simplicity the case of a single scalar field, the derivative expansion of the effective action in real space yields, \begin{equation} \Gamma[\phi]=\int d^4x\Big[-V_{\rm eff}(\phi)+\frac{1}{2}Z(\phi)(\partial_{\mu} \phi) (\partial^{\mu} \phi) +{\cal O} (\partial^4)\Big] , \end{equation} where ${\cal O} (\partial^4)$ stands for terms with four or more derivatives.\\ The zeroth-order term of this expansion is given by the effective potential of the theory, whose analytic expression can be found by assuming space-time independent background fields. Indeed, with such background fields, it is possible to solve for the functional inverse of $\mathcal{D}_{ab}$ in equation \eqref{prop eq of motion} and obtain the effective potential to a fixed loop order. 
\\ At one-loop order, the effective potential is divergent and ought to be renormalized, for example by employing dimensional regularization. In the MS scheme, the well-known result for a single scalar field is~\cite{Coleman-Weinberg}: \begin{equation} \Gamma^{(1)}_{\rm ren}[\phi,\mu]= - \int d^4x V_{\rm eff}^{(1)}(\phi,\mu) \,,\qquad V_{\rm eff}^{(1)}(\phi,\mu) = \frac{m^4(\phi)}{64 \pi^2} \bigg[ \log \bigg(\frac{m^2(\phi)}{4 \pi \mu^2} \bigg) +\gamma_E -\frac{3}{2} \bigg] \,,\qquad \label{effective action: single field: CW} \end{equation} with $m^2(\phi)={\rm d}^2V^{(0)}/{\rm d}\phi^2$ the tree-level field-dependent mass of the field, $\mu$ the renormalisation scale and $\gamma_E=-\psi(1)\simeq 0.57\dots$ the Euler-Mascheroni constant. \\ When truncated at a fixed loop order, perturbative calculations produce results which depend on the scale $\mu$. This breaking of scaling symmetry is unphysical, and can be traced back to the local counterterms needed to renormalize the primitive results of perturbative calculations, which diverge in $D=4$. This scale dependence can be removed from the effective action~(\ref{effective action: single field: CW}) by constructing the renormalisation group (RG) improved effective action $\Gamma^{(1)}_{\rm RG}[\phi,\mu]$, which resums the leading logarithms from all loops. One can obtain the RG-improved effective action {\it e.g.} by imposing the Callan-Symanzik equation on the perturbative action. The resulting RG-improved effective action, $\Gamma^{(1)}_{\rm RG}[\phi,\mu]$, superficially contains a dependence on $\mu$, but this dependence just labels an equivalence class, {\it i.e.} the one-parameter family of physically equivalent effective actions. \section{Midpoint expansion of the propagator} The approach that we employ consists of writing the propagator in Wigner space and then performing a derivative expansion. 
In translationally invariant systems (such as the Minkowski vacuum or thermal states), the propagator $i\Delta(x;y)$ only depends on the relative coordinate $r=x-y$. Performing a Fourier transform with respect to $r$ -- also known as a Wigner transform -- yields a momentum space propagator, which can be used for an easy evaluation of the one-loop effective action in equation~(\ref{1 loop effective action 2}), and the result can be expressed in terms of the effective potential, see equation~(\ref{effective action: single field: CW}). On the other hand, when the dependence of the propagator on the space-time coordinates is weak, it is still useful to perform the Wigner transform and use it as the starting point for a derivative expansion. We start by considering the set of linearly independent coordinates composed of the relative coordinate $r$ and the midpoint coordinate $X=(x+y)/2$. To transform the equation of motion of the propagator to Wigner space, we use the following general relation \cite{Prokopec-Diamond,Prokopec:2004ic}: \begin{equation} \int d^D(x-y) e^{ip\cdot(x-y)} \int d^Dz A(x;z)B(z;y)=e^{-i \diamond}\{A(X;p),B(X;p)\}, \label{wigner transform with diamond} \end{equation} where the diamond operator acts as: \begin{equation} \diamond \{A(X;p),B(X;p)\}=\frac{1}{2}(\partial_X^A \cdot \partial_p^B-\partial_X^B \cdot \partial_p^A)A(X;p)B(X;p). \end{equation} This relation is useful since it gives us a starting point for a midpoint expansion of the propagator. Indeed, we first transform the propagator equation of motion to the linearly independent coordinates $(X;r)$ and then to the other set of linearly independent coordinates composed of the midpoint and Wigner momentum $(X;p)$.\\ We are considering the case of a single scalar field; thus we have: \begin{equation} \begin{split} A(x;z)&=(\partial_x^2-m^2(x))\delta^D(x-z),\\ B(z;y)&=i\Delta(z;y), \end{split} \end{equation} where $m^2(x)$ is the tree-level mass, which depends on the background value of the field. 
In the case of a translationally invariant background field, we could easily use the equation of motion to find the functional inverse of the two-point vertex function and, in turn, the one-loop effective potential. Considering instead a space-time dependent background field, we can write the equation of motion of the propagator in Wigner space as: \begin{equation} (-p^2-m^2(X))e^{-\frac{i}{2} (\overleftarrow{\partial_X} \cdot \overrightarrow{\partial_p}-\overleftarrow{\partial_p} \cdot \overrightarrow{\partial_X})}i\Delta(X;p)=i. \label{Derivative expansion: propagator} \end{equation} This representation of the propagator is useful when the propagator depends weakly on the midpoint coordinate $X$. To make that more precise, let us introduce scales $X_0$ and $p_0$, which characterize the length scale over which the propagator changes significantly, and the typical momentum of these excitations~\footnote{Rigorously speaking, the propagator depends on all momenta, with no upper bound. However, the propagator should be understood as a distribution, meaning that its action on test functions is defined by integrating over the momenta, and in such integrals typically very high momenta do not significantly contribute, as their effects are generally cancelled by rapid oscillations. When these oscillations are absent, the effects of high momentum modes are cancelled by the counterterms needed to renormalize the object of interest.}. The gradient expansion in Eq.~(\ref{Derivative expansion: propagator}) is then well defined if $p_0 X_0 \gg \hbar$, and can be viewed as a generalization of the WKB expansion in quantum mechanics to quantum field theory. The gradient expansion fails to capture the effects that occur when $p_0 X_0 \sim \hbar$, the most prominent being the generation of particles by rapidly varying background fields. 
Taylor expanding the exponential in Eq.~(\ref{Derivative expansion: propagator}) and applying the derivatives on the left-hand side, the previous equation becomes: \begin{equation} \Big[\!\!-\!p^2\!-\!m^2(X)\!-\!i p \cdot \partial_X \!+\!\frac{i}{2} (\partial_X m^2(X))\cdot\partial_p \!+\!\frac{1}{4} \partial^2_X \!+\!\frac{1}{8} (\partial^X_\mu\partial^X_\nu m^2(X)) \partial_p^\mu\partial_p^\nu\Big]i\Delta(X;p)=i ,\; \label{eom in x} \end{equation} where the cross term at quadratic order vanishes, because the mixed $X$--$p$ derivative of $-p^2-m^2(X)$ is zero.\\ We can now also transform the symmetric equation of motion \eqref{symmetric prop eq of motion} into Wigner space using equation \eqref{wigner transform with diamond} and, as before, keep only terms up to second order in the derivative expansion. Interestingly, we obtain a similar equation to before, but with the imaginary terms carrying opposite signs: \begin{equation} \Big[i p \cdot \partial_X -\frac{i}{2} (\partial_X m^2(X))\cdot\partial_p \Big]i\Delta(X;p)=0. \label{eom in y} \end{equation} This is consistent with the fact that the propagator must satisfy both of the symmetric equations of motion, in which the real parts of the operator remain unchanged, while the imaginary terms flip signs. 
We can now exploit the expansion of the inverse matrix to obtain a derivative expansion of the propagator: \begin{equation} \begin{split} i\Delta(X,p)=i\Delta^{(0)}-i(\Delta^{(0)}\mathcal{D}^{(1)} \Delta^{(0)} )+i\big[\!- \Delta^{(0)}\mathcal{D}^{(2)} \Delta^{(0)} + \Delta^{(0)}(\mathcal{D}^{(1)} \Delta^{(0)} )^2\big]+\cdots \quad \end{split} \end{equation} where we keep terms up to second order in the number of derivatives. At zeroth order, it is easy to see that: \begin{equation} \begin{split} &\mathcal{D}^{(0)}\Delta^{(0)}=1,\\ \end{split} \label{propagator expansion terms1} \end{equation} while the first-order correction is given by: \begin{equation} \mathcal{D}^{(0)}i\Delta^{(1)}=-i \mathcal{D}^{(1)} \Delta^{(0)}. \end{equation} At this point, the first-order term of the inverse propagator has two expressions, coming from \eqref{eom in x} and \eqref{eom in y}, respectively: \begin{equation} \begin{split} \mathcal{D}^{(1)}&=-i p \cdot \partial_X +\frac{i}{2} (\partial_X m^2(X))\cdot\partial_p ,\\ \tilde{\mathcal{D}}^{(1)}&=i p \cdot \partial_X -\frac{i}{2} (\partial_X m^2(X))\cdot\partial_p. \end{split} \label{linear inverse propagator} \end{equation} Considering the first expression, we can explicitly compute the first-order correction to the propagator from: \begin{equation} \begin{split} i \mathcal{D}^{(1)}\Delta^{(0)}=&\Big[-i p \cdot \partial_X +\frac{i}{2} (\partial_X m^2(X))\partial_p\Big] \; \frac{i}{-p^2-m^2(X)}\\ =&\frac{p \cdot(\partial_X m^2(X))}{(-p^2-m^2(X))^2} -\frac{p \cdot(\partial_X m^2(X))}{(-p^2-m^2(X))^2} =0, \end{split} \end{equation} and see that it vanishes. Consistently, the same result holds if we consider the $\tilde{\mathcal{D}}^{(1)}$ operator. We expect a similar result for all terms with an odd number of derivatives in the gradient expansion. 
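The cancellation in $\mathcal{D}^{(1)}\Delta^{(0)}=0$ can be reproduced symbolically; a small sympy check (one space-time and one momentum variable, generic $m^2(X)$):

```python
import sympy as sp

X, p = sp.symbols('X p', real=True)
m2 = sp.Function('m2')(X)

Delta0 = 1/(-p**2 - m2)  # zeroth-order Wigner-space propagator

# D^(1) = -i p d_X + (i/2)(d_X m^2) d_p, applied to Delta0:
D1_Delta0 = -sp.I*p*sp.diff(Delta0, X) + sp.I/2*sp.diff(m2, X)*sp.diff(Delta0, p)

result = sp.simplify(D1_Delta0)
```

The two contributions cancel identically, in agreement with the computation above.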
Indeed, we know that terms with an odd number of derivatives have opposite signs in the two symmetric equations of motion of the propagator, and that the propagator must simultaneously satisfy both equations.\\ As a consequence, the expansion of the propagator is simply given by: \begin{equation} i \Delta= i \Delta^{(0)}+i \Delta^{(2)}+\cdots \end{equation} where only terms with an even number of derivatives are present.\\ We are now ready to use the above result to obtain the gradient corrections of the effective action. The quantum corrections at one-loop order are given by the functional determinant of the inverse propagator. In the absence of a linear correction to the propagator, this is given by: \begin{equation} \begin{split} \Gamma^{(1)}=\frac{i}{2} \text{Tr}\log(\Delta^{-1}) =&-\frac{i}{2} \text{Tr}\log(\Delta^{(0)}+\Delta^{(2)}+...)\\ =&-\frac{i}{2} \big[\text{Tr}\log(\Delta^{(0)}) + \text{Tr}(\mathcal{D}^{(0)}\Delta^{(2)} )+\cdots\big]. \end{split} \end{equation} The first term simply corresponds to the effective potential, while the second contains the one-loop correction to the kinetic term. It can be computed knowing that the quadratic contribution to the propagator is given by: \begin{equation} \mathcal{D}^{(0)}i\Delta^{(2)}=-i\mathcal{D}^{(2)} \Delta^{(0)}, \end{equation} and that $\mathcal{D}^{(2)}$ has a unique expression in the two symmetric equations of motion of the propagator. After performing a Wick rotation to Euclidean space, the second-derivative term can be integrated in momentum space and yields: \begin{equation} \begin{split} -\frac{i}{2} \text{Tr}(\mathcal{D}^{(0)}\Delta^{(2)} )=\frac{1}{(4\pi)^{D/2}} \int d^Dx \bigg[&\!-\frac{1}{8}(\partial_{\mu}m^2)(\partial^{\mu}m^2) (m^2)^{-3+\frac{D}{2}} \Gamma\Big(3\!-\!\frac{D}{2}\Big) \\ &+\frac{1}{8} (\partial_{\mu}\partial^{\mu}m^2)(m^2)^{-2+\frac{D}{2}} \Gamma\Big(2\!-\!\frac{D}{2}\Big)\bigg] =0 . 
\end{split} \label{final result: single field} \end{equation} We find that, integrating the second term by parts, the two terms exactly cancel, giving a vanishing result.\\ This result is in disagreement with the literature~\cite{Fraser,Aitchison-Fraser,Aitchison-Fraser2,Chan,Iliopoulos-Itzykson-Martin}, where the gradient expansion is computed using different methods and finite second-order gradient corrections are found for a single scalar field. Unlike previous works, our computation relies on the fact that both symmetric equations of motion of the propagator remain satisfied after the gradient expansion. \section{Generalisation to multiple fields with mass mixing} We now generalise the previous result to the case of multiple scalar fields with canonical kinetic terms and mixed mass terms. In this case, the two symmetric equations of motion of the propagator are: \begin{equation} \begin{split} (\mathbb{1} \partial_x^2 -\mathbb{M}^2(x))_{ab} i\Delta(x,y)_{bc}&=i\delta_{ac}\delta^D(x-y),\\ i\Delta(x,y)_{cb}(\mathbb{1} \overleftarrow{\partial}_y^2 -\mathbb{M}^2(y))_{ba} &=i\delta_{ac}\delta^D(x-y), \end{split} \label{two symmetric equations} \end{equation} where $\mathbb{M}^2$ is the mass matrix, which may contain off-diagonal entries and is taken to be real and symmetric. It is obtained from the tree-level potential as $\mathbb{M}^2_{ab}=\frac{\partial^2 V^{(0)}}{\partial \phi_b \partial \phi_a}$ and can contain the tree-level masses as well as field-dependent contributions coming from higher-order self-interactions of the fields. The two equations must be supplemented by the symmetry property of the Feynman propagator~(\ref{propagator: symmetry requirement}), which in the multifield case cannot be derived from Eqs.~(\ref{two symmetric equations}).\\ In the case of translation-invariant background fields, the mass matrix can easily be diagonalised and the effective potential is obtained by summing over the diagonal entries. 
For space-time dependent fields, the situation is more involved and the result of the previous section does not carry over to the case of mixed flavours.\\ Similarly to the single-field case, we can now transform the two equations of motion into Wigner space using~\eqref{wigner transform with diamond}: \begin{align} \Big[-\mathbb{1}p^2-\mathbb{M}^2(X)-i\mathbb{1} p \cdot \partial_X +\frac{i}{2} (\partial_X \mathbb{M}^2)\cdot \partial_p& \\ \nonumber +\frac{1}{4}\mathbb{1} \partial^2_X +\frac{1}{8} (\partial_{\mu} \partial_{\nu}\mathbb{M}^2)\partial_{p_{\mu}}\partial_{p_{\nu}}\Big] i\Delta(X;p)&=i\mathbb{1},\\ i\Delta(X;-p)^T\Big[-\mathbb{1}p^2-\mathbb{M}^2(X)+i\mathbb{1} p \cdot \overleftarrow{\partial_X} -\frac{i}{2} (\partial_X \mathbb{M}^2)\cdot \overleftarrow{\partial_p}&\\ \nonumber +\frac{1}{4}\mathbb{1} \overleftarrow{\partial}^2_X +\frac{1}{8} (\partial_{\mu} \partial_{\nu}\mathbb{M}^2)\overleftarrow{\partial}_{p_{\mu}}\overleftarrow{\partial}_{p_{\nu}}\Big]&=i\mathbb{1}, \end{align} where the signs of the single-derivative terms match those of the single-field equation \eqref{eom in x}. At zeroth order we have: \begin{equation} \begin{split} (-\mathbb{1}p^2-\mathbb{M}^2)_{ab}i\Delta^{(0)}_{bc} &=i\delta_{ac},\\ i[\Delta^{(0)}]^T_{ab}(-\mathbb{1}p^2-\mathbb{M}^2)_{bc}&=i\delta_{ac}. \end{split} \end{equation} The solutions of the two equations give the zeroth-order propagator: \begin{equation} \Delta^{(0)}_{ab}=[(-\mathbb{1}p^2-\mathbb{M}^2)^{-1}]_{ab}, \end{equation} where the inverse is understood in the matrix sense. 
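The gradient corrections below repeatedly use the matrix-inverse differentiation rule $dA=-A\,(dA^{-1})\,A$. A quick symbolic sanity check for a generic parameter-dependent $2\times 2$ matrix (the functions \texttt{a}, \texttt{b}, \texttt{c}, \texttt{d} are placeholders):

```python
import sympy as sp

s = sp.Symbol('s', real=True)
a, b, c, d = (sp.Function(n)(s) for n in 'abcd')

Ainv = sp.Matrix([[a, b], [c, d]])  # plays the role of the inverse propagator
A = Ainv.inv()                      # plays the role of the propagator

# dA + A (dA^{-1}) A should vanish identically
diff_matrix = (A.diff(s) + A*Ainv.diff(s)*A).applyfunc(sp.simplify)
```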
Similarly, the first-order corrections satisfy the following two symmetric equations: \begin{equation} \begin{split} \Delta^{(1)}(X;p)&=\Delta^{(0)}\Big[i\mathbb{1}p \cdot \partial_X -\frac{i}{2}(\partial_X \mathbb{M}^2)\cdot \partial_p\Big] \Delta^{(0)},\\ [\Delta^{(1)}(X;-p)]^{T}&=\Delta^{(0)}\Big[-i\mathbb{1}p \cdot \overleftarrow{\partial}_X +\frac{i}{2}(\partial_X \mathbb{M}^2)\cdot \overleftarrow{\partial}_p\Big] \Delta^{(0)}, \end{split} \end{equation} which can be evaluated by exploiting the relation $dA=-A (dA^{-1}) A$, with $A$ a generic invertible matrix and $A^{-1}$ its inverse. The matrices $\partial_X \mathbb{M}^2$ and $\Delta^{(0)}$ in general do not commute, so from the first equation we obtain: \begin{equation} \Delta^{(1)}(X;p)=\Delta^{(0)}[\Delta^{(0)};ip \cdot (\partial_X \mathbb{M}^2)] \Delta^{(0)}, \end{equation} {\it i.e.} we can express the first-order correction to the propagator in terms of the commutator between $\partial_X\mathbb{M}^2$ and $\Delta^{(0)}$. It is therefore clear that the first-order corrections vanish only if the two matrices commute. Moreover, solving the second symmetric equation, we obtain exactly the same result; the two expressions are related by transposition in flavour and a flip of the sign of the momentum.\\ We can now proceed to the second-order corrections, which are given by: \begin{equation} \begin{split} &\Delta^{(2)}(X;p)=-\Delta^{(0)} \mathcal{D}^{(2)}\Delta^{(0)}+\Delta^{(0)}(\mathcal{D}^{(1)}\Delta^{(0)})^2\\ &=-\Delta^{(0)} \bigg(\frac{1}{4} \mathbb{1} \partial_X^2+\frac{1}{8} (\partial_{\mu} \partial_{\nu} \mathbb{M}^2)\partial_{p_{\mu}} \partial_{p_{\nu}} \bigg)\Delta^{(0)} + \Delta^{(0)}\bigg([\Delta^{(0)};ip \cdot (\partial_X \mathbb{M}^2)] \Delta^{(0)}\bigg)^2. 
\end{split} \label{2order propagator0} \end{equation} As before, we can find a solution for the second-order correction of the propagator by exploiting the relation $d^2A=-A (d^2A^{-1}) A+2A (dA^{-1})A(dA^{-1})A$ and taking into account that $\partial_{\mu} \partial_{\nu} \mathbb{M}^2$ and $\Delta^{(0)}$ do not commute. This yields: \begin{equation} \begin{split} \Delta^{(2)}(X;p)=&-\frac{1}{2} (\Delta^{(0)})^2 (\partial_X \mathbb{M}^2)\Delta^{(0)}(\partial_X \mathbb{M}^2)\Delta^{(0)}-\frac{1}{4} (\Delta^{(0)})^2 (\partial^2_X \mathbb{M}^2)\Delta^{(0)}\\ &-\Delta^{(0)} p^{\mu} p^{\nu} (\partial_{\mu} \partial_{\nu}\mathbb{M}^2)(\Delta^{(0)})^3-\frac{1}{4} \Delta^{(0)} (\partial^2_X \mathbb{M}^2)(\Delta^{(0)})^2\\ &-p^{\mu} p^{\nu} \Delta^{(0)} [\partial_{\mu}\mathbb{M}^2;\Delta^{(0)}]\Delta^{(0)}[\partial_{\nu}\mathbb{M}^2;\Delta^{(0)}]\Delta^{(0)}. \end{split} \label{2order propagator} \end{equation} It is worth highlighting that the second-order correction to the propagator in equation \eqref{2order propagator} is not transpose invariant. Since this ambiguity turns out to be immaterial for the construction of the one-loop effective action, we postpone a rigorous derivation of the propagator to another occasion. \begin{comment} For the purposes of this paper we adopt a symmetrized convention of how the second-order derivative operator acts in equations~(\ref{2order propagator0}): \begin{equation} \begin{split} \Delta^{(2)}(X;p)=&-\Delta^{(0)} \bigg(\frac{1}{4} \mathbb{1} \overleftarrow{\partial_X}\cdot\partial_X +\frac{1}{8} (\partial_{\mu} \partial_{\nu} \mathbb{M}^2)\overleftarrow{\partial}_{p_{\mu}} \partial_{p_{\nu}} \bigg)\Delta^{(0)} \\ &+ \Delta^{(0)}\bigg([\Delta^{(0)};ip \cdot (\partial_X \mathbb{M}^2)] \Delta^{(0)}\bigg)^2. 
\end{split} \label{2order propagator0B} \end{equation} Using the convention in equation \eqref{2order propagator0B} the second-order corrections to the propagator become: \begin{equation} \begin{split} \Delta^{(2)}(X;p)=&-\frac{1}{4} \Delta^{(0)} (\partial_X \mathbb{M}^2)(\Delta^{(0)})^2(\partial_X \mathbb{M}^2)\Delta^{(0)}-\frac{1}{2} p^{\mu} p^{\nu} (\Delta^{(0)})^2 (\partial_{\mu} \partial_{\nu}\mathbb{M}^2)(\Delta^{(0)})^2\\ &-p^{\mu} p^{\nu} \Delta^{(0)} [\partial_{\mu}\mathbb{M}^2;\Delta^{(0)}]\Delta^{(0)}[\partial_{\nu}\mathbb{M}^2;\Delta^{(0)}]\Delta^{(0)}, \label{2order propagator2} \end{split} \end{equation} which is transpose invariant, as expected. It is possible to check that the two different expressions in equations \eqref{2order propagator} and \eqref{2order propagator2} lead to the same corrections for the effective action. Thus, we choose to enforce transposition invariance and adopt equation \eqref{2order propagator2} in the following calculations.\\ \end{comment} \medskip As before, we can compute the corresponding corrections to the one-loop effective action, starting from the general expression for real scalar fields and expanding the propagator in the number of derivatives. Unlike in the single-field case, the first-order corrections to the propagator do not vanish; thus the complete expression for the one-loop effective action is: \begin{equation} \begin{split} \Gamma^{(1)}&=-\frac{i}{2} \Big[\text{Tr}\log(\Delta^{(0)})+\text{Tr}(\mathcal{D}^{(0)}\Delta^{(1)} ) + \text{Tr}(\mathcal{D}^{(0)}\Delta^{(2)} )-\frac{1}{2} \text{Tr}(\mathcal{D}^{(0)}\Delta^{(1)} )^2+... \Big]. 
\label{gradient expansion effective action} \end{split} \end{equation} In particular, the first-order correction is given by: \begin{equation} -\frac{i}{2} \text{Tr}(\mathcal{D}^{(0)}\Delta^{(1)} ) =-\frac{i}{2} \int d^DX \int \frac{d^Dp}{(2 \pi)^D} \text{tr}\bigg([\Delta^{(0)};ip \cdot (\partial_X \mathbb{M}^2)] \Delta^{(0)} \bigg)=0. \end{equation} The integrand is the trace of a commutator multiplied by $\Delta^{(0)}$ and therefore vanishes identically by the cyclicity of the trace; equivalently, since $\Delta^{(0)}$ depends on the momentum only through $p^2$, the integrand is odd in $p$ and the momentum integral vanishes over the symmetric domain.\\ We can proceed with the second-order corrections: \begin{equation} \begin{split} &\text{Tr}(\mathcal{D}^{(0)}\Delta^{(2)} )-\frac{1}{2} \text{Tr}(\mathcal{D}^{(0)}\Delta^{(1)} )^2=\text{Tr}\bigg(-\frac{1}{2} (\partial^2_X \mathbb{M}^2)(\Delta^{(0)})^2\\ &-\frac{1}{2} (\partial_X \mathbb{M}^2)\Delta^{(0)}(\partial_X \mathbb{M}^2) (\Delta^{(0)})^2-p^{\mu} p^{\nu} (\partial_{\mu} \partial_{\nu}\mathbb{M}^2)(\Delta^{(0)})^3\\ &-\frac{1}{2}p^{\mu} p^{\nu} [\partial_{\mu}\mathbb{M}^2;\Delta^{(0)}]\Delta^{(0)}[\partial_{\nu}\mathbb{M}^2;\Delta^{(0)}]\Delta^{(0)}\bigg). \label{2 order ea multifield} \end{split} \end{equation} We perform the computations for a generic number of scalar fields with mass mixing and do not assume a particular form for the mass matrix $\mathbb{M}^2$. Firstly, we diagonalise the zeroth-order propagator using a suitable rotation matrix $\mathcal{R}$. With a diagonal propagator, the momentum integral can be performed in the standard way. We then find it convenient to separate the sum over diagonal and off-diagonal terms and to transform all remaining mass matrices to the diagonal basis. Depending on the term in equation \eqref{2 order ea multifield}, we have to deal with first- or second-order space-time derivatives of $\mathbb{M}^2$. Consequently, the diagonalisation procedure introduces derivatives of the rotation matrices. It is then useful to define the matrix product $T=\mathcal{R}(\partial_X \mathcal{R}^T)$. 
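The first-order trace and the commutator structure can be illustrated numerically. A small numpy sketch with arbitrary symmetric $2\times 2$ matrices standing in for $\mathbb{M}^2$ and $p\cdot\partial_X\mathbb{M}^2$ (the numerical values are purely illustrative):

```python
import numpy as np

M2 = np.array([[0.5, 0.2], [0.2, -0.3]])   # hypothetical mass matrix M^2(X)
dM2 = np.array([[0.1, 0.4], [0.4, -0.2]])  # stands for p.(d_X M^2), also symmetric
p2 = 1.7

Delta0 = np.linalg.inv(-p2*np.eye(2) - M2)  # zeroth-order propagator

# First-order correction: Delta^(1) ~ Delta0 [Delta0, dM2] Delta0
comm = Delta0 @ dM2 - dM2 @ Delta0
Delta1 = Delta0 @ comm @ Delta0

# tr([Delta0, dM2] Delta0) vanishes identically by cyclicity of the trace
trace1 = np.trace(comm @ Delta0)
```

Note that $\Delta^{(1)}$ itself is non-zero here, since a generic $\partial_X\mathbb{M}^2$ does not commute with $\Delta^{(0)}$; only its trace against $\mathcal{D}^{(0)}$ vanishes.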
The detailed calculations can be found in Appendix A.\\ Using equation \eqref{second order appendix result} we can rearrange the terms and write the final expression for the second-order corrections of the effective action as: \begin{equation} \begin{split} &\text{Tr}(\mathcal{D}^{(0)}\Delta^{(2)} )-\frac{1}{2} \text{Tr}(\mathcal{D}^{(0)}\Delta^{(1)} )^2=\frac{i}{(4\pi)^{D/2}} \Gamma\bigg( 1-\frac{D}{2}\bigg) \int d^DX \sum_{i\neq k} T_{ik} T_{ki}\\ & \times \frac{2D(8-D)m_i^2 m_k^2+(48-14D+D^2)m_k^4+(D^2-2D)m_i^4}{m_i^2-m_k^2} \frac{(m_k^2)^{\frac{D}{2}-2}}{8D}, \label{second gradient general formula} \end{split} \end{equation} where $m_i^2$ are the space-time dependent eigenvalues of the diagonalised mass matrix.\\ Interestingly, we see that all the diagonal terms cancel and the second-order corrections to the effective action are given only by the sum over off-diagonal terms. This is consistent with the conclusions that we obtained in the single-field case.\\ It is important to remember that in four dimensions the Gamma function in equation \eqref{second gradient general formula} is divergent, so we ought to investigate whether this correction has to be renormalised with appropriate counterterms, as is usually done for the one-loop effective potential. As shown in Appendix B, the divergences in equation \eqref{second gradient general formula} cancel exactly, so we are left with finite second-order gradient corrections. Consequently, we can also write our result in a manifestly scale-independent way. Considering equation \eqref{D second order ea} and taking the limit $D\rightarrow 4$ yields: \begin{equation} \Gamma^{(1)}=\frac{1}{8(4\pi)^2} \int d^4X \sum_{i\neq k} T_{ik} T_{ki} \bigg[3m_i^2+ \frac{m_i^4+2m_i^2m_k^2}{m_i^2-m_k^2}\log\bigg(\frac{m_k^2}{m_i^2} \bigg)\bigg]. \label{4dim second order ea} \end{equation} This is our result for the second-order gradient correction of the one-loop effective action. 
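The $D\rightarrow 4$ limit can be checked numerically: evaluating the symmetrised summand of equation \eqref{second gradient general formula} at $D=4-\epsilon$ for small $\epsilon$ reproduces the bracket of the four-dimensional result. A sketch with hypothetical eigenvalues $m_i^2=2$, $m_k^2=1/2$ (factors of $(4\pi)^{-D/2}$ and $\mu$ are dropped, as they tend to $1$ in the comparison):

```python
import math

def F(D, mi2, mk2):
    # summand of the D-dimensional formula, without Gamma(1-D/2) and 1/(4 pi)^(D/2)
    num = 2*D*(8 - D)*mi2*mk2 + (48 - 14*D + D**2)*mk2**2 + (D**2 - 2*D)*mi2**2
    return num/(mi2 - mk2)*mk2**(D/2 - 2)/(8*D)

def bracket4(mi2, mk2):
    # bracket of the D=4 result (4dim second order ea)
    return 3*mi2 + (mi2**2 + 2*mi2*mk2)/(mi2 - mk2)*math.log(mk2/mi2)

mi2, mk2 = 2.0, 0.5
D = 4 - 1e-6

# -i/2 times i Gamma(1-D/2)(F + F_swapped); the 1/(D-4) poles cancel between orderings
lhs = 0.5*math.gamma(1 - D/2)*(F(D, mi2, mk2) + F(D, mk2, mi2))
rhs = (bracket4(mi2, mk2) + bracket4(mk2, mi2))/8
```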
The derivatives of the fields are hidden in the matrix product $T=\mathcal{R}(\partial_X \mathcal{R}^T)$. \subsection*{Application to a simple toy model} We are now interested in applying the previous result to a simple toy model of two real scalar fields with mass mixing. We consider the following action: \begin{equation} \mathcal{S}=\int d^Dx \bigg[ -\frac{1}{2}(\partial_{\mu}\phi)( \partial^{\mu}\phi)-\frac{1}{2}(\partial_{\mu}\psi) (\partial^{\mu}\psi)-V^{(0)}(\phi,\psi) \bigg], \label{L of 2 scalar fields with mass mixing} \end{equation} where $V^{(0)}$ is the tree-level potential of the theory. \\ Without assuming a specific expression for the tree-level potential, we have a tree-level mass matrix with the following generic form: \begin{equation} \mathbb{M}^2=\begin{pmatrix} m_{11}^2 & m_{12}^2\\ m_{12}^2&m_{22}^2\\ \end{pmatrix}, \label{tree-level mass matrix} \end{equation} where $m^2_{ab}=\frac{\partial^2 V^{(0)}}{\partial \phi_a \partial \phi_b}$. We can diagonalise the mass matrix using the following rotation matrix: \begin{equation} \mathcal{R}= \begin{pmatrix} \cos \theta& -\sin \theta \\ \sin \theta & \cos \theta\\ \end{pmatrix}, \end{equation} with: \begin{equation} \tan (2 \theta)=\frac{2m^2_{12}}{m^2_{22}-m^2_{11}}, \label{tan 2theta} \end{equation} and we find the following mass eigenvalues: \begin{equation} m_{\pm}^2=\frac{1}{2}\bigg(m_{11}^2+m_{22}^2 \pm \sqrt{\Delta}\bigg), \end{equation} with the discriminant $\Delta=m_{11}^4+4m_{12}^4-2m_{11}^2m_{22}^2+m_{22}^4$.\\ We can now compute the second-order gradient corrections to the effective action of this theory. 
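As a quick numerical check of the diagonalisation above (with hypothetical values for $m_{11}^2$, $m_{12}^2$, $m_{22}^2$), one can verify the rotation angle, the eigenvalues $m_\pm^2$, and the determinant identity $m_+^2 m_-^2=m_{11}^2m_{22}^2-m_{12}^4$, so that $m_-^2$ turns negative precisely when $m_{11}^2m_{22}^2<m_{12}^4$:

```python
import numpy as np

A, B, C = 1.0, 0.3, 2.0              # hypothetical m11^2, m12^2, m22^2
M2 = np.array([[A, B], [B, C]])

theta = 0.5*np.arctan2(2*B, C - A)   # tan(2 theta) = 2 m12^2 / (m22^2 - m11^2)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])

Md = R @ M2 @ R.T                    # should be diagonal
disc = A**2 + 4*B**2 - 2*A*C + C**2  # discriminant Delta (B = m12^2, so B^2 = m12^4)
m_plus = 0.5*(A + C + np.sqrt(disc))
m_minus = 0.5*(A + C - np.sqrt(disc))
```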
To do so, we apply equation \eqref{4dim second order ea}.\\ We can start by noticing that the product: \begin{equation} \begin{split} \mathcal{R}(\partial_{\mu}\mathcal R^T)=& \begin{pmatrix} \cos \theta& -\sin \theta \\ \sin \theta & \cos \theta\\ \end{pmatrix} \cdot \begin{pmatrix} \partial_{\mu}\cos \theta& \partial_{\mu}\sin \theta \\ -\partial_{\mu}\sin \theta & \partial_{\mu}\cos \theta\\ \end{pmatrix}\\ =&\begin{pmatrix} 0& 1 \\ -1 & 0\\ \end{pmatrix}\cdot (\partial_{\mu}\theta) \end{split} \end{equation} is an antisymmetric matrix and thus: \begin{equation} \mathcal{R}(\partial_{\mu}\mathcal R^T)\mathcal{R}(\partial^{\mu}\mathcal R^T)=-\mathbb{1}(\partial_{\mu} \theta)(\partial^{\mu} \theta), \end{equation} such that the condition of summing only over off-diagonal terms in equation \eqref{4dim second order ea} is automatically satisfied. Exploiting equation \eqref{4dim second order ea} and taking the trace, we obtain for the effective action: \begin{equation} \begin{split} &\Gamma^{(1)}=\frac{-1}{8(4\pi)^2} \int d^4x (\partial_{\mu} \theta)(\partial^{\mu} \theta) \bigg[3m_+^2+3m_-^2+ \frac{m_+^4+m_-^4+4m_+^2m_-^2}{m_+^2-m_-^2}\log\bigg(\frac{m_-^2}{m_+^2} \bigg)\bigg]\\ &=\frac{-1}{32(4\pi)^2} \int d^4x \frac{(\partial_{\mu}\tan 2\theta)^2}{(1+(\tan2\theta)^2)^2} \bigg[3m_+^2+3m_-^2+ \frac{m_+^4+m_-^4+4m_+^2m_-^2}{m_+^2-m_-^2}\log\bigg(\frac{m_-^2}{m_+^2} \bigg)\bigg], \end{split} \label{two fields 2gradient primitive} \end{equation} which, using \eqref{tan 2theta}, can be rewritten as: \begin{equation} \begin{split} \Gamma^{(1)}=\frac{-1}{8(4\pi)^2} & \int d^4x \frac{1}{\Delta^{5/2}}[(m_{11}^4+m_{22}^4+4m_{11}^2m_{22}^2-2m_{12}^4)\log\bigg(\frac{m_-^2}{m_+^2} \bigg)\\ & +3(m_{11}^2+m_{22}^2)\sqrt{\Delta}][(m_{11}^2-m_{22}^2)\partial_{\mu}m_{12}^2+m_{12}^2(\partial_{\mu}m_{22}^2-\partial_{\mu}m_{11}^2)]^2. 
\end{split} \label{two fields 2gradient final} \end{equation} Clearly, expression \eqref{two fields 2gradient primitive} is invariant under the exchange $m_+^2\leftrightarrow m_-^2$. This still holds in \eqref{two fields 2gradient final}: under $m_+^2\leftrightarrow m_-^2$, the sign change of the logarithm is compensated by $\sqrt{\Delta}\rightarrow -\sqrt{\Delta}$, such that the effective action remains invariant. Moreover, it is worth noting that, if the mass eigenvalue $m_-^2$ turns negative, {\it i.e.} if: \begin{equation} m_{11}^2m_{22}^2<m_{12}^4, \label{m_- neg} \end{equation} we obtain the logarithm of a negative quantity, which gives complex one-loop corrections to the kinetic term. Similarly, complex results are obtained in the case $\Delta<0$. \\ Finally, vanishing second-order kinetic corrections are found in the limit $m_{11}^2=m_{22}^2$: in this limit the right-hand side of equation \eqref{tan 2theta} diverges, so $\theta=\pi/4$ is constant and $\partial_\mu\theta=0$. We notice that for $m_{11}^2=m_{22}^2$ the action is invariant under the exchange of the two scalar fields $\phi \leftrightarrows \psi$. \section{Discussion and conclusion} In this paper, we performed the gradient expansion of the scalar sector of the quantum effective action at one-loop order. To do so, we considered the scalar propagator and expanded its equation of motion in Wigner space. We made use of the fact that the Feynman propagator is non-trivially constrained by an additional equation of motion~(\ref{integrated prop eq of motion}), which can be obtained, for example, by noticing that the propagator $i\Delta_{ab}(x;y)$ should be symmetric~(\ref{propagator: symmetry requirement}) under the exchange of points $x \leftrightarrows y$ in the single-field case, and under the exchange of points together with transposition in flavour $a \leftrightarrows b$ in the multi-field case. 
In contrast to previous works in the literature, a key part of our calculation is to verify that both symmetric equations of motion of the propagator remain satisfied after expanding in the number of derivatives. The kinetic corrections to the one-loop effective action are then obtained from the functional determinant of the expanded inverse propagator. Firstly, we performed the calculations for the simple case of a single scalar field with a generic tree-level potential. We find that the second-order gradient corrections vanish at one loop~(\ref{final result: single field}). This result is in disagreement with previous works \cite{Fraser,Chan,Iliopoulos-Itzykson-Martin}. Moreover, considering both symmetric equations of motion of the propagator, we are able to argue that all terms with an odd number of derivatives should also vanish. We explicitly verified this for the first-order corrections to the propagator. Furthermore, we generalised the calculations to the case of multiple scalar fields with canonical kinetic terms but with mass mixing in the tree-level potential. Considering again a generic tree-level mass matrix, we performed calculations similar to those in the one-field case. Interestingly, we find that the previous result does not carry over to the multi-field case. Indeed, we obtain non-vanishing one-loop kinetic corrections for the scalar fields. On the other hand, we find that the second-order corrections to the propagator are not manifestly transpose invariant. This is in contrast with the requirement that both symmetric equations of motion should be satisfied. Remarkably, at the level of the effective action this ambiguity is removed by the functional trace of the operators, so we find a unique, consistent one-loop kinetic correction~(\ref{4dim second order ea}). 
From this general result, we notice that, unlike the one-loop corrections to the potential, the second-order gradient corrections are finite and do not need to be renormalised with appropriate counterterms. This is worth highlighting, since it means that these gradient corrections do not introduce any renormalisation-scale dependence and therefore do not break scaling symmetry. Finally, as a simple application, we applied our result to the case of two real scalar fields with mass mixing~(\ref{two fields 2gradient final}). From this example, it is worth highlighting that the kinetic terms cannot be written in canonical form, since the second-order one-loop gradient corrections introduce kinetic mixing between the two scalar fields with a non-vanishing configuration-space curvature. \\ Our computations have a wide range of applicability to Early Universe phenomena in which quantum corrections cannot be neglected. For example, in Ref.~\cite{Canevarolo-Prokopec} we study the effect of gradient corrections on bubble nucleation during the electroweak phase transition in a conformal extension of the Standard Model. Indeed, to obtain the tunneling path from the false to the true vacuum state of the theory, the full effective action is needed. In turn, this requires a good estimate of the one-loop quantum corrections to the kinetic terms, besides the well-known ones for the potential. Similarly, we expect that many other instances in Cosmology where spatial or time gradients and multiple scalar fields are present, such as multifield inflationary models and various baryogenesis and leptogenesis models, will provide a wealth of applications for our result. \section{Acknowledgements} The authors would like to thank Tanja Hinderer and Elisa Chisari for comments and Yaseen Asad for independently checking significant parts of the calculations presented in this work \cite{Asad}. 
\appendix \section{Second-order gradient corrections in the multi-field case} In the following, we present the explicit computations of the second-order gradient corrections to the one-loop effective action, considering equation \eqref{2 order ea multifield} term by term in index notation. We define the matrix $N=\mathcal{R} (\partial^2_X \mathbb{M}^2)\mathcal{R}^{T}$. For the first term, we have to compute the following integral in momentum space: \begin{equation} \begin{split} \text{Tr}\Big(-\frac{1}{2} (\partial^2_X \mathbb{M}^2)(\Delta^{(0)})^2\Big)=&-\frac{1}{2}\int d^DX \int \frac{d^Dp}{(2\pi)^D} \text{tr} ((\partial^2_X \mathbb{M}^2)(\Delta^{(0)})^2)\\ =&-\frac{1}{2}\int d^DX \int \frac{d^Dp}{(2\pi)^D} \delta^{ij} N_{ik} (\Delta^{(0)}_d)^2_{kj}\\ =&-\frac{1}{2} \int d^DX \; \delta^{ij} N_{ij}\int \frac{d^Dp}{(2\pi)^D} \frac{1}{(-p^2-m_j^2)^2}\\ =&-\frac{i}{2}\frac{\Gamma(2-D/2)}{(4\pi)^{D/2}}\sum_i \int d^DX \; N_{ii} (m_i^2)^{\frac{D}{2}-2}, \end{split} \label{1 term} \end{equation} where $m_i^2$ are the eigenvalues of the diagonalised mass matrix and $\Delta^{(0)}_d$ is the diagonalised zeroth-order propagator.\\ With a similar procedure we can also calculate the second term: \begin{equation} -\int d^DX \int \frac{d^Dp}{(2\pi)^D} \text{tr} (p^{\mu} p^{\nu} (\partial_{\mu} \partial_{\nu}\mathbb{M}^2)(\Delta^{(0)})^3 ), \end{equation} which becomes: \begin{equation} \begin{split} &- \int d^DX \; \delta^{ij} N_{ij}\int \frac{d^Dp}{(2\pi)^D} \frac{p^2}{D(-p^2-m_j^2)^3}=\\ =&-i \frac{(D-2)\Gamma(1-D/2)}{8(4\pi)^{D/2}} \sum_i \int d^DX \; N_{ii} (m_i^2)^{\frac{D}{2}-2}. \end{split} \label{2 term} \end{equation} Using the property of the Gamma function $\Gamma(z+1)=z \Gamma(z)$, we see that the sum of \eqref{1 term} and \eqref{2 term} is simply: \begin{equation} -\frac{i}{4}\frac{\Gamma(2-D/2)}{(4\pi)^{D/2}}\int d^DX \; \delta^{ij} N_{ij} (m_j^2)^{\frac{D}{2}-2}. 
\label{1 2 terms} \end{equation} Integrating by parts, we transform the remaining mass matrix to the diagonal basis, and \eqref{1 2 terms} becomes: \begin{equation} \begin{split} &\frac{-i}{4(4\pi)^{D/2}} \Gamma\bigg(2-\frac{D}{2}\bigg) \int d^DX \; \sum_{i,k} \bigg( 2 T_{ik}T_{ki} (m_{i}^2)^{D/2-1}-2T_{ik}T_{ki} (m_{k}^2)(m_{i}^2)^{D/2-2}\\ &-\bigg(\frac{D}{2}-2\bigg) (\partial_X m_{i}^2)^2(m_{i}^2)^{D/2-3}\bigg), \end{split} \label{1 2 term partial int} \end{equation} where we have defined the matrix product $T=\mathcal{R}(\partial_X \mathcal{R}^T)$. It is now convenient to separate the sum over the diagonal terms from the off-diagonal ones, such that equation \eqref{1 2 term partial int} becomes: \begin{equation} \begin{split} &-\frac{i}{4(4\pi)^{D/2}} \Gamma\bigg(2-\frac{D}{2}\bigg) \frac{4-D}{2} \int d^DX \; \sum_{i=k} \bigg( (\partial_X m_{i}^2)^2(m_{i}^2)^{D/2-3}\bigg)\\ &+\frac{i}{(4\pi)^{D/2}} \Gamma\bigg(1-\frac{D}{2}\bigg)\frac{D-2}{4} \int d^DX \; \sum_{i\neq k} T_{ik}T_{ki}\bigg((m_{i}^2)^{D/2-1}- (m_{k}^2)(m_{i}^2)^{D/2-2}\bigg), \label{1 off-diagonal} \end{split} \end{equation} and we see that the rotation matrices contribute only to the off-diagonal terms.\\ Following the same procedure, we also compute the remaining terms containing first derivatives of the mass matrix. It is therefore useful to define $L=\mathcal{R}(\partial_X \mathbb{M}^2)\mathcal{R}^T$ and to separate again the sum over diagonal and off-diagonal terms. 
We have: \begin{equation} \begin{split} &\text{Tr}\Big(-\frac{1}{2} \Delta^{(0)} (\partial_X \mathbb{M}^2)\Delta^{(0)}(\partial_X \mathbb{M}^2)\Delta^{(0)}\Big)\\ &=-\frac{1}{2} \int d^DX \; \sum_{i,k} L_{ik} L_{ki} \int \frac{d^Dp}{(2\pi)^D} \frac{1}{(-p^2-m_k^2)}\frac{1}{(-p^2-m_i^2)^2}\\ &=-\frac{1}{2} \int d^DX \; \bigg( \sum_{i} (L_{ii})^2 \int \frac{d^Dp}{(2\pi)^D} \frac{1}{(-p^2-m_i^2)^3}\\ &+\sum_{i\neq k} L_{ik} L_{ki} \int \frac{d^Dp}{(2\pi)^D} \bigg(\frac{(m_i^2-m_k^2)^{-2}}{(-p^2-m_k^2)}-\frac{(m_i^2-m_k^2)^{-2}}{(-p^2-m_i^2)}+\frac{(m_i^2-m_k^2)^{-1}}{(-p^2-m_i^2)^2}\bigg) \bigg),\\ \end{split} \label{3 term} \end{equation} where in the last step we have also decomposed into partial fractions. Let us now focus on the summation over the diagonal part. As before, we can transform the mass matrix in the $L$ product to the diagonal basis. We find that: \begin{equation} \sum_{i} (L_{ii})^2 = \sum_{i} ([\mathcal{R}(\partial_X \mathbb{M}^2)\mathcal{R}^T]_{ii})^2=\sum_{i} ([\mathcal{R}(\partial_X (\mathcal{R}^T \mathbb{M}_d^2 \mathcal{R})\mathcal{R}^T]_{ii})^2= \sum_{i} (\partial_X m_{i}^2)^2. \end{equation} Performing the momentum integral of the diagonal term, we obtain the final expression: \begin{equation} \frac{i}{4(4\pi)^{D/2}} \Gamma\bigg(3-\frac{D}{2}\bigg) \int d^DX \; \sum_i \bigg( (\partial_X m_{i}^2)^2(m_{i}^2)^{D/2-3} \bigg). \label{2 diagonal} \end{equation} Exploiting again the properties of the Gamma function, we can sum the diagonal terms found so far in equations \eqref{1 off-diagonal} and \eqref{2 diagonal}. This yields: \begin{equation} \begin{split} &\frac{i}{4(4\pi)^{D/2}} \Gamma\bigg(3-\frac{D}{2}\bigg) \int d^DX \; \sum_i \bigg( (\partial_X m_{i}^2)^2(m_{i}^2)^{D/2-3} \bigg)\\ &-\frac{i}{4(4\pi)^{D/2}} \Gamma\bigg(2-\frac{D}{2}\bigg) \frac{4-D}{2} \int d^DX \; \sum_i \bigg( (\partial_X m_{i}^2)^2(m_{i}^2)^{D/2-3}\bigg) =0. 
\end{split} \end{equation} Therefore, the diagonal parts of the first three terms in equation \eqref{2 order ea multifield} do not contribute to the second-order corrections of the effective action. \\ On the other hand, we can perform similar computations for the off-diagonal part of the term in equation \eqref{3 term}. Performing the momentum integral yields: \begin{equation} \begin{split} \frac{i}{(4\pi)^{D/2}} \Gamma\bigg(1-\frac{D}{2}\bigg) &\int d^DX \; \sum_{i \neq k} T_{ik} T_{ki} \bigg( \frac{(m_i^2)^{\frac{D}{2}-1}-(m_k^2)^{\frac{D}{2}-1}}{2}\\ &+\frac{D-2}{4}(m_k^2-m_i^2)(m_i^2)^{\frac{D}{2}-2} \bigg). \label{2 off-diagonal} \end{split} \end{equation} \\ Finally, we can consider the last term in equation \eqref{2 order ea multifield}. In this case we have: \begin{equation} \begin{split} &\text{Tr}\bigg(-\frac{1}{2}p^{\mu} p^{\nu} [\partial_{\mu}\mathbb{M}^2,\Delta^{(0)}]\Delta^{(0)}[\partial_{\nu}\mathbb{M}^2,\Delta^{(0)}]\Delta^{(0)}\bigg)\\ &=\text{Tr}(-p^{\mu} p^{\nu}(\partial_{\mu}\mathbb{M}^2)(\Delta^{(0)})^2(\partial_{\nu}\mathbb{M}^2)(\Delta^{(0)})^2+p^{\mu} p^{\nu}(\partial_{\mu}\mathbb{M}^2)(\Delta^{(0)})^3(\partial_{\nu}\mathbb{M}^2)(\Delta^{(0)}))\\ &= \int d^DX \; \sum_{i\neq k} L_{ik} L_{ki} \int \frac{d^Dp}{(2\pi)^D} \frac{1}{D} \bigg(\frac{2m_i^2+m_k^2}{(m_k^2-m_i^2)^3(-p^2-m_i^2)}\\ &-\frac{2m_i^2+m_k^2}{(m_k^2-m_i^2)^3(-p^2-m_k^2)}+\frac{m_i^2+m_k^2}{(m_k^2-m_i^2)^2(-p^2-m_k^2)^2}\\ &-\frac{m_k^2}{(m_k^2-m_i^2)(-p^2-m_k^2)^3}+\frac{m_i^2}{(m_k^2-m_i^2)^2(-p^2-m_i^2)^2}\bigg). \end{split} \label{last term} \end{equation} It is worth noticing that this last term does not contribute any diagonal component. 
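The partial-fraction decompositions used above can be verified with sympy; for instance, for the combination appearing in equation \eqref{3 term} (writing $u=-p^2$ and using generic symbols $a=m_k^2$, $b=m_i^2$):

```python
import sympy as sp

u, a, b = sp.symbols('u a b', positive=True)  # u = -p^2, a = m_k^2, b = m_i^2

lhs = 1/((u - a)*(u - b)**2)                  # 1/((-p^2-m_k^2)(-p^2-m_i^2)^2)
rhs = ((b - a)**(-2)/(u - a)
       - (b - a)**(-2)/(u - b)
       + (b - a)**(-1)/(u - b)**2)

check = sp.simplify(lhs - rhs)
```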
\\ Performing the momentum integral of equation \eqref{last term}, we obtain: \begin{equation} \begin{split} &\frac{i}{(4\pi)^{D/2}} \Gamma\bigg( 1-\frac{D}{2}\bigg) \int d^DX \sum_{i\neq k} T_{ik} T_{ki} \bigg( \frac{2m_i^2+m_k^2}{m_k^2-m_i^2} \frac{(m_i^2)^{\frac{D}{2}-1}-(m_k^2)^{\frac{D}{2}-1}}{D}\\ &-(m_k^2-m_i^2)\frac{(4-D)(2-D)}{8D}(m_k^2)^{\frac{D}{2}-2}-(m_i^2+m_k^2)\frac{2-D}{2D}(m_k^2)^{\frac{D}{2}-2}\\ &-(m_i^2)\frac{2-D}{2D}(m_i^2)^{\frac{D}{2}-2}\bigg). \label{3 off-diagonal} \end{split} \end{equation} Finally, it is possible to sum all the off-diagonal terms contributing to the second-order corrections, which are given in equations \eqref{1 off-diagonal}, \eqref{2 off-diagonal} and \eqref{3 off-diagonal}. To do so, it is convenient to keep in mind that all the indices are summed, so in each term we can relabel $i \leftrightarrow k$ and exploit the fact that $T_{ik}T_{ki}$ is symmetric under the exchange of indices. Applying this, we notice that the sum of \eqref{1 off-diagonal} and \eqref{2 off-diagonal} is zero: \begin{equation} \begin{split} &\frac{i}{(4\pi)^{D/2}}\Gamma\bigg( 1-\frac{D}{2}\bigg) \int d^DX \sum_{i\neq k} T_{ik}T_{ki} \bigg( \frac{(m_i^2)^{\frac{D}{2}-1}-(m_k^2)^{\frac{D}{2}-1}}{2} \bigg)=0, \end{split} \end{equation} and we are left only with the contribution of \eqref{3 off-diagonal}: \begin{equation} \begin{split} &\text{Tr}(\mathcal{D}^{(0)}\Delta^{(2)} )-\frac{1}{2} \text{Tr}(\mathcal{D}^{(0)}\Delta^{(1)} )^2\\ &=\frac{i}{(4\pi)^{D/2}} \Gamma\bigg( 1-\frac{D}{2}\bigg) \int d^DX \sum_{i\neq k} T_{ik} T_{ki} \bigg( \frac{2m_i^2+m_k^2}{m_k^2-m_i^2} \frac{(m_i^2)^{\frac{D}{2}-1}-(m_k^2)^{\frac{D}{2}-1}}{D}\\ &+(m_i^2-m_k^2)\frac{(4-D)(2-D)}{8D}(m_k^2)^{\frac{D}{2}-2}-(2m_i^2+m_k^2)\frac{2-D}{2D}(m_i^2)^{\frac{D}{2}-2}\bigg). \end{split} \label{second order appendix result} \end{equation} This result is used in equation \eqref{second gradient general formula} in the main text.
\section{Manifestly scale-independent second-order gradient corrections} In this appendix, we investigate whether the second-order corrections to the one-loop effective action for multiple scalar fields are finite. Considering equation \eqref{second gradient general formula}, we can write the argument of the integral as: \begin{equation} \begin{split} &\frac{1}{2}\sum_{i\neq k} T_{ik} T_{ki} \frac{2D(8-D)m_i^2 m_k^2+(48-14D+D^2)m_k^4+(D^2-2D)m_i^4}{m_i^2-m_k^2} \frac{(m_k^2)^{\frac{D}{2}-2}}{8D}\\ &+\frac{1}{2}\sum_{i\neq k} T_{ik} T_{ki} \frac{2D(8-D)m_i^2 m_k^2+(48-14D+D^2)m_i^4+(D^2-2D)m_k^4}{m_k^2-m_i^2} \frac{(m_i^2)^{\frac{D}{2}-2}}{8D}. \end{split} \end{equation} We can then expand the mass terms as $(m^2)^{\frac{D}{2}-2} \approx \mu^{D-4}[1+\frac{D-4}{2}\log(\frac{m^2}{\mu^2})]$ and sum the first two terms, such that the previous expression becomes: \begin{equation} \begin{split} &\sum_{i\neq k} T_{ik} T_{ki}\bigg[ \frac{3\mu^{D-4}}{4D} (D-4)(m_k^2+m_i^2)\\ &+\frac{2D(8-D)m_i^2 m_k^2+(48-14D+D^2)m_k^4+(D^2-2D)m_i^4}{m_i^2-m_k^2} \frac{\mu^{D-4}}{16D} \frac{D-4}{2} \log \bigg(\frac{m_k^2}{\mu^2} \bigg)\\ &- \frac{2D(8-D)m_i^2 m_k^2+(48-14D+D^2)m_i^4+(D^2-2D)m_k^4}{m_i^2-m_k^2}\frac{\mu^{D-4}}{16D}\frac{D-4}{2} \log \bigg(\frac{m_i^2}{\mu^2} \bigg)\bigg]. \end{split} \end{equation} We now have the sum of three terms, each proportional to a factor of $D-4$, which cancels the divergence hidden in the Gamma function of equation \eqref{second gradient general formula}. From this result we can conclude that the second-order corrections to the effective action are finite, and there is no need to subtract divergences with counterterms. Furthermore, this assures us that our final result must be independent of the renormalisation scale $\mu$.
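For completeness, the expansion of the mass powers quoted above follows from a standard step in dimensional regularisation; writing the power as an exponential and keeping terms to first order in $D-4$ (a short derivation we add for clarity):

```latex
(m^2)^{\frac{D}{2}-2}
  = \mu^{D-4}\bigg(\frac{m^2}{\mu^2}\bigg)^{\frac{D-4}{2}}
  = \mu^{D-4}\exp\bigg[\frac{D-4}{2}\log\bigg(\frac{m^2}{\mu^2}\bigg)\bigg]
  \approx \mu^{D-4}\bigg[1+\frac{D-4}{2}\log\bigg(\frac{m^2}{\mu^2}\bigg)\bigg].
```

The $\mathcal{O}\big((D-4)^2\big)$ remainder cannot contribute, since it is multiplied by at most a simple pole of the Gamma function.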
Focusing on the last two logarithmic terms of the expanded expression above, we can manipulate them to make the $\mu$-independence manifest: \begin{equation} \begin{split} &\sum_{i\neq k} T_{ik} T_{ki} \frac{\mu^{D-4}}{16D}\frac{D-4}{2}\bigg[\frac{2D(8-D)m_i^2m_k^2}{m_i^2-m_k^2}\log \bigg(\frac{m_k^2}{m_i^2}\bigg)+D^2\frac{m_k^4+m_i^4}{m_i^2-m_k^2}\log \bigg(\frac{m_k^2}{m_i^2}\bigg)\\ &+\frac{48-14D}{m_i^2-m_k^2}\bigg(m_k^4 \log \bigg(\frac{m_k^2}{\mu^2}\bigg)-m_i^4 \log \bigg(\frac{m_i^2}{\mu^2}\bigg) \bigg)\\ &+\frac{2D}{m_i^2-m_k^2}\bigg(m_k^4 \log \bigg(\frac{m_i^2}{\mu^2}\bigg)-m_i^4 \log \bigg(\frac{m_k^2}{\mu^2}\bigg) \bigg)\bigg]. \end{split} \label{mu-ind manifest} \end{equation} To handle the logarithms, we introduce a second arbitrary mass scale, which we call $\nu$, and use it to rewrite the last two terms of expression \eqref{mu-ind manifest} as: \begin{equation} \begin{split} &\frac{48-14D}{m_i^2-m_k^2}\nu^4\bigg(\frac{m_k^4}{\nu^4} \log \bigg(\frac{m_k^2}{\mu^2}\bigg)-\frac{m_i^4}{\nu^4} \log \bigg(\frac{m_i^2}{\mu^2}\bigg) \bigg)\\ &+\frac{2D}{m_i^2-m_k^2}\nu^4\bigg(\frac{m_k^4}{\nu^4} \log \bigg(\frac{m_i^2}{\mu^2}\bigg)-\frac{m_i^4}{\nu^4} \log \bigg(\frac{m_k^2}{\mu^2}\bigg) \bigg)\\ &=\frac{12(4-D)}{m_i^2-m_k^2}\nu^4 \log \bigg[ \bigg(\frac{m_k^2}{\mu^2}\bigg)^{\frac{m_k^4}{\nu^4}} \bigg(\frac{m_i^2}{\mu^2}\bigg)^{-\frac{m_i^4}{\nu^4}} \bigg]-\frac{2D(m_i^4+m_k^4)}{m_i^2-m_k^2}\log \bigg(\frac{m_k^2}{m_i^2}\bigg).
\end{split} \end{equation} Note that only the first term still depends on the two arbitrary mass scales $\mu$ and $\nu$.\\ Putting together all the terms, we find the following complete result: \begin{equation} \begin{split} &\sum_{i\neq k} T_{ik} T_{ki}\mu^{D-4} (D-4)\bigg[ \frac{3}{4D} (m_k^2+m_i^2)+\frac{1}{32D}\bigg(\frac{2D(8-D)m_i^2 m_k^2}{m_i^2-m_k^2} \log\bigg(\frac{m_k^2}{m_i^2} \bigg) \\ &-2D \frac{m_i^4+m_k^4}{m_i^2-m_k^2}\log\bigg(\frac{m_k^2}{m_i^2} \bigg) +D^2 \frac{m_i^4+m_k^4}{m_i^2-m_k^2} \log\bigg(\frac{m_k^2}{m_i^2} \bigg)\\ &+\frac{12D(4-D)}{m_i^2-m_k^2}\nu^4 \log\bigg(\frac{(m_k^2)^{\frac{m_k^4}{\nu^4}}}{(m_i^2)^{\frac{m_i^4}{\nu^4}}} (\mu^2)^{\frac{m_i^4-m_k^4}{\nu^4}} \bigg) \bigg) \bigg]. \end{split} \end{equation} It is worth highlighting that the $\nu$- and $\mu$-dependent term can be discarded in the limit $D \rightarrow 4$, so that, as expected, we are left with a result independent of any arbitrary renormalisation scale: \begin{equation} \begin{split} \Gamma^{(1)}=&\frac{1}{2(4\pi)^{D/2}} \Gamma\bigg( 1-\frac{D}{2}\bigg) \int d^DX \sum_{i\neq k} T_{ik} T_{ki} \frac{D-4}{4D} \bigg[3(m_k^2+m_i^2)\\ &+\frac{1}{8} \frac{(D^2-2D)(m_i^4+m_k^4)+2D(8-D)m_i^2m_k^2}{m_i^2-m_k^2}\log\bigg(\frac{m_k^2}{m_i^2} \bigg)\bigg], \end{split} \end{equation} which can be rewritten as: \begin{equation} \begin{split} \Gamma^{(1)}=&\frac{1}{(4\pi)^{D/2}} \frac{\Gamma\bigg( 3-\frac{D}{2}\bigg)}{2D(D-2)} \int d^DX \sum_{i\neq k} T_{ik} T_{ki} \\ &\times \bigg[6m_i^2+\frac{1}{8} \frac{(D^2-2D)2m_i^4+2D(8-D)m_i^2m_k^2}{m_i^2-m_k^2}\log\bigg(\frac{m_k^2}{m_i^2} \bigg)\bigg]. \label{D second order ea} \end{split} \end{equation} This is the final result for the second-order gradient corrections of the effective action in $D$ dimensions. As claimed, it is finite and manifestly scale-independent.
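As a cross-check (our own term-by-term evaluation, not part of the original derivation), setting $D=4$ in equation \eqref{D second order ea}, with $\Gamma(3-D/2)\to\Gamma(1)=1$ and $2D(D-2)\to 16$, the bracket reduces to $6m_i^2+\frac{1}{8}(16m_i^4+32m_i^2m_k^2)(m_i^2-m_k^2)^{-1}\log(m_k^2/m_i^2)$, giving the four-dimensional form:

```latex
\Gamma^{(1)} \xrightarrow{\;D\to 4\;} \frac{1}{16(4\pi)^{2}} \int d^4X \sum_{i\neq k} T_{ik} T_{ki}
\bigg[6m_i^2+\frac{2m_i^4+4m_i^2 m_k^2}{m_i^2-m_k^2}\log\bigg(\frac{m_k^2}{m_i^2}\bigg)\bigg].
```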
Title: Poisson-FOCuS: An efficient online method for detecting count bursts with application to gamma ray burst detection
Abstract: Gamma-ray bursts are flashes of light from distant exploding stars. Cube satellites that monitor photons across different energy bands are used to detect these bursts. There is a need for computationally efficient algorithms, able to run using the limited computational resource onboard a cube satellite, that can detect when gamma-ray bursts occur. Current algorithms are based on monitoring photon counts across a grid of different sizes of time window. We propose a new algorithm, which extends the recently developed FOCuS algorithm for online change detection to Poisson data. Our algorithm is mathematically equivalent to searching over all possible window sizes, but at half the computational cost of the current grid-based methods. We demonstrate the additional power of our approach using simulations and data drawn from the Fermi gamma-ray burst catalogue.
https://export.arxiv.org/pdf/2208.01494
\def\spacingset#1{\renewcommand{\baselinestretch}% {#1}\small\normalsize} \spacingset{1} { \title{\bf Poisson-FOCuS: An efficient online method for detecting count bursts with application to gamma ray burst detection.} \ifnotblind \author{ Kes Ward\\ STOR-i Doctoral Training Centre,\\ Lancaster University, Lancaster, UK\\~\\ Giuseppe Dilillo \\ Department of Mathematics, Computer Science and Physics, \\University of Udine, Udine, Italy\\~\\ Idris Eckley \\ Department of Mathematics and Statistics,\\ Lancaster University, Lancaster, UK \\~\\ Paul Fearnhead\\ Department of Mathematics and Statistics,\\ Lancaster University, Lancaster, UK} \fi } \noindent% {\it Keywords:} Poisson process, Anomaly detection, Functional pruning, Gamma-ray bursts \vfill \ifJASA \spacingset{1.45} % \fi \section{Introduction} This work is motivated by the challenge of designing an efficient algorithm for detecting GRBs for cube satellites, such as the HERMES scientific pathfinder \cite[]{Hermes-2021}, during the early 2020s. Cube satellites are compact and therefore relatively cheap to launch into space, but have limited computational power on-board. Gamma ray bursts (GRBs) are short-lived bursts of gamma-ray light. They were first detected by satellites in the late 1960s \cite[]{Klebesadel1973-sg}, and are now believed to arise when matter collapses into black holes. At the time of writing there is considerable interest in detecting gamma ray bursts with a view to identifying interesting astronomical features \cite[]{Luongo2021-ik}. The HERMES satellite array is interested in monitoring a variety of different photon energy bands, and regions in the sky, for the appearance of a GRB. Raw data from a satellite consists of a data stream of photons impacting a detector. The time of a photon impact is recorded in units of microseconds since satellite launch. New photon impacts are recorded on the order of approximately one every 500 microseconds.
A GRB is indicated by a short period of time with an unusually high incidence of photons impacting the detector. Ideally, the satellite would detect each GRB and, for each burst it detects, transmit the associated data to Earth. There are a number of statistical challenges with detecting GRBs. First, they can come from close or far away sources, and can therefore be very bright and obvious to observe or very dim and hard to pick out from the background. They can also impact the detector over a variety of timescales. Figure \ref{fig:short_and_long_bursts} shows two GRBs, one short and intense, lasting a fraction of a second, and one longer and less intense, lasting about ten seconds. These bursts were taken from the FERMI catalogue \cite[]{Ajello2019-gn}. Bursts ranging from a fraction of a second to a few minutes are possible. Second, less than one burst is recorded per 24 hours on average \cite[]{von2020fourth}, which is rare in comparison to the rate at which data arrive. The background rate at which the satellite detects photons also varies over time. This variation is on timescales much larger (minutes to days) than those on which bursts occur (milliseconds to seconds), and thus is able to be estimated separately from the bursts. We will not address this aspect of the problem directly, though Appendix \ref{section:background_bias} gives some guidance on choice of background estimation approaches. Finally, there are also computational challenges. For example, there is limited computational hardware on board the satellite, and additional constraints on its use arise from battery life and lack of heat dissipation \cite[]{2003fenimore}. There is also a substantial computational and energy cost to transmitting data to Earth, so only promising data should be sent. At a fundamental level, algorithmic techniques for detecting GRBs have remained unchanged across different generations of space-borne GRB monitor experiments.
As they reach a detector, high-energy photons are counted over a fundamental time interval and in different energy bands. Count rates are then compared against a background estimate over a number of pre-defined timescales \cite[]{lima1983}. To minimize the chance of missing a burst due to a mismatch between the event activity and the length of the tested timescales, multiple different timescales are simultaneously evaluated. Whenever the significance in count excess is found to exceed a threshold value over a timescale, a trigger is issued. Figure \ref{fig:computing_flowchart} gives a simplified overview of such a detection system. As data arrives we need to both detect whether a gamma ray burst is happening, and update our estimates of the background photon arrival rate. Because of the high computational cost of transmitting data to earth after a detection, if an algorithm detects a potential gamma ray burst there is an additional quality assurance step to determine whether it should be transmitted. The detection algorithm needs to be run at a resolution at which all gamma ray bursts are detectable. By comparison background re-estimation is only required once every second, and the quality assurance algorithm is only needed every time a potential gamma ray burst is detected. Thus the majority of the computational effort is for the detection algorithm -- and how to construct a statistically efficient detection algorithm with low computation is the focus of this paper. As mentioned, current practice for detecting a GRB is to compare observed photon counts with expected counts across a given bin width \cite[]{2012paciesas}. The choice of bin width affects the ease of discovery of different sizes of burst. Figure \ref{fig:short_and_long_bursts} shows an example. The short burst in Figure \ref{subfig:short_burst} is easily detectable with bin width $0.2$s, but lost to smoothing at a $2$s bin width. 
In contrast, the burst in Figure \ref{subfig:long_burst} has a signal too small relative to the noise to be detectable at a $0.2$s bin width, with the largest observation on the plot being part of the noise rather than the gamma ray burst. Only when smoothed to a bin width of $2$s does the burst become visually apparent. Therefore, the bin width is first chosen small enough to pick up short bursts, and geometrically spaced windows of size 1, 2, 4, 8, ... times the bin width, up to a maximum window size, are run over the data in order to pick up longer bursts. This paper develops an improved approach to detecting GRBs. First we show that using the Page-CUSUM statistic \cite[]{Page1954-ej,Page1955-zf}, and its extension to count data \cite[]{Lucas1985-gm} is uniformly better than using a window-based procedure. These schemes require specifying both the pre-change and post-change behaviour of the data -- in our case specifying the background rate of photon arrivals and the rate during the gamma ray burst. While in our application it is reasonable to assume that good estimates of the background photon arrival rate are available by monitoring the signal, specifying the photon arrival rate for the gamma ray burst is difficult due to their heterogeneity in terms of intensity. For detecting changes in mean in Gaussian data, \cite{Romano2021-ab} show how one can implement the sequential scheme of \cite{Page1955-zf} simultaneously for all possible post-change means, and call the resulting algorithm FOCuS. Our detection algorithm involves extending this algorithm to the setting of detecting changes in the rate of events for count data. It is based on modelling the arrival of photons on the detector as a Poisson process, and we thus call our detection algorithm Poisson-FOCuS. Our algorithm is equivalent to checking windows of any length, with a modified version equivalent to checking windows of any length up to a maximum size.
This makes it advantageous for detecting bursts near the chosen statistical threshold whose length is not well described by a geometrically spaced window. In addition to this, the algorithm we develop has a computational cost lower than the geometric spacing approach, resulting in a uniform improvement on the methods already used for this application with no required trade-off. These advantages mean that the Poisson-FOCuS algorithm is currently planned to be used as part of the trigger algorithm of the HERMES satellite. Our improvement of existing window-based methods addresses the aspect of trigger algorithms that has been shown to be most important for increasing the power to detect GRBs. As the computational resource on-board a satellite has increased, trigger algorithms have grown to support an increasing number of criteria, and it has been seen that the most important aspect of any detection procedure is the timescale over which the data is analysed \cite[]{2004mclean}. For example, while early software, such as Compton-BATSE \cite[]{Gehrels1993-ep}, operated only a few different trigger criteria, a total of $120$ are available to the Fermi-GBM \cite[]{2009meegan} and more than $800$ to the Swift-BAT \cite[]{2004mclean} flight software. While in many cases this growth in algorithm complexity did not result in more GRB detections, better coverage of different timescales for GRBs did \cite[]{2012paciesas}. During the first four years of Fermi-GBM operations, $135$ out of $953$ GRBs triggered GBM over timescales not represented by BATSE algorithms, most of which were over timescales larger than the maximum value tested by BATSE ($1.024$ s) \cite[]{2014kienlin}. The outline of the paper is as follows. In Section \ref{section:problem_setup} we define the mathematical setup of the problem and analyse the statistical models used in current and possible alternate approaches.
Section \ref{section:functional_pruning} provides the main theoretical developments of a functional pruning approach, leading to an algorithm and computational implementation specified in Section \ref{section:theoretical_evaluation}. In Section \ref{section:empirical_evaluation} we give an evaluation of our method on various simulated data, and real data taken from the FERMI catalogue. The paper ends with a discussion. All proofs, and the calculations necessary for graphs, can be found in the Appendix. \section{Modelling framework} \label{section:problem_setup} The data we consider takes the form of a time-series of arrival times of photons. We can model the generating process for these points as a Poisson process with background parameter $\lambda(t)$, and with gamma ray bursts corresponding to periods of time which see an increase in the arrival rate over the background level. Changes in the background rate $\lambda(t)$ over time may be due to rotation of the spacecraft or its orbit around the Earth. They exist on a greater timescale (minutes to days) than the region of time over which an anomalous interval could occur (seconds). We assume that a good estimate of the current background rate $\lambda(t)$ is available. To ease exposition we will first assume this rate is known and constant, and denote it as $\lambda$ before generalising to the non-constant background rate in Section \ref{section:varying_background}. For readers interested in applying our method, we also discuss accounting for error in estimating the background rate in Appendix \ref{section:background_bias}. There are two approaches to analysing our data. The first is to choose a small time interval, $w$, which is smaller than the shortest gamma ray burst that we want to detect. In this case the data can be summarised by the number of photon arrivals in time bins of length $w$.
By rescaling our time unit measurement (and therefore our rate $\lambda$) appropriately we can set $w=1$ without loss of generality. The second approach is to use the data of times between photon arrivals directly. We will explain how our detection algorithm can be implemented in this latter setting in Section \ref{section:other_data}. For the first setting, we will denote the the data by $x_1,x_2,\ldots,$ with $x_i$ denoting the number of arrivals in the $i$th time window. We use the notation $x_{t+1:t+h}=(x_{t+1},\ldots,x_{t+h})$ to denote the vector of observations between the $(t+1)$th and $(t+h)$th time window, and \[ \bar{x}_{t+1:t+h}=\frac{1}{h}\sum_{i=t+1}^{t+h} x_{i}, \] the mean of these observations. Under our model, if there is no gamma ray burst then each $x_i$ is a realisation of $X_i$, an independent Poisson distribution with parameter $\lambda$. If there is a gamma ray burst then the number of photon arrivals will be Poisson distributed with a rate larger than $\lambda$. We make the assumption that a gamma ray burst can be characterised by a width, $h$, and an intensity $\mu>1$ such that if the gamma ray burst starts at time $t+1$ then $x_{t+1},\ldots,x_{t+h}$ are realisations of independent Poisson random variables $X_{t+1},\ldots,X_{t+h}$ with mean $\mu\lambda$. See Figure \ref{fig:two_dimensions_anomalies_poisson} for a visualisation of an anomaly simulated directly from this model. Our algorithm is primarily interested in reducing the computation requirements of constant signal monitoring. Therefore our model considers a gamma ray burst as a uniform increase in intensity over its length, which does not take into account the unknown shape of a gamma ray burst. If a possible burst is found, an additional round of shape-based sanity checking requiring more computational resources can easily be performed prior to transmission back to Earth. 
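To make this generative model concrete, the following is a minimal simulation sketch (the sampler, function names and parameter values are our own illustrative choices, not part of any satellite pipeline): background bins are Poisson($\lambda$), and burst bins are Poisson($\mu\lambda$).

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) variate (Knuth's multiplication method;
    adequate for the modest rates used here)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_counts(lam, n, burst_start, burst_len, mu, seed=0):
    """Binned photon counts: x_t ~ Poisson(lam) in the background, and
    x_t ~ Poisson(mu * lam) on bins [burst_start, burst_start + burst_len)."""
    rng = random.Random(seed)
    counts = []
    for t in range(n):
        rate = mu * lam if burst_start <= t < burst_start + burst_len else lam
        counts.append(poisson_sample(rate, rng))
    return counts

# A longer, moderately bright burst: background rate 5 per bin, tripled for 50 bins.
counts = simulate_counts(lam=5.0, n=300, burst_start=100, burst_len=50, mu=3.0)
```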
\subsection{Window-based methods and detectability} If we assumed we knew the width of the gamma ray burst, $h$, then detecting it would correspond to testing, for each start time $t$, between the following two hypotheses: \begin{itemize} \item $\mathbf{H}_0$: $X_{t+1},\ldots,X_{t+h} \sim \text{Poisson}(\lambda)$. \item $\mathbf{H}_1$: $ X_{t+1},\ldots,X_{t+h} \sim \text{Poisson}(\mu \lambda)$, for some $\mu>1$. \end{itemize} \noindent We can perform a likelihood ratio test for this hypothesis. Let $\ell(x_{t+1:t+h};\mu)$ denote the log-likelihood for the data $x_{t+1:t+h}$ under our Poisson model with rate $\mu\lambda$. Then the standard likelihood ratio statistic is \[ LR=2\left\{\max_{\mu>1} \ell(x_{t+1:t+h};\mu)-\ell(x_{t+1:t+h};1) \right\}. \] \begin{restatable}{proposition}{lrderivation} Under the alternative, our LR statistic is 0 if $\bar{x}_{t+1:t+h} \leq \lambda$, otherwise \[LR = 2h\lambda \left\{\frac{\bar{x}_{t+1:t+h}}{\lambda} \log \left(\frac{\bar{x}_{t+1:t+h}}{\lambda}\right) - \left(\frac{\bar{x}_{t+1:t+h}}{\lambda}-1\right) \right\}. \] \end{restatable} The likelihood ratio is a function only of the expected count $h\lambda$ and the fitted intensity $\hat{\mu}_{t+1:t+h} := \bar{x}_{t+1:t+h}/\lambda$ of the interval $[t+1,t+h]$. It can alternatively be written as a function only of the expected count $h\lambda$ and the actual count $h\bar{x}_{t+1:t+h}$, which forms the fundamental basis for our algorithm. In our application, thresholds for gamma ray burst detection are often set based on $k$-sigma events: values that are as extreme as observing a Gaussian observation that is $k$ standard deviations from its mean. As the likelihood ratio statistic is approximately $\chi^2_1$ distributed \cite[]{Wilks1938-lh}, this corresponds to a threshold of $k^2$.
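In code, the statistic of the proposition takes one line; the helper below (names are our own) makes explicit that it depends on the data only through the expected count $h\lambda$ and the fitted intensity $\bar{x}_{t+1:t+h}/\lambda$:

```python
import math

def window_lr(xbar, lam, h):
    """Likelihood ratio statistic for a Poisson burst test on a window of
    length h with background rate lam and observed mean count xbar.
    Returns 0 when there is no evidence of an excess (xbar <= lam)."""
    if xbar <= lam:
        return 0.0
    r = xbar / lam  # fitted intensity multiplier mu-hat
    return 2.0 * h * lam * (r * math.log(r) - (r - 1.0))
```

A $k$-sigma detection then corresponds to `window_lr(...)` exceeding $k^2$.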
Gamma ray bursts with a combination of high intensity $\hat{\mu}_{t+1:t+h} := \frac{\bar{x}_{t+1:t+h}}{\lambda}$ and long length, as quantified by the expected count $h\lambda$, will have higher associated likelihood ratio statistics and thus be easier to detect. Figure \ref{fig:detectability_thresholds} shows regions in this two-dimensional space that correspond to detectable GRBs at different $k$-sigma levels. Figure \ref{fig:window_detectability} shows what happens when we set a fixed threshold and for computational reasons only check intervals of certain lengths. We rely on the fact that a slightly brighter burst will also trigger detection on a longer or shorter interval than optimal. This is the type of approach that current window-based methods take \cite[]{2012paciesas}. We can see that, even with a grid of window sizes, we lose detectability if the true width of the GRB does not match one of the window sizes. \subsection{Page-CUSUM for Poisson data} As a foundation for our detection algorithm, consider the CUSUM (cumulative sum) approach of \cite{Page1955-zf} that was adapted for the Poisson setting by \cite{Lucas1985-gm}. These methods search for gamma ray bursts of unknown width but known size $\mu$, differing from a window method that searches for gamma ray bursts of known width $h$ and unknown size. To run our methods online it is useful to characterise the alternative by the start point $\tau$ of a possible anomaly. We have our hypotheses for the signal at time $T$: \begin{itemize} \item $\mathbf{H}_0$: There have been no anomalies, i.e. $X_1, ..., X_T \sim \text{Poisson}(\lambda)$. \item $\mathbf{H}_1$: There has been one anomaly, beginning at some unknown time $\tau$, with known intensity multiplier $\mu>1$, i.e. $X_1, ..., X_{\tau-1} \sim \text{Poisson}(\lambda)$ and $X_{\tau}, ..., X_T \sim \text{Poisson}(\mu \lambda)$.
\end{itemize} \begin{restatable}{proposition}{lrderivationpage} Under the alternative, our LR statistic is 0 if $\bar{x}_{\tau:T} \leq \lambda \frac{\mu-1}{\log(\mu)}$ for all $\tau$, otherwise \[LR = \max_{1 \leq \tau \leq T} \left[2(T-\tau+1)\lambda \left\{\frac{\bar{x}_{\tau:T}}{\lambda} \log \left(\mu \right) - \left(\mu-1\right) \right\} \right]. \] \end{restatable} We work with a test statistic $S_T$ that is half the likelihood ratio statistic for this test, and compare it to a $k$-sigma threshold of $k^2/2$. $S_T$ can be rewritten in the following form: $$S_T = \left[\max_{1 \leq \tau \leq T} \sum_{t=\tau}^T(x_t \log (\mu) - \lambda (\mu-1))\right]^+,$$ where we use the notation $[\cdot]^+$ to denote the maximum of the term $\cdot$ and 0. As shown in \cite{Lucas1985-gm}, $S_T$ can be updated recursively as $$S_0 = 0, \ \ S_{T+1} = [S_T + x_{T+1} \log \mu - \lambda(\mu-1)]^+.$$ It is helpful to compare the detectability of GRBs using $S_T$ with their detectability using a window method. To this end, we introduce the following two propositions. \begin{restatable}{proposition}{pageonlywindow} \label{prop:page_only_window} For some choice of $\mu$ against a background rate of $\lambda$, let $S_T$ be significant at the $k$-sigma level. Then there exists some interval $[\tau,T]$ with associated likelihood ratio statistic that is significant at the $k$-sigma level. \end{restatable} \begin{restatable}{proposition}{pagebeatswindow} \label{prop:page_beats_window} For any $k$, $\lambda$ and $h$ there exists a $\mu$ and corresponding test statistic $S_T$ that relates directly to a window test of length $h$ and background rate $\lambda$ as follows: if, for any $t$, the data $x_{t+1:t+h}$ is significant at the $k$-sigma level then $S_{t+h}$ will also be significant at the $k$-sigma level. \end{restatable} Together these results show that Page's method is at least as powerful as the window method.
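The recursion for $S_T$ gives an $O(1)$-per-observation detector once $\mu$ is fixed. A sketch (function name and test values are ours), declaring a detection when $S_T$ exceeds the $k$-sigma threshold $k^2/2$:

```python
import math

def page_cusum(xs, lam, mu, k):
    """Poisson Page-CUSUM: S_0 = 0, S_{T+1} = [S_T + x log(mu) - lam (mu - 1)]^+.
    Returns the first time T at which S_T > k^2 / 2, or None if no detection."""
    s, log_mu, drift = 0.0, math.log(mu), lam * (mu - 1.0)
    for t, x in enumerate(xs, start=1):
        s = max(s + x * log_mu - drift, 0.0)
        if s > k * k / 2.0:
            return t
    return None

# Quiet data at the background rate never triggers; a x5 count jump after
# bin 50 accumulates evidence over a few bins before crossing the threshold.
detection = page_cusum([2] * 50 + [10] * 20, lam=2.0, mu=2.0, k=5.0)
```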
Rather than implementing the window method with a given window size, we can implement Page's method with the appropriate $\mu$ value (as defined by Proposition \ref{prop:page_beats_window}) such that any GRB detected by the window method would be detected by Page's method. However, Page's method may detect additional GRBs, and these would be detected by the window method with some window size (by Proposition \ref{prop:page_only_window}). In practice, as shown in Figure \ref{fig:page_window_comparison}, Page's method provides better coverage of the search space. Whilst the Page-CUSUM approach is more powerful than a window-based approach, to cover the space completely it still requires specifying a grid of values for the intensity of the gamma ray burst as in Figure \ref{fig:page_grid}. If the actual intensity lies far from our grid values we will lose power at detecting the burst. \section{Functional pruning} \label{section:functional_pruning} To look for an anomalous excess in counts of any intensity and width without having to pick a parameter grid, we consider computing the Page-CUSUM statistic simultaneously for all $\mu\in [1, \infty)$, which we can do by considering the test statistic as a function of $\mu$, $S_T(\mu)$. That is, for each $T$, $S_T(\mu)$ is defined for $\mu\in [1, \infty)$, and for a given $\mu$ is equal to the value of the Page-CUSUM statistic for that $\mu$.
By definition, $S_T(\mu)$ is a pointwise maximum of curves representing all possible anomaly start points $\tau$: $$S_T(\mu) := \left[\max_{1 \leq \tau \leq T} \sum_{t=\tau}^T\left[x_t \log (\mu) - \lambda (\mu-1)\right]\right]^+.$$ We can view this as $S_T(\mu)=[\max_{1 \leq \tau \leq T} C^{(T)}_\tau(\mu)]^+$, where each curve $C^{(T)}_{\tau}(\mu)$ corresponds to half the likelihood ratio statistic for a gamma ray burst of intensity $\mu$ starting at $\tau$, $$C^{(T)}_{\tau}(\mu) := \sum_{t=\tau}^T[x_t \log (\mu) - \lambda (\mu-1)] .$$ \noindent Each curve is parameterised by two quantities, as $$C^{(T)}_{\tau}(\mu) := a^{(T)}_{\tau}\log (\mu) - {b^{(T)}_\tau}(\mu-1),$$ where $a^{(T)}_{\tau} = \sum_{t=\tau}^T x_t$ is the actual observed count and $b^{(T)}_{\tau} = \sum_{t=\tau}^T \lambda = \lambda(T-\tau+1)$ is the expected count on the interval $[\tau, T]$. As we move from time $T$ to time $T+1$ there is a simple recursion to update these coefficients $$ a^{(T+1)}_{\tau} =a^{(T)}_{\tau} + x_{T+1}, \ \ \ b^{(T+1)}_{\tau} = b^{(T)}_{\tau} + \lambda .$$ These updates are linear and do not depend on $\tau$, so the differences between any two curves are preserved with time updates. \subsection{Structure of logarithm curves} We call $C^{(T)}_{\tau}(\mu)$ a logarithm curve. Figure \ref{fig:example_logarithm_curves} shows the structure of these logarithm curves. Note that if the observed count $a_{\tau}^{(T)}$ is not greater than the expected count $b_{\tau}^{(T)}$, the curve $C_{\tau}^{(T)}$ will be non-positive on $\mu \in [1, \infty)$ as it contains no evidence for a change greater than the background rate $\lambda$ over $[\tau, T]$. The maximum of $C_{\tau}^{(T)}$ is located at $\mu=a_{\tau}^{(T)}/b_{\tau}^{(T)}$, representing the likelihood ratio for a post-change mean $\mu \lambda = \bar{x}_{\tau:T}$. If $a_\tau^{(T)}>b_\tau^{(T)}$ then the logarithm curve will be positive for some $\mu>1$.
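The $(a, b)$ parameterisation suggests a direct data structure for a single curve. A sketch (class and method names are ours): when $a > b$, the maximum over $\mu \geq 1$ of $[C(\mu)]^+$ is attained at $\mu = a/b$ and equals $a\log(a/b) - (a - b)$; otherwise it is zero.

```python
import math

class Curve:
    """One candidate start time tau, tracked through its observed count a
    and expected count b over the interval [tau, T]."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def update(self, x, lam):
        """Absorb one new bin: a -> a + x_{T+1}, b -> b + lambda."""
        self.a += x
        self.b += lam

    def max_value(self):
        """max over mu >= 1 of [a log(mu) - b (mu - 1)]^+, attained at mu = a/b."""
        if self.a <= self.b:
            return 0.0
        return self.a * math.log(self.a / self.b) - (self.a - self.b)
```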
In this case we will define the root of the curve to be the unique $\mu^*>1$ such that $C^{(T)}_{\tau}(\mu^*)=0$. \iffalse % \begin{lemma} \label{lem:monotonic} Let $C(\mu)=[a\log \mu - b(\mu-1)]$, with $a/b>1$. Then the root of $C(\mu)$ depends only on the ratio $a/b$ and is monotonically increasing with $a/b$. \end{lemma} \begin{proof} Let $\mu^*$ be the root of $C(\mu)$. Then by rearrangement we have that \[ \frac{a}{b} = \frac{\mu^*-1}{\log \mu^*},\] where the right hand side is an increasing function of $\mu^*$ on the range $[1, \infty)$. \end{proof} We also define the slope of a curve $C^{(T)}_{\tau}$ to be the derivative at $\mu=1$, which is equal to $a_\tau^{(T)}-b_\tau^{(T)}$. \fi \subsection{Adding and pruning curves} For any two curves $C^{(T)}_{\tau_i}$ and $C^{(T)}_{\tau_j}$ at a given present time $T$, we will say that $C^{(T)}_{\tau_i}$ dominates $C^{(T)}_{\tau_j}$ if $$[C^{(T)}_{\tau_i}(\mu)]^+ \geq [C^{(T)}_{\tau_j}(\mu)]^+,\ \ \ \forall \mu \in [1, \infty).$$ This is equivalent to saying that there is no value of $\mu$ such that the interval $[\tau_j, T]$ provides better evidence for an anomaly with intensity $\mu$ than $[\tau_i, T]$. As the difference between curves is unchanged as we observe more data, this in turn means that for any future point $T_F \geq T$, the interval $[\tau_j, T_F]$ will not provide better evidence than $[\tau_i, T_F]$. Therefore, the curve associated with $\tau_j$ can be pruned, removed from our computational characterisation of $S_T(\mu)$. The following gives necessary and sufficient conditions for a curve to be dominated by another. \begin{proposition} Let $C^{(T)}_{\tau_i}$ and $C^{(T)}_{\tau_j}$ be curves that are positive somewhere on $\mu \in [1, \infty)$, where $\tau_i < \tau_j$ and $C_{\tau_i}^{(\tau_j-1)}$ is also positive somewhere on $\mu \in [1, \infty)$. 
Then $C^{(T)}_{\tau_i}$ dominates $C^{(T)}_{\tau_j}$ if and only if $a_{\tau_j}^{(T)} / b_{\tau_j}^{(T)} \leq a_{\tau_i}^{(\tau_j-1)} / b_{\tau_i}^{(\tau_j-1)}$ or equivalently $a^{(T)}_{\tau_j} / b^{(T)}_{\tau_j} \leq a^{(T)}_{\tau_i} / b^{(T)}_{\tau_i}$. Additionally, it cannot be the case that $C_{\tau_j}^{(T)}$ dominates $C_{\tau_i}^{(T)}$. \end{proposition} A formal proof is given in Appendix \ref{sec:conditionsforpruning}, but we can see intuitively why this result holds by looking at Figure \ref{fig:example_logarithm_curves}. We see that $C^{(T)}_{\tau_i}$ dominates $C^{(T)}_{\tau_j}$ precisely when $C^{(T)}_{\tau_i}$ has both a greater slope at $\mu=1$ (which occurs when $C_{\tau_i}^{(\tau_j-1)}$ is positive) and a greater root than $C^{(T)}_{\tau_j}$, where (as shown in the proof) the root of a curve $C_{\tau}^{(T)}$ is an increasing function of $a_{\tau}^{(T)}/b_{\tau}^{(T)}$. The quantity $a_{\tau_i}^{(\tau_j-1)} / b_{\tau_i}^{(\tau_j-1)}$ does not change with time updates. Therefore, functional pruning occurs after periods of unpromising data where we have $x_t \approx \lambda$, bringing the ratios $a_{\tau}^{(T)}/b_{\tau}^{(T)}$ of the curves down closer to $1$. This causes curves testing shorter, more intense anomalies to be dominated by curves testing longer, less intense ones. \section{Algorithm and theoretical evaluation} \label{section:theoretical_evaluation} Using this result we obtain the Poisson-FOCuS algorithm, described in Algorithm \ref{algorithm:focus_poisson}. This algorithm maintains a list of curves in time order, storing their associated $a$ and $b$ parameters as well as their times of creation $\tau$, which for the constant $\lambda$ case can be computed as $T+1-b/\lambda$. On receiving a new observation at time $T$, these parameters are updated. If the observed count exceeds the expected count predicted by the most recent curve, we also add a new curve which corresponds to a GRB that starts at time $T$. 
Otherwise, we check whether we can prune the most recent curve. This pruning step uses Proposition \ref{prop:bounded_curves}, which shows that if any currently stored curve can be pruned, then the most recently stored curve can be pruned. (Our pruning check does not need to be repeated for additional curves, as on average less than one curve is pruned at each timestep.) The final part of the algorithm is to find the maximum of each curve and check whether the largest of these maxima exceeds the threshold. If it does, then we have detected a GRB. The start of the detected GRB is given by the time at which the curve with the largest maximum value was added. \begin{algorithm}[] \SetAlgoLined \KwResult{Startpoint, endpoint, and significance level of an interval above a $k$-sigma threshold.} initialise threshold $k^2/2$\; initialise empty curve list\; \While{anomaly not yet found}{ $ T \leftarrow T+1$\; get actual count $X_T$\; get expected count $\lambda$\; \tcp{update curves:} \For{curve $C_{\tau_i}^{(T-1)}$ in curve list $[C_{\tau_1}^{(T-1)}, ..., C_{\tau_n}^{(T-1)}]$}{ $a_{\tau_i}^{(T)} \leftarrow a_{\tau_i}^{(T-1)}+X_T$; $b_{\tau_i}^{(T)} \leftarrow b_{\tau_i}^{(T-1)}+\lambda$; } \tcp{add or prune curve:} \uIf{$X_T/\lambda > \max[a_{\tau_n}^{(T)}/b_{\tau_n}^{(T)}, 1]$}{ add $C_T^{(T)}: a_{T}^{(T)}=X_T, b_{T}^{(T)} = \lambda, \tau=T$ to curve list\;} \ElseIf{$a_{\tau_n}^{(T)}/b_{\tau_n}^{(T)} < \max[a_{\tau_{n-1}}^{(T)}/b_{\tau_{n-1}}^{(T)}, 1]$}{ remove $C_{\tau_n}^{(T)}$ from curve list\;} \tcp{calculate maximum $M$:} \For{curve $C_{\tau_i}^{(T)}$ in curve list}{ \If{$\max(C_{\tau_i}^{(T)}) > M$}{ $M \leftarrow \max(C_{\tau_i}^{(T)})$\; $\tau^* \leftarrow \tau_i$ } } \If{$M > k^2/2$}{ anomaly found on interval $[\tau^*, T]$ with sigma significance $\sqrt{2M} > k$\; } } \caption{Poisson-FOCuS for constant $\lambda$} \label{algorithm:focus_poisson} \end{algorithm} \subsection{Dealing with varying background rate} \label{section:varying_background} Algorithm 
\ref{algorithm:focus_poisson} deals with the constant $\lambda$ case. If $\lambda = \lambda(t)$ is not constant, but an estimate of $\lambda(T)$ is available at each timestep $T$, we can apply the same principle but with a change in the definition of $b^{(T)}_{\tau}$. We now have $b^{(T)}_{\tau} := \sum_{t=\tau}^T \lambda(t)$, the total expected count over the interval $[\tau, T]$. For the algorithm, this impacts how the coefficients are updated, with the new updates being $$ a^{(T+1)}_{\tau} \leftarrow a^{(T)}_{\tau} + X_{T+1}, \ \ \ b^{(T+1)}_{\tau} \leftarrow b^{(T)}_{\tau} + \lambda(T+1) .$$ If we work with a non-homogeneous Poisson process in this way, it becomes impossible to recover $\tau$ from the coefficients $a^{(T)}_{\tau}$ and $b^{(T)}_{\tau}$, so $C_{\tau}^{(T)}$ must be computationally stored as the triplet $(\tau, a^{(T)}_{\tau}, b^{(T)}_{\tau})$. The Poisson-FOCuS algorithm gives us an estimate of the start point of a GRB by reporting the interval $[\tau^*, T]$ over which an anomaly is identified. In our application, if the additional sanity checking indicates a GRB is present, the whole signal starting some time before $\tau^*$ is then recorded and transmitted from the spacecraft to Earth for a period of time. After this has occurred, Poisson-FOCuS can restart immediately provided that a good background rate estimate is available. Figure \ref{fig:poisson_focus} shows this algorithm running on a Poisson data signal. The Poisson-FOCuS algorithm correctly identifies the start point of the anomaly, as well as reporting a detection as soon as the evidence crosses the $5$-sigma significance threshold. \subsection{Minimum $\mu$ value} For our application there is an upper limit on the length of a gamma ray burst. It thus makes sense to ensure we do not detect gamma ray bursts that are longer than this limit. 
% To do so, we set an appropriate $\mu_{\text{min}}$, and additionally prune curves which only contribute to $S_T(\mu)$ on $1 < \mu < \mu_{\text{min}}$, by removing, or not adding, curves $C_{\tau}^{(T)}$ to the list if $C_{\tau}^{(T)}(\mu_{\text{min}}) \leq 0$, i.e. \[ \frac{a^{(T)}_{\tau}}{b^{(T)}_{\tau}} \leq \frac{\mu_{\text{min}}-1}{\log \mu_{\text{min}}}.\] \noindent We can choose $\mu_{\text{min}}$ according to our significance threshold and the maximum expected count we are interested in searching for bursts over, using the proof of Proposition \ref{prop:page_beats_window} about detectability for the window-based method, as follows: \[ (h\lambda)_{\text{max}} = \frac{k^2}{2[\mu_{\text{min}} \log(\mu_{\text{min}}) - (\mu_{\text{min}}-1)]}.\] \noindent For a $5$-sigma significance threshold, assuming a background rate of one photon every $500\mu$s, a maximum length of 1 minute for a GRB would correspond to $\mu_{\text{min}}=1.015$, while a maximum length of 1 hour would have $\mu_{\text{min}}=1.002$. \subsection{Using time-to-arrival data} \label{section:other_data} Rather than taking as data the number of photons observed in each time window, we can take as our data the time between successive observations. In this case our data are $U_1,U_2,\ldots$, where $U_i$ is the time between the $i$th and $(i+1)$th photons. Under the assumption that the data follow a Poisson process, the $U_i$ are independent and Exponentially distributed. \begin{restatable}{proposition}{exponentialdata} The Poisson-FOCuS algorithm still works in the Exponential case, with the only difference being how we update the coefficients of the curves: $$ a^{(T+1)}_{\tau} = a^{(T)}_{\tau} + 1, \ \ \ b^{(T+1)}_{\tau} = b^{(T)}_{\tau} + \lambda(T\!+\!1) U_{T+1},$$ where $\lambda(T)$ is the estimate of the background rate at the time of the $T$th photon arrival. 
\end{restatable} \subsection{Computational cost comparisons} Using a window method, our computational costs per window consist of: adding $x_T$ and $\lambda_T$ to the window; removing $x_{T-h}$ and $\lambda_{T-h}$ from the window; calculating the test statistic and comparing it to the threshold. Using Poisson-FOCuS, our computational costs per curve are: adding $x_T$ to $a_{\tau}^{(T)}$; adding $\lambda_T$ to $b_{\tau}^{(T)}$; calculating the maximum of the curve and comparing it to the threshold. The computational cost per curve is therefore roughly equal to the computational cost per window. Thus, to evaluate the relative computational cost of Poisson-FOCuS versus a window method, we need to calculate the expected number of curves kept by the algorithm at each timestep and compare this against the number of windows used. We now give mathematical bounds on this quantity, as follows: \begin{restatable}{proposition}{logcurves} The expected number of curves kept by Poisson-FOCuS without $\mu_{\text{min}}$ at each timestep $T$ lies in $[\frac{\log(T)}{2}, \frac{\log(T)+1}{2}]$. \end{restatable} \begin{restatable}{proposition}{boundedcurves} The expected number of curves kept by Poisson-FOCuS using some $\mu_{\text{min}} > 1$ at each timestep is bounded. \label{prop:bounded_curves} \end{restatable} For geometrically spaced windows, over an infinite horizon the number of windows used at each timestep $T$ lies in $[\log_2(T), \log_2(T)+1]$, and if an $h_{\text{max}}$ is implemented then this number will be bounded after a certain point. Figure \ref{fig:cost_comparison} gives a comparison of the number of windows and the expected number of curves, showing that although the bound from Proposition \ref{prop:bounded_curves} is difficult to calculate, it is substantially below the corresponding bound on the number of windows. Therefore, Poisson-FOCuS provides the statistical advantages of an exhaustive window search at under half the computational cost of a geometrically spaced one. 
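Pulling the preceding pieces together, the algorithm can be sketched compactly in Python. This is our own illustrative sketch, not the authors' reference implementation: it assumes pre-binned counts and per-bin expected counts, stores curves as $(\tau, a, b)$ triplets, and uses the ratio-based add/prune rules of Algorithm 1.

```python
import math

def poisson_focus(xs, lams, k=5.0):
    """Scan binned counts xs against per-bin expected counts lams.
    Returns (T, tau, sigma) at the first k-sigma detection, or None.
    Curves are stored in time order as (tau, a, b) triplets, where a
    is the observed and b the expected count on the interval [tau, T]."""
    threshold = k * k / 2.0
    curves = []
    for T, (x, lam) in enumerate(zip(xs, lams)):
        # update every stored curve with the new observation
        curves = [(tau, a + x, b + lam) for (tau, a, b) in curves]
        tail = curves[-1][1] / curves[-1][2] if curves else 1.0
        if x / lam > max(tail, 1.0):
            # the new bin alone beats every stored curve at large mu:
            # start a new curve at time T
            curves.append((T, x, lam))
        elif curves:
            prev = curves[-2][1] / curves[-2][2] if len(curves) > 1 else 1.0
            if tail < max(prev, 1.0):
                curves.pop()  # most recent curve is dominated: prune it
        # each curve is maximised at mu = a/b (only relevant when a > b)
        best, best_tau = 0.0, None
        for tau, a, b in curves:
            if a > b:
                m = a * math.log(a / b) - (a - b)
                if m > best:
                    best, best_tau = m, tau
        if best > threshold:
            return T, best_tau, math.sqrt(2.0 * best)
    return None
```

For example, fifty background bins with $x_t = \lambda = 1$ followed by bins of ten counts trigger a detection in the first bright bin, since a single bin with $a=10$, $b=1$ already gives $M = 10\log 10 - 9 \approx 14.0 > 12.5$.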
\section{Empirical evaluation} \label{section:empirical_evaluation} We now empirically evaluate Poisson-FOCuS, and compare it with a window method. We do this in two ways. First, we simulate GRBs of different lengths and measure how many photon arrivals are needed within the GRB for them to be detected by each algorithm. Secondly, we run both methods on one week of data from the Fermi-GBM archive. Although not directly relevant to the improvements proposed by the Poisson-FOCuS algorithm, the issues with estimating the background rate and its impact on detecting GRBs are discussed in more detail in Appendix \ref{section:background_bias}. \subsection{Statistical comparison with window method} We first compare Poisson-FOCuS with the window-based method on synthetic data that has been simulated to mimic known GRBs, but allowing for different intensities of burst. To simulate the data for a chosen known GRB at a range of different brightnesses, the photon stream of the GRB was converted into a random variable via density estimation. One draw from this random variable gives a photon impact time, and $n$ independent draws sorted into time order give a stream of photon impact times that well approximates the shape of the burst. These were then overlaid on a background photon stream to form a signal that was binned into fundamental time widths of $50$ms, which was fed into either Poisson-FOCuS (equivalent to an exhaustive window search) or a geometrically spaced window search. The maximum sigma-level that was recorded when passing over the signal with each method is then plotted for various different brightnesses $n$. To stabilise any randomness introduced by the use of a random variable for the GRB shape, this was repeated 10 times with different random seeds common to both methods and the average sigma-level is plotted. 
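The simulation procedure above can be sketched as follows. Here a Gaussian burst profile stands in for the density estimate of a real burst shape, and all names and parameter values are our own illustrative choices.

```python
import random

def simulate_binned_signal(n_burst, t0, width, bg_rate, duration,
                           bin_width=0.05, seed=0):
    """Overlay n_burst photon arrival times, drawn from a Gaussian burst
    profile (a stand-in for the density estimate of a real GRB shape),
    on a homogeneous Poisson background of rate bg_rate, then bin the
    combined photon stream at bin_width seconds."""
    rng = random.Random(seed)
    # Poisson background: exponential gaps between photon arrivals
    times, t = [], rng.expovariate(bg_rate)
    while t < duration:
        times.append(t)
        t += rng.expovariate(bg_rate)
    # burst photons, keeping only those inside the observation window
    times += [u for u in (rng.gauss(t0, width) for _ in range(n_burst))
              if 0.0 <= u < duration]
    counts = [0] * int(duration / bin_width)
    for u in times:
        counts[int(u / bin_width)] += 1
    return counts
```

Sweeping `n_burst` then mimics running the same burst shape at different brightnesses, as in the comparison described above.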
The extent to which Poisson-FOCuS provides an improvement in detection power depends on the size and shape of the burst, and in particular whether the most promising interval in the burst lines up well with the geometrically spaced window grid. For example, the burst illustrated in Figure \ref{fig:grb1_pic} does not line up with this grid, and Figure \ref{fig:grb1_sim} shows how Poisson-FOCuS provides an improvement in detection power for this shape of burst at various different brightnesses. However, the shorter burst in Figure \ref{fig:grb2_pic} clearly has a most promising interval of size 1 for this binning choice, which is covered exactly by the geometric window grid. Therefore, Poisson-FOCuS provides no improvement over the window grid, as can be seen in Figure \ref{fig:grb2_sim}. Because it is impossible to predict beforehand whether the most promising region of a burst will line up with any given choice of window grid, Poisson-FOCuS provides a statistical advantage over any non-exhaustive choice of window grid in the real data setting. \subsection{Application to FERMI data} \label{section:FERMI_application} In the context of HERMES, Poisson-FOCuS is currently being employed for two different purposes. First, a trigger algorithm built using Poisson-FOCuS is being developed for on-board, online GRB detection. To date, a dummy implementation has been developed and preliminary testing performed on the HERMES payload data handling unit computer. Second, Poisson-FOCuS is being employed in a software framework intended to serve as the foundation for the HERMES offline data analysis pipeline \cite[]{2022crupi}. In this framework, background reference estimates are provided by a neural network as a function of the satellite's current location and orientation. 
Since no HERMES cube satellites have been launched yet, testing has taken place over Fermi gamma-ray burst monitor (GBM) archival data, looking for events which may have evaded the on-board trigger algorithm. The data used for the analysis were drawn from the Fermi GBM daily data, Fermi GBM trigger catalogue, and Fermi GBM untriggered burst candidates catalogue, all of which are publicly available at NASA's High Energy Astrophysics Science Archive Research Center \cite[]{fermigdays, fermigtrig, fermiuntriggered}. The algorithm was run over eight days of data, from 00:00:00 2017/10/02 to 23:59:59 2017/10/09 UTC. This particular time frame was selected because, according to the untriggered GBM Short GRB candidates catalog, it hosts two highly reliable short GRB candidates which defied the Fermi-GBM online trigger algorithm. During this period the Fermi GBM algorithm was triggered by 11 different events. Six of these were classified as GRBs, three as terrestrial gamma-ray flashes, one as a local particle event and one as an uncertain event. The algorithm was run over data streams from 12 sodium iodide GBM detectors in the energy range $50$--$300$\,keV, which is most relevant to GRB detection but excludes the bismuth germanate detectors and higher energy ranges designed to find terrestrial gamma-ray flashes. The data were binned at $100$ms. Background count-rates were assessed by exponential smoothing of past observations, excluding the most recent $4$s, and any curves corresponding to start points older than $4$s were automatically removed from the curve lists. The trigger condition used was the same as that used by Fermi-GBM: a trigger is issued whenever at least two detectors are simultaneously above threshold. After a trigger, the algorithm was kept idle for five minutes and then restarted. 
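The background estimator described above (exponential smoothing that excludes the most recent $4$\,s) can be sketched as follows; the smoothing weight `alpha` and function name are our own illustrative choices.

```python
def smoothed_background(counts, bin_width=0.1, lag_seconds=4.0, alpha=0.01):
    """Exponentially smoothed background estimate (expected counts per
    bin), using only counts older than lag_seconds so that an ongoing
    burst does not inflate its own background estimate. Entries are
    None until enough history has accumulated."""
    lag = int(round(lag_seconds / bin_width))
    mean, est = None, []
    for t in range(len(counts)):
        past = t - lag  # newest bin allowed into the estimate at time t
        if past >= 0:
            c = counts[past]
            mean = c if mean is None else (1.0 - alpha) * mean + alpha * c
        est.append(mean)
    return est
```

Because the estimate at time $t$ only sees data from $4$\,s earlier, a short burst sits on top of a background estimate formed entirely from pre-burst bins.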
At a $5$-sigma threshold, Poisson-FOCuS was able to identify all six GRBs which also triggered the Fermi-GBM algorithm, one of which is shown in Figures \ref{fig:burst_a} and \ref{fig:signif_b}. We also observed a trigger compatible with an event in the untriggered GBM Short GRB candidates catalog \cite[]{fermiuntriggered}, which is shown in Figures \ref{fig:burst_c} and \ref{fig:signif_d}. An uncertain event not in either catalogue is shown in Figures \ref{fig:burst_e} and \ref{fig:signif_f}, which may indicate a GRB that had been missed by earlier searches. \afterpage{ \clearpage } \section{Discussion} \label{section:discussion} The main purpose of this work was to create a GRB detection algorithm that is mathematically equivalent to searching all possible window lengths, while requiring less computational power than the grid of windows approach. This was suitable for use on the HERMES satellites, where it has led to a reduction in required computation in a very computationally constrained setting, as well as a reduction in the number of parameter choices practitioners must make, as exact values for window lengths in a grid no longer need to be specified or justified. There is increasing interest in detecting anomalies in other low-compute settings, for example Internet of Things sensors which must continuously monitor a signal \cite[]{Dey2018-dy}. These may have a limited battery life or limited electricity generation from sensor-mounted solar panels \cite[]{Nallusamy2011-iw}. Therefore, the algorithm we have developed may be of use more widely. % Much of the mathematical work presented in this paper is also applicable to the $\mu \in [0, 1]$ case that searches for an anomalous lack of counts in a signal. When adapting Poisson-FOCuS to this setting, it is important to make sure the algorithm functions well in situations where the counts are small, as these are precisely the locations of anomalies. 
This would likely entail using the adaptation to work directly on count data given in Section \ref{section:other_data}, while ensuring that an anomalous lack of counts could be declared in between individual counts. Combining these two cases would give a general algorithm for the detection of anomalies on $\mu \in [0, \infty)$. Code for Poisson-FOCuS and the analysis for this paper is available at the GitHub repository \ifnotblind \url{https://github.com/kesward/FOCuS} \else \url{https://github.com/***} \fi. \ifnotblind \noindent {\bf Acknowledgements} This work was supported by the EPSRC grants EP/N031938/1 and EP/R004935/1, and BT as part of the Next Generation Converged Digital Infrastructure (NG-CDI) Prosperity Partnership. \fi \bibliographystyle{agsm} \bibliography{bibliography, biblio} %
Title: The JCMT Transient Survey: Single Epoch Transients and Variability of Faint Sources
Abstract: Short-duration flares at millimeter wavelengths provide unique insights into the strongest magnetic reconnection events in stellar coronae, and combine with longer-term variability to introduce complications to next-generation cosmology surveys. We analyze 5.5 years of JCMT Transient Survey 850 micron submillimeter monitoring observations toward eight Gould Belt star-forming regions to search for evidence of transient events or long-duration variability from faint sources. The eight regions (30 arcmin diameter fields), including ~1200 infrared-selected YSOs, have been observed on average 47 times with integrations of approximately half an hour, or one day total spread over 5.5 years. Within this large data set, only two robust faint source detections are recovered: JW 566 in OMC 2/3 and MGM12 2864 in NGC 2023. JW 566, a Class II TTauri binary system previously identified as an extraordinary submillimeter flare, remains unique, the only clear single-epoch transient detection in this sample with a flare eight times brighter than our ~4.5 sigma detection threshold of 55 mJy/beam. The lack of additional recovered flares intermediate between JW 566 and our detection limit is puzzling, if smaller events are more common than larger events. In contrast, the other submillimeter variable identified in our analysis, Source 2864, is highly variable on all observed timescales. Although Source 2864 is occasionally classified as a YSO, the source is most likely a blazar. The degree of variability across the electromagnetic spectrum may be used to aid source classification.
\title{The JCMT Transient Survey: Single Epoch Transients and Variability of Faint Sources} \correspondingauthor{Doug Johnstone} \email{Doug.Johnstone@nrc-cnrc.gc.ca} \author[0000-0002-6773-459X]{Doug Johnstone} \affiliation{NRC Herzberg Astronomy and Astrophysics, 5071 West Saanich Rd, Victoria, BC, V9E 2E7, Canada} \affiliation{Department of Physics and Astronomy, University of Victoria, Victoria, BC, V8P 5C2, Canada} \author[0000-0003-1618-2921]{Bhavana Lalchand} \affiliation{Institute of Astronomy, National Central University, 300 Zhongda Road, Zhongli, Taoyuan 32001, Taiwan, R.O.C.} \author[0000-0002-6956-0730]{Steve Mairs} \affiliation{SOFIA Science Center, Universities Space Research Association, NASA Ames Research Center, Moffett Field, California 94035, USA} \affiliation{East Asian Observatory, 600 N. A`oh\=ok\=u Place, Hilo, HI 96720, USA} \author[0000-0001-8385-9838]{Hsien Shang} \affiliation{Institute of Astronomy and Astrophysics, Academia Sinica, Taipei 10617, Taiwan, R.O.C.} \author[0000-0003-0262-272X]{Wen Ping Chen} \affiliation{Institute of Astronomy, National Central University, 300 Zhongda Road, Zhongli, Taoyuan 32001, Taiwan, R.O.C.} \affiliation{Department of Physics, National Central University, 300 Zhongda Road, Zhongli, Taoyuan 32001, Taiwan, R.O.C.} \author[0000-0003-4056-9982]{Geoffrey C.\ Bower} \affiliation{Institute of Astronomy and Astrophysics, Academia Sinica, 645 N. 
A'ohoku Place, Hilo, HI 96720, USA} \author[0000-0002-7154-6065]{Gregory J.\ Herczeg} \affiliation{Kavli Institute for Astronomy \& Astrophysics, Peking University, Yiheyuan Lu 5, Haidian Qu, 100871 Beijing, China} \affiliation{Department of Astronomy, Peking University, Yiheyuan 5, Haidian Qu, 100871 Beijing, China} \author[0000-0003-3119-2087]{Jeong-Eun Lee} \affiliation{School of Space Research, Kyung Hee University, 1732, Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea} \author[0000-0001-8694-4966]{Jan Forbrich} \affiliation{Centre for Astrophysics Research, University of Hertfordshire, College Lane, Hatfield, AL10 9 AB, UK} \author{Bo-Yan Chen} \affiliation{Institute of Astronomy, National Tsing Hua University, Taiwan} \affiliation{Institute of Astronomy and Astrophysics, Academia Sinica, 645 N. A'ohoku Place, Hilo, HI 96720, USA} \author[0000-0003-1894-1880]{Carlos Contreras-Pe\~na} \affiliation{Department of Physics and Astronomy, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea} \affiliation{School of Space Research, Kyung Hee University, 1732, Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea} \author[0000-0001-6047-701X]{Yong-Hee Lee} \affiliation{School of Space Research, Kyung Hee University, 1732, Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea} \author{Wooseok Park} \affiliation{School of Space Research, Kyung Hee University, 1732, Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea} \author{Colton Broughton} \affiliation{Department of Physics and Astronomy, University of Victoria, Victoria, BC, V8P 5C2, Canada} \author{Spencer Plovie} \affiliation{Department of Physics and Astronomy, University of Victoria, Victoria, BC, V8P 5C2, Canada} \author{The JCMT Transient Team} \keywords{ISM: jets and outflows -- stars: formation -- stars: flares -- stars: variables: general -- submillimeter: stars -- surveys} 
\section{Introduction}\label{sec:introduction} Young stellar objects (YSOs) are known for their variability due to both accretion phenomena and magnetic activity. Young low-mass pre-main sequence stars are fully convective and therefore develop magnetospheres from stellar dynamos. High-energy events driven by the magnetosphere's magnetic field interacting with itself or its surroundings are expected to occur, producing powerful stellar flares. Monitoring the timescales and amplitudes of changes in their emission provides a useful probe of these physical processes. Coronal flares are driven by magnetic reconnection of large loops of the field that protrude from the stellar surface. The magnetic reconnection converts magnetic energy into gas kinetic energy and bulk plasma motions. Most of the energy released is thermalized and radiated away as thermal emission, which can be measured using the soft X-ray emission from the corona or ultraviolet line emission from the chromosphere \citep{benz10}. A fraction of the released energy is emitted at radio frequencies as gyrosynchrotron radiation \citep{waterfall19}, manifesting as radio flares. An empirical scaling relation, $L_X/L_R \sim 10^{15\pm 1}$\,Hz, links the two across several orders of magnitude, although the X-rays saturate at $\log L_X/L_{\rm bol} \sim -3$ \citep{gudel93}. Stellar flares are typically explained as magnetic reconnection in the stellar magnetosphere. However, for young stars the largest events, such as those equivalent to coronal mass ejections, may arise from enormous loops. These loops could potentially couple to the surrounding accretion disk, if present, and enhance the magnitude of the flare \citep{for16,for17}. Compared with main-sequence stars, flares have an elevated importance for T Tauri stars, with X-ray flare luminosities typically ranging from $L_X \sim 10^{28\mbox{--}31}$\,erg~s$^{-1}$, comparable to the solar maximum $L_X \sim 10^{27}$\,erg~s$^{-1}$ \citep{feigelson99}. 
More recently, \citet{Getman2021} identified a sample of 1000 super-flares with $L_X \sim 10^{30.5\mbox{--}34}$\,erg~s$^{-1}$ from YSOs, detected on both disk-bearing and diskless systems and across a wide range of evolutionary stages, including protostars. Since many of these large flares are detected on diskless stars, the emission volume of the magnetic loops either does not require an anchor or may be amplified by anchoring the field to a nearby magnetized stellar or substellar companion \citep{lin22}. Previous observations of radio flares from YSOs have been reported at mm \citep{bower2003,furuya2003,massi2006,salter2008} and cm wavelengths \citep{for08,for17}. The brightest of these flares reached $L_R \sim 10^{19}$\,erg/s/Hz \citep{bower2003}, many orders of magnitude higher than the M-type star radio flaring events monitored in the mm by \citet{Macgregor2018, Macgregor2020}, and comparable to the highest super-flare X-ray luminosities, assuming the empirical scaling relation. With the advent of all-sky monitoring campaigns at mm wavelengths by the Atacama Cosmology Telescope (ACT) and the South Pole Telescope (SPT), powerful radio flares are beginning to be detected, with $L_R > 10^{15.5}$\,erg/s/Hz and reaching $L_R \sim 10^{19}$\,erg/s/Hz \citep{Naess2021, Guns2021}. Initial analyses suggest that these flares are associated with relatively young stars \citep{Naess2021}; however, proper statistical analysis awaits significantly larger samples, such as those planned by the CCAT Observatory \citep{CCAT2021} and CMB-S4 \citep{CMB2019}. These strong flares are important not only as a diagnostic of the magnetic and dynamo processes, but also for the energetic X-ray emission and MeV particles that interact with the surrounding disk. Combined, these processes lead to chemical reactions ionizing the disk, with diagnostics that include emission in \ion{Ne}{2} \citep[e.g.][]{glassgold2007,guedel2010} and H$^{13}$CO$^{+}$ \citep{Cleeves2017}. 
Finally, interactions between the MeV particles and dust grains in the disk can induce nuclear reactions, leading to short-lived radionuclides \citep{leet1998, sossi2017} and heating events \citep[see discussion by][]{shu1997}. The most powerful YSO radio flare observed to date, $L_R = 8 \times 10^{19}$\,erg/s/Hz, was detected from the Class\,II T Tauri binary JW\,566 in Orion OMC\,2/3 in the submillimeter (submm) at 450 and 850~\micron\ with the James Clerk Maxwell Telescope (JCMT) by \citet{mairs19}. Converted to X-ray luminosity using the \citet{gudel93} relation, that event produced $\sim10^{35}$\,ergs~s$^{-1}$, though it is likely that the correlation saturates at such high values. This scaled luminosity would be an order of magnitude stronger than the X-ray super-flares observed by \citet{Getman2021}, five orders of magnitude greater than the typical T Tauri star flare, and an impressive ten orders of magnitude brighter than most solar flares. The \citet{mairs19} discovery, found via a preliminary search through the JCMT Transient Survey data set, recovered this single flare detected on 2016-11-26, with a decay of 50\% during the thirty minute integration. The submm source associated with JW\,566 was not detected at any other epochs. The spectral index between the two submm wavelengths, $\alpha = 0.11$, is broadly consistent with non-thermal emission. Together, the short submm duration and low spectral index support the flare interpretation, with the brightening event likely due to magnetic reconnection energizing charged particles to emit gyrosynchrotron/synchrotron radiation. Along with having the largest radio luminosity of any YSO radio flare to date, the JW\,566 burst is also unique in being observed at the highest frequency, 650~GHz (450~\micron). 
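The conversion quoted above is a direct application of the empirical G\"udel--Benz scaling $L_X/L_R \sim 10^{15\pm1}$\,Hz; a one-line numeric check (names ours):

```python
import math

# Guedel-Benz empirical relation: L_X / L_R ~ 10^(15 +/- 1) Hz
GUEDEL_BENZ_HZ = 1e15

def lx_from_lr(l_radio):
    """Scale a radio luminosity [erg/s/Hz] to an X-ray luminosity
    [erg/s]; at luminosities this extreme the relation has likely
    saturated, so this is an upper-end estimate."""
    return l_radio * GUEDEL_BENZ_HZ

lx = lx_from_lr(8e19)  # JW 566 flare -> ~1e35 erg/s
```

The $\pm 1$ in the exponent means the scaled value carries at least an order of magnitude of intrinsic scatter.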
In this paper, we analyze 5.5 years of JCMT Transient Survey submm monitoring observations toward eight Gould Belt star-forming regions to search for evidence of variability from faint sources, primarily Class\,II YSOs. In Section~\ref{sec:observations}, we describe the JCMT Survey, the standard data reduction procedure, and the additional processing techniques required by this paper. We present the results of our variability investigation in Section~\ref{sec:analysis} and discuss the implications of our results in Section~\ref{sec:discussion}. The paper is summarized in Section~\ref{sec:conclusions}. \section{Observations}\label{sec:observations} \subsection{JCMT Transient Survey}\label{sec:obs:JTS} The JCMT Transient Survey has been monitoring eight active Gould Belt star-forming regions within $500$\,pc of the sun \citep{herczeg2017,mairs2017} since December 2015. In this paper we present an analysis of observations taken over 5.5 years, through June 2021. The survey is the first dedicated long-term monitoring program of YSOs at submm wavelengths. Each region is observed with at least a monthly cadence\footnote{Occasionally, regions are monitored with a higher cadence. For example Serpens\,Main, which hosts the 18 month quasi-periodic submm source EC 53 \citep{yoo2017,yhlee20}, also known as V371 Ser, is typically monitored with a 2-week cadence.}, when available. Almost 300 bright ($>140$\,mJy/bm) submm peaks across all regions have been investigated for variability \citep{mairs2017GBSTrans,johnstone2018,leeyh2021}, with greater than 20\% of the 83 monitored protostars showing robust evidence for long-term brightening or dimming associated with accretion processes within the disk. None of the bright protostars show evidence of single-epoch enhanced variability, and none of the submm bright starless cores are found to be variable with either short or long timescales \citep{leeyh2021}. 
The eight monitored star-forming regions -- IC\,348, NGC\,1333, OMC\,2/3, NGC\,2024, NGC\,2068, Ophiuchus, Serpens\,Main, and Serpens\,South (see Table \ref{tab:regions}) -- were selected to maximize the number of deeply embedded, Class\,0/I sources. JCMT Gould Belt Survey \citep{WardThompson2007} 850~\micron\ co-added images were used to locate bright submm peaks and collated against Spitzer YSO survey catalogues at mid- through far-IR \citep{megeath2012, dunham2015}. Along with the hundred-odd protostars within these regions, more than 1500 known Class\,II YSOs (and many more Class\,III YSOs) are monitored by the survey \citep{herczeg2017}, although almost all of the Class\,II systems are too faint in the submm to be detected in a single epoch. The monthly monitoring cadence was chosen based on an estimate of the thermal equilibration time for dust in a protostellar envelope coupled with the light propagation time through the envelope \citep{johnstone2013}. \subsection{Map Reconstruction}\label{sec:obs:dr} SCUBA-2 is a 10,000 pixel detector, simultaneously observing a 45~arcmin$^2$ footprint at 450 and 850~\micron\ \citep{holland2013}. We obtain images of 30-arcmin circular fields using the PONG1800 mode, which moves SCUBA-2 in a rotating `pong' pattern to obtain circular maps with consistent noise properties. Each total integration is $\sim 20$ to 40 min, set based on the precipitable water vapor to reach a noise level of $\sim 12$ mJy/beam at 850~\micron. The 8 star-forming regions in our sample have now been observed at 850~\micron\ for almost one full day (see Table~\ref{tab:regions}). The raw instrumental observations at $850$\micron\ are affected by the ``scan-synchronous'' low frequency correlation with the telescope motion (common-mode) and by a combination of elevation induced changes in sky brightness and magnetic field pickup across the detectors \citep[see][for details]{Chapin2013}. 
In addition, there is attenuation due to the atmospheric opacity, flat field corrections, sky noise, cosmic rays, and other contaminants that are considered and removed by the mapmaker. The \textsc{Makemap} algorithm, explained in detail by \citet{Chapin2013} and provided as a part of the \textsc{Starlink} software \citep{Currie2014}, is employed to produce astronomical maps from the raw data. At 850~\micron, a 3\arcsec\ pixel scale is chosen in order to subsample the $\sim14\arcsec$ beam. In the pre-processing stage of the data reduction process, flat-field correction, time series down-sampling, step-correction, and de-spiking of the inputs are applied. This is followed by an iterative process, where common-mode removal, extinction correction, high-pass filtering, map estimation and white noise measurements are performed. The reduction algorithm builds the map by calculating at each pixel location the flux density (hereafter brightness) and measurement uncertainty, based on the measured signal and noise properties of the relevant bolometers. Finally, spatial filtering to suppress signal on scales $>200\arcsec$ is applied to each JCMT Transient map reconstruction to optimize the extraction of compact sources \citep[for additional details see][]{Chapin2013, mairs2017}. The inherent $2\arcsec$ to $6\arcsec$ pointing uncertainty of the JCMT \citep{mairs2017} is corrected by using a cross-correlation technique (see Mairs et al.\ in preparation), allowing for sub-arcsecond relative alignment accuracy across epochs. Each aligned map is smoothed using a $6\arcsec$ (2 pixel) FWHM Gaussian kernel in order to reduce pixel noise, slightly broadening the 850~\micron\ FWHM beam to 15\arcsec. The relative brightness is calibrated by tracking the peak brightness of a large sample of submm sources in each region over all epochs and weighting their contributions to the calibration normalization by their individually measured signal to noise (see Mairs et al.\ in preparation).
The alignment and relative calibration techniques achieve better than 2\% accuracy in the relative calibration at 850~\micron \citep[Mairs et al.\ in preparation,][]{mairs2017}, an improvement on the nominal brightness calibration of 8\% for standard reductions \citep{mairs2021}. \subsection{Normalized Standard Deviation and Normalized Epoch Residuals} \label{sec:obs:res} Our goal in this paper is to search for objects that are bright in a single epoch. To discover these flares, we first measure a standard deviation map that describes the noise at every pixel location in the field. We then compare each map of the region against its standard deviation map to identify outliers as possible transient objects. The standard deviation of the brightness at any pixel location, calculated by considering all epochs, can be compared against the expected measurement uncertainty at that location to search for unknown variable sources. In practice, in order to properly account for the significant increase in measurement noise toward the edge of each region, we use the square root of the average of the squares of the epoch-specific uncertainty measurements at the fixed location, derived during the map-making process (see Section~\ref{sec:obs:dr}). With such an approach, and assuming Gaussian statistics, the mean of the normalized standard deviation map tends to unity and the range of normalized values is inversely proportional to the square root of the number of included epochs. Equivalently, the histogram of the values in the normalized standard deviation map can be approximated by a Gaussian with a peak at one and a width $N_{\rm epochs}^{-1/2} \sim 0.15$, for $N_{\rm epochs} = 30$--70. Thus, for the regions analysed here a 5 sigma outlier in the normalized standard deviation map should have a pixel value of 1.8, whereas a 25 sigma outlier will have a pixel value close to 5.
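The scaling quoted above is easy to check numerically; a short sketch (the epoch counts are from the survey, the rounding conventions are ours):

```python
import math

# Width of the normalized standard deviation histogram scales as
# N_epochs^{-1/2}; the survey regions span roughly 30--70 epochs.
for n_epochs in (30, 70):
    print(n_epochs, round(1.0 / math.sqrt(n_epochs), 2))  # 0.18 and 0.12

# For a representative width of 0.15, an m-sigma outlier in the
# normalized map sits at a pixel value of 1 + 0.15 m.
for m in (5, 25):
    print(m, 1.0 + 0.15 * m)  # 1.75 (~1.8) and 4.75 (~5)
```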
When analysing the normalized standard deviation maps, localized sources with significant enhanced variability above the measurement noise will be evident by eye or can be detected by automated routines as described in Section~\ref{sec:analysis:all}. Care must be taken, however, when searching intrinsically bright areas within each map, as the calibration uncertainty becomes comparable in magnitude to the measurement noise for sources brighter than 500\,mJy/bm. Furthermore, the JCMT focus is not always sharp and thus the 850~\micron\ beam sidelobes can produce excess emission up to distances $\sim 30$\arcsec\ from bright peaks \citep[for details on the properties of the JCMT beam see][]{mairs2021}. Given that the individual epochs are brightness calibrated using the peak brightness of known sources, this excess focus uncertainty mostly adds positive signal to the map, leading to a noise component slightly asymmetric around zero. To account for brightness calibration and focus issues, we divide each star-forming region into two areas depending on whether the mean brightness, averaged over all epochs, lies above or below $100\,$mJy/bm (equivalently, 8 times the single epoch uncertainty of $12\,$mJy/bm). Typically, only a few percent of the area within each region lies above this threshold (see Table \ref{tab:stats}). Figure \ref{fig:ngc2068} presents the mean brightness and normalized standard deviation maps for a subsection of the NGC\,2068 star-forming region in Orion. The green 100\,mJy/bm contour marking the boundary between high and low mean brightness areas is over-plotted on the normalized standard deviation map (right panel), showing that the anomalously high standard deviation measurements are almost entirely confined to these bright regions within the map. Similarly, estimated five sigma anomalous normalized standard deviation contours are over-plotted on the mean brightness map (left panel) in blue. 
Red contours denote regions of extreme, 25 sigma, normalized standard deviation, highlighting the protostars HOPS\,373 to the north and HOPS\,358 to the south, as well as beam focus complications towards a bright non-variable protostar, HOPS\,317. These bright sources are best analysed directly via their light curves. The multi-wavelength time variability of HOPS\,373 has been investigated in detail by \citet{yoon2022}, while \citet{leeyh2021} found both HOPS\,373 and HOPS\,358 to be robust submm variables. When searching for sources that vary in only one epoch, such as rare flaring events, the normalized standard deviation map is not the most appropriate tool, because individual events themselves have a disproportionate effect on the standard deviation. For such events it is more effective to subtract each individual epoch from the mean brightness determined using all {\it other} epochs, i.e.\ excluding the epoch of interest, to produce a single epoch residual brightness map. To quantify the significance of residual map peaks, we again apply a normalization by dividing the residual at each pixel by an expected measurement uncertainty. In this case there are two relevant contributions to the measurement uncertainty: (1) epoch-specific uncertainties that depend on the particulars of the observing conditions and (2) location-specific uncertainties which account for both intrinsic, on-going, source variability and anomalous measurement uncertainties due to, for example, nearby sources. We again take the epoch-specific uncertainty measure directly from the map-making process (see Section \ref{sec:obs:dr}) and here we equate the location-specific measure with the standard deviation of the brightness at each pixel across all other epochs. We then normalize each single epoch residual map by the {\it larger} of these two uncertainty values. As a guide, the epoch-specific measurement uncertainty ranges for each region are presented in Table \ref{tab:regions}.
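The residual normalization described above amounts to a few lines of array arithmetic; a minimal sketch, assuming the epochs are stored as an (N_epochs, ny, nx) array (array conventions and variable names are ours, not the survey pipeline's):

```python
import numpy as np

def normalized_residual(maps, sigma_epoch, k):
    """Normalized single-epoch residual for epoch k: subtract the mean of
    all *other* epochs, then divide by the larger of the epoch-specific
    measurement uncertainty and the per-pixel standard deviation over the
    other epochs."""
    others = np.delete(maps, k, axis=0)        # exclude the epoch of interest
    residual = maps[k] - others.mean(axis=0)
    sigma_loc = others.std(axis=0, ddof=1)     # location-specific scatter
    return residual / np.maximum(sigma_epoch[k], sigma_loc)

# Illustrative use on synthetic pure-noise maps:
rng = np.random.default_rng(1)
maps = rng.normal(0.0, 12.0, size=(40, 8, 8))  # 40 epochs, 12 mJy/bm noise
resid = normalized_residual(maps, np.full(40, 12.0), 7)
```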
For this calculation, only those areas lying below the $100$\,mJy/bm contour and within the central 15\arcmin\ radius of each region are utilized, ensuring that the time integration per pixel is uniform. These analyses were performed on the full 20--40 minute integrations; flare lifetimes, however, may often be shorter. The $5.5\sigma$ detection limit of 65\,mJy/beam corresponds to the summed integration, so a flare that lasted only $\sim10$~min would need to have a peak of $>180$\,mJy/beam to be identified. Flares that last for only a minute may be missed entirely, since each location in a map is only visited $\sim$ 10 times during a single observation \citep{mairs19}. Figure \ref{fig:ngc2068} clearly shows the importance of location-specific measurement uncertainties within each star-forming region, especially in areas of high brightness. Figure \ref{fig:ic348} presents example single epoch residual maps of IC\,348, revealing the variety of possible epoch-specific measurement uncertainty categories. While the majority of epochs are similar to 2016-01-15, where the measurement noise has $\sigma = 11\,$mJy/bm and is uniformly distributed, very occasionally an epoch will have anomalous spatially-correlated noise similar to 2017-02-09, where the measurement noise nevertheless remains low, $\sigma = 11\,$mJy/bm. Somewhat more common are epochs, such as 2019-04-11, where the noise properties remain uniformly distributed but are enhanced due to instrument or weather conditions; in this extreme case $\sigma = 16\,$mJy/bm. Given this knowledge, after determination of candidate single epoch variables, it is important to check for potential measurement noise anomalies in the observed epoch. \section{Analysis}\label{sec:analysis} \subsection{Searching for Faint Variables Over All Epochs} \label{sec:analysis:all} As a first step to identifying variable sources, the normalized standard deviation maps for each region produced using all available epochs were analysed by eye.
For this analysis the 100\,mJy/bm contour was used to mark the bright areas, with their known higher residual uncertainty, as distinct from the bulk of each map where the noise properties are significantly more uniform. Although not the focus of this paper, within the bright areas of these star-forming regions we often find circularly shaped localized residuals at the locations of embedded protostars. In all cases, these sources were previously known to vary in brightness and are more effectively investigated via their light curves \citep{johnstone2018,leeyh2021}. The confined bright areas also present arc-like residuals tracing the extended sidelobes of the telescope beam due to focus issues (see for example the location of HOPS 317 in the right panel of Figure~\ref{fig:ngc2068}). In contrast, within the low brightness areas of these star-forming regions, we find by eye only two locations with significant enhanced normalized standard deviation measurements. These locations are coincident with JW\,566 in OMC\,2/3 (Figure~\ref{fig:omc23}), previously discovered as a transient source by \citet{mairs19}, and a Spitzer-identified candidate Class\,II YSO in NGC\,2023, source MGM12\,2864 (hereafter Source\,2864) in the catalogue by \citet[MGM12]{megeath2012}, at the edge of the NGC\,2024 map (Figure~\ref{fig:n2024}). An automated procedure was used to search each star-forming region for evidence of anomalous standard deviation measurements at the locations of the known YSOs. For this analysis we utilized the Spitzer-based YSO catalogues by \citet{megeath2012}, \citet{stutz2013}, and \citet{dunham2015}. In Table \ref{tab:stats} we count the YSOs within the faint brightness area of each region, where the number ranges from 56 (Serpens\,Main) to 358 (OMC\,2/3). These YSOs occupy a small, typically few percent, fraction of each region's area (see Table \ref{tab:stats}).
Thus, the range of normalized standard deviation measures found within each map can be used to determine the likelihood that a large value found at the location of a YSO is due to intrinsic source variability. In detail, a roughly beam-sized box of 5 by 5 pixels {(15\arcsec\ by 15\arcsec)} was centered on each YSO location {to ensure that the peak was well sampled despite a several arcsecond uncertainty in absolute image registration}. The fifth largest pixel value in the box was used as a proxy for the source variability in order to minimize the shot noise associated with individual pixels {due to the pixel gridding oversampling the beam}. This value was then compared against both the total number of pixels across the rest of the normalized standard deviation map (excluding areas with bright emission) that exceed it and the fractional area of the map containing YSOs, in order to estimate the likelihood that such an enhanced brightness deviation measurement at a YSO location within the region is due to random chance. The False Alarm Probability (FAP), therefore, decreases as the fifth largest pixel value in a given YSO box increases and as the number of known YSOs within a region decreases. We set a relatively shallow FAP threshold of 10\% for candidate faint variables within each individual region, {in order to test for variables near the sensitivity limit}, yielding an expectation of one potential false alarm over the eight analysed regions. In total, seven candidates were recovered by this process. Five of the candidates were subsequently withdrawn after being found to lie extremely close to the $100\,$mJy/bm contour where the residual uncertainty could be seen to spatially spread. The only two robust detections via the automated process are the same sources as found by eye, JW\,566 in OMC\,2/3 (FAP$< 0.01$\%) and Source\,2864 in NGC\,2023 (FAP$<3$\%).
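Schematically, the false alarm estimate combines the number of equally extreme pixels elsewhere in the map with the fractional area covered by YSO boxes; a sketch with illustrative numbers (the exact bookkeeping in the survey pipeline may differ):

```python
import math

def false_alarm_probability(n_higher, n_map_pixels, n_yso_pixels):
    """Probability that at least one map pixel as extreme as the YSO
    proxy value lands inside the YSO-covered area by chance alone,
    treating the coincidences as Poisson-distributed."""
    f_yso = n_yso_pixels / n_map_pixels   # fractional area covered by YSOs
    expected = n_higher * f_yso           # expected chance coincidences
    return 1.0 - math.exp(-expected)

# e.g. 20 equally extreme pixels in a 500,000-pixel map, with 100 YSOs
# each sampled by a 5x5 pixel box (all numbers illustrative):
fap = false_alarm_probability(20, 500_000, 100 * 25)
print(round(fap, 3))  # ~0.1, near the 10% threshold used in the text
```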
\subsection{A Blind Search for Transient Variables by Epoch} \label{analysis:epoch} Following the approach developed in Section~\ref{sec:analysis:all}, for each region the individual epochs were analysed carefully to search for transient events localized in time. Here, rather than using the normalized standard deviation map to uncover potential variables, we instead analysed the normalized residual signal in each epoch, subtracting off the mean of all other epochs and dividing by the expected measurement uncertainty at each pixel (see Section~\ref{sec:obs:res}). For this analysis we exclusively consider the areas within each region that lie below the 100\,mJy/bm contour in order to avoid complications due to the brightness calibration and telescope focus. Each star-forming region map consists of half a million pixels, or about twenty thousand independent beams.\footnote{Recall that the map pixels are 3\arcsec\ in length and the smoothed 850~\micron\ JCMT beam has a FWHM of $\sim15\arcsec$.} Thus, a greater than four sigma outlier is expected about once per epoch due to random chance alone. Given that 34--67 epochs are observed for each of the eight regions, at least one five sigma false positive is expected to be found in the full sample, assuming Gaussian statistics. Thus, we first searched the normalized residual maps for each epoch of each region for anomalous residual peaks higher than 5.5 sigma. Such events were found in four epochs, specifically IC\,348 (2015-12-22, 2017-02-09), NGC\,1333 (2015-12-22), and OMC\,2/3 (2016-11-26). Inspection of the specific images, however, revealed that three of these maps contained correlated noise resulting in extended blooms across the map and leaving enhanced normalized residuals (see for example the bottom right panel in Figure~\ref{fig:ic348}, IC\,348 epoch 2017-02-09).
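The expected chance-coincidence counts quoted above follow directly from the Gaussian tail probability; a quick check (beam and epoch counts rounded from the text):

```python
import math

def p_above(m):
    """One-sided Gaussian tail probability beyond m sigma."""
    return 0.5 * math.erfc(m / math.sqrt(2.0))

beams_per_map = 20_000   # ~500,000 pixels / ~25 pixels per beam
epochs_total = 400       # eight regions with 34--67 epochs each

n4_per_epoch = beams_per_map * p_above(4.0)
n5_full_survey = beams_per_map * epochs_total * p_above(5.0)
print(round(n4_per_epoch, 2))    # ~0.6 false >4 sigma peaks per epoch
print(round(n5_full_survey, 1))  # ~2 false >5 sigma peaks over the survey
```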
Only epoch 2016-11-26 of OMC\,2/3 (see Figure~\ref{fig:omc23e}) contains a viable peak greater than 5.5 sigma, at the location of the T Tauri star JW\,566 \citep[as already reported by][]{mairs19}. Other than JW\,566, to date within the JCMT Transient Survey there are no epoch-specific brightening events above an upper limit of 5.5 sigma. Given that the typical epoch uncertainty is 12~mJy/bm (Table~\ref{tab:regions}), this is equivalent to $\sim 65$\,mJy/bm at 850~\micron. \subsection{Searching for Single-Epoch Transients of Known YSOs} At the locations of the known infrared YSOs in each region (Table \ref{tab:stats}) it is possible to search deeper within the normalized residual maps, using the procedure outlined in the previous section whereby we compare the single epoch normalized residual measurements obtained at the locations of the known sources against the distribution of measurements across the larger map. Again, we search for the fifth largest pixel value within a 5 by 5 pixel region centered on each known YSO and compare against both the number of pixels with this value or higher in the full map and the fraction of the map covered by YSOs. We consider a YSO to have a candidate transient event in a given epoch when its FAP for that epoch is less than 10\%. Even with this low detection threshold, we find only seven candidate events within the almost 400 JCMT Transient Survey epochs. This is somewhat less than the 40 events expected solely due to the FAP threshold, suggesting that the locations of YSOs are slightly less variable than the rest of each map (potentially due to leakage of excess uncertainty across the 100\,mJy/bm boundary). Furthermore, all of the seven candidate events have peak residual normalized signal below 4.5 sigma (see Table~\ref{tab:stats}), with the exception of JW\,566, which reaches 25.9 sigma (Figure~\ref{fig:omc23e}).
The large increase in the significance of the JW\,566 detection due to the epoch-specific analysis versus the all-epoch analysis is expected, because the extraordinary brightening event is {\it removed} from the epoch-specific determination of the expected standard deviation across epochs. Similarly, the lack of detection of significant variability from Source\,2864 in NGC\,2023 via the epoch-specific analysis suggests that it did not undergo a single transient event but rather is a faint long-term submm variable source. We discuss the enigmatic Source\,2864 further in Section \ref{sec:disc:blazar}. Across eight star-forming regions, the JCMT Transient Survey covers 1200 YSOs (Table \ref{tab:stats}). Furthermore, each region has about 40 epochs to date (Table \ref{tab:regions}). In total, roughly 50,000 individual YSO 850~\micron\ brightness measurements and single epoch normalized residuals have been made by the JCMT Transient Survey. Assuming Gaussian statistics, we therefore expect at least one $\sim4.5$ sigma event just by random chance. As such, the YSO candidates identified above with peaks lower than 4.5 sigma are not robust. With the exception of the extraordinary bursting event coincident with JW\,566, we therefore assert that there are no epoch-specific YSO brightening events above an upper limit of 4.5 sigma, $\sim 55$\,mJy/bm at 850~\micron, in the JCMT Transient Survey data set. \section{Discussion}\label{sec:discussion} \subsection{Submm Flaring Events from YSOs} \label{disc:flares} \citet{mairs19} discovered an extreme flaring event from the T Tauri star JW\,566 within the JCMT Transient Survey monitoring of OMC\,2/3. We also recover JW\,566 through our more detailed analysis over all regions and epochs to date. Our search of known, infrared-selected YSOs reveals no additional detections of single-epoch brightening events larger than 4.5 sigma, i.e., $\sim 55$\,mJy/bm, or equivalently a factor of 8 fainter in brightness than JW\,566.
Our blind analysis also reveals zero other single-epoch transient events larger than 5.5 sigma, or 65~mJy/bm. This robust non-detection is surprising, given that flaring events, like many stochastic processes, are expected to follow a power-law distribution with a greater number of events at lower brightness. For example, \citet{Getman2021} find that dN/d$L_X \propto {L_X}^{-\alpha}$, with $\alpha \sim 2.1$ for both diskless and disk-bearing pre-main sequence stars, when $L_X > 10^{32.5}$erg/s. Furthermore, the JW\,566 event was observed to decay by $\sim50$\% over the half hour epoch \citep{mairs19}, which implies that it would have remained observable above our $\sim 55\,$mJy/bm threshold for at least an hour, assuming a linear decay, or two hours, if the decay were exponential. Taken together, these arguments imply that one should expect more faint detections than bright ones. We therefore conclude that the JW\,566 submm flare was an exceptionally rare or unique event, and that its occurrence rate within the JCMT Transient Survey is not directly related to an underlying power-law distribution of bursts. The JCMT Transient Survey has monitored about 1200 infrared-selected YSOs for about 40 epochs each, equivalently 50000 individual half-hour observations. Additional Class\,III (diskless) YSOs are likely present in the field, a missing and older population that is often revealed through analysis of Gaia astrometry. This represents about 1000 days, equivalently 3 years, of 850~\micron\ monitoring of a random YSO. Taking our non-detection threshold at $\sim 55$\,mJy/bm and assuming a distance of 400\,pc for the typical monitored YSO, this converts to a radio luminosity threshold of $L_R \sim 10^{19}$\,erg/s/Hz, above which we assert that there have been no YSO radio flaring events detected, other than the exceptional source JW\,566.
Assuming the \citet{gudel93} scaling relation between radio and X-ray luminosity yields a lower limit X-ray luminosity of $L_X \sim 10^{34}$\,erg/s. This remains an extremely high threshold, on par with the brightest X-ray flares in the super-flare sample by \citet{Getman2021} and likely at the limit where X-rays are saturated and cannot further increase in luminosity. The nearest star-forming region within the JCMT Transient Survey, Ophiuchus, is approximately three times closer than this nominal distance. For this region, the 100 YSOs, monitored over 34 epochs and corresponding to 70 days of observation of a single YSO, yield an upper limit on radio luminosity from flares of $L_R \sim 10^{18}$\,erg/s/Hz (or $L_X \sim 10^{33}$\,erg/s). {As discussed in Section 2.1, radio flares may vary on timescales shorter than the standard half-hour JCMT epoch integrations. Thus, care must be taken in considering the limits presented here, which assume a constant brightness throughout the observation, when comparing with models or other radio surveys. Furthermore, while each epoch integration is approximately thirty minutes, the detector array is substantially smaller than the image field of view and thus the brightness measurement at any given location is an average over a summation of shorter, time-separated integrations \citep[for details see][]{mairs19}.} As noted in the introduction, all-sky mm surveys with ACT and SPT \citep{Naess2021,Guns2021} have caught a handful of radio flares from young stars at luminosities close to the non-detection threshold presented here. Planned campaigns by the CCAT-prime collaboration \citep{CCAT2021} and the CMB-S4 consortium \citep{CMB2019} should add significantly to this sample. JW\,566, however, remains almost an order of magnitude brighter than any of these other detections. For older M stars, monitoring for radio flares at mm wavelengths by \citet{Macgregor2018} and \citet{Macgregor2020} has uncovered small flares but no superflares.
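The conversion from the brightness threshold to the luminosity limits above can be sketched as follows; the distances and flux threshold are from the text, while the Güdel-Benz ratio $L_X/L_R \sim 10^{15.5}$\,Hz is the commonly quoted value, used here as an assumption:

```python
import math

PC_CM = 3.086e18     # cm per parsec
MJY_CGS = 1.0e-26    # erg s^-1 cm^-2 Hz^-1 per mJy

def radio_luminosity(flux_mjy, distance_pc):
    """Isotropic spectral luminosity, L_R = 4 pi d^2 S_nu."""
    d_cm = distance_pc * PC_CM
    return 4.0 * math.pi * d_cm**2 * flux_mjy * MJY_CGS

l_r = radio_luminosity(55.0, 400.0)            # survey-wide 4.5 sigma threshold
l_x = l_r * 10**15.5                           # assumed Guedel-Benz ratio
l_r_oph = radio_luminosity(55.0, 400.0 / 3.0)  # Ophiuchus, ~3x closer

print(f"{l_r:.1e}")      # ~1e19 erg/s/Hz
print(f"{l_x:.0e}")      # ~3e34 erg/s, i.e. L_X ~ 10^34
print(f"{l_r_oph:.0e}")  # ~1e18 erg/s/Hz
```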
Finally, we note that the JW\,566 flare is doubly remarkable due to both its extreme radio luminosity and its extremely high radio frequency, the 650\,GHz measurement being almost an order of magnitude higher in frequency than either the ACT or SPT detections \citep{mairs19}. We therefore speculate that the rarity of the event may be due to a requirement that to be observed at high (submm) frequencies the event must also be extremely strong. Turning the hypothesis around, such a scenario would introduce a minimum luminosity threshold for submm flares, which could explain the lack of lower brightness detections by the JCMT Transient Survey. Observational confirmation of this scenario would require detection of additional bright flares at 650\,GHz, similar to that of JW\,566, while no fainter events are detected. Interestingly, JW\,566 is a known binary system \citep{daemgen2012} with a disk around at least one component, as seen at mid-IR and mm wavelengths \citep{megeath2012,hacar2018}. This might increase the complexity of the magnetic structure of the system and its interaction with the surrounding disk material. Indeed, radio synchrotron emission has been mapped in the hierarchical system V773\,Tau~A \citep{massi2006, torres2012}, which has four components, including a tight inner binary. The enhanced activity has been shown to be produced by interacting helmet streamers, and at periastron the radio brightness can be raised to more than 30 times that at apoastron \citep{adams2011}. The spectroscopic binary DQ\,Tau also exhibits strong mm flares that erupt at periastron \citep{salter2010}. Unlike V773\,Tau~A or DQ\,Tau, JW\,566 has produced only one observable powerful flare thus far; however, it remains intriguing to speculate that this single event highlights JW\,566 as having an unusual system architecture, perhaps with at least one component also being a tight and yet-unresolved binary.
The rarity of this event may be partially explained if such bright flares occur only during periastron passage of close binaries, which constitute $\sim 3$\% of the total population \citep{mazzola20,kounkel21}. \subsection{An Enigmatic Blazar Masquerading as a YSO Candidate} \label{sec:disc:blazar} Along with JW\,566 in OMC2/3, our search for faint transient events uncovers a multi-epoch variable near the southern edge of the NGC\,2024 star-forming region field, at R.A.\ 05:41:21.7, Dec.\ $-02:11:08.3$, associated with Source\,2864 in \citet{megeath2012} and also GBS-VLA J054121.69$-$021108.3. This source is likely a blazar, as indicated by a parallax consistent with a background object \citep{kounkel17}, but it has previously been classified as a Class\,II YSO \citep{megeath2012}, due to its spectral energy distribution and its location within an active star-forming region, and was rejected as a quasar candidate because of that YSO classification. In contrast with JW\,566, Source\,2864 is detected in every epoch of submm monitoring of the field. The left panel of Figure~\ref{fig:107_2864} reveals a dimming over time, with a short months-long burst in Autumn 2017. Over all epochs, the mean peak brightness at 850~\micron\ is $S = 53$\,mJy/bm while the variation around the mean is $\sigma = 28$\,mJy/bm, a factor of two larger than the calculated measurement uncertainty in the immediate vicinity of the source, 12--15\,mJy/bm.\footnote{As Source\,2864 is near the edge of the NGC\,2024 field, the surrounding measurement uncertainties are somewhat larger than the typical values of 11--12\,mJy/bm presented in Table~\ref{tab:regions}.} This source is fainter than the brightness limits for earlier JCMT-Transient analyses \citep{mairs2017GBSTrans,johnstone2018,leeyh2021}. Nevertheless, Source\,2864 has a fractional variability, $\sigma/S \sim 0.5$, which is similar to the strongest submm variables detected in those analyses.
Source 2864 has been previously observed and classified using near- through mid-IR measurements. The source is not visible in the 2MASS survey in any of J, H, or K. Based on Spitzer mid-IR photometry, \citet{mookerjea09} classified it as a highly extincted Class\,II source. This Spitzer classification was also assessed by \citet{megeath2012} based on the non-extinction-corrected IRAC spectral index $\alpha = 0.03$. However, \citet{povich13} reconsidered the classification and calculated that almost 40 magnitudes of extinction are required to explain the optical through mid-IR spectral energy distribution, if the source is to be considered a late-stage YSO. The extreme extinction raises suspicions that Source\,2864 is only masquerading as a YSO. {It is well known \citep[e.g.][]{Harvey07, Gutermuth09, kryukova14} that YSOs have infrared colors similar to background galaxies and AGN \citep[for a review see][]{megeath22}. More recently, attempts to catalogue galaxies using their mid-IR colors \citep{rakshit19, paggi20} have had to deal with YSO contamination, especially for blazars and BL\,Lac objects.} Due to its mid-IR colors, Source\,2864 was included in the initial ALMA blazar catalogue by \citet{paggi20} but subsequently culled due to its pre-existing classification as a YSO. Indeed, many of the measured properties of Source 2864 are similar to those of young stars in Orion. For example, Source\,2864 is X-ray bright. Using ASCA, \citet{yamauchi00} found it to be the hardest X-ray source toward NGC\,2023, with an attenuation that converts to A$_{\rm v} \sim 30$, similar to the extinction estimate from the infrared. The authors note that the hardness of the X-rays might also be explained by an extragalactic classification; however, the non-extinction corrected X-ray brightness is only slightly lower than for the other dozen X-ray sources revealed in the field. At the distance of Orion, the observed brightness is equivalent to $L_X \sim 10^{30}$\,erg/s.
\citet{lopez13}, using XMM-Newton, also found Source\,2864 to be X-ray bright, with an uncorrected X-ray brightness similar to their other NGC\,2023 targets. While the infrared and X-ray properties are consistent with either a YSO or a background blazar, the radio properties demonstrate that the blazar interpretation is correct. Source 2864 was the brightest object in the field in a 3.6 cm survey by \citet{reipurth2004}. \citet{kounkel2014} later noted that Source\,2864 has the highest centimeter flux density reported for any young star. The authors also found it to be highly variable at radio wavelengths, with variations at almost the 50\% level. The source remained extremely bright in follow-up VLBA observations \citep{kounkel17}, requiring a very small angular size for the emission region. Additionally, the VLBI image of Source\,2864 reveals a core-jet structure \citep{petrov2021} that is typical of extragalactic sources. More importantly, the source revealed no measurable parallax, implying a location more distant than Orion. The source is therefore almost certainly a background, extragalactic, source, which, following the mid-IR color selection by \citet{paggi20}, we identify as a blazar. Source\,2864 is highly variable across many wavelengths, from the radio through the mid-IR (see Fig.~\ref{fig:107_2864}). Using the variability classification scheme for mid-IR NEOWISE lightcurves developed by \citet{park2021}, Source\,2864 is classified as a mid-IR irregular variable with a large observed flux standard deviation. Furthermore, across the last 5.5 yrs, Source\,2864 is dimming in the mid-IR in a similar manner to the submm. The source also undergoes occasional brightness jumps of $\sim 1-1.5$ mag, including a mid-IR burst near MJD\,58000 almost coincident in time with the JCMT-observed submm burst. Indeed, the three JCMT data points marking the burst span from 18 days before the NEOWISE observation to 49 days after.
Unfortunately, an earlier NEOWISE burst around MJD\,57250 preceded the start of the JCMT Transient Survey by 100 days, and a later burst near MJD\,58900 occurred just as Orion became a daytime target at the JCMT in Winter 2020, unobservable for the next few months. Similarity between observed mid-IR and submm lightcurves has been previously noted for protostars by \citet{contreras20}, where the expectation is that the mid-IR traces the variable accretion rate near the protostar and the submm traces the temperature response in the envelope to the accretion \citep[see also theoretical arguments by][]{johnstone2013,macfarlane2019a,macfarlane2019b,baek20}. As anticipated theoretically, \citet{contreras20} found that the typical fractional time-dependent response was much smaller in the submm than in the mid-IR, such that $F_{850} \propto F_{\rm mid-IR}^{1/5.5}$. Source\,2864, however, appears to deviate significantly from this relationship, showing a factor of two brightness variation for the MJD\,58000 burst in both the mid-IR and submm. Furthermore, radio variability for this source has also been observed at the 50\% level \citep{kounkel2014}. We therefore speculate that multi-wavelength variability can be used to classify sources, breaking the color-color diagram degeneracy between YSOs and extreme extragalactic sources.\footnote{Periodic mid-IR lightcurves have similarly been used to separate background AGB stars from YSOs in the Gould Belt \citep{leeje2021,park2021}.} Around YSOs, the physical environments responsible for various aspects of the spectral energy distribution, from optical through radio, are distinct and therefore can be expected to respond to system changes with differing amplitudes, and possibly time delays.
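The tension with the protostellar scaling is easy to quantify; under $F_{850} \propto F_{\rm mid-IR}^{1/5.5}$, a factor-of-two mid-IR burst should produce only a modest submm response:

```python
# Expected submm response to a factor-of-two mid-IR burst under the
# protostellar envelope scaling F_850 = const * F_midIR**(1/5.5)
expected = 2.0 ** (1.0 / 5.5)
print(round(expected, 2))  # 1.13, i.e. only a ~13% submm change
# Source 2864 instead brightened by a factor ~2 in both bands, far in
# excess of the envelope-response prediction for protostars.
```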
For extreme extragalactic sources, such as blazars, the physical process responsible for the spectral energy distribution is less clearly understood but expected to be dominated by non-thermal processes producing a more consistent variability across wavelengths, as observed here for Source\,2864. For blazars, the spectral energy distribution in the submm through mid-IR is often dominated by synchrotron emission that leads to correlated variability in this region of the spectrum \citep[e.g.][]{hartman96}. Finally, we have analysed the JCMT submm lightcurve of Source\,2864 (Figure \ref{fig:107_2864}: Left) to search for secular trends and characteristic timescales.\footnote{Additional time-dependent mm through submm observations of Source\,2864 are available from ALMA, where it is used as an occasional secondary calibrator \citep{bonato19}. Although these data cover a range of wavelengths, they are significantly sparser than either the JCMT or NEOWISE observations and therefore not used here for time-dependent analyses.} Considering the observations prior to 2021, a robust linear fit yields a slope of $-12\,$mJy/bm/yr; after that date, however, the lightcurve has flattened and possibly begun to rise again. Anticipating that the light curve remains tied to an underlying mean brightness, while undergoing significant deviations, a structure function analysis \citep[see][for methods]{sergison20} was performed, which found increased power on longer timescales but no clear associated intrinsic time constant. Similarly, a damped random walk analysis \citep[see][for methods]{kelly09,dexter14} was performed in order to estimate a saturation time scale (the timescale beyond which the amplitude of variability does not increase), with the results suggesting a value larger than the presently monitored 5.5 years. As shown by \citet{bower15}, such a long submm-measured saturation time is not uncommon for blazars. 
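The structure-function step above can be sketched as follows; this is a minimal illustration on synthetic data, and the lag bins and lightcurve parameters are assumptions, not the survey's actual pipeline:

```python
import numpy as np

def structure_function(t, f, bins):
    """First-order structure function: mean squared flux difference
    between all epoch pairs, binned by time lag."""
    dt = np.abs(t[:, None] - t[None, :])
    df2 = (f[:, None] - f[None, :]) ** 2
    iu = np.triu_indices(len(t), k=1)        # unique epoch pairs
    lags, diffs = dt[iu], df2[iu]
    return np.array([diffs[(lags >= lo) & (lags < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

# synthetic lightcurve: secular dimming plus measurement noise
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 2000.0, 60))           # epochs [days]
f = 1000.0 - 0.03 * t + rng.normal(0.0, 5.0, 60)    # flux [mJy/bm]
sf = structure_function(t, f, np.array([0, 100, 400, 1000, 2000]))
# a secular trend shows increasing power toward longer lags
```

For a purely stochastic process tied to a mean, the structure function instead flattens beyond the saturation timescale, which is the behaviour probed by the damped-random-walk fit.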
\section{Conclusions}\label{sec:conclusions} In this paper we have analyzed 5.5 years of JCMT Transient Survey submm monitoring observations toward eight Gould Belt star-forming regions to search for evidence of variability from faint sources, primarily Class\,II YSOs. The eight regions, which include $\sim 1200$ infrared-selected young stellar objects, have been monitored an average of 47 times, with each integration lasting about 30 minutes. Here we summarize the main results of the paper: \begin{itemize} \item When searching the normalized standard deviation maps derived using all epochs, only two robust source detections are recovered, JW\,566 \citep{mairs19} in OMC\,2/3 (FAP$< 0.01$\%) and Source\,2864 \citep{megeath2012} in NGC\,2023 (FAP$<3$\%); \item Other than JW\,566, to date within the JCMT Transient Survey there are no epoch-specific brightening events above an upper limit of $5.5\sigma$, or $\sim 65$\,mJy/bm at 850~\micron. Furthermore, at the locations of known YSOs there are no epoch-specific brightening events above an upper limit of $4.5\sigma$, or $\sim 55\,$mJy/bm at 850~\micron. Taking a distance of 400\,pc for the typical monitored YSO, the brightness limit above which no additional single epoch events are detected converts to a radio luminosity threshold of $L_R \sim 10^{19}$\,erg/s/Hz, and assuming the \citet{gudel93} scaling relation between radio and X-ray luminosity yields a threshold luminosity of $L_X \sim 10^{34}$\,erg/s. This threshold lies at the top end of the super-flare X-ray luminosity sample analysed by \citet{Getman2021}. The largest radio flares, like JW\,566, may access energies that are beyond the saturation limit of X-ray emission; \item The JCMT Transient Survey has monitored about 1200 YSOs for about 40 epochs each, equivalent to 3 years of 850~\micron\ monitoring of a random YSO, with only a single burst event detected. 
The 100 YSOs in the core of Ophiuchus monitored over 34 epochs correspond to 70 days of observation of a single YSO, with no bursts detected. For the Ophiuchus sample, the upper limit on radio luminosity from flares, given our lack of detection, is $L_R \sim 10^{18}$\,erg/s/Hz and the assumed conversion to X-ray luminosity yields $L_X \sim 10^{33}$\,erg/s; \item The powerful radio flare from JW\,566 \citep{mairs19}, a binary T Tauri star in OMC\,2/3, remains a unique and rare event, perhaps related to binarity; \item We have identified one variable quasar in approximately 1.6~sq.~deg. of monitoring. The quasar, Source\,2864 from \citet{megeath2012}, visually coincident with NGC\,2023, is most likely a background blazar, the first extragalactic submm variable source detected by the JCMT Transient Survey. Consideration of variability strength as a function of frequency across the electromagnetic spectrum may allow for better classification of sources which overlap in the mid-IR color-color diagram, such as YSOs and extreme extragalactic objects. \end{itemize} \section*{Acknowledgements} {The authors thank the referee for comments that improved this paper.} D.J.\ and G.J.H.\ appreciate a discussion with Marina Kounkel on the VLBI parallax non-detection of Source\,2864. The authors wish to recognise and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. 
The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan; Academia Sinica Institute of Astronomy and Astrophysics; the Korea Astronomy and Space Science Institute; the Operation, Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments, budgeted from the Ministry of Finance (MOF) of China and administrated by the Chinese Academy of Sciences (CAS), as well as the National Key R\&D Program of China (No. 2017YFA0402700). Additional funding support is provided by the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada. Additional funds for the construction of SCUBA-2 were provided by the Canada Foundation for Innovation. The James Clerk Maxwell Telescope has historically been operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the National Research Council (NRC) of Canada and the Netherlands Organisation for Scientific Research. The JCMT Transient Survey project codes are M16AL001 and M20AL007. This research used the facilities of the Canadian Astronomy Data Centre operated by NRC Canada with the support of the Canadian Space Agency. D.J.\ is supported by NRC Canada and by an NSERC Discovery Grant. G.J.H.\ is supported by general grant 12173003 awarded by the National Science Foundation of China. H.S.\ is supported by Institute of Astronomy and Astrophysics, Academia Sinica, and by grants from the Ministry of Science and Technology (MoST) in Taiwan through 109-2112-M-001-028- and 110-2112-M-001-019-. \bibliographystyle{aasjournal} \bibliography{TrueTransients} \input{table1} \input{table2}
Title: A Method of Improving Standard Stellar Luminosities with Multiband Standard Bolometric Corrections
Abstract: Standard luminosity ($L$) of 406 main-sequence stars with the most accurate astrophysical parameters are predicted from their absolute magnitudes and bolometric corrections at Johnson $B,V$, and Gaia EDR3 $G$, $G_{BP}$, $G_{RP}$ filters. Required multiband $BC$ and $BC-T_{eff}$ relations are obtained first from the parameters of 209 DDEB (Double-lined Detached Eclipsing Binaries) with main-sequence components and Gaia EDR3 parallaxes. A simplified SED is formulated to give filter dependent component light contributions and interstellar dimming, which are essential in computing $BC$ of a component virtually at any filter. The mean standard $L$ of a star is calculated from the mean $M_{Bol}$ which is a mathematical average of independent $M_{Bol}$ values predicted at different filters, while the uncertainty of $L$ is the uncertainty propagated from the uncertainty of the mean $M_{Bol}$. The mean standard $L$ of the sample stars are compared to the corresponding $L$ values according to the Stefan-Boltzmann law. A very high correlation ($R^2>0.999$) is found. Comparing histogram distributions of errors shows that uncertainties associated with the mean standard $L$ (peak at $\sim2.5$ per cent) are much smaller than the uncertainties of $L$ (peak at $\sim8$ per cent) by the Stefan-Boltzmann law. Increasing the number of filters used in predicting the mean $M_{Bol}$ increases the accuracy of the standard stellar luminosity. Extinction law, color-color relations and color excess - color excess relations for Gaia passbands are demonstrated for main-sequence stars for the first time.
https://export.arxiv.org/pdf/2208.04110
\begin{Titlepage} \Title{A Method of Improving Standard Stellar Luminosities with Multiband Standard Bolometric Corrections} \Author{Bak{\i}\c{s}, V. and Eker, Z.}{Department of Space Sciences and Technologies, Faculty of Sciences, Akdeniz University, Antalya, TR.\\ e-mail:volkanbakis@akdeniz.edu.tr} \Received{April, 2022} \end{Titlepage} \Abstract{Standard luminosity ($L$) of 406 main-sequence stars with the most accurate astrophysical parameters are predicted from their absolute magnitudes and bolometric corrections at Johnson $B,V$, and Gaia EDR3 $G$, $G_{BP}$, $G_{RP}$ filters. Required multiband $BC$ and $BC-T_{eff}$ relations are obtained first from the parameters of 209 DDEB (Double-lined Detached Eclipsing Binaries) with main-sequence components and Gaia EDR3 parallaxes. A simplified SED is formulated to give filter dependent component light contributions and interstellar dimming, which are essential in computing $BC$ of a component virtually at any filter. The mean standard $L$ of a star is calculated from the mean $M_{Bol}$ which is a mathematical average of independent $M_{Bol}$ values predicted at different filters, while the uncertainty of $L$ is the uncertainty propagated from the uncertainty of the mean $M_{Bol}$. The mean standard $L$ of the sample stars are compared to the corresponding $L$ values according to the Stefan-Boltzmann law. A very high correlation ($R^2>0.999$) is found. Comparing histogram distributions of errors shows that uncertainties associated with the mean standard $L$ (peak at $\sim2.5$ per cent) are much smaller than the uncertainties of $L$ (peak at $\sim8$ per cent) by the Stefan-Boltzmann law. Increasing the number of filters used in predicting the mean $M_{Bol}$ increases the accuracy of the standard stellar luminosity. 
Extinction law, color-color relations and color excess - color excess relations for Gaia passbands are demonstrated for main-sequence stars for the first time.}{stars: fundamental parameters, (stars:) binaries: eclipsing, Sun: general} \section{Introduction} \label{sec:intro} The unobserved, missing fraction of a stellar luminosity ($L$) in a photometric observation is called the bolometric correction ($BC$) when it is expressed in magnitude units. As if fixing a defect or restoring a missing part, adding this fraction ($BC$) to the apparent ($V$) or absolute ($M_V$) magnitude of a star, which is naturally limited to a certain wavelength range, one obtains the apparent ($m_{Bol}$) or absolute ($M_{Bol}$) bolometric magnitude representing the total $L$ of the star. Although $BC$ is a useful, ingeniously invented, ready-to-use quantity for obtaining these magnitudes from an apparent magnitude ($V$) and a parallax, the major difficulty confronting earlier astronomers (Kuiper 1938; McDonald \& Underhill 1952; Popper 1959; Wildey 1963; Smak 1966; Johnson 1966; Weidemann \& Bues 1967; Heintze 1973) was that the pre-required $BC$ must first be determined from observations, but there is neither a telescope nor a detector to measure a bolometric magnitude, since the total $L$ is not directly observable. Therefore, the kind of relation between an apparent magnitude (e.g. $V$) and the observable part of the luminosity (e.g. $L_V$), recognised in all of the photometric passbands with pre-established filter transmissions operated in the Vega system of magnitudes, where Vega is a common reference, cannot be established between a bolometric magnitude and $L$ using direct observations. This difficulty, however, was overcome by assuming arbitrary zero points for both the bolometric magnitude and $BC$ scales. 
The arbitrariness attributed to the $BC$ and bolometric magnitude scales then led to the publication of many different $BC$ tables; some containing all negative $BC$ values (Kuiper 1938; Popper 1959; Wildey 1963; Cox 2000; Pecaut \& Mamajek 2013), as if intentionally opposing the rest, which contain a limited number of positive $BC$ values (Code et al. 1976; Johnson 1964, 1966; Flower 1977, 1996; Bessell et al. 1998; Sung et al. 2013; Casagrande \& VandenBerg 2018; Eker et al. 2020). Incorrect usage of tabulated $BC$ values was discussed by Torres (2010). The biggest of the problems, however, is that a star could be found to have several $BC$ values from several tables, implying several different $M_{Bol}$ representing a single $L$. The problems of arbitrariness attributed to the $BC$ scale were studied recently by Eker et al. (2021a), who introduced the concept of standard $BC$. The standardisation was necessary to avoid problems caused by the arbitrariness of the $BC$ scale and to unify the $BC$ and $M_{Bol}$ values, which is the easiest way of assuring a consistent $L$ of a star if it is predicted from astrometric (parallax) and photometric observations. The accuracy of the classical methods of computing a stellar luminosity (1- a direct method from radii ($R$) and effective temperatures ($T_{eff}$); 2- a method using a mass-luminosity relation ($MLR$); 3- a method requiring a bolometric correction) was later studied by Eker et al. (2021b), who introduced the concept of standard stellar $L$. If $L$ of a star is calculated from one of its absolute magnitudes ($M_\xi$, where $\xi$ indicates a filter in a photometric system) and the corresponding standard $BC$, it is called standard $L$, while $L$ according to the Stefan-Boltzmann law is standard by definition. Methods (2) and (3) are indirect because a pre-determined $MLR$ is required for method (2) and a pre-determined $BC-T_{eff}$ relation is necessary for method (3). 
In the absence of these pre-determined relations, neither is operable. Eker et al. (2021b) claimed the indirect methods are less accurate than the direct method, which provides a stellar $L$ with a typical accuracy of 8.2 - 12.2 per cent and which could be as good as a few per cent; e.g. the primary of V505 Per has $L= 2.688 L_\odot$ and its uncertainty ($\Delta L/L$) is 2.53 per cent, implied by the very small relative uncertainties of its radius, $\Delta R / R = 1.09$ per cent, and effective temperature, $\Delta T_{eff} / T_{eff}= 0.32$ per cent (Tomasella et al. 2008). Only if a unique $BC$ directly determined from an observed SED with very high spectral resolution is used in the third method could the relative uncertainty of the predicted $L$ be improved to the one per cent level or better. Otherwise, using a standard $BC$ predicted from the standard $BC-T_{eff}$ relation, method 3 cannot provide an accuracy better than the direct method. However, using a unique $BC$ in method 3 is speculative, that is, impractical nowadays, as expressed by Eker et al. (2021b). Therefore, the primary aim of this study is not to speculate, but to investigate how to improve the standard stellar luminosities obtained by the third method realistically using multiband standard $BC-T_{eff}$ relations. To achieve this aim, a new method is introduced for estimating the relative light contributions of binary components from a simplified SED operable virtually at any photometric passband. Then, the classical method of Eker et al. (2020), which requires an apparent magnitude of a binary system, the light ratio of the components, a reliable parallax and an interstellar extinction, is used for predicting the multiband $BC-T_{eff}$ relations for the Gaia $G$, $G_{BP}$ and $G_{RP}$, and Johnson $B$, $V$ passbands. Data and input parameters are described in \S2. The new method is explained in \S3. Calibrations of multiband $BC-T_{eff}$ relations are described in \S4. 
How to improve the standard $L$ of a star is discussed in \S5. Discussions are found in \S6. Conclusions are in \S7. \section{Data} \label{sec:data} Having essentially the same purpose, to obtain the most reliable empirical $BC$ from the most reliable stellar parameters first and then to calibrate the most reliable $BC$-$T_{eff}$ relations, this study and Eker et al. (2020) rely upon the same original data set of DDEB (Double-Lined Detached Eclipsing Binaries) published by Eker et al. (2018). The 509 main-sequence stars with the most reliable masses ($M$) and radii ($R$), accurate within 15 per cent, and with published effective temperatures ($T_{eff}$), as the components of DDEB having metallicities 0.008 $\leq$ Z $\leq$ 0.040 in the solar neighbourhood of the Galactic disk, were originally used by Eker et al. (2018) for calibrating interrelated mass-luminosity (MLR), mass-radius (MRR) and mass-effective temperature (MTR) relations. Later, Eker et al. (2020) combined this data set with the data set of Graczyk et al. (2019), who studied the global zero-point shift between the photometric fluxes of 81 detached eclipsing binaries and Gaia DR2 trigonometric parallaxes (Gaia Collaboration et al. 2018), in order to increase the number of available systems with component light ratios in the Johnson $B$ and $V$-bands, essential for computing the apparent magnitude of a component from the total brightness of a system; a step required before computing the $BC$ of a component. This combined data set of Eker et al. (2020) contained 290 DDEB having at least one component on the main sequence. Aiming to calibrate a main-sequence $BC_V$ - $T_{eff}$ relation, Eker et al. (2020) could find only 400 main-sequence stars (194 binaries, 8 primaries, and 4 secondaries, that is, 206 systems) from the available 580 component stars (290 DDEB). 
290 minus 206, that is, 84 systems were lost because of a missing systemic apparent brightness ($V_{tot}$), a missing light ratio ($l_2$/$l_1$) in the $V$ band, a missing parallax, or a missing interstellar dimming ($A_V$). If any one of those parameters is missing for a system, the $BC$ of its components cannot be calculated; that is, the system is not eligible for calculating $BC$. A percentage of a third light is also needed if it is detected in the light curve of a system. The 206 DDEB comprise 412 components. After subtracting the non-main-sequence stars, the number of stars with a computed $BC_V$ reduced to 400, from which a main-sequence $BC_V$ - $T_{eff}$ relation was calibrated by Eker et al. (2020). The original data set containing 290 systems investigated by Eker et al. (2020) is considered for this study for two purposes. First of all, we wanted to test whether or not the new method used in this study provides reliable component light contributions. For this, the 206 systems with $V$ and $B$-passband light ratios published by Eker et al. (2020) are ideal for testing by comparing to the light ratios found in this study. Nevertheless, the new method also has its own limitations, to be discussed in the next section. That is, the 206 systems will naturally be reduced further. To compensate for such a further loss and to be able to apply the method to systems even without standard $V$ light curve solutions, the larger data set (290 systems) was chosen as the main sample for this study. Here in this study, nearly the same number (406) of main-sequence stars are found as the components of 209 DDEB (197 binaries, 9 primaries and 3 secondaries) eligible for calculating their $BC$ values and then for continuing to calibrate $BC$-$T_{eff}$ relations for the Gaia photometric passbands $G$, $G_{BP}$ and $G_{RP}$. 
Among this sub-sample, 312 main-sequence stars (152 binaries, 5 primaries and 3 secondaries) form a smaller sub-sample containing the stars common between the present study and Eker et al. (2020), which are sufficient for testing and verifying the validity of the component contributions predicted by the new method involving the SED. The basic physical parameters of the components and the total brightness in the $B$, $V$ and Gaia passbands of the DDEBs used in this study are listed in Table~1, where the columns are self-explanatory: the order, name of the system, and component (primary or secondary) are in the first three columns. Note that the primaries and the secondaries identified as non-main-sequence stars are indicated by p* or s*. From the fourth to seventh columns, masses and radii with their errors are given. In columns 9 and 10, temperatures and their errors are given. References for the physical parameters are in columns 8 and 11. The total brightness of the systems in the $B$ and $V$-bands with their errors are in columns 12-15, with their references in the 16th column. In the rest of the columns (17-22), the total brightness of the systems in the Gaia passbands, as available in Gaia EDR3, is given. In Table~2, in addition to the first three columns, which are the same as in Table~1, the fourth and fifth columns give cross references, where Xref(1) (column 4) indicates the order number in Eker et al. (2018) and Xref(2) (column 5) indicates the order number in Eker et al. (2020). Every system in this study has at least one Xref number in columns 4 and 5. Only five binaries from the list of Graczyk et al. (2019) do not have an Xref in column 4. Systems without Xref numbers in column 5 are the ones excluded by Eker et al. (2020), either because of a missing $V$-band light ratio or a missing interstellar extinction, etc. In this study, we preferred to display component contributions (columns 6 and 7) rather than the light ratios which were preferred by Eker et al. (2020). 
Thus, the sum of the contributions of the primary and the secondary is one. The advantage here is that the component contributions of all systems in Table\,2 are predictable by the new method. There are two possibilities: one from the reddened SED and the other from the unreddened SED. The unreddened light contributions are listed in Table\,2 (columns 8, 9, 10, 11, 12) for the $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$ passbands, respectively. Since the reddened component contributions are the same to four digits after the decimal point, to save space, they are not shown in Table\,2. Finally, component apparent magnitudes in accord with the contributions are given under columns 13, 15, 17, 19 and 21 for the same bands in the same order. Rather than propagating observational random errors, Eker et al. (2020) had no option but to assume the same uncertainty for the component magnitudes as for the systemic brightness, because the light ratio ($L_2$/$L_1$) of components predicted from light curve solutions is usually published without an uncertainty. The same steps are followed in this study; that is, the component magnitudes were assumed to have the same uncertainty as the systemic brightness. One may notice that many systems in Table~2 do not have $B$-band apparent magnitudes (column 13), in contrast to the apparent magnitudes at Gaia passbands, which are almost complete, excluding only $\beta$ Aur and $\alpha$ CrB (both are too bright for reliable measurement), while ten systems are missing in the $V$-band. Those are the systems for which even the new method does not help to compute $BC$ values because of missing total brightness. Including VV Pyx, there are eleven more systems (marked in Table~3) which do not produce a reliable SED model with EDR3 parallaxes. For them, DR2 parallaxes produce a better SED model. VV Pyx, $\beta$ Aur and $\alpha$ CrB are three systems for which we found better SED models with Hipparcos parallaxes. 
Before going to the next section, where the new method is described, we must certify that the 406 stars considered in this study are all in the main-sequence stage of evolution, which is very obvious in Fig.~1. The three primaries and nine secondaries identified as non-main-sequence stars appear located outside of the main-sequence limits, which are shown by continuous lines, according to the ZAMS (Zero Age Main Sequence) and TAMS (Terminal Age Main Sequence) limits for the solar abundance $Z = 0.014$ from PARSEC evolutionary models (Bressan et al. 2012). The metal abundance distribution of DDEB in the solar neighbourhood within the disk of the Milky Way was already discussed by Eker et al. (2018), where the peak at the solar metallicity (Z=0.014-0.017) together with the lower (Z=0.008) and the upper (Z=0.040) limits were indicated. To save space and to be clearer in identifying non-main-sequence stars in Fig.~1, the ZAMS and TAMS curves for Z=0.008 and Z=0.040 and the discussions therein are not repeated in this study. However, because the present sub-sample is mainly selected from the sample of Eker et al. (2018), the metal abundances of the main-sequence stars used in this study are also expected to have a similar metallicity distribution within the limits 0.008 $\leq$ Z $\leq$ 0.040. \section{A New method involving SED to obtain light ratio of components} When computing $BC$ of stars, unlike previous studies (Code et al. 1976; Cayrel et al. 1997; Girardi et al. 2008; Andrae et al. 2018; Chen et al. 2019) utilizing the well-known relation directly with $S_\lambda(V)$ (sensitivity function of the $V$ magnitude system), $f_\lambda$ (monochromatic flux from a star) and $C_2$ (arbitrary constant of integration), \begin{equation} BC_V=2.5log\frac{f_V}{f_{Bol}}+C_2=2.5log\frac{\int_{0}^{\infty} S_\lambda(V)f_\lambda d\lambda}{\int_{0}^{\infty} f_\lambda d\lambda}+C_2, \label{eq:bc} \end{equation} here, we introduce a new method operating indirectly. 
This new method deserves to be called indirect because the $V$ filtered radiation \begin{equation} f_V = \int_{0}^{\infty} S_\lambda(V) f_\lambda d\lambda, \label{eq:Vflux} \end{equation} is calculated using the SED model with simplifications only for predicting the relative light contributions (to be explained later) of component stars in binaries, while $f_{Bol}$ stands for the bolometric flux of a component reaching the telescope if there is no extinction. Only afterwards, once the apparent magnitudes of the components have been calculated from the apparent magnitude of the system using the light ratio of the components, and the absolute magnitudes have been calculated from the apparent magnitudes corrected for interstellar extinction, is the $BC$ of each component calculated as \begin{equation} BC_V = M_{Bol} - M_V \end{equation} The first simplification is that the spectrum of a component ($f_\lambda$) is approximated by a Planck function, rather than a model atmosphere characterized by a $T_{eff}$, log $g$, micro-turbulent velocity ($\zeta$) and metal abundance $[Fe/H]$. Being independent of model atmosphere parameters, the new method is easier to use and more suitable for obtaining empirical $BC$s, e.g. for main-sequence stars, rather than series of $BC$ tables specified with a log $g$, $\zeta$ or $[Fe/H]$ for various passbands of different photometric systems. Assuming no interstellar extinction in the first approximation, a spherical star with a radius $R$ produces a flux continuum (SED) at a distance $d$ from its centre. The monochromatic flux could be expressed as: \begin{equation} f_\lambda = \frac{R^2}{d^2} \pi B_\lambda(T_{eff}). \label{eq:flambda} \end{equation} Then, the second simplification becomes clear; the spectral lines and prominent spectral features are ignored since the Planck function represents the star's continuum. The equation implies that limb darkening is also ignored within the solid angle $\pi{R^2}/{d^2}$ where the isotropic intensity is $B_\lambda(T_{eff})$. 
Eq.(4) would be adapted to a detached binary with spherical components as: \begin{equation} f_\lambda (system)= \frac{\pi}{d^2} \left[R_1^2 B_\lambda(T_1)+R_2^2 B_\lambda(T_2)\right], \label{eq:systemflux} \end{equation} where $R_1$ \& $R_2$ and $T_1$ \& $T_2$ are the radii and effective temperatures of the primary and secondary, respectively. If $d$ is also the distance from the Earth, then $f_\lambda$ in both equations would represent an unreddened SED in units of W\,m$^{-2}$\,\AA$^{-1}$. The unreddened SED of V618~Per, which is one of the systems in Table~1, is shown by a dashed curve starting from the upper left and ending at the lower right in Fig.~2. The observed spectrophotometric flux data of V618~Per are taken from the SIMBAD database (Wenger et al. 2000). The observed flux data do not appear to fit the unreddened SED, especially toward shorter wavelengths. The deviation from the unreddened SED is expected because of interstellar extinction. For modelling the observed SED data, our unreddened SED model is reddened by adjusting $E(B-V)$ of the system until a best-fitting reddened SED is obtained using the reddening model of Fitzpatrick (1999). The parameter $R(V)$ is adopted as $R(V)= 3.1$. Since we have calculated the parameter $R(\lambda) = A_\lambda/E(B-V)$ for each filter individually, the initially selected value of $R(V)$ did not affect our analysis. We will discuss this issue later. The $\chi^2$ fitting of the reddened SED is displayed by a dotted curve just below the unreddened SED in Fig.~2. Assuming no interstellar extinction, that is, the previously computed unreddened SED (Eq.(5)), one may compute the unreddened visual flux $f_V$ of a component, if a reliable trigonometric parallax (or distance) is available, using $f_\lambda$ from Eq.(4) to indicate the flux contribution of the component in the $V$ filtered radiation (SED) reaching above the Earth's atmosphere. 
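The reddening step can be illustrated with a toy sketch. Here the power-law extinction curve and the Gaussian SED shape are stand-ins chosen for brevity; the actual analysis uses the Fitzpatrick (1999) model, which has no such simple closed form:

```python
import numpy as np

def toy_extinction(wl_nm, ebv, r_v=3.1):
    """Toy A_lambda in magnitudes: a 1/lambda power law scaled so that
    A_V = R_V * E(B-V) at 550 nm. A stand-in for Fitzpatrick (1999)."""
    return r_v * ebv * (550.0 / wl_nm)

wl = np.linspace(300.0, 900.0, 601)           # wavelength grid [nm]
sed = np.exp(-((wl - 500.0) / 200.0) ** 2)    # arbitrary unreddened SED shape
a_lam = toy_extinction(wl, ebv=0.3)
sed_red = sed * 10.0 ** (-0.4 * a_lam)        # reddened SED

# band-integrated dimming in a crude V window, in magnitudes
v = (wl > 480.0) & (wl < 620.0)
a_v_band = 2.5 * np.log10(sed[v].sum() / sed_red[v].sum())
```

The band-integrated dimming comes out close to $R(V)\,E(B-V) \approx 0.93$ mag, and in the paper's scheme this ratio of unreddened to reddened band fluxes is exactly what supplies $A_\xi$ for each passband.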
However, by replacing the unreddened $f_\lambda$ with the reddened $f_\lambda$, which is obtained by $\chi^2$ fitting, one may compute the $V$-band flux contribution of the very same component in the reddened SED. We have systematically calculated both reddened and unreddened component contributions at the $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$ passbands using the filter profiles from Bessell (1990) and Evans et al. (2018) for the Johnson $B$, $V$, and Gaia passbands, respectively. For the sake of clarity, only the $V$-band contributions of the primary and the secondary of V618~Per corresponding to the reddened SED are shown in Fig.~2 as vertical profiles on the left, where the solid and dot-dashed lines are for the primary's and secondary's contributions, respectively. Unreddened contributions of the primary and secondary are not shown for clarity. Notice that, having a bigger radius and a hotter effective temperature, the primary's contribution is larger than the secondary's. It is not the absolute but the relative contributions of the components that are given in Table\,2 (columns 6-12). The relative contribution of a primary is computed as: \begin{equation} Primary's\,cont. = \frac{f_\xi(pri)}{f_\xi(pri)+f_\xi(sec)}, \label{eq:pri_contrib} \end{equation} where $\xi$ represents one of the passbands $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$. Then, the secondary's relative contribution is just one minus the primary's contribution. 
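The chain from Eq.(4) to Eq.(6) can be sketched numerically as follows. This is a minimal illustration: the component parameters and the top-hat stand-in for the $V$ response are hypothetical, not the V618~Per values or the actual Bessell and Gaia filter profiles used in the paper:

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants
RSUN, PC = 6.957e8, 3.0857e16                          # metres

def planck_lambda(wl, teff):
    """Planck function B_lambda(T) in W m^-3 sr^-1."""
    return 2.0 * H * C**2 / wl**5 / np.expm1(H * C / (wl * KB * teff))

def filtered_flux(wl, s_xi, radius, dist, teff):
    """In-band flux of one spherical component, Eqs.(2) and (4):
    trapezoidal integral of S_lambda(xi) * (R/d)^2 * pi * B_lambda(T)."""
    integrand = s_xi * (radius / dist) ** 2 * np.pi * planck_lambda(wl, teff)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wl))

wl = np.linspace(480e-9, 620e-9, 400)  # crude V-band window [m]
s_v = np.ones_like(wl)                 # top-hat filter stand-in
f1 = filtered_flux(wl, s_v, 2.3 * RSUN, 400.0 * PC, 11000.0)  # primary
f2 = filtered_flux(wl, s_v, 1.6 * RSUN, 400.0 * PC, 8000.0)   # secondary
primary_contrib = f1 / (f1 + f2)       # Eq.(6); secondary = 1 - this
```

With the bigger and hotter star as the primary, its relative contribution exceeds one half, mirroring the behaviour described for V618~Per in Fig.~2.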
\subsection{Multiband standard $BC_\xi$ by the new method} The new method provides not only the component contributions but also the amount of dimming due to interstellar extinction in magnitude scale ($A_\xi$), which is needed when computing the absolute magnitude ($M_\xi$) of a component from its apparent magnitude $\xi$, \begin{equation} M_\xi = \xi + 5log\varpi +5 - A_\xi, \label{eq:abs_mag} \end{equation} where $\varpi$ and $A_\xi$ are the trigonometric parallax in arcseconds and the interstellar extinction in magnitudes for the passband $\xi$, respectively. It would be clear according to Fig.~2 that \begin{equation} A_\xi = 2.5log\frac{\int_{0}^{\infty} S_\lambda(\xi)f_\lambda^0(system) d\lambda}{\int_{0}^{\infty} S_\lambda(\xi)f_\lambda(system) d\lambda}, \label{eq:A_dimming} \end{equation} where $f_\lambda^0$(system) is the unreddened SED of the system. The profile after convolution of $f_\lambda^0$(system) with the $G$ filter is shown as the dashed vertical profile on the right in Fig.~2. $f_\lambda$(system) is the reddened SED of the system. The profile after convolution of $f_\lambda$(system) with the $G$ filter is shown as the dotted vertical profile in Fig.~2. The dimming ($A_\xi$) in each of the photometric bands $B$, $V$, $G$, $G_{BP}$, and $G_{RP}$ is calculated according to Eq.(8) and listed in Table\,3 together with the other parameters needed for computing the absolute magnitudes of the components. The first three columns are the same as in Table\,2 (order, name, and p or s). Parallax and the relative error of parallax are in the 4$^{th}$ and 5$^{th}$ columns. $L$ of the components according to the Stefan-Boltzmann law in Solar and SI units and its associated relative error, propagated from the uncertainties of radius and $T_{eff}$, are in the 6$^{th}$, 7$^{th}$, and 8$^{th}$ columns. 
The bolometric absolute magnitudes (column 9) are computed directly from $L$ using the relation suggested by the IAU General Assembly Resolution B2, hereafter IAU 2015 GAR B2, \begin{equation} M_{bol} = -2.5\log L+C_{bol}, \label{eq:M_bol} \end{equation} where $C_{bol}=71.197\,425...$ if $L$ is in SI units and $C_{bol}= 88.697\,425...$ if $L$ is in cgs units (IAU 2015 GAR B2; Eker et al. 2021a). The interstellar extinctions (dimming) in the $V$, $B$, $G$, $G_{BP}$, and $G_{RP}$ passbands are given in the 11th, 12th, 13th, 14th, and 15th columns. The rest of the columns of Table\,3 are reserved for the absolute magnitudes of the components in the $V$, $B$, $G$, $G_{BP}$, and $G_{RP}$ bands and their associated errors. Eventually, the multiband standard $BC$ values of the component stars according to the basic definition $BC_{\xi} = M_{bol} - M_\xi$ (the multiband form of Eq.(3)) are listed in Table\,4, where the columns are self-explanatory: order, name, p or s, and then $BC_B$, $BC_V$, $BC_G$, $BC_{G_{BP}}$ and $BC_{G_{RP}}$, each with its error. \subsection{Testing the new method} Even if a zero-point error is absent, besides the errors propagated from random observational uncertainties, consequence errors would be appended to a $BC$ calculated directly from Eq.(1) with a simplified SED. Consequence errors are defined here as the errors in a $BC$ predicted according to Eq.(1) when the SED of the component is not its observed spectrum of sufficient resolution but a spectrum represented by a Planck function at the $T_{eff}$ of the component. Consequence errors are expected because the simplification introduced by a Planck function loses some prominent spectral features, which affects the computed $BC$. The existence of either a consequence or a zero-point error is sufficient to make a calculated $BC$ non-standard. Zero-point errors are avoided if one uses Eqs.
(3) and (9), as claimed by IAU 2015 GAR B2 and Eker et al. (2021a), when computing the $BC$ of a component directly from $BC_V = M_{Bol} - M_V$. For the consequence errors, however, we claim the following: unlike Eq.(1) with a simplified SED, which excludes certain prominent spectral features together with the spectral lines, the method of this study, which uses Eq.(3) rather than Eq.(1), does not exclude any spectral features or lines despite using a simplified SED. This is because the total effect of all prominent features and lines on a spectrum is automatically included through $M_V$ in Eq.(3); the simplified SED is needed only indirectly, for estimating the relative light contributions of the binary components, from which the $M_{Bol}$ and $M_V$ of the components are predicted. Nevertheless, a test is necessary to verify that the simplified SED provides reliable light contributions of the components. Fig.~3 compares the fractional component contributions predicted in this study to the (limited number of) fractional light contributions in the $B$ and $V$ passbands from the eclipsing light curves collected by Eker et al. (2020) for producing the $BC_V$ values and the $BC_V$-$T_{eff}$ relation obtained from DDEB. The one-to-one correlation of almost all the data is very clear. Component contributions from both the reddened and the unreddened SEDs of this study almost perfectly confirm the component contributions (columns 6-12, Table~2) obtained from the eclipsing binary light curves. Fig.~3 confirms that even if the light curve of an eclipsing binary system is highly reddened by interstellar extinction, the light ratios of the components predicted from the light curve solutions are the same as the light contributions predicted by the new method using the reddened and unreddened SEDs of the system.
\section{Calibrations of multiband $BC$ - Temperature Relations} Once the $BC$s according to Eq.(3) are available (Table\,4), it is straightforward to calibrate $BC$-$T_{eff}$ relations using the $T_{eff}$ of the component stars. The least-squares method is used to obtain the best-fitting curve of a calibrated $BC$-$T_{eff}$ relation. Fig.~4 shows the empirical standard $BC$ values computed in this study and the best-fitting fourth-degree polynomials, together with the $1 \sigma$ deviations below each panel, for $G$, $G_{BP}$ and $G_{RP}$. Empirical standard $BC$-$T_{eff}$ relations for the $V$ and $B$ passbands of Johnson photometry are also produced, for comparison with the $V$-band $BC$-$T_{eff}$ relation of Eker et al. (2020). Fig.~5 shows the $BC$-$T_{eff}$ curves of the $B$ and $V$ bands. Comparing the component light contributions produced in this study with the ones from the light curve solutions, as in Fig.~3, is a preliminary test of whether the new method produces reliable component contributions. Obtaining consistent $BC$-$T_{eff}$ relations in all the passbands $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$ is a second test of the new method, which is again successful. The successfully produced $BC$-$T_{eff}$ relations confirm not only the validity of the component light contributions but also the reliability of the dimming ($A_\xi$) provided by the new method. The coefficients and uncertainties of the $BC$-$T_{eff}$ functions from the least-squares method are listed in Table\,5, where the columns are for the photometric bands used in this study, including a comparison column (column 4) for Eker et al. (2020). The rows are for the coefficients of the fitting polynomial, with the associated errors indicated by $\pm$ just below the value of each coefficient. The lower part of the table compares the standard deviation (RMS), the correlation ($R^2$), and the standard $BC$ of a main-sequence star with $T_{eff}$ = 5772 K (a solar twin).
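The fitting step can be sketched as a pure-Python least-squares fit of a fourth-degree polynomial via the normal equations. The data below are synthetic, generated from a known quartic so that the fit can be checked exactly; they are not the Table\,4 $BC$ values.

```python
# Least-squares fit of a degree-4 polynomial, as used for the
# BC-Teff calibration: build the normal equations and solve them
# by Gaussian elimination with partial pivoting.

def polyfit(xs, ys, deg):
    """Return coefficients c[0..deg] of the least-squares polynomial
    c[0] + c[1]*x + ... + c[deg]*x**deg."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                          # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    out = [0.0] * n
    for r in range(n - 1, -1, -1):                # back substitution
        s = sum(A[r][c] * out[c] for c in range(r + 1, n))
        out[r] = (b[r] - s) / A[r][r]
    return out

# Synthetic, noise-free data from a known quartic (made-up coefficients):
true = [-0.6, 0.3, -0.05, 0.004, -0.0001]
xs = [0.1 * i for i in range(15)]                 # e.g. log Teff - 3.5
ys = [sum(c * x ** k for k, c in enumerate(true)) for x in xs]
fit = polyfit(xs, ys, 4)                          # recovers `true` closely
```

Shifting the abscissa (here, log $T_{eff}$ minus a constant) keeps the normal-equation matrix well conditioned; with real, noisy $BC$ data the same routine returns the best-fitting coefficients and the residuals give the quoted RMS.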
The maximum $BC$ and the corresponding effective temperatures are given below the absolute and apparent magnitudes. The lowest part of the table indicates the range of positive $BC$ values, if they exist. $T_1$ and $T_2$ are the two temperatures which make $BC = 0$. The $BC$ values between $T_1$ and $T_2$ are all positive; outside this range they are negative. If $T_1$ and $T_2$ are not given, then the $BC$ values are all negative or all positive. If only $T_2$ is given, then $BC$ is positive for $T_{eff} < T_2$, $BC = 0$ if $T_{eff} = T_2$, and $BC$ is negative otherwise. Table\,5 indicates that the $BC$-$T_{eff}$ curve of this study has a smaller $RMS$ and a higher correlation ($R^2$ value) than those obtained by Eker et al. (2020). Fig.\,6a compares the $BC_V$-$T_{eff}$ curve of this study to the $BC_V$-$T_{eff}$ curves of Flower (1996), Eker et al. (2020), Mamajek (2021) and Cox (2000). The $BC$s of Cox (2000) are all overestimated (more negative) compared to the other $BC$s displayed in the figure, except for Flower's (1996) towards the coolest part of the temperature scale. The $BC$ values of Mamajek (2021) appear overestimated (more negative) compared to the $BC$ values of Eker et al. (2020) for stars hotter than 10000 K, and also for a very small range of the coolest stars, but the rest appear the same. The $BC$ values of Flower (1996) deviate from the curves of Eker et al. (2020) and Mamajek (2021), having larger absolute values towards the coolest temperatures. Nevertheless, except for a limited temperature range near 10000 K, all other $BC$ values appear overestimated when compared to the standard $BC$s of this study. Several tables exist which provide $BC$ in different photometric passbands, including Gaia photometry (Martins and Plez 2006; Jordi et al. 2010; Pedersen et al. 2020), as a function of the atmospheric parameters $T_{eff}$, log $g$, $\zeta$ or $[Fe/H]$, and they are commonly used to derive isochrones in different colours; Girardi et al. (2002) is one example.
However, only Andrae et al. (2018) combined the $BC_G$ of various atmospheric parameters (log $g$, $\zeta$, metallicity) and produced a single $BC_G$ - $T_{eff}$ relation for main-sequence stars. Fig.\,6b compares the empirical $BC$-$T_{eff}$ curve in the $G$ passband of this study to the $G$-band $BC$-$T_{eff}$ curve of Andrae et al. (2018), which appears to overestimate the $G$-band $BC$ for stars cooler than 6500 K. The $BC$s of Andrae et al. (2018) cover the temperature range $3300 - 8000$ K. Other $BC$-$T_{eff}$ relations representing a specific $\zeta$ or $[Fe/H]$, predicted from model atmospheres, are not suitable for comparison with the empirical relations of this study. Empirical $BC$-$T_{eff}$ relations are not fundamental relations like, e.g., the Stefan-Boltzmann law. They are rather statistical relations, like the classical $MLR$ (Eker et al. 2018, 2021b), and can be used only under the correct conditions set statistically. Because of stellar evolution (Clayton 1968), there can be many stars with the same $M$ but various $L$, due to different ages, different chemical compositions and internal mixing. Therefore, in reality, there is no unique luminosity ($L$) for a typical main-sequence star of a given mass ($M$). With a large uncertainty covering the $L$ values of all main-sequence stars, the classical $MLR$s may provide only a mean $L$ for a typical main-sequence star of a given $M$. Similarly, the $BC$-$T_{eff}$ relations may provide only a mean or typical $BC$ for a typical main-sequence star of a given typical $T_{eff}$. Therefore, it is of astrophysical interest to have a table indicating the typical $T_{eff}$ and typical $BC$ of main-sequence stars. Table\,6 is an extension of the original table given by Eker et al. (2018) and Eker et al. (2020), where typical fundamental astrophysical parameters of main-sequence stars are presented, with $BC$, $(B-V)_0$ and $M_V$ as functions of the typical effective temperatures associated with the spectral types.
Table\,6, here, is kept short, containing only the spectral types, typical $T_{eff}$, mean $BC$s and intrinsic colours of nearby Galactic main-sequence stars with 0.008 $<$ Z $<$ 0.040 for the bands $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$. An interesting feature of the multiband $BC$-$T_{eff}$ relations is revealed if the $BC$ values of Table\,6 are plotted in a single frame (Fig.\,7) as a function of effective temperature. It is not a surprise that the $BC$-$T_{eff}$ relations of the Gaia passbands (Fig.\,7a) intersect at a common point near $\sim$10000 K (log $T_{eff}=$ 4). This must be due to the use of the Vega system of magnitudes, which assumes zero intrinsic colours for a hypothetical star of $A0V$ spectral type with an effective temperature of 10000 K. The Vega system of magnitudes uses Vega as the standard star and assumes that all intrinsic colours of Vega are equal to 0; although Vega is classified as an A0V star, its effective temperature is $\sim$9600 K and its apparent brightness is $V = 0.03$ mag. The crossing point of the $BC$-$T_{eff}$ relations of Johnson $B$, $V$ occurring erroneously at a higher temperature (Fig.\,7b) indicates the lower precision and accuracy of the Johnson magnitudes with respect to the Gaia magnitudes used in this study. This is not a surprise, because the total $B$ and $V$ brightnesses of the DDEB systems are collected from various sources (see Table 1), unlike the Gaia magnitudes, which are taken from a single source, EDR3. The maximum $BC$ and/or the ranges of positive $BC$ values given in Table\,5 are visualised in Fig.\,7. \section{How to improve standard luminosities} Improving the accuracy of a standard $L$ can be achieved in two ways: one is to improve the accuracy of the existing standard $BC - T_{eff}$ relations, and the other is to increase the diversity of the standard $BC - T_{eff}$ relations. The former has already been achieved by calibrating the multiband standard $BC - T_{eff}$ relations using the most accurate stellar astrophysical data available.
The improved relations and their related statistics are given in Table\,5. A user would now produce a standard $L$ of a star almost twice as accurate using the $BC_V - T_{eff}$ relation of this study rather than that of Eker et al. (2020), provided the propagated uncertainty of the star's $M_V$ is dominated by the relative parallax error $\left(\frac{\sigma_\varpi}{\varpi}\right)$ and $\frac{\sigma_\varpi}{\varpi} \ll$ 5.5 per cent. This is because the standard deviation of the $BC_V - T_{eff}$ curve of this study is reduced to $SD=$ 0.12 mag, implying 11.05 per cent for $\Delta L/L$ if $M_V$ is errorless, while it was $SD=$ 0.215 mag (see Table\,5), implying 19.8 per cent for $\Delta L/L$, correspondingly. Otherwise (if $\frac{\sigma_\varpi}{\varpi} \gg$ 5.5 per cent), the uncertainty of the computed $L$ is naturally dominated by the parallax error, and the propagated uncertainty of the standard $L$ would be much bigger ($\Delta L/L$ $\gg$ 11.05 per cent). The standard deviation of a $BC_V - T_{eff}$ curve determines the limiting accuracy of the standard $L$ (Eker et al. 2021b). Therefore, Table\,5 implies that a user can obtain a standard $L$ with an error as small as 12.5, 11.05, 10.2, 11.7, or 10.05 per cent using the $BC - T_{eff}$ relation of the $B$, $V$, $G$, $G_{BP}$, or $G_{RP}$ photometric band, respectively. However, if $\frac{\sigma_\varpi}{\varpi}$ $\gg$ 6.3, 5.5, 5.1, 5.8 or 5.02 per cent, in accord with the photometric bands, the standard error of $L$ is bigger. At the limit, when the uncertainty of $M_V$ dominates over the uncertainty of $BC$ and the distance error dominates over the brightness and extinction errors, it becomes twice the relative error of the parallax according to the formulation of Eker et al. (2021b).
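The quoted percentages follow from linear error propagation through Eq.(9): since $M_{bol} = -2.5\log L + C_{bol}$, a magnitude scatter $\sigma$ maps to $\Delta L/L = 0.4\ln(10)\,\sigma$. A short check, using the two standard deviations quoted above:

```python
import math

# Relative luminosity uncertainty from a magnitude uncertainty:
# M_bol = -2.5 log10 L + C_bol  =>  dL/L ~= 0.4 ln(10) * dM
# (first-order propagation, valid for small dM).

def dL_over_L(sigma_mag):
    return 0.4 * math.log(10.0) * sigma_mag

new = dL_over_L(0.12)    # SD of this study's BC_V-Teff curve -> ~11.05 %
old = dL_over_L(0.215)   # SD of the Eker et al. (2020) curve -> ~19.8 %
```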
By providing independent $BC - T_{eff}$ relations at the $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$ passbands, each determined independently by the least-squares method from the independent observational photometric and astrometric data of DDEB, which are known to provide the most accurate stellar astrophysical parameters, Table\,5 allows us to investigate the second way of improving the accuracy of a standard $L$ as well. One does not need to calculate the actual value of the standard $L$ to estimate its relative uncertainty ($\Delta L/L$) if it comes from a single $BC - T_{eff}$ relation. Such estimates, however, are not possible in the second way of improving: the actual $L$ from each of the $BC - T_{eff}$ relations is needed. There are then many standard $L$ values for a star, one per photometric band, like many independent measurements of a quantity. However, we prefer not to calculate many different standard $L$ for a star and then take an average. Instead, we first predict five different $M_{Bol}$ values, together with their uncertainties propagated from the uncertainty of $M_\xi$ and the uncertainty of $BC_\xi$ (Table\,5), and then combine them according to \begin{equation} M_{bol} = \frac{1}{N}\sum_i^N M_{bol,i} \label{eq:Mbol} \end{equation} to get a single $M_{bol}$ for the star, where $M_{bol,i} = M_i + BC_i$, with $i=$ $B$, $V$, $G$, $G_{BP}$, and $G_{RP}$ passbands. $N$ is a number between 2 (if $M_{Bol}$ is predicted from $B$ and $V$ only) and 5 (if $M_{Bol}$ is predicted from all the passbands), because some systems do not have a total apparent brightness measured in certain photometric bands. At last, the most improved standard $L$ of a star is predicted directly from its mean $M_{bol}$ value according to Eq.(9). To estimate its relative uncertainty ($\Delta L/L$), we prefer to calculate a standard error for the $M_{Bol}$ first and then propagate it to the standard $L$.
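The combination of Eq.(10) with Eq.(9) can be sketched as follows. The per-band $M_\xi$ and $BC_\xi$ values in the example are hypothetical placeholders for some star observed in four bands, not entries from the paper's tables.

```python
import math

# Combine per-band bolometric magnitudes (Eq. 10) into one M_bol and
# predict L from it (Eq. 9, IAU 2015 GAR B2 zero point, SI units).

C_BOL_SI = 71.197425

def predict_L(abs_mags, bcs):
    """abs_mags, bcs: per-band M_xi and BC_xi (same band ordering).
    Returns (mean M_bol in mag, L in watts)."""
    mbols = [m + bc for m, bc in zip(abs_mags, bcs)]
    mbol = sum(mbols) / len(mbols)
    L = 10.0 ** ((C_BOL_SI - mbol) / 2.5)
    return mbol, L

# Hypothetical star measured in V, G, G_BP, G_RP (made-up values):
mbol, L = predict_L([4.75, 4.58, 4.86, 4.15], [0.07, 0.11, -0.13, 0.56])
```

Here $N$ is simply the length of the input lists, so stars lacking some band measurements are handled by passing shorter lists.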
A similar approach of using an average bolometric magnitude based on several different photometric passbands was used by Pedersen et al. (2020) to derive the $L$ of B dwarfs, but with apparent instead of absolute bolometric magnitudes. The predicted (from photometry) and calculated (from $R$ and $T_{eff}$) $L$ of the sample stars of this study are compared in Fig.~8a. A very high correlation ($R^2>0.999$) between the predicted and calculated luminosities is clearly seen. Fig.~8b compares the histogram distributions of their uncertainties. The uncertainties of the predicted $L$ have a sharp, well-defined peak at 2 per cent with a smaller dispersion, while the uncertainties of the calculated $L$ have a fuzzy peak at 8 per cent with a much wider dispersion. Fig.~8 shows that a prominent improvement in predicting the standard $L$ of a star occurs if all the existing independent $BC - T_{eff}$ relations are used according to the method introduced in this study. The improvement is remarkable and real (not speculative): there is now a method which can provide a standard luminosity of a star more accurately than the classical method using observed radii and effective temperatures according to the Stefan-Boltzmann law. We summarise the data produced by the method of this study in Table\,7. The columns are self-explanatory, showing the order, the system name, and the component (primary or secondary) in the first three columns. The next ten columns are reserved for the $M_{bol}$ values predicted from the definition ($M_{bol} = M_\xi + BC_\xi$) and the associated propagated errors at the $B$, $V$, $G$, $G_{BP}$, and $G_{RP}$ passbands. Then, the combined (mean) $M_{bol}$ and its standard error are given in columns 14 and 15. The logarithm of the predicted $L$ in solar units and its relative uncertainty are listed in columns 16 and 17. The last two columns are for the calculated $L$ in solar units and its relative uncertainty.
Fig.~8 is produced from the last four columns of Table\,7. Therefore, the last four columns are ideal for a reader who is interested in comparing the actual numerical values of the predicted and computed $L$ and in seeing how small the relative errors of the predicted $L$ are compared to the errors of the computed $L$. \section{Discussions} \subsection{The Sun and a solar twin for testing} At first thought, one may think the Sun is not a good candidate for testing how well its luminosity can be predicted by the method described in this study, because it is the reference star that IAU 2015 GAR B2 used to determine the zero-point constant of the bolometric magnitude scale, $C_{Bol}=$ 71.197 425 ... mag, from $L=$ 3.8275($\pm$0.0014)$\times10^{26}$ W and $M_{Bol,\odot}\cong$ 4.739 996... mag. The $BC$ values (--0.600, 0.069, 0.106, --0.134, 0.567, respectively, at $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$) given in Table\,5, which are marked with the $\odot$ symbol, should not be understood as the $BC$ of the Sun. They are the predicted $BC$ values for a typical main-sequence star having $T_{eff}=$ 5772 K. Consequently, the absolute and apparent magnitudes given in Table\,5 just below those $BC$ values are the typical absolute and apparent magnitudes if such a star were to replace the Sun. Now the question: is it possible to estimate the luminosity of the Sun just from its effective temperature ($T_{eff,\odot}=$ 5772 K) using the $BC$ given in Table\,5? To proceed towards this aim, one needs apparent magnitudes and a distance, as for the other stars used in this study. To be sure, one cannot find measured Gaia apparent magnitudes for the Sun, although one could calculate what they would be using a measured/calculated solar composite spectrum (Willmer, 2018), and one can find a value $V_\odot=$ --26.76 $\pm$ 0.03 mag (Torres, 2010).
The same and slightly different (in the second digit after the decimal) values appear to be preferred by various authors (see the references in Torres (2010) and Eker et al. (2021b)). The astronomers' handbook ``Allen's Astrophysical Quantities'' (Cox, 2000) gives it 0.01 mag dimmer; there, the apparent and absolute magnitudes of the Sun in the $U$, $B$, $V$, $R$, $I$ and $K$ bands can be found. Taking $(B-V)_\odot=$ 0.65 mag, one can start from $V_\odot=$ --26.76 mag and $B_\odot=$ --26.11 mag and proceed to predict the solar luminosity together with its uncertainty by applying the method described in this study. Here, we assume the $B$ and $V$ apparent magnitudes of the Sun have the same uncertainty ($\pm0.03$ mag). Since there is no interstellar extinction for the Sun, and its distance is known with great precision compared to other stars, only the observational uncertainty $\Delta V \approx \Delta B \approx 0.03$ propagates to the solar absolute magnitudes: $M_{V,\odot}=$ 4.812 $\pm$ 0.03 mag and $M_{B,\odot}=$ 5.462 $\pm$ 0.03 mag. Using the $BC$ and RMS values (Table\,5) for a typical main-sequence star with $T_{eff}=$ 5772 K, the predicted bolometric magnitudes of the Sun would be $M_{Bol,\odot}$(1) = 4.881 $\pm$ 0.136 mag and $M_{Bol,\odot}$(2) = 4.862 $\pm$ 0.12 mag, respectively, from its $V$ and $B$ magnitudes. Combining them by taking a simple average gives $M_{Bol,\odot}$ = 4.872 mag. The differences from the mean indicate a $\pm$0.095 mag uncertainty. At last, using Eq.(9), the predicted solar luminosity is $L_P(\odot)= 3.39\times10^{26}$ W, and the relative uncertainty is $\Delta L/L\approx$ 8.7 per cent. Comparing this value to the solar luminosity $L= 3.838\times10^{26}$ W, one can see how successful the method is: a luminosity about 11.7 per cent smaller than the actual $L_\odot$ is predicted.
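The solar arithmetic above can be reproduced directly from the quoted magnitudes and $BC$ values:

```python
import math

# Worked check of the solar prediction: average the V- and B-based
# bolometric magnitudes, then convert to L with the IAU 2015 GAR B2
# zero point (SI units).

C_BOL_SI = 71.197425

mbol_from_V = 4.812 + 0.069    # M_V,sun + BC_V  -> 4.881
mbol_from_B = 5.462 - 0.600    # M_B,sun + BC_B  -> 4.862
mbol = (mbol_from_V + mbol_from_B) / 2.0   # -> 4.8715, rounds to 4.872
L = 10.0 ** ((C_BOL_SI - mbol) / 2.5)      # -> about 3.39e26 W
```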
A single-channel prediction, from $M_{Bol,\odot}$(1) or $M_{Bol,\odot}$(2) alone, would have given us a prediction with a relative error of 12.5 or 11.05 per cent, respectively. All predicted $L$ values agree with the real $L$ within the estimated error limits. \subsection{A solar twin for a test} The primary of the HP Aur system, with a mass $M=$ 0.9543 $\pm$ 0.0041 $M_\odot$, a radius $R=$ 1.0278 $\pm$ 0.0042 $R_\odot$, and an effective temperature $T_{eff}=$ 5810 $\pm$ 120 K (Lacy et al. 2014), can be considered a solar twin. According to the Stefan-Boltzmann law, its luminosity is $L=$ 1.084 $L_\odot= 4.162\times10^{26}$ W. Propagation of the observational uncertainties shows that its relative error ($\Delta L/L$) is about 8.302 per cent. Now the question is: using the method of this study, how accurately can its luminosity be predicted? Its apparent brightnesses are $V=$ 11.489$\pm$0.07, $G=$ 11.283$\pm$0.003, $G_{BP}=$ 11.628$\pm$0.004, and $G_{RP}=$ 10.761$\pm$0.004 mag. According to the simplified SED, the primary contributes 75.7 per cent of the total light radiated by the system in $V$, 75 per cent in $G$, 76.5 per cent in $G_{BP}$ and 72.9 per cent in $G_{RP}$. The simplified SED also implies $A_V=$ 0.335 mag, $A_G=$ 0.298 mag, $A_{G_{BP}}=$ 0.366 mag, and $A_{G_{RP}}=$ 0.207 mag as the interstellar dimming. The parallax of the system is 5.2432$\pm$0.0306 mas. Consequently, its absolute magnitudes are $M_V=$ 4.753$\pm$0.033, $M_G=$ 4.583$\pm$0.033, $M_{G_{BP}}=$ 4.860$\pm$0.042, and $M_{G_{RP}}=$ 4.152$\pm$0.024 mag. Table\,5 gives $BC_V=$ 0.072$\pm$0.12, $BC_G=$ 0.106$\pm$0.11, $BC_{G_{BP}}=$ --0.129$\pm$0.13, and $BC_{G_{RP}}=$ 0.562$\pm$0.11 mag. Here, we notice that the errors in the $BC$ values are bigger than the errors in the filter-based absolute magnitudes, so the latter could be ignored. That is, the $BC$ errors are the dominant factor when calculating the uncertainty of its bolometric absolute magnitudes.
This means that if a single passband is used in predicting its $L$, the relative error of its $L$ ($\Delta L/L$) will not be smaller than 10 per cent. According to Eq.(3), the $M_{Bol}$ values are calculated as 4.824, 4.688, 4.73, and 4.713 mag, respectively, at the four photometric bands. At last, one obtains a mean $M_{Bol}=$ 4.739$\pm$0.03 mag for the solar twin, where the uncertainty is taken to be the standard error by definition. Computing its $L$ using Eq.(9) gives $L= 3.831\times10^{26}$ W, and the $\pm$0.03 mag uncertainty translates to $\Delta L/L\sim$2.5 per cent. The predicted and calculated $L$ for the primary of HP Aur agree with each other within the error limits. The luminosity calculated from $R$ and $T_{eff}$ using the Stefan-Boltzmann law appears overestimated; the predicted standard $L= 3.831\times10^{26}$ W appears more reliable because of its smaller predicted uncertainty. It can thus be concluded that the method of computing the standard $L$ of stars using multiband photometry and $BC$-$T_{eff}$ relations involving the SED is very successful, predicting luminosities much more accurately than the direct method using the observed $R$ and $T_{eff}$. Using four $BC$-$T_{eff}$ curves ends up predicting a standard $L$ at least four times more accurately than if one uses only one of the existing $BC$-$T_{eff}$ relations. \subsection{Standard or non-standard} What makes the $BC$s (and the $BC_V$-$T_{eff}$ relation) of this study standard? What makes the $BC$s of Casagrande \& VandenBerg (2018), Cox (2000), Andrae et al. (2018), and Mamajek's personal online $BC$ table accessible on the internet\footnote{www.pas.rochester.edu/$\sim$emamajek/EEM$\_$dwarf$\_$UBVIJHK$\_$colors$\_$Teff.txt} non-standard? At first look, IAU 2015 GAR B2 was issued only for solving the long-lasting problem of the arbitrariness attributed to the zero point of bolometric magnitudes.
However, the arbitrariness of the bolometric magnitude scale is not independent of the arbitrariness of the $BC$ scale, according to Eq.(3). Articles such as Casagrande \& VandenBerg (2018) and Andrae et al. (2018), which still defend the arbitrariness of the $BC$ scale, cause confusion. Eker et al. (2021a) have shown that fixing the zero point of the bolometric magnitude scale also fixes the zero point of the bolometric correction scale. To avoid $BC$ determinations with different zero points, Eker et al. (2021a) defined the concept of a standard $BC$. The standard $BC$ is not only for the $V$ band; the definition covers all bands of all photometric systems. Eker et al. (2021b) explained how to recognise non-standard $BC$ values. Briefly, using Eq.(9) with a definite $C_{Bol}$ makes the computed $M_{Bol}$ unique. Since stellar absolute magnitudes at the well-defined passbands of the various photometric systems are also unique (the absolute magnitude of a star cannot have two or more values in a specified band), the product of Eq.(3) was defined as the standard $BC$, because subtracting a unique number from another unique number is also unique. The nominal value $C_{Bol} =$ 71.197 425 ... corresponds to the nominal values $M_{Bol,\odot} =$ 4.74 mag (a rounded value; the true value is 4.739 996 ...) and $L_{\odot} = 3.828\times10^{26}$ W, thus $C_{Bol} = M_{Bol,\odot} + 2.5\log L_{\odot}$ (see IAU 2015 GAR B2; Eker et al. 2021a,b). Consequently, using a $C_{Bol}$ different from the nominal $C_{Bol}$ in Eq.(9), or using a non-nominal value of $M_{Bol,\odot}$ or $L_{\odot}$ in the following equation: \begin{equation} M_{bol} = M_{bol,\odot} - 2.5\log\frac{L}{L_\odot} \label{eq:Mbol_sun} \end{equation} is sufficient to make a computed $BC$ non-standard.
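The zero-point relation quoted above is easy to verify numerically from the nominal values:

```python
import math

# Check of C_Bol = M_Bol,sun + 2.5 log10(L_sun), using the nominal
# IAU 2015 GAR B2 values: M_Bol,sun = 4.739996 mag, L_sun = 3.828e26 W.

M_BOL_SUN = 4.739996
L_SUN = 3.828e26

C_bol = M_BOL_SUN + 2.5 * math.log10(L_SUN)   # -> about 71.1974
```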
Moreover, a $BC$ is also not standard if it is computed through Eq.(1) with an arbitrary $C_2$. Casagrande \& VandenBerg (2018) used $M_{Bol,\odot} =$ 4.75 mag for the absolute bolometric magnitude of the Sun rather than the nominal value $M_{Bol,\odot} =$ 4.74 mag suggested by IAU 2015 GAR B2. On the other hand, the $BC$s of Cox (2000) are also not standard, because the nominal $M_{Bol,\odot} =$ 4.74 mag was used together with a non-nominal $L_{\odot} = 3.845 \times10^{26}$ W corresponding to a non-nominal $C_{Bol}$ (see Eker et al. 2021b). Despite using the nominal values of $M_{Bol,\odot}$ and $L_{\odot}$, the $BC$ values of Andrae et al. (2018) are also not standard. This is because Andrae et al. (2018) preferred to use Eq.(1) with an assumed arbitrary $C_2$ when computing $BC$ values for the Gaia photometric passbands. A slightly different case seems to have occurred in the $BC$ tables given by Cox (2000), who took the arbitrariness of $C_2$ for granted. Consequently, to be consistent with the paradigm ``bolometric corrections must be negative'' (see page 381 of Cox (2000)), the zero point of the $BC$ scale was set to make all $BC$ values negative. This is the second reason why the $BC$ values of Cox (2000) are not standard. Eker et al. (2021a) have shown that the zero-point constant in Eq.(1) takes different values at different filters, such that $C_2 = C_{Bol} - C_\xi$, where $C_\xi$ is the zero point for a passband; thus $C_2$ is also not arbitrary but has a definite value. Although $C_2$ appears like an integration constant in Eq.(1), it is actually not an integration constant or an algebraic sum of integration constants required by the integrals appearing in Eq.(1); it is well known that definite integrals do not take constants. Therefore, $C_2$ must be a constant imposed by absolute photometry, such as \begin{equation} M_{\xi} = -2.5\log L_\xi+C_\xi.
\label{eq:Mksi} \end{equation} Subtracting this from Eq.(9), which is imposed by Eq.(3), gives Eq.(1), where the definite integrals produce the surface fluxes of the star, bolometric: $f_{Bol}=\int^{\infty}_{0}f_\lambda d\lambda$, and in a photometric band: $f_{\xi}=\int^{\infty}_{0}S_\lambda (\xi) f_\lambda d\lambda$. The definite integrals do not take constants, but absolute photometry (Eqs 9 and 12) requires $C_2 = C_{Bol} - C_\xi$. Since the value of $C_{Bol}$ was unknown before IAU 2015 GAR B2, and no telescope or detector exists to observe $M_{Bol}$ directly, it was natural to assume both $C_{Bol}$ and $C_{2}$ arbitrary. Therefore, authors such as Cox (2000) and Pecaut \& Mamajek (2013) had an excuse to assume the $BC$ scale arbitrary and then to impose a personal condition setting up a private absolute scale. Cox (2000) took $BC_V = 0$ for F2 supergiants, and Mamajek (2021) uses $BC_V = -0.085$ for a G2 main-sequence star in his online $BC$ table to set a private zero point for the $BC$ scale. Similarly, Andrae et al. (2018) also set their absolute scale by taking $BC_{V,\odot}= -0.07$ and stating that the ``bolometric correction needs a reference point to set the absolute scale''. Setting up a private absolute scale for $BC$, as done by Mamajek (2021) and Andrae et al. (2018), has not been acceptable since 2015. Maintaining such private absolute scales despite IAU 2015 GAR B2 means either not recognising the absolute $BC$ scale set by IAU 2015 GAR B2 or not understanding it properly. Therefore, any $BC$ value tied to a private $BC$ scale, as implied by Casagrande \& VandenBerg (2018), Cox (2000), Andrae et al. (2018) and Mamajek (2021), is not standard. \subsection{Colour-Temperature and Temperature-Colour Relations} It is a great advantage to have already calibrated $BC$-$T_{eff}$ relations at various bands of a photometric system. This way, intrinsic colour-temperature relations are automatically set. Flower (1996) and Eker et al.
(2020) had to compute the observed ($B-V$) colours of the components first, from the light ratios ($l_2/l_1$) of the components, if these were available from light curve solutions in both the $B$ and $V$ bands. Intrinsic $(B-V)_0$ colours were then obtained using the reddening law $A_V/E(B-V) = R_V$ and the definition $E(B-V)= (B-V) - (B-V)_0$. Only after obtaining the $(B-V)_0$ of the components could the $(B-V)_0 - T_{eff}$ relation be calibrated using the published component effective temperatures. Here, in this study, the intrinsic colours of the component stars (the data) are first computed directly as the differences between the absolute magnitudes in Table\,3. The computed intrinsic colours are then plotted in Fig.\,9, where the solid lines represent the colour-temperature relations $(G_{BP}-G_{RP})_0 - T_{eff}$, $(G-G_{RP})_0 - T_{eff}$, $(G-G_{BP})_0 - T_{eff}$, $(V-G)_0 - T_{eff}$, and $(B-V)_0 - T_{eff}$, respectively, from top to bottom. The five colour-effective temperature relations are directly computable as differences of the $BC - T_{eff}$ relations. For example, the $(B-V)_0 - T_{eff}$ relation is obtained as $BC_V (T_{eff}) - BC_B (T_{eff})$ from the functions presented in Table\,5. Similarly, for the other colours: $BC_G (T_{eff}) - BC_V (T_{eff})$ gives the $(V-G)_0 - T_{eff}$ relation, $BC_{G_{BP}} (T_{eff}) - BC_G (T_{eff})$ gives the $(G-G_{BP})_0 - T_{eff}$ relation, $BC_{G_{RP}} (T_{eff}) - BC_G (T_{eff})$ gives the $(G-G_{RP})_0 - T_{eff}$ relation, and finally $BC_{G_{RP}} (T_{eff}) - BC_{G_{BP}} (T_{eff})$ gives the $(G_{BP}-G_{RP})_0 - T_{eff}$ relation. The solid lines (colour-temperature relations) certainly follow the trend of the data quite nicely. In particular, the upper two panels of Fig.~9, $(G_{BP}-G_{RP})_0$ and $(G-G_{RP})_0$, are represented very nicely by the solid lines, while the middle panel [$(G-G_{BP})_0$] can only be considered successful for medium-hot and cooler stars (log $T_{eff}$ < 4.2).
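The colour-from-$BC$-difference construction is a one-liner once the two polynomials are available. The quartic coefficients below are made-up placeholders (the calibrated coefficients are in Table\,5 of the paper); only the structure of the computation is illustrated.

```python
# Intrinsic colour-temperature relation as the difference of two
# BC-Teff polynomials, e.g. (B-V)_0(Teff) = BC_V(Teff) - BC_B(Teff).

def poly(coeffs, x):
    """Evaluate c0 + c1*x + c2*x**2 + ... at x."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

BC_V_COEF = [-120.0, 95.0, -28.0, 3.7, -0.18]    # made-up quartic in log Teff
BC_B_COEF = [-125.0, 98.0, -28.8, 3.78, -0.183]  # made-up quartic in log Teff

def BV0(log_teff):
    """(B-V)_0 as the difference of the two BC polynomials."""
    return poly(BC_V_COEF, log_teff) - poly(BC_B_COEF, log_teff)

colour = BV0(3.76)   # log Teff near a solar-temperature star
```

The same difference structure yields the other four colour relations by swapping in the corresponding pairs of $BC$ polynomials.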
Nevertheless, a small but clear offset between the solid lines and the data is obvious in the lowest two panels [$(V-G)_0$ and $(B-V)_0$] of Fig.~9. It has already been discussed at the end of Section 4 that the $B$ and $V$ data are less reliable than the Gaia data. Moreover, if the numbers of data points towards the cooler and hotter ends in Figs.~4 and 5 are compared, the Gaia bands appear relatively more crowded. Being less reliable and having fewer data points than the Gaia bands towards both ends of the temperature scale, the $BC - T_{eff}$ relations of the $B$ and $V$ bands appear to be the most probable cause of the offset seen in the lowest two panels of Fig.~9. This is because the solid lines are simply differences of the $BC$--$T_{eff}$ relations; the bias caused by the smaller number of data points affects not only both ends but also the mean $BC$ values, so the solid lines are noticeably shifted, producing the offset seen especially for the $B$ and $V$ bands. Therefore, only the solid lines in the upper two panels [$(G_{BP}-G_{RP})_0$ and $(G-G_{RP})_0$] are found suitable to represent colour--temperature relations; these are included in Table 6, where they are presented together with the $BC$ values produced from the polynomials in Table 5, as a function of spectral type and typical effective temperature for main-sequence stars with metallicity 0.008 $<$ Z $<$ 0.040. On the other hand, it is more practical for a user to have an effective temperature--colour relation in order to estimate the effective temperature of a main-sequence star from an intrinsic colour. For this, we have calibrated inverse relations only for $B-V$ in Johnson photometry and $G_{BP}$ -- $G_{RP}$ in Gaia photometry. Effective temperatures of the DDEB sample of this study are plotted as functions of $(B-V)_0$ and $(G_{BP}-G_{RP})_0$ in Fig.\,10.
Data points are the same as in the lowermost and uppermost panels of Fig.\,9, but with the vertical and horizontal axes interchanged and re-organised. The solid lines in Fig.\,10 are temperature--colour relations predicted directly from the intrinsic colours of the DDEB marked on Fig.\,10, unlike the colour--temperature relations shown in Fig.\,9, which are obtained from differences of the $BC$--$T_{eff}$ relations. The temperature--colour relations are given as polynomials in Table\,8. Fourth-degree polynomials are found to best represent the $(B-V)$ and $(G_{BP} -G_{RP})$ intrinsic colours of the main-sequence stars chosen from the components of the DDEB sample of this study. The coefficients and associated errors are determined by the least-squares method and listed in Table\,8, together with the ranges of validity expressed in intrinsic colours as $-0.5 \leq (B-V)_0 \leq 1.5$ and $-0.6 \leq (G_{BP} -G_{RP})_0 \leq 1.7$. Except for four stars (the components of V881 Per and 2MASS J19071662+463932), which are marked with their order numbers in Fig.\,10a, the intrinsic $(B-V)$ colours are well represented by the predicted $T_{eff}$--colour relation (solid line). The $T_{eff}$--colour relations of Eker et al. (2020), marked as dotted curves, and Mamajek (2021), marked as a dashed curve, are also plotted in the same figure for comparison. It is clear in Fig.\,10a that the solid line (this study) is more successful in representing the data than the dotted and dashed curves. Although both the dotted and dashed curves are drawn to represent intrinsic $(B-V)$ colours up to 2.00, they do not represent even the reddest stars ($(B-V)_0$ > 0.80) well. Unfortunately, no other full-range intrinsic $(G_{BP}-G_{RP})$ colours have been published against which the temperature--colour relation predicted in this study could be compared.
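Schematically, each temperature--colour fit of Table\,8 therefore takes the quartic form (a sketch; the actual coefficients $a_i$, their least-squares errors, and the exact dependent variable are as listed in Table\,8):
\[
T_{eff} = \sum_{i=0}^{4} a_i\,(B-V)_0^{\,i}, \qquad -0.5 \leq (B-V)_0 \leq 1.5,
\]
and similarly in $(G_{BP}-G_{RP})_0$ over $-0.6 \leq (G_{BP}-G_{RP})_0 \leq 1.7$; the polynomials should not be extrapolated beyond these validity ranges.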
The main-sequence $(G_{BP}-G_{RP})$ intrinsic colours of Mamajek (2021) cover a range of temperatures $2350 \leq T_{eff} \leq 10700 $ K, spectral types B9.5V to M9.5V and $(G_{BP}-G_{RP})$ from --1.2 to 4.86. The full range of $(G_{BP}-G_{RP})$ data is represented better by the $T_{eff}$--colour relation of this study. The dashed curve (Mamajek 2021) does not reach the hottest stars. Agreement between the solid and dashed curves at intermediate temperatures is clear. The coolest stars are again better represented by the $T_{eff}$--colour relation of this study than by the dashed curve of Mamajek (2021). Intrinsic colours as a function of spectral type and effective temperature, computed according to the two $T_{eff}$--colour relations shown in Fig.\,10 and listed in Table 8, are also included in Table 6. It is important to notice that Table 6 has four columns with three intrinsic colours: the first two [$(G-G_{RP})_0$ and $(G_{BP}-G_{RP})_0$] (columns 8 and 9) are produced by subtracting the proper $BC$ columns in the same table as described before, that is, directly from the $BC$--$T_{eff}$ relations listed in Table 5, while the last two [$(G_{BP}-G_{RP})_0$ and $(B - V)_0$] (columns 10 and 11) are produced from the $T_{eff}$--colour relations listed in Table 8, which come directly from the intrinsic colours of DDEB stars. Having the same intrinsic colour [$(G_{BP}-G_{RP})_0$] produced by the two different methods described above provides a fine test of the new method of producing intrinsic colours, since, as a first approximation, the other three intrinsic colours [$(B-V)_0$, $(V-G)_0$, and $(G-G_{BP})_0$] produced by the new method were already eliminated by eye inspection in Fig.~9.
The elimination of these intrinsic colours indicates that the $BC - T_{eff}$ relations in Table 5 are not accurate enough to produce intrinsic colours, although they are shown to be reliable for estimating a standard $L$ (if one of them is used) and for improving its accuracy (if several of them are used), as demonstrated in Fig.~8. The mean difference between the two columns of Table 6 giving the same intrinsic colour [$(G_{BP}-G_{RP})_0$] produced by the different methods (columns 9 and 10) can be used as a parameter indicating the reliability of the new method with respect to the classical method. The numbers in Table 6 indicate a 0.06 mag difference for this study. Minimizing this value in a future study would definitely indicate a noticeable improvement of the new method for producing reliable intrinsic colours from $BC - T_{eff}$ relations. Not only the intrinsic colours but also the predicted standard $L$ would be improved, because the $BC - T_{eff}$ relations themselves would also be improved automatically. Ideally, both methods would produce the same numbers; that is, the mean difference between the two columns producing the same colour should be zero or negligible. To this end, we encourage future researchers not only to increase the number of filters, photometric systems, and DDEB stars used (especially towards both ends of the temperature scale), but also to find a method of homogenizing the systemic brightnesses, or to set up an observing programme to obtain consistent total brightnesses of the systems in the inconsistent bands, improving their consistency to the level of the Gaia bands.
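The reliability parameter described above can be written explicitly. Assuming a straight mean of absolute differences over the $N$ tabulated entries (the averaging convention is our assumption):
\[
\overline{\Delta} = \frac{1}{N}\sum_{i=1}^{N}\Big| (G_{BP}-G_{RP})_{0,i}^{\,BC} - (G_{BP}-G_{RP})_{0,i}^{\,T_{eff}} \Big| \approx 0.06~{\rm mag},
\]
where the superscripts $BC$ and $T_{eff}$ denote, respectively, the colours of column 9 (differences of the Table 5 $BC$--$T_{eff}$ relations) and column 10 (the Table 8 $T_{eff}$--colour relation) of Table 6.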
\subsection{Reddening Law and Colour-Colour Relations} It is possible to check the best value of the passband-based parameter $R(\xi)$ by modelling the $A(\xi)$-$E(B-V)$ relation, which is given as: \begin{equation} R_{\xi}=\frac{A(\xi)}{E(B-V)} \label{eq:Rfilter} \end{equation} \begin{equation} E(B-V)=A(B)-A(V) \label{eq:EBV} \end{equation} Using the passband-based $A(\xi)$ values from Table\,3 and Eq.(14), colour excess--interstellar dimming relations for the Johnson and Gaia passbands have been constructed; they are shown in Figs.\,11 and 12, together with the standard deviation (RMS) of each correlation on each plot. The correlation has the form $f(x)=a + bx$, where $a$ is the constant term and $b$ is $R(\xi)$. In all correlations, the constant term is zero according to Eq.(13). The most commonly used parameter, $R(V)$, is found to be $3.012\pm0.002$, slightly smaller than the common average value for the solar neighbourhood in the Milky Way ($R(V)=3.1$). For the Johnson $B$ filter, the relation gives $R(B)=4.012\pm0.002$, and for the Gaia passbands, $R(G)=2.872\pm0.013$, $R(G_{BP})=3.494\pm0.009$ and $R(G_{RP})=1.885\pm0.001$. It is noteworthy that the errors of the parameters $R(\xi)$ are relatively small ($<0.1$ per cent for the $A(V)$- and $A(B)$-$E(B-V)$ relations, $<0.7$ per cent for the $A(G)$-, $A(G_{BP})$- and $A(G_{RP})$-$E(B-V)$ relations). The accuracy of the correlation parameter is relatively better for interstellar dimming in the Gaia passbands versus Gaia colour excess, except for $A(G_{RP})$-$E(G_{BP}-G_{RP})$, which is $\sim0.8$ per cent. Other useful relations in photometry are colour-colour diagrams and colour excess relations between the colours of given photometric systems. Colour excess relations between different colours may show the direction of interstellar extinction on the diagram.
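As a worked example of applying the fitted reddening law (the adopted colour excess below is illustrative only):
\[
A(\xi) = R(\xi)\,E(B-V) \;\Rightarrow\; A(B) \approx 0.401,\;\; A(V) \approx 0.301,\;\; A(G) \approx 0.287,\;\; A(G_{BP}) \approx 0.349,\;\; A(G_{RP}) \approx 0.189~{\rm mag}
\]
for $E(B-V)=0.10$ mag, using the $R(\xi)$ values quoted above; note that $A(B)-A(V)=0.100$ mag recovers $E(B-V)$ exactly, as required by Eq.(14), since $R(B)-R(V)=1$ by construction. A single measured colour excess thus yields the interstellar dimming in every calibrated band.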
Having this information on a colour-colour diagram permits users to recover the unreddened colours of stars by tracing back along the extinction direction to the intersection point on the locus of unreddened main-sequence stars. Fig.\,13 shows colour-colour and colour excess--colour excess diagrams for nearby main-sequence stars, as predicted from the DDEB sample of this study, for various Gaia passbands. Fig.\,13a compares the $(G-G_{BP})$-$(G_{BP}-G_{RP})$ relation of this study to the one given by Arenou et al. (2018). The agreement between them is very good. Nevertheless, the colour-colour curves of main-sequence stars in the Gaia passbands are almost parallel to the direction of interstellar reddening, which creates difficulties in determining unreddened colours by going back along the reddening direction. Among the colour-colour relations in Fig.\,13, $(G-G_{RP})$-$(G_{BP}-G)$ (panel c) seems more suitable for deriving intrinsic colours of cooler stars, since its reddest part is not parallel to the reddening direction. The colour excess ratios $E(G-G_{BP})$/$E(G_{BP}-G_{RP})$, $E(G-G_{RP})$/$E(G_{BP}-G_{RP})$ and $E(G_{BP}-G)$/$E(G-G_{RP})$ in panels d, e and f of the figure give the direction of extinction in the colour-colour diagrams shown in panels a, b and c, respectively. The solid red lines are the best fits to all data, while the dashed lines represent the envelope (limits) of the data. Since a unique reddening direction in a colour-colour diagram is not expected for all stars in our sample, it is normal to see a certain interval of reddening direction values in the different Gaia EDR3 passbands. The slope of each solid line gives the average value of the relevant colour excess ratio, defining the direction of reddening in the colour-colour diagrams.
The slopes of the dashed lines range between --0.31 and --0.68 for $E(G-G_{BP})$/$E(G_{BP}-G_{RP})$ (panel d), between 0.44 and 2.2 for $E(G-G_{RP})$/$E(G_{BP}-G_{RP})$ (panel e), and between 0.31 and 0.69 for $E(G_{BP}-G)$/$E(G-G_{RP})$ (panel f). Therefore, the reddening direction shown by an arrow in the left panels is not unique; the direction of the arrow may change for different Galactic directions. \section{Conclusions} \begin{itemize} \item A simplified SED model is established for predicting the component light contributions of binaries and their interstellar extinctions. \item The component light contributions predicted by the simplified SED model in the $B$ and $V$ bands of Johnson photometry are tested by comparison with the $B$- and $V$-band light contributions predicted from the light curve solutions of DDEB. The simplified SED model is found to be very successful and reliable in predicting component light contributions according to the tests in this study. \item 209 DDEB are found eligible to provide a binary SED model free of complexities (third light or any excess flux) which may spoil the SED of the binary. Then, using the component contributions produced from the simplified SED model, empirical standard $BC$s are produced by the method described by Eker et al. (2020). \item The empirical standard $BC$ values are used in calibrating empirical standard $BC$-$T_{eff}$ relations in the $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$ bands. The most accurate $BC$-$T_{eff}$ relations yet discussed are produced and presented for interested readers. \item The empirical standard $BC$-$T_{eff}$ relations of the five passbands are used for predicting standard stellar $L$. They are compared to the $L$ calculated from the observed $R$ and $T_{eff}$. If a standard $L$ is predicted from the single $BC$-$T_{eff}$ relation of a given band, propagated errors indicate that it cannot be more accurate than about 10 per cent.
Accuracy of the predicted $L$ increases with the number of $BC$-$T_{eff}$ relations used at various passbands. A standard $L$ with an uncertainty as low as one per cent (with the distribution peaking at $\sim$2.5 per cent) is possible. \item Multi-band $BC$-$T_{eff}$ relations are shown to be a practical route to intrinsic colour-temperature relations: intrinsic colour--temperature relations can be produced directly from differences of $BC$-$T_{eff}$ relations. \item Inverse colour-temperature relations involving $(B-V)_0$ and $(G_{BP}-G_{RP})_0$ are produced for interested readers who want to calculate the effective temperature of a main-sequence star from its $(B-V)_0$ or $(G_{BP}-G_{RP})_0$. \item Reddening laws, colour-colour and colour excess--colour excess relations involving the Johnson $B$ and $V$ and Gaia passbands, covering all spectral classes of the main sequence, are demonstrated from the DDEB sample of this study. \end{itemize} \clearpage \newpage \Acknow{This work uses the VizieR catalogue access tool, CDS, Strasbourg, France, and the SIMBAD database, operated at CDS, Strasbourg, France. This work presents results from the European Space Agency (ESA) space mission, Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is https://www.cosmos.esa.int/gaia. The Gaia archive website is https://archives.esac.esa.int/gaia. We thank Assoc. Prof. Mustafa Caner for proofreading and correcting the English grammar. We are also grateful to the anonymous referee, who carefully read our paper and made very meaningful comments that helped us improve its clarity.} \newpage \begin{table*} \label{tab:physicalpars} \centering \caption{Physical parameters and total magnitudes of selected systems in $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$.
The full table is available online.} \resizebox{\textwidth}{!}{\begin{tabular}{cccccccccccccccccccccc} \hline Order & Name & pri/sec & $M$ & err & $R$ & err & Reference & $T$ & err & Reference & $B$ & err & $V$ & err & Reference & $G$ & err & $G_{BP}$ & err & $G_{RP}$ & err \\ & & & ($M_\odot$) & ($M_\odot$) & ($R_\odot$) & ($R_\odot$) & & ($K$) & ($K$) & & (mag) & (mag) & (mag) & (mag) & & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) \\ \hline 1 & V421 Peg & p & 1.594 & 0.029 & 1.584 & 0.028 & 2016NewA...46...47O & 7250 & 80 & 2016NewA...46...47O & 8.650 & 0.010 & 8.280 & 0.010 & 2016NewA...46...47O & 8.208 & 0.003 & 8.384 & 0.003 & 7.890 & 0.004 \\ 2 & V421 Peg & s & 1.356 & 0.029 & 1.328 & 0.029 & 2016NewA...46...47O & 6980 & 120 & 2016NewA...46...47O & 8.650 & 0.010 & 8.280 & 0.010 & & 8.208 & 0.003 & 8.384 & 0.003 & 7.890 & 0.004 \\ 3 & DV Psc & p & 0.677 & 0.019 & 0.685 & 0.030 & 2014PASA...31...24E & 4450 & 8 & 2007MNRAS.382.1133Z & 11.604 & 0.010 & 10.621 & 0.010 & 2000A\&A...355L..27H & 10.219 & 0.005 & 10.997 & 0.015 & 9.334 & 0.013 \\ 4 & DV Psc & s & 0.475 & 0.010 & 0.514 & 0.020 & 2014PASA...31...24E & 3614 & 8 & 2007MNRAS.382.1133Z & 11.604 & 0.010 & 10.621 & 0.010 & & 10.219 & 0.005 & 10.997 & 0.015 & 9.334 & 0.013 \\ 5 & MU Cas & p & 4.657 & 0.100 & 4.192 & 0.050 & 2014PASA...31...24E & 14750 & 500 & 2004AJ....128.1840L & 11.112 & 0.009 & 10.808 & 0.007 & 2019ApJ...872...85G & 10.742 & 0.003 & 10.894 & 0.003 & 10.452 & 0.004 \\ 6 & MU Cas & s & 4.575 & 0.090 & 3.671 & 0.040 & 2014PASA...31...24E & 15100 & 500 & 2004AJ....128.1840L & 11.112 & 0.009 & 10.808 & 0.007 & & 10.742 & 0.003 & 10.894 & 0.003 & 10.452 & 0.004 \\ 7 & TYC 4019-3345-1 & p & 1.920 & 0.010 & 1.760 & 0.050 & 2013PASA...30...26B & 8600 & 310 & 2013PASA...30...26B & 12.550 & 0.009 & 12.150 & 0.008 & 2013PASA...30...26B & 11.952 & 0.003 & 12.164 & 0.003 & 11.597 & 0.004 \\ 8 & TYC 4019-3345-1 & s & 1.920 & 0.010 & 1.760 & 0.050 & 2013PASA...30...26B & 8600 & 570 & 2013PASA...30...26B 
& 12.550 & 0.009 & 12.150 & 0.008 & & 11.952 & 0.003 & 12.164 & 0.003 & 11.597 & 0.004 \\ 9 & YZ Cas & p & 2.263 & 0.012 & 2.525 & 0.011 & 2014MNRAS.438..590P & 9520 & 120 & 2014MNRAS.438..590P & 5.715 & 0.026 & 5.660 & 0.015 & 2019ApJ...872...85G & 5.630 & 0.003 & 5.659 & 0.003 & 5.546 & 0.005 \\ 10 & YZ Cas & s & 1.325 & 0.007 & 1.331 & 0.006 & 2014MNRAS.438..590P & 6880 & 240 & 2014MNRAS.438..590P & 5.715 & 0.026 & 5.660 & 0.015 & & 5.630 & 0.003 & 5.659 & 0.003 & 5.546 & 0.005 \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\ 409 & IT Cas & p & 1.330 & 0.009 & 1.603 & 0.015 & 2014PASA...31...24E & 6470 & 110 & 1997AJ....114.1206L & 11.640 & 0.019 & 11.150 & 0.039 & 1997AJ....114.1206L & 11.042 & 0.003 & 11.284 & 0.003 & 10.628 & 0.004 \\ 410 & IT Cas & s & 1.328 & 0.008 & 1.569 & 0.040 & 2014PASA...31...24E & 6470 & 110 & 1997AJ....114.1206L & 11.640 & 0.019 & 11.150 & 0.039 & & 11.042 & 0.003 & 11.284 & 0.003 & 10.628 & 0.004 \\ 411 & BK Peg & p & 1.414 & 0.007 & 1.985 & 0.008 & 2014PASA...31...24E & 6265 & 85 & 2010A\&A...516A..42C & 10.540 & 0.041 & 9.982 & 0.010 & 2019ApJ...872...85G & 9.835 & 0.003 & 10.116 & 0.003 & 9.386 & 0.004 \\ 412 & BK Peg & s & 1.257 & 0.005 & 1.472 & 0.017 & 2014PASA...31...24E & 6320 & 30 & 2010A\&A...516A..42C & 10.540 & 0.041 & 9.982 & 0.010 & & 9.835 & 0.003 & 10.116 & 0.003 & 9.386 & 0.004 \\ 413 & AP And & p & 1.277 & 0.004 & 1.234 & 0.006 & 2014AJ....147..148L & 6565 & 150 & 2014AJ....147..148L & 11.606 & 0.057 & 11.074 & 0.085 & 2015AAS...22533616H & 10.910 & 0.003 & 11.195 & 0.003 & 10.457 & 0.004 \\ 414 & AP And & s & 1.251 & 0.004 & 1.195 & 0.005 & 2014AJ....147..148L & 6495 & 150 & 2014AJ....147..148L & 11.606 & 0.057 & 11.074 & 0.085 & & 10.910 & 0.003 & 11.195 & 0.003 & 10.457 & 0.004 \\ 415 & AL Scl & p & 3.617 & 0.110 & 3.241 & 0.050 & 2014PASA...31...24E & 13550 & 350 & 1987A\&A...179..141H & 5.985 & 0.014 & 6.070 & 0.009 & 
2000A\&A...355L..27H & 6.073 & 0.003 & 6.017 & 0.004 & 6.145 & 0.004 \\ 416 & AL Scl & s & 1.703 & 0.040 & 1.401 & 0.020 & 2014PASA...31...24E & 10300 & 360 & 1987A\&A...179..141H & 5.985 & 0.014 & 6.070 & 0.009 & & 6.073 & 0.003 & 6.017 & 0.004 & 6.145 & 0.004 \\ 417 & V821 Cas & p & 2.025 & 0.066 & 2.308 & 0.028 & 2014PASA...31...24E & 9400 & 400 & 2009MNRAS.395.1649C & 8.402 & 0.029 & 8.286 & 0.017 & 2019ApJ...872...85G & 8.227 & 0.003 & 8.265 & 0.003 & 8.121 & 0.004 \\ 418 & V821 Cas & s & 1.620 & 0.058 & 1.390 & 0.022 & 2014PASA...31...24E & 8600 & 400 & 2009MNRAS.395.1649C & 8.402 & 0.029 & 8.286 & 0.017 & & 8.227 & 0.003 & 8.265 & 0.003 & 8.121 & 0.004 \\ \hline \end{tabular}} \end{table*} \newpage \begin{table*} \label{tab:lightcontrib} \centering \caption{Component apparent magnitudes of DDEB in $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$ according to component contributions estimated by the new method using simplified SED. The full table is available online.} \resizebox{\textwidth}{!}{\begin{tabular}{cccccccccccccccccccccc} \hline Order & Name & pri/sec & \multicolumn{2}{c}{Cross Reference} & \multicolumn{2}{c}{Cont. (Eker et al. 
2020)} & \multicolumn{5}{c}{Unreddened light contribution (this study)} & \multicolumn{10}{c}{Component apparent brightness (mag)} \\ & & & Xref 1 & Xref 2 & $l_{B}$ & $l_{V}$ & $l_{B}$ & $l_{V}$ & $l_{G}$ & $l_{G_{BP}}$ & $l_{G_{RP}}$ & $B$ & err & $V$ & err & $G$ & err & $G_{BP}$ & err & $G_{RP}$ & err \\ \hline 1 & V421 Peg & p & 1 & 1 & --- & 0.624 & 0.630 & 0.622 & 0.621 & 0.625 & 0.614 & 9.152 & 0.010 & 8.796 & 0.010 & 8.725 & 0.003 & 8.894 & 0.003 & 8.421 & 0.004 \\ 2 & V421 Peg & s & 2 & 1 & --- & 0.376 & 0.370 & 0.378 & 0.379 & 0.375 & 0.386 & 9.729 & 0.010 & 9.335 & 0.010 & 9.261 & 0.003 & 9.449 & 0.003 & 8.923 & 0.004 \\ 3 & DV Psc & p & 5 & 2 & 0.918 & 0.889 & 0.906 & 0.873 & 0.852 & 0.878 & 0.826 & 11.711 & 0.010 & 10.768 & 0.010 & 10.393 & 0.005 & 11.138 & 0.015 & 9.541 & 0.013 \\ 4 & DV Psc & s & 6 & 2 & 0.082 & 0.111 & 0.094 & 0.127 & 0.148 & 0.122 & 0.174 & 14.166 & 0.010 & 12.861 & 0.010 & 12.294 & 0.005 & 13.284 & 0.015 & 11.233 & 0.013 \\ 5 & MU Cas & p & 7 & 3 & 0.556 & 0.557 & 0.551 & 0.554 & 0.553 & 0.552 & 0.556 & 11.758 & 0.009 & 11.450 & 0.007 & 11.385 & 0.003 & 11.539 & 0.003 & 11.090 & 0.004 \\ 6 & MU Cas & s & 8 & 3 & 0.444 & 0.443 & 0.449 & 0.446 & 0.447 & 0.448 & 0.444 & 11.982 & 0.009 & 11.684 & 0.007 & 11.617 & 0.003 & 11.766 & 0.003 & 11.333 & 0.004 \\ 7 & TYC 4019-3345-1 & p & 9 & 4 & 0.497 & 0.496 & 0.500 & 0.500 & 0.500 & 0.500 & 0.500 & 13.303 & 0.009 & 12.903 & 0.008 & 12.705 & 0.003 & 12.917 & 0.003 & 12.350 & 0.004 \\ 8 & TYC 4019-3345-1 & s & 10 & 4 & 0.503 & 0.504 & 0.500 & 0.500 & 0.500 & 0.500 & 0.500 & 13.303 & 0.009 & 12.903 & 0.008 & 12.705 & 0.003 & 12.917 & 0.003 & 12.350 & 0.004 \\ 9 & YZ Cas & p & 13 & 5 & 0.943 & 0.919 & 0.934 & 0.916 & 0.915 & 0.925 & 0.895 & 5.790 & 0.026 & 5.756 & 0.015 & 5.726 & 0.003 & 5.744 & 0.003 & 5.667 & 0.005 \\ 10 & YZ Cas & s & 14 & 5 & 0.058 & 0.081 & 0.066 & 0.084 & 0.085 & 0.075 & 0.105 & 8.658 & 0.026 & 8.347 & 0.015 & 8.310 & 0.003 & 8.465 & 0.003 & 7.988 & 0.005 \\ ... & ... 
& ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\ 409 & IT Cas & p & 577 & 202 & 0.508 & 0.509 & 0.511 & 0.511 & 0.511 & 0.511 & 0.511 & 12.370 & 0.019 & 11.880 & 0.039 & 11.771 & 0.003 & 12.014 & 0.003 & 11.357 & 0.004 \\ 410 & IT Cas & s & 578 & 202 & 0.492 & 0.491 & 0.489 & 0.489 & 0.489 & 0.489 & 0.489 & 12.416 & 0.019 & 11.926 & 0.039 & 11.818 & 0.003 & 12.060 & 0.003 & 11.404 & 0.004 \\ 411 & BK Peg & p & 579 & 203 & 0.634 & 0.636 & 0.635 & 0.637 & 0.637 & 0.636 & 0.639 & 11.034 & 0.041 & 10.472 & 0.010 & 10.324 & 0.003 & 10.607 & 0.003 & 9.873 & 0.004 \\ 412 & BK Peg & s & 580 & 203 & 0.366 & 0.364 & 0.365 & 0.363 & 0.363 & 0.364 & 0.361 & 11.633 & 0.041 & 11.081 & 0.010 & 10.935 & 0.003 & 11.213 & 0.003 & 10.492 & 0.004 \\ 413 & AP And & p & 581 & 204 & 0.530 & 0.529 & 0.529 & 0.527 & 0.526 & 0.528 & 0.524 & 12.297 & 0.057 & 11.770 & 0.085 & 11.607 & 0.003 & 11.889 & 0.003 & 11.158 & 0.004 \\ 414 & AP And & s & 582 & 204 & 0.470 & 0.471 & 0.471 & 0.473 & 0.474 & 0.472 & 0.476 & 12.424 & 0.057 & 11.886 & 0.085 & 11.721 & 0.003 & 12.010 & 0.003 & 11.263 & 0.004 \\ 415 & AL Scl & p & 583 & 205 & 0.960 & 0.950 & 0.924 & 0.914 & 0.916 & 0.920 & 0.904 & 6.070 & 0.014 & 6.167 & 0.009 & 6.169 & 0.003 & 6.107 & 0.004 & 6.255 & 0.004 \\ 416 & AL Scl & s & 584 & 205 & 0.040 & 0.050 & 0.076 & 0.086 & 0.084 & 0.080 & 0.096 & 8.790 & 0.014 & 8.738 & 0.009 & 8.760 & 0.003 & 8.764 & 0.004 & 8.685 & 0.004 \\ 417 & V821 Cas & p & 585 & 206 & 0.797 & 0.779 & 0.794 & 0.784 & 0.785 & 0.789 & 0.774 & 8.652 & 0.029 & 8.550 & 0.017 & 8.490 & 0.003 & 8.522 & 0.003 & 8.400 & 0.004 \\ 418 & V821 Cas & s & 586 & 206 & 0.203 & 0.221 & 0.206 & 0.216 & 0.215 & 0.211 & 0.226 & 10.120 & 0.029 & 9.951 & 0.017 & 9.893 & 0.003 & 9.957 & 0.003 & 9.736 & 0.004 \\ \hline \end{tabular}} \end{table*} \begin{table*} \centering \caption{Component absolute magnitudes of DDEB in $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$ and propagated 
uncertainties. Eleven DR2 and one Hipparcos parallaxes are shown in square brackets and parenthesis, respectively. The full table is available online.} \label{tab:absolute_mag} \resizebox{\textwidth}{!}{\begin{tabular}{ccccccccccccccccccccccccc} \hline Order & Name & pri/sec & Parallax & $\frac{\sigma_\varpi}{\varpi}$ & $L/L_\odot$ & L(SI) & $\frac{\Delta L}{L}$ & $M_{Bol}$ & err & $A_B$ & $A_V$ & $A_G$ & $A_{G_{BP}}$ & $A_{G_{RP}}$ & $M_B$ & err & $M_V$ & err & $M_{G}$ & err & $M_{G_{BP}}$ & err & $M_{G_{RP}}$ & err \\ & & & (mas) & (\%) & &$\times10^{27}$ & (\%) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) \\ \hline 1 & V421 Peg & p & 6.5051 & 0.4 & 6.245 & 2.390 & 6 & 2.751 & 0.061 & 0.122 & 0.092 & 0.087 & 0.104 & 0.057 & 3.096 & 0.042 & 2.770 & 0.042 & 2.704 & 0.041 & 2.857 & 0.041 & 2.430 & 0.041 \\ 2 & V421 Peg & s & 6.5051 & 0.4 & 3.771 & 1.444 & 8 & 3.299 & 0.088 & 0.122 & 0.092 & 0.087 & 0.104 & 0.057 & 3.673 & 0.042 & 3.310 & 0.042 & 3.240 & 0.041 & 3.412 & 0.041 & 2.932 & 0.041 \\ 3 & DV Psc & p & 23.7216 & 0.1 & 0.166 & 0.063 & 9 & 6.691 & 0.095 & 0.800 & 0.603 & 0.492 & 0.633 & 0.368 & 7.787 & 0.041 & 7.041 & 0.041 & 6.777 & 0.040 & 7.381 & 0.043 & 6.049 & 0.042 \\ 4 & DV Psc & s & 23.7216 & 0.1 & 0.041 & 0.016 & 8 & 8.219 & 0.085 & 0.800 & 0.603 & 0.492 & 0.633 & 0.368 & 10.242 & 0.041 & 9.134 & 0.041 & 8.678 & 0.040 & 9.528 & 0.043 & 7.741 & 0.042 \\ 5 & MU Cas & p & 0.5133 & 3.7 & 749.386 & 286.827 & 14 & -2.447 & 0.149 & 1.644 & 1.233 & 1.243 & 1.446 & 0.776 & -1.333 & 0.091 & -1.231 & 0.090 & -1.305 & 0.090 & -1.356 & 0.090 & -1.133 & 0.090 \\ 6 & MU Cas & s & 0.5133 & 3.7 & 631.206 & 241.594 & 13 & -2.260 & 0.146 & 1.644 & 1.233 & 1.243 & 1.446 & 0.776 & -1.109 & 0.091 & -0.997 & 0.090 & -1.074 & 0.090 & -1.128 & 0.090 & -0.891 & 0.090 \\ 7 & TYC 4019-3345-1 & p & 0.8918 & 1.4 & 15.266 & 5.843 & 15 & 1.781 & 0.168 & 1.062 & 0.797 & 0.763 & 0.908 & 0.499 & 
1.992 & 0.050 & 1.856 & 0.050 & 1.693 & 0.050 & 1.760 & 0.050 & 1.602 & 0.050 \\ 8 & TYC 4019-3345-1 & s & 0.8918 & 1.4 & 15.266 & 5.843 & 27 & 1.781 & 0.294 & 1.062 & 0.797 & 0.763 & 0.908 & 0.499 & 1.992 & 0.050 & 1.856 & 0.050 & 1.693 & 0.050 & 1.760 & 0.050 & 1.602 & 0.050 \\ 9 & YZ Cas & p & 10.6528 & 0.5 & 47.181 & 18.058 & 5 & 0.556 & 0.056 & 0.041 & 0.031 & 0.031 & 0.036 & 0.019 & 0.886 & 0.049 & 0.862 & 0.044 & 0.833 & 0.041 & 0.845 & 0.041 & 0.785 & 0.042 \\ 10 & YZ Cas & s & 10.6528 & 0.5 & 3.576 & 1.369 & 14 & 3.357 & 0.152 & 0.041 & 0.031 & 0.031 & 0.036 & 0.019 & 3.754 & 0.049 & 3.453 & 0.044 & 3.417 & 0.041 & 3.567 & 0.041 & 3.106 & 0.042 \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\ 409 & IT Cas & p & 1.9419 & 0.8 & 4.057 & 1.553 & 7 & 3.220 & 0.077 & 0.203 & 0.153 & 0.142 & 0.171 & 0.095 & 3.607 & 0.048 & 3.168 & 0.059 & 3.071 & 0.044 & 3.284 & 0.044 & 2.703 & 0.044 \\ 410 & IT Cas & s & 1.9419 & 0.8 & 3.886 & 1.488 & 8 & 3.266 & 0.092 & 0.203 & 0.153 & 0.142 & 0.171 & 0.095 & 3.654 & 0.048 & 3.214 & 0.059 & 3.117 & 0.044 & 3.331 & 0.044 & 2.750 & 0.044 \\ 411 & BK Peg & p & 3.2643 & 0.5 & 5.469 & 2.093 & 5 & 2.895 & 0.060 & 0.203 & 0.153 & 0.141 & 0.170 & 0.095 & 3.400 & 0.058 & 2.888 & 0.043 & 2.753 & 0.042 & 3.006 & 0.042 & 2.347 & 0.042 \\ 412 & BK Peg & s & 3.2643 & 0.5 & 3.114 & 1.192 & 3 & 3.507 & 0.032 & 0.203 & 0.153 & 0.141 & 0.170 & 0.095 & 3.999 & 0.058 & 3.497 & 0.043 & 3.364 & 0.042 & 3.612 & 0.042 & 2.966 & 0.042 \\ 413 & AP And & p & 2.9143 & 0.7 & 2.546 & 0.975 & 9 & 3.725 & 0.100 & 0.406 & 0.306 & 0.282 & 0.341 & 0.190 & 4.213 & 0.071 & 3.787 & 0.095 & 3.648 & 0.043 & 3.871 & 0.043 & 3.291 & 0.043 \\ 414 & AP And & s & 2.9143 & 0.7 & 2.291 & 0.877 & 9 & 3.840 & 0.101 & 0.406 & 0.306 & 0.282 & 0.341 & 0.190 & 4.340 & 0.071 & 3.903 & 0.095 & 3.762 & 0.043 & 3.991 & 0.043 & 3.395 & 0.043 \\ 415 & AL Scl & p & 4.6006 & 3.6 & 
319.014 & 122.103 & 11 & -1.519 & 0.117 & 0.041 & 0.031 & 0.032 & 0.037 & 0.020 & -0.657 & 0.090 & -0.550 & 0.089 & -0.549 & 0.088 & -0.616 & 0.089 & -0.451 & 0.089 \\ 416 & AL Scl & s & 4.6006 & 3.6 & 19.903 & 7.618 & 14 & 1.493 & 0.155 & 0.041 & 0.031 & 0.032 & 0.037 & 0.020 & 2.063 & 0.090 & 2.021 & 0.089 & 2.042 & 0.088 & 2.042 & 0.089 & 1.979 & 0.089 \\ 417 & V821 Cas & p & 3.4262 & 0.6 & 37.469 & 14.341 & 17 & 0.806 & 0.187 & 0.123 & 0.092 & 0.092 & 0.107 & 0.058 & 1.203 & 0.051 & 1.132 & 0.045 & 1.072 & 0.042 & 1.089 & 0.042 & 1.016 & 0.042 \\ 418 & V821 Cas & s & 3.4262 & 0.6 & 9.522 & 3.644 & 19 & 2.293 & 0.205 & 0.123 & 0.092 & 0.092 & 0.107 & 0.058 & 2.671 & 0.051 & 2.532 & 0.045 & 2.475 & 0.042 & 2.524 & 0.042 & 2.352 & 0.042 \\ \hline \end{tabular}} \end{table*} \begin{table*} \centering \caption{Empirical standard component $BC$s of DDEB in $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$ and propagated uncertainties. The full table is available online.} \label{tab:BC_table} \resizebox{\textwidth}{!}{\begin{tabular}{cccccccccccccccccc} \hline Order & Name & pri/sec & $BC_B$ & err & $BC_V$ & err & $BC_G$ & err & $BC_{G_{BP}}$ & err & $BC_{G_{RP}}$ & err & $(B-V)_0$ & $(V-G)_0$ & $(G-G_{BP})_0$ & $(G-G_{RP})_0$ & $(G_{BP}-G_{RP})_0$ \\ & & & (mag)& (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) \\ \hline 1 & V421 Peg & p & -0.345 & 0.074 & -0.019 & 0.070 & 0.047 & 0.069 & -0.105 & 0.074 & 0.322 & 0.065 & 0.326 & 0.066 & -0.152 & 0.275 & 0.427 \\ 2 & V421 Peg & s & -0.374 & 0.098 & -0.011 & 0.094 & 0.059 & 0.094 & -0.113 & 0.097 & 0.367 & 0.091 & 0.363 & 0.070 & -0.172 & 0.308 & 0.480 \\ 3 & DV Psc & p & -1.096 & 0.104 & -0.350 & 0.101 & -0.085 & 0.100 & -0.689 & 0.105 & 0.642 & 0.098 & 0.746 & 0.265 & -0.604 & 0.728 & 1.332 \\ 4 & DV Psc & s & -2.023 & 0.095 & -0.916 & 0.091 & -0.460 & 0.090 & -1.309 & 0.095 & 0.478 & 0.088 & 1.108 & 0.456 & -0.849 & 0.938 & 1.787 \\ 5 & MU Cas & p & -1.113 & 0.175 & 
-1.216 & 0.173 & -1.141 & 0.173 & -1.091 & 0.175 & -1.313 & 0.171 & -0.102 & 0.075 & 0.050 & -0.172 & -0.222 \\ 6 & MU Cas & s & -1.151 & 0.172 & -1.263 & 0.169 & -1.187 & 0.169 & -1.132 & 0.171 & -1.369 & 0.168 & -0.112 & 0.077 & 0.054 & -0.183 & -0.237 \\ 7 & TYC 4019-3345-1 & p & -0.212 & 0.176 & -0.076 & 0.174 & 0.088 & 0.173 & 0.021 & 0.175 & 0.179 & 0.172 & 0.136 & 0.164 & -0.068 & 0.090 & 0.158 \\ 8 & TYC 4019-3345-1 & s & -0.212 & 0.299 & -0.076 & 0.297 & 0.088 & 0.297 & 0.021 & 0.299 & 0.179 & 0.297 & 0.136 & 0.164 & -0.068 & 0.090 & 0.158 \\ 9 & YZ Cas & p & -0.330 & 0.074 & -0.306 & 0.066 & -0.277 & 0.064 & -0.290 & 0.069 & -0.230 & 0.060 & 0.024 & 0.029 & -0.012 & 0.048 & 0.060 \\ 10 & YZ Cas & s & -0.398 & 0.159 & -0.097 & 0.156 & -0.060 & 0.155 & -0.210 & 0.157 & 0.250 & 0.154 & 0.301 & 0.037 & -0.150 & 0.310 & 0.460 \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\ 409 & IT Cas & p & -0.388 & 0.090 & 0.052 & 0.093 & 0.149 & 0.084 & -0.064 & 0.088 & 0.517 & 0.081 & 0.440 & 0.097 & -0.213 & 0.368 & 0.581 \\ 410 & IT Cas & s & -0.388 & 0.104 & 0.052 & 0.106 & 0.149 & 0.099 & -0.064 & 0.102 & 0.517 & 0.096 & 0.440 & 0.097 & -0.213 & 0.368 & 0.581 \\ 411 & BK Peg & p & -0.504 & 0.083 & 0.007 & 0.068 & 0.143 & 0.068 & -0.111 & 0.073 & 0.549 & 0.064 & 0.511 & 0.136 & -0.253 & 0.406 & 0.660 \\ 412 & BK Peg & s & -0.492 & 0.067 & 0.009 & 0.047 & 0.143 & 0.046 & -0.105 & 0.053 & 0.541 & 0.040 & 0.501 & 0.134 & -0.248 & 0.398 & 0.646 \\ 413 & AP And & p & -0.488 & 0.123 & -0.062 & 0.135 & 0.078 & 0.105 & -0.146 & 0.109 & 0.435 & 0.103 & 0.426 & 0.139 & -0.223 & 0.357 & 0.580 \\ 414 & AP And & s & -0.500 & 0.123 & -0.063 & 0.136 & 0.078 & 0.106 & -0.151 & 0.110 & 0.445 & 0.104 & 0.437 & 0.141 & -0.229 & 0.367 & 0.597 \\ 415 & AL Scl & p & -0.863 & 0.147 & -0.970 & 0.145 & -0.970 & 0.144 & -0.903 & 0.147 & -1.069 & 0.143 & -0.107 & 0.000 & 0.067 & -0.099 & -0.165 \\ 416 & AL Scl & s & -0.570 & 0.179 & 
-0.528 & 0.177 & -0.549 & 0.176 & -0.549 & 0.178 & -0.486 & 0.175 & 0.042 & -0.021 & 0.000 & 0.063 & 0.063 \\ 417 & V821 Cas & p & -0.397 & 0.194 & -0.326 & 0.190 & -0.267 & 0.190 & -0.283 & 0.191 & -0.210 & 0.188 & 0.071 & 0.059 & -0.016 & 0.057 & 0.073 \\ 418 & V821 Cas & s & -0.377 & 0.211 & -0.239 & 0.208 & -0.182 & 0.208 & -0.230 & 0.209 & -0.059 & 0.206 & 0.138 & 0.057 & -0.048 & 0.123 & 0.172 \\ \hline \end{tabular}} \end{table*} \begin{table} \centering \caption{Parameters for passband based $BC$ - temperature relations shown in Figs.~4 and 5 in the form $BC_{\xi}$ = a + b $\times$ (log $T_{eff}$) + c $\times$ (log $T_{eff}$)$^2$ + d $\times$ (log $T_{eff}$)$^3$ + e $\times$ (log $T_{eff}$)$^4$. For the calculation of passband based solar absolute magnitudes, solar absolute bolometric magnitude is adopted to be $M_{bol,\odot}=4.74$ and the $BC_{\odot}$ refers to $T_{eff,\odot}=5772$ K. The relations are valid in the temperature range of 2900-38000 K.} \label{tab:BCpar_table} \resizebox{\columnwidth}{!}{\begin{tabular}{cccccccc} \hline Coefficient & $BC_{B}$ & $BC_{V}$ & $BC_{V}*$ & $BC_{G}$ & $BC_{G_{BP}}$ & $BC_{G_{RP}}$ \\ \hline a & --1272.43 & --3767.98 & --2360.69565 & --1407.14 & --3421.55 & --1415.67 \\ & $\pm$394.2 & $\pm$288.8 & $\pm$519.80058 & $\pm$256.7 & $\pm$293.6 & $\pm$253.3 \\ b & 1075.85 & 3595.86 & 2109.00655 & 1305.08 & 3248.19 & 1342.38 \\ & $\pm$394.4 & $\pm$290.9 & $\pm$519.47090 & $\pm$258.9 & $\pm$296.1 & $\pm$255.4 \\ c & --337.831 & --1286.59 & --701.96628 & --453.605 & --1156.82 & --475.827 \\ & $\pm$147.7 & $\pm$109.6 & $\pm$194.29038 & $\pm$97.67 & $\pm$111.7 & $\pm$96.34 \\ d & 46.8074 & 204.764 & 103.30304 & 70.2338 & 183.372 & 74.9702 \\ & $\pm$24.53 & $\pm$18.32 & $\pm$32.23187 & $\pm$16.34 & $\pm$18.68 & $\pm$16.12 \\ e & --2.42862 & --12.2469 & --11.5957 & --4.1047 & --10.9305 & --4.44923\\ & $\pm$1.552 & $\pm$1.146 & $\pm$1.441 & $\pm$1.023 & $\pm$1.169 & $\pm$1.009 \\ \hline rms & 0.136257 & 0.120071 & 0.215 & 0.11068 & 
0.126577 & 0.109179 \\ $R^2$ & 0.9616 & 0.9789 & 0.941 & 0.9793 & 0.9738 & 0.9884 \\ $BC_\odot$ (mag) & --0.600 & 0.069 & --0.016 & 0.106 & --0.134 & 0.567\\ $M_\odot$ (mag) & 5.340 & 4.671 & 4.756 & 4.634 & 4.874 & 4.173\\ $m_\odot$ (mag) & --26.232 & --26.901 & --26.816 & --26.938 & --26.698 & -27.399 \\ \hline $BC_{max}$ (mag) & --0.301 & 0.094 & 0.095 & 0.106 & --0.062 & 0.709\\ $T_{max}$ (K) & 8222 & 6397 & 6897 & 5715 & 6879 & 4345\\ \hline $T_{1}$ (K) &-& 5300 & 5859 & 4565 &-&-\\ $T_{2}$ (K) &-& 7830 & 8226 & 7420 &-&8590\\ \hline *: Eker et al. (2020). \end{tabular}} \end{table} \begin{table*} \centering \caption{Mean bolometric corrections and intrinsic colours of nearby main-sequence stars as a function of typical effective temperature and spectral types having metallicities 0.008 $<$ $Z$ $<$ 0.040 for the passbands $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$.} \resizebox{\textwidth}{!}{\begin{tabular}{lcccccccccc} \toprule & & \multicolumn{7}{|c}{From Table 5} & \multicolumn{2}{|c}{From Table 8} \\ \midrule SpT & Teff & $BC_{B}$ & $BC_{V}$ & $BC_{G}$ & $BC_{G_{BP}}$ & $BC_{G_{RP}}$ & $(G-G_{RP})_0$ & $(G_{BP}-G_{RP})_0$ & $(G_{BP}-G_{RP})_0$ & $(B-V)_0$ \\ & (K) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) \\ \midrule O7 & 35810 & -3.088 & -3.399 & -3.291 & -3.150 & -3.671 & -0.380 & -0.521 & -0.562 & -0.364 \\ O8 & 33963 & -2.941 & -3.192 & -3.126 & -2.957 & -3.492 & -0.367 & -0.536 & -0.543 & -0.352 \\ O9 & 32211 & -2.795 & -2.997 & -2.965 & -2.775 & -3.319 & -0.354 & -0.544 & -0.525 & -0.340 \\ B0 & 29512 & -2.556 & -2.700 & -2.709 & -2.497 & -3.042 & -0.334 & -0.545 & -0.493 & -0.320 \\ B1 & 25119 & -2.127 & -2.218 & -2.266 & -2.045 & -2.561 & -0.296 & -0.516 & -0.432 & -0.279 \\ B2 & 21135 & -1.690 & -1.769 & -1.827 & -1.626 & -2.079 & -0.252 & -0.453 & -0.359 & -0.231 \\ B3 & 18408 & -1.364 & -1.446 & -1.501 & -1.327 & -1.714 & -0.212 & -0.387 & -0.295 & -0.190 \\ B5 & 15136 & -0.955 & -1.029 & -1.076 & -0.946 & -1.222 & -0.146 
& -0.276 & -0.194 & -0.123 \\ B6 & 13964 & -0.808 & -0.869 & -0.914 & -0.803 & -1.029 & -0.115 & -0.226 & -0.148 & -0.093 \\ B7 & 13032 & -0.694 & -0.737 & -0.782 & -0.687 & -0.868 & -0.086 & -0.181 & -0.106 & -0.065 \\ B8 & 12023 & -0.576 & -0.591 & -0.636 & -0.560 & -0.685 & -0.049 & -0.125 & -0.053 & -0.031 \\ B9 & 10666 & -0.436 & -0.389 & -0.437 & -0.390 & -0.425 & 0.012 & -0.035 & 0.032 & 0.026 \\ A0 & 9886 & -0.371 & -0.274 & -0.323 & -0.297 & -0.269 & 0.055 & 0.028 & 0.093 & 0.067 \\ A1 & 9419 & -0.340 & -0.206 & -0.257 & -0.243 & -0.173 & 0.084 & 0.070 & 0.135 & 0.095 \\ A2 & 9078 & -0.322 & -0.158 & -0.209 & -0.206 & -0.102 & 0.107 & 0.104 & 0.168 & 0.118 \\ A3 & 8750 & -0.309 & -0.113 & -0.164 & -0.173 & -0.033 & 0.130 & 0.140 & 0.202 & 0.142 \\ A5 & 8222 & -0.301 & -0.045 & -0.094 & -0.125 & 0.078 & 0.172 & 0.204 & 0.266 & 0.185 \\ A6 & 7980 & -0.303 & -0.016 & -0.064 & -0.107 & 0.130 & 0.194 & 0.237 & 0.298 & 0.209 \\ A7 & 7745 & -0.308 & 0.009 & -0.036 & -0.091 & 0.179 & 0.215 & 0.271 & 0.332 & 0.233 \\ A8 & 7534 & -0.317 & 0.030 & -0.012 & -0.079 & 0.224 & 0.236 & 0.304 & 0.365 & 0.257 \\ F0 & 7161 & -0.343 & 0.062 & 0.027 & -0.066 & 0.302 & 0.275 & 0.367 & 0.427 & 0.303 \\ F1 & 6966 & -0.363 & 0.075 & 0.045 & -0.062 & 0.342 & 0.297 & 0.404 & 0.466 & 0.331 \\ F2 & 6792 & -0.384 & 0.084 & 0.059 & -0.062 & 0.377 & 0.318 & 0.440 & 0.501 & 0.357 \\ F3 & 6637 & -0.406 & 0.089 & 0.071 & -0.064 & 0.408 & 0.337 & 0.473 & 0.535 & 0.382 \\ F5 & 6397 & -0.447 & 0.094 & 0.086 & -0.073 & 0.455 & 0.369 & 0.529 & 0.591 & 0.429 \\ F6 & 6310 & -0.464 & 0.093 & 0.091 & -0.078 & 0.472 & 0.381 & 0.550 & 0.613 & 0.447 \\ F7 & 6223 & -0.483 & 0.092 & 0.095 & -0.084 & 0.488 & 0.393 & 0.573 & 0.636 & 0.465 \\ F8 & 6152 & -0.499 & 0.091 & 0.098 & -0.090 & 0.501 & 0.403 & 0.591 & 0.655 & 0.481 \\ G0 & 6026 & -0.530 & 0.086 & 0.102 & -0.102 & 0.524 & 0.422 & 0.626 & 0.690 & 0.510 \\ G1 & 5957 & -0.548 & 0.082 & 0.104 & -0.110 & 0.536 & 0.433 & 0.646 & 0.710 & 0.526 \\ G2 & 
5888 & -0.567 & 0.078 & 0.105 & -0.118 & 0.548 & 0.443 & 0.667 & 0.730 & 0.543 \\ G3 & 5848 & -0.579 & 0.075 & 0.105 & -0.124 & 0.555 & 0.450 & 0.679 & 0.743 & 0.554 \\ G5 & 5741 & -0.612 & 0.065 & 0.106 & -0.140 & 0.573 & 0.467 & 0.713 & 0.776 & 0.582 \\ G6 & 5689 & -0.629 & 0.060 & 0.106 & -0.148 & 0.581 & 0.476 & 0.730 & 0.793 & 0.596 \\ G7 & 5649 & -0.642 & 0.055 & 0.105 & -0.155 & 0.588 & 0.483 & 0.743 & 0.806 & 0.607 \\ G8 & 5559 & -0.674 & 0.044 & 0.104 & -0.172 & 0.602 & 0.498 & 0.774 & 0.837 & 0.633 \\ K0 & 5248 & -0.801 & -0.010 & 0.090 & -0.247 & 0.645 & 0.556 & 0.892 & 0.951 & 0.729 \\ K1 & 5070 & -0.888 & -0.054 & 0.075 & -0.302 & 0.666 & 0.591 & 0.969 & 1.028 & 0.788 \\ K2 & 4898 & -0.982 & -0.106 & 0.056 & -0.366 & 0.683 & 0.628 & 1.049 & 1.102 & 0.846 \\ K3 & 4732 & -1.085 & -0.167 & 0.031 & -0.439 & 0.696 & 0.665 & 1.135 & 1.177 & 0.902 \\ K5 & 4345 & -1.375 & -0.362 & -0.053 & -0.660 & 0.709 & 0.762 & 1.368 & 1.365 & 1.025 \\ M0 & 3802 & -1.939 & -0.803 & -0.258 & -1.138 & 0.666 & 0.924 & 1.804 & 1.659 & 1.193 \\ M1 & 3648 & -2.143 & -0.977 & -0.341 & -1.323 & 0.636 & 0.978 & 1.959 & 1.750 & 1.240 \\ M2 & 3499 & -2.363 & -1.174 & -0.435 & -1.528 & 0.598 & 1.033 & 2.126 & 1.845 & 1.286 \\ M3 & 3350 & -2.610 & -1.402 & -0.545 & -1.765 & 0.547 & 1.092 & 2.312 & 1.947 & 1.331 \\ M4 & 3148 & -2.991 & -1.770 & -0.722 & -2.143 & 0.457 & 1.179 & 2.600 & 2.111 & 1.393 \\ M5 & 2999 & -3.314 & -2.094 & -0.879 & -2.473 & 0.370 & 1.249 & 2.843 & 2.264 & 1.439 \\ \bottomrule \end{tabular}} \label{tab:empiricalBC} \end{table*} \begin{table*} \centering \caption{Component bolometric magnitudes and luminosities of DDEB in $B$, $V$, $G$, $G_{BP}$ and $G_{RP}$ passbands. 
The full table is available online.} \label{tab:bol_mag_DDEB} \resizebox{\textwidth}{!}{\begin{tabular}{ccccccccccccccccccc} \hline Order & Name & pri/sec & $M_{bol}(B)$ & err & $M_{bol}(V)$ & err & $M_{bol}(G)$ & err & $M_{bol}(G_{BP})$ & err & $M_{bol}(G_{RP})$ & err & $\left<M_{bol}\right>$ & Mean err & log $\left<L/L_\odot\right>$ & $\frac{\Delta L}{L}$ & log ($L/L_\odot$)(SB) & $\frac{\Delta L}{L}$ \\ & & & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & (mag) & & (\%) & & (\%) \\ \hline 1 & V421 Peg & p & 2.760 & 0.143 & 2.825 & 0.124 & 2.722 & 0.115 & 2.789 & 0.133 & 2.713 & 0.111 & 2.762 & 0.021 & 0.791 & 1.924 & 0.796 & 5.655 \\ 2 & V421 Peg & s & 3.312 & 0.143 & 3.383 & 0.124 & 3.284 & 0.115 & 3.349 & 0.133 & 3.271 & 0.111 & 3.320 & 0.021 & 0.568 & 1.919 & 0.577 & 8.146 \\ 3 & DV Psc & p & 6.499 & 0.142 & 6.741 & 0.124 & 6.751 & 0.115 & 6.789 & 0.134 & 6.757 & 0.112 & 6.707 & 0.053 & -0.787 & 4.857 & -0.780 & 8.789 \\ 4 & DV Psc & s & 8.051 & 0.142 & 8.115 & 0.124 & 8.317 & 0.115 & 8.160 & 0.134 & 8.369 & 0.112 & 8.202 & 0.061 & -1.385 & 5.580 & -1.391 & 7.832 \\ 5 & MU Cas & p & -2.240 & 0.164 & -2.208 & 0.148 & -2.329 & 0.140 & -2.255 & 0.155 & -2.293 & 0.137 & -2.265 & 0.021 & 2.802 & 1.943 & 2.875 & 13.768 \\ 6 & MU Cas & s & -2.060 & 0.164 & -2.021 & 0.148 & -2.145 & 0.140 & -2.070 & 0.155 & -2.108 & 0.137 & -2.081 & 0.021 & 2.728 & 1.950 & 2.800 & 13.423 \\ 7 & TYC 4019-3345-1 & p & 1.687 & 0.145 & 1.764 & 0.127 & 1.549 & 0.118 & 1.601 & 0.136 & 1.601 & 0.115 & 1.640 & 0.038 & 1.240 & 3.496 & 1.184 & 15.498 \\ 8 & TYC 4019-3345-1 & s & 1.687 & 0.145 & 1.764 & 0.127 & 1.549 & 0.118 & 1.601 & 0.136 & 1.601 & 0.115 & 1.640 & 0.038 & 1.240 & 3.496 & 1.184 & 27.114 \\ 9 & YZ Cas & p & 0.540 & 0.145 & 0.642 & 0.125 & 0.562 & 0.115 & 0.591 & 0.133 & 0.591 & 0.112 & 0.585 & 0.017 & 1.662 & 1.569 & 1.674 & 5.117 \\ 10 & YZ Cas & s & 3.382 & 0.145 & 3.533 & 0.125 & 3.469 & 0.115 & 3.505 & 0.133 & 3.466 & 0.112 & 3.471 & 
0.025 & 0.508 & 2.340 & 0.553 & 13.983 \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\ 409 & IT Cas & p & 3.174 & 0.145 & 3.261 & 0.131 & 3.153 & 0.116 & 3.214 & 0.134 & 3.144 & 0.113 & 3.189 & 0.022 & 0.620 & 1.988 & 0.608 & 7.053 \\ 410 & IT Cas & s & 3.220 & 0.145 & 3.307 & 0.131 & 3.200 & 0.116 & 3.261 & 0.134 & 3.191 & 0.113 & 3.236 & 0.022 & 0.602 & 1.988 & 0.590 & 8.500 \\ 411 & BK Peg & p & 2.926 & 0.148 & 2.981 & 0.125 & 2.846 & 0.115 & 2.925 & 0.133 & 2.827 & 0.112 & 2.901 & 0.028 & 0.736 & 2.617 & 0.738 & 5.487 \\ 412 & BK Peg & s & 3.537 & 0.148 & 3.591 & 0.125 & 3.454 & 0.115 & 3.534 & 0.133 & 3.436 & 0.112 & 3.510 & 0.029 & 0.492 & 2.645 & 0.493 & 2.990 \\ 413 & AP And & p & 3.796 & 0.154 & 3.879 & 0.151 & 3.724 & 0.116 & 3.805 & 0.134 & 3.713 & 0.112 & 3.783 & 0.030 & 0.383 & 2.772 & 0.406 & 9.194 \\ 414 & AP And & s & 3.911 & 0.154 & 3.996 & 0.151 & 3.843 & 0.116 & 3.923 & 0.134 & 3.831 & 0.112 & 3.901 & 0.030 & 0.336 & 2.750 & 0.360 & 9.280 \\ 415 & AL Scl & p & -1.414 & 0.163 & -1.361 & 0.147 & -1.405 & 0.139 & -1.368 & 0.154 & -1.409 & 0.136 & -1.391 & 0.011 & 2.453 & 1.032 & 2.504 & 10.783 \\ 416 & AL Scl & s & 1.659 & 0.163 & 1.686 & 0.147 & 1.658 & 0.139 & 1.696 & 0.154 & 1.627 & 0.136 & 1.665 & 0.012 & 1.230 & 1.116 & 1.299 & 14.269 \\ 417 & V821 Cas & p & 0.865 & 0.146 & 0.929 & 0.126 & 0.819 & 0.115 & 0.847 & 0.133 & 0.846 & 0.112 & 0.861 & 0.018 & 1.552 & 1.694 & 1.574 & 17.193 \\ 418 & V821 Cas & s & 2.365 & 0.146 & 2.440 & 0.126 & 2.332 & 0.115 & 2.365 & 0.133 & 2.350 & 0.112 & 2.371 & 0.018 & 0.948 & 1.688 & 0.979 & 18.872 \\ \hline \end{tabular}} \end{table*} \begin{table*} \centering \caption{Parameters of temperature - intrinsic colour relations shown in Fig.~10.} \label{tab:logTeff_par_table} \begin{tabular}{cccccc} \hline \hline \multicolumn{6}{c}{log $T_{eff}$ = a + b $\times$ $(B-V)_0$ + c $\times$ $(B-V)_0^2$ + d $\times$ $(B-V)_0^3$ + e $\times$ 
$(B-V)_0^4$}\\ \hline & a & b & c & d & e \\ \hline & 4.05136 & --0.902404 & 1.03912 & --0.686631 & 0.144272 \\ & $\pm$0.005228 & $\pm$0.01865 & $\pm$0.06344 & $\pm$0.1399 & $\pm$0.07865 \\ \multicolumn{6}{c}{rms= 0.05091} \\ \multicolumn{6}{c}{valid in the range --0.5 $ \leq (B-V)_0 \leq $ 1.5 mag.} \\ \hline \hline \multicolumn{6}{c}{log $T_{eff}$ = a + b $\times$ $(G_{BP}-G_{RP})_0$ + c $\times$ $(G_{BP}-G_{RP})_0^2$ + d $\times$ $(G_{BP}-G_{RP})_0^3$ + e $\times$ $(G_{BP}-G_{RP})_0^4$ } \\ \hline & a & b & c & d & e \\ \hline & 4.04695 & --0.595137 & 0.42341 & --0.199622 & 0.0351755 \\ & $\pm$0.004102 & $\pm$0.007874 & $\pm$0.0211 & $\pm$0.0211 & $\pm$0.005871 \\ \multicolumn{6}{c}{rms= 0.0396} \\ \multicolumn{6}{c}{valid in the range --0.6 $ \leq (G_{BP}-G_{RP})_0 \leq $ 1.7 mag.} \\ \hline \end{tabular} \end{table*} \clearpage \newpage
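As a sanity check, the quartic $BC$--temperature relation of Table~\ref{tab:BCpar_table} and the magnitude-to-luminosity bookkeeping of Table~\ref{tab:bol_mag_DDEB} can be evaluated numerically. A minimal sketch in Python: the coefficient tuple is transcribed from the $V$ column of Table~\ref{tab:BCpar_table}, while the function name and tolerances are our own choices.

```python
import math

# V-band coefficients of BC = a + b*x + c*x^2 + d*x^3 + e*x^4, with
# x = log10(Teff), transcribed from Table "BCpar_table"
# (relation valid for 2900 K <= Teff <= 38000 K).
COEFFS_V = (-3767.98, 3595.86, -1286.59, 204.764, -12.2469)

def bc(teff, coeffs):
    """Evaluate the quartic bolometric-correction polynomial in log10(Teff)."""
    x = math.log10(teff)
    return sum(c * x**k for k, c in enumerate(coeffs))

# Solar values: BC_V(5772 K) should reproduce the tabulated BC_sun = 0.069 mag,
# and M_V,sun = M_bol,sun - BC_V with the adopted M_bol,sun = 4.74 mag.
bc_v_sun = bc(5772.0, COEFFS_V)
m_v_sun = 4.74 - bc_v_sun

# Luminosity from a mean bolometric magnitude, as in the DDEB table:
# log10(L/L_sun) = (M_bol,sun - M_bol)/2.5, e.g. V421 Peg (p), <M_bol> = 2.762.
log_l = (4.74 - 2.762) / 2.5

print(bc_v_sun, m_v_sun, log_l)
```

Note the strong cancellation between the large polynomial terms: the individual terms are of order $10^4$ while $BC_\odot$ is of order $10^{-2}$, so the coefficients must be used with all quoted digits.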
Title: A Clear View of a Cloudy Brown Dwarf Companion from High-Resolution Spectroscopy
Abstract: Direct imaging studies have mainly used low-resolution spectroscopy ($R\sim20-100$) to study the atmospheres of giant exoplanets and brown dwarf companions, but the presence of clouds has often led to degeneracies in the retrieved atmospheric abundances (e.g. C/O, metallicity). This precludes clear insights into the formation mechanisms of these companions. The Keck Planet Imager and Characterizer (KPIC) uses adaptive optics and single-mode fibers to transport light into NIRSPEC ($R\sim35,000$ in $K$ band), and aims to address these challenges with high-resolution spectroscopy. Using an atmospheric retrieval framework based on petitRADTRANS, we analyze KPIC high-resolution spectrum ($2.29-2.49~\mu$m) and archival low-resolution spectrum ($1-2.2~\mu$m) of the benchmark brown dwarf HD 4747 B ($m=67.2\pm1.8~M_{\rm{Jup}}$, $a=10.0\pm0.2$ au, $T_{\rm eff}\approx1400$ K). We find that our measured C/O and metallicity for the companion from the KPIC high-resolution spectrum agree with that of its host star within $1-2\sigma$. The retrieved parameters from the $K$ band high-resolution spectrum are also independent of our choice of cloud model. In contrast, the retrieved parameters from the low-resolution spectrum are highly sensitive to our chosen cloud model. Finally, we detect CO, H$_2$O, and CH$_4$ (volume mixing ratio of log(CH$_4$)=$-4.82\pm0.23$) in this L/T transition companion with the KPIC data. The relative molecular abundances allow us to constrain the degree of chemical disequilibrium in the atmosphere of HD 4747 B, and infer a vertical diffusion coefficient that is at the upper limit predicted from mixing length theory.
https://export.arxiv.org/pdf/2208.01657
\thispagestyle{plain} \newcommand{\btx}{\textsc{Bib}\TeX} \newcommand{\thestyle}{\texttt{\filename}} \begin{center}{\bfseries\Large Reference sheet for \thestyle\ usage}\\ \large(Describing version \fileversion\ from \filedate) \end{center} \begin{quote}\slshape For a more detailed description of the \thestyle\ package, \LaTeX\ the source file \thestyle\texttt{.dtx}. \end{quote} \head{Overview} The \thestyle\ package is a reimplementation of the \LaTeX\ |\cite| command, to work with both author--year and numerical citations. It is compatible with the standard bibliographic style files, such as \texttt{plain.bst}, as well as with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago}, \texttt{astron}, \texttt{authordate}, and of course \thestyle. \head{Loading} Load with |\usepackage[|\emph{options}|]{|\thestyle|}|. See list of \emph{options} at the end. \head{Replacement bibliography styles} I provide three new \texttt{.bst} files to replace the standard \LaTeX\ numerical ones: \begin{quote}\ttfamily plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst \end{quote} \head{Basic commands} The \thestyle\ package has two basic citation commands, |\citet| and |\citep| for \emph{textual} and \emph{parenthetical} citations, respectively. There also exist the starred versions |\citet*| and |\citep*| that print the full author list, and not just the abbreviated one. All of these may take one or two optional arguments to add some text before and after the citation. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. (1990)\\ |\citet[chap.~2]{jon90}| & Jones et al. 
(1990, chap.~2)\\[0.5ex] |\citep{jon90}| & (Jones et al., 1990)\\ |\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\ |\citep[see][]{jon90}| & (see Jones et al., 1990)\\ |\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex] |\citet*{jon90}| & Jones, Baker, and Williams (1990)\\ |\citep*{jon90}| & (Jones, Baker, and Williams, 1990) \end{tabular} \end{quote} \head{Multiple citations} Multiple citations may be made by including more than one citation key in the |\cite| command argument. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\ |\citep{jon90,jam91}| & (Jones et al., 1990; James et al., 1991)\\ |\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\ |\citep{jon90a,jon90b}| & (Jones et al., 1990a,b) \end{tabular} \end{quote} \head{Numerical mode} These examples are for author--year citation mode. In numerical mode, the results are different. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. [21]\\ |\citet[chap.~2]{jon90}| & Jones et al. [21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32] \end{tabular} \end{quote} \head{Suppressed parentheses} As an alternative form of citation, |\citealt| is the same as |\citet| but \emph{without parentheses}. Similarly, |\citealp| is |\citep| without parentheses. Multiple references, notes, and the starred variants also exist. 
\begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citealt{jon90}| & Jones et al.\ 1990\\ |\citealt*{jon90}| & Jones, Baker, and Williams 1990\\ |\citealp{jon90}| & Jones et al., 1990\\ |\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\ |\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\ |\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\ |\citetext{priv.\ comm.}| & (priv.\ comm.) \end{tabular} \end{quote} The |\citetext| command allows arbitrary text to be placed in the current citation parentheses. This may be used in combination with |\citealp|. \head{Partial citations} In author--year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa. This is provided with the extra commands \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citeauthor{jon90}| & Jones et al.\\ |\citeauthor*{jon90}| & Jones, Baker, and Williams\\ |\citeyear{jon90}| & 1990\\ |\citeyearpar{jon90}| & (1990) \end{tabular} \end{quote} \head{Forcing upper cased names} If the first author's name contains a \textsl{von} part, such as ``della Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the beginning of a sentence. One can force the first letter to be in upper case with the command |\Citet| instead. Other upper case commands also exist. \begin{quote} \begin{tabular}{rl@{\quad$\Rightarrow$\quad}l} when & |\citet{dRob98}| & della Robbia (1998) \\ then & |\Citet{dRob98}| & Della Robbia (1998) \\ & |\Citep{dRob98}| & (Della Robbia, 1998) \\ & |\Citealt{dRob98}| & Della Robbia 1998 \\ & |\Citealp{dRob98}| & Della Robbia, 1998 \\ & |\Citeauthor{dRob98}| & Della Robbia \end{tabular} \end{quote} These commands also exist in starred versions for full author names. \head{Citation aliasing} Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e. as Paper~I, Paper~II. 
Such aliases can be defined and used, textual and/or parenthetical with: \begin{quote} \begin{tabular}{lcl} |\defcitealias{jon90}{Paper~I}|\\ |\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\ |\citepalias{jon90}| & $\Rightarrow$ & (Paper~I) \end{tabular} \end{quote} These citation commands function much like |\citet| and |\citep|: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks. \head{Selecting citation style and punctuation} Use the command |\bibpunct| with one optional and 6 mandatory arguments: \begin{enumerate} \item the opening bracket symbol, default = ( \item the closing bracket symbol, default = ) \item the punctuation between multiple citations, default = ; \item the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author--year, default = author--year; \item the punctuation that comes between the author names and the year \item the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); \end{enumerate} The optional argument is the character preceding a post-note, default is a comma plus space. In redefining this character, one must include a space if one is wanted. Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of \begin{quote} |\citep{jon90,jon91,jam92}| \end{quote} into [Jones et al. 1990; 1991, James et al. 1992]. Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of \begin{quote} |\citep[and references therein]{jon90}| \end{quote} into (Jones et al. 1990; and references therein). \head{Other formatting options} Redefine |\bibsection| to the desired sectioning command for introducing the list of references. This is normally |\section*| or |\chapter*|. Define |\bibpreamble| to be any text that is to be printed after the heading but before the actual list of references. Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to the list of references. 
Define |\citenumfont| to be a font declaration or command like |\itshape| or |\textit|. Redefine |\bibnumfmt| as a command with an argument to format the numbers in the list of references. The default definition is |[#1]|. The indentation after the first line of each reference is given by |\bibhang|; change this with the |\setlength| command. The vertical spacing between references is set by |\bibsep|; change this with the |\setlength| command. \head{Automatic indexing of citations} If one wishes to have the citations entered in the \texttt{.idx} indexing file, it is only necessary to issue |\citeindextrue| at any point in the document. All following |\cite| commands, of all variations, then insert the corresponding entry to that file. With |\citeindexfalse|, these entries will no longer be made. \head{Use with \texttt{chapterbib} package} The \thestyle\ package is compatible with the \texttt{chapterbib} package which makes it possible to have several bibliographies in one document. The package makes use of the |\include| command, and each |\include|d file has its own bibliography. The order in which the \texttt{chapterbib} and \thestyle\ packages are loaded is unimportant. The \texttt{chapterbib} package provides an option \texttt{sectionbib} that puts the bibliography in a |\section*| instead of |\chapter*|, something that makes sense if there is a bibliography in each chapter. This option will not work when \thestyle\ is also loaded; instead, add the option to \thestyle. Every |\include|d file must contain its own |\bibliography| command where the bibliography is to appear. The database files listed as arguments to this command can be different in each file, of course. However, what is not so obvious, is that each file must also contain a |\bibliographystyle| command, \emph{preferably with the same style argument}. 
\head{Sorting and compressing citations} Do not use the \texttt{cite} package with \thestyle; rather use one of the options \texttt{sort} or \texttt{sort\&compress}. These also work with author--year citations, making multiple citations appear in their order in the reference list. \head{Long author list on first citation} Use option \texttt{longnamesfirst} to have the first citation automatically give the full list of authors. Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|, given before the first citation. \head{Local configuration} Any local recoding or definitions can be put in \thestyle\texttt{.cfg} which is read in after the main package file. \head{Options that can be added to \texttt{\char`\\ usepackage}} \begin{description} \item[\ttfamily round] (default) for round parentheses; \item[\ttfamily square] for square brackets; \item[\ttfamily curly] for curly braces; \item[\ttfamily angle] for angle brackets; \item[\ttfamily colon] (default) to separate multiple citations with colons; \item[\ttfamily comma] to use commas as separators; \item[\ttfamily authoryear] (default) for author--year citations; \item[\ttfamily numbers] for numerical citations; \item[\ttfamily super] for superscripted numerical citations, as in \textsl{Nature}; \item[\ttfamily sort] orders multiple citations into the sequence in which they appear in the list of references; \item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple numerical citations are compressed if possible (as 3--6, 15); \item[\ttfamily longnamesfirst] makes the first citation of any reference the equivalent of the starred variant (full author list) and subsequent citations normal (abbreviated list); \item[\ttfamily sectionbib] redefines |\thebibliography| to issue |\section*| instead of |\chapter*|; valid only for classes with a |\chapter| command; to be used with the \texttt{chapterbib} package; \item[\ttfamily nonamebreak] keeps all the authors' names in a citation on one line; 
causes overfull hboxes but helps with some \texttt{hyperref} problems. \end{description}
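The commands described above combine in an ordinary document preamble and body. A minimal sketch (the option choice, the citation key \texttt{jon90} and the database name \texttt{refs.bib} are placeholders):

```latex
\documentclass{article}
% author--year citations with round parentheses; multiple citations
% sorted into reference-list order
\usepackage[round,sort]{natbib}
\begin{document}
\citet{jon90} introduced the method;
for a parenthetical form with notes, see \citep[e.g.][chap.~2]{jon90}.
\bibliographystyle{plainnat}
\bibliography{refs}
\end{document}
```

Swapping the options for \texttt{numbers} or \texttt{super} switches the same source to numerical or superscript citations without touching the body text.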
Title: Late-time constraints on modified Gauss-Bonnet cosmology
Abstract: In this paper, we consider a gravitational action containing a combination of the Ricci scalar, $R$, and the topological Gauss--Bonnet term, $G$. Specifically, we study the cosmological features of a particular class of modified gravity theories selected by symmetry considerations, namely the $f(R,G)= R^n G^{1-n}$ model. In the context of a spatially flat, homogeneous and isotropic background, we show that the currently observed acceleration of the Universe can be addressed through geometry, hence avoiding \emph{de facto} the shortcomings of the cosmological constant. We thus present a strategy to numerically solve the Friedmann equations in presence of pressureless matter and obtain the redshift behavior of the Hubble expansion rate. Then, to check the viability of the model, we place constraints on the free parameters of the theory by means of a Bayesian Monte Carlo method applied to late-time cosmic observations. Our results show that the $f(R,G)$ model is capable of mimicking the low-redshift behavior of the standard $\Lambda$CDM model, though substantial differences emerge when going toward high redshifts, leading to the absence of a standard matter-dominated epoch. Finally, we investigate the energy conditions and show that, under suitable choices for the values of the cosmographic parameters, they are all violated when considering the mean value of $n$ obtained from our analysis, as occurs in the case of a dark fluid.
https://export.arxiv.org/pdf/2208.02677
\title{Late-time constraints on modified Gauss--Bonnet cosmology} \author{Francesco Bajardi} \email{francesco.bajardi@unina.it} \author{and Rocco D'Agostino} \email{rocco.dagostino@unina.it} \affiliation{Scuola Superiore Meridionale, Largo S. Marcellino 10, 80138 Napoli, Italy.} \affiliation{Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Napoli, Via Cinthia 9, 80126 Napoli, Italy.} \pacs{98.80.-k, 95.36.+x, 04.50.Kd} \section{Introduction} \label{introd} The discovery of the accelerating expansion of the Universe \cite{Riess98,Perlmutter99}, a milestone for modern cosmology, has challenged our understanding of the cosmos over the last two decades. Among the many proposals to explain the observed acceleration, the cosmological constant ($\Lambda$) introduced by Einstein is the simplest attempt able to reproduce the exotic features of the dark energy fluid, which is believed to drive the current cosmic expansion \cite{Peebles03,Copeland06}. However, the resulting scenario, known as the standard $\Lambda$CDM model, is affected by the so-called \emph{fine-tuning problem}, resulting from the very large difference between the vacuum energy density predicted from particle physics and its observed value \cite{Weinberg89,Padmanabhan03}. A further issue, known as the \emph{coincidence problem}, is due to the fact that the present time turns out to coincide with the only time in the cosmic history when the energy densities of matter and vacuum are of the same order of magnitude \cite{Carroll01}. 
Therefore, several alternative paradigms have been proposed to address these shortcomings\footnote{In order to heal the cosmological constant problem, a recent study has suggested a mechanism for removing the vacuum energy contribution by means of a phase transition during the inflationary era \cite{D'Agostino22}.}, such as considering peculiar fluids with negative pressure described in terms of scalar fields \cite{Ratra88,Caldwell98,Zlatev99}, or scenarios aiming to unify different cosmological epochs \cite{Sahni00,Scherrer04,Capozziello06,Anton-Schmidt,D'Agostino22b}. Nevertheless, the lack of compelling and definitive solutions has naturally led to exploring the possibility that modifications of gravity could be the origin of dark energy. In fact, due to incompatibilities with (and among) observations, as well as issues at the theoretical level, alternatives to Einstein's General Relativity (GR) started being developed, providing possible solutions to yet unsolved issues. In this framework, modifications extending the Hilbert-Einstein action have attracted much attention, due to their capability of reproducing GR under given limits. This is the case of $f(R)$ gravity \cite{Carroll04,Starobinsky07,Nojiri11}, whose gravitational action generalizes the Hilbert-Einstein one by including a generic function of the Ricci scalar curvature, $R$. Thus, as soon as $f(R) = R$, GR is fully recovered. The $f(R)$ models, characterized by field equations of the fourth order, can mimic, under suitable forms, the dark energy behavior without resorting to $\Lambda$. However, no $f(R)$ model is so far capable of fitting all the experimental data at once, or of reproducing the whole cosmic history better than the $\Lambda$CDM model. Moreover, owing to their higher-order field equations, some $f(R)$ models exhibit ghosts in their Hamiltonian structure, with the consequence that a self-consistent quantization scheme cannot be pursued. 
The main features of $f(R)$ gravity, its applications, and the theoretical structure can be found \emph{e.g.} in \cite{Sotiriou10,DeFelice_review,Capozziello_review} and references therein. Among the extensions of GR, particular interest has been gained by theories involving the Gauss--Bonnet invariant in the gravitational action \cite{Nojiri:2005jg, Li:2007jm, Elizalde:2010jx, DeFelice:2009aj}. Specifically, among all the possible combinations of the second-order invariants $R^2$, $R^{\mu \nu} R_{\mu \nu}$ and $R^{\mu \nu \rho \sigma} R_{\mu \nu \rho \sigma}$, with $R_{\mu \nu}$ and $R_{\mu \nu \rho \sigma}$ being the Ricci and the Riemann tensors, respectively, there is a particular linear combination leading to a topological surface term in four dimensions. Such a term is the Gauss--Bonnet invariant, defined as $G \equiv R^2- 4 R^{\mu \nu} R_{\mu \nu} + R^{\mu \nu \rho \sigma} R_{\mu \nu \rho \sigma}$. This could be of interest to address issues inherent in GR at different energy scales. More precisely, $G$ naturally emerges in gauge theories of gravity, such as Lovelock \cite{Bajardi:2021hya, Lovelock:1971yv, Mardones:1990qc}, Chern--Simons \cite{Achucarro:1986uwr, Aviles:2016hnm, Gomez:2011zzd} or Born--Infeld \cite{Leigh:1989jq, Tseytlin:1997csa, Tseytlin:1999dj} gravity. Moreover, its topological nature allows one to reduce the order of the equations of motion and simplify the dynamics. However, due to the Gauss--Bonnet theorem, $G$ vanishes identically in three dimensions (or fewer), while it reduces to a trivial boundary term in four dimensions. Therefore, in the latter case, it cannot provide dynamical contributions to the field equations. Nonetheless, a generic function of the Gauss--Bonnet term does contribute to the dynamics in four dimensions, with the consequence that $f(G)$ gravity can be taken into account as a suitable modification of GR, due to its capability of restoring Einstein's theory under particular limits \cite{Bajardi:2020osh}. 
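Since the model is studied on a spatially flat, homogeneous and isotropic background, it is worth recalling how $R$ and $G$ behave there. The following Python sketch assumes the standard flat-FLRW expressions $R = 6(\dot{H} + 2H^2)$ and $G = 24H^2(\dot{H} + H^2)$ (textbook results, not quoted in this excerpt) and checks that, for a power-law scale factor $a(t) = t^p$, they reduce to the closed forms $R = 6p(2p-1)/t^2$ and $G = 24p^3(p-1)/t^4$:

```python
# Check that for a power-law scale factor a(t) = t**p on a spatially flat
# FLRW background, the curvature invariants reduce to closed forms.
# The flat-FLRW expressions R = 6*(Hdot + 2*H**2) and G = 24*H**2*(Hdot + H**2)
# are standard results assumed here (they are not quoted in the excerpt).

def flrw_invariants(p, t):
    """Return (R, G) for a(t) = t**p, using H = p/t and Hdot = -p/t**2."""
    H = p / t
    Hdot = -p / t**2
    R = 6.0 * (Hdot + 2.0 * H**2)
    G = 24.0 * H**2 * (Hdot + H**2)
    return R, G

# Closed forms implied by the above: R = 6p(2p-1)/t^2, G = 24p^3(p-1)/t^4.
p, t = 1.5, 2.0
R, G = flrw_invariants(p, t)
R_closed = 6.0 * p * (2.0 * p - 1.0) / t**2
G_closed = 24.0 * p**3 * (p - 1.0) / t**4
print(R, G)  # 4.5 2.53125
```

The closed form makes explicit why $G$ is only dynamically relevant through a non-linear function: for matter-like expansion ($p = 2/3$) one has $G < 0$, and $G$ vanishes for $p = 1$, while a linear term in $G$ drops out of the field equations altogether.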
Motivated by the above reasons, in this work we consider a gravitational action constituted by a combination of the Ricci scalar and the Gauss--Bonnet term, leading to the $f(R,G)$ theories. These have been extensively studied in different contexts (see, \emph{e.g.}, \cite{DeFelice:2010hg, DeFelice:2011ka, Makarenko:2012gm, DeFelice:2010sh, Elizalde:2020zcb, Sadjadi:2010kp, Mustafa:2020jln}), providing interesting results on different scales. In particular, here we study the cosmological dynamics of a subclass of the $f(R,G)$ models, selected by symmetry considerations. Our purpose is to test the viability of such a scenario by means of late-time cosmic observations, and check whether it may represent a suitable alternative to the standard cosmological paradigm. The present work is organized as follows. In Sec.~\ref{modified gauss}, we discuss the main properties of Gauss--Bonnet gravity and cosmology, focusing on a particular function selected via the Noether symmetry approach. In Sec.~\ref{SecObs}, we explore the background cosmological dynamics of the selected model and test its viability through a Bayesian analysis based on Monte Carlo methods applied to late-time cosmic observations, such as Supernovae Ia and observational Hubble data. Moreover, a systematic comparison with the predictions of the standard cosmological paradigm is carried out, along with the analysis of deviations from GR and possible tensions with respect to the most recent findings in the literature. In Sec.~\ref{sec:energy}, we then study the validity of the energy conditions and the physical implications resulting from possible violations of them in terms of the free parameters of the model. Finally, in Sec.~\ref{conclSec}, we discuss our results, remarking on the main theoretical features exhibited by the model. We thus conclude this work by outlining the future perspectives of the modified Gauss--Bonnet dark energy scenario. 
\section{Modified Gauss-Bonnet gravity and cosmology} \label{modified gauss} One of the most general extensions of the Einstein--Hilbert action can be built by means of higher-order curvature invariants and dynamical scalar fields, $\phi$, non-minimally coupled to geometry. For instance, one could consider the action\footnote{In this paper, we consider units where $8\pi G_N=c=\hbar=1$.} \begin{eqnarray} S = \int d^4x\, \sqrt{-g}\, && f \Big(\phi, R, \Box R, ..., \Box^{n} R, \nonumber \\ && \hspace{0.6cm} R^{\mu \nu}R_{\mu \nu}, R^{\mu \nu \rho \sigma} R_{\mu \nu \rho \sigma}\Big) \,, \label{HOaction} \end{eqnarray} containing higher-order derivatives in the geometric terms and leading to field equations of order $2n+4$. Here, $g$ is the determinant of the metric tensor $g_{\mu\nu}$, whereas $\Box \equiv \nabla_\mu \nabla^\mu$ is the d'Alembert operator, with $\nabla_\mu$ being the covariant derivative. As previously mentioned, we shall focus on a particular subcase of the action \eqref{HOaction}, containing a function of the scalar curvature and the Gauss--Bonnet invariant. 
In particular, by defining $P \equiv R^{\mu \nu} R_{\mu \nu}$ and $\mathcal{Q} \equiv R^{\mu \nu \rho \sigma} R_{\mu \nu \rho \sigma}$, the variation of the action \begin{equation} S = \int d^4 x\, \sqrt{-g} \, f(R,P,\mathcal{Q}) \,, \label{fpqact} \end{equation} yields the following field equations: \begin{eqnarray} && f_R(R,P,\mathcal{Q}) G_{\mu \nu} = \frac{1}{2} g_{\mu \nu} \left[f(R,P,\mathcal{Q}) - R f_R(R,P,\mathcal{Q}) \right] \nonumber \\ &-& \left(g_{\mu \nu} \Box - D_\mu D_\nu \right)f_R(R,P,\mathcal{Q}) \\ &-& 2 \left[ f_P(R,P,\mathcal{Q}) R^\alpha_\mu R_{\alpha \nu} + f_{\mathcal{Q}}(R,P,\mathcal{Q}) R_{\rho \sigma \alpha \mu} R^{\rho \sigma \alpha}_{\;\;\;\;\;\; \nu} \right] \nonumber \\ &-& g_{\mu \nu} D_\rho D_\sigma \left[f_P(R,P,\mathcal{Q}) R^{\rho \sigma}\right] - \Box \left[f_P(R,P,\mathcal{Q}) R_{\mu \nu} \right] \nonumber \\ &+& 2 D_\sigma D_\rho \left[f_P(R,P,\mathcal{Q}) R^\rho_{\left\{ \mu \right.} \delta^\sigma_{ \nu \left. \right\}} + 2 f_{\mathcal{Q}}(R,P,\mathcal{Q}) R^{\rho\;\;\;\;\; \sigma}_{\; \left\{ \mu \nu\right\}} \right] , \nonumber \label{field equations f(R,P,Q)}\, \end{eqnarray} where $ \{ \} $ denotes the anti-commutator, while $f_P$ and $f_{\mathcal{Q}}$ denote the derivatives of $f$ with respect to $P$ and $\mathcal{Q}$, respectively. The Gauss--Bonnet topological invariant arises when considering the combination $f(R, P, \mathcal{Q}) = f(R, R^2 -4P+ \mathcal{Q}) \equiv f(R, G)$. Under this assumption, one obtains the action \begin{equation} S= \int d^4x\, \sqrt{-g}\, \left[f(R, G)+\mathcal{L}_m\right] , \label{actionfRG} \end{equation} and the field equations become \begin{eqnarray} && f_R G_{\mu \nu} = \frac{1}{2} g_{\mu \nu}\left(f-R f_{R}\right)+ \nabla_{\mu} \nabla_{\nu} f_{R}-g_{\mu \nu} \square f_{R} \nonumber \\ && + f_{G} \left(-2 R R_{\mu \nu}+4 R_{\mu \rho} R_{\nu}^{\rho}-2 R_{\mu}^{\,\, \rho \lambda \sigma} R_{\nu \rho \lambda \sigma} \right. \nonumber \\ &&\left. 
+4 g^{\rho \lambda} g^{\sigma \alpha} R_{\mu \rho \nu \sigma} R_{\lambda \alpha}\right) +2\left(\nabla_{\mu} \nabla_{\nu} f_{G}\right) R \nonumber \\ &&- 2 g_{\mu \nu}\left(\square f_{G}\right) R+ 4\left(\square f_{G}\right) R_{\mu \nu}-4\left(\nabla_{\rho}\nabla_\mu f_{G}\right) R_{\nu}^{\rho} \nonumber \\ && -4\left(\nabla_\rho \nabla_\nu f_{G}\right) R_{\mu}^{\rho} +4 g_{\mu \nu}\left(\nabla_\rho \nabla_\lambda f_{G}\right) R^{\rho \lambda} \nonumber \\ && -4\left(\nabla_{\lambda} \nabla_{\alpha} f_{G}\right) g^{\rho \lambda} g^{\sigma \alpha} R_{\mu \rho \nu \sigma} + T_{\mu \nu} \label{f(RG)FE}, \end{eqnarray} where $T_{\mu \nu}$ is the energy-momentum tensor associated with the matter Lagrangian density $\mathcal{L}_m$, namely \begin{equation} T_{\mu\nu}= -\dfrac{2}{\sqrt{-g}}\dfrac{\delta \left(\sqrt{-g}\, \mathcal{L}_m\right)}{\delta g^{\mu\nu}}\,. \end{equation} In our notation, the subscripts $G$ and $R$ indicate the derivatives with respect to the Gauss-Bonnet invariant and the Ricci scalar, respectively. Interestingly, within the cosmological context, it turns out that the function $f(G) \sim \sqrt{G}$ behaves like the scalar curvature, thus making it possible to recover the Einstein-Hilbert action even without imposing the GR limit as a requirement \cite{Bajardi:2020osh}. Therefore, $G$ can play the role of an effective cosmological constant given by curvature. Nonetheless, as pointed out in \cite{DeFelice:2009ak}, higher-order derivatives can induce superluminal ghosts at the level of cosmological perturbations. In this case, the Lagrangian cannot be recast into a canonical form, so that the Hamiltonian becomes linearly unstable. However, in~\cite{Astashenok:2015haa, Nojiri:2018ouv}, the authors show that Lagrange multipliers can, in principle, address this issue, leading to ghost-free primordial curvature perturbations. 
This can be proved by casting $f(R,G)$ gravity in the Jordan frame, thus coupling the Gauss--Bonnet invariant with a dynamical scalar field and choosing a suitable form for the resulting extra potential. To explore the cosmological dynamics of $f(R,G)$ gravity, let us consider the spatially-flat Friedmann--Lema\^itre--Robertson--Walker (FLRW) line element \begin{equation} ds^2=-dt^2+a(t)^2\delta_{ij}dx^idx^j\,, \label{linelement} \end{equation} where $a(t)$ is the scale factor\footnote{We here follow the standard convention, according to which the scale factor is normalized to unity at the present time.} depending on the cosmic time, $t$. Hence, the Gauss-Bonnet scalar can be expressed as \begin{equation} G = 24 \left(\frac{\dot{a}^2 \ddot{a}}{a^3} \right)= \frac{8}{a^3}\frac{d}{dt} \left(\dot{a}^3 \right), \label{GBEXPR} \end{equation} from which one can notice that the quantity $\sqrt{-g} \, G$ is a total derivative. Moreover, neglecting radiation and assuming pressureless matter, the modified Friedmann equations read \begin{align} &H^2=\frac{1}{3}\left(\dfrac{\rho_m}{f_R}+\rho_{de}\right), \label{eq:first} \\ &2\dot{H}+3H^2=-p_{de}\,, \label{eq:second} \end{align} where \begin{align} \rho_{de}&=\frac{1}{2f_R}(Rf_R+G f_G-f-6H \dot{f_R} -24 H^3\dot{f_G}) \label{rde} \,,\\ p_{de}&=\dfrac{1}{f_R}\Big[2H\dot{f_R}+\ddot{f_{R}}+8H^3\dot{f_G}+8H\dot{H} \dot{f_G}+4H^2\ddot{f_G} \nonumber \\ &\hspace{1.2cm} -\frac{1}{2}(Rf_R+Gf_G-f)\Big], \label{pde} \end{align} and \begin{align} R&=6(2H^2+\dot H)\,, \label{cosmoexprR}\\ G&=24H^2(H^2+\dot H)\,\label{cosmoexprG}. \end{align} In the Lagrangian formalism, it is possible to use the cosmological expressions of $R$ and $G$ as Lagrange multipliers and obtain the point-like Lagrangian. 
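As a side check, the FLRW expressions above can be verified symbolically for a generic scale factor $a(t)$. The following sketch (using the \texttt{sympy} library; it is not part of the original derivation) confirms both forms of Eq.~\eqref{GBEXPR} as well as Eqs.~\eqref{cosmoexprR} and \eqref{cosmoexprG}:

```python
import sympy as sp

# Symbolic check of the FLRW expressions for R and G quoted in the text.
t = sp.symbols('t')
a = sp.Function('a')(t)          # scale factor a(t)
H = sp.diff(a, t) / a            # Hubble parameter H = adot/a

G_scale  = 24 * sp.diff(a, t)**2 * sp.diff(a, t, 2) / a**3       # Eq. (GBEXPR), first form
G_total  = (8 / a**3) * sp.diff(sp.diff(a, t)**3, t)             # Eq. (GBEXPR), total-derivative form
G_hubble = 24 * H**2 * (H**2 + sp.diff(H, t))                    # Eq. (cosmoexprG)
R_flrw   = 6 * (sp.diff(a, t, 2) / a + sp.diff(a, t)**2 / a**2)  # standard flat-FLRW Ricci scalar
R_hubble = 6 * (2 * H**2 + sp.diff(H, t))                        # Eq. (cosmoexprR)

# All three expressions for G coincide, as do the two forms of R.
assert sp.simplify(G_scale - G_total) == 0
assert sp.simplify(G_scale - G_hubble) == 0
assert sp.simplify(R_flrw - R_hubble) == 0
```

The same symbolic route can be used to check the time derivatives $\dot R$ and $\dot G$ reported in Appendix~\ref{app:relations}.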
Specifically, when considering the line element \eqref{linelement}, the action \eqref{actionfRG} can be written as \begin{equation} S = \int dt \left[a^3 f - \lambda \left(R - 6 \frac{\ddot{a}}{a} - 6 \frac{\dot{a}^2}{a^2} \right) - \tau \left(G - 24 \frac{\ddot{a} \dot{a}^2}{a^3} \right) \right], \end{equation} where $\lambda$ and $\tau$ are the Lagrange multipliers. As shown in \cite{Acunzo:2021gqc, Bajardi:2021tul}, the variational principle with respect to $R$ and $G$ provides the values of $\lambda$ and $\tau$, respectively. Therefore, after integrating out second derivatives, the Lagrangian takes the form \begin{align} \mathcal{L}=&\ 6 a \dot{a}^{2} f_R +6 a^{2} \dot{a} \left(f_{RR} \dot{R} + f_{R G} \dot{G} \right) \label{lagr f(R,G)} \\ &- 8 \dot{a}^{3} \left(f_{R G} \dot{R} + f_{G G} \dot{G} \right)+a^{3} \left(f -R f_R - G f_G \right). \nonumber \end{align} Notice that Eqs.~\eqref{eq:first} and \eqref{eq:second} can also be obtained from the energy condition and the Euler-Lagrange equation with respect to the scale factor, respectively. The former is a condition of zero energy, corresponding to the vanishing of the Hamiltonian, which allows one to recover the modified first Friedmann equation when the lapse function is not included in the starting line element. Moreover, the Euler-Lagrange equations with respect to $R$ and $G$ give back the cosmological expressions of the two scalars, Eqs.~\eqref{cosmoexprR} and \eqref{cosmoexprG}, by construction. \subsection{Selecting $f(R,G)$ models by Noether symmetries} Following the steps reported in~\cite{Capozziello:2014ioa, Camci:2018apx}, we here show how to select viable $f(R,G)$ models by means of the so-called \emph{Noether symmetry approach} (see \cite{Bajardi:2020xfj, Urban:2020lfk, Dialektopoulos:2018qoe} for details). 
To do this, let us first recall that, if $X$ is the generator of a certain transformation being a symmetry for the Lagrangian $\Lagr$ and $X^{[1]}$ its first prolongation, then the following condition must hold: \begin{equation} X^{[1]} \Lagr + \dot{\xi} \Lagr = \dot{\mathfrak{g}}\,, \label{Noethcond} \end{equation} where $\mathfrak{g}$ is a gauge function depending on the minisuperspace variables. In a generic minisuperspace of the form $\mathcal{S} = \{q^i\}$, the first prolongation of $X$ reads \begin{equation} X^{[1]} = \xi \frac{\partial}{\partial t} + \eta^i \frac{\partial}{\partial q^i} + (\dot{\eta}^i - \dot{\xi} \dot{q}^i) \frac{\partial}{\partial \dot{q}^i}\,, \end{equation} with $\eta^i$ and $\xi$ being the infinitesimal generators related to variable transformations and time translations, respectively. Generally, $t$ accounts for an affine parameter, which in cosmology is represented by the cosmic time. In our case, the minisuperspace is made of three variables, namely $\mathcal{S} \equiv \{a,R,G \}$, and the infinitesimal generator $\eta^i$ can be thus decomposed as $\eta^i = \{\alpha, \beta, \zeta\}$. Under these conditions, the Noether vector $X^{[1]}$ becomes \begin{eqnarray} X^{[1]} &=& \xi(a,R,G, t) \partial_t + \alpha(a,R,G, t) \partial_a + \beta(a,R,G, t) \partial_R \nonumber \\ &+& \zeta(a,R,G, t) \partial_{G} + \dot{\alpha}(a,R,G, t) \partial_{\dot{a}} + \dot{\beta}(a,R,G, t) \partial_{\dot{R}} \nonumber \\ &+& \dot{\zeta}(a,R,G, t) \partial_{\dot{G}}\,, \end{eqnarray} and the identity \eqref{Noethcond} applied to the Lagrangian \eqref{lagr f(R,G)} provides a system of 10 differential equations. The selected functions are \begin{subequations} \begin{eqnarray} && f(R,G) = f_0 R + f_1 G^\frac{3}{2}\,, \\ && f(R,G) = f_0 R^\frac{7}{8} + f_1 G\,, \\ && f(R,G) = f_0 R^{\frac{1}{2}} + f_1 G^{\frac{1}{4}}\,, \\ && f(R,G) = f_0 R^n G^{m}\,. 
\end{eqnarray} \end{subequations} In what follows, we focus our attention on the latter function and investigate its cosmological properties. \subsection{The case $f(R,G)= R^n G^{m}$} Let us then consider the model $f(R,G)=R^n G^m$. To determine the cosmological dynamics, we can make use of the relations reported in Appendix~\ref{app:relations}. Also, one may introduce the cosmographic parameters $(q,j,s)$ \cite{Weinberg72,Visser05,rocco_chebyshev}, and express the time derivatives of the Hubble parameter as follows: \begin{subequations} \begin{align} \dot H&=-H^2(1+q) \,,\\ \ddot{H}&=H^3(j+3q+2)\,,\\ \dddot{H}&=H^4\left[s-4 j-3 q (q+4)-6\right]\,. \end{align} \end{subequations} Thus, we find \begin{equation} \rho_{de}=\dfrac{3H^2}{n(q-1)q^2}\sum_{k=0}^4b_k q^k\,, \label{eq:rho_de} \end{equation} where we have defined $b_k\equiv b_k(j;n,m)$ as \begin{subequations} \begin{align} b_0=&\ m (m -1) j\,, \\ b_1=&\ m\left[2 n +3 m -2 j (n +m -1)-3\right], \\ b_2=&\ n \left[2 m (j-2)-j+1\right]+ n ^2 (j-2)\nonumber \\ &+(m -1) \left[m (j-4)-1\right], \\ b_3=&\ n(3-n)-(m -2) (m -1)\,, \\ b_4=&\ (2 m -1) (n +m -1)\,. \end{align} \end{subequations} It is worth noting that, for $n=1$ and $m=0$, \emph{i.e.} $f(R,G)\rightarrow R$, Eq.~\eqref{eq:rho_de} identically vanishes and one recovers the behavior of pure GR. Moreover, we find \begin{equation} p_{de}=\dfrac{H^2}{n(q-1)^2 q^3}\sum_{k=0}^7c_k q^k\,, \label{eq:p_de} \end{equation} where the lengthy expressions of $c_k\equiv c_k(j,s;n,m)$ are reported in Appendix~\ref{app:p_de}. To simplify the calculations and reduce the number of degrees of freedom, we shall consider the case $m=1-n$, namely $f(R,G)= R^n G^{1-n}$, for which the vacuum field equations admit exact solutions \cite{Capozziello:2014ioa}. Clearly, for $n=1$ GR is fully recovered, while for $n=0$ the model only leads to trivial dynamics. The cosmological features of this model have been recently addressed in several contexts. 
For instance, the dynamical analysis pursued in~\cite{SantosDaCosta:2018bbw} shows that two (out of eight) fixed points yield possible candidates for the dark energy era, thus predicting the accelerating behavior of the late-time Universe. The authors also show that the radiation-dominated era occurs when $n < \frac{1}{5} \left(2+\sqrt{14} \right)$, while the matter epoch is recovered when $n$ approaches the value of $1$. On the other hand, it has been shown that, for $n > 1$, the Universe undergoes a never-ending acceleration without the possibility of structure formation. Moreover, in~\cite{Bamba:2010wfw}, it is shown that higher-order curvature corrections, arising from higher-order field equations, provide a possible way to cure the finite-time future singularities. Also, in~\cite{Odintsov:2018nch}, the authors study the inflationary era and calculate the power spectrum of the primordial curvature perturbations. In~\cite{Bahamonde:2019swy}, the Noether symmetry approach is applied to $f(R,G)$ gravity and exact solutions are provided in a static and spherically symmetric background. Therefore, for $m=1-n$, it is straightforward to show that the effective dark energy density and pressure are given by, respectively, \begin{align} \rho_{de}&=\frac{3 H^2 (n-1) \left(j-2 q^3-2 q^2+q\right)}{(q-1) q^2}\,, \label{rhode} \\ p_{de}&=\frac{H^2 (n-1)}{(q-1)^2 q^3}\Big\{q \big[(4 n-6) q^5+2 (4 n-7) q^4-4 n q^2\nonumber \\ & \hspace{2.8cm} +q (n-s-1)+3 q^3+s\big] \nonumber \\ & \hspace{2.4cm}+2 j q \left(-2 n q^2-2 n q+n+6 q^2\right)\nonumber \\ & \hspace{2.4cm}+ j^2 (n-3 q+1)\Big\}\,. \label{pide} \end{align} Hence, the equation of state (EoS) parameter for dark energy, $w_{de}\equiv p_{de}/\rho_{de}$, reads \begin{widetext} \begin{equation} w_{de}=\dfrac{3 q j^2-j^2-12 j q^3-n \left(j-2 q^3-2 q^2+q\right)^2+6 q^6+14 q^5-3 q^4+q^2 s+q^2-q s}{3 q \left(j+q-jq-3q^2+2q^4\right)}\,. 
\end{equation} \end{widetext} Interesting properties can be obtained by considering the field equations in vacuum. Indeed, setting $\rho_m=0$ in Eqs.~\eqref{eq:first} and \eqref{eq:second}, one can obtain analytical solutions (see Appendix~\ref{app:vacuum solutions}). These can be easily handled by introducing the variable $\gamma = R/G$. Hence, for $m=1-n$, the point-like Lagrangian can be written as \begin{equation} \Lagr = 2f_0 n \gamma^{n-2} \dot{a} \left[-3 a \dot{a} \gamma - 3(n-1) a^2 \dot{\gamma} + 4 (n-1) \dot{a}^2 \gamma \dot{\gamma}\right]. \end{equation} In this way, $G$ turns out to be cyclic and the field equations admit the exact solution \begin{equation} a(t) = a_0 t^{2n-1}\,, \qquad \gamma = \frac{4n-3}{8(n-1)(2n-1)^2} \, t^2\,. \label{ExactSol} \end{equation} \section{Comparison with observations} \label{SecObs} In this section, we shall test the observational viability of the model under study by means of a Bayesian analysis of the late-time cosmic data. In particular, we consider the measurements from the Supernovae (SN) Ia Pantheon catalog \cite{Scolnic18} and the cosmic chronometers (CC) given by the observational Hubble data collected in \cite{Capozziello18}. In fact, statistical analyses based on these datasets allow obtaining reliable outcomes that are not affected by assumptions of any underlying fiducial model \cite{D'Agostino18,D'Agostino19}. In what follows, we describe the main features of such measurements, together with the corresponding Likelihood functions. \subsection{Supernovae Ia} The Pantheon sample \cite{Scolnic18} consists of 1048 measurements of SN Ia in the redshift\footnote{The redshift $z$ is related to the scale factor through $z= a^{-1}-1$.} range [0.01,\,2.3]. 
In such a catalog, the standardization of each SN is obtained by adopting the SALT2 light-curve fitter\footnote{We refer the reader to \cite{Betoule14} for the details on the parametrization of the SN distance modulus in terms of the light-curve coefficients and the host-galaxy corrections.} \cite{Guy07}. In the present study, we use the 6 measurements of the quantity $E^{-1}(z)\equiv H_0/H(z)$ as presented in \cite{Riess18}, where $H_0$ is the Hubble constant. These constitute a self-consistent and model-independent set built upon the full Pantheon collection, relying only on the assumption of a spatially flat universe. The Likelihood of the SN data can thus be written as \begin{equation} \mathscr{L}_\text{SN}\propto \exp\left\{-\frac{1}{2}\mathbf{v}^\text{T} \mathbf{C}_\text{SN}^{-1} \mathbf{v}\right\} , \end{equation} where deviations from the theoretical expectations are accounted for through the differences $v_i= E^{-1}_{obs,i}-E^{-1}_{th,i}$ evaluated at each data point, while $^\text{T}$ indicates the transpose of the vector $\mathbf{v}$. Moreover, $\mathbf{C}_\text{SN}^{-1}$ is the inverse of the covariance matrix measuring the correlations among the SN data, as reported in \cite{Riess18}. \subsection{Cosmic Chronometers} The additional dataset we utilize in our analysis is based on the differential age method \cite{Jimenez02}. The latter makes it possible to investigate the cosmic expansion in a model-independent way through the spectroscopic age measurement of pairs of passively-evolving galaxies, which can be thought of as chronometers measuring the redshift variation with respect to the cosmic time, $dz/dt$. Thus, one can obtain the value of the Hubble parameter from the relation $H(z)=-(1+z)^{-1}dz/dt$. Specifically, in our study, we take into account the 31 data points up to $z\sim 2$ previously collected in \cite{Capozziello18}\footnote{See also references therein.}. 
As these measurements are uncorrelated among themselves, we can write the corresponding Likelihood simply as \begin{equation} \mathscr{L}_\text{CC}\propto\exp\left\{-\dfrac{1}{2}\sum_{i=1}^{31}\left(\dfrac{H_{obs,i}-H_{th,i}}{\sigma_{H,i}}\right)^2\right\} , \end{equation} with $\sigma_H$ being the $1\sigma$ uncertainties associated with the observed $H$ values, $H_{obs}$. \subsection{Monte Carlo analysis} The low-redshift data described above can thus be used to place observational constraints on the free parameters of the $f(R,G)=R^n G^{1-n}$ model. To this aim, we adopted the Markov Chain Monte Carlo (MCMC) method by means of the Metropolis-Hastings algorithm \cite{Hastings70} applied to the joint Likelihood, given by \begin{equation} \mathscr{L}_\text{joint}=\mathscr{L}_\text{SN}\times \mathscr{L}_\text{CC}\,. \end{equation} The theoretical values of the Hubble rate can be obtained by numerically solving Eq.~\eqref{eq:first}. Assuming matter to behave as a pressureless perfect fluid, we can write $\rho_m=3H_0^2\Omega_{m0}(1+z)^3$, with $\Omega_{m0}$ being the current value of the matter density parameter\footnote{The subscript ``0" refers to quantities evaluated at $z=0$, corresponding to the present time.}. Thus, for the specific model under consideration, the first Friedmann equation takes the form \begin{align} H^2=&\ \dfrac{H_0^2 4^{n-1}\Omega_{m0} (z+1)^3}{n}\left\{\frac{H^2 \left[H-(z+1) H'\right]}{2 H-(z+1) H'}\right\}^{n-1}\nonumber \\ &+\frac{H^2 (n-1) (z+1)}{\left[H-(z+1) H'\right]^2 \left[2 H-(z+1) H'\right]}\left\{2 (z+1)^2 {H'}^3 \right. \nonumber \\ &\left. + H^2 \left[3 H'-(z+1) H''\right] -5 H (z+1) {H'}^2 \right\}, \label{eq:diff} \end{align} where the prime denotes the derivative with respect to $z$. The above equation has been obtained by converting the time derivatives into derivatives with respect to the redshift according to \begin{equation} \frac{d}{dt}= -(1+z)H(z) \dfrac{d}{dz}\,. 
\end{equation} Eq.~\eqref{eq:diff} represents a second-order differential equation for the function $H(z)$, which can be solved by means of suitable boundary conditions. The first initial condition is simply $H(0)=H_0$. To determine the second initial condition, one may require that, at the present time, the first derivative of the Hubble parameter agrees with the predictions of the standard $\Lambda$CDM model, which is characterized by the following expansion law: \begin{equation} H_{\Lambda\text{CDM}}=H_0 \sqrt{\Omega_{m0}(1+z)^3+1-\Omega_{m0}}\,. \end{equation} Thus, taking the first derivative of the above equation with respect to $z$, one finds \begin{equation} H_{\Lambda\text{CDM}}'=\frac{3 H_0 \Omega_{m0} (1+z)^2}{2 \sqrt{\Omega_{m0} (1+z)^3+1-\Omega_{m0}}}\,, \end{equation} which determines the second initial condition for Eq.~\eqref{eq:diff}, namely $H'(0)=3H_0\Omega_{m0}/2$. In our numerical analysis, we considered the reduced Hubble constant $h\equiv H_0/(100$ km $s^{-1}$ Mpc$^{-1}$), which represents a free parameter of the model, together with $\Omega_{m0}$ and $n$. We thus assumed the cosmological parameters to be uniformly distributed within the following ranges: \begin{equation} h\in (0.5,0.9)\,, \quad \Omega_{m0}\in (0,1)\,, \quad n\in (1,2)\,. \end{equation} In order to constrain the cosmological parameters, we ran a small initial chain of 2,000 steps, from which we removed the first 100 steps to account for the burn-in phase. This provided us with a test covariance matrix that served as a starting guess for the subsequent main chains. We then ran five independent chains of 20,000 steps each, which have been eventually merged into a single final chain of 1,000,000 points. In Table~\ref{tab:results}, we report the $1\sigma$ and $2\sigma$ confidence level (C.L.) results of our MCMC analysis, while Fig.~\ref{fig:contours} shows the 2-D marginalized contours and 1-D posterior distributions of the free parameters of the model. 
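The boundary conditions quoted above can be verified with a short numerical sketch (the function names and the illustrative values of $H_0$ and $\Omega_{m0}$ are ours, not those used in the actual analysis), confirming that the $\Lambda$CDM expansion law indeed yields $H'(0)=3H_0\Omega_{m0}/2$:

```python
import numpy as np

def H_lcdm(z, H0, Om0):
    """Flat LCDM expansion rate used to set the boundary conditions."""
    return H0 * np.sqrt(Om0 * (1 + z)**3 + 1 - Om0)

def dH_lcdm(z, H0, Om0):
    """Analytic derivative dH/dz of the LCDM expansion rate."""
    return 3 * H0 * Om0 * (1 + z)**2 / (2 * np.sqrt(Om0 * (1 + z)**3 + 1 - Om0))

H0, Om0 = 70.0, 0.3  # illustrative values (km/s/Mpc and dimensionless)

# Boundary conditions for the second-order ODE, Eq. (eq:diff)
H_init = H_lcdm(0.0, H0, Om0)    # H(0) = H0
dH_init = dH_lcdm(0.0, H0, Om0)  # H'(0) = 3 H0 Om0 / 2

assert np.isclose(H_init, H0)
assert np.isclose(dH_init, 1.5 * H0 * Om0)

# Cross-check the analytic derivative against a central finite difference at z = 0
eps = 1e-6
fd = (H_lcdm(eps, H0, Om0) - H_lcdm(-eps, H0, Om0)) / (2 * eps)
assert np.isclose(fd, dH_init, rtol=1e-5)
```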
\begin{table} \begin{center} \setlength{\tabcolsep}{1em} \renewcommand{\arraystretch}{1.8} \begin{tabular} {c c c c} \hline \hline Parameter & Mean & 68\% limits & 95\% limits \\ \hline $h$ & 0.694 &$\pm \, 0.019 $ & $\pm \, 0.037$\\ $\Omega_{m0}$ & 0.223 & $^{+\,0.070}_{-\,0.098}$ & $^{+\,0.173}_{-\,0.152}$\\ $n$ & 1.29 & $_{-\,0.10}^{+\,0.11}$ & $^{+\,0.18}_{-\,0.19}$\\ \hline \hline \end{tabular} \caption{Constraints at the 68\% and 95\% C.L. on the free parameters of the $f(R,G)=R^nG^{1-n}$ model, resulting from the MCMC analysis of the combined SN+CC data.} \label{tab:results} \end{center} \end{table} \subsection{Discussion of the results} Here, we shall discuss our findings in light of the predictions of the standard cosmological scenario. For this purpose, we recall the $1\sigma$ C.L. constraints on the $\Lambda$CDM model previously obtained from the MCMC analysis of the combined SN+CC data \cite{D'Agostino20}: \begin{equation} h=0.692 \pm 0.019\,, \quad \Omega_{m0}=0.296\,^{+\, 0.026}_{-\, 0.029}\,. \label{LCDM results} \end{equation} From Table \ref{tab:results}, one can notice that the value of the Hubble constant resulting from the $f(R,G)$ model under consideration is fully consistent with the one predicted by $\Lambda$CDM. Our result differs by $\sim 1.7\sigma$ from the most recent (local) model-independent measurement by Riess et al. \cite{Riess22}, while it agrees at $1\sigma$ with the estimate inferred by the Planck Collaboration \cite{Planck18}. The constraints on the parameter $n$ (cf. Table \ref{tab:results}) indicate more than $2\sigma$ deviations from the GR limit. As expected, the $f(R,G)$ model is capable of accounting for the dark energy effects without the need for the cosmological constant, due to the interplay between the Ricci scalar and the Gauss--Bonnet invariant. 
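The quoted levels of agreement follow from the standard Gaussian tension estimate between independent measurements, $|\mu_1-\mu_2|/\sqrt{\sigma_1^2+\sigma_2^2}$. A minimal sketch, assuming the commonly quoted external values $H_0=73.04\pm 1.04$ km s$^{-1}$ Mpc$^{-1}$ (Riess et al.) and $H_0=67.4\pm 0.5$ km s$^{-1}$ Mpc$^{-1}$ (Planck), which are not restated explicitly in the text:

```python
import math

def tension(m1, s1, m2, s2):
    """Gaussian tension between two independent measurements, in units of sigma."""
    return abs(m1 - m2) / math.hypot(s1, s2)

# MCMC result of Table I (mean and 68% limit on h)
h_frg, sig_frg = 0.694, 0.019

# Commonly quoted external values (assumed here): Riess et al. 2022 and Planck 2018
h_riess, sig_riess = 0.7304, 0.0104
h_planck, sig_planck = 0.674, 0.005

t_riess = tension(h_frg, sig_frg, h_riess, sig_riess)
t_planck = tension(h_frg, sig_frg, h_planck, sig_planck)

assert 1.5 < t_riess < 1.9   # ~1.7 sigma, as quoted in the text
assert t_planck < 1.1        # consistent with Planck at roughly the 1 sigma level
```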
Furthermore, although consistent with each other at the $1\sigma$ level, the mean result for the present matter density parameter is lower than both the late-time outcome given in (\ref{LCDM results}) and the early-time estimate of the Planck Collaboration assuming a $\Lambda$CDM cosmology, $\Omega_{m0}=0.315\pm 0.007$ \cite{Planck18}. The effect of such a discrepancy may be seen in Fig.~\ref{fig:LCDM comparison}, where we show the Hubble expansion rate of the $f(R,G)$ model compared to the $\Lambda$CDM prediction. Indeed, we note that the $f(R,G)$ model is able to fairly well reproduce the accelerated behavior of the Universe up to $z\sim 1$. However, the differences between the two scenarios emerge as one goes backward in time, when the matter contribution starts becoming important until it eventually prevails over the dark energy effects. Such behavior may translate into matter instabilities when density perturbations are taken into account during matter and radiation domination \cite{DeFelice10}. The discrepancies with respect to the standard cosmological model are more clearly visible in the behavior of the effective EoS parameter, given as \begin{equation} w_\text{eff}\equiv -1-\frac{2\dot{H}}{3H^2}=-1+\frac{2}{3}(1+z)\frac{H'}{H}\,. \end{equation} In Fig.~\ref{fig:weff}, in view of the mean results of our MCMC analysis, we can see that the effective EoS parameter of the $f(R,G)$ model shows a phantom behavior at the present time while, at high redshifts, it does not properly converge to zero, as would be required for a standard matter-dominated phase. This is clearly due to the lower matter density abundance compared to $\Lambda$CDM, which strongly affects the cosmic evolution of $w_\text{eff}$. \section{Energy conditions} \label{sec:energy} Starting from the expressions for the dark energy density and pressure given by Eqs.~\eqref{rhode} and \eqref{pide}, one can study the validity of the energy conditions associated with $f(R,G)= R^n G^{1-n}$ gravity. 
The energy conditions play a fundamental role in defining physically viable models, especially in the context of extended theories of gravity (see, \emph{e.g.}, \cite{Capozziello:2014bqa}). They account for a set of inequalities the energy density and the pressure must satisfy, aiming to select the states of matter that are allowed in a given spacetime. Specifically, the null energy condition (NEC) requires the sum of the energy density and the pressure to be non-negative; the weak energy condition (WEC) is associated with the requirement that the energy density measured by any observer be non-negative; the validity of the dominant energy condition (DEC) implies that matter cannot travel faster than light, preserving the causality principle; finally, the strong energy condition (SEC) preserves the attractive nature of the gravitational field. Clearly, in GR, where the energy density and the pressure are those of standard matter, all the energy conditions are identically satisfied, whereas they can be violated as soon as exotic fluids are considered. In the context of modified theories of gravity, the modified field equations can be recast such that the right-hand side plays the role of an effective energy-momentum tensor sourced by curvature. In this way, as shown in Eqs.~\eqref{rhode} and \eqref{pide}, the energy conditions can also be applied to the extra geometric terms that, in principle, can mimic the behavior of exotic matter fluids. Moreover, recasting $\rho_{de}$ and $p_{de}$ in terms of the cosmographic parameters, it is possible to determine the ranges of the free parameters of a given theory leading to an accelerating cosmic expansion at late times. In our case, we study the behavior of the energy conditions depending on the free parameter $n$. As the values of the cosmographic parameters vary as a function of cosmic time, we shall consider their present-day estimates inferred from observations to find theoretical bounds on $n$. 
These can then be confronted with the results of our analysis, to check for possible inconsistencies. Specifically, using the values of the cosmographic parameters from the concordance $\Lambda$CDM model with $\Omega_{m0}=0.3$, namely $q_0=-0.55$, $j_0=1$ and $s_0=-0.35$ \cite{rocco_review}, it turns out that the energy conditions are satisfied in the following cases: \begin{subequations} \begin{align} \text{NEC}:& \ 1 < n < 34\,, \\ \text{WEC}: & \ \nexists n \in \mathbb{R}\, , \\ \text{DEC}: & \ \nexists n \in \mathbb{R}\,, \\ \text{SEC}: & \ 1 < n < 34\,. \end{align} \end{subequations} We notice that the WEC and DEC are identically violated for any value of $n$, meaning that the extra geometric terms may give rise to a negative effective energy density and to the violation of the causality principle. On the other hand, our constraint on $n$ lies within the range admitted for the validity of the NEC and SEC. A further possibility may be to consider purely model-independent estimates of the cosmographic parameters. In particular, using the recent findings of \cite{Capozziello20}, namely $q_0=-0.6$, $j_0=1.32$ and $s_0=8.47$, we obtain \begin{subequations} \begin{align} \text{NEC}:& \ -29 < n < 1\,, \\ \text{WEC}:& \ -29 < n < 1\,, \\ \text{DEC}:& \ -29 < n < -0.25\,, \\ \text{SEC}:& \ -19 < n < 1\,. \end{align} \end{subequations} This case is of particular interest since these values have been obtained through a kinematic procedure that does not rely on any \emph{a priori} assumed cosmological model. It is worth noting that, in this latter case, our constraint on $n$ violates all the energy conditions, thus mimicking a dark fluid behavior. \section{Final remarks and perspectives} \label{conclSec} In this work, we studied the cosmological behavior of a specific class of modified Gauss-Bonnet gravity models. To this aim, we first outlined the main properties of a gravitational action involving a general combination of the Ricci scalar and the Gauss-Bonnet invariant. 
Hence, assuming a flat FLRW cosmological background, we obtained the point-like Lagrangian of the theory and the related equations of motion. We then applied the Noether symmetry approach to select viable functions and reduce the dimension of the minisuperspace, thus allowing us to find exact solutions to the vacuum field equations. The selected function, namely $f(R,G) = R^n G^{1-n}$, reduces to GR as soon as the real constant $n$ approaches unity. However, the scenario under study does not recover the cosmological constant case explicitly, making it particularly interesting in the search for viable alternatives to the standard $\Lambda$CDM model, capable of mimicking the dark energy behavior while avoiding the conceptual issues affecting $\Lambda$. In our case, we showed that the right-hand sides of the modified Friedmann equations can be understood as effective energy density and pressure due to curvature. We thus found the expression of the dark energy EoS parameter in terms of both the cosmographic parameters and the free constant of the theory, $n$. Furthermore, we investigated the cosmological features of the $f(R,G)$ model in the presence of matter fields. Assuming non-relativistic pressureless matter and neglecting the late-time contribution of the radiation fluid, we numerically solved the first Friedmann equation to find the redshift behavior of the Hubble parameter. In so doing, we considered the $\Lambda$CDM model to find suitable initial conditions on $H(z)$ and its derivatives. Then, we employed the most recent low-redshift observations to directly compare our theory with the model-independent predictions of the cosmic expansion. In particular, we performed a Bayesian analysis through the MCMC method, using the combination of Supernovae Ia and Hubble observational data. 
Assuming uniform prior distributions, we obtained constraints on the free parameters of the model at the $1\sigma$ and $2\sigma$ C.L., which allowed us to reconstruct the cosmological evolution of the Hubble expansion rate and the total effective EoS parameter. Our analysis shows that the $f(R,G)$ model is able to explain the current acceleration of the Universe without resorting to $\Lambda$. However, a close comparison with the predictions of the standard cosmological scenario reveals that the $f(R,G)$ model starts to considerably deviate from $\Lambda$CDM as the redshift increases, thus failing to provide a standard matter-dominated era. This result is confirmed by the behavior of the effective EoS parameter, which does not vanish when $z\gg 1$. This appears to be common to other modified gravity theories, such as $f(R)$, where matter instabilities occur as density perturbations are taken into account. Finally, we complemented our analysis by studying the validity of the energy conditions, when written in terms of effective pressure and energy density. Specifically, we considered two different sets of cosmographic parameters, namely the values inferred from the concordance $\Lambda$CDM model and those emerging from a kinematic model-independent approach to the dark energy problem. In the first case, we showed that the WEC and DEC are identically violated for any $n$, while the NEC and SEC are satisfied for $1<n<34$. Therefore, the value $n \sim 1.29$ obtained from our observational analysis lies within the validity ranges of NEC and SEC. On the other hand, considering the second set of cosmographic parameters, it turns out that the NEC and WEC are satisfied for $-29<n<1$ and the SEC for $-19<n<1$, whereas the DEC is fulfilled for $-29<n<-0.25$. 
It is worth stressing that, in the latter case, the value of $n$ selected by the cosmological analysis violates all the energy conditions, confirming that the $f(R,G)$ model is capable of behaving like GR with the cosmological constant, thus mimicking the dark energy features. To conclude, the model under investigation behaves well when confronted with late-time observations, though it is unable to properly address the matter-dominated epoch. Nonetheless, similarly to other modified gravity models, a typical solution to the latter problem consists of considering the action of screening mechanisms, implying a gravitational Lagrangian characterized by the presence of additional coupling constants, whose contributions become dominant at different spatial/temporal scales. This, in principle, could allow one to recover the standard behavior at intermediate redshifts and thus properly predict the formation of cosmic structures. In this respect, useful insights could arise from the study of cosmological perturbations and the comparison with the growth of matter overdensity measurements. \begin{acknowledgements} The authors acknowledge the support of Istituto Nazionale di Fisica Nucleare (INFN), {\it iniziative specifiche} GINGER and QGSKY. The authors would also like to thank Salvatore Capozziello for useful discussions. \end{acknowledgements} \appendix \section{Useful relations} \label{app:relations} For the sake of completeness, we here report some useful relations for determining the cosmological dynamics in the case of $f(R,G) = R^n G^m$ gravity. Specifically, starting from the definitions given in Eqs.~\eqref{cosmoexprR} and \eqref{cosmoexprG}, the time derivatives of the Ricci scalar and the Gauss-Bonnet term take the form \begin{align} \dot{R}&= 6(4H\dot{H}+\ddot{H})\,,\\ \dot{G}&=24H(4H^2\dot{H}+2\dot{H}^2+H\ddot{H}) \,. 
\end{align} Moreover, the time derivatives of the functions appearing in Eqs.~\eqref{rde} and \eqref{pde} can be expressed in terms of the above equations and the derivatives with respect to $R$ and $G$ as follows: \begin{align} \dot{f_R}=&\ \dot R f_{RR}+\dot G f_{RG}\,,\\ \dot{f_G}=&\ \dot G f_{GG} + \dot R f_{RG}\,,\\ \ddot{f_R}=&\ \ddot{R}f_{RR}+\ddot{G}f_{RG}+\dot{R}^2 f_{RRR}+\dot{G}^2 f_{RGG}+2\dot{R}\dot{G}f_{RRG}\,,\\ \ddot{f_G}=&\ \ddot{G} f_{GG} +\ddot R f_{RG}+\dot{G}^2 f_{GGG}+\dot{R}^2 f_{RRG}+2\dot{R}\dot{G}f_{RGG}\,. \end{align} \section{Effective dark energy pressure} \label{app:p_de} In the case of $f(R,G) = R^n G^m$ gravity models, the dark energy pressure \eqref{pde} can be written in the compact form \eqref{eq:p_de}, where the explicit expressions of the coefficients $c_k(j,s;n,m)$ are \begin{align} c_0=&\ m \left(-m ^2+3 m -2\right)j^2\,, \\ c_1=&\ m (m -1) \left[3 j^2 (n +m -2)-j (4 n +6 m -6)+s\right], \\ c_2=&\ m \left\{2 j \left[9 n m +n (4 n -11)+7 m ^2-16 m +9\right]\right. \nonumber \\ &\hspace{0.6cm}-(2 n +3 m -3) (2 n +3 m +s-2)\nonumber \\ &\left.\hspace{0.6cm}-3 j^2 (n +m-2) (n +m -1)\right\}, \\ c_3=&\ (m -1) \left[m ^2 \left(j^2-6 j+15\right)-m \left(2 j^2-18 j-3 s+15\right)\right.\nonumber \\ &\left.+3\right]+n ^2 \left[12 m +3 (m -1) j^2-2 (5 m -7) j+s-10\right]\nonumber \\ & +(j-2)^2 n^3+n \left[19 m ^2-25 m +\left(3 m ^2-6 m +2\right) j^2\right.\nonumber \\ &\left.-2 \left(8 m ^2-13 m +5\right) j+4 m s-s+3\right], \\ c_4=&\ n \left\{3 m -2 \left[m (3 m +2) j+j+m (s-3 m )\right]+s+13\right\}\nonumber \\ &+2 n ^3 (2-j)-n ^2 \left[-5 m +2 (m -2) j+s+8\right] \nonumber \\ &+(m -1) \left\{9+m \left[5 m -6 (m +1) j-s+11\right]\right\},\\ c_5=&\ n ^2 \left[m (4 j-9)-8\right]+(m -1) \left[m ^2 (4 j-15)-3 m-9\right] \nonumber \\ &+ n \left(m ^2 (8 j-17)+m (6-4 j)-2\right)+n^3\,, \\ c_6=&\ 4n m (2-n) -(n -4) n +3 m ^2-3\,, \\ c_7=&\ 2 m (2 m -1) (n +m -1) \,. 
\end{align} \section{Solutions to vacuum field equations} \label{app:vacuum solutions} Making use of the relations reported in Appendix~\ref{app:relations}, it is possible to find analytic solutions for the scale factor of $f(R,G)=R^n G^m$ gravity in vacuum. In particular, it turns out that the theory under consideration admits two different sets of solutions. The first one is a time power-law scale factor of the form $a(t) = a_0 t^\ell$, with \begin{eqnarray} &&\small \ell = \left[\frac{1}{{2 (2 m+n-2)}}\right]\Big\{-8 m^2-8 m n+11 m-2 n^2 \nonumber \\ &&+4 n-3 \pm \Big[\Big(8 m^2+8 m n-11 m+2 n^2 -4 n+3\Big)^2 \\ && +4 (2 m+n-2) \Big(4 m^2+6 m n-5 m +2 n^2-3 n+1 \Big) \Big]^{\frac{1}{2}} \Big\}. \nonumber \end{eqnarray} Setting $m = 1-n$, the solution takes the form $a(t) = a_0 t^{2n-1}$, as written in Eq.~\eqref{ExactSol}. Another solution occurs when considering exponential scale factors of the form $a(t) = a_0 e^{s\, t}$, with $s$ being a real number. However, in order for this scale factor to be the solution to the field equation, we must also have $m=1-n/2$.
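The relations above lend themselves to a quick symbolic check. The following sketch (illustrative, using sympy) verifies the time derivatives of $R$ and $G$ quoted in Appendix~\ref{app:relations}, starting from $R = 6(\dot H + 2H^2)$ and $G = 24H^2(\dot H + H^2)$, and confirms that one branch of the power-law exponent $\ell$ reduces to $2n-1$ when $m = 1-n$:

```python
import sympy as sp

t, n, m = sp.symbols('t n m')
H = sp.Function('H')(t)

# FLRW curvature invariants and the claimed time derivatives
R = 6 * (sp.diff(H, t) + 2 * H**2)
G = 24 * H**2 * (sp.diff(H, t) + H**2)
Rdot_claim = 6 * (4 * H * sp.diff(H, t) + sp.diff(H, t, 2))
Gdot_claim = 24 * H * (4 * H**2 * sp.diff(H, t)
                       + 2 * sp.diff(H, t)**2 + H * sp.diff(H, t, 2))

# Power-law exponent for a(t) = a0 t^ell (the "minus" branch)
A = -8*m**2 - 8*m*n + 11*m - 2*n**2 + 4*n - 3
B = (8*m**2 + 8*m*n - 11*m + 2*n**2 - 4*n + 3)**2 \
    + 4*(2*m + n - 2)*(4*m**2 + 6*m*n - 5*m + 2*n**2 - 3*n + 1)
ell = (A - sp.sqrt(B)) / (2 * (2*m + n - 2))
```

For $m = 1-n$ the term under the square root collapses to $(2n^2-n)^2$, which is why the exponent simplifies so drastically.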
Title: Evidence for Co-rotation Origin of Super Metal Rich Stars in LAMOST-Gaia: Multiple Ridges with a Similar Slope in phi versus Lz Plane
Abstract: Super metal-rich (SMR) stars in the solar neighborhood are thought to have been born in the inner disk and to have reached their present location by radial migration, which is most intense at the co-rotation resonance (CR) of the Galactic bar. In this work, we show evidence for the CR origin of SMR stars in LAMOST-Gaia by detecting six ridges and undulations in the phi versus Lz space coded by median VR, following a similar slope of -8 km/s kpc/deg. This slope is predicted by Monari et al.'s model for the CR of a large and slow Galactic bar. For the first time, we show the variation of angular momentum with azimuth from -10 deg to 20 deg for two outer and broad undulations with negative VR around -18 km/s following this slope. The wave-like pattern with large amplitude outside the CR and the wide peak of the second undulation indicate that the minor merger of the Sagittarius dwarf galaxy with the disk might play a role besides the significant impact of the CR of the Galactic bar.
https://export.arxiv.org/pdf/2208.13353
\title{Evidence for Corotation Origin of Super Metal-Rich Stars in LAMOST-Gaia: Multiple Ridges with a Similar Slope in $\phi$ versus $\Lz$ Plane} \author[0000-0002-8442-901X]{Yuqin Chen} \altaffiliation{CAS Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China;cyq@nao.cas.cn,gzhao@nao.cas.cn} \altaffiliation{School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China} \author[0000-0002-8980-945X]{Gang Zhao} \altaffiliation{CAS Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China;cyq@nao.cas.cn,gzhao@nao.cas.cn} \altaffiliation{School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China} \author[0000-0003-3265-9160]{Haopeng Zhang} \altaffiliation{CAS Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China;cyq@nao.cas.cn,gzhao@nao.cas.cn} \altaffiliation{School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China} \keywords{Galaxy disks (589); Galaxy evolution (594); Stellar kinematics (1608)} \section{INTRODUCTION} With the release of the Gaia data, rich structures in the phase-space distribution have been revealed. For instance, multiple ridges displayed in velocity space were found by \cite{2018A&A...616A..11G} and \cite{2018A&A...619A..72R}. \cite{2019MNRAS.488.3324F} also reported the ridge of the Hercules moving group, and many features, such as the so-called ``horn'' and ``hat'', in the $R$ versus $\Vphi$ plane coded by mean $\VR$ velocity. The origin of these rich structures is a debated topic, with the Hercules moving group as a typical case. 
It has been suggested that the Hercules moving group, first reported in \cite{1999ApJ...524L..35D}, was formed outside of the bar's outer Lindblad resonance (OLR) based on a faster bar \citep{2016MNRAS.457.2569M}. However, \cite{2017ApJ...840L...2P} favored the scenario that orbits trapped at the co-rotation resonance (CR) of a slow bar could produce the Hercules moving group in local velocity space. Based on the Galactic model of \citet{2017ApJ...840L...2P}, \citet{2019A&A...626A..41M} proposed that no fewer than six ridges in local action space can be related to resonances of this slow bar, which induces a wave-like pattern with wavenumber $m=6$. However, \cite{2019MNRAS.490.5414F} explained the ridges in the average Galactocentric radial velocity, as a function of angular momentum and azimuth, as a wavenumber $m=4$ pattern caused by spiral arms. Most of the observed structures in the phase-space distribution can be explained by different combinations of non-axisymmetric perturbations, making their modeling degenerate \citep{2019MNRAS.490.1026H}. Meanwhile, the relative contribution of the CR and OLR resonances (e.g. for the Hercules moving group) differs between the slow and fast rotation speeds of the Galactic bar. The combination of different resonances due to various perturbations makes it difficult to discover their origins. Following the ridges as a function of azimuth provides a promising way to disentangle the effects of different resonances. In this respect, \citet{2019A&A...632A.107M} showed that the Hercules angular momentum changes significantly with azimuth, as expected for the CR resonance of a dynamically old large bar. They proposed that such a variation would not happen close to the OLR of a faster bar, at least for 2 Gyr after its formation. In this letter, we investigate the variation of angular momentum ($\Lz$) with Galactic azimuth ($\phi$) coded by $\VR$ using SMR stars in the LAMOST-Gaia survey as tracers. 
Since SMR stars in the solar neighborhood are thought to originate from the inner disk \citep{2015A&A...582A.122K,2019AJ....158..249C,2003ApJ...591..925C}, features related to the CR resonance of the Galactic bar should be easily identified in this special population. \section{Data} SMR stars with $\feh>0.2$ and spectral signal-to-noise ratio (S/N) larger than 10 are selected from LAMOST DR7 \citep{2006ChJAA...6..265Z,2012RAA....12..723Z,2012RAA....12.1197C,2012RAA....12..735D,2015RAA....15.1089L}, which provides radial velocities and updated stellar parameters based on the methods in \cite{2015RAA....15.1095L}. We then cross-match this sample with {\it Gaia} EDR3 \citep{2021A&A...649A...1G} to obtain proper motions, keeping stars with proper-motion errors less than 0.05 mas\,yr$^{-1}$ and renormalized unit weight error $RUWE<1.44$ \citep{2018A&A...616A...2L}. Bayesian distances are taken from the StarHorse code \citep{2019A&A...628A..94A} using {\it Gaia} EDR3, and stars with relative distance errors less than 10\% are adopted. Radial velocities are based on the LAMOST survey, and stars with radial-velocity errors larger than 10 $\kmprs$ are excluded from the sample. Galactocentric positions, spatial velocities and orbital parameters (apocentric/pericentric distances, $\Rapo$ and $\Rper$) are calculated with the publicly available code {\it Galpot} and its default potential ({\it MilkyWayPotential}) provided by \cite{2017MNRAS.465...76M}. Note that $\Lz$ is calculated following the definition of Equation (3.72) in \cite{2008gady.book.....B}, while \citet{2019A&A...632A.107M} adopted the approximate value $R \Vphi$. We use cylindrical coordinates ($\VR$, $\Vphi$, $\Vz$) with the Sun's distance from the Galactic centre $R=8.21$ kpc, a solar peculiar velocity of ($U_{\odot}, V_{\odot}, W_{\odot}) = $ (11.1, 12.24, 7.25) $\kmprs$ \citep{2010MNRAS.403.1829S} and a circular speed of $V_{c} = 233.1 \kmprs$ \citep{2011MNRAS.414.2446M}. 
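The conversion from heliocentric observables to Galactocentric cylindrical velocities and $\Lz$ can be sketched in a few lines of numpy. This is an illustrative stand-in for the actual pipeline (which uses StarHorse distances and {\it Galpot}); the example star and the sign convention for $\Lz$ (chosen so that prograde rotation gives positive $\Lz$) are assumptions:

```python
import numpy as np

K = 4.74047                      # km/s per mas/yr at 1 kpc
R0, Vc = 8.21, 233.1             # solar Galactocentric radius, circular speed
U_sun, V_sun, W_sun = 11.1, 12.24, 7.25  # solar peculiar motion (km/s)

def galactic_kinematics(l_deg, b_deg, d_kpc, pml_cosb, pmb, vr):
    """Heliocentric (l, b, d, mu_l*, mu_b, v_r) -> (R, phi, vR, Lz).
    Axes: x toward the Galactic centre, y toward l=90 deg, z toward the NGP."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    er = np.array([np.cos(b) * np.cos(l), np.cos(b) * np.sin(l), np.sin(b)])
    el = np.array([-np.sin(l), np.cos(l), 0.0])
    eb = np.array([-np.sin(b) * np.cos(l), -np.sin(b) * np.sin(l), np.cos(b)])
    # Heliocentric velocity, then add solar peculiar + circular motion
    v = vr * er + K * d_kpc * (pml_cosb * el + pmb * eb)
    v = v + np.array([U_sun, V_sun + Vc, W_sun])
    pos = d_kpc * er + np.array([-R0, 0.0, 0.0])   # Sun at (-R0, 0, 0)
    X, Y = pos[0], pos[1]
    R = np.hypot(X, Y)
    phi = np.degrees(np.arctan2(Y, -X))            # Sun at phi = 0
    vR = (X * v[0] + Y * v[1]) / R                 # positive outward
    Lz = -(X * v[1] - Y * v[0])                    # positive for prograde
    return R, phi, vR, Lz

# Example (assumed values, not a sample star):
R, phi, vR, Lz = galactic_kinematics(30.0, 2.0, 1.5, -4.0, -3.0, 20.0)
```

The actual paper additionally derives $\Rapo$, $\Rper$ and the guiding radius by orbit integration in the adopted potential.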
The sample contains 214,199 stars with velocities between $-300 \kmprs$ and $+300 \kmprs$, and velocity dispersions of (36.1, 23.1, 17.3) $\kmprs$ in ($\VR$, $\Vphi$, $\Vz$). Their Galactic locations span $7<R<10$ kpc and $-0.5<Z<1.5$ kpc, with peaks at $R=8.5$ kpc and $|Z|\sim 0.25$ kpc, respectively. \section{The $\phi$ versus $\Lz$ plane} Figure 1 shows the $\phi$ versus $\Lz$ space coded by median $\VR$ for SMR stars. There are six ridges and undulations, generally following a similar slope of $-8 \kmprs \kpcprdeg$, which is predicted for stars with orbits trapped at the CR in the Galactic model of \citet{2019A&A...626A..41M}. The red ridge with positive median $\VR$ following the red line of $\Lz=-8\phi+1600 \kmprskpc$ corresponds to the Hercules moving group in \citet{2019A&A...632A.107M}; for comparison we plot the green line, which has zero slope with $\Lz=1600 \kmprskpc$ at $\phi=0$, as expected for the OLR in the bar model. Note that the range of mean $\VR$ in \citet{2019A&A...632A.107M} (their Fig.~2) is $\pm 20 \kmprs$, while a range of $\pm 50 \kmprs$ in median $\VR$ is adopted in our Fig.~1. The uncertainties of the median $\VR$ for these features are 0.5-1.0 $\kmprs$. Besides the Hercules moving group, two ridges (with intercepts of $1380$ and $1920$ $\kmprskpc$) and three undulations (with intercepts of $1500$, $1800$ and $2100$ $\kmprskpc$) are clearly shown. The similar slope in the $\Lz$ versus $\phi$ plane for the six ridges and undulations indicates that the bar's resonances are not limited to the location of the Hercules moving group ($\Lz=1600 \kmprskpc$), but have a significant effect out to the solar neighborhood ($\Lz=2100 \kmprskpc$). Meanwhile, positive median $\VR$ is found in the inner region of $\Lz<1700 \kmprskpc$, above which median $\VR$ is negative except for a narrow ridge at $\Lz=-8\phi+1920 \kmprskpc$ with median $\VR$ on the order of $10 \kmprs$. 
This transition may indicate that a minor merger event starts to take effect, significantly increasing the variation amplitude and making the undulations wider. Since the slope of $-8 \kmprs \kpcprdeg$ persists, we expect that the role of the CR is still significant in the outer region of $\Lz>1700 \kmprskpc$ for SMR stars. Interestingly, \cite{2022arXiv220606207G} found a somewhat similar velocity distribution with a lower velocity variation (see their Fig. 16), but a change in the sign of $\VR$ velocity (as a function of Galactocentric angle) at a radius of 10 kpc, in phase with the bar angle, is not seen in our SMR sample, because only a small fraction of the SMR stars in our sample can reach 10 kpc. A wave-like pattern between the ridges and undulations is found in both the inner and outer regions, but the amplitudes and widths differ. The two blue undulations in the outer region have (negative) median $\VR$ of $\sim -18 \kmprs$, while the two ridges in the inner disk have (positive) median $\VR$ around $21 \kmprs$. The width of the outermost undulation at $\Lz=2100 \kmprskpc$ is the largest, spanning a range of 2.5 kpc from $7$ to $9.5$ kpc in terms of guiding radius $\Rg$, and its median $\VR$ is the most negative, which indicates that extra mechanisms, such as the minor merger of the Sgr dwarf galaxy, may contribute significantly to this undulation. \section{The distributions of pericentre and apocentre distances} In order to know how far these SMR stars can reach in the Galactic inner and outer disks, Fig.~2 shows the distributions of pericentre and apocentre distances for all stars (red solid lines) and for stars aligned with the Sun and the Galactic centre (black dashed lines). We then fit the distributions for stars at the solar azimuth ($\phi=0$) with two Gaussian functions and obtain two peaks for the pericentre and apocentre distances, respectively. 
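A two-Gaussian decomposition of this kind can be sketched with scipy. The snippet below fits synthetic data; the peak positions and sample sizes are assumptions for illustration, not the actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic pericentre-distance sample (assumed peaks and widths)
rng = np.random.default_rng(1)
rper = np.concatenate([rng.normal(7.25, 0.6, 8000),    # main peak
                       rng.normal(5.62, 0.5, 2000)])   # second peak
counts, edges = np.histogram(rper, bins=60, range=(3.0, 10.0))
centers = 0.5 * (edges[:-1] + edges[1:])

def two_gauss(r, a1, mu1, s1, a2, mu2, s2):
    # Sum of two Gaussians in histogram counts
    return (a1 * np.exp(-0.5 * ((r - mu1) / s1)**2)
            + a2 * np.exp(-0.5 * ((r - mu2) / s2)**2))

p0 = [counts.max(), 7.3, 0.5, 0.3 * counts.max(), 5.6, 0.5]
popt, _ = curve_fit(two_gauss, centers, counts, p0=p0)
mu_main, mu_second = popt[1], popt[4]
```

With reasonable starting values `p0`, the fit recovers the two peak locations; the same decomposition applied separately to the pericentre and apocentre histograms yields the peak pairs quoted below.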
The main peak of the pericentre distance is at 7.25 kpc with a second peak at 5.62 kpc, while the histogram of the apocentre distance shows a main peak at 8.6 kpc and a second peak at 9.4 kpc. It is found that 79\% of SMR stars stay within 4 kpc of the location of the Sun at 8.2 kpc. The second peak of the pericentre distance at 5.6 kpc is interesting because it is close to the CR position at 6 kpc in \citet{2017ApJ...840L...2P} and very close to 5.5 kpc in \cite{2019MNRAS.490.4740B} for a slow bar. There are 36\% of SMR stars with pericentre distances less than 6.5 kpc, and they are significantly affected by the CR of the Galactic bar. Finally, the apocentre distribution has a second peak at 9.4 kpc, which is within the bar's OLR at about 10.5 kpc according to \citet{2017ApJ...840L...2P}. \section{Comparison with the Galactic bar model} Based on a realistic model for a slowly rotating large Galactic bar with pattern speed $\Omega_b=39 \kmprs\, kpc^{-1}$, \citet{2019A&A...626A..41M} showed no fewer than six ridges in local action space that can be related to resonances with the bar. It is interesting that SMR stars in the present sample do show six ridges and undulations in the $\Lz$ versus $\phi$ plane. For a direct comparison, we adopt the Galactic coordinate frame for the stellar velocity (U,V,W) and show the $R-V$ diagram coded by median $-U$ in Fig.~3, which matches their Fig.~6 quite well. Specifically, the ridge at $\Lz=1380 \kmprskpc$ corresponds to the CR (their green line) feature and the undulation at $\Lz=2100 \kmprskpc$ fits the OLR (their red line) feature. The two ridges (at $1600$ and $1920 \kmprskpc$) are associated with the 6:1 (pink) and 3:1 (purple) resonances, and the undulation at $1800 \kmprskpc$ is related to the 4:1 (blue) resonance. 
Note that the two ridges at $\Lz=1380 \kmprskpc$ and $\Lz=1600 \kmprskpc$ in the present work are the same as the strong positive $\VR$ features near $\Lz=1400$ and $\Lz=1600$ observed in the solar neighborhood by \citet{2021MNRAS.505.2412C} based on {\it Gaia DR2} as a result of the bar's resonances. They also suggested a slow bar, with a current pattern speed of $\Omega_b=35.5 \kmprs\, kpc^{-1}$, and placed the corotation radius at $6.6$ kpc. Moreover, \citet{2021MNRAS.500.4710C} introduced a decelerating bar model, whose corotation resonance can reproduce the offset and strength of the Hercules stream in the local $\VR$ versus $\Vphi$ plane and the double-peaked structure of mean $\VR$ in the $\Lz - \phi$ plane, due to the accumulation of orbits near the boundary of the resonance. Further comparison of this model's results with observations is desired. In sum, the multiple ridges and undulations found for SMR stars in Fig.~1 can be explained by the bar resonances in the model of \citet{2019A&A...626A..41M}. Their similar slope indicates that these features (even in the OLR region) are affected by the CR of the slow and long bar. But the strong $\VR$ modulations from ridges to undulations and the very wide range of the last undulation beyond the CR region suggest that a minor merger may also play a role in the Galactic disk. Finally, we investigate chemical signatures of the six ridges and undulations based on abundances in \cite{10.1093/mnras/stac1959}; there are 42,109 SMR stars with $\mgfe$ ratios available. There is no difference in $\mgfe$ between ridges and undulations. Both have a peak at $\mgfe \sim 0.05$ dex, typical of the old bar. Based on the LAMOST medium-resolution survey, \cite{2021RAA....21..153Z} also suggested that SMR stars have slightly enhanced $\mgfe=0.08$ dex. 
Note that stars from the Sgr dwarf galaxy itself usually have low $\mgfe\sim-0.05$ at $\feh\sim-0.4$, and there is no star with solar metallicity, as shown in Fig.~9 of \cite{2021SCPMA..6439562Z}. Therefore, the SMR stars in the present work are not from the Sgr dwarf galaxy itself, but the minor merger of the Sgr dwarf galaxy with the Galactic disk could induce strong modulations of $\VR$ from ridges to undulations and make the undulation at $\Lz=2100 \kmprskpc$ wider. \section{Conclusions} We have detected, for the first time, six ridges and undulations following a single slope in the $\phi$ versus $\Lz$ plane coded by median $\VR$ for a specific population of SMR stars based on LAMOST DR7 and {\it Gaia} EDR3. Specifically, the variation of radial velocity with angular momentum $\Lz$ and azimuth $\phi$ for the six ridges and undulations follows a similar slope of $-8 \kmprs \kpcprdeg$, which is predicted for stars with orbits trapped at the CR of a slow bar in the model of \citet{2019A&A...626A..41M}. The median $\VR$ shifts from positive to negative values at $\Rg\sim 7.4$ kpc (for $\phi=0$, $\Lz\sim1700 \kmprskpc$). This transition may indicate that a minor merger starts to take effect together with the contribution from the CR of the bar. The outermost undulation, around $\Rg\sim 8.7$ kpc (for $\phi=0$, $\Lz\sim2100 \kmprskpc$), is wide (three times wider than the ridges), which is probably also related to the minor merger of the Sgr dwarf galaxy with the Galactic disk, but this remains an open question for future study. Moreover, since the major merger of the Gaia-Sausage-Enceladus galaxy brought its metal-rich component \citep{2021SCPMA..6439562Z}, as well as accreted halo stars with special chemistry \citep{2019NatAs...3..631X}, into the solar neighborhood, it is interesting to probe how major merger events affect the existence of these ridges and undulations. 
Finally, many moving groups exist in the solar neighborhood \citep{2009ApJ...692L.113Z}, and it is of high interest to probe whether they leave imprints in the $\phi$ versus $\Lz$ plane coded by median $\VR$, as the Hercules moving group does. \begin{acknowledgements} This study is supported by the National Natural Science Foundation of China under Grant Nos. 11988101, 11890694, and National Key R\&D Program of China under Grant No. 2019YFA0405502.\\ Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. \end{acknowledgements} \bibliography{msv1}{} \bibliographystyle{aasjournal}
Title: Rotating and Expanding Gas in Binary Post-AGB Stars
Abstract: There is a class of binary post-AGB stars (binary systems including a post-AGB star) that are surrounded by Keplerian disks and outflows resulting from gas escaping from the disk. To date, seven sources have been studied in detail through interferometric millimeter-wave maps of CO lines (ALMA/NOEMA). For the cases of the Red Rectangle, IW Carinae, IRAS 08544-4431, and AC Herculis, it is found that more than 85% of the total nebular mass is located in the disk with Keplerian dynamics. The remainder of the nebular mass is located in an expanding component. This outflow is probably a disk wind consisting of material escaping from the rotating disk. These sources host disk-dominated nebulae. On the contrary, our maps and modeling of 89 Herculis, IRAS 19125+0343, and R Scuti, which allowed us to study their morphology, kinematics, and mass distribution, suggest that in these sources the outflow is clearly the dominant component of the nebula (around 75% of the total nebular mass), resulting in a new subclass of nebulae around binary post-AGB stars: the outflow-dominated sources. Besides CO, the chemistry of this type of source has been practically unknown thus far. We also present a very deep single-dish radio molecular survey in the 1.3, 2, 3, 7, and 13 mm bands (around 600 h of telescope time). Our results and detections allow us to classify our sources as O- or C-rich. We also conclude that the calculated abundances of the detected molecular species other than CO are particularly low compared with AGB stars. This fact is very significant in those sources where the rotating disk is the dominant component of the nebula.
https://export.arxiv.org/pdf/2208.02846
\newcommand{\on}{89\,Her } \newcommand{\onp}{89\,Her} \newcommand{\iras}{IRAS\,19125+0343 } \newcommand{\irasp}{IRAS\,19125+0343} \newcommand{\ac}{AC\,Her } \newcommand{\acp}{AC\,Her} \newcommand{\rs}{R\,Sct } \newcommand{\rsp}{R\,Sct} \newcommand{\x}{\,$\times$\,} \newcommand{\secp}{\mbox{\rlap{.}$''$}} \section{Introduction} There is a~type of post-AGB star characterized by its spectral energy distribution (SED), which shows a~near-infrared (NIR) excess indicating the~presence of hot dust close to the~stellar system \citep[][]{vanwinckel2003,oomen2018}. Their IR spectra reveal the~presence of highly processed dust grains, so~the dust might be located in~stable structures \citep[][]{gielen2011a,jura2003,sahai2011}. All the above suggests the~presence of circumbinary disks. Their disk-like shape has been confirmed by interferometric IR data (see, e.g., \citep{hillen2017,kluska2019}). Their radial velocity curves reveal that the~post-AGB stars are part of a~binary system (see, e.g., \citep{oomen2018}). The~systematic detection of binary systems in~these objects strongly suggests that the~angular momentum of the~disks comes from the~stellar~system. Observations of $^{12}$CO and $^{13}$CO in~the~$J= 2-1$ and $J= 1-0$ lines (230.538 and 220.398\,GHz, respectively) have been well analyzed in~sources with such~NIR excess \citep[][]{bujarrabal2013a}. There are two types of CO line profiles: (a) narrow CO line profiles characteristic of rotating disks, with weak wings, which implies that most of the~nebular mass is contained in~the~disk (with Keplerian dynamics), and~(b) composite CO line profiles including a~narrow component, which very probably represents emission from the~rotating disk, and~strong wings, which represent emission from the~outflow, which could dominate the~nebula \citep[][]{bujarrabal2013a}. 
These types of line profiles are also found in~young stars surrounded by rotating disks made of remnants of the~interstellar medium (ISM), and~agree with those expected from disk-emission modeling (see, e.g., \citep{bujarrabal2013a,guilloteau2013}). These~results indicate that the~CO emission lines of our sources come from disks with Keplerian or quasi-Keplerian~rotation. The study of the~chemistry of this class of binary post-AGB stars, together with the~very detailed kinematic analysis of the~Keplerian disks and outflows around these sources, is based on published articles \mbox{(see \citep{gallardocava2021,gallardocava2022}).} This paper is organized as follows. Technical information on our observations is given in~Section~\ref{observaciones}. In~Section~\ref{obs}, we~present mm-wave interferometric maps and models of the~most representative cases of a~disk-dominated nebula (AC\,Herculis), an~outflow-dominated nebula (R\,Scuti), and~an intermediate case in~between the~disk- and the~outflow-dominated nebulae (89\,Herculis). We present the~first molecular survey in~this kind of object in~Section~\ref{mole}, together with discussions of molecular intensities and chemistry. Finally, we~summarize our conclusions in~Section~\ref{sec5}. \section{Observations} \label{observaciones} We show interferometric maps of our sources obtained with the~NOEMA interferometer. Observations of the~$^{12}$CO $J= 2-1$ rotational transition were carried out towards AC\,Herculis, R\,Scuti, and~89\,Herculis. Observations of the~$^{13}$CO $J= 2-1$ rotational transition were also obtained for 89\,Herculis. Our single-dish observations were performed using the~30\,m\,IRAM telescope (Granada, Spain) and the~40\,m\,Yebes telescope (Guadalajara, Spain). We~observed in the~1.3, 2, 3, 7, and~13\,mm bands. 
Our observations required a~total telescope time of $\sim$600\,h, distributed over the~two telescopes and several projects, to observe the~nebulae around the~following binary post-AGB stars: AC\,Her, the~Red\,Rectangle, HD\,52961, IRAS\,19157$-$0257, IRAS\,18123+0511, IRAS\,19125+0343, AI\,CMi, IRAS\,20056+1834, and~R\,Sct. \section{NOEMA~Observations} \label{obs} In this section, we~present the~results directly obtained from the~observations for AC\,Her, R\,Sct, and~89\,Her \citep[][]{gallardocava2021}. We show our NOEMA maps per velocity channel and position--velocity (PV) diagrams along~the equatorial rotating disk and along the~axis of the~nebula (see~Figures~\ref{fig:ac12mapas}, \ref{fig:acher12pv}, \ref{fig:rs12mapas}, \ref{fig:rsct12pv}, \ref{fig:89hermapas}, and~\ref{fig:89her13pv}). \subsection{AC\,Herculis} The $^{12}$CO $J=2-1$ mm-wave interferometric maps are presented in~Figure\,\ref{fig:ac12mapas}. The~left panel of Figure\,\ref{fig:acher12pv} shows the~PV diagram along the~equatorial direction, which very clearly displays the~characteristic signature of rotation with Keplerian dynamics. In~contrast, the~analysis of the~PV diagram along the~nebula axis direction reveals the~presence of an~axially outflowing component (see Figure\,\ref{fig:acher12pv} \textit{right}). A theoretical PV diagram along the~axis direction in~the~presence of a~disk with Keplerian dynamics should show emission with a~form close to a~rhombus, with~equal or very similar emission in~all~four quadrants of the~PV diagram. Nevertheless, we~do not see equal emission in~the~four quadrants: we~see slightly inclined emission at central velocities at around $\pm$1$''$. This~fact can be explained by the~existence of an~expanding component that surrounds the~rotating~disk. 
\subsection{R\,Scuti} We present combined NOEMA maps and 30\,m maps of R\,Sct in~$^{12}$CO $J=2-1$ emission in~Figure\,\ref{fig:rs12mapas} and PV diagrams in~Figure\,\ref{fig:rsct12pv}. In both figures, we~clearly see two components: an~intense inner region and an~extended component of $\sim$40$''$ surrounding the~inner region. This extended and expanding component contains most of the~total nebular mass (see Section~\ref{secrsctmodel}). The PV diagram along the~equatorial direction shows an~intense central clump in~the~innermost region of the~nebula that may represent the~unresolved rotating disk (see Figure\,\ref{fig:rsct12pv} \textit{left}). The velocity dispersion of the~inner (and unresolved) central condensation is similar to that of other post-AGB nebulae with disks, including a~significant lack of blueshifted emission (see, e.g.,~\citep{bujarrabal2016}). The PV diagram along the~nebula axis reveals the~structure of the~nebula (see Figure\,\ref{fig:rsct12pv} \textit{right}): we~clearly see two large cavities at approximately $\pm$10$''$. We see this type of structure in~other pPNe, such as M\,2$-$56 \citep{castrocarrizo2002} or M\,1$-$92 \citep{alcolea2007}. \subsection{89\,Her} We present NOEMA maps and PV diagrams of 89\,Her in~$^{13}$CO $J=2-1$ emission (and $^{12}$CO $J=2-1$; see~\citep{gallardocava2021} for further details) in~Figures~\ref{fig:89hermapas} and \ref{fig:89her13pv}. We see an~intense central clump and an~extended hourglass-shaped structure surrounding this central clump. For a~distance of 1\,kpc, the~size of the~hourglass-like structure is at least 10,000\,AU. \subsection{Models} \label{models} Our models consist of a~disk with Keplerian dynamics and an~extended and expanding component escaping from the~rotating disk and surrounding it. The outflowing component can present different shapes, such as an~hourglass, an~ellipsoid, etc. 
We assume LTE populations, which is a~reasonable assumption for low-$J$ rotational levels of CO transitions. We consider power laws for the~density ($n$) and rotational temperature ($T$). Additionally, we~also consider Keplerian dynamics in~the~rotating disk ($V_{rot\,K}$) and radial expansion in~the~extended component ($V_{exp}$). We must highlight that our code produces results that can be quantitatively compared to~observations. \begin{equation} n=n_{0}\left(\frac{r_{0}}{r}\right)^{\kappa_{n}}, \end{equation} \begin{equation} T=T_{0}\left(\frac{r_{0}}{r}\right)^{\kappa_{T}}, \end{equation} \begin{equation} V_{rot_{K}} = V_{rot_{K_{0}}}\sqrt{\frac{10^{16}}{r}}, \end{equation} \begin{equation} V_{exp} = V_{exp_{0}} \frac{r}{10^{16}}. \end{equation} \subsubsection{AC\,Her} Our proposed model for the~structure of AC\,Her (Figure\,\ref{fig:densidades}; see also Figures~\ref{fig:ac12mapas}~and~\ref{fig:acher12pv}) is very similar to the~one found for the~Red\,Rectangle, IRAS\,08544$-$4431, and~IW\,Car (see~\citep{bujarrabal2016,bujarrabal2017, bujarrabal2018}). The total mass of the~nebula is 8.3\x 10$^{-4}$\,M$_{\odot}$. Our model predicts that the~mass of the~outflow must be $\leq$5\% of the~total mass. Thus, AC\,Her is clearly a~binary post-AGB star surrounded by a~disk-dominated nebula, since the~mass of the~Keplerian disk is at least 19 times larger than that of the~outflow. The Keplerian rotation velocity field of the~disk is compatible with a~central total stellar mass of $\sim$1\,M$_{\odot}$. \subsubsection{R\,Sct} \label{secrsctmodel} Our proposed model for the~structure of R\,Sct is presented in~Figure\,\ref{fig:densidades} (see also Figures~\ref{fig:rs12mapas}~and~\ref{fig:rsct12pv}). There are slight differences between the~PV diagrams, but~all of them are accounted for within our uncertainties. We~note that acceptable models cannot differ much from our best fit. 
The nature of R\,Sct is not yet clear, but~our interferometric maps firmly suggest that this source is also a~binary post-AGB star surrounded by a~disk with Keplerian dynamics and by a~high-mass extended and expanding component. The mass of the~nebula is found to be $\sim$3.2\,$\times$\,10$^{-2}$\,M$_{\odot}$, and approximately 25\% of the~nebular material would be placed in~the~rotating disk. This fact, together with the~large size of the~outflow, allows us to classify this source as an~outflow-dominated post-AGB nebula. The disk with Keplerian dynamics is compatible with a~central stellar mass of 1.7\,M$_{\odot}$.

\subsubsection{89\,Her}

The total mass of the~nebula around 89\,Her is 1.4\,$\times$\,10$^{-2}$\,M$_{\odot}$, and our proposed model predicts that the~mass of the~hourglass must be $\sim$50\% of the~total mass (Figure\,\ref{fig:densidades}; see also Figures~\ref{fig:89hermapas}~and~\ref{fig:89her13pv}). Thus, this source is in~between the~disk- and outflow-dominated sources. We find that the~disk with Keplerian dynamics is compatible with a~central stellar mass of 1.7\,M$_{\odot}$.

\section{First Molecular Survey in~Binary Post-AGB~Stars}
\label{mole}

The chemistry of this kind of binary post-AGB source with rotating disks is practically unknown. We~present a~very deep and wide survey of radio lines in~ten of our sources: AC\,Her, the~Red\,Rectangle, 89\,Her, HD\,52961, IRAS\,19157$-$0257, IRAS\,18123+0511, IRAS\,19125+0343, AI\,CMi, IRAS\,20056+1834, and~R\,Sct. All~of them have been observed at 7 and 13\,mm, and~most of them have also been observed in~the~1.3, 2, and~3\,mm bands; see Table\,\ref{lineas} (see~\citep{gallardocava2022}).
\begin{table}[H]
\caption{Molecular transitions detected in~this~work.}\label{lineas}
\tablesize{\footnotesize}
\newcolumntype{C}{>{\centering\arraybackslash}X}
\begin{tabularx}{\textwidth}{>{\centering}m{0.8cm}C>{\centering}m{2.5cm}CCCCC}
\toprule
\multicolumn{4}{c}{\textbf{O-Bearing Molecules}} & \multicolumn{4}{c}{\textbf{C-Bearing Molecules}}\\\cmidrule{1-8}
{\textbf{Species}} & \multicolumn{2}{c}{\textbf{Transition}} & \boldmath{$\nu$} \textbf{[MHz]}& {\textbf{Molecule}} & \multicolumn{2}{c}{\textbf{Transition}} & \boldmath{$\nu$} \textbf{[MHz]}\\
\midrule
$^{28}$SiO & $v=0$ & $J=1-0$ & 43,423.85 & HCN & $v=0$ & $J=1-0$ & 88,630.42 \\
 & $v=0$ & $J=2-1$ & 86,846.99 & CS & $v=0$ & $J=3-2$ & 146,969.00 \\
 & $v=0$ & $J=5-4$ & 217,104.98 & SiS & $v=0$ & $J=5-4$ & 90,771.56 \\
 & $v=1$ & $J=1-0$ & 43,122.08 \\
 & $v=1$ & $J=2-1$ & 86,243.37 \\
 & $v=2$ & $J=1-0$ & 42,820.59 \\
SO & $v=0$ & $J_{N}=6_{5}-5_{4}$ & 219,949.44 \\
H$_{2}$O & $v=0$ & $J_{Ka,\,Kc}=6_{1,\,6}-5_{2,\,3}$ & 22,235.08 \\
\bottomrule
\end{tabularx}
\end{table}
\unskip

\subsection{Molecular~Richness}

We show in~Figure~\ref{fig:moleculas_raras} integrated intensity ratios between the~main molecules other than CO (SO,~SiO, SiS, CS, and~HCN) and CO ($^{13}$CO $J=2-1$ and $^{12}$CO $J=1-0$). We~also compare these molecular integrated intensities with the~12, 25, and~60\,$\upmu$m IR emission. The~averages of our results are represented with black horizontal lines. We compare our results with the~molecular emission of AGB stars (blue and red horizontal lines represent averaged values of the~molecular emission for O- and C-rich AGB stars, respectively). We~note that our sources always present, on~average, low emission in~molecules other than CO. Note the~large range (logarithmic scale) of intensity ratios. These~low intensities are more remarkable in~the~disk-dominated sources, such as AC\,Her and the~Red\,Rectangle (see~\citep{gallardocava2021, gallardocava2022}).
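The comparison plotted in Figure~\ref{fig:moleculas_raras} reduces to log-scale integrated-intensity ratios and their averages. A minimal sketch of that arithmetic follows; the numerical intensities used in it are invented, purely to show the computation.

```python
import math

def log_ratio(i_line, i_co):
    """log10 of an integrated-intensity ratio, as plotted (log scale)."""
    return math.log10(i_line / i_co)

def mean_log_ratio(ratios):
    """Average of the log ratios, i.e. the 'black horizontal line'."""
    return sum(math.log10(r) for r in ratios) / len(ratios)

# Invented example: a source whose SiO line is 100x weaker than 13CO J=2-1.
example = log_ratio(0.01, 1.0)  # a strongly "molecule-poor" point, ~ -2
```

Averaging in log space, as done here, matches the logarithmic axis of the figure; averaging the raw ratios instead would be dominated by the few brightest sources.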
\subsection{The Discrimination between O- and C-Rich~Envelopes}

Evolved stars present either an~O/C\,>\,1 or an~O/C\,<\,1 chemistry, and can accordingly be classified as O- and C-rich environments, respectively. The O/C abundance ratio has important effects on~the~molecular abundances. The lines of O-bearing molecules (such as SiO and SO) are much more intense in~O-rich environments than in~C-rich ones. On~the contrary, the~lines of C-bearing molecules are much more intense in~C-rich environments \mbox{(see,~e.g.,~\citep{bujarrabal1994a, bujarrabal1994b})}. Additionally, SiO and H$_{2}$O maser emission is exclusive to O-rich environments (see,~e.g.,~\citep{kim2019}).

\textls[-15]{We analyze the integrated intensities of pairs of molecular transitions, and~this analysis is crucial to distinguish between O- and C-rich environments (see Figure~\ref{fig:cuadros_mol_mol}). When an~O-bearing molecule is compared with a~C-bearing one, we~find that the integrated intensities are larger in~O-rich than in~C-rich sources. Our results are compared with CSEs around AGB stars, because~they are prototypical environments rich in~molecules.}

Based on~the~maser detection of O-bearing molecules (SiO maser emission in~R\,Sct, AI\,CMi, and~IRAS\,20056+1834; H$_{2}$O maser emission in~R\,Sct and AI\,CMi), we~classify some of our sources as O-rich. On the~contrary, and~based on~the~integrated intensity ratios, we~classify the~nebula around 89\,Her as C-rich (see Figure\,\ref{fig:cuadros_mol_mol}). Therefore, the~nebulae around AC\,Her, the~Red\,Rectangle, AI\,CMi, IRAS\,20056+1834, and~R\,Sct present an~O/C\,>\,1 chemistry, while 89\,Her presents an~O/C\,<\,1 environment \citep{gallardocava2022}.

\section{Conclusions}\label{sec5}

There is a~class of post-AGB star that is part of a~binary system with a~significant NIR excess and that is surrounded by a~disk with Keplerian dynamics and an~extended and expanding component composed of gas escaping from the~disk and surrounding~it.
Based on our observational data and model results, we~find disk-dominated sources that present $\geq$85\% of the~total nebular mass located in~the~Keplerian disk. This~is the~case of AC\,Herculis. We also find a~subclass of these binary post-AGB stars in~which the~disk contains only $\sim$25\% of the~total mass of the~nebula, such as R\,Scuti. The extended components of these outflow-dominated sources are mainly composed of cold gas. Moreover, our NOEMA maps and modeling suggest that the~nebula around 89\,Her is an~intermediate case between the~disk- and the~outflow-dominated sources, since around 50\% of the~nebular mass is located in~the~rotating disk. See Section~\ref{obs} for further details. HD\,52961 and IRAS\,19157$-$0257 would also belong to this intermediate case. However, the~existence of this intermediate type is not clear, because~these objects were classified as intermediate sources under high uncertainties and they could belong to either subclass: the~disk- or the~outflow-dominated sources.

In the~case of 89\,Her, our new 30\,m\,IRAM on-the-fly observations recover all the~filtered flux. These~maps show a~larger hourglass-like structure compared to that in~the NOEMA maps. According to these new maps and preliminary results, the~hourglass-like structure around 89\,Her could contain most of the~material (Gallardo Cava~et~al., in~prep).

We present the~first survey in~the~search for molecules other than CO in~binary post-AGB stars surrounded by Keplerian disks (see Section~\ref{mole}). The emission of molecules other than CO in~our sources is low, and this fact is especially remarkable in~the~disk-dominated nebulae. Additionally, and according to our analysis, we~catalog the~chemistry of 89\,Her as C-rich. On~the contrary, we~find O-rich environments in~AC\,Her, the~Red\,Rectangle, AI\,CMi, IRAS\,20056+1834, and~R\,Sct.
\newpage
\vspace{6pt}

\authorcontributions{Conceptualization, I.G.C., V.B., and J.A.; methodology, I.G.C., V.B., and J.A.; software, I.G.C. and V.B.; validation, I.G.C., V.B., J.A., M.G.-G., A.C.-C., H.V.W., and M.S.-G.; formal analysis, I.G.C., V.B., J.A., and H.V.W.; investigation, I.G.C., V.B., and J.A.; resources, I.G.C., V.B., J.A., M.G.-G., A.C.-C., H.V.W., and M.S.-G.; data curation, I.G.C., V.B., J.A., M.G.-G., and A.C.-C.; writing—original draft preparation, I.G.C.; writing—review and editing, I.G.C., V.B., and J.A.; visualization, I.G.C., V.B., and J.A.; supervision, I.G.C., V.B., and J.A.; project administration, J.A.; funding acquisition, J.A. and V.B. All authors have read and agreed to the published version of the manuscript.}

\funding{This work is part of the~AxiN and EVENTs\,/\,NEBULAE\,WEB research programs supported by Spanish AEI grants AYA\,2016-78994-P and PID2019-105203GB-C21. I.G.C. acknowledges Spanish MICIN for the~funding support of~BES2017-080616.}

\dataavailability{Not applicable.}

\conflictsofinterest{The authors declare no conflicts of~interest.}

\begin{adjustwidth}{-\extralength}{0cm}
\reftitle{References}
\end{adjustwidth}
Title: Conceptual Design of the Modular Detector and Readout System for the CMB-S4 survey experiment
Abstract: We present the conceptual design of the modular detector and readout system for the Cosmic Microwave Background Stage 4 (CMB-S4) ground-based survey experiment. CMB-S4 will map the cosmic microwave background (CMB) and the millimeter-wave sky to unprecedented sensitivity, using 500,000 superconducting detectors observing from Chile and Antarctica to map over 60 percent of the sky. The fundamental building block of the detector and readout system is a detector module package operated at 100 mK, which is connected to a readout and amplification chain that carries signals out to room temperature. It uses arrays of feedhorn-coupled orthomode transducers (OMT) that collect optical power from the sky onto dc-voltage-biased transition-edge sensor (TES) bolometers. The resulting current signal in the TESs is then amplified by a two-stage cryogenic Superconducting Quantum Interference Device (SQUID) system with a time-division multiplexer to reduce wire count, and matching room-temperature electronics to condition and transmit signals to the data acquisition system. Sensitivity and systematics requirements are being developed for the detector and readout system over a wide range of observing bands (20 to 300 GHz) and optical powers to accomplish CMB-S4's science goals. While the design incorporates the successes of previous generations of CMB instruments, CMB-S4 requires an order of magnitude more detectors than any prior experiment. This requires fabrication of complex superconducting circuits on over 10 square meters of silicon, as well as significant amounts of precision wiring, assembly and cryogenic testing.
https://export.arxiv.org/pdf/2208.02284
\keywords{cosmic microwave background, transition edge sensor, time-division multiplexing}

\section{Introduction}
\label{sec:intro}

CMB-S4 is an upcoming survey experiment that will map the cosmic microwave background (CMB) and millimeter-wave sky to unprecedented sensitivity and precision~\cite{cmbs4_sciencebook}. It will enable a wide range of science in cosmology and astrophysics, and carries the potential to transform our understanding of the universe. First conceived by the community during the 2013 Snowmass physics planning activity as the ultimate ground-based CMB survey, CMB-S4 builds on several prior generations of CMB experiments. The motivation and potential impact of CMB-S4 were also described in the Astro2020 decadal survey, which ranked it as a top priority for the next decade\cite{astro2020}. The science case for CMB-S4 includes searching for primordial gravitational waves predicted by cosmic inflation; searching for the effects of new light relic particles in the early universe; mapping the matter distribution of the universe; and opening a new window on the millimeter-wave, time-variable astronomical sky. These exciting science goals require an exceptionally deep survey to hunt for the faint signal from cosmic inflation, and a precise, high-resolution survey of the majority of the sky to measure as many spatial modes of the CMB as possible. CMB-S4 will therefore conduct two surveys: one targeting 3\% of the sky, sensitive to both degree and arcminute angular scales, to search for the signal from cosmic inflation; and a second targeting 60\% of the sky, sensitive to arcminute angular scales, to search for the effects of light relic particles, map the matter density, and do transient millimeter-wave astronomy.
The survey requirements drive CMB-S4 to use 5--6\,m diameter telescopes, dubbed ``Large Aperture Telescopes'' (LATs), to achieve arcminute angular resolution, and 0.5\,m diameter telescopes, dubbed ``Small Aperture Telescopes'' (SATs), to measure the degree-scale signals with lower instrumental systematic errors. CMB-S4's science goals require 500,000 photon-noise-limited detectors in these telescopes, a significant increase compared to prior experiments. The measurements must also be made across a decade in observing frequencies in order to separate the CMB signal from Galactic foregrounds. CMB-S4 will use the same modular detector and readout electronics implementations in the focal planes of the LATs and SATs, differing only where necessary to achieve optimal performance and production efficiency. The large detector count necessitates robust and scalable methods for fabricating and packaging the detectors and cryogenic readout components; this has influenced the conceptual design for the system. Components must be tested and validated to meet stringent performance requirements, and re-use and re-working of components is expected to be necessary to yield integrated modules meeting deployment criteria. In these proceedings, we describe the conceptual design for the modular detector and readout systems that will be used in CMB-S4. In Section \ref{sec:requirements}, we discuss the development of technical requirements that flow down from the science goals of the experiment, and the key requirements that drive the design for the detector, readout, and module sub-systems. In Section \ref{sec:concept}, we describe the high-level detector and readout concept, including the technologies utilized and the technical choices made in defining the modular detector and readout system. In Section \ref{sec:modularimplementationfors4}, we describe the specific implementation of these technologies for CMB-S4, including the details of the modular design.
In Section \ref{sec:test_stands}, we discuss the development and design validation plan, and in Section \ref{sec:production}, we describe future work on scaling production of these components.

\section{Requirements for CMB-S4 Detector \& Readout System}\label{sec:requirements}

The requirements and performance targets of the CMB-S4 detector and readout system are derived from an iterative process of systems engineering that flows the experiment's science goals down to measurement and technical requirements, and then to requirements on subsystems and individual assemblies and components, as described in Besuner et al. in these proceedings. We report the current state of these requirements, which we continue to mature. An important consideration for technical implementation decisions is the technical readiness of potential designs. Minimizing technical risk is prioritized, leading us to frequently choose proven approaches or low-risk variations on them. Details of technical choices are discussed in Section \ref{sec:concept}, and the implementation and resulting target requirements are discussed in Section \ref{sec:modularimplementationfors4}. The overall project system requirements (Level 1 Technical Requirements) that impact the detector and readout system are driven largely by the instantaneous sensitivity needed to measure CMB temperature and polarization at various observing frequencies, at sufficient spatial resolution while scanning from CMB-S4 telescopes, to achieve the necessary map depths (Level 1 Measurement Requirements) within $\sim$7 years of science operations, while keeping systematic errors subdominant to statistical uncertainty. This translates roughly to requirements on observing-band-wise counts of photon-noise-limited, polarization-sensitive detectors needed to integrate sky signal under historical optical loading conditions and assumed observing efficiency at the sites.
Next-level-down derived requirements (Level 2 Subsystem requirements) are assigned to the three Level 2 subsystems of the project, Detectors, Readout, and Modules, encompassing the detector and readout system. Target values are prescribed for detector counts by observing band, band center and edge frequencies, pixel operability or channel yield, per-detector noise-equivalent power and its acceptable readout contribution, supported dynamic range of input optical power (saturation power), detector time constant, and readout bandwidth. Based on the technical choices described in Section \ref{sec:concept}, such as transition-edge-sensor (TES) bolometers, orthomode transducers (OMTs), and time-division multiplexing (TDM), as well as the interfaces between the detectors and readout, further technical targets are prescribed, such as TES normal resistance, superconducting transition temperature, orthomode transducer orientation, inter-channel crosstalk, electro-thermal feedback loopgain, and readout sampling and multiplexing rates. There is also a requirement for an additional in-series TES of higher saturation power for every polarization sensor to enable optical calibration with external sources. The current target values for some of the system's key requirements are enumerated in Table~\ref{tab:requirements}. These targets are the result of the project's iterative approach, and continue to be optimized together with the other subsystems of the experiment to meet the overall project requirements. Some requirements described above are still being developed. They provide a basis for system prototyping, which is underway and described in Section \ref{sec:prototyping}.
\begin{table}[] \scriptsize \renewcommand{\arraystretch}{1.5} \begin{tabular}{l|cccc|cccc} & \multicolumn{4}{c}{\textbf{Large-aperture telescope (LAT)}} & \multicolumn{4}{|c}{\textbf{Small-aperture telescope (SAT)}} \\ \textbf{Detector Wafer Type} & \textit{ULF} & \textit{LF} & \textit{MF} & \textit{HF} & \textit{LF} & \textit{MF1} & \textit{MF2} & \textit{HF} \\ \hline \textbf{Band center(s) {[}GHz{]}} & 20 & 26 / 39 & 93 / 149 & 227 / 286 & 26 / 39 & 85 / 145 & 95 / 155 & 227 / 286 \\ \textbf{Fractional bandwidth} & 0.25 & 0.33 / 0.45 & 0.32 / 0.28 & 0.26 / 0.21 & 0.33 / 0.45 & 0.24 / 0.22 & 0.24 / 0.22 & 0.26 / 0.21 \\ \textbf{Saturation power {[}pW{]}} & 0.40 & 0.75 / 4.2 & 4.6 / 13 & 32 / 42 & 1.4 / 6.4 & 7.9 / 14 & 7.9 / 14 & 33 / 41 \\ \textbf{Dark NEP {[}aW/$\boldsymbol{\sqrt{Hz}}${]}} & 5 & 6 / 22 & 22 / 42 & 81 / 103 & 13 / 34 & 42 / 57 & 41 / 59 & 109 / 133 \\ \hline \textbf{Optical efficiency} & \multicolumn{8}{c}{$65\%$ average in-band efficiency (for detector module only)} \\ \textbf{Operable channels} & \multicolumn{8}{c}{$\geq 85\%$ per wafer, installed, including losses from detectors, wirebonds, readout} \\ \textbf{Transition temperature} & \multicolumn{8}{c}{160 mK (science TES)} \\ \hline \end{tabular} \vspace{1mm} \caption{Current target parameters for CMB-S4's detector and readout system by observing band groups or wafer type. All except \textit{ULF} have dichroic pixels. The saturation power shown is for the science TES. Dark noise-equivalent power (NEP) includes expected noise contributions from the detector and readout system. } \label{tab:requirements} \end{table} \section{Detector \& Readout System Concept}\label{sec:concept} The CMB-S4 detector and readout system concept employs arrays of TES bolometers, coupled to the sky via feedhorns and read out using time-division multiplexing (TDM) of Superconducting Quantum Interference Devices (SQUIDs). 
These design choices are the product of an analysis of the available technologies, weighing their demonstrated performance, estimated cost, and risk, and comparing to the Level 1 Technical Requirements from the science flowdown. TES bolometers are a well-established technology across CMB-S4's entire frequency range, with demonstrated noise levels and fabrication yields that meet the instrument requirements~\cite{westbrook_2018,DuffYield,sobrin_spt3g_2022,Moncelsi2020,henderson_advact_2016}. Feedhorns coupled to planar orthomode transducers (OMTs) have been demonstrated to provide excellent beam quality and polarization efficiency, with recent advances in direct machining and electromagnetic optimization enabling high-quality horns to be manufactured economically at scale. Each CMB-S4 feedhorn will deliver power to four TESs: two orthogonal polarization modes for each of two frequency bands, defined by on-chip lumped-element filters; this dichroic architecture allows for high sensor density while maintaining high end-to-end optical efficiency in all observing bands. TDM readout has a long heritage in CMB instrumentation, with demonstrated high yield and low noise (both white and $1/f$), and with recent developments for X-ray micro-calorimetry enabling higher multiplexing factors and lower noise levels\cite{henderson_readout_2016,Moncelsi2020}. In this section, we describe each of these major design components in more detail.

\subsection{Optical Coupling}

The detector antennae are feedhorn-coupled planar OMTs. The feedhorns fully define the detector beams. We optimize the feedhorn performance for the specific optical requirements of each telescope and frequency band using Markov Chain Monte Carlo methods~\cite{spline_horns}. Requirements typically include edge taper, ellipticity, and/or spillover.
Previous experiments have typically used stacks of $\sim$40 through-etched Si wafers to build up the feedhorn profiles in an array~\cite{ACTPol_Instrument}. However, new methods using direct machining into Al with custom tooling can reduce the time and cost of production by a factor of $\sim$20~\cite{horn_fab}. The feedhorns are designed to have a monotonically increasing profile shape to enable direct machining~\cite{Zeng2010}. A direct-machined feedhorn array and a single feedhorn cross-section are shown in Figure~\ref{fig:horn}. The OMT consists of four Nb probes on a low-stress SiN membrane that split the polarization into two orthogonal directions. Previous experiments have typically used probes with linear features (left panel of Figure~\ref{fig:OMT})~\cite{OMT_TRUCE}, but CMB-S4 will use a new probe design with a ``wine glass'' shape (right panel of Figure~\ref{fig:OMT}) that has a more uniform response in frequency and a $\sim$2\% efficiency gain in the high band. This new design was made possible by improved computing power enabling more complex numerical optimizations. The OMT design was optimized for the LAT mid-frequency detector wafer (MF), and this design is linearly scaled for the other observing bands, with the scaling factor optimized for each band. The radiation from the probes is passed into a superconducting co-planar waveguide (CPW). A stepped impedance transformer is employed to transition to the low-impedance microstrip lines that make up the on-chip detector filters and circuitry.

\subsection{Transition of optical power to sensor}

After the stepped impedance transformer, the optical signal is transmitted through superconducting microstrip transmission line. The microstrip consists of a ground layer, which is typically niobium (Nb); a dielectric spacer, either a silicon oxide (SiO$_\textrm{x}$) or a silicon nitride (SiN$_\textrm{x}$); and a top conductor, usually the same material as the ground layer.
The signal is routed to an in-line diplexer that partitions the signal into the two optical passbands. The diplexer uses either lumped-element or stub filter components. After the diplexer, the signal is coupled to the bolometer through one of two approaches. One approach terminates a pair of transmission lines (each connecting to one of the opposing OMT fins) across a matched-impedance load resistor. If the input line lengths are equal, then only in-band power from the TE$_{11}$ mode is dissipated in this configuration. The other approach feeds the pair of microstrip lines into a hybrid tee, which produces a sum and a difference output from the two input signals. The higher-order modes from the sum output are terminated on the substrate, and the TE$_{11}$ mode from the difference output is routed to the bolometer, where the signal is dissipated through a sufficient length of lossy material. In places where microstrip transmission lines are required to cross each other, the design employs crossover structures using either vias or additional dielectric and conductor material.

\subsection{Transition-Edge Sensor bolometer}

The superconducting microstrip is terminated on Transition-Edge Sensor (TES)\cite{1995ApPhL..66.1998I} bolometers, which transduce changes in optical power to changes in current. TES bolometers have been adopted widely by recent CMB experiments for their scalable manufacturing\cite{1999ApPhL..74..868G,DuffYield,2018JLTP..193..703P}, well-understood noise properties\cite{2005cpd..book...63I}, and ability to achieve ``background-limited'' operation, where detector noise is dominated by photon shot noise\cite{westbrook_2018,DuffYield,sobrin_spt3g_2022,Moncelsi2020,henderson_advact_2016}. The detectors are voltage biased, with strong electro-thermal feedback resulting in a highly linear response with improved response time \cite{1996ApPhL..69.1801L}.
As a low-impedance sensor, the TES is compatible with several multiplexing readout technologies. CMB TES bolometers are fabricated using thin-film microfabrication techniques used in both superconducting microelectronics and MEMS applications, which enables scaling to large array production. The key parameters for the TES bolometer designs are the superconducting transition temperature of the sensor ($T_c$), the sensor impedance ($R_{op}$), the bolometer thermal conductance ($G$), or equivalently the saturation power ($P_\textrm{sat}$), and the time constant ($\tau$) of the bolometers. Figure \ref{fig:TES} shows a picture of a CMB-S4 prototype TES bolometer. Each bolometer will have two TESs fabricated on the same released area of the bolometer and connected in series\cite{2010SPIE.7741E..0HO}. The two TESs have different superconducting transition temperatures, with one optimized for science observations and the other designed for higher-power calibration sources. For the science TES, $T_c$ is chosen as $\sim$160\,mK, and $G$ is chosen band-wise to minimize the phonon-carrier noise of TES bolometers for an operating temperature of 100\,mK, while providing sufficient dynamic range or operating margin under the predicted optical load for that band without saturation. $P_\textrm{sat}$ is the total power dissipated when the bolometer is voltage biased at the operating point. When observing the sky, this power consists of both incoming optical power and electrical bias power. For CMB-S4, $P_\textrm{sat}$ is designed to be roughly a factor of three greater than the expected optical power, providing margin for changing optical loading due to changes in weather and the telescope's observing elevation. The normal resistance ($R_\textrm{norm}$) of the science TES is currently targeted at $14\pm2$\,m$\Omega$. The $T_c$ for the calibration TES is higher than that of the science TES, and is chosen so that the detector can observe a 450\,K calibration source without saturating.
The calibration $R_\textrm{norm}$ is chosen so that it can be reliably voltage biased when including the normal series resistance of the science TES. The upper bound of the bolometer $\tau$ is chosen such that sensors go through multiple $\tau$ periods as the detector beam scans a physical scale of interest across the sky. The lower bound is set by readout bandwidth and electro-thermal-feedback stability. CMB-S4's science TES bolometer $\tau$'s will have a lower bound of 1\,ms and an upper bound of $\sim 3 - 37$\,ms, depending on the observing band. The science TES will consist of a sputtered film of aluminum manganese (AlMn) alloy\cite{2004ApPhL..85.2137D,2011ITAS...21..196S}. The Mn concentration and the temperature of a controlled bake of the AlMn films are used to set the $T_c$\cite{2016JLTP..184...66L}. Both the lateral dimension and thickness of the TES are chosen to achieve the required $R_\textrm{norm}$. A film of gold, gold/palladium, or palladium, grown either by sputtering or electron-beam evaporation, will be used as a thermal anchor, where the volume of this heat capacity is chosen to provide the required $\tau$. Niobium film is used for the superconducting leads that connect to the TESs and the RF transmission lines. All of the TES structures are fabricated on a low-stress SiN$_\textrm{x}$ membrane, which is then patterned and released to form narrow bridges suspending the TES island. The release is carried out either via a backside through-wafer etch with deep reactive ion etching (DRIE) or a xenon fluoride process. The bolometer $G$ is determined either by the electrons in a strip of normal metal along one of the bridges, or by the phonon transport along the bridges. In the latter case, the ratio of cross-section area to length of the suspended bridges is adjusted to tune the thermal conductance.
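The thermal design logic above ($T_c \approx 160$\,mK, 100\,mK bath, $P_\textrm{sat} \approx 3\times$ the expected optical power, $G$ set to trade phonon noise against saturation margin) can be sketched numerically. This is a rough illustration under common textbook assumptions: a power-law thermal link with exponent $n=3$ and a gradient factor $\gamma \approx 0.5$, neither of which is stated in the text.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def thermal_conductance(p_sat, t_c=0.160, t_bath=0.100, n=3.0):
    """G = dP/dT at Tc for a power-law thermal link P = K*(Tc^n - Tb^n).
    n = 3 (phonon-dominated legs) is an assumption, not a CMB-S4 spec."""
    k = p_sat / (t_c ** n - t_bath ** n)
    return n * k * t_c ** (n - 1)          # [W/K]

def phonon_nep(p_sat, t_c=0.160, t_bath=0.100, n=3.0, gamma=0.5):
    """Thermal-fluctuation NEP = sqrt(4 kB Tc^2 G gamma)  [W/sqrt(Hz)].
    gamma ~ 0.5 roughly accounts for the gradient along the legs (assumed)."""
    g = thermal_conductance(p_sat, t_c, t_bath, n)
    return math.sqrt(4.0 * K_B * t_c ** 2 * g * gamma)

def design_p_sat(p_optical, margin=3.0):
    """CMB-S4 targets Psat roughly 3x the expected optical power."""
    return margin * p_optical
```

For a mid-frequency-scale $P_\textrm{sat}$ of a few pW this yields a phonon NEP of order 10\,aW/$\sqrt{\textrm{Hz}}$, i.e. below the total dark NEP targets of Table~\ref{tab:requirements}, which also include readout and other contributions.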
\subsection{Transition-edge sensor biasing}

A direct-current (DC) voltage bias is applied to each TES bolometer to heat it into its superconducting transition. CMB-S4 will apply these biases using a small ($\sim 400\,\upmu\Omega$) shunt resistance wired in parallel with each TES, as shown in Figure~\ref{fig:mux_column_schematic_incl_tes_biasing}, driven by differential current sources in the warm electronics selected for low $1/f$ noise. A single bias line can drive several dozen TES/shunt pairs wired in series, typically corresponding to a single column of the TDM readout. The use of shared bias lines requires percent-level tolerances on uniformity among detectors in a wafer, such that all detectors on a given bias line can simultaneously achieve near-optimal performance. In parallel with its shunt resistor, each TES is also wired in series with its associated SQUID input coil and with a discrete series inductor. The total inductance in this loop (discrete, input coil, and stray) is tuned to control the bandwidth of the TES circuit. This ensures stable operation of the TES, and controls aliasing of TES and SQUID noise given the sampling rate of the TDM system. This introduces important couplings between design parameters of the TESs and the readout system, which must be accounted for in the system design.

\subsection{\label{ssec:sqampchainandmux} SQUID amplifier chain and multiplexing}

CMB-S4 will use an implementation of time-division multiplexing (TDM) using DC SQUIDs to read out its TES bolometers. This technology has over a decade of heritage on fielded CMB receivers \cite{irwin_tdm_2002}. In TDM, TESs are read out in a 2D grid of rows and columns. The current signal from each TES is first amplified by a dedicated first-stage SQUID (SQ1), which is shunted by a Josephson junction (JJ) switch. Many SQ1s shunted by switches are chained in series and connected to the input of a second-stage SQUID Series Array (SSA) at $\sim$4\,K.
A common SQ1 feedback coil is inductively coupled to every SQ1 in a column in series to allow operating any SQ1 in a closed flux-locked loop. Flux coupled to any JJ switch through an inductively coupled control line can either make the switch superconducting (off), shorting out its SQ1, or resistive (on), exposing its SQ1 to the column's SSA input, depending on the flux applied. Switch control lines are connected in series for one switch from each column to form a readout row, controlled by a single row-select (RS) line. TES arrays are then read out by switching on one row at a time, with all others off, while operating the on row's SQ1s in a closed flux-locked loop. This TDM architecture was first deployed on BICEP3 in 2015\cite{bkxv_2022}, and subsequently on many other instruments. To use TDM in CMB-S4, we aim to increase scalability and reduce cost and integration complexity by increasing the multiplexing factor, i.e., the number of TESs that can be read out per column. In prior TDM readout implementations on CMB instruments, the multiplexing factor has been limited by the achieved readout bandwidth. The readout bandwidth limits the row switching rate, which in turn results in a degradation of noise performance due to SQUID and TES noise aliasing as the multiplexing factor is increased. The AdvACT experiment has demonstrated the highest multiplexing factor to date using the legacy TDM architecture\cite{henderson_readout_2016}, reading out 64 rows on 32 columns with a row dwell time of 2~$\upmu$s, resulting in an increase in noise due to aliasing of 5--10\%\cite{gallardo_aliasing_2020}. To increase the multiplexing factor, we will incorporate several low-risk technological improvements that have been developed by NIST Boulder for the readout of much faster X-ray TESs\cite{doriese_tdm_2016,durkin_bandwidth_2021,smith_tdm_2021}.
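The sampling arithmetic behind this trade-off can be sketched as follows. The 64-row, 2\,$\upmu$s configuration is the AdvACT example quoted above; the bank split used for two-level addressing is a hypothetical example, not a CMB-S4 specification.

```python
def per_detector_sample_rate(n_rows, row_dwell_s):
    # In TDM, each TES in a column is revisited once per frame of
    # n_rows consecutive row dwells, so more rows means a slower
    # per-detector sample rate and more noise aliasing.
    return 1.0 / (n_rows * row_dwell_s)   # [Hz]

def array_channels(n_rows, n_columns):
    # Total TESs served by one readout chain of this size.
    return n_rows * n_columns

def legacy_rs_wires(n_rows):
    # Legacy TDM: one row-select (RS) line per row.
    return n_rows

def two_level_wires(n_banks, rows_per_bank):
    # Two-level switching: one chip-select line per bank plus RS lines
    # shared across banks (hypothetical bank split for illustration).
    return n_banks + rows_per_bank

# AdvACT-style legacy configuration quoted above: 64 rows, 2 us dwell,
# giving a per-detector revisit rate of roughly 7.8 kHz.
advact_rate = per_detector_sample_rate(64, 2e-6)
```

The wire-count functions show why two-level addressing matters at scale: an 80-row column needs 80 RS wires in the legacy scheme, but only of order 18 with, say, 8 banks of 10 rows.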
Planned improvements include adding a shunt resistor across the SSA to increase the system bandwidth\cite{zeng_shunt_2013}, a new faster SSA design, and a new SQ1 design with a higher input mutual inductance. Taken together, preliminary studies indicate these improvements may enable substantially shorter row-switching intervals and thereby multiplexing factors in excess of 120 rows per column, but we baseline 80 rows in the conceptual design.
%
Additionally, we will incorporate new two-level switching SQUID multiplexing architectures\cite{dawson_two-level_2019} into CMB-S4's detector readout to significantly reduce the number of wires required to switch the rows. These architectures connect banks of RS switches in parallel, with a single control switch shunting each bank. One switch from each bank is connected serially across all banks and columns to form a two-dimensional switching matrix. To address an individual row in the multiplexer, warm electronics must activate both the desired RS switch and the chip-select (CS) switch, i.e., the control switch for the bank containing that row.
%
A schematic of the readout circuit for one column of the CMB-S4 multiplexing architecture is shown in Figure~\ref{fig:mux_column_schematic_incl_tes_biasing}. Prototype chips implementing this new architecture are being fabricated now at NIST Boulder and will be tested soon in CMB-S4 detector and readout cryogenic test stands.
\section{Modular Implementation for CMB-S4}\label{sec:modularimplementationfors4}
The concrete implementation of detectors, readout electronics, and optical coupling in CMB-S4 emphasizes modularity as an overarching design guideline, which facilitates quality control during production and enables reuse and reworking of components. A schematic of the modular implementation concept is shown in Figure~\ref{fig:drm_schematic}.
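The wiring savings from the two-level addressing scheme described in the previous section can be sketched as a simple count: one line per switch position within a bank, plus one chip-select line per bank. The bank size of 10 used here is a hypothetical choice for illustration, not the CMB-S4 value.

```python
import math

def legacy_row_wires(n_rows):
    """One row-select line per row in the legacy architecture."""
    return n_rows

def two_level_wires(n_rows, bank_size):
    """Two-level addressing: one RS line per switch position within a bank,
    plus one chip-select (CS) line per bank."""
    n_banks = math.ceil(n_rows / bank_size)
    return bank_size + n_banks

# Hypothetical bank size of 10 for the 80-row baseline:
print(legacy_row_wires(80), "->", two_level_wires(80, 10))   # 80 -> 18
```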
Silicon optical coupling wafers are assembled into a stack together with a feedhorn array and a single 150-mm silicon wafer of TES bolometers (``detector wafer''), to form the waveguide structure that efficiently couples light from the telescope into the TESs via OMTs and microstrip. This optical stack is mounted to a frame that also contains a number of 100\,mK readout modules, consisting of the first-stage SQUID amplifiers, TES bias resistors, Nyquist-bandwidth-defining filters, and wiring on silicon and printed circuit boards. Each 100\,mK readout module is interchangeable and consists of 4 readout columns of an 80-row TDM architecture with two-level addressing. The 100\,mK readout modules connect to the detectors via a superconducting flexible circuit, as well as via NbTi cables to the SQUID series arrays at 4\,K, which form the next stage of the amplification chain in the readout. The chain terminates in warm readout electronics modules mounted outside the camera cryostats and connected to the 4\,K readout elements with manganin cables. The modular design balances the need for optimal sensitivity against the desire for interchangeable components conducive to mass production and testing. Detector parameters, wafer layouts, and feedhorn designs are optimized for each telescope type (LAT and SAT) and observing band to meet the L2 subsystem requirements and L1 technical requirements, which flow up through the measurement requirements to the science goals. This results in eight unique detector and feedhorn designs, whose basic parameters are summarized in Table~\ref{tab:ubertable}.
\begin{table}
\begin{tabular}{l|cccc|cccc}
 & \multicolumn{4}{c}{\textbf{Large-aperture telescope (LAT)}} & \multicolumn{4}{|c}{\textbf{Small-aperture telescope (SAT)}} \\
\textbf{Detector wafer type} & \textit{ULF} & \textit{LF} & \textit{MF} & \textit{HF} & \textit{LF} & \textit{MF1} & \textit{MF2} & \textit{HF} \\ \hline
\textbf{Number of wafers} & 4 & 17 & 108 & 41 & 24 & 72 & 72 & 48 \\
\textbf{Active detectors / wafer} & 54 & 192 & 1728 & 1876 & 48 & 588 & 676 & 1876 \\
\textbf{Total active detectors} & 216 & 4800 & 279,936 & 120,064 & 1152 & 36,288 & 41,760 & 68,736 \\ \hline
\textbf{Readout modules / wafer} & 1 & 1 & 6 & 6 & 1 & 3 & 3 & 6 \\
\textbf{Readout columns / wafer} & 4 & 4 & 24 & 24 & 4 & 12 & 12 & 24 \\
\textbf{Readout rows} & \multicolumn{4}{c}{80 with 2-level addressing} & \multicolumn{4}{|c}{80 with 2-level addressing} \\ \hline
\end{tabular}
\vspace{1mm}
\caption{Summary of detector and readout counts for the eight wafer types of CMB-S4. All wafers are dichroic except the \textit{ULF} wafer, with the detectors split evenly between frequencies. Total active detector counts include a small number of dark channels for calibration purposes and are totals across all telescopes.}
\label{tab:ubertable}
\end{table}
\subsection{Integrated Detector \& Readout Module at the 100\,mK Focal Plane}\label{sec:100mK}
The detectors, readout electronics, and optical coupling components are mechanically integrated together in a module designed to operate at 100\,mK, which also provides mechanical and thermal interfaces to the cryostat and the mixing chamber stage of the dilution refrigerator. The design is driven by several key requirements: 1.) integrated modules must tile in the SAT focal planes with minimal deadspace, including $\lesssim 1$\,mm gaps between feedhorn arrays and 2\,mm wall thickness on the perimeter of the feedhorn; 2.)
100\,mK readout components must be arranged in identical, individually shielded, connectorized modules, each of which connects TES signals to the detector wafer via superconducting flexible circuits; 3.) the integrated module must have sufficiently high thermal conductivity, both internally and to the cryostat thermal bus, that the detectors can be stably operated at 100\,mK. Figure~\ref{fig:modulemontage} shows the key elements of the integrated module design. The primary structural element of the integrated module is a copper ``spider plate'', shown in Figure~\ref{fig:modulemontage}, which clamps the optical coupling wafers onto the feedhorn array and mechanically supports the feedhorn array with tabs in each corner of the array. Cutouts in the corners of the feedhorn arrays permit the use of low-profile tabs that enable modules to be tiled in the SAT focal planes with a separation of $\sim 1$\,mm. The secondary purpose of the spider plate is to provide a mounting point for the 100\,mK readout modules. The requirement that these modules fit behind the footprint of the feedhorn array imposes strict requirements on their width and motivates the use of 80-row multiplexed readout with only 4 readout columns per readout module. One readout module is mounted on each hexagonal side of the LAT MF, HF, and SAT HF integrated modules, which have the highest detector densities. Since little space is available between the integrated modules for mounting and support structures in the SAT cryostat focal plane, modules are mounted to the focal plane structure by a tube that connects to the spider plate in the central space between the 100\,mK readout modules. Since the LAT cryostat possesses individual circular optics tubes, there is more clearance than in the SAT around the perimeter of the feedhorn array for mounting features, and the integrated modules can be mounted to the cryostat structure either at the feedhorn or from behind as in the SAT.
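The 80-row, 4-column module capacity discussed above fixes how many readout modules a wafer needs. A minimal sketch, checked against the SAT HF wafer counts in the summary table:

```python
import math

def modules_needed(active_detectors, rows=80, columns_per_module=4):
    """Readout modules required for one wafer, given 80-row TDM with
    4 columns per 100 mK readout module (per the text)."""
    channels_per_module = rows * columns_per_module   # 320 channels
    return math.ceil(active_detectors / channels_per_module)

# SAT HF wafer: 1876 active detectors -> 6 modules, one per hexagon side.
print(modules_needed(1876))   # -> 6
```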
Detailed designs of the module mounting are currently being developed in parallel with the LAT and SAT cryostat designs.
\subsubsection{150-mm Hexagonal Arrays of TES bolometers}
A detector wafer consists of a close-packed array of OMT pixels and TES bolometers. CMB detector wafers are designed for ease of mass fabrication via deposition, etch, and patterning of metals and dielectrics on a monolithic silicon substrate, with special attention to high yield and uniformity across the array. CMB-S4's large quantity of detector wafers also necessitates control of processes over hundreds of wafers. A CMB-S4 detector wafer is a chamfered hexagon cut out of a $\sim500\,\upmu$m-thick, 150-mm-diameter silicon wafer with 12 to 469 OMT pixels, depending on the wafer type, each connected to 4 TES bolometers. TES bias connections from each pixel are routed to the edges of the hexagonal wafer and presented as wire bond pads for connections to the 100\,mK readout modules via superconducting traces in a flexible circuit. The detector wafer architecture used by CMB-S4, consisting of arrays of feedhorn-coupled OMTs coupled to TESs with $T_c \sim 160$\,mK, has an extensive heritage, having been demonstrated by multiple experiments with high yield and uniformity. A typical pixel operability of $>95\%$ has been achieved per wafer, with similarly high wafer-to-wafer yields \cite{DuffYield}. In previous realizations of this process, deposition, etch, and patterning uniformity has been held to much better than $\pm 5\%$ (shown to be $<\pm 1\%$ for some processes) to achieve acceptable yield and performance; combined with high-quality pixel component designs, this results in the ability to achieve high efficiencies with very small spreads (as low as $\pm5\%$ across an array).
The detector fabrication processes for these devices, historically developed at NIST Boulder, are currently being ported to multiple microfabrication foundries in order to meet the immense fabrication throughput requirements of CMB-S4. Due to the heterogeneous equipment available at the participating foundries, each site defines its own processes and choice of materials that meet the basic functional specifications of the CMB-S4 detectors. Some early prototype wafers are pictured in Figure~\ref{fig:teswafers}. To integrate with the rest of the module, the detector wafers must conform to several mechanical interfaces. First, the dimensions of the OMTs themselves are optimized together with the optical coupling in order to maximize efficiency subject to constraints on beam size, ellipticity, and polarization purity. The layout of detectors within each wafer is optimized to meet both the NET requirement of the entire focal plane and the requirement on the maximum allowable spillover onto the cryogenic stop inside the telescope. A mixture of two layouts, one with fully hex-close-packed (HCP) pixels and another with three offset rhombus-shaped sections of HCP pixels, is used for different detector frequencies. The two layouts offer more flexibility in the number of detectors per wafer, while also providing flexibility to the fabrication sites in the style of wiring that is used (the rhombus layout enables creating wiring with stepper lithography). Finally, the layout of bond pads on the perimeter of the wafer is standardized to be compatible with stepper-based wiring, using groupings of 25 pairs of 70-$\upmu$m-wide, double-row bond pads. The different wafer designs populate a subset of the groupings, and a subset of pads within each grouping, in order to maintain compatibility with the 100\,mK readout modules.
\subsubsection{Feedhorn to TES wafer stackup}
The Si detector stack is composed of a photonic choke wafer, a waveguide interface plate (WIP) wafer, the detector wafer, and a backshort wafer, as shown in Figure~\ref{fig:OC_stack}. The photonic choke wafer has a pattern of square pillars that is optimized for each observing band to minimize leakage at the interface between the Al feedhorn array and the Si detector stack~\cite{photonic_choke}. The WIP has a ring-shaped boss feature that mates to the backside of the detector wafer. The inner radius of the boss feature matches the waveguide radius of the detector stack and keeps the waveguide gap between the WIP and the OMT $<15\,\upmu$m. The outer radius of the ring is tuned to minimize leakage from the gap between the OMT and waveguide. The roughly quarter-wave backshort is tuned to optimize efficiency across the two bands. The backshort includes $10\,\upmu$m-tall posts that offset the backshort wafer from the detector wafer wiring. The backshort also includes moats filled with absorptive material, positioned behind the optical TES bolometers to reduce high-frequency out-of-band leakage. The optical coupling wafers are fabricated using silicon-on-insulator (SOI) wafers to tune the depths of the features. The features are etched into the wafers using DRIE, and the wafers are then seed-coated with 200\,nm of Ti and $1\,\upmu$m of Cu via sputtering to ensure even sidewall coverage. Next, the WIP and choke wafers are glued together into one piece, and the WIP+choke piece and backshort wafer are plated with $3\,\upmu$m of Cu followed by $3\,\upmu$m of Au. After Au-coating, the coupling wafers are assembled together with the detector array. The pieces are placed in a simple gluing jig with two pins for alignment, clamped together, and glued in 2--3 glue channels on each side.
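As a rough illustration of the quarter-wave backshort mentioned above, the free-space quarter-wavelength sets the scale of the backshort distance before it is optimized across the two bands. The 150\,GHz band center used here is an assumed example, not a CMB-S4 band definition:

```python
def quarter_wave_depth_um(freq_ghz):
    """Free-space quarter-wavelength, a first-order starting point for a
    backshort distance before optimizing across the two bands."""
    c = 299792458.0                 # speed of light [m/s]
    lam_m = c / (freq_ghz * 1e9)    # free-space wavelength [m]
    return lam_m / 4.0 * 1e6        # quarter-wave depth [um]

# Hypothetical 150 GHz band center (not specified in the text):
print(f"{quarter_wave_depth_um(150.0):.0f} um")   # -> 500 um
```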
The Si detector stack is coupled to the Al feedhorn array with a pin and slot system to account for the differential thermal contraction between the two materials when cooled. In the final assembly, the pins are press fit into the horn array, and the detector wafer stack is clamped to the feedhorn array with springs as shown in Figure~\ref{fig:modulemontage}. The feedhorn positions are oversized such that the two pieces are aligned when cold. The alignment tolerance of the mid-frequency waveguides is $20\,\upmu$m.
\subsubsection{Readout components at the 100 mK focal plane}
The first-stage SQUID amplifiers, multiplexing, TES biasing, and signal filtering components are contained in 100\,mK readout modules co-located with the detector wafers at the focal plane. They are electrically connected to the wafer via a short flexible circuit with superconducting traces to limit parasitic impedances between the TES detector and the first-stage amplifier. The first-stage SQUID amplifier (SQ1) and flux-activated row-select switches are fabricated onto a multiplexer chip (MUX) serving $\sim10$ TES channels each. The shunt resistors for TES voltage-biasing and the series inductors that define the TES signal and noise bandwidth are fabricated onto a ``Nyquist'' filter chip (NYQ), also called a TES biasing chip. The MUX and NYQ chips for a readout column are seated on and bonded to a larger silicon wiring chip. This chip contains superconducting traces that connect to the Nyquist chips on one edge and present bond pads for connection to the TES detectors on a perpendicular edge. Sets of chips for four columns are assembled into a readout module. The components and their assembly into a prototype 2-column 100\,mK readout module are shown in Figure~\ref{fig:ro100mk}. The 100\,mK readout module also receives row-select (RS) and chip-select (CS) control signals for multiplexing.
Up to six readout modules, corresponding to 24 columns, are connected to a single row address board, which is also located at the 100\,mK focal plane. This simple PCB distributes the RS and CS signals to the 100\,mK readout modules along flexible circuits with copper traces.
\subsubsection{Electrical interconnects and mechanical assembly}\label{sec:flexandassembly}
The requirement that the 100\,mK readout be modular practically necessitates the use of superconducting flexible circuits between the TESs and the readout. The footprint of the per-channel superconducting circuitry on the MUX and NYQ chips is too large to fit the readout for an entire SAT or LAT HF detector wafer on a single layer of Si directly behind the TES wafer. Stacking multiple layers of Si readout components behind the TES wafer would significantly increase the effort required for rework and replacement of components in the lower layers. Thus, a single-layer design was chosen for the 100\,mK readout modules, and the area required for the Si components makes a superconducting flexible circuit an attractive option for routing the TES connections from the detector wafer to the readout. Superconducting flexible circuits with Al traces have been used in small quantities for this interconnect in multiple generations of CMB experiments using TDM, including ACTpol~\cite{ACTPol_Instrument}, AdvACT~\cite{Pappas:2016}, and CLASS~\cite{Dahal:2020}. Nevertheless, these circuits present several challenges for CMB-S4, including the large quantity of cables ($\sim$4,500 including spares), the 90\,$\upmu$m trace pitch, and the work-hardening of Al, which limits the number of flexing cycles. R\&D is currently in progress on two fabrication processes. The first uses Al film evaporated on kapton and then patterned with a lift-off process similar to one developed at SLAC National Accelerator Laboratory (SLAC)~\cite{Tomada:2015dia}.
The second process, developed by HighTec\footnote{\url{https://hightec.ch/}}, uses Nb patterned on a polyimide substrate and has achieved 10\,$\upmu$m feature resolution~\cite{Broise:2018}. The final steps of the integrated module assembly consist of wirebonding the superconducting flexible circuit between the TES wafer in the feedhorn/wafer stack and the 100\,mK readout modules, and then mounting the readout modules on the back of the spider plate. Similar to the design of AdvACT and SO modules, the CMB-S4 modules use Au wirebonds connected from Pd pads on the detector wafer to the Au-plated feedhorn arrays to provide heatsinking. After these are added, a Si DC wiring wafer is placed on top of the backshort, and Al wirebonds carrying TES signals are made from the detector wafer to the DC wiring wafer. This wafer serves as an adapter between the bondpad layouts on the flex cable and the detector wafer, and it allows the flex cables to route radially outward at the edge of each hex, which significantly simplifies the assembly process. Superconducting flex cables are then glued to the DC wiring wafer and wirebonds are added. The 100\,mK readout modules are finally folded behind the wafer and bolted to the spider plate, completing the assembly.
\subsection{Supporting cryogenic readout electronics}
Signals from the integrated detector and readout module at 100\,mK described above continue to an additional SQUID amplification stage located at a warmer temperature stage of the receiver, and eventually out to the room-temperature readout electronics, which control the multiplexing signals, provide detector and SQUID biases, and digitize detector signals. A schematic overview of these supporting electronics is shown in Figure~\ref{fig:drm_schematic}, example component and wiring counts are given in Table~\ref{tab:numerology}, and photos of prototypes are shown in Figure~\ref{fig:4kelectronics}.
\begin{table}
\centering
\begin{tabular}{l|l|l|}
 & \textbf{SAT} & \textbf{LAT} \\
 & \textit{HF (Single Wafer)} & \textit{Full Camera} \\ \hline
\textbf{Optical TES} & 1872 & 137,904 \\
\textbf{Total Active TES} & 1884 & 138,414 \\
\textbf{100 mK readout boards (4 columns each)} & 6 & 470 \\
\textbf{Column cables (25-pin, 100 mK - 4 K)} & 6 & 470 \\
\textbf{Row address boards} & 1 & 85 \\
\textbf{Row cables (51-pin, 100 mK - 300 K)} & 1 & 85 \\
\textbf{SSA Modules (8 SSAs each)} & 3 & 239 \\
\textbf{Column cables (100-pin, 4 K - 300 K)} & 3 & 239 \\
\textbf{Column boards (room temperature)} & 3 & 239 \\
\textbf{Row boards (room temperature)} & 1 & 85 \\ \hline
\end{tabular}
\caption{Example TDM readout quantities are enumerated for an SAT HF detector wafer and an entire LAT receiver (85 wafers). The total active TES count includes dark detectors for calibration.}
\label{tab:numerology}
\end{table}
Low-thermal-conductance twisted-pair wiring is used to connect the cold electronics to the warm electronics, with thermal intercepts at all possible temperature stages. The electrical design of this wiring is a straightforward application of wiring in demonstrated TDM systems, with some engineering challenges in managing thermal loads while staying within the overall impedance budget, and in designing long cable runs to connect across the large focal plane. Design improvements to increase the available bandwidth are described in Section~\ref{ssec:sqampchainandmux}. To connect the 100\,mK integrated module to the second-stage amplifier at a warmer temperature stage, a 25-wire twisted-pair NbTi cable per 4-column readout module carries the SQ1 bias, SQ1 feedback, and detector bias. The control signals for the row address module at 100\,mK are carried by a 51-wire NbTi cable, which supports up to six readout modules (24 columns). A SQUID series array (SSA) further amplifies signals for transmission to a room-temperature amplifier.
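The per-cable conductor counts above imply a simple wiring tally for a single SAT HF wafer, using the module and cable quantities from the table. This is bookkeeping only, a sketch rather than a wiring specification:

```python
def sat_hf_wire_count(n_modules=6, n_row_boards=1, n_ssa_modules=3):
    """Cryogenic cable conductors for one SAT HF wafer, using the per-cable
    wire counts quoted in the text (25, 51, and 100 wires)."""
    return {
        "100 mK - 4 K column wires": n_modules * 25,
        "100 mK - 300 K row wires": n_row_boards * 51,
        "4 K - 300 K column wires": n_ssa_modules * 100,
    }

counts = sat_hf_wire_count()
print(counts, "total:", sum(counts.values()))   # total: 501
```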
With multiplexing, only a single SQ1 first-stage SQUID feeds the SSA at any given time. To achieve the necessary amplification, the SSA must contain a large number of SQUID elements, which drives the design for this stage. The resulting power dissipation precludes placing the SSA on any of the sub-Kelvin cryogenic stages. The expected operating temperature is between 1 and 4\,K, separate from the 100\,mK electronics and connected via low-thermal-conductance twisted-pair wiring. These large arrays of SQUIDs are extremely sensitive to magnetic field variations and gradients, but their placement away from the focal plane allows for relatively compact, effective magnetic shielding as part of their packaging. Groups of 8 SSAs are packaged together into a module, which supports 8 columns of readout. A 100-wire manganin twisted-pair cable for this 8-column package carries the SSA bias and feedback between this temperature stage and room temperature, along with the SQ1 bias and feedback, and the detector biases to be transmitted to 100\,mK. The SSA is a mature NIST Boulder design: a configurable array with 6 banks of 64 SQUID elements that can be connected in series or in parallel. The array layout and external shunting can be adjusted to modify electrical properties including the input and output impedance, which is used in optimizing the gain and bandwidth of the system.
\subsection{Room Temperature Electronics}
The SQUID multiplexer and amplifier chains described in Section~\ref{ssec:sqampchainandmux} require warm electronics for control and readout. In particular, warm electronics provide SQUID biases and feedback for the two stages of SQUID amplification per readout column, row-select flux biases, and TES biases. The warm electronics also operate each TES in a closed flux-locked loop and stream digitally filtered and downsampled data from the receiver to data acquisition for storage and subsequent analysis.
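The closed flux-locked loop mentioned above can be illustrated with a toy simulation: an integral feedback term drives an idealized sinusoidal SQUID response back to its lock point after a flux step. The gain and the $V(\Phi)$ model are arbitrary assumptions for illustration, not the CMB-S4 servo design:

```python
import math

def fll_step_response(n_samples=200, gain=0.3, flux_step=0.2):
    """Toy digital flux-locked loop: integral feedback nulls the flux error
    for an idealized V(Phi) = sin(2*pi*Phi) SQUID response."""
    feedback = 0.0
    errors = []
    for _ in range(n_samples):
        applied = flux_step - feedback            # net flux at the SQUID [Phi0]
        error = math.sin(2 * math.pi * applied)   # idealized readout voltage
        feedback += gain * error / (2 * math.pi)  # integral update
        errors.append(error)
    return errors

trace = fll_step_response()
print(f"initial error {trace[0]:.3f}, final error {abs(trace[-1]):.1e}")
```

The feedback value, rather than the raw SQUID voltage, is the linearized output that the warm electronics filter, downsample, and stream.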
For CMB-S4, a new warm electronics readout system is being developed at SLAC. Previous CMB observatories have used the Multi-Channel Electronics (MCE) developed for the SCUBA-2 experiment as the warm readout for TDM SQUID multiplexing, but several key components of the MCE system have reached obsolescence~\cite{battistelli_functional_2008}. The new system takes advantage of the miniaturization of electronics since the design of the MCE to significantly shrink the size of the warm electronics and enable CMB-S4's planned high-channel-count receivers. The new electronics are based on an extendable, compact, module-based architecture. In this new architecture, each warm readout system is a collection of two distinct types of modules with identical mechanical footprints that mount directly to connectors on the vacuum wall of the CMB-S4 receiver cryostats. All modules forming a single warm readout system are networked together using ethernet cables, with one module serving as the controller for the other modules and interfacing externally with off-receiver data acquisition. Each module connects to the CMB-S4 receivers via a male 100-pin micro-D connector with the same pinout as the legacy MCE system, to maintain backwards compatibility and enable testing with the MCE while the new system is under development. The two types of modules are ``row'' modules, which each provide up to 48 row-switching flux biases, and ``column'' modules, which each provide the SQUID biases, SQUID feedbacks, and low-noise analog front-ends for up to eight columns of TDM readout, as well as up to eight TES biases. Both row and column modules are 127~mm wide by 254~mm long, and have identical electrical back-end interfaces. In addition to being mechanically compact, the modules are designed to be conduction cooled, enabling higher-density packing than comparable air-cooled systems and eliminating the risk of microphonic and electrical pickup from fans.
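The per-module capacities above (8 columns per column module, 48 row-switching lines per row module) give a back-of-the-envelope module count for one SAT HF wafer. The 18 row lines assume a hypothetical 10 RS + 8 CS two-level split, which is not specified in the text:

```python
import math

def warm_module_count(n_columns, n_row_lines,
                      cols_per_module=8, rows_per_module=48):
    """Minimum warm module counts given the per-module capacities in the
    text: 8 readout columns or 48 row-switching flux biases."""
    return (math.ceil(n_columns / cols_per_module),
            math.ceil(n_row_lines / rows_per_module))

# One SAT HF wafer has 24 readout columns; with two-level addressing the
# row lines are RS plus CS (assumed 10 + 8 = 18 here, a hypothetical split).
print(warm_module_count(24, 18))   # -> (3, 1)
```

These counts are consistent with the 3 column boards and 1 row board per SAT HF wafer listed in the wiring table.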
On the opposite end of the modules from the 100-pin micro-D cryostat connector, both types of modules have a single 48\,V DC power input from which all other module voltages are derived using in-module regulators, two RJ-45 in/out connectors for networking groups of modules, and a dual Small Form Factor Pluggable (SFP) cage which supports both a 1~Gbps ethernet interface for testing and development and a timing input. The row and column modules each have an FPGA controller which commands the analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) and handles digital processing tasks such as operating each TES in its own closed flux-locked loop, as well as data filtering, downsampling, and streaming. Single-ended SQUID biases and feedback, row-select flux biases, and bipolar TES biases are all provided by DACs with in-module filtering to condition and bandwidth-limit signals before they are injected into the cryostat. Each of the eight low-noise analog front ends in the column modules consists of an amplifier chain with a low-voltage-noise first-stage preamplifier which feeds one channel of an integrated eight-channel ADC. The new electronics incorporate improvements informed by feedback from users of the legacy MCE system, including fully integrated clock, timing, and communications, a single DC power input, a higher clock rate potentially enabling much higher multiplexing factors and data rates, and a modernized communication interface. First prototypes of the row and column modules have been designed and assembled, as shown in Figure~\ref{fig:slacwarmelexmodules}. While the modules are fully functional, testing on SQUID multiplexers has indicated the need for a revision to address a few performance issues, including higher-than-expected noise pickup from the in-module switching regulators used to step down the common 48\,V input voltage.
To help mitigate this and to improve performance generally, design changes are planned for this second revision, including an improved filtering and grounding design for the switching regulators, fully differential SQUID biases and feedbacks, and a new lower-noise, fully differential front-end design.
\section{Prototyping and Design Validation} \label{sec:prototyping}
CMB-S4's development and design validation plan will mature the detector and readout system from a conceptual design to a prototype, and then to a preliminary design for pre-production, before advancing to a final design for full production. This plan will advance both the integrated-module sub-components and the eight different integrated module types in a phased approach. The immediate goal during the next year of development is to demonstrate performance of the integrated detector and readout system, with noise and optical coupling that meet the instrument requirements, for a subset of module types, using prototype CMB-S4 hardware for a majority of the module components. We have developed readout and detector testing capability in cryogenic test stands, which will provide measurements of the module sub-components and integrated module performance (e.g., the TES properties on the detector wafer). Using one of these test stands, we have already conducted an end-to-end validation of the electrical design at the level of a few TES bolometers of different saturation powers connected to a TDM SQUID multiplexer. Further prototype development and measurements will be used to feed back into and iterate on the design, and also to validate that the sub-component requirements are still met in the integrated module. In this section, we describe the design and development plan for the prototype sub-components, and the planned testing of the integrated module.
\subsection{Cryogenic Test stands for prototyping}\label{sec:test_stands}
To support the prototyping of detector fabrication, readout electronics, and modules, the CMB-S4 project is commissioning three (eventually expanding to six) cryogenic prototyping test stands, which are equipped with a standardized readout and test module design and are each capable of testing a single 150-mm detector wafer. These cryostats are either Bluefors LD-400 or Oxford Triton dilution refrigerators, made available to the project by members of the collaboration, which are equipped with legacy MCE room-temperature readout. We have designed and fabricated a complete, modular TDM readout kit which is drop-in compatible with these cryostats, as shown in Figure~\ref{fig:teststand}. The kit incorporates every component required from room temperature to 100\,mK, including vacuum and thermal feedthroughs, cabling and fixturing, PCBs, prototype 100\,mK readout modules, and 4\,K SSAs and associated electronics. Following an initial round of testing at SLAC, all of the test stands were supplied with kits. This readout kit, in turn, is being used to develop a pre-prototype module (see Section~\ref{sec:flatmodule}) at Fermi National Accelerator Laboratory (FNAL) that is being used to test prototype detector wafers from the TES wafer fabrication sites for CMB-S4. In this early stage of project development, modules are wirebonded and assembled at FNAL and delivered to the other test stands in the project for testing. A campaign of testing the same module repeatedly at multiple sites will provide a ``calibration'' ensuring that each site reports measurements comparable to the others. The test stands will also provide feedback on prototype detector characterization equipment (see Section~\ref{sec:testing}).
During pre-production, as the readout and module designs are finalized, the project will commission eight high-throughput test cryostats, each capable of testing seven integrated detector and readout modules; these cryostats will supersede the prototyping test stands.
\subsection{Detector and Readout functional validation}
We have connected a few individual TES bolometers to our prototype TES biasing and readout implementation to successfully validate the electrical design of the detector and readout system. While a new advanced cryogenic TDM SQUID multiplexing architecture, described in Section~\ref{ssec:sqampchainandmux}, is being developed for CMB-S4, it is designed to be backwards compatible with the legacy TDM architecture used in currently active CMB observatories. This has enabled functional validation of many readout components using the legacy hardware. Likewise, the legacy multiplexer is being used to characterize and validate TES designs, from single devices to full wafers. In particular, directly connecting individual TES bolometers to the multiplexer decouples readout design validation from that of the TES wafers and the integrated detector and readout module. As shown in Figure~\ref{fig:ro100mk}, individual pre-screened NIST 11-channel first-stage SQUID multiplexing chips (mask name ``mux15b'') were integrated into prototype readout modules and directly connected to TES test devices through a prototype TES bias or ``shunt'' chip. The TES biasing chip, connected between the multiplexing chip and TES devices with superconducting aluminum wirebonds, shunts the TESs with 450\,$\upmu\Omega$ bias resistors. On the shunt chip, these bias resistors are connected in series on a common bias line, which enables voltage-biasing the TESs into transition. The TES bolometers were fabricated at NIST Boulder, several per test die, with saturation powers spanning the range of expected CMB-S4 TES device parameters (see Table~\ref{tab:ubertable}), from $\sim$0.3 to 30\,pW.
The TESs had normal resistances of $\sim10$~m$\Omega$ and transition temperatures of $\sim160$\,mK. Pairs of identical TES devices were connected to adjacent rows of the multiplexer to allow characterization of individual, pair-sum, and pair-difference noise. Other adjacent inputs on the multiplexer were left unconnected, allowing for a measurement of the readout noise alone. For these measurements, the MCE switched over 33 rows with a row-switching interval of 2\,$\upmu$s, even though only the first 11 rows were instrumented with first-stage SQUIDs. Figure~\ref{fig:tesnoisewithrokit} shows a comparison of the measured readout noise to the noise measured on a pair of TES devices with a saturation power of $\sim27$~pW. The white-noise performance of the readout agrees with its known performance. Excess noise at low frequencies is under investigation, but is thought to be due to a combination of the lack of temperature regulation in the laboratory space where this testing was conducted and known performance issues with the older-revision legacy MCE boards used for these measurements.
\subsection{Flat Module Development}\label{sec:flatmodule}
The next stage of development integrates the readout kits with prototype TES wafers in a pre-prototype ``flat module''. Due to the ongoing development of the superconducting flexible circuits described in Section~\ref{sec:flexandassembly}, integrated ``string'' tests of detectors and readout electronics use a scheme in which 100\,mK readout modules are arrayed radially on each side of the detector wafer, enabling pads on the detector wafer and the readout to be directly wirebonded without the use of flex cables, as shown in Figure~\ref{fig:flatmodule}. This module fulfills the programmatic goal of performing full-system tests of CMB-S4 detectors and readout as early as possible in the project development cycle.
Using a geometry similar to the module of AdvACT~\cite{Ward:2016}, this design provides a platform for end-to-end testing of the entire system, both dark and optically, without the superconducting flexible circuit and with more relaxed space requirements than the production module. The prototype 100\,mK readout modules, shown in Figure~\ref{fig:ro100mk}, use 2 readout columns per module, with each module bonded to a subset of the detectors on each side of the wafer and the option of installing one readout module on each side of the detector wafer. Detector wafers may be tested with or without any optical coupling wafers, and in the latter configuration the detector wafer is simply secured to the feedhorn with brass spring clips. \subsection{Dark and Optical characterization}\label{sec:testing} Flat modules will undergo both dark and optical tests to compare with CMB-S4 targets and provide feedback to the TES wafer development program of CMB-S4. These tests will characterize the TES properties across the entire wafer as well as the integrated performance of the module components. The tests will initially be performed in the cryogenic test stands described in Section~\ref{sec:test_stands}. The dark tests will characterize integrated module properties such as channel operability, TES properties, detector stability, and time constants. Variable-temperature blackbody sources will be used to estimate the module optical efficiency from the detector response. The dark tests will also estimate the overall sensitivity of the detectors, or noise-equivalent power (NEP), from a combination of in-transition current noise measurements and detector load curves at multiple blackbody source temperatures. In the prototype calibrator design shown in Figure~\ref{fig:coldload}, a flashing IR source coupled through a small aperture in the blackbody also enables measurements of the optical time constants of the science detectors.
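The load-curve approach to optical efficiency can be sketched as follows: the drop in electrical saturation power between two blackbody setpoints equals the absorbed optical power, and dividing by the single-moded power incident in the band gives the efficiency. All numbers below (band center, bandwidth, load temperatures, measured power drop) are made up for illustration and are not CMB-S4 values:

```python
import math

H, K = 6.626e-34, 1.381e-23  # Planck and Boltzmann constants, SI

def single_mode_power(T, nu_lo, nu_hi, n=2000):
    """Blackbody power coupled to a single-moded detector over [nu_lo, nu_hi], W."""
    dnu = (nu_hi - nu_lo) / n
    total = 0.0
    for i in range(n):
        nu = nu_lo + (i + 0.5) * dnu  # midpoint rule
        total += H * nu / math.expm1(H * nu / (K * T)) * dnu
    return total

# Hypothetical 145 GHz band with 25% fractional bandwidth, two load setpoints.
nu0, frac = 145e9, 0.25
band = (nu0 * (1 - frac / 2), nu0 * (1 + frac / 2))
dP_incident = single_mode_power(12.0, *band) - single_mode_power(8.0, *band)

# Assumed measured drop in electrical saturation power between the two
# load temperatures (equal to the extra absorbed optical power).
dP_sat = 0.35e-12  # W

eta = dP_sat / dP_incident  # estimated end-to-end optical efficiency
print(f"estimated optical efficiency: {eta:.2f}")
```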
These will be compared to the time constants inferred from the TES response to an electrical voltage-bias step. The optical characterization of the integrated modules will be performed using equipment coupled to the cold modules through a series of out-of-band radiation filters and a vacuum window. For instance, the detector frequency response will be measured using a Fourier transform spectrometer; low-pass and high-pass thick grill filters will be used to check for spurious out-of-band detector response. Other optical properties will be spot-checked during development, including beam shape, cross-polarization response, and polarization sensitivity angle. RF and magnetic pickup of the integrated module will also be measured using swept RF and magnetic sources. \section{Future work towards production at scale}\label{sec:production} After the prototyping phase, described in the previous section, the CMB-S4 project will have verified that the detector and readout system components, and their integrated system, meet the Level 2 subsystem requirements, enabling the start of pre-production. During pre-production, the project will demonstrate the quality assurance, component fabrication, integration, testing, and quality control steps for a fraction of the required integrated modules and their components, but at the rate and throughput necessary for full production. Quality assurance and control are key in this phase, and a logging system will be used to record key metrics that enable monitoring and control of the fabrication processes. During production the logging system data will be routinely reviewed to detect process variations and correct them before enough variation occurs to affect performance. During production, the full set of approximately 500 science-grade modules will be delivered over a roughly 3-year period.
This will require producing and screening an estimated 700 TES wafers and 150 SQUID wafers, along with associated optical, readout, and module components. This amounts to the fabrication of complex superconducting circuits on over 10\,m$^2$ of silicon, as well as significant amounts of precision wiring, assembly, and cryogenic testing. Meeting the required fabrication rate of approximately 20 TES wafers and 5 SQUID wafers per month will require a multi-site fabrication approach. Several micro- and nanofabrication foundries specializing in superconducting thin-film fabrication will be utilized by the project to achieve this rate. We plan to consolidate 100\,mK integrated module assembly and testing into 2 sites, in order to reduce the duplication of expertise and infrastructure. Each testing site will house four high-throughput screening cryostats, capable of characterizing seven modules per cooldown at the nominal 100\,mK operating temperature. As during prototyping and pre-production, each module will be tested twice, performing a series of dark and optical characterization measurements to verify that it is science-grade by meeting the instrument requirements. \appendix \acknowledgments CMB-S4 is supported by the U.S. Department of Energy (DOE), Office of High Energy Physics (HEP) under Contract No. DE–AC02–05CH11231; by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the Divisions of Physics and Astronomical Sciences and the Office of Polar Programs of the U.S. National Science Foundation under Mid-Scale Research Infrastructure award OPP-1935892. Work at Argonne National Lab including use of the Center for Nanoscale Materials, a U.S. Department of Energy Office of Science User Facility, was supported by DOE, Office of Basic Energy Sciences and HEP, under Contract No. DE-AC02-06CH11357.
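The quoted rates can be checked with back-of-the-envelope arithmetic, assuming a 36-month production period and the double-testing scheme described above (the 36-month figure interprets the "roughly 3-year period" in the text):

```python
# Consistency check of the production rates quoted in the text:
# ~700 TES and ~150 SQUID wafers over ~3 years, and ~500 modules each
# tested twice in 8 high-throughput cryostats (2 sites x 4 cryostats,
# 7 modules per cooldown).

months = 36
tes_per_month = 700 / months      # ~19.4 -> "approximately 20 per month"
squid_per_month = 150 / months    # ~4.2  -> "approximately 5 per month"

module_tests = 500 * 2            # each module screened twice
tests_per_month = module_tests / months
cryostats, modules_per_cooldown = 2 * 4, 7
cooldowns_per_cryostat_month = tests_per_month / (cryostats * modules_per_cooldown)

print(f"TES wafers/month:   {tes_per_month:.1f}")
print(f"SQUID wafers/month: {squid_per_month:.1f}")
print(f"cooldowns per cryostat per month: {cooldowns_per_cryostat_month:.2f}")
```

The last number (~0.5 cooldowns per cryostat per month) suggests the eight screening cryostats comfortably cover the required module-test throughput.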
Work at the Fermi National Accelerator Laboratory, managed and operated by Fermi Research Alliance, LLC, was supported by DOE HEP under Contract No. DE-AC02-07CH11359. Work at SLAC National Accelerator Laboratory was supported by DOE HEP under contract DE-AC02-76SF00515. D.R.B. was supported by DOE HEP under award number DE-SC0021435, and NSF's Office of Integrative Activities under award OIA-2033199. Considerable additional support is provided by the many CMB-S4 collaboration members and their institutions. \bibliography{report} \bibliographystyle{spiebib}
Title: Weighing Exo-Atmospheres: A novel mid-resolution spectral mode for SCALES
Abstract: SCALES (Slicer Combined with an Array of Lenslets for Exoplanet Spectroscopy) is a 2 to 5 micron high-contrast lenslet-based Integral Field Spectrograph (IFS) designed to characterize exoplanets and their atmospheres. Like other lenslet-based IFSs, SCALES produces a short micro-spectrum of each lenslet's micro-pupil. We have developed an image slicer that sits behind the lenslet array and dissects and rearranges a subset of micro-pupils into a pseudo-slit. The combination of lenslet array and slicer (or slenslit) allows SCALES to produce much longer spectra, thereby increasing the spectral resolution by over an order of magnitude and allowing for comparisons to atmospheric modeling at unprecedented resolution. This proceeding describes the design and performance of the slenslit.
https://export.arxiv.org/pdf/2208.11209
\keywords{ adaptive optics, high-contrast, instrumentation, exoplanets, thermal infrared, integral field spectroscopy, slenslit } \section{INTRODUCTION} \label{sec:intro} Directly imaging exoplanets has driven the development of extreme AO (adaptive optics) and continues to push the technical envelope of astronomical instrumentation. SCALES (\textsc{Slicer Combined with an Array of Lenslets for Exoplanet Spectroscopy}, previously the \textsc{Santa Cruz Array of Lenslets for Exoplanet Spectroscopy}), like previous direct imaging instruments such as GPI\cite{macintosh2014gpi} and CHARIS\cite{groff2015charis}, uses a lenslet array to sample the field of view and prisms to disperse each lenslet's pupil image into a short spectrum. Figure~\ref{fig:cad} shows a CAD model of SCALES. The high-level specifications are shown in Table~\ref{tab:high-level-specs}. It will sit behind the AO system of the Keck II telescope at Maunakea, where it will take full advantage of the large telescope primary diameter to peer closer in to stars hosting exoplanets than any other current OIR instrument. SCALES is currently in its Final Design phase, and is planned to be on-sky by late 2025. Figure~\ref{fig:optical-layout} shows the SCALES optical design, with each module (foreoptics, imager, spectrograph, and slenslit) denoted with a polygon. Coronagraphic optics, used to suppress starlight, are in the foreoptics and precede the lenslet array. The spectrograph is a 1-1 reimaging system, and sits next to the slenslit module, which is used in the mid-resolution mode. It reimages the micro-pupil images produced by the lenslet array onto the detector, and selectable dispersers and filters are used to disperse the micro-pupil images into spectra on the detector. SCALES is designed with two main IFS modes: a low-resolution mode and a mid-resolution mode. The low-resolution mode uses LiF prisms to disperse the spectra (see Table~\ref{tab-od:prism-specs} for details).
The mid-resolution mode uses custom 1st-order gold-coated Zerodur gratings (see Table~\ref{tab-od:grating_specs}). Figure~\ref{fig:resolution-by-filter} shows the instantaneous spectral resolution as measured at the detector for each filter and mode combination. The lenslet array has two sub-arrays: the low-resolution mode uses a 110x110 subarray, and a smaller 17x18 subarray is used for the mid-resolution mode. This mode will allow for characterization of exo-atmospheres at unprecedented spatial and spectral resolution at narrower inner working angles than ever before. In doing so, we will be able to measure chemical abundances, in effect weighing exo-atmospheres. There is also an imager mode, discussed in a separate Proceedings (see below). \begin{table}[hpt] \centering \caption{SCALES high-level specs.} \input{Tables/high-level-specs} \label{tab:high-level-specs} \end{table} For more information on SCALES, please see the following proceedings in this conference: \noindent\textbf{SCALES Overview}: Paper No. 12184-18 (Skemer et al.)\cite{SkemerStatus2022}\\ \textbf{Optical Design}: Paper No. 12184-159 (Renate Kupke et al.)\cite{Reni2022}\\ \textbf{Imaging Channel}: Paper No. 12188-65 (Ravinder Banyal et al.)\cite{Banyal2022}\\ \textbf{Cold-Stop / Lyot Stop}: Paper No. 12185-332 (Li et al.)\cite{Jialin2022}\\ \textbf{Aperture Masks}: Paper No. 12183-89 (Lach et al.)\cite{Lach2022}\\ \textbf{Keck Instrument Development}: Paper No. 12184-4 (Kassis et al.)\cite{Kassis2022}\\ \section{OPTICAL DESIGN OF THE SLENSLIT} \label{sec:design} The SCALES slenslit is responsible for reformatting a regular, two-dimensional grid of lenslet pupil images into a quasi-one-dimensional pseudoslit suitable for dispersing at medium spectral resolution. It primarily uses three sets of mirrors to perform this rearrangement: \begin{enumerate} \item \textbf{Slicer mirrors} \\ The slicer mirrors are responsible for separating each column of lenslet pupil images into an individual beam.
The slices are located at a focal plane. \item \textbf{Pupil mirrors} \\ The pupil mirrors, as the name suggests, are at the pupil planes of the slicer mirrors, and are not necessarily co-planar with each other. In combination with the slicer mirrors, the pupil mirrors de-magnify and place the images of the lenslet columns precisely where desired at the next focal plane (located at the field mirrors). However, given these powered optics' locations, they destroy the telecentricity of the outgoing beams. \item \textbf{Field mirrors} \\ The field mirrors are at the output focal plane and co-planar. By powering these optical surfaces, we can correct for the atelecentric errors introduced by the slicer and pupil mirrors. This allows us to keep the same size and location of the footprint on the disperser plane for all slices. \end{enumerate} In order to make the slicing optics fit inside the space envelope, we elected to use a set of input and output relays. The input relay reimages the lenslet pupil image plane onto the slicer mirrors and magnifies the beam by a factor of 8, allowing the slicing optics to operate at f/64. The slicer performs a factor of 4 demagnification from the input focal plane at the slicer mirrors to the output focal plane at the field mirrors. The output relay returns the beam speed to f/8 to match the expected input speed to the collimator. A return fold mirror that rides on the mode selector mechanism delivers the slenslit output beam into the spectrograph, which then disperses the pseudoslit onto the detector. The mode selector mechanism carries the vignetting masks and return fold mirror such that the low-resolution mode is used when the mid-resolution subarray of lenslets is blocked, and vice versa.
The slenslit pseudoslit is confocal with the lenslet pupil image plane, meaning that the spectrograph `sees' both low- and medium-resolution inputs as originating at the same plane, and thus we can use the same optics and detector for both without refocusing. \begin{table}[hpt] \centering \caption{Specifications of the SCALES prisms.} \input{Tables/PrismSpecs} \label{tab-od:prism-specs} \end{table} \begin{table}[hpt] \centering \caption{Specifications of the SCALES gratings. The spectral resolution is calculated from linear dispersion, not the expected spectral resolution measured at the detector.} \input{Tables/GratingSpecs} \label{tab-od:grating_specs} \end{table} \subsection{Slenslit interleaving} \label{subsec-od:slenslit-interleaving} The interleaving of the pseudoslit comes about as a way to increase the effective field of view while preserving the spacing of each spectral trace on the detector relative to the low-resolution mode. Figure~\ref{fig-od:slenslit-interleaving} shows a schematic using a $6 \times 6$ subarray of lenslet pupil images that are interleaved in the same way as the pseudoslit, although the SCALES pseudoslit is larger. The SCALES slenslit pseudoslit comprises \SI{306} pupil images (\SI{18} columns of \SI{17} rows each) arranged into 3 super-columns of \SI{6} columns of \SI{17} pupil images each. Each super-column is separated by \SI{1.5}{ \milli\meter} at the detector, which shortens the instantaneous bandpass slightly due to the spectra of the outer columns falling off of the detector. Note that adjacent spectra are separated by \numrange{5}{7} pixels on the detector, which is similar to the separation of the shorter spectra of the low-resolution mode. The slenslit opto-mechanical design is discussed further in \S\ref{sec:mech-design}.
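The interleaved layout described above can be sketched with a few lines of bookkeeping. The assignment of column $c$ to super-column $c \bmod 3$ is an assumption made for illustration (the real mapping is set by the slicer optics); the counts, however, follow directly from the text:

```python
# Sketch of the pseudoslit interleaving: 18 lenslet columns of 17 rows,
# regrouped into 3 super-columns of 6 columns each.  The c % 3 mapping
# below is a hypothetical illustration of "interleaving".

N_COLS, N_ROWS, N_SUPER = 18, 17, 3

super_columns = {s: [] for s in range(N_SUPER)}
for c in range(N_COLS):
    super_columns[c % N_SUPER].append(c)

total_pupils = N_COLS * N_ROWS
assert total_pupils == 306                  # matches the text
assert all(len(v) == N_COLS // N_SUPER for v in super_columns.values())
print(super_columns[0])  # -> [0, 3, 6, 9, 12, 15]
```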
The slenslit optics are located after the lenslet array and feed the spectrograph with a rearranged pseudoslit for medium-resolution spectroscopy; the slenslit is the `scenic bypass' offering much higher spectral resolution over a smaller field of view compared to the low-resolution IFS. Medium-resolution dispersion is accomplished via a suite of three gratings, to be mounted on the same carousel and located at the same pupil plane as the prisms. We have baselined gold-coated etched gratings on Zerodur substrates with the characteristics shown in Table~\ref{tab-od:grating_specs}. The opening angle of \SI{38}{\degree} for all gratings and prisms is set by the geometry of the collimator and camera and the mechanical envelope needed for the large disperser carousel. The blaze angles in Table~\ref{tab-od:grating_specs} were determined with GSolver, a rigorous coupled-wave analysis program. These blaze angles give peak efficiencies of \SI{80}\% for the K- and L-bands and \SI{66}\% for M-band. \subsection{Slenslit optical design} \label{subsubsec-od:slenslit-design} The SCALES slenslit optical design (shown in Figure~\ref{fig-od:slenslit-overview}) is effectively 3 in-series focal plane-to-focal plane reimagers, each of which is discussed in this subsection. The opto-mechanical design is discussed in more detail in \S\ref{sec:mech-design}. The first focal plane is the output micro-pupil image plane produced by the lenslet array, which produces a regular grid of micro-pupils with an f/\SI{8} beam speed. An input relay magnifies the beam speed from f/\SI{8} to f/\SI{64} and produces a focal plane for the slicing optics; the slicing optics geometrically rearrange the focal plane into a focal plane pseudoslit while decreasing the f-ratio to f/\SI{16}; and lastly the output relay returns the beam speed to f/\SI{8}. The real image of the pseudoslit is designed to be confocal with the lenslet array's pupil image plane, which acts as the object for the spectrograph optics.
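The beam-speed chain above can be verified with simple bookkeeping (the factor-2 demagnification of the output relay is inferred from its stated job of returning the f/16 beam to f/8):

```python
# f-ratio chain through the slenslit, as described in the text:
# lenslet output f/8 -> input relay x8 -> f/64 at the slicer ->
# slicer 4x demagnification -> f/16 -> output relay -> f/8.

f_lenslet = 8
f_slicer_in = f_lenslet * 8         # input relay magnifies 8x
f_slicer_out = f_slicer_in // 4     # slicer demagnifies 4x
f_output = f_slicer_out // 2        # output relay returns the beam to f/8

assert (f_slicer_in, f_slicer_out, f_output) == (64, 16, 8)
print(f"f/{f_lenslet} -> f/{f_slicer_in} -> f/{f_slicer_out} -> f/{f_output}")
```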
Both the input and output relays are reflective TMAs. A mode selector mechanism blocks the primary lenslet array when using the slenslit and also hosts a return fold mirror, which feeds the slenslit output into the spectrograph. The disperser mechanism allows us to put gratings for various bandpasses and spectral resolutions in the optical train. The optical prescription of the input and output relays is given in Table~\ref{tab-od:slenslit-relays-optical-prescription}, and the mechanical aspects of the design are discussed in more detail in \S\ref{sec:mech-design}. \begin{table}[tp] \centering \caption{Slenslit input and output relay prescription.} \label{tab-od:slenslit-relays-optical-prescription} \input{Tables/slenslit-relay-optics-prescription} \end{table} \subsection{Slenslit input} \label{subsec-od:slenslit-input} The input TMA relay is responsible for taking the lenslet array output (a regular grid of spots operating at f/8) and providing a magnified focal plane (operating at f/\SI{64}). It consists of 3 mirrors in a TMA format, and the centers of curvature are coplanar (although not co-located). It takes a field of view of $\sim$\SI{6.2}{ \milli\meter} $\times$ $\sim$\SI{6.0}{\milli\meter} ($18 \times 17$ lenslets, at \SI{0.341}{ \milli\meter} pitch) and outputs a focal plane $8\times$ larger onto the slicer optics. The mirrors are designed to not vignette the low-resolution optical beam path. Figure~\ref{fig-od:slenslit-input-relay} shows the input relay on its sub-bench. \subsection{Slenslit slicing optics} \label{subsec-od:slenslit-slicing-optics} The slicing optics are grouped into 3 sets of mirrors, described in detail below and shown in Figure~\ref{fig-od:slenslit-slicer-optics}. The slicer mirror lives at the f/\SI{64} focal plane produced by the input TMA relay. For ease of manufacturing, all of the slicing optics are spherical. The slices each have optical power to produce a pupil plane roughly \SI{470}{ \milli\meter} away.
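The input-relay field of view quoted above follows from the lenslet count and pitch, and the $8\times$ magnification sets the size of the slicer-plane footprint:

```python
# Consistency check of the input-relay field of view:
# 18 x 17 lenslets at 0.341 mm pitch, magnified 8x onto the slicer.

pitch = 0.341                      # lenslet pitch, mm
fov = (18 * pitch, 17 * pitch)     # matches the quoted ~6.2 x ~6.0 mm
slicer_plane = tuple(8 * x for x in fov)  # 8x magnified focal plane, mm

print(f"lenslet-plane FOV: {fov[0]:.2f} x {fov[1]:.2f} mm")
print(f"slicer-plane size: {slicer_plane[0]:.1f} x {slicer_plane[1]:.1f} mm")
```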
The pupil mirrors each live at the location of the pupil image produced by the slicer, and are arranged in 2 columns. They, in turn, produce a focal plane inhabited by the field mirrors. The slicer, pupil, and field mirrors all have optical power; the field mirrors are powered in order to correct any atelecentricities introduced by the slicer and pupil mirrors. The output of the slicing optics is an f/\SI{16} beam (i.e., a $4\times$ demagnification). The field mirrors' three super-columns are slightly offset in the vertical direction, allowing each lenslet pupil image to be dispersed in the horizontal direction across the full width of the detector. A subtlety of diffraction-limited image slicers is that the pupils of each slice are diffracted. In order to prevent PSF broadening, the pupil mirrors are slightly elongated along the axis perpendicular to the slice's long axis, which captures the diffracted energy. The pupil mirrors could be made with circular apertures, but it is possible to pack them more tightly by elongating the mirrors along one axis (this approach was used by the FRIDA slicing optics as well~\cite{CuevasFRIDA2014}). This also reduces higher-order aberrations by keeping the angles of incidence and reflection smaller for the most marginal mirrors. The slicing optics' typical optical prescription ranges are described in Table~\ref{tab-od:slenslit-slicing-optics-optical-prescription}; note that each set of slicer, pupil, and field mirrors has unique tips, tilts, and radii of curvature, although all mirrors are spheres.
\begin{table}[tp] \centering \caption{Typical slenslit slicer prescription.} \label{tab-od:slenslit-slicing-optics-optical-prescription} \input{Tables/slenslit-slicer-optical-prescription} \end{table} \subsection{Slenslit output} \label{subsec-od:slenslit-output} The output TMA relay is responsible for taking the output of the slicer (the three staggered super-columns) and providing a de-magnified focal plane (operating at f/\SI{8}) that is confocal with the lenslet array’s pupil image plane. It consists of 3 mirrors in a TMA format, and the centers of curvature are co-planar and co-linear (although not co-located); the optical prescription is shown in Table~\ref{tab-od:slenslit-relays-optical-prescription}. Figure~\ref{fig-od:slenslit-output-relay} shows the output relay on its sub-bench. This allows us to use the same spectrograph optics as the low-resolution mode of SCALES. \section{MECHANICAL DESIGN OF THE SLENSLIT} \label{sec:mech-design} Within the slicer module, there are three (\num{3}) mirrors in the input TMA relay, fifty-two (\num{52}) mirrors in the slicing optics spread over six (\num{6}) substrates, three (\num{3}) mirrors in the output TMA relay, and one (\num{1}) flat fold return mirror. In total, the slenslit has \num{59} mirrors over \num{13} substrates; however, each micro-pupil `sees' only \num{10} reflections as it flashes through the `scenic byway' on its way to the SCALES spectrograph. Figure~\ref{fig-od:slenslit-mech-overview} shows the opto-mechanical design minus the flat fold return mirror. We have modeled all mirrors with realistic toolpaths in SolidWorks to ensure no mirrors cut into neighboring mirrors (i.e., fratricide) and provide appropriate toolpath reliefs (i.e., no infinitely sharp corners for cutting features). Each mirror's geometry is defined in SolidWorks by using the Zemax optical prescription (e.g., center of curvature, radius of curvature, mirror aperture vertex, etc.). 
To simplify the SolidWorks model, all surfaces are modeled as spheres -- before fabrication, a point cloud will be generated with the conic and aspheric terms added in where appropriate. The substrates start out as raw metal blanks, with mirror features added sequentially. Location features, such as bolt pads on the rear surfaces, are included in the mechanical design; this means the design inherently carries with it features that are defined relative to the mirror's geometry (e.g., center of curvature, aperture center, centering pin, etc.). The opto-mechanical design calls for each of the three sub-assemblies (input relay, slicing optics, output relay) to be mounted on their own sub-benches which in turn are mounted to a larger sub-bench mounted to the main optical bench. This particular design approach allows for each of the three sub-assemblies (input relay, slicing optics, output relay) to be aligned separately before aligning the sub-assemblies to each other. Further, each mirror mount on its sub-bench is initially defined by use of 6061-T6 `nudger blocks,' each of which carries one or two fine-pitched ball-end screws that provide three points of contact to the mount (this constrains the position of the mount on the plane of the sub-bench). The ball-end screws can push the mounts around during alignment, and are designed to provide flexible, highly repeatable location information if the mounts need to be removed for any reason. Before cooling down to cryogenic temperatures, the nudger blocks will be replaced with three (\num{3}) 6061-T6 eccentric cams that will rotate about the bolt hole formerly used by the nudger blocks and come into contact with the mount before being locked down with its own bolt. The cams will sit in a precision counterbore on the sub-bench centered at the bolthole. This approach ensures that the location of the mounts on the bench are defined with as little over-constraint as possible. 
In order to produce optics with low tolerance stack-up and to simplify alignment, the mirrors share substrates whenever possible. The slicer is carved from a monolithic block of aluminum, while the two columns of pupil mirrors are fabricated from two substrates that share a mounting bracket. The field mirrors are broken up into three substrates, with one substrate acting as the mounting bracket for the other two. Additionally, the use of precision shims, attached to the tops and sides of the mounting brackets with one bolt each, allows for quick and simple metrology of reference surfaces of the mirror substrates. As a proof of concept for this approach, we designed a benchtop prototype slenslit slicer that was fabricated at Durham Precision Optics; it is described in a previous Proceedings~\cite{StelterColorsToChemistry2021}. Note that the precision shims are used to define the positions of the mirror substrates on their brackets. The mirror surfaces are well-defined in relation to the substrates during the SPDT process, so the shims are an excellent way to provide location information without unduly over-constraining the substrates. \section{EXPECTED PERFORMANCE} \label{sec:performance} The mid-resolution mode offers, for the first time, multiple spaxels with spectral resolution of $\sim3000-6000$. Figure~\ref{fig:resolution-by-filter} shows the instantaneous spectral resolution for each filter and IFS mode. While other instruments such as KPIC~\cite{MawetKPIC2016} have access to much higher spectral resolution, they use single-mode fibers, which precludes having more than a few spaxels. In part, this is a trade-off between higher spectral resolution and the number of spatial units, as well as a reliance on the single-mode fiber to help with speckle suppression. An example of a simulated A0 stellar spectrum is shown in Figure~\ref{fig:A0-spectrum}.
Our simulations use the SCALES pipeline and simulator, \texttt{scalessim} (Briesemeister et al., in prep), which includes Fresnel diffraction effects from the lenslet array's lenslet and pinhole apertures. Noise terms from atmospheric and instrumental effects are also included, but do not yet include realistic telluric correction errors. \texttt{scalessim} is available on GitHub (\url{https://github.com/scalessim/scalessim}). The mid-resolution mode will allow for characterization of exo-atmospheres by measuring the abundances of molecules such as CO, CH\textsubscript{4}, H\textsubscript{2}O, and NH\textsubscript{3}. A simulated K, L, and M band spectrum of a \SI{500}{\kelvin} exoplanet at a distance of \SI{15}{parsecs} with a \num{10} hour exposure time is shown in Figure~\ref{fig:exoplanet-spectrum-500K}. This is significantly colder than 51 Eri b, the coldest directly-imaged exoplanet to date~\cite{macintosh201551erib}, with a temperature of \SIrange{700}{750}{\kelvin} (the temperature range is due to a degeneracy in fitting cloudless vs. cloudy models to the data). A second simulated spectrum in M band of an even colder exoplanet is shown in Figure~\ref{fig:exoplanet-spectrum-300K}. This simulated exoplanet is also at a distance of \num{15} pc and has a temperature of \SI{300}{\kelvin}. \section{CONCLUSION} \label{sec:conclusion} We have presented the opto-mechanical design of the SCALES slenslit optics, and provided examples of its expected performance at medium spectral resolution in comparison to the low-resolution IFS mode. The slenslit (a combination of lenslet array and image slicer) opens up unprecedented spatially-resolved, high-contrast, mid-resolution IFU exoplanet spectroscopy, and will further our understanding of warm and cold exoplanetary atmospheres. SCALES is in its final design phase and we expect to be on-sky in late 2025. \acknowledgments We are grateful to the Heising-Simons Foundation and the Mt.
Cuba Astronomical Foundation for their generous support of our efforts. \bibliography{report} \bibliographystyle{spiebib}
Title: Galaxies with Fuzzy Dark Matter
Abstract: This is a brief review on some properties of galaxies in the fuzzy dark matter model, where dark matter is an ultra-light scalar particle with mass $m = O(10^{-22})eV$. From quantum pressure, dark matter has a halo length scale which can solve the small scale issues of the cold dark matter model, such as the core-cusp problem, and explain many other observed mysteries of galaxies.
https://export.arxiv.org/pdf/2208.13511
\title{ Galaxies with Fuzzy Dark Matter} \author{Jae-Weon Lee} \affiliation{ Department of Electrical and Electronic Engineering, Jungwon University, 85 Munmu-ro, Goesan-eup, Goesan-gun, Chungcheongbuk-do 28024, Korea} \section{Introduction} Dark matter (DM) is one of the main ingredients of the universe, providing the gravitational attraction needed to form cosmic structures~\cite{Silk:2016srn}. The most popular DM model is the cold dark matter (CDM) model, whose numerical simulations successfully reproduce the observed large-scale structures of the universe, such as clusters of galaxies. However, it encounters some difficulties in explaining small-scale structures at galactic scales, such as the core-cusp problem (the prediction of a cusped central halo density, which is not observed) and the missing satellite problem (the prediction of many more small satellite galaxies than are observed)~\cite{Salucci:2002nc,navarro-1996-462,deblok-2002,crisis}. Therefore, we need a variant of the CDM that acts as CDM on super-galactic scales while suppressing the formation of smaller structures. This requires a natural length scale on the order of the size of a small galaxy, i.e., $\sim kpc$. Recently, interest in the fuzzy DM model as an alternative to the CDM has been revived. In this model, DM particles are ultra-light scalars with mass $m = O(10^{-22})eV$ in a Bose-Einstein condensate (BEC)~\cite{2009JKPS...54.2622L,2014ASSP...38..107S,2014MPLA...2930002R,2014PhRvD..89h4040H,2011PhRvD..84d3531C,2014IJMPA..2950074H,Marsh:2015xka,Hui:2016ltb}. This tiny DM particle mass leads to a very high DM particle number density, which means the wave functions of the particles overlap. A huge, but finite, length scale related to the Compton wavelength $\lambda_c=1/m\sim 0.1 pc$ of the particles naturally arises in this model. Unlike conventional CDM particles that move incoherently, fuzzy DM particles in the BEC state move collectively and form a coherent wave with the de Broglie wavelength $\lambda_{dB}=O(kpc)>\lambda_c$.
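The two length scales quoted above can be checked by order-of-magnitude arithmetic; the halo virial velocity $v \sim 100$ km/s used below is a typical assumed value, not taken from the text:

```python
# Compton and de Broglie wavelengths for a fuzzy DM particle of
# m ~ 1e-22 eV, checking the ~0.1 pc and ~kpc scales quoted in the text.

hbar_c = 1.973e-7   # hbar*c in eV*m
c = 3.0e8           # m/s
pc = 3.086e16       # m

m = 1e-22           # particle rest energy m*c^2, eV
v = 1.0e5           # m/s, typical galactic virial velocity (assumption)

lambda_compton = hbar_c / m                # = hbar / (m c)
lambda_deBroglie = lambda_compton * c / v  # = hbar / (m v)

print(f"lambda_C  ~ {lambda_compton / pc:.2f} pc")          # -> 0.06 pc
print(f"lambda_dB ~ {lambda_deBroglie / (1e3 * pc):.2f} kpc")  # -> 0.19 kpc
```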
Despite the tiny mass, the fuzzy DM particles are non-relativistic because the particles in a BEC move as a single heavy entity. This model goes by many other names, such as BEC DM, scalar field DM, ultra-light axion (ULA), and wave$/\psi$ DM. The idea that galactic DM consists of condensed ultra-light scalar particles has been repeatedly suggested~\cite{1993ApJ...416L..71W,Schunck:1998nq, PhysRevLett.84.3037,PhysRevD.64.123528,repulsive,fuzzy, corePeebles,Nontopological,Mielke2009174,PhysRevD.62.103517,Alcubierre:2001ea,2012PhRvD..86h3535P,2009PhRvL.103k1301S, Fuchs:2004xe,Matos:2001ps,0264-9381-18-17-101,PhysRevD.63.125016,Julien,Boehmer:2007um, 2012arXiv1212.5745B,Eby:2015hsq}. In Ref.~\citealp{1983PhLB..122..221B}, self-gravitating bosons with a de Broglie wavelength of the typical galaxy size were considered. In Ref.~\citealp{1989PhRvA..39.4207M}, DM halos were investigated as the ground state of a scalar field~\cite{Matos:2003pe}. In Ref.~\citealp{Sin:1992bg}, Sin tried to explain the observed flat rotation curves (RCs) by using excited states of the fuzzy DM and obtained the particle mass $m\simeq 3 \times 10^{-23} eV$ by fitting the observed RC of the galaxy NGC2998. Lee and Koh~\cite{myhalo} suggested that DM halos are giant boson stars and considered the effect of self-interaction. In this paper, we briefly review some properties of galaxies in the fuzzy DM model. \section{Fuzzy dark matter and the small scale crisis} We still lack an exact particle physics model of the fuzzy DM. The fuzzy DM field can be a scalar field $\phi$ with an action \beq \label{action} S=\int \sqrt{-g} d^4x[\frac{-R}{16\pi G} -\frac{g^{\mu\nu}} {2} \phi^*_{;\mu}\phi_{;\nu} -U(\phi)], \eeq where the potential is given by $U(\phi)=\frac{m^2}{2}|\phi|^2$.
In the Newtonian limit, this action leads to the following Schr\"{o}dinger--Poisson equation (SPE), which the macroscopic wave function $\psi$ satisfies: \beqa \label{spe} i\hbar \partial_{{t}} {\psi} &=&-\frac{\hbar^2}{2m} \nabla^2 {\psi} +m{V} {\psi}, \no \nabla^2 {V} &=&{4\pi G} \rho, \eeqa where the rescaled field $\psi=\sqrt{m}\phi$, the DM mass density $\rho=m|\psi|^2=m^2|\phi|^2$, and $V$ is the gravitational potential. We use units with $c=1$ and keep $\hbar$ explicit. Note that the SPE can be seen as a non-linear Schr\"{o}dinger equation, which can have dispersion-less soliton solutions, unlike the ordinary Schr\"{o}dinger equation. The ground state of the SPE is one of these solitons. The SPE has a useful scaling property for numerical studies: \beq \{t,r,\psi,\rho,V\}\rightarrow \{\lambda^{-2}t,\lambda^{-1}r,\lambda^{2}\psi,\lambda^{4}\rho,\lambda^{2}V\}, \eeq where $\lambda$ is a scaling parameter. This leads to the following scaling law of parameters: \beq \{M,E,L\}\rightarrow \{\lambda M,\lambda^{3}E,\lambda L\}, \eeq where $M$ is the mass, $E$ is the energy, and $L$ is the angular momentum of a dark matter distribution. To understand the role of the fuzzy DM in the formation of cosmological structures, it is useful to reduce the Schr\"{o}dinger equation to fluid equations by using the Madelung transformation ~\cite{2011PhRvD..84d3531C,2014ASSP...38..107S}, $\psi(r,t)=\sqrt{\rho(r,t)}e^{iS(r,t)}$. This gives an Euler-like equation \beq \frac{\partial \textbf{v}}{\partial t} + (\textbf{v}\cdot \nabla)\textbf{v} +\nabla V +\frac{\nabla p}{\rho} -\frac{\nabla Q}{m} =0, \label{euler} \eeq and a continuity equation \beq \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \textbf{v})=0. \label{continuity} \eeq Here, the fluid velocity $\textbf{v}\equiv \nabla S/2m$ and the quantum potential $Q\equiv{\hbar^2 \Delta \sqrt{\rho}}/({2m\sqrt{\rho}})$. The pressure $p$ can arise from a self-interaction, if any exists. 
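The scaling law for $\{M,E,L\}$ quoted above follows from simple exponent bookkeeping; a minimal sketch, assuming $M\sim\int\rho\,d^3r$, $v\sim r/t$, $E\sim Mv^2\sim MV$, and $L\sim Mvr$:

```python
# Each quantity scales as lambda**n under the SPE symmetry; track the exponents n.
exp = {"t": -2, "r": -1, "psi": 2, "rho": 4, "V": 2}

# Mass M ~ integral of rho over d^3r: exponent of rho plus 3x exponent of r
M_exp = exp["rho"] + 3 * exp["r"]

# Velocity v ~ r / t
v_exp = exp["r"] - exp["t"]

# Energy E ~ M v^2 (kinetic) and E ~ M V (potential): both must agree
E_kin_exp = M_exp + 2 * v_exp
E_pot_exp = M_exp + exp["V"]

# Angular momentum L ~ M v r
L_exp = M_exp + v_exp + exp["r"]

print(M_exp, E_kin_exp, E_pot_exp, L_exp)  # 1 3 3 1, i.e. {lambda M, lambda^3 E, lambda L}
```

The agreement of the kinetic and potential energy exponents is the consistency check that the transformation really is a symmetry of the SPE.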
(We have ignored the cosmic expansion for simplicity.) The quantum pressure ${\nabla Q}/{m}$ from the uncertainty principle is the key difference between the fuzzy DM model and the conventional CDM models. Below the galactic scale, the quantum pressure suppresses small structure formation, while at scales larger than galaxies it becomes negligible, and the fuzzy DM behaves like CDM. This interesting property makes the fuzzy DM an ideal alternative to the CDM because it resolves the small-scale problems of the CDM model while sharing its merits~\cite{2009NJPh...11j5029P}. To find the relevant length scale of structure formation, it is useful to perturb the above equations around $\rho=\bar\rho$, $\textbf{v}=0$, and $V=0$. One obtains the following equation for the density perturbation $\delta\rho\equiv \rho-\bar{\rho}$, \beq \label{pert} \frac{\partial^2 \delta\rho}{\partial t^2}+\frac{\hbar^2}{4m^2}\nabla^2 (\nabla^2 \delta \rho) -c^2_s \nabla^2 \delta\rho - 4\pi G \bar{\rho}\delta\rho=0, \eeq where $c_s$ is the sound velocity, and $\bar{\rho}$ is the average density of background matter \cite{2012A&A...537A.127C, Suarez:2011yf}. An equation for the density contrast $\delta\equiv\delta \rho/\bar{\rho}=\sum_k \delta_k e^{ik\cdot r}$ with a wave vector $k$, \beq \frac{d^2 \delta_k}{d t^2} + \left[(c^2_q+c^2_s)k^2-4\pi G \bar{\rho} \right]\delta_k=0, \eeq can be derived from the Fourier-transformed version of Eq. (\ref{pert}), where $c_q=\hbar k/2m$ is a quantum velocity. Because $c_s$ can be assumed to be almost independent of $k$, we expect the $c_q$-dependent term from the quantum pressure to dominate only for large $k$ (i.e., at small scales)~\cite{fuzzy,Alcubierre:2002et,PhysRevD.63.063506,Harko:2011jy}. 
This gives the quantum Jeans length scale~\cite{1985MNRAS.215..575K,Grasso:1990zg,fuzzy} at a redshift $z$: \beq \label{lambdaQ} \lambda_Q(z)= \frac{2\pi}{k}=\left(\frac{\pi^3 \hbar^2 }{Gm^2\bar\rho(z)}\right)^{1/4} \simeq 55.6\left(\frac{\rho_b }{m_{22}^2\Omega_m h^2\bar\rho(z)}\right)^{1/4} kpc, \eeq where $m_{22}=m/10^{-22}eV$, the Hubble parameter $h=0.673$, $\rho_b$ is the current matter density, and the matter density parameter $\Omega_m=0.315$ ~\cite{PDG-2014}. Interestingly, $\lambda_Q(z)$ determines the minimum length scale of galactic halos formed at $z$ ~\cite{Lee:2008ux,Lee:2015cos,Lee:2008jp}. This fact might explain the observed size evolution of the early compact galaxies ~\cite{Lee:2008ux}. Any perturbation below $\lambda_Q(z)$ decays, and no DM structure below this scale can grow. This remarkable property resolves the small-scale issues of the CDM model by suppressing the formation of too many small structures ~\cite{corePeebles,PhysRevD.62.103517,0264-9381-17-13-101,PhysRevD.63.063506}. The average mass inside $\lambda_Q$ is the quantum Jeans mass \beq \label{MJ} M_J(z)=\frac{4\pi}{3} \bar{\rho}(z) \lambda_Q^3 =\frac{4}{3} \pi^{\frac{13}{4}}\left(\frac{\hbar}{G^{\frac{1}{2}} m}\right)^{\frac{3}{2}} \bar{\rho}(z)^\frac{1}{4}, \eeq which is the minimum mass of DM structures forming at $z$. Therefore, one can expect the minimum mass and size of a galaxy to have a quantum mechanical origin. The core-cusp problem is easy to understand in the fuzzy DM model. If no compact object exists at the center of a galaxy, a natural boundary condition there is a zero-derivative condition, i.e., $\partial \psi/\partial r=0$. This means the DM density is flat at the center, which solves the core-cusp problem. If a supermassive black hole exists in the DM halo, the central boundary condition should be changed~\cite{UrenaLopez:2002du}. This property has been argued to explain the M-sigma relation of black holes ~\cite{Lee:2015yws}. 
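The quantum Jeans length and mass introduced above can be evaluated directly at $z=0$; a minimal numerical check for $m_{22}=1$, using the quoted cosmological parameters and SI constants:

```python
import math

hbar = 1.0546e-34          # J s
G = 6.674e-11              # m^3 kg^-1 s^-2
m = 1e-22 * 1.783e-36      # 1e-22 eV converted to kg
kpc = 3.086e19             # m
Msun = 1.989e30            # kg

h, Omega_m = 0.673, 0.315
rho_crit = 1.878e-26 * h**2      # critical density, kg m^-3
rho_bar = Omega_m * rho_crit     # mean matter density today

# Quantum Jeans length: lambda_Q = (pi^3 hbar^2 / (G m^2 rho))^(1/4)
lam_Q = (math.pi**3 * hbar**2 / (G * m**2 * rho_bar))**0.25

# Quantum Jeans mass: average mass inside lambda_Q
M_J = (4 * math.pi / 3) * rho_bar * lam_Q**3

print(lam_Q / kpc)      # ~ 90 kpc at z = 0
print(M_J / Msun)       # ~ 1e8 Msun at z = 0
```

At $z=0$ this gives $\lambda_Q \sim 90$ kpc and $M_J \sim 10^8\,M_\odot$, the scale below which structure growth is suppressed; at higher $z$ the larger $\bar\rho(z)$ shrinks both.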
The heavier the black holes are, the steeper the slope of the central density profile of DM is. This can lead to the M-sigma relation. \section{Fuzzy dark matter and galaxies} Because dwarf galaxies are the smallest DM-dominated objects, they are ideal for studying the nature of DM. A density perturbation with a size larger than $\lambda_Q$ can collapse to form a galactic DM halo. Therefore, we expect the galactic scale to be smaller than $\lambda_Q$. To find a typical size $\xi$ of dwarf galaxies, we consider the approximate energy function of a spherical DM halo from Eq. (\ref{spe}): \beq E(\xi) \simeq\frac{\hbar^2}{2m \xi^2}+\int^{\xi}_0 dr'\frac{Gm}{r'^2}\int^{r'}_0 dr'' 4\pi r''^2 \rho(r''), \eeq where $\rho(r)$ is the DM density at $r$. One can obtain the size of the ground state (the soliton), \beq \label{xi} \xi=\frac{\hbar^2}{GMm^2}, \eeq from the condition $\frac{dE(\xi)}{d\xi}=0$ ~\cite{sin1,Silverman:2002qx}. Here, $M\equiv \int^{\xi}_0 dr' 4\pi r'^2 \rho(r')$ is the mass within $\xi$. The size of the halo is inversely proportional to $M$. A natural assumption is that the quantum Jeans mass is similar to the minimum value of $M$. Therefore, the lightest galaxy formed at $z$ has a typical size \beq \label{xiz} \xi(z)=\frac{\hbar^2}{G M_J(z)m^2}=\frac{3\hbar^{1/2}}{4\pi^{13/4} (G m^2 \bar{\rho}(z))^{1/4}}, \eeq which has the same form as $\lambda_Q$ (Eq. (\ref{lambdaQ})) but with a somewhat smaller constant. Note that, according to the above equation, the lightest (dwarf) galaxy has the maximum size among dwarf galaxies in this theory. Antlia II was shown to be close to this upper limit in size, and the velocities of its stars are consistent with the theory~\cite{Broadhurst:2019fsl}. This model seems to explain the minimum length scale of galaxies ~\cite{Strigari:2008ib} and the size evolution ~\cite{Lee:2008ux} of the most massive galaxies ~\cite{2009Natur.460..717V}. 
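The soliton size relation $\xi=\hbar^2/(GMm^2)$ can be evaluated for a representative halo; a minimal sketch (the $10^8\,M_\odot$ halo mass is an illustrative value, not taken from the text):

```python
hbar = 1.0546e-34          # J s
G = 6.674e-11              # m^3 kg^-1 s^-2
m = 1e-22 * 1.783e-36      # 1e-22 eV in kg
Msun = 1.989e30            # kg
pc = 3.086e16              # m

def soliton_size(M_kg):
    """Ground-state (soliton) radius xi = hbar^2 / (G M m^2), in metres."""
    return hbar**2 / (G * M_kg * m**2)

# Illustrative dwarf-scale halo of 1e8 Msun (assumed value)
xi = soliton_size(1e8 * Msun)
print(xi / pc)   # sub-kpc core; note xi is inversely proportional to M
```

Doubling $M$ halves $\xi$, which is the inverse mass-size relation noted above and the reason the lightest halos have the largest cores.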
On the other hand, an approximate maximum mass of galaxies can also be obtained from the stability condition of boson star theory. The maximum mass of the ground state is $0.633 m^2_P/m$. If we take this value for the maximum mass of galaxies ($O(10^{12}M_\odot)$), we can derive a constraint $m \geq O( 10^{-28}) eV$. From the maximum stable central density, $m \leq O( 10^{-22}) eV$~\cite{myhalo}. The finite length scale also implies a finite acceleration scale in this model. Using the approximation $\partial_r\sim 1/\xi$ in $\nabla Q/m$ in Eq. (\ref{euler}), one can obtain a typical acceleration scale for DM halos of \beq g^\dagger \equiv \frac{\hbar^2}{2m^2 \xi^3} =2.2\times 10^{-10} \left(\frac{10^{-22}e{\rm V}}{m}\right)^{2} \left(\frac{300{\rm pc}}{\xi}\right)^3 \mbox{m/s}^2 , \label{gdagger} \eeq which is absent in other DM models. This scale is relevant for the baryonic Tully-Fisher relation (BTFR)~\cite{Lee:2019ums}, an empirical relation between the total baryonic mass of a disk galaxy and its asymptotic rotation velocity. Interestingly, if we choose the core size of dwarf galaxies ($\sim 300\,\mbox{pc}$ ~\cite{Strigari:2008ib}) for $\xi$, we can reproduce the observed value $g^\dagger=1.2 \times 10^{-10} \mbox{m/}\mbox{s}^2$ appearing in Modified Newtonian dynamics (MOND), the radial acceleration relation (RAR), and the BTFR~\cite{PhysRevLett.117.201101}. MOND was proposed to explain rotation curves without DM~\cite{1983ApJ...270..365M}. According to MOND, the gravitational acceleration of baryonic matter, $g_b$, should be replaced by \beq g_{obs}=\sqrt{g_b g^\dagger}, \label{MOND} \eeq when $g_b<g^\dagger$. MOND and the RAR may just be effective phenomena of fuzzy DM. Surprisingly, DM simulations using graphic processing units (GPUs) with an adaptive mesh refinement (AMR) scheme ~\cite{Schive:2014dra} revealed that a solitonic core exists in every halo, surrounded by granules arising from DM interference (see Fig. 1). 
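The acceleration scale $g^\dagger$ and the deep-MOND relation above can be checked numerically; a minimal sketch (the baryonic acceleration $g_b$ below is an illustrative value, not from the text):

```python
import math

hbar = 1.0546e-34   # J s
pc = 3.086e16       # m

def g_dagger(m_eV=1e-22, xi_pc=300.0):
    """Acceleration scale g = hbar^2 / (2 m^2 xi^3), in m/s^2."""
    m = m_eV * 1.783e-36        # eV -> kg
    xi = xi_pc * pc
    return hbar**2 / (2 * m**2 * xi**3)

gd = g_dagger()                 # fiducial m = 1e-22 eV, xi = 300 pc

# Deep-MOND regime (g_b < g_dagger): g_obs = sqrt(g_b * g_dagger)
g_b = 1.0e-11                   # illustrative baryonic acceleration, m/s^2
g_obs = math.sqrt(g_b * gd)
print(gd, g_obs)                # gd ~ 2.2e-10 m/s^2
```

The fiducial values reproduce the $2.2\times10^{-10}$ m s$^{-2}$ prefactor of Eq. (\ref{gdagger}), within a factor of $\sim$2 of the empirical RAR/BTFR scale.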
This configuration is different from the simple excited states of the fuzzy DM or from CDM. An approximate numerical solution can be found in Ref. \citealp{Schive:2014dra}: \beq \rho(r)\simeq \frac{\rho_0}{\left(1+0.091(r/r_c)^2\right)^{8}}, \eeq where the central core density is $\rho_0=1.9 a^{-1} \left(10^{-23}eV/m\right)^2(kpc/r_c)^4 M_\odot/pc^3$ and $r_c$ is the half-density radius. The outer profile of the halo is similar to the Navarro-Frenk-White (NFW) profile of the CDM (see Fig. 2). The halo mass $M_{halo}$ was also found to be related to the soliton mass $M$ by $M\propto M^{1/3}_{halo}$. Figure 1 shows the DM density in a halo in our DM-only numerical simulation using the spectral method. In this example, 10 small halos with mass $M=10^9 M_\odot$ collide with each other. From $\psi$, one can predict the astrophysical properties of galaxies. For example, the rotation velocity at radius $r$ is roughly given by $ v_{rot}(r)=\sqrt{\frac{GM(r)}{r}}, $ where $M(r)=4\pi \int^r_0 r'^2 \rho(r') d{r}'$ is the mass within $r$. This equation can be used to investigate the RCs of galaxies \cite{Matos:2003pe,Lesgourgues2002791,Robles:2012uy,Schive:2014hza} in this model. The mass $m \sim 10^{-22}eV$, which is consistent with other cosmological constraints, was obtained by fitting RCs. \section{Discussion} Numerical studies in this model have so far mainly been DM-only simulations. For a more precise simulation of large galaxies, we need to understand the role of baryonic matter such as stars and gas. For example, it was shown that the gravitational potentials of the fuzzy DM induce spiral arm patterns of stars in galaxies~\cite{2012arXiv1212.5745B}. In Ref. \citealp{Chan_2018}, it was numerically shown that flat RCs appear only when visible matter is included in large galaxies. In summary, the fuzzy DM with mass about $10^{-22}eV$ can explain many mysterious properties of galaxies. To find conclusive proof, we need more precise fuzzy DM simulations with visible matter. 
\subsection*{Acknowledgments} This work was supported by NRF-2020R1F1A1061160.
\title{Sublimation Origin of Active Asteroid P/2018 P3} \author{Yoonyoung Kim\inst{1} \and Jessica Agarwal\inst{1,2} \and David Jewitt\inst{3} \and Max Mutchler\inst{4} \and Stephen Larson\inst{5} \and Harold Weaver\inst{6} \and Michael Mommert\inst{7}} \institute{Institute for Geophysics and Extraterrestrial Physics, TU Braunschweig, 38106 Braunschweig, Germany\\ \email{yoonyoung.kim@tu-bs.de} \and Max Planck Institute for Solar System Research, 37077 G\"ottingen, Germany \and Department of Earth, Planetary and Space Sciences, UCLA, Los Angeles, CA 90095-1567, USA \and Space Telescope Science Institute, Baltimore, MD 21218, USA \and Lunar and Planetary Laboratory, University of Arizona, Tucson, AZ 85721-0092, USA \and The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland 20723, USA \and University of St. Gallen, Institute of Computer Science, 9000 St. Gallen, Switzerland} \abstract {Active asteroids show (typically transient) cometary activity, driven by a range of processes. A sub-set, sometimes called main-belt comets, may be driven by sublimation and so could be useful for tracing the present-day distribution of asteroid ice. Object P/2018 P3 has a Tisserand parameter 3.096 but a high eccentricity 0.415, placing it within the dynamical boundary between asteroids and comets.} {We aim to determine the cause of activity (sublimation or something else) and assess the dynamical stability of P3, in order to better constrain the intrinsic ice content in the main belt. } {We obtained Hubble Space Telescope images of P3 at the highest angular resolution. We compared the observations with a Monte Carlo model of dust dynamics. We identified and analyzed archival CFHT (2013) and NEOWISE (2018) data. In addition, we numerically integrated the orbits of P3 clones for 100 Myr.} {P3 has been recurrently active near two successive perihelia (at 1.76 AU), indicative of a sublimation origin. 
The absence of 4.6~$\mu$m band excess indicates zero or negligible CO or CO$_2$ gas production from P3. The properties of the ejected dust are remarkably consistent with those found in other main-belt comets (continuous emission of $\sim$0.05--5 mm particles at 0.3--3 m s$^{-1}$ speeds), with mass-loss rates of $\gtrsim$2 kg s$^{-1}$. The orbit of P3 is unstable on timescales $\sim$10 Myr.} {We speculate that P3 has recently arrived from a more stable source (either the Kuiper Belt or elsewhere in the main belt) and has been physically aged at its current location, finally becoming indistinguishable from a weakly sublimating asteroid in terms of its dust properties. Whatever the source of P3, given the dynamical instability of its current orbit, P3 should not be used to trace the native distribution of asteroid ice.} \keywords{minor planets, asteroids: general --- minor planets, asteroids: individual (P/2018~P3) --- comets: general} \titlerunning{Active Asteroid P/2018 P3} \authorrunning{Y. Kim et al.} \section{INTRODUCTION} Active asteroids are small solar system bodies that combine asteroid-like orbits and comet-like activity (Jewitt et al. 2015). By definition, active asteroids have Tisserand parameter values of $T_J > 3.08$ (i.e., dynamically decoupled from Jupiter, which excludes Encke-type comets), while Kuiper Belt comets and Oort cloud comets have $T_J < 3$. Mass loss mechanisms identified to date include sublimation, impacts, rotational breakup, and combinations of these processes. A subset of the active asteroids called main-belt comets (MBCs; Hsieh \& Jewitt 2006) exhibits recurrent mass loss near perihelion, indicating sublimation-driven activity. Proper identification of MBCs is essential to improve our understanding of the distribution and abundance of ice in the main belt. Active asteroid P/2018 P3 (PANSTARRS, hereafter ``P3'') was discovered in an active state on UT 2018 August 08 (Weryk et al.~2018), two months before perihelion on UT 2018 October 09. 
Its orbital semimajor axis, eccentricity and inclination are 3.007 AU, 0.415 and 8.90\degr, respectively, leading to an asteroid-like Tisserand parameter, $T_J$ = 3.096. While most of the currently known active asteroids have relatively low eccentricities and are assumed to be native to the main belt, the orbital eccentricity of P3 is comparatively high, which favors excavation and sublimation of sub-surface ice through higher impact velocities and perihelion temperatures (Kim et al. 2018) but also increases the probability for an origin in the Kuiper Belt (Hsieh \& Haghighipour 2016). In this paper we report time-resolved observations from the Hubble Space Telescope (HST) taken to investigate P3 at high spatial resolution. We also aim to determine the cause of the activity and assess the dynamical stability of P3, ultimately to better constrain the native population of main-belt ice. \begin{table*} \caption{Observing Geometry \label{geometry}} \centering \begin{tabular}{lcccrccccr} \hline\hline UT Date and Time & DOY\tablefootmark{a} & $\Delta T_p$\tablefootmark{b} & $\nu$\tablefootmark{c} & $r_H$\tablefootmark{d} & $\Delta$\tablefootmark{e} & $\alpha$\tablefootmark{f} & $\theta_{\odot}$\tablefootmark{g} & $\theta_{-v}$\tablefootmark{h} & $\delta_{\oplus}$\tablefootmark{i}\\ \hline 2018 Sep 28 18:19 - 18:56 & 271 & -11 & 354.6 & 1.758 & 0.786 & 11.9 & 11.9 & 243.2 & 9.1 \\ 2018 Nov 14 20:07 - 20:46 & 318 & 36 & 18.1 & 1.782 & 1.035 & 27.5 & 60.2 & 242.6 & 1.0 \\ 2018 Dec 28 20:54 - 21:32 & 362 & 80 & 38.6 & 1.877 & 1.503 & 31.4 & 66.8 & 239.7 & -3.5 \\ \hline \end{tabular} \tablefoot{ \tablefoottext{a}{Day of Year, UT 2018 January 01 = 1.} \tablefoottext{b}{Number of days from perihelion (UT 2018-Oct-09 = DOY 282).} \tablefoottext{c}{True anomaly, in degrees.} \tablefoottext{d}{Heliocentric distance, in AU.} \tablefoottext{e}{Geocentric distance, in AU.} \tablefoottext{f}{Phase angle, in degrees.} \tablefoottext{g}{Position angle of the projected anti-Solar 
direction, in degrees.} \tablefoottext{h}{Position angle of the projected negative heliocentric velocity vector, in degrees.} \tablefoottext{i}{Angle of Earth above the orbital plane, in degrees.} } \end{table*} \section{OBSERVATIONS} Observations with the HST were taken under the Director's Discretionary Time allocation GO 15623. We used the UVIS channel of the WFC3 camera with the broadband F606W filter (effective wavelength $\sim$6000\AA, FWHM $\sim$2300\AA). In each HST orbit we obtained eight exposures of 230 s duration using a 1k subarray of WFC3 (40\arcsec$\times$40\arcsec~field), with a 2$\times$2 subsampling dither-pattern. The image scale was initially 0.04\arcsec~pixel$^{-1}$ and we re-sampled the images to have a scale of 0.025\arcsec~pixel$^{-1}$. The remarkably small geocentric distance (0.786 AU) on UT 2018 September 28 provides superb spatial resolution of 17 km per re-sampled WFC3 pixel, giving the opportunity to study the inner coma of an active asteroid near perihelion in great detail. The observations on UT 2018 November 14 were scheduled to coincide with the passage of the Earth through the orbital plane (the out-of-plane angle was 1\degr) to measure the distribution of dust perpendicular to the orbit plane. A journal of observations is given in Table \ref{geometry}. 
\begin{table*} \caption{Photometry with Fixed Linear Radius Apertures \label{phot}} \centering \begin{tabular}{lcccccc} \hline\hline UT Date & Quantity\tablefootmark{a} & 500 km & 1000 km & 2000 km & 4000 km & 8000 km \\ \hline Sep 28 & $V$ & 18.16$\pm$0.01 & 17.76$\pm$0.01 & 17.45$\pm$0.03 & 17.22$\pm$0.03 & 17.11$\pm$0.03 \\ Sep 28 & $H$ & 16.98$\pm$0.01 & 16.58$\pm$0.01 & 16.27$\pm$0.03 & 16.04$\pm$0.03 & 15.93$\pm$0.03 \\ Sep 28 & $C_e$ & 6.1$\pm$0.2 & 8.8$\pm$0.5 & 11.7$\pm$0.6 & 14.4$\pm$0.6 & 15.9$\pm$0.6 \\\\ Nov 14 & $V$ & 19.57$\pm$0.01 & 19.01$\pm$0.01 & 18.50$\pm$0.03 & 18.06$\pm$0.06 & 17.78$\pm$0.06 \\ Nov 14 & $H$ & 17.14$\pm$0.01 & 16.58$\pm$0.01 & 16.07$\pm$0.03 & 15.63$\pm$0.06 & 15.35$\pm$0.06 \\ Nov 14 & $C_e$ & 5.2$\pm$0.2 & 8.8$\pm$0.4 & 14.0$\pm$0.5 & 20.9$\pm$0.4 & 27.1$\pm$0.4 \\\\ Dec 28 & $V$ & 21.90$\pm$0.01 & 21.16$\pm$0.01 & 20.42$\pm$0.03 & 19.79$\pm$0.06 & 19.37$\pm$0.06 \\ Dec 28 & $H$ & 18.39$\pm$0.01 & 17.65$\pm$0.01 & 16.91$\pm$0.03 & 16.28$\pm$0.06 & 15.86$\pm$0.06 \\ Dec 28 & $C_e$ & 1.6$\pm$0.3 & 3.3$\pm$0.6 & 6.5$\pm$0.8 & 11.5$\pm$0.7 & 17.0$\pm$0.7 \\ \hline \end{tabular} \tablefoot{ \tablefoottext{a}{$V$ = total apparent V magnitude, $H$ = total absolute V magnitude, $C_e$ = effective scattering cross-section in km$^2$ computed with $p_V$ = 0.04.} } \end{table*} \section{RESULTS} \subsection{Photometry} We obtained photometry from each composite image (Figure \ref{images}) using a set of five circular apertures having fixed linear radii from 500 to 8,000~km, projected to the distance of P3. The sky background was determined within a concentric annulus with inner and outer radii of 15\arcsec~and 20\arcsec, respectively. Flux calibration was performed using the online WFC3 Exposure Time Calculator for a G2V source in the F606W filter. We converted the total apparent magnitudes, $V$, to total absolute magnitudes, $H$, using \begin{equation} H = V - 5\log_{10}(r_H \Delta) - f(\alpha), 
\label{absolute} \end{equation} \noindent where $r_H$ and $\Delta$ are the heliocentric and geocentric distances, respectively. The phase function $f(\alpha)$ at solar phase angle $\alpha$ was not measured for P3. We used a linear phase function $f(\alpha) = 0.04\alpha$ (Meech \& Jewitt 1987). The total absolute magnitude is related to the effective scattering cross-section, $C_e$ [km$^2$], by \smallskip \begin{equation} C_e = \frac{1.5\times 10^6}{p_V} 10^{-0.4 H} \end{equation} \noindent where $p_V$ = 0.04 is the assumed geometric albedo (Fern{\'a}ndez et al. 2013). For each date and aperture radius, $V$, $H$, and $C_e$ are listed in Table \ref{phot}. As shown in Figure \ref{ce}, the scattering cross-section within the 8,000~km radius aperture increased by $\Delta C_e$~=~11 km$^2$, from 16 km$^2$ in September (10 days before perihelion) to 27 km$^2$ in November (1 month after perihelion), then dropped back to 17 km$^2$ at the end of December. The scattering cross-section within the central aperture decreased continuously from September to December, reaching $C_e$ = 1.6 km$^2$ on UT 2018 December 28. A crude upper limit to the nucleus radius (using photometry from the central aperture) is given by $r_n = (C_e/\pi)^{1/2}$ = 0.71 km, assuming $p_V$ = 0.04. \subsection{Morphology} \label{morphology} For a density of 1000 kg m$^{-3}$, the Hill radius of an object having a radius of $r_n$ = 700~m and the orbit of P3 (heliocentric distance $r_H$ = 1.75 AU) is $\sim$200 km. We searched for a potential binary companion and large unbound fragments, since our high-resolution images resolve the Hill sphere well (its radius corresponds to $\sim$11 WFC3 pixels). A wide binary system (Agarwal et al. 2020) and gravitationally unbound fragments resulting from a (near-)catastrophic collision (Kim et al. 2017) or spin-up (Drahus et al. 2015) have been found in other active asteroids, providing key constraints on the cause of activity. 
Observations on all dates show a comet-like coma (Figure \ref{images}), with no obvious sign of fragments or other structures in the coma. To enhance the inner coma structure, we subtracted a 1/$\rho$ gradient and azimuthal average using the Cometary Coma Image Enhancement Facility (Samarasinha et al. 2013). The results revealed an excess jet-like structure whose axis rotates counter-clockwise, closely following the changing antisolar direction (Figure \ref{images}). This is consistent with recently released particles that are small enough to be strongly accelerated by solar radiation pressure, indicating continuous dust emission. In contrast, we observe no evidence in the enhanced images for binarity or for companions/fragments. To set a limit to the size of unseen secondary objects, we used the on-line Exposure Time Calculator to find that, in an 8 $\times$ 230 s integration, signal-to-noise ratio SNR = 10 is reached at magnitude $V$ = 26.5. The corresponding limiting absolute magnitude is $H >$ 25.3 and the upper limit to the radius is $r_e <$ 26 m (geometric albedo $p_V$ = 0.04 assumed). The limiting magnitude will be poorer in regions of the image where scattered light from dust elevates the background brightness, but we have not attempted to quantify this effect. The motion of dust particles in interplanetary space is controlled by $\beta$, the ratio of radiation pressure acceleration to solar gravity. $\beta$ is a function of particle size, approximately given by $\beta \sim a^{-1}$, where $a$ is the particle radius expressed in microns. Figure \ref{syn} shows syndyne trajectories, which are the loci of particles of a given $\beta$ released from the nucleus with zero ejection velocity at different times (Finson \& Probstein 1968). The directions of the tail are best matched by syndynes with $\beta \sim$ 0.003 to 0.03, corresponding to particle radii $a \sim$ 30 to 300 $\mu$m. We take $a \sim$ 100 $\mu$m as the nominal grain size in P3. 
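The photometric quantities in Table \ref{phot} follow directly from Equations (1) and (2); a minimal check, using the UT 2018 Sep 28 geometry from Table \ref{geometry} and the 8,000 km aperture magnitude:

```python
import math

def absolute_mag(V, rH, Delta, alpha):
    """Eq. (1): total absolute magnitude with the linear 0.04 mag/deg phase law."""
    return V - 5 * math.log10(rH * Delta) - 0.04 * alpha

def cross_section(H, pV=0.04):
    """Eq. (2): effective scattering cross-section in km^2 for albedo pV."""
    return 1.5e6 / pV * 10**(-0.4 * H)

# UT 2018 Sep 28, 8000 km aperture: V = 17.11, rH = 1.758 AU,
# Delta = 0.786 AU, alpha = 11.9 deg
H = absolute_mag(17.11, 1.758, 0.786, 11.9)
Ce = cross_section(H)
print(round(H, 2), round(Ce, 1))   # reproduces H = 15.93, Ce ~ 15.9 km^2
```

The same two functions reproduce every $H$ and $C_e$ row of Table \ref{phot} from the corresponding $V$ and the geometry of Table \ref{geometry}.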
\subsection{Dust Profiles} \label{perpendicular} Observations on UT 2018 November 14 were taken only +1$\degr$ from the projected orbital plane of P3 and provide a constraint on the dust ejection velocity component perpendicular to the orbital plane. For this purpose, we computed surface brightness profiles cut perpendicular to the tail direction in the image plane, in 1--4\arcsec~wide segments, where the tail gradually widened with distance from the nucleus. Figure \ref{profile} shows the positions of peak and half-peak brightness measured separately to the north and south of the tail. Because of the projection effect that occurs when viewing the dust sheet from above the plane (even though by only +1$\degr$), the dust extends slightly further north than south of the tail axis (Figure \ref{syn}). Nevertheless, the profile (Figure \ref{profile}) appears nearly symmetric about the peak of the dust brightness, showing that the ejection is almost symmetric. We take the FWHM of a series of vertical profiles as the measure of the out-of-plane width. The width of the tail, $w_T$, is related to the distance from the nucleus, $\ell_T$ [m], by \begin{equation} V_{\perp} = \left(\frac{\beta g_{\odot}}{8 \ell_T} \right)^{1/2} w_T \label{width} \end{equation} \noindent where $V_{\perp}$ is the ejection velocity perpendicular to the orbit plane and $g_{\odot} \sim 0.002$ m s$^{-2}$ is the local solar gravitational acceleration at $r_H$ = 1.78 AU. For simplicity, we assume that $\ell_T$ is proportional to the projected angular distance from the nucleus, $\theta$. We fitted Equation (\ref{width}) to the FWHM in Figure \ref{profile}, finding $V_{\perp}$ = (20$\pm$2) m s$^{-1}$ for $\beta = 1$ particles. Within the uncertainties, we take $V_{\perp} \sim 20\sqrt{\beta}$ m s$^{-1}$ as the dust ejection velocity. 
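Equation (\ref{width}) is easy to invert for a quick sanity check; a sketch with illustrative (assumed, not measured) width and distance values chosen to land on the $\sim$20 m s$^{-1}$ scale quoted for $\beta=1$ particles:

```python
import math

def v_perp(w_T, ell_T, beta=1.0, g_sun=0.002):
    """Eq. (3): out-of-plane ejection velocity [m/s] from the tail FWHM
    w_T [m] at nucleocentric distance ell_T [m], for ratio beta and local
    solar gravitational acceleration g_sun [m/s^2] at 1.78 AU."""
    return math.sqrt(beta * g_sun / (8.0 * ell_T)) * w_T

# Illustrative numbers (assumptions): FWHM of 4000 km measured 10,000 km
# down the tail gives V_perp = 20 m/s for beta = 1.
print(v_perp(4.0e6, 1.0e7))
```

Because $V_\perp \propto w_T/\sqrt{\ell_T}$, a tail in ballistic equilibrium widens as $\sqrt{\ell_T}$; fitting that growth is exactly what the FWHM fit in Figure \ref{profile} does.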
\section{DISCUSSION} \subsection{Ejection Mechanism} The presence of a persistent anti-solar tail at each epoch of observation (Figure \ref{images}) can be naturally explained by sublimation. More convincing evidence of a sublimation origin for the activity is the existence of archival data showing the recurrence of activity. We identified and analyzed archival data from the Canada-France-Hawaii 3.6~m telescope (CFHT) obtained with MegaCam on UT 2013 July 15 using an $R$ filter. The CFHT data (Figure \ref{2013}) show a prominent dust tail at heliocentric distance $r_H$ = 1.794 AU (perihelion was at 1.793 AU on UT 2013 July 08). This places P3 in the category of an ice-bearing MBC (Hsieh \& Jewitt 2006). We measured the total apparent $R$ magnitude of P3 using the MegaCam calibrations but checked the result using field stars from Pan-STARRS1. Within a $\theta$ = 6.0$\arcsec$ radius aperture (linear radius of 8,000 km at the comet), we obtained $R$ = 20.5$\pm$0.02. Measurements were converted to $V$ using the color of the Sun, $V - R$ = 0.35 (Holmberg et al. 2006). We find $V$ = 20.9 $\pm$ 0.1, where the quoted uncertainty reflects the fact that the color of P3 is not known. The corresponding absolute $V$ magnitude computed from Equation (1) is $H_V$ = 17.0 $\pm$ 0.1, about 1 mag fainter than the 8,000~km aperture measurement from 2018 (Table 2), albeit with considerable uncertainty. Figure \ref{comp} compares total absolute magnitudes from P3's 2013 and 2018 active periods as a function of the true anomaly, $\nu$. The figure shows that the absolute magnitude in 2013 ($H_V \sim$ 17.0) is fainter than the extrapolated absolute magnitude in 2018 (when P3 had a similar true anomaly of $\nu \sim 0\degr$) by 1.2 mag, corresponding to a factor of $\sim$3. This indicates that the activity of P3 has increased from orbit to orbit, presumably as a result of the progressive exposure of a larger (but still small) area of sublimating ice. 
Possible causes are exposure of new ice by slope collapse, edge erosion of the sublimating ice patch, or surface movement caused by rotational instability. We additionally identified and analyzed archival data from NEOWISE (Mainzer et al. 2011, 2014) obtained on UT 2018 December 1--2 in the W1 (3.4 $\mu$m) and W2 (4.6 $\mu$m) channels. For each filter, 16 images were stacked using the online WISE MOS Search (Figure \ref{wise}). Within a $\theta$ = 8.25$\arcsec$ radius aperture, we obtained flux densities of (0.14$\pm$0.05) mJy in W1 and (0.42$\pm$0.11) mJy in W2. In Figure \ref{spec}, we show the W1 and W2 flux density measurements together with blue and orange curves to indicate the spectrum of scattered sunlight and the thermal emission from P3, respectively. The scattered sunlight spectrum has been normalized to fit the W1 measurement. The thermal emission spectrum was calculated using the total cross-section indicated by W1, assuming a dust temperature of 227~K (10\% warmer than the isothermal blackbody temperature) and an emissivity-to-albedo ratio $\epsilon / A = 3.5$ (Kelley et al. 2016; see Equations (1)--(3) and references therein). The green curve in Figure \ref{spec} shows the sum of the scattered and thermal emission curves. Evidently, the composite spectrum is consistent with the data and provides no evidence for an excess in W2 that might have been associated with gaseous (CO or CO$_2$) emission (cf.~Bauer et al. 2015). Similarly, null detections of gas production have been reported in other active asteroids (Bauer et al. 2012; Snodgrass et al. 2017). \subsection{Particle Properties} To explore the properties of the ejected dust, we conducted a series of simulations of dust dynamics taking into account both solar gravity and radiation pressure. We created model images of P3 using a Monte Carlo dynamical procedure developed in Ishiguro et al. (2007). As a starting point, we used the model parameters derived by Hsieh et al. (2009). 
They assumed the terminal ejection velocity $V = V_0 \beta^{u_1} r_H^{-u_2}$ for $10^{-4} \le \beta \le 10^{-2}$, with $V_0$ = 25 m s$^{-1}$, $u_1$ = 0.5, $u_2$ = 0.5 and $r_H$ expressed in AU. The model assumes that dust particles are ejected continuously, in a sunward cone with half-opening angle $\omega = 45\degr$, and follow a differential power-law size distribution with index $q = -3.5$ in the range $\beta_{\rm min} \leq \beta \leq \beta_{\rm max}$. In the new model for P3, the parameters $V_0$, $\beta_{\rm min}$, $\beta_{\rm max}$, and the onset time of dust ejection, $t_0$, were treated as variables, while the remaining parameters were held fixed at the Hsieh et al. (2009) values. We created a number of model images using different parameters and then visually compared them to the observations. With small adjustments of the parameters, we found plausible solutions that reproduce the direction, extent, and overall shape of the coma. Figure \ref{model} compares the observations with the models. Dust ejection is assumed to begin in 2018 July (3 months before perihelion) to match the coma direction. We find minimum and maximum $\beta$ values in the ranges $1\times10^{-4} \le \beta_{\rm min} \le 2\times10^{-4}$ and $1\times10^{-2} \le \beta_{\rm max} \le 5\times10^{-2}$, respectively, with $V_0 = (40\pm10)$ m s$^{-1}$. This corresponds to a dust velocity $V = 40 \sqrt{\beta/1.78} \approx 30\sqrt{\beta}$ m s$^{-1}$, broadly consistent with $V_{\perp} \sim 20\sqrt{\beta}$ m s$^{-1}$ inferred from the perpendicular profile (Section \ref{perpendicular}). Similar model parameters were found for the MBCs 238P, 324P, 358P, and P/2017 S5 (Hsieh et al. 2009; Moreno et al. 2011, 2013; Jewitt et al.~2019), indicating that these objects eject dust with similar properties. The best-fit parameters indicate minimum and maximum particle radii of $a_{\rm min} \sim$ 20--100 $\mu$m and $a_{\rm max} \sim$ 0.5--1 cm, respectively. 
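The fitted velocity law and the inferred grain sizes can be evaluated directly. The sketch below uses the common approximation $\beta \approx 1/a(\mu{\rm m})$ for grains of density $\sim$1000 kg m$^{-3}$; that conversion is our illustrative addition, not part of the Monte Carlo model.

```python
import math

def ejection_velocity(beta, V0=40.0, u1=0.5, u2=0.5, rH=1.78):
    """Terminal ejection velocity V = V0 * beta**u1 * rH**(-u2), in m/s."""
    return V0 * beta**u1 * rH**(-u2)

def grain_radius_um(beta):
    """Approximate grain radius in microns from beta ~ 1/a(um), the common
    conversion for density ~1000 kg/m^3 (illustrative, not from the fit)."""
    return 1.0 / beta

# Size range implied by the fitted beta limits:
print(grain_radius_um(5e-2), grain_radius_um(1e-2))              # a_min ~ 20-100 um
print(grain_radius_um(2e-4) / 1e4, grain_radius_um(1e-4) / 1e4)  # a_max ~ 0.5-1 cm

# The fitted law reduces to V ~ 30*sqrt(beta) m/s at rH = 1.78 AU:
print(ejection_velocity(1e-2), 30.0 * math.sqrt(1e-2))
```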
We take $a_{\rm min}$ = 50 $\mu$m and $a_{\rm max}$ = 0.5 cm, yielding a mean radius $\overline{a} = (a_{\rm min} a_{\rm max})^{1/2}$ = 0.5 mm. We estimate an order of magnitude mass loss using \begin{equation} M_d = \frac{4}{3} \rho \overline{a} \Delta C_e \end{equation} \noindent where $\rho$ = 1000 kg m$^{-3}$ is the assumed particle density and $\Delta C_e$ is the change in the scattering cross-section over the interval in which brightening was observed. With $\overline{a}$ = 0.5 mm and $\Delta C_e$~=~(11.2$\pm$0.7) km$^2$ between September 28 and November 14 (Table \ref{phot}), we obtain an ejected mass $M_d \sim 7.5\times10^6$ kg. If ejected steadily over this 47 day interval, the average dust ejection rate would be $dM_d/dt \sim$ 2 kg s$^{-1}$. We note that this is a lower limit to the average mass loss rate, because dust moving out of the aperture was not considered. We solved the energy balance equation for an exposed water ice surface sublimating in equilibrium with sunlight. At $r_H$ = 1.78 AU ($T$ = 196 K), we find that ice would sublimate at the specific rate $F_s$ = 1.1$\times$10$^{-4}$ kg m$^{-2}$ s$^{-1}$. The area of exposed ice needed to supply dust at the rate $dM_d/dt \sim$ 2 kg s$^{-1}$ is given by \begin{equation} A_s = \frac{dM_d/dt}{f_{dg} F_s} \label{subl_area} \end{equation} \noindent where $f_{dg}$ is the ratio of the dust to gas production rates. We adopt $f_{dg}$ = 10 (Fulle et al. 2016; Reach et al. 2000) to find $A_s$ = 1800 m$^2$ ($\sim$0.03\% of the surface of a spherical nucleus of radius 700 m), corresponding to a circular sublimating patch as small as $r_s = (A_s/\pi)^{1/2}$ $\sim$24 m in radius. In Figure \ref{whipple}, the empirical dust grain ejection velocity from P3 (obtained from the Monte Carlo model) is compared with the velocities predicted by the classical Whipple model (Whipple 1951) and by the small source approximation (SSA) model (Jewitt et al. 2014). 
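The order-of-magnitude mass-loss and sublimation-area estimates above can be reproduced numerically (input values as quoted in the text):

```python
import math

# Inputs as quoted in the text.
rho = 1000.0             # assumed particle density, kg/m^3
a_mean = 0.5e-3          # mean grain radius (0.5 mm), m
dCe = 11.2e6             # change in scattering cross-section, m^2 (11.2 km^2)
interval_s = 47 * 86400  # September 28 to November 14, s

# Ejected dust mass, M_d = (4/3) * rho * a_mean * dCe
M_d = (4.0 / 3.0) * rho * a_mean * dCe
dMdt = M_d / interval_s              # ~2 kg/s if ejected steadily
print(M_d, dMdt)

# Sublimating area needed to supply the dust, A_s = (dM/dt) / (f_dg * F_s),
# using the rounded rate of 2 kg/s as in the text.
f_dg = 10.0    # dust-to-gas production rate ratio
F_s = 1.1e-4   # equilibrium sublimation rate at 1.78 AU, kg/m^2/s
A_s = 2.0 / (f_dg * F_s)
r_s = math.sqrt(A_s / math.pi)       # radius of the equivalent circular patch
print(A_s, r_s)
```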
In the SSA, a small source (length scale $r_s \sim$24 m) limits the acceleration length for gas-entrained dust particles and so leads to lower dust velocities. The empirical dust speeds (blue line) are an order of magnitude smaller than predicted by the Whipple model but consistent with those of the SSA model for a vent of the measured size. Ejection velocities of 5 mm particles are comparable to the 0.3 m s$^{-1}$ gravitational escape speed of the (non-rotating) nucleus, while smaller particles are launched at speeds exceeding the gravitational escape velocity. \subsection{Dynamics} P3 has a high eccentricity ($e$ = 0.415), a perihelion distance ($q$ = 1.756 AU) close to the aphelion of Mars ($Q_{\rm Mars}$ = 1.666 AU), and a $T_J$ value ($T_J = 3.096$) in the dynamical boundary region between asteroids and comets. Furthermore, the semimajor axis ($a$ = 3.007 AU) lies close to the 9:4 mean-motion resonance with Jupiter at $a_{9:4}$ = 3.029 AU, suggesting that this resonance may induce dynamical instability. To assess whether P3 is native to its current location, we investigated its long-term dynamical stability. We generated 1000 Gaussian distributed clones in orbital element space, centered on the osculating orbital elements of P3, with $\sigma$ values equal to the orbital element uncertainties. The osculating orbital elements and their 1$\sigma$ uncertainties were retrieved from the JPL database at epoch 2018 October 21. More realistic clones could be drawn from a multivariate normal distribution using the full orbital covariance matrix, but this refinement is unnecessary for the present purpose. We integrated the orbits backward for 100 Myr using the Mercury N-body integration package (Chambers 1999), with the non-gravitational force set to zero. For comparison, we additionally generated and integrated 1000 clones of 259P, an MBC that is close to the 8:3 mean-motion resonance with Jupiter (Jewitt et al. 2009). 
These clones were generated with small deviations to reflect the small orbital element uncertainties of 259P. Figure \ref{clones} shows the percentage of P3 and 259P clones that remain in main-belt-like orbits ($2.064<a<3.277$ AU and $q>1.65$~AU and $Q<4.50$~AU; following Hsieh \& Haghighipour 2016) during the backward dynamical evolution. The number of P3 survivors decreases exponentially with an e-folding time of $\sim$12 Myr. Over the full computed period of 100~Myr, 98\% of the P3 clones left the main belt (implying that the orbit is highly unstable), while all 259P clones remained in the main belt. The orbit of P3 is chaotic and therefore cannot be followed backward except over short timescales. Below we consider two possible sources for P3. \begin{table*} \caption{Orbital Elements of P/2018 P3 and 233P \label{233P}} \centering \begin{tabular}{lccccccccc} \hline\hline Object & $a$\tablefootmark{a} & $e$\tablefootmark{b} & $i$\tablefootmark{c} & $T_J$\tablefootmark{d} & $q$\tablefootmark{e} & $Q$\tablefootmark{e} & $r_e$\tablefootmark{f} & $Q_{CO_2}$\tablefootmark{g} & Ref.\tablefootmark{h} \\ & (AU) & & (deg) & & (AU) & (AU) & (km) & (mol~s$^{-1}$) \\ \hline P/2018 P3 & 3.007 & 0.415 & 8.90 & 3.096 & 1.756 & 4.257 & $<$0.71 & -- & [1] \\ 233P & 3.033 & 0.410 & 11.27 & 3.081 & 1.787 & 4.279 & $\sim$0.54 & 1.1$\times10^{25}$ & [2] \\ \hline \end{tabular} \tablefoot{ \tablefoottext{a}{Semimajor axis in AU.} \tablefoottext{b}{Orbital eccentricity.} \tablefoottext{c}{Orbital inclination (degrees).} \tablefoottext{d}{Tisserand parameter with respect to Jupiter.} \tablefoottext{e}{Perihelion ($q$) and aphelion ($Q$) distances, in AU.} \tablefoottext{f}{Effective circular radius, in km.} \tablefoottext{g}{CO$_2$ production rate at $r_H$ = 1.8 AU, in molecules~s$^{-1}$.} \tablefoottext{h}{References: [1] This work; [2] Bauer et al. 
(2015).}} \end{table*} Table \ref{233P} lists the orbital elements of P3 and 233P, which are remarkably similar and suggest a common origin. Hsieh et al. (2018) used synthetic Jupiter-family comets (JFCs) studied by Brasser \& Morbidelli (2013) to find a dynamical path from the Kuiper Belt to the orbit of 233P (Figure 29 of Hsieh et al. 2018). Because of the similarity of their orbital elements, we find that a single synthetic JFC could take on both 233P-like and P3-like orbital elements at some point during its evolution (Figure \ref{JFC}). We infer that there is a small but non-zero probability that JFCs can evolve into P3-like orbits. On the other hand, inspection of the synthetic JFCs revealed no dynamical path from the Kuiper Belt to the orbit of 259P (R. Brasser 2018, private communication), consistent with the conclusion in Jewitt et al. (2009) that it originated elsewhere in the main belt. Lastly, we consider the possibility that P3 originated in the main belt. The drift of main-belt objects into mean-motion resonances due to the Yarkovsky effect, with a subsequent increase of eccentricity and inclination, is a possible scenario (e.g. Hsieh et al. 2020; Kim et al. 2014). Since our integrations did not include non-gravitational forces such as the Yarkovsky effect, we leave the role of these forces in the dynamical evolution of P3 open. \section{SUMMARY} We present Hubble Space Telescope measurements of active asteroid P/2018 P3 taken on three occasions between UT 2018 September 28 ($r_H$ = 1.758 AU, inbound) and 2018 December 28 ($r_H$~=~1.877 AU, outbound). We additionally identify and analyze archival CFHT data (showing P3 to have been active in 2013) and NEOWISE data (showing the absence of a 4.6~$\mu$m band excess). We find that \begin{enumerate} \item Sublimation explains the protracted nature of the activity, and its recurrence near perihelion in both 2013 and 2018. 
The null detection of companions, fragments, or multiple tails in the high-quality HST data also rules out the other potential sources of activity (impact or spin-up). \item The absence of a 4.6~$\mu$m band excess indicates zero or negligible CO or CO$_2$ gas production. \item Photometry sets a limit to the effective radius of the nucleus at $r_e <$ 0.7 km (assuming a geometric albedo $p_V$ = 0.04). \item The properties of the ejected dust are remarkably consistent with those of previously studied MBCs (continuous emission of $\sim$0.05--5 mm particles at 0.3--3 m s$^{-1}$ speeds), suggesting that P3, like the MBCs, has a small active area, consistent with physical aging in the main belt. The average dust production rate, $dM_d/dt \gtrsim$ 2 kg s$^{-1}$, could be supplied by the sublimation of water ice covering as little as $A_s \sim$1800 m$^2$. \item Our dynamical analysis suggests that the orbit of P3 is unstable on timescales $\sim$10 Myr and therefore that P3 originated elsewhere; 98\% of the P3 clones left the main belt over the 100~Myr backward integration. \end{enumerate} We speculate that P3 has recently arrived in the main belt from a source region (either the Kuiper Belt or elsewhere in the main belt) and has been physically aged at its current location ($q \sim$ 1.76 AU), finally becoming indistinguishable from a weakly sublimating MBC in terms of its dust properties. Whatever the source of P3, given the dynamical instability of its current orbit, P3 should not be used to trace the native distribution of main-belt ice. \bigskip \begin{acknowledgements} We thank the anonymous referee for comments on the manuscript and Ramon Brasser for providing the orbital elements of synthetic JFCs. Based on observations made under GO 15623 with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. 
This publication makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a joint project of the Jet Propulsion Laboratory/California Institute of Technology and the University of Arizona. Y.K. and J.A. acknowledge funding by the Volkswagen Foundation. J.A.'s contribution was made in the framework of a project funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 757390 CAstRA. \end{acknowledgements} {\it Facilities:} HST (WFC3), NEOWISE. {\it Software:} sbpy (Mommert et al. 2019).
Title: SPT-3G+: Mapping the High-Frequency Cosmic Microwave Background Using Kinetic Inductance Detectors
Abstract: We present the design and science goals of SPT-3G+, a new camera for the South Pole Telescope, which will consist of a dense array of 34100 kinetic inductance detectors measuring the cosmic microwave background (CMB) at 220 GHz, 285 GHz, and 345 GHz. The SPT-3G+ dataset will enable new constraints on the process of reionization, including measurements of the patchy kinematic Sunyaev-Zeldovich effect and improved constraints on the optical depth due to reionization. At the same time, it will serve as a pathfinder for the detection of Rayleigh scattering, which could allow future CMB surveys to constrain cosmological parameters better than from the primary CMB alone. In addition, the combined, multi-band SPT-3G and SPT-3G+ survey data will have several synergies that enhance the original SPT-3G survey, including: extending the redshift-reach of SZ cluster surveys to $z > 2$; understanding the relationship between magnetic fields and star formation in our Galaxy; improved characterization of the impact of dust on inflationary B-mode searches; and characterizing astrophysical transients at the boundary between mm and sub-mm wavelengths. Finally, the modular design of the SPT-3G+ camera allows it to serve as an on-sky demonstrator for new detector technologies employing microwave readout, such as the on-chip spectrometers that we expect to deploy during the SPT-3G+ survey. In this paper, we describe the science goals of the project and the key technology developments that enable its powerful yet compact design.
https://export.arxiv.org/pdf/2208.08559
\keywords{cosmic microwave background, kinetic inductance detectors, South Pole Telescope, kinematic Sunyaev-Zel'dovich effect} \section{Introduction} \label{sec:intro} A combination of satellite and ground-based experiments measuring the temperature and polarization of the cosmic microwave background (CMB) has produced sub-percent constraints on $\Lambda$CDM cosmological parameters. Ground-based experiments such as the existing South Pole Observatory~\cite{hui18,Sobrin2021} and the upcoming Simons Observatory~\cite{simonsobservatorycollab19} and CMB-S4~\cite{cmbs4collab19} will continue to advance these measurements of the primary CMB. Meanwhile, measurements of secondary CMB anisotropies, produced by the interactions of CMB photons with matter along the line of sight after the epoch of recombination, yield science that is highly complementary to that of the primary CMB. Measurements of the thermal and kinematic Sunyaev-Zeldovich effects (tSZ, kSZ), for example, can be used to improve our understanding of galaxy cluster formation, as well as to infer both the duration of the epoch of reionization and the associated optical depth, which can improve constraints on other cosmological parameters such as the sum of the neutrino masses~\cite{abazajian16}. Improving measurements of secondary anisotropies requires observations at higher, sub-mm frequencies and sub-arcminute angular resolution, in order to better characterize and remove astrophysical foregrounds, including the cosmic infrared background (CIB). A set of upcoming projects, including PrimeCam on the CCAT-prime telescope~\cite{CCATp2021} and our new camera, SPT-3G+, for the 10-m South Pole Telescope (SPT), are being developed to conduct surveys optimized for detecting CMB secondaries. 
SPT-3G+ is a powerful new CMB camera for the SPT that will enable precision measurements of these secondary CMB anisotropies by providing a combination of low noise, broad frequency coverage for excellent control of foregrounds, and sub-arcminute angular resolution. SPT-3G+ will replace the currently operating camera, SPT-3G, with one having \num{34100} kinetic inductance detectors (KIDs) operating in three frequency bands centered at \SIlist[list-units=single]{220;285;345}{\giga\hertz}, while reusing the same ambient temperature optics. Over the course of a 4-year survey, SPT-3G+ will observe the same \SI{1500}{deg^2} survey field as SPT-3G, with the combined dataset having noise levels of \SIlist[list-units=single]{3;2;3;6;28}{\micro\kelvin\textrm{-arcmin}} spanning 5 frequency bands centered at \SIlist[list-units=single]{95;150;220;285;345}{\giga\hertz}, with the excellent angular resolution (0.7~arcmin at 280~GHz) provided by the SPT primary mirror. The uniquely low atmospheric noise at the South Pole---one of the best developed sites for mm- and submm-wavelength observations---will enable the SPT-3G+ survey to study CMB secondaries across a range of angular scales, from the SZ effects at arcminute scales to Rayleigh scattering at sub-degree scales. \section{Science Targets} \label{sec:science} The science program of SPT-3G+ focuses on three main themes: the physics of reionization, Rayleigh scattering, and the evolution of galaxies and clusters over cosmic time. These science themes are studied with multiple observational probes, described in detail in the following sections. The SPT-3G+ dataset will precisely constrain the optical depth due to reionization and its duration with its ultra-sensitive measurements of the kSZ effect (\autoref{sec:kSZ}), in addition to potentially probing the process of star formation during reionization with its mm- and submm-wavelength line-intensity mapping (LIM) measurements (\autoref{sec:lim}). 
The combined data from SPT-3G and SPT-3G+ will serve as a pathfinder for future measurements of Rayleigh scattering that will reduce the effect of cosmic variance in cosmological parameter extraction (\autoref{sec:rayleigh}). And the combination of ultra-deep maps and high-frequency observing bands will allow SPT-3G+ to study the assembly of proto-clusters into galaxy clusters at high redshift (\autoref{sec:clusters}) as well as the physics connecting magnetic fields and dust in our own local Galaxy (\autoref{sec:galacticdust}). In addition, a fourth theme of SPT-3G+ is technology development. New detector and readout technologies that are demonstrated successfully in the lab often require significant additional R\&D to succeed in the more demanding conditions presented by on-sky observing. Translating technologies from the lab to on-sky conditions requires telescope and cryogenic platforms that are flexible enough to rapidly deploy new detectors, but which also provide dedicated access to long integration times to study instrumental systematics. Following the lineage of the SPT-SZ~\cite{Chang2009}, SPTpol~\cite{Austermann2012}, and SPT-3G cameras~\cite{Sobrin2021}, which each demonstrated multiple early-stage technologies, SPT-3G+ provides an ideal platform for technology development, focusing specifically on KIDs for CMB observations and mm-wave LIM. \subsection{Kinematic Sunyaev-Zeldovich Effect} \label{sec:kSZ} The kSZ effect is the anisotropy induced by CMB photons scattering off electrons with bulk peculiar velocities relative to the Hubble flow, and its correlation with the cosmic velocity field and ionization history can be used to constrain cosmological parameters. The anisotropy is sourced at two distinct redshifts due to different processes, known as the \emph{reionization} or \emph{patchy kSZ} and the \emph{late-time kSZ} effects. 
The late-time kSZ effect arises due to scattering of CMB photons off electrons in massive halos with a bulk velocity along the line of sight, primarily during $0 \lesssim z \lesssim 3$. In particular, the late-time kSZ can be used to measure the growth of structure and therefore test models of dark energy and gravity on cosmic distance scales~\cite{Mueller2015, keisler13}. On the other hand, the patchy kSZ effect arises due to scattering of CMB photons off expanding bubbles of free electrons with a bulk velocity along the line of sight, during reionization at $z \gtrsim 6$. The optical depth due to reionization and its duration, which can be measured with the patchy kSZ effect~\cite{Alvarez2021}, are currently poorly constrained~\cite{reichardt20, planck18-6} but of great importance for extracting cosmological parameters from CMB data. Constraints on the sum of the neutrino masses by the next generation of CMB experiments like CMB-S4 will be limited by the parameter degeneracy with the optical depth $\tau$; reducing the uncertainty on $\tau$ to the cosmic variance limit would improve the precision of the neutrino mass measurement by $\sim 40\%$~\cite{abazajian16}. The primary experimental challenge in measuring the kSZ power spectrum is the separation of the signal from small-scale foreground contamination such as the cosmic infrared background (CIB) and tSZ effect. Residual noise from these foregrounds in maps cleaned with multi-frequency methods such as internal linear combination (ILC) becomes larger than the kSZ signal itself at sufficiently high multipoles, and this practically limits the number of kSZ modes that can be used in cosmological analyses. SPT-3G+ addresses this problem with its three high-frequency bands, which produce deep maps that can be combined directly with the lower-frequency SPT-3G maps on the same patch of the sky. 
The resulting residuals due to both CIB and tSZ contamination, shown in \autoref{fig:ilcresiduals}, are significantly lower than for other upcoming experiments, allowing the use of higher-$\ell$ data with less sensitivity to foreground modeling. The high-precision measurements of the kSZ effect by SPT-3G+ will enable powerful new constraints on $\tau$, thereby reducing the impact of parameter degeneracies in future surveys. The patchy kSZ effect has a significant non-gaussian component due to the fact that variation in the cosmic velocity field along different lines of sight causes a modulation in the amplitude of the kSZ power spectrum across the sky. This results in a nonzero kSZ 4-point function, the amplitude of which has a different dependence on the reionization parameters $\Delta z_\textrm{re}$ and $\tau$ than the kSZ power spectrum / 2-point function~\cite{Smith2016,Ferraro2018}. The combination of the kSZ 2-point and 4-point functions can therefore provide a constraint on $\tau$ comparable to that from \planck\ low-$\ell$ E-mode data~\cite{Alvarez2021}. We adopt a framework similar to previous studies~\cite{Ferraro2018,Alvarez2021} to forecast the constraints on reionization parameters expected from SPT-3G+, summarized in \autoref{fig:ksztau}. Contamination from the CIB and our limited ability to model it precludes using data on scales smaller than a given $\ell_\textrm{max}$ for the kSZ measurement. In our forecasts, we set $\ell_\textrm{max} = 4000$ for TT and $\ell_\textrm{max} = 5000$ for TE and EE. The 4-point function is expected to be less susceptible to foreground contamination, so we choose $\ell_\textrm{max} = 7000$ for those data. For SPT-3G+, we obtain $\sigma(\tau) = \sigmatauksz$, which is 20\% tighter than current constraints from the primary CMB measurements \cite{planck18-6}. 
Even without the \planck\ data, the kSZ-only constraints are within 15\% of the \planck-only constraints, but achieved with a completely independent method, providing an important systematic check. For the duration of reionization, we obtain $\sigma(\Deltazre) = \sigmadeltazreksz$. \subsection{Rayleigh Scattering} \label{sec:rayleigh} The Rayleigh scattering (RS) of CMB photons on neutral hydrogen atoms occurs shortly after recombination. This process generates a secondary CMB anisotropy originating from an additional scattering surface, which has a frequency-dependent redshift slightly lower than that of the surface of last scattering. Because the RS cross section is proportional to $\nu^4$, it produces a frequency-dependent distortion to the primary CMB temperature and polarization anisotropies, the amplitude of which is proportionally larger at higher frequencies~\cite{Lewis2013}. Although its prediction relies on well understood physics, a first detection of RS would be a consistency test of the description of recombination in $\Lambda$CDM, and it would pave the way for using RS to improve measurements of cosmological parameters. For instance, RS provides an independent constraint on the primordial helium fraction $Y_p$, breaking the degeneracy with $N_\textrm{eff}$ that is present in the primary CMB data alone~\cite{Alipour2015}; and future CMB space missions such as PICO may improve their constraints on the neutrino mass scale $\sum m_\nu$ by as much as $2\times$ by including RS~\cite{Beringue2021}. SPT-3G+ has several features that are particularly well suited for overcoming the extreme challenge of detecting CMB RS from the ground. First, the $\nu^4$ frequency scaling of RS means that the signal in the CMB is higher at the frequencies observed by SPT-3G+, above the \SI{150}{\giga\hertz} peak of the CMB blackbody. 
Second, most of the signal-to-noise in the RS measurement comes from temperature, rather than polarization, at multipoles $\ell < 1000$, so atmospheric noise significantly limits sensitivity. As discussed in detail in \autoref{sec:sitetelescope}, the South Pole atmosphere has consistently low, stable, sky noise for the entire austral winter, and is thus the ideal site at which to attempt the RS measurement. Finally, the broad frequency coverage of SPT-3G+, in combination with SPT-3G data, enables excellent mitigation of extragalactic foregrounds, such as the CIB, which are the dominant source of noise in the temperature-based measurement of RS. Forecasts of the RS signal-to-noise ratio for several upcoming ground-based experiments were performed in Dibert, \emph{et al.}~\cite{Dibert2022} and are presented in \autoref{fig:rayleighscattering} (see also similar results in Zhu, \emph{et al.}~\cite{Zhu2022}). These forecasts include the effects of atmospheric noise, detector noise, and galactic and extragalactic foregrounds, and they combine with \planck{} data on the patch of the sky observed by each experiment. The combined data of SPT-3G and SPT-3G+ will have an RS signal-to-noise of 1.6 to 2.3, depending on the degree of correlation of the atmospheric noise between different frequency bands. While this is similar to the upcoming near-term experiments SO and CCAT-prime, if these two experiments are compared to SPT-3G+ without the addition of \planck\ data (dashed lines of \autoref{fig:rayleighscattering}), SPT-3G+ has the greatest RS signal-to-noise on its own. This occurs because SO and CCAT-prime have much larger sky fractions than SPT-3G+ ($f_\textrm{sky} \sim 0.4$ vs. $f_\textrm{sky} \sim 0.03$), so the \planck\ data contributes relatively more statistical power to forecasts for those instruments. 
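The $\nu^4$ advantage noted above can be made concrete by evaluating the relative Rayleigh-scattering amplitude in each survey band (a simple scaling sketch only; real forecasts also fold in band shapes, atmospheric noise, and foreground residuals):

```python
# Relative Rayleigh-scattering amplitude, proportional to nu^4, normalized
# to 150 GHz. Band centers are the five SPT-3G / SPT-3G+ bands.
BANDS_GHZ = [95, 150, 220, 285, 345]

def rs_relative_amplitude(nu_ghz, ref_ghz=150.0):
    """nu^4 scaling of the Rayleigh-scattering cross-section."""
    return (nu_ghz / ref_ghz) ** 4

for nu in BANDS_GHZ:
    # The 345 GHz band carries ~28x the RS signal of a 150 GHz band.
    print(nu, round(rs_relative_amplitude(nu), 2))
```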
\subsection{Delensing and Primordial B-modes} \label{sec:delensing} One of the key predictions of the inflationary paradigm is the production of gravitational waves that give rise to a B-mode polarization of the CMB on degree angular scales. Multiple ongoing and future CMB experiments, including the BICEP/\emph{Keck} program~\cite{hui18}, Simons Observatory~\cite{simonsobservatorycollab19}, and CMB-S4~\cite{cmbs4collab19} are undertaking searches for degree-scale primordial B modes, the amplitude of which is proportional to the tensor-to-scalar ratio $r$. Although low-resolution refracting telescopes have thus far produced the best upper limits on the value of $r$ due to their low cost and simplicity~\cite{bicep2keck21}, their sensitivity is limited by their inability to distinguish primordial B modes from those produced by the gravitational lensing of E modes by large-scale structure. This motivates the use of high-resolution SPT maps to ``delens'' the low-resolution B-mode maps from BICEP Array~\cite{bkdelensing20}. Motivated by the need for delensing to improve constraints on $r$, the SPT-3G and BICEP Array projects have formed an umbrella collaboration called the South Pole Observatory (SPO), in order to facilitate joint observations and analyses. The \sptnew\ dataset will expand the ability of the SPT to delens BICEP Array data beyond the level of \sptthreeg. For example, \sptnew\ will enable new systematic checks for dust from the improved high-frequency sensitivity, with the \sptnew\ \SI{220}{\giga\hertz} band alone having a noise level comparable to the \sptthreeg\ \SI{90}{\giga\hertz} band, and, compared to \sptthreeg\ alone, \sptnew\ will increase the total survey weight by nearly 50\%. \subsection{Growth of Galaxies and Clusters} \label{sec:clusters} Galaxy clusters are the largest gravitationally bound objects in the universe. 
However, the details of how these structures are assembled are far from understood, and in particular, we do not yet know how the cluster environment affects galaxy evolution and star-formation rates~\cite{magliocchetti13, miller15}. Over a period of 2~Gyr (between $z \sim 2$ and $z \sim 4$), there must be a dramatic transformation from the proto-cluster stage, with dense aggregations of rapidly star-forming galaxies, to the cluster stage, in which baryonic matter is divided between passive galaxies and a more massive halo of hot intracluster gas. The combination of \sptthreeg\ with the higher angular resolution and improved dust sensitivity of \sptnew\ will allow us to explore this frontier, detecting the intracluster medium in clusters at $z>2$ through the tSZ effect, even in the presence of correlated dust emission. Added to \sptthreeg\ maps, \sptnew\ data will increase the yield of high-redshift clusters by a factor of $\gtrsim 2$, discovering $\sim 200$ massive SZ clusters at $z>2$. \sptnew\ will provide an unprecedented submm-wave survey and catalog to the extragalactic community. The extended frequency coverage and higher sensitivity will yield 1000$\times$ more dusty sources at $z>1$ than SPT-SZ; whereas SPT-SZ discovered one source at $z=6.9$ \cite{marrone18}, \sptnew\ will discover more than 200 at $z>7$. This will complement the contemporaneous survey of CCAT-prime, which will observe over \SI{200}{deg^2} to the confusion limit, detecting \num{1200} galaxies at $5<z<8$~\cite{CCATp2021}. The higher signal-to-noise ratio and smaller beam area of \sptnew\ will narrow the uncertainty region for discovered sources, greatly improving our ability to associate them with counterparts at other wavelengths. The combination of the SPT-3G and \sptnew\ surveys will provide a unique and powerful catalog of sources that will be crucial to the galaxy evolution community in the coming decade. 
\sptnew\ will have strong synergies with major US facilities such as ALMA and JWST, by providing an ultra-wide survey field for identifying the rarest and most interesting objects to study in great detail, and with optical and infrared surveys such as Rubin/LSST and \textit{Roman}, by uncovering objects otherwise hidden by dust. \subsection{Galactic Dust and Magnetic Fields} \label{sec:galacticdust} The deep, polarization-sensitive \sptnew\ data will provide a detailed view of the diffuse interstellar medium (ISM) in our Galaxy and help elucidate the role magnetic fields play in star-forming molecular clouds. We know this role is important: the energy density of the ordered and turbulent magnetic fields is comparable to the turbulent energy of the ISM \cite{planck15-19}. While existing facilities have been used to reconstruct the magnetic field in individual clouds~\cite{guerra20}, the \sptnew\ Galactic survey will measure the linear polarization of Galactic dust in a large sample of nearby molecular clouds with resolution 10$\times$ finer than \planck, down to subparsec scales, enabling robust statistical inference of the role magnetic fields play in the star-formation process. Along with CCAT-prime~\cite{CCATp2021}, \sptnew\ will provide one of the first large-area submm-wave Galactic surveys with sufficient angular resolution to measure the magnetic field structure in molecular clouds at 0.1\,pc resolution for clouds within 680\,pc, and at 1\,pc resolution for clouds within 6.8\,kpc. The former will allow detailed studies of local molecular clouds down to the filament scale, while the latter will yield a large statistical sample of $\sim$1300 molecular clouds, enabling a detailed statistical exploration of the connection between magnetic fields and star formation as a function of cloud properties. 
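The quoted cloud distances and physical resolutions are tied together by the small-angle relation; the sketch below makes the implied angular scale ($\sim$30$''$, which is not separately quoted in the text) explicit:

```python
ARCSEC_PER_RAD = 206265.0

def physical_scale_pc(theta_arcsec, distance_pc):
    """Linear scale (pc) subtended by an angle (arcsec) at a distance (pc)."""
    return theta_arcsec / ARCSEC_PER_RAD * distance_pc

# The angle subtending 0.1 pc at 680 pc (and, equivalently, 1 pc at 6.8 kpc):
theta_arcsec = 0.1 / 680.0 * ARCSEC_PER_RAD
print(theta_arcsec)                              # ~30 arcsec
print(physical_scale_pc(theta_arcsec, 680.0))    # ~0.1 pc
print(physical_scale_pc(theta_arcsec, 6800.0))   # ~1 pc
```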
\sptnew\ will also improve on recent measurements of polarized dust emission by \planck\ that have been used to make significant advances in the understanding of magnetically induced dust anisotropy \cite{clark15}, ISM turbulence \cite{caldwell17}, and the Galactic magnetic field \cite{bracco19}. The \sptnew\ measurements would be an order of magnitude more sensitive to polarized dust emission than the \planck\ data in this region of sky and frequency range and will significantly improve our understanding of the magnetized ISM and the polarized foregrounds that are key to the search for gravitational-wave signatures in the CMB \cite{bicep2keck18, cmbs4collab19}. \subsection{Astrophysical Transients} \label{sec:transients} With its high instantaneous sensitivity and an observation cadence that revisits the same patches of the sky at intervals ranging from a few hours to a few days, SPT-3G+ will detect a wide array of astrophysical transients. In recent years, SPTpol and SPT-3G have pioneered the study of mm-wave transients, making the first potential detection by a CMB experiment of a gamma-ray burst (GRB) afterglow~\cite{whitehorn16}, initiating a dedicated survey for mm-wave transients that has detected emission from flaring stars and possible extragalactic sources~\cite{guns21}, and performing the first measurements of asteroids with a dedicated CMB survey~\cite{Chichura2022}. \sptnew\ will have similar overall raw flux sensitivity to SPT-3G, but at higher frequencies, with a sensitivity to transients of 1-day duration of \SIlist[list-units=single]{4.6;4.5;20}{\milli\jansky} in the \SIlist[list-units=single]{220;285;345}{\giga\hertz} bands, respectively. Notably, the flux sensitivity of \sptnew\ at \SI{220}{\giga\hertz} is more than $5\times$ better than that of SPT-3G. 
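The quoted depths refer to 1-day transients; depths for other timescales follow from a rough white-noise scaling (sensitivity improving as the square root of duration), an idealization that ignores low-frequency noise and cadence gaps:

```python
import math

# 1-day transient flux sensitivities quoted in the text, in mJy
SENS_1DAY_MJY = {220: 4.6, 285: 4.5, 345: 20.0}

def transient_sensitivity_mjy(band_ghz, duration_days):
    """Scale the quoted 1-day depth to another duration, assuming
    white-noise-limited integration (sensitivity ~ t^-1/2)."""
    return SENS_1DAY_MJY[band_ghz] / math.sqrt(duration_days)

# e.g. a 4-day flare at 220 GHz can be probed roughly twice as deep, ~2.3 mJy
```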
Since the spectrum of GRBs is expected to be relatively flat at millimeter wavelengths~\cite{granot02a}, \sptnew\ should detect a similar number of GRB sources as \sptthreeg, but in a higher frequency range where no dedicated surveys for transient sources have ever been performed. This will enable tests of models of the spectral energy distributions of GRB afterglows. In addition, \sptnew\ will be several times more sensitive than \sptthreeg\ to asteroids, whose thermal spectra are brighter at higher frequencies. These mm-wave measurements are sensitive to the asteroid composition several wavelengths below the surface and can therefore constrain models of the temperature and wavelength-dependent emissivity of asteroid regoliths~\cite{keihm13}. \subsection{Line-Intensity Mapping} \label{sec:lim} Line-intensity mapping (LIM) is a powerful, but relatively new, observational technique that probes a wide range of redshifts by mapping the intensity of atomic and molecular emission lines both spatially and spectrally. Far-IR emission lines such as the rotational transitions of CO or the [CII] fine structure line emitted over $0 \lesssim z \lesssim 10$ are redshifted into the same atmospheric frequency ``windows'' at which ground-based CMB experiments such as SPT-3G and SPT-3G+ observe, meaning that LIM observations can be performed with detectors and optics similar to those used by CMB experiments. There is an extensive literature exploring science opportunities from mm- and submm-wavelength LIM observations (e.g. see Kovetz \emph{et al.}~\cite{Kovetz2019}). For example, at $z \lesssim 6$, future LIM surveys probe large-scale structure and therefore can constrain fundamental cosmological parameters such as the sum of the neutrino masses $\sum m_\nu$~\cite{MoradinezhadDizgah2022} and the amplitude of non-Gaussianity in the primordial curvature perturbations, parameterized by $f_\textrm{NL}$~\cite{MoradinezhadDizgah2018}. 
The latter is a powerful probe of inflation, as it can distinguish between single- and multi-field inflationary models. At higher redshifts, observations of [CII] would provide a probe of the epoch of reionization and would constrain the star formation rate of the earliest galaxies~\cite{Sun2021}. SPT-3G+ will provide a platform for LIM observations capable of fielding on the order of 1000 spectroscopic pixels with $R = \lambda / \Delta \lambda \sim 100 - 300$, with a staged deployment of detectors in the latter half of the SPT-3G+ survey. Combined with the $\gtrsim 90\%$ observing efficiency of the SPT-3G+ camera, such a LIM survey would represent an increase of several orders of magnitude in survey depth (measured in number of spectrometers~$\times$~observation time) compared with ongoing projects~\cite{KarkareSLIMLTD2022}. The on-chip spectrometer technology described in \autoref{sec:limdetectors} is fully compatible with the optics and readout electronics planned for SPT-3G+, enabling the camera to transition seamlessly between CMB and LIM survey modes. \section{Survey} \label{sec:survey} To achieve these science goals, SPT-3G+ will carry out two surveys: a ``Main'' survey targeting reionization, Rayleigh scattering, and high-redshift galaxy clusters; and a ``Galactic'' survey targeting measurements of dust in our Galaxy. The Main survey will cover the same \SI{1500}{deg^2} footprint observed by the ongoing SPT-3G experiment, which will produce some of the deepest arcminute-resolution CMB maps upon its completion, with map noise levels of \SIlist{3;2}{\micro\kelvin\textrm{-}arcmin} in temperature at \SIlist{95;150}{\giga\hertz}. Replicating the cadence of SPT-3G, the new camera will observe this footprint each year from late March through the beginning of December, corresponding to the months with the best atmospheric conditions at the South Pole. 
The combined ultra-low-noise maps in five frequency bands will provide the stringent control of CIB and tSZ residuals that enables the kSZ measurements described in \autoref{sec:kSZ}. During the austral summer months from December through late March, SPT-3G+ will observe a new \SI{7000}{deg^2} footprint covering much of the Galaxy. In this part of the year, the diffraction sidelobes of the SPT primary mirror intercept the Sun when the telescope is pointed at the Main field, resulting in significant spurious features in temperature maps. This motivates observing the opposite half of the southern sky to a shallower depth, which will be used for the Galactic science theme of SPT-3G+. \begin{table}[] \def\arraystretch{1.0} \setlength{\tabcolsep}{7pt} \centering \begin{tabular}{l c c c c} \hline\hline Survey & Area & 220\,GHz T noise & 285\,GHz T noise & 345\,GHz T noise \\ & [\si{deg^2}] & [\si{\micro\kelvin\textrm{-}arcmin}] & [\si{\micro\kelvin\textrm{-}arcmin}] & [\si{\micro\kelvin\textrm{-}arcmin}] \\ \hline Main & \num{1500} & 2.9 & 5.6 & 28 \\ Galactic & \num{7000} & 13 & 25 & 130 \\ \hline \end{tabular} \caption{ Noise levels of the two surveys to be performed by SPT-3G+, assuming four years of operation. The Main survey covers the same \SI{1500}{deg^2} field currently being observed by the SPT-3G camera in frequency bands centered at \SIlist{95;150;220}{\giga\hertz} and will be conducted during the austral winter for approximately 9 months of the year. The Galactic survey will observe a \SI{7000}{deg^2} area to a shallower depth, using the approximately 3 months of the year when the Main field is contaminated by sidelobes from the Sun. 
} \label{tab:surveys} \end{table} \section{Site and Telescope Platform} \label{sec:sitetelescope} SPT-3G+ will make use of the 10-m aperture, sub-mm quality SPT, located at the geographic South Pole, the developed site with the world's best conditions for observing at mm- and submm-wavelengths (see \autoref{fig:sptpicture})~\cite{carlstrom11}. The telescope and site are particularly well suited to the high-frequency CMB observations of SPT-3G+ for several reasons. The primary mirror of the SPT has a \SI{20}{\micro\meter} rms surface error, which is of sufficiently high quality to enable efficient observations at \SI{345}{\giga\hertz}. The SPT-3G+ frequency bands suffer significant atmospheric absorption from water vapor, but the median annual precipitable water vapor (pwv) at the South Pole is only \SI{0.32}{\milli\meter}, enabling high-frequency observations through nearly the entire austral winter~\cite{kuo17}. The combination of low pwv, low atmospheric temperatures, the absence of diurnal temperature variations, and laminar airflow produces uniquely stable atmospheric conditions that enable the measurement of large-scale cosmological modes, an important factor for the CMB anisotropy and Rayleigh scattering science goals. In addition, observations taken during the austral summer season can still be successfully used in analyses of small-angular-scale phenomena; for example, summer data from the completed SPTpol survey were used to construct a catalog of galaxy clusters identified via the SZ effect~\cite{bleem20}. \section{Instrument Design} \label{sec:instrument} To enable the science described in \autoref{sec:science}, the SPT-3G+ instrument reuses the primary and secondary optics of the SPT and includes a new 100~mK dilution-refrigerator-based cryostat housing low-loss silicon lenses and arrays of KIDs observing at 220, 285, and 345~GHz. 
The design emphasizes efficient use of the optical throughput, with low-loss reimaging optics and high-density detector arrays, to make optimal use of the 2~deg$^2$ field of view of the SPT optics. The use of modular optics tubes and modern, highly multiplexed RF readout electronics furthermore enables the cryostat to function as a platform for performing on-sky demonstrations of new KID-based detector technologies, including on-chip microwave spectrometers for line-intensity mapping. \subsection{Cryostat} \label{sec:cryostat} The new detectors and lenses deployed by SPT-3G+ will be housed in a new cryostat that will replace the existing SPT-3G cryostat on the SPT. This cryostat will contain a dilution refrigerator (DR) with the detectors operating at a base temperature of 100~mK. The use of DRs in ground-based CMB experiments has been pioneered in recent years by projects including ACTPol~\cite{Thornton2016}, AdvACT~\cite{henderson16}, CLASS~\cite{Dahal2019}, and Simons Observatory~\cite{Zhu2021}. Compared with the $^3$He-$^4$He sorption refrigerators that have historically been used by many projects, including SPT-3G~\cite{Sobrin2021}, DRs have several advantages: continuous operation, providing $\sim 20\%$ greater observing efficiency; lower base temperature, reducing generation-recombination noise at the detectors; and approximately $100\times$ greater cooling power at the detector operational temperature. The SPT-3G+ cryostat, shown in \autoref{fig:cryooptics}, will use a single Bluefors SD-250 DR\footnote{\url{https://bluefors.com/products/sd-dilution-refrigerator/}} to cool seven \SI{150}{\milli\meter} detector wafers to an operational temperature of \SI{100}{\milli\kelvin}. The cryogenic reimaging optics (\autoref{sec:optics}) are housed in tubes and cooled to approximately \SI{4}{\kelvin}, with a design similar to the Simons Observatory~\cite{Zhu2021} and CMB-S4~\cite{Gallardo2022} large-aperture telescope cryostats. 
A second CryoMech PT-415 pulse-tube cooler, in addition to the one integrated into the DR, provides extra cooling power at the \SI{4}{\kelvin} stage. An ultra-high molecular weight polyethylene (UHMWPE) vacuum window and infrared filtering consisting of a combination of HD-30 closed-cell foam sheets, alumina at \SI{40}{\kelvin}, and a metal-mesh low-pass filter at the \SI{4}{\kelvin} Lyot stop complete the optics tube. \subsection{Optics} \label{sec:optics} The cryogenic refracting optics of SPT-3G+ are split into seven independent tubes, which reimage sections of the Gregorian focus onto individual \SI{\sim135}{\milli\meter} diameter focal planes, each with a \SI{0.7}{deg} diameter field of view with Strehl ratio $>0.9$ at \SI{375}{\giga\hertz}. The central tube uses a scaled-down copy of the SPT-3G optics design, with three rotationally symmetric, plano-convex, 6th-order asphere lenses~\cite{Sobrin2021}. The six outer tubes use plano-convex lenses with non-rotationally symmetric surface sags, the design of which is optimized to correct coma in an off-axis section of the Gregorian focus. Despite the more complex shape of the lenses in the outer optics tubes, the Strehl ratio and field of view of the outer tubes are similar to those of the central tube. At the detector surface, the beams are telecentric in both on- and off-axis tubes. Whereas the SPT-3G optics used \SI{720}{\milli\meter} diameter alumina lenses, SPT-3G+ will use \SI{200}{\milli\meter} Si lenses. A Lyot stop with a temperature of \SI{4}{\kelvin} is located between the second and third lenses, inside the vacuum window. The stop restricts the illumination of the primary mirror to the inner \SI{9}{\meter} and defines the aperture efficiency of the beams of the feedhorns in the focal plane. 
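The 9 m illuminated aperture set by the Lyot stop fixes the diffraction-limited beam sizes; a rough sketch, in which the taper factor of about 1.5 is an assumed effective value for the feedhorn illumination (not stated in the text), reproduces the beam widths listed in \autoref{tab:detector_params}:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def beam_fwhm_arcmin(freq_ghz, d_illum_m=9.0, taper_factor=1.5):
    """Diffraction-limited beam FWHM; taper_factor ~1.5 approximates the
    feedhorn illumination taper (an assumed value, not from the text)."""
    wavelength_m = C / (freq_ghz * 1e9)
    fwhm_rad = taper_factor * wavelength_m / d_illum_m
    return math.degrees(fwhm_rad) * 60.0

# 220, 285, 345 GHz -> roughly 0.8, 0.6, 0.5 arcmin
```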
The multiple smaller optics tubes permit the use of silicon refractive optics, which both have a lower bulk loss tangent than alternative materials and can be machined with meta-material anti-reflection coatings having very low reflectivity and wide bandwidth~\cite{Datta2013,Coughlin2018}. Although the SPT-3G+ detector arrays are all single-color (see \autoref{sec:cmbdetectors}), for simplicity the \SI{220}{\giga\hertz} and \SI{285}{\giga\hertz} detectors will use the same broadband AR coating developed for the \SI{220}{\giga\hertz} and \SI{270}{\giga\hertz} dichroic arrays of SO and CMB-S4. For \SI{345}{\giga\hertz}, single-band machined AR coatings on Si lenses have been demonstrated by the TolTEC experiment, and SPT-3G+ will implement a similar design~\cite{toltec2020}. \subsection{CMB Detectors} \label{sec:cmbdetectors} \begin{table}[] \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{15pt} \begin{center} \begin{tabular}{l | c c c } \hline\hline Observing band & 220 GHz & 285 GHz & 345 GHz \\ \hline Number of 150-\si{\milli\meter} wafers & 2 & 3 & 2 \\ % Number of detectors & \num{9744} & \num{14616} & \num{9744} \\ % Number of readout lines & 12 & 18 & 12 \\ % Fractional bandwidth & 0.26 & 0.20 & 0.082 \\ Pixel size [\si{\milli\meter}] ([$F\lambda$]) & 2.2 (1.25) & 2.2 (1.61) & 2.2 (1.95) \\ % NET per detector [\si{\micro\kelvin_{\textrm{CMB}} \sqrt{\second}}] & \num{540} & \num{1300} & \num{5700} \\ Camera NET [\si{\micro\kelvin_{\textrm{CMB}} \sqrt{\second}}] & 6.2 & 13 & 65 \\ % Beam FWHM [arcmin] & 0.8 & 0.6 & 0.5 \\ % \hline \end{tabular} \caption{Detector parameters for the SPT-3G+ focal plane.}\label{tab:detector_params} \end{center} \end{table} SPT-3G+ will deploy seven single-color arrays of feedhorn-coupled KIDs observing in bands centered at 220, 285, and 345~GHz with fractional bandwidths of 26\%, 20\%, and 8.2\%, and 4,872 detectors per array. 
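The 2.2 mm pixel pitch in the table can be cross-checked against the quoted $F\lambda$ values; in this sketch the effective focal ratio $F \approx 1.3$ is an assumption chosen to match the tabulated numbers, not a value given in the text:

```python
C = 299_792_458.0  # speed of light, m/s

def pixel_in_f_lambda(pixel_mm, freq_ghz, f_number=1.3):
    """Pixel pitch in units of F*lambda; f_number=1.3 is an assumed
    effective focal ratio, chosen to reproduce the tabulated values."""
    wavelength_mm = C / (freq_ghz * 1e9) * 1e3
    return pixel_mm / (f_number * wavelength_mm)

# 2.2 mm pixels at 220, 285, 345 GHz -> ~1.24, 1.61, 1.95 F-lambda,
# consistent with the table entries 1.25, 1.61, 1.95
```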
The use of KIDs offers several critical advantages over TESs at frequencies above 200~GHz, including the ability to read out more detectors per detector array. The need for per-detector wirebonds limits TES-based architectures to $\sim$\num{2000} detectors per 150-mm silicon wafer. With vastly fewer wirebonds required for each silicon wafer, the detector density of a KID array can be much higher than that of an equivalent array of TESs, resulting in significantly better mapping speed per unit of focal plane area~\cite{Barry2022}. For example, the mapping speed of an SPT-3G+ detector wafer is more than twice that of an equivalent detector wafer using TESs. We have developed a direct-absorbing pixel design using an integrated backshort, which is simple to fabricate and achieves high optical efficiency and low cross-polarization coupling in simulations. The pixel design, shown in \autoref{fig:modulemontage}, consists of two orthogonal meanders of Al that each form the inductor of a lumped-element KID~\cite{Dibert2021}. Above the pixel is a gold-plated Al block that contains feedhorns that transition to waveguide. Each of the KID meanders couples directly to one of the two orthogonal TE11 fundamental modes of the waveguide. An RF choke around the opening of the waveguide suppresses optical crosstalk between adjacent pixels. The alignment of the feedhorns to the pixels on the silicon wafer is defined by pin and slot alignment features removed from the silicon wafers using a deep reactive ion etch (DRIE), similar to the scheme used by the Simons Observatory~\cite{2021arXiv211201458M}. The backside of the silicon detector wafer beneath each pixel is partially removed with a DRIE and then metallized with Nb in order to form a $\lambda / 4$ backshort for the waveguide cavity. The distance between the device layer and the backshort is controlled by using the insulator layer of a silicon-on-insulator wafer to stop the DRIE process at a well-defined distance from the device layer. 
The meander of the KID, shown in \autoref{fig:modulemontage}, serves to increase the volume of the inductor, increasing the quality factor of the resonator under optical load while simultaneously maintaining low cross-polarization coupling. Simulations indicate a loaded quality factor of $Q\sim 10^5$ for all frequency bands and a cross-polarization of $\lesssim 3$\%. Since the pixels use direct-absorbing KIDs, the low-frequency edge of the bandpass is defined by the frequency cutoff of the waveguide, rather than an on-chip filter. The high-frequency edge of the bandpass is set by a free-space, metal-mesh, low-pass filter, similar to those that are widely used in CMB experiments~\cite{2006SPIE.6275E..0UA}. Each KID couples to a Nb interdigitated capacitor (IDC) and is then capacitively coupled to a Nb coplanar waveguide (CPW) for readout; the pixels are repeatedly patterned over a 150-mm-diameter wafer and split into six readout lines with a multiplexing factor of 812. Achieving this multiplexing factor requires a fractional frequency placement precision of $\lesssim 10^{-4}$. The IDCs of SPT-3G+ are designed to be compatible with laser trimming, which allows for post-fabrication editing of the detector resonant frequencies and has been demonstrated to achieve this placement precision by several groups, including our own~\cite{Shu:2018fex,McKenney:2018pds,McGeehan2018}. Fabrication of test pixels and prototype subarrays has been performed at the Pritzker Nanofabrication Facility at the University of Chicago~\cite{Dibert2021}. Following the initial maturation of design and fabrication techniques, the process has been transferred to the Center for Nanoscale Materials at Argonne National Laboratory, where the final detector arrays will be produced. 
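The detector counts and multiplexing factors quoted above are internally consistent, as a quick arithmetic check shows:

```python
DETECTORS_PER_ARRAY = 4872           # per single-color 150 mm wafer
WAFERS_PER_BAND = {220: 2, 285: 3, 345: 2}
READOUT_LINES_PER_WAFER = 6

# Detectors per band and in total (cf. the ~34,000-detector focal plane)
detectors = {band: n * DETECTORS_PER_ARRAY for band, n in WAFERS_PER_BAND.items()}
total = sum(detectors.values())      # 34104

# Resonators per readout line
mux_factor = DETECTORS_PER_ARRAY // READOUT_LINES_PER_WAFER  # 812
```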
The pixel design entails a relatively simple fabrication process---with significantly fewer steps than the fabrication of TES-based CMB detectors, for example---which has lent itself to a rapid design iteration cadence in response to lab testing. \subsection{On-chip Spectrometers for Line Intensity Mapping} \label{sec:limdetectors} On-chip spectrometers using KIDs as the detector element in each channel are a scalable technology for LIM of mm-wavelength emission lines. These spectrometers consist of an antenna or feedhorn-coupled orthomode transducer (OMT), which couples radiation from the telescope to a filter bank via microstrip. A series of narrow-band filters ($\lambda / \Delta \lambda \sim 100-1000$) selects the radiation in each spectrometer channel, which is then coupled to a KID, where the microwave energy is dissipated as broken Cooper pairs and read out. On-chip spectrometers have been developed by the DESHIMA~\cite{Endo2019} and SuperSpec~\cite{Karkare2020} collaborations, which are using them for galaxy spectroscopy. Several spectrometer technologies are currently being developed to demonstrate LIM targeting far-IR lines that are redshifted to the millimeter observing band, including gratings~\cite{TIME2014}, Fourier transform spectrometers (FTS)~\cite{Catalano2022}, and Fabry-Perot interferometers~\cite{Vavagiakis2018}, but on-chip spectrometers offer several advantages over these technologies. All filtering and detector elements are fabricated on a single monolithic piece of silicon, which dramatically reduces the volume and mass of the cryogenic optics. In addition, the optical coupling to the telescope for an on-chip spectrometer can be identical to that of a CMB detector array, meaning that on-chip spectrometers are drop-in compatible with CMB experimental optics. 
For this reason, the SPT-3G+ cryostat and readout are fully compatible with on-chip spectrometers, and we plan to deploy arrays in one or more of the optics tubes several years into the 4-year survey of SPT-3G+. We are developing an on-chip spectrometer, closely following the SuperSpec design, with feedhorn-coupled OMTs, a prototype pixel of which is shown in \autoref{fig:spectrometermontage}. The spectrometers observe in the atmospheric window between the \SI{118}{\giga\hertz} oxygen line and the \SI{183}{\giga\hertz} water line, and a focal plane unit of these detectors will be demonstrated by the SPT-SLIM camera on the SPT during the 2023-2024 austral summer season~\cite{KarkareSLIM2022}. Following the analysis of SPT-SLIM data, we will deploy either the same focal plane or an upgraded version in SPT-3G+. \subsection{Readout Electronics} The SPT-3G+ readout electronics consist of a room-temperature subsystem that handles digitization and synthesis of the resonator tones, and a cryogenic subsystem that consists of coaxial cabling and cryogenic low-noise amplifiers (LNAs) inside the cryostat. The room-temperature electronics are based on the ICE readout platform, which has successfully been used for readout of transition-edge sensors (TESs) in SPT-3G and in the signal processing for CHIME~\cite{bandura16}. The ICE readout platform consists of a general-purpose digital motherboard containing a Xilinx Kintex-7 FPGA and an ARM processor, which couples to application-specific mezzanine daughter cards for digitization and synthesis. 
This architecture affords significant flexibility for adapting to the readout requirements of different types of detectors, while maintaining a very high degree of hardware and software maturity: SPT-3G has taken data continuously using 32 IceBoards for 5 years, achieving background-limited white noise performance, excellent low-frequency stability, and negligible readout downtime, and much of the low-level control software framework for TES readout has already been adapted to function with KIDs~\cite{bender19}. SPT-3G+ has adapted the ICE platform for use with KIDs by replacing the existing TES-style mezzanines with new GHz RF boards centered on the Analog Devices AD9082 device, which houses four 12 GSPS DACs and two 6 GSPS ADCs. This RF frontend interrogates the KIDs at baseband, avoiding the need for mixers or other analog components except attenuators and amplifiers between the detectors and the warm electronics. The firmware for TES readout used in SPT-3G~\cite{smecher2012} has been adapted to the faster ADCs and DACs needed for KIDs, implementing magnitude and phase measurement of the KID resonators at a fixed readout frequency, with a digital active feedback loop enabling continuous tone tracking if required. A version of this firmware supporting $1024\times$ multiplexing per RF chain and using the AD9082-FMCA-EBZ evaluation board\footnote{\url{https://www.analog.com/en/design-center/evaluation-hardware-and-software/evaluation-boards-kits/eval-ad9082.html}} has been successfully demonstrated in the lab with prototype \SI{220}{\giga\hertz} SPT-3G+ detectors. More details on the electronics and firmware design and in-lab validation are given in Rouble \emph{et al.} in these proceedings~\cite{Rouble2022}. The cryogenic readout is a mixture of commercial off-the-shelf components similar to other mm/sub-mm KID-based cameras. 
We plan to use stainless steel and CuNi coaxial cabling, with superconducting NbTi between the RF output side of the focal plane and the LNAs at the 4~K stage of the cryostat. The design uses a single SiGe heterojunction bipolar transistor LNA per RF chain at the 4~K stage, which is produced by CryoElec\footnote{\url{https://www.cryoelec.com/}} and has +32~dB of gain at 1~GHz with a noise temperature of 3-4~K and a 1~dB compression point of -50~dBm. \section{Conclusion} Following the completion of SPT-3G survey operations, the SPT-3G+ camera will be deployed to the SPT, outfitted with KIDs to image the CMB in three bands centered at \SIlist[list-units=single]{220;285;345}{\giga\hertz}. Together with the SPT-3G survey data, the SPT-3G+ dataset will yield constraints on the history of reionization, including the optical depth, with its clean measurements of the kSZ effect. In addition, these data will provide a first hint of Rayleigh scattering at recombination, which could serve as a pathfinder for precision measurements from future space-based experiments. At the same time, the survey data will enable the discovery of new high-redshift clusters and dusty star-forming galaxies. The deep, high-frequency observations of the SPT-3G+ survey are enabled by dense focal planes with a total of \num{34000} detectors. These detectors use an efficient, direct-absorbing, feedhorn-coupled architecture of which prototypes have already been fabricated and tested in the \SIlist{220;345}{\giga\hertz} bands. The leap in detector density is made possible by the RF-ICE readout platform, which has already achieved $1024\times$ multiplexing and inherits deployment-grade DAQ and control software from its SPT-3G heritage. The modular optical design of SPT-3G+ has excellent image quality and would enable an eventual phased upgrade with on-chip spectrometers, expanding the science reach of the camera to include line-intensity mapping. 
Finally, using the proven sub-millimeter-quality SPT maximizes the science impact of this hardware, without the need for new telescopes or ambient-temperature optics. \acknowledgments % The South Pole Telescope program is supported by the National Science Foundation (NSF) through grant OPP-1852617. Partial support is also provided by the Kavli Institute of Cosmological Physics at the University of Chicago. Partial support for SPT-3G+ development is provided by NSF grant OPP-2117894. This work made use of the Pritzker Nanofabrication Facility of the Institute for Molecular Engineering at the University of Chicago, which receives support from Soft and Hybrid Nanotechnology Experimental (SHyNE) Resource (NSF ECCS-2025633), a node of the National Science Foundation's National Nanotechnology Coordinated Infrastructure. Work supported by the Fermi National Accelerator Laboratory, managed and operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy. Work at Argonne National Laboratory was supported by the U.S. Department of Energy (DOE), Office of Science, Office of High Energy Physics, under contract DE-AC02-06CH11357. Work performed at the Center for Nanoscale Materials, a U.S. Department of Energy Office of Science User Facility, was supported by the U.S. DOE, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. ZP is supported by Argonne National Laboratory under award LDRD-2021-0186. The McGill authors acknowledge funding from the Natural Sciences and Engineering Research Council of Canada and the Canadian Institute for Advanced Research. KD is supported by the Graduate Instrumentation Research Award through the Department of Energy, Office of High Energy Physics. The Melbourne group acknowledges support from the Australian Research Council's Discovery Projects scheme (DP210102386). The U.S. 
Government retains and the publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for U.S. Government purposes. \bibliography{spt} % \bibliographystyle{spiebib} %
Title: Enrichment of the Galactic disc with neutron-capture elements: Gd, Dy, and Th
Abstract: The study of the origin of heavy elements is one of the main goals of nuclear astrophysics. In this paper, we present new observational data for the heavy $r$-process elements gadolinium (Gd, Z=64), dysprosium (Dy, Z=66) and thorium (Th, Z=90) in a sample of 276 Galactic disc stars ($-1.0<$[Fe/H]$<+0.3$). The stellar spectra have high resolving powers of 42,000 and 75,000, and signal-to-noise ratios higher than 100. The LTE abundances of Gd, Dy and Th have been determined by comparing the observed and synthetic spectra for three Gd lines (149 stars), four Dy lines (152 stars) and the Th line at 4019.13 A (170 stars). For about 70% of the stars in our sample Gd and Dy are measured for the first time, and Th for 95% of the stars. Typical errors vary from 0.07 to 0.16 dex. This paper provides the first extended set of Th observations in the Milky Way disc. Together with europium (Eu, Z=63) data from our previous studies, we have compared these new observations with nucleosynthesis predictions and Galactic Chemical Evolution (GCE) simulations. We confirm that [Gd/Fe] and [Dy/Fe] show the same behaviour as [Eu/Fe]. Using GCE simulations, we study the evolution of [Th/Fe] in comparison with [Eu/Fe], showing that, unlike Eu, either Th production is metallicity-dependent in the case of a unique $r$-process source in the Galaxy, or the frequency of the Th-rich $r$-process source decreases with increasing [Fe/H].
https://export.arxiv.org/pdf/2208.11779
\date{Accepted 2021 xxx. Received 2021 xxx; in original form 2021 xxx} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \pubyear{2021} \label{firstpage} \begin{keywords} stars: abundances -- stars: late-type -- Galaxy: disc -- Galaxy: evolution \end{keywords} \section{Introduction} The nucleosynthesis of heavy neutron-capture elements in stars, together with their observation, is one of the main research drivers for modern nuclear astrophysics. In this context, the origin of the rapid neutron capture process \citep[r-process, e.g.,][and references therein]{cowan:21} is still a major matter of debate. Among others, the most favoured $r$-process sites are neutron-star mergers \cite[e.g.][]{eichler:89, freiburghaus:99, goriely:15, thielemann:17,rosswog:18} and neutron star-black hole mergers \cite[e.g.][]{lattimer:74, surman:08,Fernandez.Foucart.Lippuner:2020}, certain rare classes of fast-rotating supernovae with powerful magnetic fields \cite[e.g.][]{symbalisty:85, cameron:03,nishimura:06, winteler:12,nishimura:2017,moesta:18,obergaulinger:18,reichert:21}, as well as hypernovae or collapsars \cite[e.g.][]{cameron:03,siegel:19,thielemann:20, zenati:20, brauer:21}. In the past, core-collapse supernovae (CCSNe) have been considered as the dominant source of the $r$-process, initially by suggesting neutron-rich innermost ejecta \cite[e.g.][]{hillebrandt:76}, later arguing for fast $(\alpha,n)$-reactions in explosive burning of He shells \cite[e.g.][]{truran:78,thielemann:79,cowan:85}, and afterwards turning to high entropy conditions in neutrino-driven winds during the core-collapse and explosion phase \cite[e.g.][]{woosley:94, takahashi:94,hoffman:97,ning:07, farouqi:10, arcones:13}. At present, realistic CCSN simulations do not provide the right conditions to produce a complete $r$-process pattern, although a mild weak $r$-process production could still be possible \cite[see e.g.,][]{wanajo:11,curtis:19,cowan:21,ghosh:21}. 
Stellar spectroscopic observations can be used to derive fundamental constraints for theoretical simulations. In particular, a large number of studies in the past decade have been devoted to determining the composition of old $r$-process rich stars, formed in the early Milky Way Galaxy \cite[e.g.][]{sneden:03, simmerer:04, beers:05, barklem:05, yong:13, roederer:14, sakari:18, mashonkina:14a, hansen:18}. Their importance is primarily due to the possibility of tracing the contributions of one or several stellar sites of production of these elements within the early Galaxy timescales, before global gas mixing might actually take place \cite[e.g.][]{hansen:20}. Therefore, stellar observations can be used to test directly $r$-process predictions from different stellar sites \citep{farouqi:21} -- for instance, those resulting from neutron-star mergers \citep{ji:19} or from magnetorotational hypernovae \cite[][]{yong:21}. This includes studying the role of progenitors of satellite galaxies in the early galactic chemical enrichment \cite[e.g.][]{gudin:21}. As the Galaxy evolves, new stars form enriched by the products of previous stellar generations. Compared to the early Galaxy, different stellar sources need to be taken into account for the production of neutron-capture elements during the chemical evolution of the Galaxy \citep[GCE, e.g.,][]{prantzos:18, kobayashi:20}. At present in the Milky Way there are two main processes responsible for the production of heavy elements. In addition to the $r$-process, the slow neutron capture process \citep[s-process, e.g.,][]{kaeppeler:11} is responsible for about half of the abundances beyond iron in the solar system. The $s$-process elements are mainly produced in massive stars \cite[e.g.][]{the:07, pignatari:10, frischknecht:16, limongi:18} and in Asymptotic Giant Branch (AGB) stars \cite[e.g.][]{gallino:98, busso:99, bisterzo:14, cristallo:15, karakas:16, battino:19}. 
In order to take into account the $r$-process contribution in GCE calculations, these yields are often derived from the solar residual method: the $r$-process abundance pattern is obtained from the solar composition after removing the $s$-process contribution, and it is then assumed to be the same at all metallicities \citep[e.g.,][]{travaglio:04, prantzos:18}. Alternatively, a large range of theoretical $r$-process yields may be adopted. GCE models and simulations are crucial tools to better understand the evolution of $r$-process elements in galaxies (e.g., \citealt{wehmeyer:15,naiman:18,vandeVoort:20}). In particular, the [Eu/Fe] vs [Fe/H] trend in the Galactic disc has been targeted several times to probe the enrichment timescales and contribution of neutron star mergers and rare classes of core-collapse supernovae. Studies have suggested that neutron star mergers alone cannot reproduce the decreasing trend of Eu when assuming a merger delay-time probability distribution (DTD) in the form of $t^{-1}$ (e.g., \citealt{cote:17,cote:19,hotokezaka:18,haynes:19,simonetti:19}). This issue, however, can be lifted by invoking metallicity-dependent DTDs (e.g., \citealt{simonetti:19}), imposing shorter delay times for mergers relative to Type~Ia supernovae (e.g., \citealt{matteucci:14,wehmeyer:15,cote:17,cavallo:21,wanajo:21}), or adopting different treatments for how $r$-process elements are mixed and distributed within the Galaxy (e.g., \citealt{schonrich:19,banerjee:20,beniamini:20}). Another solution to recover the [Eu/Fe] trend is to invoke additional sources of Eu alongside neutron star mergers, such as rare supernovae originating from massive stars (e.g., \citealt{cote:19,siegel:19,kobayashi:20,cavallo:21,greggio:21,farouqi:21}). Europium is the most extensively studied chemical element produced via the $r$-process in the Galactic disc (the solar $s$-process contribution to Eu is only about 6\%; e.g., \citealt{bisterzo:14}). 
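The residual method just described is simple arithmetic: for each element, the solar r-process fraction is one minus its s-process fraction. A minimal sketch using the s-fractions quoted in this section (the dictionary and variable names are ours):

```python
# Solar r-process residuals from the s-process fractions quoted in the text
# (Bisterzo et al. 2014): a minimal sketch of the solar residual method.
s_fraction = {"Eu": 0.06, "Gd": 0.154, "Dy": 0.150}

# The r-process residual is whatever the s-process does not account for.
r_fraction = {el: 1.0 - s for el, s in s_fraction.items()}

for el, r in r_fraction.items():
    print(f"{el}: r-process residual = {100 * r:.1f}% of the solar abundance")
```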
The europium abundance in the Galactic disc has been investigated by many authors \cite[e.g.][]{mashonkina:01, reddy:03, bensby:05, mishenina:13}. On the other hand, only a limited number of stars have several $r$-process elements measured together in the same analysis. For instance, \cite{guiglion:18} examined gadolinium and dysprosium together with europium and barium within the framework of the AMBRE Project, based on high-resolution FEROS, HARPS and UVES spectra from the ESO archive. The contribution of the $s$-process to the Gd and Dy solar abundances is estimated to be 15.4\% and 15.0\%, respectively \citep[][]{bisterzo:14}, indicating a dominant $r$-process contribution. Gd and Dy abundances in the thin and thick discs were investigated by \cite{guiglion:18}, and in solar twins by \cite{spina:18}. Th abundances and Th/Eu ratios were obtained for thin disc stars to estimate the age of the disc \citep{peloso:05}. The Th abundance has also been measured for samples of solar twin stars \citep{unterborn:15, botelho:19}. As a continuation of our previous research on the enrichment of the Galactic disc with neutron-capture elements \cite[][]{mishenina:13, mishenina:17, mishenina:19a, mishenina:19b}, this paper aims to investigate the abundance distribution of the $r$-process elements Gd, Dy and Th. Our study includes new Gd and Dy measurements for nearly 70\% of the stars in our sample. For more than 90\% of the stars we present new Th values, and a GCE study of the Milky Way disc is performed for the first time taking into account both actinide (Th) and lanthanide (Eu, Gd and Dy) observations. The paper is organized as follows. The observations and the determination of the main stellar parameters are described in \S \ref{sec: stellar param}. The abundance determinations and the error analysis are presented in \S \ref{sec: abundance determination}. 
The behaviour of the elemental abundances in the framework of nucleosynthesis theory and the chemical evolution of the Galaxy is analysed in \S \ref{sec: result, gce}. Conclusions are drawn in \S \ref{sec: conclusions}. \section{Observations and atmospheric parameters} \label{sec: stellar param} The present study was carried out on an initial list of 276 stars and is based on the spectra and atmospheric parameters of \cite{mishenina:13}. The 1.93-m telescope at the Observatoire de Haute-Provence (OHP, France) and the echelle-type spectrograph ELODIE \cite[][]{baranne:96} were employed to obtain spectra at a resolving power of R = 42,000 in the wavelength range from 4400 to 6800 \AA~and with a signal-to-noise ratio (S/N) better than 100 at 5500 \AA. We also used additional spectra from the OHP spectroscopic archive \cite[][]{moultaka:04} collected with the SOPHIE spectrograph \cite[][]{perruchot:08} and covering a similar wavelength range at a spectral resolution of R = 75,000. The standard pre-processing of the images, which yields the spectroscopic data in digital form, was carried out on-line during the observations \cite[][]{katz:98}. The subsequent processing of the spectra was performed using the DECH30 software package developed by G.A. Galazutdinov (see http://www.gazinur.com/DECH-software.html). The DECH software provides all stages of CCD echelle spectral image processing, including bias/background subtraction, flat-field correction, extraction of one-dimensional spectra from two-dimensional images, diffuse light correction, spectrum addition and removal of cosmic-ray features. The programme enables the user to locate a fiducial continuum, measure equivalent widths (EWs) of lines by several methods, and determine line positions and shifts, among other tasks. 
In this case, we worked with spectra in the FITS format, using options such as normalisation of individual spectra to the local continuum, identification of spectral lines, construction of the dispersion curve, measurement of line depths and equivalent widths (EWs), elimination of cosmic-ray effects, and selection of individual parts of the spectrum. The measured line depths were subsequently used to determine the effective temperature (\Teff), while the EWs of the neutral and ionised iron lines were measured by Gaussian profile fitting and employed to derive the atmospheric parameters (the surface gravity, \logg, and the microturbulent velocity, \Vt). The stellar atmospheric parameters for the stars examined in this work were determined by us in previous studies. The procedures employed to derive the effective temperatures \Teff, surface gravities \logg~ and microturbulent velocities \Vt~ for the target stars were described in detail in \cite[][]{mishenina:01} and \cite[][]{mishenina:04, mishenina:08}. In particular, the effective temperatures \Teff~ were determined by calibrating the line-depth ratios for pairs of spectral lines with different lower-level excitation potentials, applying the technique introduced and developed by \cite{kovtyukh:03}. For most of the metal-poor stars in our sample, \Teff~ was determined by fitting the far wings of the H$_\alpha$ line \citep{mishenina:01}. In \cite[][]{mishenina:04}, we showed that the temperature scales adopted in \cite[][]{mishenina:01, kovtyukh:03} are consistent. The surface gravities, \logg, were computed from the ionisation equilibrium, which requires that similar iron abundances be obtained from the neutral iron (Fe I) and singly ionised iron (Fe II) lines. In our case, the difference between these values does not exceed 0.03 dex. 
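The ionisation-equilibrium condition used for \logg~ can be viewed as a root-finding problem: adjust the gravity until the Fe I and Fe II abundances agree. The sketch below illustrates this with hypothetical linear stand-ins for the abundance curves; a real application would call a spectral-synthesis code instead.

```python
# Sketch of deriving log g from the Fe I / Fe II ionisation equilibrium.
# In practice A(Fe I) and A(Fe II) come from a spectral-synthesis code; here
# they are hypothetical linear stand-ins that mimic the qualitative behaviour
# (Fe II lines are far more gravity-sensitive than Fe I lines).

def a_fe1(logg):  # Fe I abundance, nearly gravity-independent
    return 7.45 + 0.01 * (logg - 4.4)

def a_fe2(logg):  # Fe II abundance, strongly gravity-dependent
    return 7.45 + 0.30 * (logg - 4.4)

def solve_logg(lo=3.0, hi=5.0, tol=1e-4):
    """Bisect on A(FeI) - A(FeII) until the two abundances agree."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if a_fe1(mid) - a_fe2(mid) > 0:  # Fe I higher -> gravity too low
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

logg = solve_logg()
print(f"log g = {logg:.2f}, |A(FeI) - A(FeII)| = {abs(a_fe1(logg) - a_fe2(logg)):.3f}")
```

With these toy curves the residual abundance difference at the solution is well below the 0.03 dex quoted in the text.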
The microturbulent velocity, \Vt, was derived by removing the correlation between the iron abundances from individual Fe I lines and their equivalent widths (EWs). We adopted the iron abundance determined from the Fe I lines as the metallicity, [Fe/H]. As is known \cite[e.g][]{thevenin:99, shchukina:01, mashonkina:11, bergemann:12}, the lines of neutral iron are affected by deviations from LTE in the solar and stellar spectra, and hence these deviations also affect the iron abundances determined from those lines. However, within the temperature and metallicity ranges of our target stars, the NLTE corrections are less than 0.1 dex \cite[e.g][]{mashonkina:11}. Thus, both for the adopted [Fe/H] (the iron abundance from Fe I lines) and for \logg~ derived via the iron ionisation equilibrium, this correction does not exceed the errors in the determination of these parameters. The list of parameter values obtained, as well as their comparison with the results of other authors, was given in \cite[][]{mishenina:04, mishenina:08, mishenina:13}. The estimated accuracy of our parameter determinations is as follows: $\Delta$(\Teff)= $\pm$100 K, $\Delta$(\logg)= $\pm$0.2 dex, $\Delta$(\Vt) = $\pm$0.2 km s$^{-1}$ and $\Delta$([Fe/H]) = $\pm$0.1 dex. In this paper, we have compared our parameters with those obtained recently by \cite{guiglion:18} and \cite{spina:18}, wherein gadolinium and dysprosium abundances were determined, as well as with those of \cite{peloso:05}, who reported europium and thorium abundances (five stars in common with our sample), and \cite{unterborn:15}, with one star in common with our sample (see Table \ref{ncap}). For the comparison, we chose the data reported by \cite[][]{guiglion:18} for the stars in common with the highest S/N among those available in on-line catalogues. 
We obtained average differences and errors for 36 stars in common by subtracting our data from those of \cite{guiglion:18} (see Table \ref{ncap}). We then restricted the comparison to stars with \Teff~ from 5100 K to 6300 K and surface gravities within 3.5 $<$ \logg $<$ 5.0 (26 stars in common), as these ranges of parameter values had been chosen by the authors as criteria for the selection of stars for further analysis; the resulting differences are slightly smaller (Table \ref{ncap}). In general, we see a good agreement between our findings and those from the literature, as well as a good consistency with the accuracy of the parameter determinations estimated earlier. As can be seen from Table \ref{ncap}, the mean difference $\Delta$\Teff~ between our effective temperatures and those obtained by other authors does not exceed 25 K, and the rms deviations are within 100 K. The mean difference in gravity $\Delta$\logg~ does not exceed 0.10, while the rms deviation (0.22) only slightly exceeds the accuracy adopted earlier (0.2). In terms of metallicity, the mean difference does not exceed $\Delta$([Fe/H])= 0.05 $\pm$0.07 dex. \begin{table*} \begin{center} \caption[]{Comparison of parameters and Eu, Gd, Dy, and Th abundance determinations taken from the literature with our results for the $n$ stars in common with our stellar sample. Our data for Eu abundances are from \cite[][]{mishenina:13}. 
} \label{ncap} \begin{tabular}{ccccccccc} \hline Reference & $\Delta$(\Teff) & $\Delta$(\logg) & $\Delta$([Fe/H]) & $\Delta$([Eu/Fe]) & $\Delta$([Gd/Fe]) &$\Delta$([Dy/Fe]) &$\Delta$([Th/Fe]) & n \\ \hline Guiglion et al.&-14.7& 0.09 & 0.01 & 0.09 &0.09 &0.14 & -- &36\\ 2018 &$\pm$99.4 & $\pm$0.22 & $\pm$0.07 & $\pm$0.16&$\pm$0.18 &$\pm$0.16 &--& \\ Guiglion et al.& 12.1 & 0.06 & 0.0 & 0.09 &0.03 &0.10 & -- & 26 \\ 2018&$\pm$93 & $\pm$0.21 & $\pm$0.07 & $\pm$0.15 & $\pm$0.13 &$\pm$0.13& --& \\ Spina et al.& 14.8 & 0.06 & 0.04 & -0.03 & -0.03 &-0.04 &-- & 6 (4)\\ 2018 &$\pm$25& $\pm$0.1 & $\pm$0.05 & $\pm$0.04&$\pm$0.04&$\pm$0.05& -- & \\ del Peloso et al. & -16.4 & 0.04 & 0.05 & 0.05 &-- &-- & -0.23 & 5 (4) \\ 2005 &$\pm$55 & $\pm$0.08 & $\pm$0.08 & $\pm$0.08 &-- &-- &$\pm$0.14 & \\ Morell et al. & -3.8 & 0.02 & -0.02 & -- & -- &-- & -0.11 & 5 (5) \\ 1992 &$\pm$85 & $\pm$0.20 & $\pm$0.04 & -- & -- &-- &$\pm$0.12 & \\ Unterborn et al. & 23 & 0.05 & 0.04 & -- & -- &-- &0.48 & 1(1) \\ 2015 &-- & -- & -- & -- & -- & --& --& \\ Botelho et al. & -15.6 & -0.05 & -0.04 & 0.02 &-- &-- & -0.09 & 5(3) \\ 2019 &$\pm$28 & $\pm$0.11 & $\pm$0.05 & $\pm$0.04 &-- &-- & $\pm$0.15 & \\ \hline \end{tabular} \end{center} \end{table*} We adopt the kinematic classification of the stars into the thin disc, the thick disc and the Hercules stream, as previously described in \cite[][]{mishenina:13}. To determine the components of the spatial velocity (U, V, W) and the membership of stars in the different Galactic populations, the Hipparcos catalogue was used. Since the stars in our sample are bright and tend to have Gaia astrometric errors comparable to those of the Hipparcos observations, we have not updated our classification with respect to the latest astrometric data from the Gaia Data Release 2 \citep{GDR2:18}. Some stars are even too bright to be measured by Gaia. 
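The catalogue comparison statistics in Table \ref{ncap} (a mean offset plus the rms scatter of the star-by-star differences) reduce to the following computation; the $\Delta$\Teff~ values below are illustrative numbers, not the actual data:

```python
# Sketch of the catalogue comparison in the table above: mean offset and rms
# scatter of parameter differences (literature minus ours) for stars in
# common.  The Teff differences below are illustrative values, not the data.
import math

def mean_and_rms(deltas):
    n = len(deltas)
    mean = sum(deltas) / n
    # sample standard deviation about the mean
    rms = math.sqrt(sum((d - mean) ** 2 for d in deltas) / (n - 1))
    return mean, rms

dteff = [-120, 35, -60, 80, -10, -95, 55, 20, -40, 15]  # illustrative, in K
mean, rms = mean_and_rms(dteff)
print(f"Delta Teff = {mean:.1f} +/- {rms:.1f} K")
```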
Our previous sample of 276 stars consists of 21 stars belonging to the thick disc, 212 in the thin disc, 16 stars related to the Hercules stream and 27 unclassified stars. \section{Abundance determination} \label{sec: abundance determination} The abundances of Dy, Gd and Th were derived in the Local Thermodynamical Equilibrium (LTE) approximation with a new, modified version of the STARSP LTE spectral synthesis code \cite[][]{tsymbal:96}, using the model atmospheres of \cite{castelli:04}. For each star, the model was chosen by standard interpolation in \Teff~ and \logg. The metallicity [Fe/H] and the microturbulent velocity \Vt~ were not interpolated: a model within $\pm$0.2 dex of the stellar [Fe/H] was selected, and the microturbulent velocity \Vt~ determined for each star was used. For the Gd II lines 4037.89, 4085.56, 4483.33 \AA, the Dy II lines 4073.12, 4077.97, 4103.31, and 4449.70 \AA, and the Th II line 4019.12 \AA, the oscillator strengths log\,gf were adopted from the last version (2016) of the VALD database \citep{kupka:99}. In contrast to the Gd and Dy lines considered, the 4019.129 \AA~ Th line is a complex blend, with contributions to its intensity from Th and Co at almost the same wavelength \cite[e.g.][]{peloso:05, mashonkina:14b, botelho:19}. When using a list of VALD lines, which includes atomic and molecular lines, to describe the Th line region in the solar spectrum, we found a noticeable discrepancy between the observed and calculated spectra around the Fe lines; to eliminate this, we corrected the Fe I oscillator strengths log\,gf with an appropriate fit. The oscillator strengths adopted by us for the Th and Co lines follow the VALD list, namely log\,gf = -0.228 for the Th II line \citep[][]{nilsson:02} and log\,gf = -2.270 for the Co I line \citep[][]{lawler:90}; in our case, they include a detailed treatment of the hyperfine structure. 
A list of the main atomic and molecular lines employed in the thorium 4019 \AA~line region is given in Table \ref{list_thline}. For the Sun and two stars with stellar parameters (\Teff, \logg, [Fe/H]) HD 22879 (5825; 4.42; -0.91) and HD 75732 (5373; 4.30; 0.25), the predominant lines in the region are shown in Fig. \ref{th_sun_prof}. Examples of fits to several Gd, Dy and Th lines in the stellar spectra are presented in Fig. \ref{lines_prof}. \begin{table} \caption{List of lines in the thorium 4019 \AA~line region } \label{list_thline} \begin{tabular}{lllcc} \hline Species & $\lambda$, \AA & $E_{\rm low}$, eV & log gf & source \\ \hline Ce II & 4018.820 & 1.55 & -0.959 & VALD \\ Nd II & 4018.820 & 0.06 & -0.890 & VALD \\ Fe I & 4018.887 & 4.26 & -2.781 & solar fit \\ Ce II & 4018.900 & 1.01 & -1.219 & VALD \\ Ce II & 4018.927 & 0.63 & -1.679 & VALD \\ V I & 4018.929 & 2.58 & -0.556 & VALD \\ Pr II & 4018.963 & 0.20 & -1.029 & VALD \\ 13CH & 4018.965 & 0.46 & -3.253 & VALD \\ Mn I & 4018.987 & 4.35 & -1.883 & VALD \\ Fe I & 4019.002 & 4.32 & -2.700 & solar fit \\ Fe I & 4019.042 & 2.61 & -3.100 & solar fit \\ V II & 4019.044 & 3.75 & -1.231 & VALD \\ Ce II & 4019.057 & 1.01 & -0.529 & VALD \\ Mn I & 4019.066 & 4.67 & -0.522 & VALD \\ Ni I & 4019.067 & 1.94 & -3.399 & VALD \\ 13CH & 4019.074 & 0.46 & -3.245 & VALD \\ Co I & 4019.110 & 2.28 & -3.287 & VALD \\ Co I & 4019.118 & 2.28 & -3.173 & VALD \\ Co I & 4019.120 & 2.28 & -2.876 & VALD \\ Co I & 4019.125 & 2.28 & -3.492 & VALD \\ Co I & 4019.126 & 2.28 & -3.298 & VALD \\ Th II & 4019.129 & 0.00 & -0.227 & VALD \\ Co I & 4019.129 & 2.87 & -5.163 & VALD \\ V I & 4019.134 & 1.80 & -2.149 & VALD \\ Co I & 4019.135 & 2.28 & -3.287 & VALD \\ Co I & 4019.135 & 2.28 & -3.474 & VALD \\ Co I & 4019.138 & 2.28 & -3.173 & VALD \\ Co I & 4019.140 & 2.28 & -3.298 & VALD \\ Co I & 4019.143 & 2.87 & -5.142 & VALD \\ Co I & 4019.210 & 2.87 & -4.821 & VALD \\ \hline \end{tabular} \end{table} In order to calculate the synthetic spectrum and the Th 
abundance, we used the relevant abundances of chemical elements obtained by \cite{mishenina:13}, including nickel. In particular, to take into account the blend due to cobalt, as a first approximation we estimated its abundance from the scaled solar cobalt value. We then refined it from the profile fit of the cobalt line at a wavelength of 4020.89 \AA, which was calculated taking into account the hyperfine structure (HFS). We finally derived the Th abundance by taking into account the contribution of cobalt to the Th-Co blend. Therefore, our results for thorium should not be overestimated due to the local contribution from other elements. Examples of fits to the Co line in the stellar spectra are shown in Fig. \ref{th_prof}. The abundance of europium was determined by us earlier, and for the analysis in this study we use the values obtained in \cite{mishenina:13}. In that study, the Eu abundance was derived from the Eu II line at 6645 \AA, taking into account the hyperfine structure \cite[][]{ivans:06}. The solar abundances of Dy, Gd and Th were determined using the STARSP code \cite[][]{tsymbal:96} from the lines in the spectra of the Moon and asteroids obtained with the ELODIE spectrograph, with the line parameters being the same as for the stellar spectra: log A(Gd) = 1.08$\pm$0.05 and log A(Dy) = 1.10$\pm$0.05, which coincide with \cite{asplund:09} (log A(Gd)$_\odot$ = 1.07$\pm$0.04, log A(Dy)$_\odot$ = 1.10$\pm$0.04). Our solar log A(Th) = 0.08$\pm$0.08 is consistent with the value log A(Th)$_\odot$ = 0.08 reported for the Sun by \cite[][]{mashonkina:14b}; the value of \cite{asplund:09} is log A(Th)$_\odot$ = 0.02$\pm$0.10. The stellar parameters and the derived Gd, Dy and Th abundances, with the statistical uncertainties associated with the line-to-line abundance variation (standard deviation, or rms), are given in Table \ref{ncapt}. 
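The bracket notation used throughout relates the absolute abundances log A to the solar values: [X/H] = log A$_\star$ $-$ log A$_\odot$, and [X/Fe] = [X/H] $-$ [Fe/H]. A minimal sketch using the solar log A(Th) = 0.08 adopted above (the stellar value below is an illustrative number, not a measurement):

```python
# [X/Fe] from absolute abundances: [X/H] = logA_star - logA_sun, then
# [X/Fe] = [X/H] - [Fe/H].  Solar log A(Th) = 0.08 as adopted in the text;
# the stellar log A(Th) below is an illustrative value.
def x_over_fe(log_a_star, log_a_sun, fe_h):
    return (log_a_star - log_a_sun) - fe_h

th_fe = x_over_fe(log_a_star=-0.30, log_a_sun=0.08, fe_h=-0.50)
print(f"[Th/Fe] = {th_fe:+.2f}")
```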
\subsection{Errors in abundance determinations} The total errors in the Gd, Dy and Th abundance determinations mainly result from the uncertainties in the adopted parameter values and from fitting the synthetic spectra to the observed ones (0.05 dex for Gd and Dy, and 0.08 dex for Th). To determine the systematic errors in the elemental abundances resulting from uncertainties in the atmospheric parameters, we derived the elemental abundances of four stars with different sets of stellar parameters (\Teff~ in K, \logg, \Vt~ in km s$^{-1}$, [Fe/H]): HD154345 (5503,4.30,1.3,-0.21), HD82106 (4827,4.10,1.1,-0.11), HD75732 (5373,4.30,1.1,0.25), and HD201891 (5850,4.40,1.0,-0.96), using several models with modified parameters ($\Delta$\Teff = +100~K, $\Delta$\logg = +0.2, $\Delta$\Vt = +0.1). The impact of the parameter uncertainties on the accuracy of the elemental abundance determinations, as exemplified by stars with different \Teff~ and metallicities, is presented in Table \ref{errors}. \begin{table} \caption{ Abundance errors due to atmospheric parameter uncertainties, for four stars with different sets of stellar parameters (\Teff, \logg, \Vt, [Fe/H]): HD154345 (5503,4.30,1.3,-0.21), HD82106 (4827,4.10,1.1,-0.11), HD75732 (5373,4.30,1.1,0.25), and HD201891 (5850,4.40,1.0,-0.96). 
} \label{errors} \begin{tabular}{lllccc} \hline & & HD154345 && & \\ AN & El & $\Delta$ \Teff+ & $\Delta$ \logg+ & $\Delta$ \Vt+ & tot+ \\ \hline 64 &Gd II &0.07 &0.10 &0.01 &0.12 \\ 68 &Dy II &0.09 &0.08 &0.02 &0.13 \\ 90 &Th II&0.10& 0.11 & 0.0 &0.16 \\ & & HD82106 && & \\ 64 &Gd II &0.07 &0.06 &0.00 &0.10 \\ 68 &Dy II &0.10 &0.12 &0.01 &0.15 \\ 90 &Th II&0.05& 0.06 & 0.0 &0.11 \\ & & HD75732 && & \\ 64 &Gd II &0.06 &0.10 &0.01 &0.12 \\ 68 &Dy II &0.10 &0.09 &0.01 &0.14 \\ 90 &Th II&0.10& 0.06 & 0.0 &0.14 \\ & & HD201891 && & \\ 64 &Gd II &0.02 &0.04 &0.02 &0.08 \\ 68 &Dy II &0.05 &0.03 &0.01 &0.07 \\ 90 &Th II&0.12& 0.06 & 0.0 &0.15 \\ \hline \end{tabular} \end{table} As can be seen from Table \ref{errors}, the uncertainties in \Teff~ and \logg~ contribute most to the total error. The total errors due to parameter uncertainties and the measured spectra vary from 0.07 dex to 0.15 dex for Gd and Dy, and from 0.11 to 0.16 dex for Th. To verify our selection of stellar parameters, we present the correlations between the Gd, Dy and Th abundances and the atmospheric parameters \Teff~ and \logg~ (see Figs. \ref{el_teff}, \ref{el_logg}). As can be seen in Figs. \ref{el_teff} and \ref{el_logg}, there is no correlation between the elemental abundances and the chosen parameters. A comparison between the abundance determinations obtained in this study and the data reported by other authors is given in Table \ref{ncap} (see \S \ref{sec: stellar param}). Fig. \ref{ba_eu_gd_dy} also shows our [Eu/Fe], [Gd/Fe], and [Dy/Fe] data and those from \cite{guiglion:18} and \cite{spina:18} as a function of [Fe/H]. For these figures we selected the data of \cite{guiglion:18} with \Teff~ from 5100 K to 6300 K and surface gravities within 3.5 $<$ \logg $<$ 5.0, the same parameter ranges chosen by those authors for their analysis. Fig. \ref{comp_th} presents our [Th/Fe] determinations and those by \cite{peloso:05}, \cite{morell:92}, \cite{unterborn:15} and \cite{botelho:19}. 
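Assuming the parameter-induced shifts are independent, the totals in Table \ref{errors} follow from adding the individual contributions in quadrature; a sketch for the Gd II row of HD154345:

```python
# Quadrature combination of the individual abundance shifts caused by the
# atmospheric-parameter offsets (assumed independent), for the Gd II row of
# HD154345 in the error table.
import math

def total_error(*terms):
    return math.sqrt(sum(t ** 2 for t in terms))

d_teff, d_logg, d_vt = 0.07, 0.10, 0.01  # dex, from the table
tot = total_error(d_teff, d_logg, d_vt)
print(f"total = {tot:.2f} dex")  # ~0.12 dex, matching the tabulated value
```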
Table \ref{ncap} shows the mean differences and rms errors for the thorium abundances obtained by us and by other authors for the stars in common. We have 5 stars in common with \cite{morell:92} for which the parameters and the thorium abundance are determined: $\Delta$([Th/Fe]) (ours $-$ Morell) = 0.11 $\pm$0.12 dex; the shift is within the error limits. We also have 5 stars in common with \cite{peloso:05} for which parameters have been determined, and 4 of them have thorium abundance determinations. The mean difference is $\Delta$([Th/Fe]) (ours $-$ Peloso) = 0.23 $\pm$0.14 dex, which is larger than the resulting error. In addition, one star, HD 76932, is also shared with \cite{morell:92} and \cite{peloso:05}. Our thorium abundance for HD 76932 is [Th/Fe] = 0.57 dex, while \cite{morell:92} and \cite{peloso:05} give [Th/Fe] = 0.35 dex and [Th/Fe] = 0.30 dex, respectively. There is one star in common, namely HD 146233, with \cite{unterborn:15}, which was also studied by \cite{botelho:19}. For HD 146233 the Th measurements vary significantly between different works: we obtain [Th/Fe] = -0.09 dex, \cite{unterborn:15} gives [Th/Fe] = 0.28 dex, and \cite{botelho:19} gives [Th/Fe] = 0.15 dex. We have 5 stars in common with \cite{botelho:19}, but among them only 3 stars have Th abundance measurements. The mean difference and rms between our data and those of \cite{botelho:19} is $\Delta$([Th/Fe]) (ours $-$ Botelho) = -0.09 $\pm$0.15. This shift has the opposite sign compared with those obtained for \cite{morell:92} and \cite{peloso:05}, but it is within the limits of the errors. From the comparison with \cite{morell:92}, \cite{peloso:05}, and \cite{botelho:19}, the rms values of the mean differences are 0.12, 0.14 and 0.15, respectively. At the same time, the \cite{peloso:05} data show a systematic shift relative to our data (0.23), since the mean difference is greater than the scatter. 
The maximum difference in thorium abundance between our values and those of \cite{peloso:05} reaches 0.4 dex, for the star HD 22879. However, this discrepancy is also partly due to the 0.15 dex difference between the metallicity obtained by us ([Fe/H] = -0.91) and that of \cite[][]{peloso:05} ([Fe/H] = -0.76). Fig. \ref{th_prof} (top panel) presents the synthetic spectra for the star HD 22879 calculated in this work (blue solid line) and with the data (stellar parameters, chemical abundances and line list) of \cite[][]{peloso:05} (red solid line). The black solid line shows the calculation with our data (parameters, abundances) but adapted to Peloso's line list. Blue asterisks show the calculation based on Peloso's data (parameters, abundances) with our line list in the thorium region. Circles show the corresponding observed spectrum. The bottom panel shows the fit of our synthetic spectrum to the observed spectrum in a wider spectral region around the thorium line. We observe a difference between the synthetic calculations with our data and those of \cite[][]{peloso:05} in the region of the iron, nickel and manganese lines, at the maximum intensity of this spectral feature, due to the difference in the oscillator strengths of the lines, the assumed elemental abundances, and the absence of the iron line in the \cite[][]{peloso:05} list. In the part of the profile with the thorium-cobalt lines, we see that using our data with different line lists gives similar trends, while different stellar parameters and abundances make a significant contribution to the result; in this case, an increase in the thorium abundance compared to that obtained by \cite[][]{peloso:05} is required. In general, the differences between the data obtained in different works are mostly due to the different line lists used in the thorium line region and to the different parameters and elemental abundances adopted in different works. 
\section{Results and comparison with Galactic chemical evolution models} \label{sec: result, gce} The Milky Way disc stars in the metallicity range covered by this study formed from interstellar matter enriched by several generations of stars. Therefore, these observations cannot be directly compared with theoretical stellar models, and Galactic chemical evolution (GCE) simulations must be used to study the evolutionary history that built up the chemical inventory observed today \citep[e.g.,][and references therein]{tinsley80,gibson03,kobayashi:20,matteucci:21}. In this work we focus on the r-process elements Eu, Gd, Dy and Th. Although the main stellar source of the r-process in the Galaxy is still a matter of debate, for decades the r-process was typically considered to produce the same abundances independently of the metallicity of the stellar progenitors. The close similarity between the solar residual \citep[where the residual is derived by subtracting the s-process contribution from the solar abundances of heavy elements beyond iron, e.g., ][]{arlandini:99,bisterzo:14,prantzos20} and the abundance patterns measured in r-process-rich metal-poor stars drove and supported such a scenario. Indeed, while there is a significant abundance scatter among different r-process-enriched stars for the lighter elements in the mass region Sr-Ru, the abundances appear to align better with the solar residual for Ba and heavier elements up to Pb \citep[e.g.,][]{sneden:08, cowan:21}. Within this heavier mass region, GCE simulations could carry the same r-process signature across the evolution of the Galaxy, where the main remaining uncertainties are the source frequency and the quantitative r-process yields associated with each stellar source (e.g., \citealt{matteucci:14,wehmeyer:15,coteLIGO,hotokezaka:18}). 
A larger observational scatter between different r-process-rich stars has been measured for the actinide elements Th and U, which are highly sensitive to variations of the conditions in theoretical r-process calculations \citep[][]{eichler:19, cowan:21}. However, a significant variation is also seen beyond Ba once a larger sample of metal-poor stars is considered \citep[][]{roederer:10}, suggesting that r-process production does not yield a unique and robust pattern, and that a degree of variation should be expected. The observation of actinide-boost stars has further questioned those classical paradigms \citep[e.g.,][]{roederer:10, holmbeck:18,farouqi:21}. At least for metal-poor halo stars in the Galaxy, it is still a matter of debate whether only one r-process source would be able to explain the early large star-to-star variations observed for Eu and other heavy r-process element abundances with respect to iron or $\alpha$-elements. As already pointed out by \citet{Qian.Wasserburg:2007} and followed up by \citet{Hansen.Montes.Arcones:2014}, it is actually more plausible that at least two different types of r-process sources were active, contributing with different frequencies and timescales. Here we consider the scenario in which two processes contribute to the lanthanide and actinide r-process elements. On the other hand, observations of old metal-poor stars would not exclude that today a single source, with possible abundance variations, dominates the r-process contribution to GCE \citep[e.g.,][and references therein]{wehmeyer:15,cote:19,farouqi:21}. If we consider the r-process lanthanides discussed in this work, i.e., Eu, Gd and Dy, we have seen in Figure~\ref{ba_eu_gd_dy} that their abundance trends with respect to Fe are similar. In particular, we cannot identify whether the observed abundance scatter is due to the GCE contribution from multiple r-process sources and/or some different production in this region, or whether such a dispersion can simply be due to observational uncertainties. 
On the other hand, it is interesting to study the evolution of Th (an actinide) with respect to Eu. In this context, we have performed GCE models to compare with our new observations. The simulations were made using the Python code \texttt{OMEGA+} (\citealt{coteomega,coteomegap}), which is part of the open-source JINAPyCEE package\footnote{https://github.com/becot85/JINAPyCEE}. It consists of a two-zone model: a one-zone GCE model surrounded by a large gas reservoir representing the circumgalactic medium. These two zones interact via galactic inflows and outflows, where inflows transfer gas from the circumgalactic medium to the central GCE model (the galaxy), and outflows transfer gas from the galaxy to the circumgalactic medium. In this work, we use the yields of \cite{nomoto13}, \cite{cristallo:15}, and \cite{iwamoto99} for massive stars, low- and intermediate-mass stars, and Type~Ia supernovae, respectively, and we use the same galaxy evolution parameters as the best model found in \cite{cote19radio}, which reproduced various observational constraints such as the current star formation rate, gas inflow rate, supernova rates, total stellar mass, and total gas mass. Recent developments have made it possible to take radioactivity into account throughout the GCE calculations (\citealt{cote19radio,2022ApJ...924...10T}), using the numerical solver presented in \cite{yague22} to properly follow radioactive decay on timescales shorter than the lifetime of the Milky Way. This numerical solver was modified in OMEGA+ to include the terms for material moving between the two simulated zones along with decay in an unsplit fashion. Our results are shown in Figure~\ref{fig:gce_v2}; our goal is to address, with a simple approach, the relative timescales on which Eu (as a representative of the r-process production of lanthanides, including Dy and Gd) and Th (as a representative of the actinides) are produced within the Galactic disc. 
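For a single isotope, the decay bookkeeping described above reduces to multiplying the surviving mass by exp($-\lambda\Delta t$) each timestep, with $\lambda = \ln 2 / t_{1/2}$. The toy sketch below applies this to $^{232}$Th; it is an illustration only, not the actual OMEGA+ solver (which also treats the inter-zone transfer terms in an unsplit fashion):

```python
# Toy sketch of radioactive-decay bookkeeping for 232Th in a GCE code: each
# timestep, the surviving mass fraction is exp(-lambda * dt) with
# lambda = ln(2) / t_half.  Single-isotope illustration only.
import math

T_HALF_TH232 = 14.1e9  # yr, half-life as quoted in the text
LAM = math.log(2.0) / T_HALF_TH232

def decay(mass, dt):
    """Mass surviving after a time interval dt (in yr)."""
    return mass * math.exp(-LAM * dt)

# Fraction of Th made at early times that survives ~13 Gyr of evolution:
surviving = decay(1.0, 13.0e9)
print(f"surviving fraction after 13 Gyr: {surviving:.2f}")
```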
Given this goal, all Eu and Th yields in our models have been included artificially, in order to freely explore which scenarios could give rise to the GCE trends traced by our data. The black solid lines assume that all Eu and Th come from one source, with a short delay time typical of CCSN sources. Here the Eu and Th yields are the same in all events, regardless of metallicity. In this case, while the prediction is acceptable for [Eu/Fe], the predicted trend for [Th/Fe] does not decrease as steeply as the observational data, a feature that can also be seen in [Th/Eu]. The relatively flat [Th/Eu] trend shows that the decay of Th (the $^{232}$Th half-life is 14.1 billion years) plays an insignificant role in shaping our predictions, meaning that Th decay cannot explain the different rates at which Th and Eu decrease with respect to metallicity. The dashed orange and solid green lines in Figure \ref{fig:gce_v2} explore two different scenarios matching the Th and Eu trends simultaneously. The dashed orange line still assumes one prompt r-process source and a constant yield for Eu, but assumes metallicity-dependent Th yields, where Th is boosted by a factor of 4.5 at low metallicity relative to high metallicity, with a continuous decrease between $Z=0.001$ and 0.02. The solid green line, on the other hand, combines two r-process sources: neutron star mergers with long delay times, and exotic SNe or collapsars with short delays (see also e.g., \citealt{cote:19,haynes:19,siegel:19,molero21, farouqi:21}). In this case, the Th and Eu yields are kept constant as a function of metallicity for both sources. However, the frequency of the short-delay source is assumed to be metallicity dependent, such that its rate is three times higher than that of the long-delay source at $Z<0.001$, and becomes negligible at $Z>0.01$. Such a computational experiment boosts the Th production at low metallicity, as we may expect. 
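The metallicity-dependent yield scenario (dashed orange line) can be sketched as a boost factor of 4.5 below $Z=0.001$ that declines continuously to 1 above $Z=0.02$; the log-linear interpolation in between is our illustrative assumption, not necessarily the exact functional form used in the models:

```python
# Sketch of the metallicity-dependent Th yield boost: factor 4.5 below
# Z = 0.001, factor 1 above Z = 0.02, with a continuous decrease in between.
# The log-linear interpolation is an illustrative assumption.
import math

Z_LO, Z_HI = 1.0e-3, 2.0e-2
BOOST_LO, BOOST_HI = 4.5, 1.0

def th_boost(z):
    if z <= Z_LO:
        return BOOST_LO
    if z >= Z_HI:
        return BOOST_HI
    # interpolate linearly in log10(Z) between the two plateaus
    f = (math.log10(z) - math.log10(Z_LO)) / (math.log10(Z_HI) - math.log10(Z_LO))
    return BOOST_LO + f * (BOOST_HI - BOOST_LO)

for z in (1e-4, 1e-3, 5e-3, 2e-2):
    print(f"Z = {z:.0e}: Th yield boost = {th_boost(z):.2f}")
```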
A type of exotic SN that would fit these assumptions is the magneto-rotational (MHD) supernova. MHD supernovae were originally proposed as a source of a strong r-process \citep[e.g., ][]{winteler:12}. However, it was later shown by \cite{moesta:18} that a strong r-process can only be obtained with (unlikely) very extreme pre-collapse magnetic fields, which are required to eject neutron-rich matter stemming from electron capture during the collapse to high densities. If sufficient rotation exists, even weaker pre-collapse magnetic fields can be amplified by the magneto-rotational instability (MRI), leading to a successful explosion and a highly magnetized neutron star (magnetar); however, during the delay before the MRI takes effect, neutrino absorptions raise the electron fraction and limit the reach of the r-process. On the other hand, black-hole accretion disk outflows can lead to highly r-process enriched matter, with $Y_e$ values in the ejecta just in the range leading to an actinide boost, as observed in many r-II stars \citep[see the detailed discussion in][]{farouqi:21}. As the collapsar behavior leading to black holes requires the core collapse of quite massive progenitors, their frequency is expected to be much higher at low metallicities: low-metallicity progenitors have lower opacities, consequently experience less mass loss during their stellar evolution, and possess significantly higher masses at the point of core collapse, favoring collapse to a black hole. Both scenarios shown in Figure~\ref{fig:gce_v2} lead to an enhancement of Th production in the early Universe, and both would be consistent with the observation of metal-poor actinide-boost stars. From this timescale experiment alone, it is unfortunately not possible to distinguish between one r-process source with metallicity-dependent yields, multiple r-process sources with metallicity-dependent rates, or a combination of the two. 
The variations observed in metal-poor stars between r-process elements, and the existence of the actinide-boost stars, seem to point more toward the second or third scenarios mentioned above \citep[e.g.,][]{roederer:10,farouqi:21}, which, as we have seen, would be consistent with observations at higher metallicities in the Galactic disc. Nevertheless, the fact that [Th/Fe] decreases more steeply than [Eu/Fe] suggests that Th and Eu had different production histories, with Th being more efficiently synthesized at low metallicity than at high metallicity, as compared to Eu. \section{Conclusions} \label{sec: conclusions} In this work, we presented and discussed abundance measurements of Gd, Dy and Th for 276 disc stars. The analysis is based on LTE assumptions. Typical uncertainties are 0.10 dex for Gd (with a range between 0.08 and 0.12 dex), 0.11 dex for Dy (between 0.07 and 0.15 dex) and 0.12 dex for Th (with a range between 0.09 and 0.15 dex). The major sources of these uncertainties are the stellar effective temperatures and surface gravities. The [Dy/Fe] and [Gd/Fe] ratios show the same trend as [Eu/Fe] in the Galactic disc. Given the present observational uncertainties, it is not possible to use the evolution of Dy and Gd with respect to Eu to disentangle contributions from different r-process components. On the other hand, [Th/Fe] shows a steeper decrease than [Eu/Fe] with respect to [Fe/H]. Using GCE models, we have explored possible solutions to explain these trends. We found that the observations may be reproduced either by one r-process source with metallicity-dependent Th yields, or by multiple r-process sources with metallicity-dependent rates for the Th-rich source. We favor the second scenario, since it is also compatible with the observation of both actinide-boost and non-boosted r-process-rich metal-poor stars. \section*{Acknowledgements} This article is based on observations collected at the OHP Observatory, France. 
MP acknowledges significant support to NuGrid from NSF grant PHY-1430152 (JINA Center for the Evolution of the Elements) and STFC (through the University of Hull's Consolidated Grant ST/R000840/1), and access to {\sc viper}, the University of Hull High Performance Computing Facility. MP acknowledges support from the ``Lendület-2014'' Programme of the Hungarian Academy of Sciences (Hungary). FKT acknowledges support from the European Research Council (FP7) under ERC Advanced Grant Agreement 321263 FISH. BC, AY and MP acknowledge support from the ERC Consolidator Grant (Hungary) funding scheme (project RADIOSTAR, G.A. n. 724560) and from the National Science Foundation (USA) under grant No. PHY-1430152 (JINA Center for the Evolution of the Elements). This article is based upon work from the ChETEC COST Action (CA16117), supported by COST (European Cooperation in Science and Technology). We thank the ChETEC-INFRA project, funded by the European Union's Horizon 2020 research and innovation programme (grant agreement No 101008324), and the IReNA network supported by NSF AccelNet. MP also thanks the UK network BRIDGCE. TM is grateful to the Laboratoire d'Astrophysique de l'Université de Bordeaux for their kind hospitality. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliography{proc} \appendix \section{} We present the stellar parameters and the Gd, Dy, and Th abundances with their uncertainties in Table A1. \onecolumn \clearpage \begin{longtable}{lccccccccc} \caption{Stellar parameters and abundances of Gd, Dy, and Th.} \label{ncapt}\\ \hline HD/BD & \Teff, K & \logg & [Fe/H] & \Vt & [Gd/Fe]& stand. deviation & [Dy/Fe] & stand. deviation & [Th/Fe]\\ \hline thin disc& & & & & & & & \\ \hline \endfirsthead \hline HD/BD & \Teff, K & \logg & [Fe/H] & \Vt & [Gd/Fe]& stand. deviation & [Dy/Fe] & stand. 
deviation & [Th/Fe]\\ \hline \endhead \hline 166 & 5514 & 4.6 & 0.16 & 0.6 &-- &-- & -- & -- & 0.06 \\ 1562 & 5828 & 4.0 & -0.32 & 1.2 &0.14 & 0.04 & 0.12 & 0.05 & 0.31 \\ 1835 & 5790 & 4.5 & 0.13 & 1.1 &-- & -- & -- & -- & -- \\ 3651 & 5277 & 4.5 & 0.15 & 0.6 &0.05 & 0.07 & 0.05 & 0.05 & -0.08 \\ 4256 & 5020 & 4.3 & 0.08 & 1.1 &-- &-- & -- &-- & -- \\ 4307 & 5889 & 4.0 & -0.18 & 1.1 &-- &-- & -- &-- & -- \\ 4614 & 5965 & 4.4 & -0.24 & 1.1 &0.18 & 0.03 & 0.19 & 0.05 & 0.03 \\ 5294 & 5779 & 4.1 & -0.17 & 1.3 &0.19 & 0.00 & 0.11 & 0.03 & 0.26 \\ 6660 & 4759 & 4.6 & 0.08 & 1.4 &-- & -- & -- & -- & -- \\ 7590 & 5962 & 4.4 & -0.10 & 1.4 &0.15 & 0.04 & 0.16 & 0.05 & 0.19 \\ 7924 & 5165 & 4.4 & -0.22 & 1.1 &0.19 & 0.00 & 0.22 & 0.04 & 0.16 \\ 8648 & 5790 & 4.2 & 0.12 & 1.1 &-- & -- & 0.14 & 0.06 & -0.05 \\ 9407 & 5666 & 4.45 & 0.05 & 0.8 &0.09 & 0.06 & -0.02 & 0.03 & 0.12 \\ 9826 & 6074 & 4.0 & 0.10 & 1.3 &-0.03& 0.00 & -0.10 & 0.06 & -0.08 \\ 10086 & 5696 & 4.3 & 0.13 & 1.2 &-- & -- & -- &-- & -- \\ 10307 & 5881 & 4.3 & 0.02 & 1.1 &-- & -- & -- &-- & -0.10 \\ 10476 & 5242 & 4.3 & -0.05 & 1.1 &-- & -- & -- & -- & -0.13 \\ 10780 & 5407 & 4.3 & 0.04 & 0.9 &0.16 & 0.06 & -0.02 & 0.03 & -0.12 \\ 11007 & 5980 & 4.0 & -0.20 & 1.1 &0.13 & 0.05 & 0.15 & 0.06 & 0.29 \\ 11373 & 4783 & 4.65 & 0.08 & 1.0 &-- & -- & -- & -- & -- \\ 12846 & 5766 & 4.5 & -0.24 & 1.2 &0.29 & 0.04 & 0.19 & 0.07 & 0.33 \\ 13507 & 5714 & 4.5 & -0.02 & 1.1 &0.09 & -- & 0.12 & 0.05 & -0.01 \\ 14374 & 5449 & 4.3 & -0.09 & 1.1 &0.06 & -- & 0.07 & 0.06 & 0.18 \\ 16160 & 4829 & 4.6 & -0.16 & 1.1 &0.21 & -- & 0.21 & 0.00 & 0.25 \\ 17674 & 5909 & 4.0 & -0.14 & 1.1 &0.06 & 0.06 & 0.06 & 0.12 & 0.28 \\ 17925 & 5225 & 4.3 & -0.04 & 1.1 &-- & -- & -- & -- & -- \\ 18632 & 5104 & 4.4 & 0.06 & 1.4 &-- & -- & -- & -- & -- \\ 18803 & 5665 & 4.55 & 0.14 & 0.8 &0.03 & 0.07 & -0.03 & 0.05 & -0.12 \\ 19019 & 6063 & 4.0 & -0.17 & 1.1 &0.22 & -- & 0.21 & 0.06 & 0.16 \\ 19373 & 5963 & 4.2 & 0.06 & 1.1 &-0.06& 0.02 & -0.02 & 0.05 & 
0.01 \\ 20630 & 5709 & 4.5 & 0.08 & 1.1 &0.07 & 0.04 & 0.00 & 0.03 & -0.06 \\ 22049 & 5084 & 4.4 & -0.15 & 1.1 &-- & -- & -- & -- & -- \\ 22484 & 6037 & 4.1 & -0.03 & 1.1 &0.12 & 0.03 & 0.07 & 0.02 & 0.15 \\ 22556 & 6155 & 4.2 & -0.17 & 1.1 &-- & -- & -- & -- & -- \\ 24053 & 5723 & 4.4 & 0.04 & 1.1 &0.06 & 0.04 & 0.05 & 0.05 & 0.13 \\ 24238 & 4996 & 4.3 & -0.46 & 1.0 &0.28 & -- & 0.31 & 0.07 & 0.23 \\ 24496 & 5536 & 4.3 & -0.13 & 1.5 &0.18 & 0.03 & 0.14 & 0.05 & 0.02 \\ 25665 & 4967 & 4.7 & 0.01 & 1.2 &-- &-- & -- & -- & -- \\ 25680 & 5843 & 4.5 & 0.05 & 1.1 &-- &-- & -- & -- & -- \\ 26923 & 5920 & 4.4 & -0.03 & 1.0 &0.12 & 0.06 & 0.12 & 0.05 & -0.10 \\ 28005 & 5980 & 4.2 & 0.23 & 1.1 &-0.03& 0.04 & -0.06 & 0.03 & 0.14 \\ 28447 & 5639 & 4.0 & -0.09 & 1.1 &0.14 & 0.04 & 0.17 & 0.03 & 0.08 \\ 29150 & 5733 & 4.3 & 0.00 & 1.1 &0.06 & 0.01 & 0.04 & 0.02 & -0.13 \\ 29310 & 5852 & 4.2 & 0.08 & 1.4 &-- & -- & -- & -- & -- \\ 29645 & 6009 & 4.0 & 0.14 & 1.3 &-0.07& 0.05 & -0.09 & 0.04 & -0.12 \\ 30495 & 5820 & 4.4 & -0.05 & 1.3 &-- & -- & -- & & -- \\ 33632 & 6072 & 4.3 & -0.24 & 1.1 &0.11 & 0.04 & 0.19 & 0.04 & 0.28 \\ 34411 & 5890 & 4.2 & 0.10 & 1.1 &-- &-- & -- &-- & \\ 37008 & 5016 & 4.4 & -0.41 & 0.8 &0.18 &-- & 0.26 &-- & 0.18 \\ 37394 & 5296 & 4.5 & 0.09 & 1.1 &-- &-- & -- & -- & -- \\ 38858 & 5776 & 4.3 & -0.23 & 1.1 &0.19 & 0.09 & 0.18 & 0.07 & 0.12 \\ 39587 & 5955 & 4.3 & -0.03 & 1.5 &0.11 & 0.03 & 0.05 & 0.03 & 0.20 \\ 40616 & 5881 & 4.0 & -0.22 & 1.1 &0.17 & 0.04 & 0.15 & 0.03 & 0.16 \\ 41330 & 5904 & 4.1 & -0.18 & 1.2 &0.15 & 0.05 & 0.19 & 0.06 & 0.32 \\ 41593 & 5312 & 4.3 & -0.04 & 1.1 &-0.02& 0.11 & 0.01 & 0.08 & -0.14 \\ 42618 & 5787 & 4.5 & -0.07 & 1.0 &0.11 & 0.03 & 0.17 & 0.05 & 0.01 \\ 42807 & 5719 & 4.4 & -0.03 & 1.1 &0.09 & 0.08 & 0.06 & 0.05 & 0.05 \\ 43587 & 5927 & 4.1 & -0.11 & 1.3 &0.01 & 0.06 & 0.07 & 0.05 & 0.30 \\ 43856 & 6143 & 4.1 & -0.19 & 1.1 &0.19 & 0.04 & 0.09 & 0.04 & -- \\ 43947 & 6001 & 4.3 & -0.24 & 1.1 &0.06 & 0.04 & 0.12 & 0.03 & 
0.33 \\ 45088 & 4959 & 4.3 & -0.21 & 1.2 &0.33 & 0.07 & 0.21 & 0.00 & 0.25 \\ 47752 & 4613 & 4.6 & -0.05 & 0.2 &-- & -- & -- & -- & -0.13 \\ 48682 & 5989 & 4.1 & 0.05 & 1.3 &-0.08& 0.09 & -0.05 & 0.04 & 0.07 \\ 50281 & 4712 & 3.9 & -0.20 & 1.6 &-- & -- & -- & -- & -- \\ 50692 & 5911 & 4.5 & -0.10 & 0.9 &0.11 & 0.06 & 0.12 & 0.03 & 0.24 \\ 51419 & 5746 & 4.1 & -0.37 & 1.1 &0.32 & 0.11 & 0.25 & 0.08 & 0.39 \\ 51866 & 4934 & 4.4 & 0.00 & 1.0 &-- & -- & -- & -- & -- \\ 53927 & 4860 & 4.64 & -0.22 & 1.2 &0.17 & 0.04 & 0.27 & 0.00 & 0.11 \\ 54371 & 5670 & 4.2 & 0.06 & 1.2 &-- & -- & -- & -- &-- \\ 55575 & 5949 & 4.3 & -0.31 & 1.1 &0.18 & 0.04 & 0.21 & 0.06 & 0.35 \\ 58595 & 5707 & 4.3 & -0.31 & 1.2 &0.20 & 0.08 & 0.21 & 0.04 & 0.30 \\ 59747 & 5126 & 4.4 & -0.04 & 1.1 &-0.04& 0.07 & -0.01 & -- & -0.04 \\ 61606 & 4956 & 4.4 & -0.12 & 1.3 &-- & -- & -- & -- & -- \\ 62613 & 5541 & 4.4 & -0.10 & 1.1 &0.14 & 0.08 & 0.11 & 0.08 & 0.14 \\ 63433 & 5693 & 4.35 & -0.06 & 1.9 &0.08 & 0.00 & 0.23 & 0.03 & 0.15 \\ 64468 & 5014 & 4.2 & 0.00 & 1.2 &0.07 & 0.00 & -0.02 & 0.06 & -0.13 \\ 64815 & 5864 & 4.0 & -0.33 & 1.1 &0.28 & 0.08 & 0.23 & 0.08 & -- \\ 65874 & 5936 & 4.0 & 0.05 & 1.3 &-- & -- & -- & -- & -- \\ 66573 & 5821 & 4.6 & -0.53 & 1.1 &0.38 & 0.06 & 0.37 & 0.05 & 0.45 \\ 68638 & 5430 & 4.4 & -0.24 & 1.1 &0.14 & 0.06 & 0.09 & 0.05 & 0.23 \\ 70923 & 5986 & 4.2 & 0.06 & 1.1 &0.00 & 0.05 & -0.07 & 0.05 & 0.06 \\ 71148 & 5850 & 4.2 & 0.00 & 1.1 &-- & -- & -- & -- & -0.13 \\ 72760 & 5349 & 4.1 & 0.01 & 1.1 &-- & -- & -- & -- & -- \\ 72905 & 5884 & 4.4 & -0.07 & 1.5 &-- & -- & -- & -- & -- \\ 73344 & 6060 & 4.1 & 0.08 & 1.1 &0.02 & 0.04 & -0.11 & 0.05 & -0.06 \\ 73667 & 4884 & 4.4 & -0.58 & 0.9 &0.42 & 0.06 & 0.28 & 0.05 & 0.55 \\ 75732 & 5373 & 4.3 & 0.25 & 1.1 &-0.07& 0.04 & -0.13 & 0.06 & 0.02 \\ 75767 & 5823 & 4.2 & -0.01 & 0.9 &-- & -- & -- &-- &-- \\ 76151 & 5776 & 4.4 & 0.05 & 1.1 &-- & -- & -- &-- &-- \\ 79969 & 4825 & 4.4 & -0.05 & 1.0 &-- & -- & -- & -- & -- \\ 82106 & 4827 & 
4.1 & -0.11 & 1.1 &0.08 & 0.07 & 0.04 & 0.04 & 0.25 \\ 82443 & 5334 & 4.4 & -0.03 & 1.3 &-0.03& 0.08 & 0.10 & 0.03 & 0.10 \\ 87883 & 5015 & 4.4 & 0.00 & 1.1 &-- & -- & -- & -- & -- \\ 88072 & 5778 & 4.3 & 0.00 & 1.1 &0.05 & 0.03 & 0.07 & 0.03 & 0.02 \\ 89251 & 5886 & 4.0 & -0.12 & 1.1 &-- & -- & 0.14 & 0.08 &-- \\ 89269 & 5674 & 4.4 & -0.23 & 1.1 &0.19 & 0.05 & 0.09 & 0.05 & 0.17 \\ 91347 & 5931 & 4.4 & -0.43 & 1.1 &0.24 & 0.05 & 0.26 & 0.03 & 0.50 \\ 94765 & 5077 & 4.4 & -0.01 & 1.1 &0.15 & 0.06 & 0.01 & 0.04 & -0.07 \\ 95128 & 5887 & 4.3 & 0.01 & 1.1 &0.06 & 0.00 & 0.02 & 0.03 & 0.01 \\ 97334 & 5869 & 4.4 & 0.06 & 1.2 &-0.02& 0.04 & -0.04 & 0.03 & 0.06 \\ 97658 & 5136 & 4.5 & -0.32 & 1.2 &0.29 & 0.07 & 0.27 & 0.04 & 0.21 \\ 98630 & 6060 & 4.1 & 0.22 & 1.4 &-- & -- & -- & -- & -- \\ 101177 & 5932 & 4.1 & -0.16 & 1.1 &0.13 & 0.07 & 0.14 & 0.03 & 0.25 \\ 102870 & 6055 & 4.0 & 0.13 & 1.4 &-0.09& 0.05 & -0.04 & 0.05 & 0.09 \\ 105631 & 5416 & 4.4 & 0.16 & 1.2 &-- & -- & -- & -- & -- \\ 107705 & 6040 & 4.2 & 0.06 & 1.4 &-- & -- & -- & -- & -- \\ 108954 & 6037 & 4.4 & -0.12 & 1.1 &-- & -- & -- & -- & 0.21 \\ 109358 & 5897 & 4.2 & -0.18 & 1.1 &0.14 & 0.05 & 0.12 & 0.03 & 0.32 \\ 110463 & 4950 & 4.5 & -0.05 & 1.2 &-- & -- & -- &-- & -- \\ 110833 & 5075 & 4.3 & 0.00 & 1.1 &-- & -- & -- &-- & -- \\ 111395 & 5648 & 4.6 & 0.10 & 0.9 &0.02 & 0.09 & -0.05 & 0.00 & -- \\ 112758 & 5203 & 4.2 & -0.56 & 1.1 &-- & & -- & -- & -- \\ 114710 & 5954 & 4.3 & 0.07 & 1.1 &-0.05& 0.04 & -0.03 & 0.05 & -0.05 \\ 115383 & 6012 & 4.3 & 0.11 & 1.1 &-0.06& 0.03 & 0.01 & 0.03 & 0.01 \\ 115675 & 4745 & 4.45 & 0.02 & 1.0 &-- & -- & -- & -- & -- \\ 116443 & 4976 & 3.9 & -0.48 & 1.1 &-- & -- & -- & -- & 0.35 \\ 116956 & 5386 & 4.55 & 0.08 & 1.2 &0.06 & 0.06 & -0.01 & 0.03 & -0.21 \\ 117043 & 5610 & 4.5 & 0.21 & 0.4 &-0.02& 0.03 & -0.14 & 0.03 & -0.09 \\ 119802 & 4763 & 4.0 & -0.05 & 1.1 &-- &-- & -- & -- & -- \\ 122064 & 4937 & 4.5 & 0.07 & 1.1 &-- &-- & -- & -- & -0.05 \\ 124642 & 4722 & 4.65 & 0.02 & 
1.3 &-- & -- & -- & -- &-- \\ 125184 & 5695 & 4.3 & 0.31 & 0.7 &-- & -- & -- & -- &-- \\ 126053 & 5728 & 4.2 & -0.32 & 1.1 &0.19 & 0.05 & 0.16 & 0.05 & 0.46 \\ 127506 & 4542 & 4.6 & -0.08 & 1.2 &-- & -- & -- &-- & -- \\ 128311 & 4960 & 4.4 & 0.03 & 1.3 &0.16 & 0.05 & 0.00 & 0.03 & -- \\ 130307 & 4990 & 4.3 & -0.25 & 1.4 &0.27 & 0.05 & 0.23 & 0.03 & -- \\ 130948 & 5943 & 4.4 & -0.05 & 1.3 &-- & -- & -- & -- & -- \\ 131977 & 4683 & 3.7 & -0.24 & 1.8 &-- & -- & -- & -- &-- \\ 135599 & 5257 & 4.3 & -0.12 & 1.0 &0.07 & 0.04 & 0.11 & 0.05 & 0.21 \\ 137107 & 6037 & 4.3 & 0.00 & 1.1 &0.05 & 0.03 & 0.05 & 0.04 & -0.13 \\ 139777 & 5771 & 4.4 & 0.01 & 1.3 &0.11 & 0.00 & 0.08 & 0.05 & -- \\ 139813 & 5408 & 4.5 & 0.00 & 1.2 &-- & -- & -- & -- & -- \\ 140538 & 5675 & 4.5 & 0.02 & 0.9 &-- & -- & -- & -- & -- \\ 141004 & 5884 & 4.1 & -0.02 & 1.1 &0.12 & 0.03 & 0.08 & 0.06 & 0.04 \\ 141272 & 5311 & 4.4 & -0.06 & 1.3 &0.16 & 0.03 & 0.09 & 0.03 & 0.00 \\ 142267 & 5856 & 4.5 & -0.37 & 1.1 &-- & -- & -- & -- & -- \\ 144287 & 5414 & 4.5 & -0.15 & 1.1 &0.22 & 0.05 & 0.21 & 0.03 & 0.19 \\ 145675 & 5406 & 4.5 & 0.32 & 1.1 &-0.05& 0.05 & -0.10 & 0.03 & 0.10 \\ 146233 & 5799 & 4.4 & 0.01 & 1.1 &0.14 & 0.03 & 0.13 & 0.02 & -0.09 \\ 149661 & 5294 & 4.5 & -0.04 & 1.1 &0.16 & 0.07 & 0.09 & 0.05 & -0.14 \\ 149806 & 5352 & 4.55 & 0.25 & 0.4 &-- &-- & -- &-- &-- \\ 151541 & 5368 & 4.2 & -0.22 & 1.3 &-- &-- & -- &-- &-- \\ 153525 & 4810 & 4.7 & -0.04 & 1.0 &-- & -- & -- & -- & -- \\ 154345 & 5503 & 4.3 & -0.21 & 1.3 &0.18 & 0.05 & 0.11 & 0.04 & 0.35 \\ 156668 & 4850 & 4.2 & -0.07 & 1.2 &0.17 & 0.04 & 0.22 & 0.07 & 0.16 \\ 156985 & 4790 & 4.6 & -0.18 & 1.0 &-- & & -- & & 0.17 \\ 158633 & 5290 & 4.2 & -0.49 & 1.3 &0.20 & 0.08 & 0.19 & 0.04 & 0.51 \\ 160346 & 4983 & 4.3 & -0.10 & 1.1 &-- & -- & -- & -- & -- \\ 161098 & 5617 & 4.3 & -0.27 & 1.1 &-- & -- & -- & -- & -- \\ 164922 & 5392 & 4.3 & 0.04 & 1.1 &-- & -- & -- & -- & -- \\ 165173 & 5505 & 4.3 & -0.05 & 1.1 &0.15 & 0.06 & 0.09 &-- & -0.03 \\ 
165341 & 5314 & 4.3 & -0.08 & 1.1 &-- & -- & -- &-- & -0.03 \\ 165476 & 5845 & 4.1 & -0.06 & 1.1 &-- & -- & -- &-- & -- \\ 165670 & 6178 & 4.0 & -0.10 & 1.5 &-- & -- & -- & -- & -- \\ 165908 & 5925 & 4.1 & -0.60 & 1.1 &0.25 & 0.04 & 0.30 & 0.06 & -0.37 \\ 166620 & 5035 & 4.0 & -0.22 & 1.0 &-- &-- & -- & -- & -- \\ 171314 & 4608 & 4.65 & 0.07 & 1.0 &-- &-- & -- & -- & -- \\ 174080 & 4764 & 4.55 & 0.04 & 1.0 &-- &-- & -- & -- & 0.08 \\ 175742 & 5030 & 4.5 & -0.03 & 2.0 &-- &-- & -- & -- & -- \\ 176377 & 5901 & 4.4 & -0.17 & 1.3 &0.14 & 0.07 & 0.18 & 0.02 & 0.11 \\ 176841 & 5841 & 4.3 & 0.23 & 1.1 &-- &-- & -- &-- & -0.11 \\ 178428 & 5695 & 4.4 & 0.14 & 1.0 &-- &-- & -- &-- & -0.17 \\ 180161 & 5473 & 4.5 & 0.18 & 1.1 &-- &-- & -- &-- & -- \\ 182488 & 5435 & 4.4 & 0.07 & 1.1 &-- & -- & -- &-- & 0.05 \\ 183341 & 5911 & 4.3 & -0.01 & 1.3 &-- & -- & -- & -- & -- \\ 184385 & 5536 & 4.45 & 0.12 & 0.9 &0.00 & 0.05 & -0.03 & 0.05 & -- \\ 185144 & 5271 & 4.2 & -0.33 & 1.1 &0.10 & 0.05 & 0.06 & 0.03 & 0.37 \\ 185414 & 5818 & 4.3 & -0.04 & 1.1 &0.04 & 0.04 & 0.08 & 0.02 & -0.14 \\ 186408 & 5803 & 4.2 & 0.09 & 1.1 &-- & -- & -- & -- & -0.22 \\ 186427 & 5752 & 4.2 & 0.02 & 1.1 &-- & -- & -- & -- & -0.10 \\ 187897 & 5887 & 4.3 & 0.08 & 1.1 &-0.01& 0.05 & -0.02 & 0.06 & -0.01 \\ 189087 & 5341 & 4.4 & -0.12 & 1.1 &-- & -- & -- & -- & 0.01 \\ 189733 & 5076 & 4.4 & -0.03 & 1.5 &0.13 & 0.04 & 0.13 & 0.05 & -- \\ 190007 & 4724 & 4.5 & 0.16 & 0.8 &-- & -- & -- & -- & -- \\ 190406 & 5905 & 4.3 & 0.05 & 1.0 &-- & -- & -- & -- & 0.07 \\ 190470 & 5130 & 4.3 & 0.11 & 1.0 &0.00 & 0.06 & -0.10 & 0.05 & -0.14 \\ 190771 & 5766 & 4.3 & 0.13 & 1.5 &-- & -- & -- & -- & -- \\ 191533 & 6167 & 3.8 & -0.10 & 1.5 &-- & -- & -- & -- & 0.19 \\ 191785 & 5205 & 4.2 & -0.12 & 1.2 &0.12 & 0.04 & 0.08 & 0.06 & 0.26 \\ 195005 & 6075 & 4.2 & -0.06 & 1.3 &-- & -- & -- & -- & 0.08 \\ 195104 & 6103 & 4.3 & -0.19 & 1.1 &-- & -- & -- & -- & -- \\ 197076 & 5821 & 4.3 & -0.17 & 1.2 &0.09 & 0.05 & 0.11 & 0.05 & 0.29 \\ 
199960 & 5878 & 4.2 & 0.23 & 1.1 &-- & -- & -- & -- & -- \\ 200560 & 5039 & 4.4 & 0.06 & 1.1 &-- & -- & -- & -- & -- \\ 202108 & 5712 & 4.2 & -0.21 & 1.1 &0.16 & 0.08 & 0.19 & 0.03 & 0.35 \\ 202575 & 4667 & 4.6 & -0.03 & 0.5 &-- & -- & -- & -- & 0.20 \\ 203235 & 6071 & 4.1 & 0.05 & 1.3 &-- & -- & -- & -- & -- \\ 205702 & 6020 & 4.2 & 0.01 & 1.1 &0.06 & 0.07 & -0.01 & 0.05 & 0.21 \\ 206860 & 5927 & 4.6 & -0.07 & 1.8 &0.09 & 0.07 & 0.10 & -- & 0.16 \\ 208038 & 4982 & 4.4 & -0.08 & 1.0 &-- & -- & -- & -- & -- \\ 208313 & 5055 & 4.3 & -0.05 & 1.0 &-- & -- & -- & -- & -- \\ 208906 & 5965 & 4.2 & -0.80 & 1.7 &0.44 & 0.13 & 0.39 & 0.05 & 0.57 \\ 210667 & 5461 & 4.5 & 0.15 & 0.9 &0.12 & 0.05 & 0.03 & 0.03 & -- \\ 210752 & 6014 & 4.6 & -0.53 & 1.1 &0.40 & & 0.41 & 0.04 & -- \\ 211472 & 5319 & 4.4 & -0.04 & 1.1 &0.04 & 0.03 & 0.06 & 0.03 & 0.06 \\ 214683 & 4747 & 4.6 & -0.46 & 1.2 &-- & -- & -- &-- & 0.53 \\ 216259 & 4833 & 4.6 & -0.55 & 0.5 &-- & -- & -- &-- & -- \\ 216520 & 5119 & 4.4 & -0.17 & 1.4 &-- & -- & -- & -- & -- \\ 217014 & 5763 & 4.3 & 0.17 & 1.1 &-0.08& 0.08 & 0.02 & 0.05 & -0.10 \\ 217813 & 5845 & 4.3 & 0.03 & 1.5 &0.04 & 0.00 & 0.02 & 0.00 & -- \\ 218868 & 5547 & 4.45 & 0.21 & 0.4 &-- &-- & -- & -- & -0.19 \\ 219538 & 5078 & 4.5 & -0.04 & 1.1 &-- &-- & -- & -- & -- \\ 219623 & 5949 & 4.2 & 0.04 & 1.2 &-- & -- & -- & -- & -0.12 \\ 220140 & 5144 & 4.6 & -0.03 & 2.4 &-- &-- & -- & -- & -- \\ 220182 & 5364 & 4.5 & -0.03 & 1.2 &-- &-- & -- & -- & -- \\ 220221 & 4868 & 4.5 & 0.16 & 0.5 &0.09 & 0.04 & 0.06 & 0.03 & -0.04 \\ 221851 & 5184 & 4.4 & -0.09 & 1.0 &0.16 & 0.07 & 0.12 & 0.03 &-- \\ 222143 & 5823 & 4.45 & 0.15 & 1.1 &-- & -- & -- & -- &-- \\ 224465 & 5745 & 4.5 & 0.08 & 0.8 &-- & -- & -- & -- & -- \\ 263175 & 4734 & 4.5 & -0.16 & 0.5 &0.16 & 0.03 & 0.14 & 0.04 & 0.13 \\ BD12063 & 4859 & 4.4 & -0.22 & 0.6 &0.19 & 0.00 & 0.21 & 0.05 & 0.34 \\ BD124499& 4678 & 4.7 & 0.00 & 0.5 &-- & -- & -- & -- & -- \\ \hline thick disc& & & & & & & & \\ \hline 245 & 5400 & 
3.4 & -0.84 & 0.7 & 0.38 & 0.12 & 0.47 & 0.05 & 0.46 \\ 3765 & 5079 & 4.3 & 0.01 & 1.1 & 0.06& 0.07 & 0.07 & 0.05 & -0.01 \\ 5351 & 4378 & 4.6 & -0.21 & 0.5 & -- & -- & -- & -- & 0.33 \\ 6582 & 5350 & 4.5 & -0.83 & 0.4 & 0.28& 0.06 & 0.22 & 0.06 & 0.60 \\ 13783 & 5350 & 4.1 & -0.75 & 1.1 & -- & -- & -- & -- & 0.57 \\ 18757 & 5741 & 4.3 & -0.25 & 1.0 & 0.15& 0.03 & 0.16 & 0.05 & 0.32 \\ 22879 & 5825 & 4.42 & -0.91 & 0.9 & 0.41& 0.03 & 0.38 & 0.03 & 0.58 \\ 65583 & 5373 & 4.6 & -0.67 & 0.7 & 0.37& 0.06 & 0.34 & 0.08 & 0.49 \\ 76932 & 5840 & 4.0 & -0.95 & 1.0 & -- & -- & -- & -- & 0.57 \\ 106516 & 6165 & 4.4 & -0.72 & 1.1 & -- & -- & -- & -- & -- \\ 110897 & 5925 & 4.2 & -0.45 & 1.1 & 0.15& 0.06 & 0.16 & 0.05 & 0.42 \\ 135204 & 5413 & 4.0 & -0.16 & 1.1 & -- & -- & -- & & 0.03 \\ 152391 & 5495 & 4.3 & -0.08 & 1.3 & 0.13& 0.06 & 0.07 & 0.05 & 0.10 \\ 157089 & 5785 & 4.0 & -0.56 & 1.0 & -- & -- & -- & -- & 0.43 \\ 157214 & 5820 & 4.5 & -0.29 & 1.0 & -- & -- & -- & -- & 0.21 \\ 159062 & 5414 & 4.3 & -0.40 & 1.0 & 0.34& 0.03 & 0.23 & 0.05 & 0.27 \\ 165401 & 5877 & 4.3 & -0.36 & 1.1 & -- & -- & -- & -- & 0.13 \\ 190360 & 5606 & 4.4 & 0.12 & 1.1 & -0.02& 0.03 & 0.04 & -- & 0.15 \\ 201889 & 5600 & 4.1 & -0.85 & 1.2 & 0.40& 0.03 & 0.41 & 0.05 & 0.57 \\ 201891 & 5850 & 4.4 & -0.96 & 1.0 & 0.45& 0.03 & 0.37 & 0.03 & 0.58 \\ 204521 & 5809 & 4.6 & -0.66 & 1.1 & 0.40& 0.03 & 0.36 & 0.05 & 0.58 \\ \hline Hercules stream & & & & & & & & \\ \hline 13403 & 5724 & 4.0 & -0.31 & 1.1& 0.20 & 0.03 & 0.19 & 0.05 & 0.48 \\ 19308 & 5844 & 4.3 & 0.08 & 1.1& -0.03& 0.03 & -0.04 & 0.02 & 0.14 \\ 23050 & 5929 & 4.4 & -0.36 & 1.1& 0.23 & 0.07 & 0.22 & 0.02 & 0.43 \\ 30562 & 5859 & 4.0 & 0.18 & 1.1& -- & -- & -- & -- & -- \\ 64606 & 5250 & 4.2 & -0.91 & 0.8& 0.33 & 0.05 & 0.35 & 0.05 & 0.73 \\ 68017 & 5651 & 4.2 & -0.42 & 1.1& -- & -- & -- & -- & -- \\ 81809 & 5782 & 4 & -0.28 & 1.3& 0.18 & 0.03 & 0.15 & 0.03 & 0.25 \\ 107213 & 6156 & 4.1 & 0.07 & 1.6& -0.05& 0.07 & -0.10 & 0.04 & 0.15 \\ 139323 & 
5204 & 4.6 & 0.19 & 0.7& -0.09& 0.06 & 0.04 & 0.03 & -0.02 \\ 139341 & 5242 & 4.6 & 0.21 & 0.9& -- & -- & -- & -- & -- \\ 144579 & 5294 & 4.1 & -0.70 & 1.3& 0.35 & 0.03 & 0.35 & 0.04 & 0.67 \\ 159222 & 5834 & 4.3 & 0.06 & 1.2& 0.01 & 0.05 & 0.00 & 0.06 & -0.04 \\ 159909 & 5749 & 4.1 & 0.06 & 1.1& -- & -- & -- & -- & -- \\ 215704 & 5418 & 4.2 & 0.07 & 1.1& -- & -- & -- & -- & -- \\ 218209 & 5705 & 4.5 & -0.43 & 1.0 & 0.32 & 0.03 & 0.32 & 0.05 & 0.50 \\ 221354 & 5242 & 4.1 & -0.06 & 1.2& 0.06 & & 0.09 & & 0.08 \\ \hline nonclassified & & & & & & & & \\ \hline 4628 & 4905 & 4.6 & -0.36 & 0.5 & -- & -- & 0.24 & 0.05 & -- \\ 4635 & 5103 & 4.4 & 0.07 & 0.8 & 0.05 & 0.10 & 0.11 & 0.10 & 0.15 \\ 10145 & 5673 & 4.4 & -0.01 & 1.1 & -- & -- & -- & -- & -- \\ 12051 & 5458 & 4.55 & 0.24 & 0.5 & -- & -- & -- & -- & -- \\ 13974 & 5590 & 3.8 & -0.49 & 1.1 & 0.14 & 0.03 & 0.07 & 0.03 & 0.31 \\ 17660 & 4713 & 4.75 & 0.17 & 1.3 & -- & -- & -- & -- & -- \\ 20165 & 5145 & 4.4 & -0.08 & 1.1 & -- & -- & -- & -- & -- \\ 24206 & 5633 & 4.5 & -0.08 & 1.1 & 0.10 & 0.00 & 0.12 & 0.05 & 0.20 \\ 32147 & 4945 & 4.4 & 0.13 & 1.1 & 0.02 & 0.04 & -0.01 & 0.03 & 0.29 \\ 45067 & 6058 & 4.0 & -0.02 & 1.2 & -- & -- & -- & -- &-- \\ 84035 & 4808 & 4.8 & 0.25 & 0.5 & -- & -- & -- & -- &-- \\ 86728 & 5725 & 4.3 & 0.22 & 0.9 & -- & -- & -- & -- & -- \\ 90875 & 4788 & 4.5 & 0.24 & 0.5 & -- & -- & -- & -- & -- \\ 117176 & 5611 & 4.0 & -0.03 & 1.0 & 0.10 & 0.05 & 0.11 & 0.03 & 0.25 \\ 117635 & 5230 & 4.3 & -0.46 & 0.7 & -- & -- & -- & -- & -- \\ 154931 & 5910 & 4.0 & -0.10 & 1.1 & -- & -- & -- & -- & -- \\ 159482 & 5620 & 4.1 & -0.89 & 1.0 & -- &-- & -- & -- & -- \\ 168009 & 5826 & 4.1 & -0.01 & 1.1 & -- &-- & -- & -- & -- \\ 173701 & 5423 & 4.4 & 0.18 & 1.1 & -0.01& 0.07 & -0.02 & 0.05 & 0.14 \\ 182736 & 5430 & 3.7 & -0.06 & 1.0 & 0.10 & 0.03 & 0.11 & 0.04 & 0.23 \\ 184499 & 5750 & 4.0 & -0.64 & 1.5 & 0.31 & 0.00 & 0.33 & 0.03 & 0.66 \\ 184768 & 5713 & 4.2 & -0.07 & 1.1 & -- &-- & -- & -- & -- \\ 186104 & 
5753 & 4.2 & 0.05 & 1.1 & -- &-- & -- & -- & -- \\ 215065 & 5726 & 4.0 & -0.43 & 1.1 & 0.27 & 0.03 & 0.18 & 0.04 & 0.35 \\ 219134 & 4900 & 4.2 & 0.05 & 0.8 & -- & -- & -- & -- & -- \\ 219396 & 5733 & 4.0 & -0.10 & 1.2 & 0.09 & 0.03 & 0.15 & 0.05 & 0.32 \\ 224930 & 5300 & 4.1 & -0.91 & 0.7 & 0.33 & 0.05 & 0.20 & 0.02 & 0.61 \\ \hline \end{longtable} \label{lastpage} \bsp
Title: Multiband Gravitational Wave Cosmography with Dark Sirens
Abstract: Gravitational waves might help resolve the tension between early and late Universe measurements of the Hubble constant, and this possibility can be enhanced with a gravitational wave detector in the decihertz band as we will demonstrate in this study. Such a detector is particularly suitable for the multiband observation of stellar-mass black hole binaries between space and ground, which would significantly improve the source localization accuracy thanks to a long baseline for timing triangulation, hence promoting the "dark siren" cosmology. Proposed decihertz concepts include DECIGO/B-DECIGO, TianGO, and others. We consider here the prospects of multiband observation of dark siren binaries with a variety of network configurations. We find that a multiband observation can uniquely identify a black hole binary to a single galaxy to a cosmological distance, and thus a dark siren behaves as if it had an electromagnetic counterpart. Considering only fully localized dark sirens, we use a Fisher matrix approach to estimate the error in the Hubble constant and matter density parameter. We find that a decihertz detector substantially improves our ability to measure cosmological parameters because it enables host galaxies to be identified out to a larger distance without the systematics from statistical techniques based on comparing the population distribution.
https://export.arxiv.org/pdf/2208.01668
\title{Multiband Gravitational Wave Cosmography with Dark Sirens } \author{Brian C. Seymour\,\orcidlink{0000-0002-7865-1052}} % \email{seymour.brianc@gmail.com} \author{Hang Yu\,\orcidlink{0000-0002-6011-6190}} % \author{Yanbei Chen\,\orcidlink{0000-0002-9730-9463}} % \affiliation{TAPIR, Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125, USA} \date{\today} \section{Introduction}\label{sec:intro} The Hubble constant $H_0$ describes the current expansion rate of the universe. Currently, there is a substantial discrepancy between Planck measurements of the cosmic microwave background fluctuations \cite{Planck:2018vyg} and SH0ES measurements of Type~Ia supernovae with the distance ladder \cite{Riess:2020fzl, Riess:2016jrr}. Notably, the Hubble tension between these early- and late-universe measurements is at least $ 4 \sigma$ \cite{Verde:2019ivm,Camarena:2021jlr}. Moreover, the tension has persisted since the first Planck results \cite{Planck:2013pxb} and has strengthened with time. It is important to establish whether the Hubble tension truly exists or whether it is due to astrophysical systematics, because a genuine tension could signify a violation of the $\Lambda$CDM concordance model \cite{Camarena:2021jlr,DiValentino:2021izs}. The detection of gravitational waves (GWs) can provide an independent late-universe measurement of the Hubble constant. In particular, the luminosity distance of the source can be obtained directly from the measured gravitational waveform. A Hubble constant measurement can then be readily attained from a standard siren: a binary neutron star (BNS) merger with a coincident electromagnetic (EM) counterpart \cite{Schutz:1986gp, Holz:2005df}. With the redshift measured optically from EM follow-up and the luminosity distance measured by the GW detector, one can directly measure the Hubble constant. 
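At low redshift, the standard siren measurement reduces to the first-order Hubble law $H_0 \approx cz/D_L$. The sketch below uses made-up, roughly GW170817-like numbers, and ignores peculiar velocities and higher-order cosmological corrections.

```python
C_KM_S = 299792.458  # speed of light in km/s

def hubble_constant_low_z(z, d_l_mpc):
    """First-order Hubble law H0 = c z / D_L, valid only for z << 1.

    Peculiar velocities, which dominate the error budget at ~40 Mpc,
    are ignored in this sketch.
    """
    return C_KM_S * z / d_l_mpc

# Hypothetical GW170817-like event: redshift from the host galaxy's EM
# spectrum, luminosity distance from the GW amplitude.
h0 = hubble_constant_low_z(z=0.010, d_l_mpc=43.0)  # ~70 km/s/Mpc
```

The redshift comes from electromagnetic observations and the distance from the waveform amplitude, so the two measurements are independent, which is what makes the standard siren a distance-ladder-free probe.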
Indeed, the Hubble constant was measured with the BNS GW170817 \cite{LIGOScientific:2017adf} and its corresponding EM counterpart \cite{Nicholl:2017ahq,Coulter:2017wya,LIGOScientific:2017vwq}. However, only a small number of GW events are expected to be bright BNS mergers with EM counterparts. The majority of observed GW events are binary black hole (BBH) events without EM counterparts, which are thus known as dark sirens. Notably, many BBH events have already been detected and cataloged \cite{LIGOScientific:2020ibl,LIGOScientific:2021djp, Venumadhav:2019lyq, Olsen:2022pin, Nitz:19, Nitz:20}. {\it Dark sirens} can measure the Hubble constant through statistical techniques using galaxy catalogs \cite{DelPozzo:2011vcw,Chen:2017rfc, Fishbach:19, Scelfo:20, Finke:21, CigarranDiaz:22, Mukherjee:22} and features in the mass distribution~\cite{Taylor:12, Farr:19, Mastrogiovanni:21, Yu:22}. These statistical techniques can be further extended with realistic galaxy clustering, which improves redshift identification through galaxy density correlations \cite{MacLeod:2007jd, Nair:2018ign,Gray:2019ksv, Mukherjee:21}. Applied to the GWTC-3 catalog, these techniques yield $H_0 = 68^{+13}_{-12} \kmsmpc $ (68\% credible level) using only dark sirens~\cite{LIGOScientific:2021aug}. Combining the statistical method with the only standard siren, GW170817, gives $H_0 = 68^{+8}_{-6} \kmsmpc $. For reference, GW170817 alone gives $H_0 = 69^{+17}_{-8} \kmsmpc $ \cite{LIGOScientific:2017adf}. We must bear in mind that the statistical dark siren approach relies fundamentally on population models, so there are additional systematic uncertainties \cite{LIGOScientific:2021aug, Yu:22}. 
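The statistical galaxy-catalog technique can be caricatured as marginalizing the single-event likelihood over every candidate host in the localization volume. The toy below makes strong simplifying assumptions (low-redshift Hubble law, Gaussian distance error, equal galaxy weights, invented event numbers) and is not the hierarchical analysis of the works cited above.

```python
import numpy as np

def dark_siren_h0_posterior(h0_grid, d_l_mpc, sigma_d_mpc, galaxy_redshifts):
    """Toy statistical dark siren: sum the likelihood over all candidate
    hosts in the localization volume, assuming D_L = c z / H0 (low z),
    a Gaussian GW distance error, and equal host weights."""
    c = 299792.458  # km/s
    like = np.zeros_like(h0_grid)
    for z in galaxy_redshifts:
        d_pred = c * z / h0_grid
        like += np.exp(-0.5 * ((d_pred - d_l_mpc) / sigma_d_mpc) ** 2)
    return like / like.sum()  # normalized on the grid (flat prior)

h0_grid = np.linspace(40.0, 100.0, 601)
# Invented event: D_L = 430 +/- 40 Mpc with three candidate hosts.
post = dark_siren_h0_posterior(h0_grid, 430.0, 40.0, [0.095, 0.100, 0.110])
h0_map = h0_grid[np.argmax(post)]
```

Each candidate galaxy contributes a bump near $cz/D_L$, so the more galaxies fall inside the localization volume, the flatter the posterior; this is exactly the degradation that better localization, and ultimately unique host identification, removes.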
In contrast, the Planck measurement is $H_0 = 67.4^{+0.5}_{-0.5} \kmsmpc $ \cite{Planck:2018vyg} and the SH0ES measurement is $H_0 = 72.5^{+1.0}_{-1.0} \kmsmpc $ \cite{Riess:2021jrx}, corresponding to the $ 4 \sigma$ tension~\cite{Verde:2019ivm}. A promising new class of detector operates in the decihertz range (0.01 - 1 Hz), and such a detector may aid in measuring the Hubble constant. It would lie between the millihertz LISA band \cite{LISA:2017pwj} and the 10 - 1000 Hz ground band. A decihertz detector has many advantages for measuring the Hubble constant. First, it would provide early warning for BNS mergers, which would help guarantee EM identification \cite{Kuns:2019upi, Sedda:2019uro}. Second, a joint decihertz detection would improve the parameter estimation for stellar-mass BBHs by measuring their waves several years before they enter the ground band \cite{Kuns:2019upi, Yu:2020dlm}. Since statistical approaches to dark sirens are degraded when too many galaxies lie inside the localization volume, better angular localization significantly helps the measurement of cosmological parameters. Furthermore, there is the fascinating possibility of a multiband detection, in which a decihertz detector observes a BBH's inspiral and ground-based detectors then measure the merger and ringdown. A decihertz multiband detection has been found to substantially improve parameter estimation accuracy~\cite{Kuns:2019upi}. By combining decihertz and ground detectors, the detector network can uniquely localize a BBH to its host galaxy without any EM counterpart. While a ground network can do this on its own~\cite{Chen:2016tys}, the addition of a decihertz detector significantly increases the range at which a BBH can be localized. In this way, a multiband detection of a BBH can behave like a standard siren. At present, there are a number of existing and proposed gravitational wave detectors. 
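The baseline advantage behind timing triangulation can be made quantitative with a back-of-envelope scaling: the matched-filter timing accuracy is roughly $\sigma_t \sim 1/(2\pi\,\mathrm{SNR}\,\sigma_f)$ for effective bandwidth $\sigma_f$, and the triangulation angle error is $\sim c\sigma_t/D$ for a baseline $D$. All numbers below (SNRs, bandwidths, baselines) are assumed for illustration only.

```python
import math

C_M_S = 299792458.0    # speed of light, m/s
AU_M = 1.495978707e11  # astronomical unit, m

def timing_accuracy_s(snr, sigma_f_hz):
    """Rough matched-filter timing accuracy, sigma_t ~ 1/(2 pi SNR sigma_f)."""
    return 1.0 / (2.0 * math.pi * snr * sigma_f_hz)

def triangulation_angle_rad(snr, sigma_f_hz, baseline_m):
    """Angular resolution ~ c sigma_t / D for a baseline D."""
    return C_M_S * timing_accuracy_s(snr, sigma_f_hz) / baseline_m

# Assumed, illustrative numbers: an Earth-scale ground network
# (D ~ 1e7 m, bandwidth ~ 100 Hz) versus a decihertz space detector
# whose heliocentric orbit sweeps a ~2 au baseline over a year
# (bandwidth ~ 0.1 Hz), both at SNR = 10.
ground = triangulation_angle_rad(10.0, 100.0, 1.0e7)
space = triangulation_angle_rad(10.0, 0.1, 2.0 * AU_M)
```

Despite an effective bandwidth three orders of magnitude smaller, the orbital baseline gives the space detector the finer triangulation angle in this toy comparison, illustrating why adding a decihertz instrument tightens the localization volume.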
Advanced LIGO \cite{LIGOScientific:2014pky}, Advanced Virgo \cite{VIRGO:2014yos}, and KAGRA \cite{KAGRA:2018plz} are currently operating second-generation (2G) ground-based gravitational wave detectors. Following the 2G detectors, LIGO Voyager aims to maximize the reach of the existing LIGO observatory facilities by adding cryogenic operation, heavier silicon test masses, and improved quantum squeezing~\cite{Adhikari:17, LIGO:2020xsf}. Einstein Telescope \cite{Punturo:2010zz} and Cosmic Explorer \cite{Reitze:2019iox} are third-generation (3G) ground-based detectors, with planned arm lengths of 10 km and 40 km respectively, which aim to begin observation in the mid-2030s. Due to technical challenges~\cite{hall2021gravitational,harms2013low}, detecting gravitational waves below $\sim 1$\,Hz may best be carried out in space. LISA \cite{LISA:2017pwj}, TianQin \cite{TianQin:2015yph}, and Taiji \cite{Hu:2017mde, Ruan:2018tsw} are proposed space-based detectors that focus on the $\sim 10^{-3} - 10^{-1}$ Hz band. Furthermore, there are a number of space-based plans for a decihertz detector in the $0.01 - 1$ Hz band. The Japanese detector DECIGO is an ambitious prospect consisting of three clusters of interferometers with 1000 km arm lengths \cite{Kawamura:2020pcg, Sato:2017dkf, Seto:2001qf}. Big Bang Observer is a similar concept to DECIGO, proposed by the European Space Agency \cite{Crowder:2005nr}. Previous work found that Big Bang Observer alone would provide precision cosmological tests by measuring and localizing nearly every GW event in the universe \cite{Cutler:2009qv}. B-DECIGO is a planned pathfinder mission for DECIGO with a single interferometer and a 100 km arm length \cite{Kawamura:2020pcg, Sato:2017dkf}. Finally, TianGO is a space-based decihertz concept designed with nearer-term technology \cite{Kuns:2019upi,kunsthesis}. 
For this analysis, we study how well we can measure the expansion rate of the universe by observing BBHs with future ground detectors and decihertz concepts. We consider two representative decihertz detectors: (i) TianGO in the LIGO Voyager era, and (ii) B-DECIGO in the ET/CE era. TianGO is chosen because it represents a possible near-term decihertz detector: it would be operational in the late 2020s/early 2030s and work alongside the LIGO Voyager network. B-DECIGO is a longer-term prospect, which would be operational in the late 2030s. We forecast how well a dark siren can be localized with the Fisher matrix formalism~\cite{Finn:1992wt, Cutler:1994ys} for both detector setups. If the localization comoving volume contains only one galaxy, we consider the dark siren to be \textit{localized}. We consider the case where localized events have a measured redshift, due either to spectroscopic follow-up or to a complete galaxy catalog. We find that adding a decihertz detector to the network improves the range at which a dark siren can be localized. We then constrain the Hubble constant and matter density parameter by stacking the localized dark siren events together, using the BBH merger rate inferred by LIGO~\cite{LIGOScientific:2021psn}. We assume that the Hubble constant and matter density take the Planck values and fix all other cosmological parameters. Our study demonstrates how a decihertz detector can complement the cosmological measurement capabilities of ground-based detectors. The rest of the paper is organized as follows. In Sec.~\ref{sec:BBH-waveform}, we describe the observed strain in a space-based detector, and we use the Fisher matrix formalism to forecast the measurement uncertainties of a multiband detection. In Sec.~\ref{sec:cosmo-constraints}, we describe how we stack localized events together and forecast the dark siren constraints on the Hubble constant and matter density parameter for various detector setups. 
We then conclude this work in Sec.~\ref{sec:conclusion}. Finally, App.~\ref{app:antenna} delves into the space-based waveform specifics, and App.~\ref{app:bayes} justifies the conservative approach of considering only localized dark sirens. Throughout the work, we use $G=c=1$. \section{Measurement of a Binary Black Hole}\label{sec:BBH-waveform} \subsection{TianGO Waveform} Let us first model the waveform in a space-based detector. TianGO orbits the Sun with its detector plane inclined at $60^\circ$, similar to LISA's orbit~\cite{Dhurandhar:05}. We therefore use two coordinate frames to describe the geometry of TianGO. The ecliptic frame has basis $(\ubarvect x, \ubarvect y, \ubarvect z)$, where $\ubarvect z$ is normal to the orbit of the earth. The frame with $(\uvect x, \uvect y, \uvect z)$ is fixed on the center of TianGO, with $(\uvect x, \uvect y)$ oriented along its two arms. We denote $\uvect N$ as the line-of-sight vector and $\uvect L$ as the direction of the binary's angular momentum. We can write the waveform as~\cite{Yu:2020dlm} \begin{equation}\label{eq:space-waveform} \tilde h(f) = \Lambda(f) e^{- i \left[\Phi_P(f) +\Phi_D(f)\right]} \tilde h_c(f) \,, \end{equation} where $\tilde h_c(f)$ is the carrier waveform, $\Lambda(f)$ is the amplitude in Eq.~\eqref{eq:lambdaf}, $\Phi_P(f)$ is the polarization phase in Eq.~\eqref{eq:phip}, and $\Phi_D(f)$ is the phase modulation due to the Doppler effect in Eq.~\eqref{eq:phid}. The carrier waveform is independent of the antenna patterns and depends only on the parameters $(\mathcal{M}_z, q, D_L, t_c, \phi_c)$, where $\mathcal{M}_z = (1 +z) \mathcal{M}_c$ is the detector-frame chirp mass, $q$ is the mass ratio, $D_L$ is the luminosity distance, and $t_c, \phi_c$ are the time and phase of coalescence. Because we wish to model the gravitational waveform over the frequencies spanned by both TianGO and Voyager, the carrier waveform is modeled with a phenomenological waveform that combines inspiral, merger and ringdown. 
Specifically, we use an \verb!IMRPhenomD! waveform \cite{Khan:2015jqa, Husa:2015iqa}. The notable difference for a space-based detector compared to a ground-based one is that its orientation and location change with time. Thus, the amplitude and polarization phase, which characterize the antenna patterns, acquire a frequency dependence; they are derived in \cite{Cutler:1997ta, Apostolatos:1994mx} for a space-based detector. We write them as \begin{align} \Lambda(f) &= \left[ A_+^2 F_+^2(f) + A_\times^2 F_\times^2(f) \right]^{1/2} \, , \label{eq:lambdaf}\\ \Phi_P &= \arctan \left[ \frac{- A_\times F_\times(f)}{A_+ F_+(f)} \right] \, \label{eq:phip}. \end{align} Here $F_{+,\times}(\phi_S, \theta_S, \psi_S)$ are the detector beam pattern coefficients, where $(\phi_S, \theta_S)$ give the direction of $\uvect N$ in the TianGO corotating frame (barred angles denote the corresponding quantities in the ecliptic frame), and $\psi_S$ is the polarization angle. The polarization amplitudes are $A_+ = 1 + (\uvect L \cdot \uvect N )^2$ and $A_\times = 2 \uvect L \cdot \uvect N$. Additionally, there is a phase modulation due to the Doppler effect induced by the detector's orbital motion (which we have assumed to be heliocentric), \begin{align} \Phi_D(f) &= 2 \pi f \tau \, \label{eq:phid},\\ &= 2 \pi f R_\text{AU} \sin \bar\theta_S \cos \left( \bar\phi_t(f) - \bar\phi_S \right) \, , \end{align} where $\tau = - \vect d \cdot \uvect N$, $\vect d$ is the vector from the solar system barycenter to the detector, $R_{\rm AU}$ is one AU, and $\bar\phi_t(f)$ is the azimuthal location of the detector in its solar orbit. The explicit expressions for $F_{+,\times}$, $\uvect L \cdot \uvect N$, and $\bar\phi_t(f)$ are given in App.~\ref{app:antenna}. The ground-based waveforms take the same form as Eq.~\eqref{eq:space-waveform}, but with $\Lambda(f),\Phi_P(f),\Phi_D(f)$ evaluated in the $f\rightarrow \infty$ limit, since the antenna patterns are nearly constant while the signal is in band. 
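As a concrete illustration of Eq.~\eqref{eq:phid}, the Doppler phase can be evaluated directly. The following Python sketch is our own illustration (the function name and the choice to express $R_\text{AU}$ as a light-travel time are assumptions, not taken from the analysis code):

```python
import numpy as np

# Doppler phase modulation: Phi_D(f) = 2*pi*f * R_AU * sin(theta_S_bar) * cos(phi_t - phi_S_bar).
# In G = c = 1 units, one AU is expressed as a light-travel time of ~499 s.
R_AU = 499.0  # seconds

def doppler_phase(f, phi_t, theta_S_bar, phi_S_bar):
    """Phase (radians) induced by the detector's heliocentric orbital motion.

    f                      : GW frequency in Hz
    phi_t                  : azimuthal location of the detector in its solar orbit (rad)
    theta_S_bar, phi_S_bar : source sky location in the ecliptic frame (rad)
    """
    return 2.0 * np.pi * f * R_AU * np.sin(theta_S_bar) * np.cos(phi_t - phi_S_bar)
```

A source at the ecliptic pole ($\bar\theta_S = 0$) picks up no Doppler modulation, while a source in the ecliptic plane is modulated with a one-year period through $\bar\phi_t(f)$.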
In Fig.~\ref{fig:detector-asd-waveform}, we plot a sample TianGO BBH waveform, along with the sensitivity of several gravitational wave detectors. The waveform is truncated at low frequency (the left side of the figure) by the 5 year observation time. It exhibits amplitude modulation around $f\sim 2 \cdot 10^{-2}$ Hz because TianGO's orientation relative to $\uvect N$ changes with a period of one year. \subsection{Parameter Estimation Background} Let us now describe how we use the Fisher analysis to estimate parameter uncertainties. The Fisher matrix formalism provides a useful approximation to parameter estimation in the high-SNR limit \cite{Finn:1992wt,Cutler:1994ys, Vallisneri:2007ev}. We consider a binary with parameters \begin{equation} \vect{\theta}^a = \left( \ln \mathcal{M}_z, q, \ln D_L, t_c, \phi_c, \bar\phi_S,\bar\theta_S,\bar\phi_L,\bar\theta_L \right) \, . \end{equation} The standard deviation of a specific parameter $\vect{\theta}^a$ is found on the diagonal of the inverse of the Fisher matrix, \begin{equation} \Delta \vect{\theta}^a = \sqrt{\left( \Gamma^{-1} \right)_{aa}} \, , \end{equation} where the Fisher information matrix is defined as \begin{equation} \Gamma_{ab} \equiv \left( \frac{\partial \tilde h}{\partial \vect{\theta}_a} \Big| \frac{\partial \tilde h}{\partial \vect{\theta}_b}\right) \, , \end{equation} and the waveform template $\tilde h(f,\vect\theta)$ is a function of frequency $f$ and parameters $\vect \theta$. The inner product between two signals $\tilde g(f), \tilde h(f)$ is defined as \begin{equation} \left( \tilde g \big|\tilde h \right) = 4 \, \text{Re} \int_0^\infty \frac{\tilde g^{\ast}(f) \tilde h(f)}{S_n(f)} df \end{equation} where $S_n(f)$ is the detector noise power spectral density. For a network of detectors, we sum the individual Fisher matrices of each detector $d$, \begin{equation} \left( \Gamma_{ab} \right)^\text{net} = \sum_d \Gamma_{ab}^d \,. 
\end{equation} \subsection{Results from Parameter Estimation} To understand how a decihertz detector can enhance the parameter estimation of a BBH, we examine the results obtained using TianGO with the HLI Voyager network. The luminosity distance is defined by \begin{equation} D_L(z) = \frac{1+z}{H_0}\int_0^z \frac{dz'}{E(z')} \end{equation} where \begin{equation} E(z) \equiv \sqrt{\Omega_m \left( 1+z \right)^3 + \Omega_\Lambda } \, . \end{equation} For precision tests of cosmology, we are mostly interested in the luminosity distance accuracy and volume localization. The size of the sky-localization error ellipse $\Delta \Omega$ can be expressed as \cite{Cutler:1997ta} \begin{equation} \Delta \Omega = 2 \pi \sin \bar\theta_S \sqrt{ \Sigma_{\bar\phi_S\bar\phi_S} \Sigma_{\bar\theta_S\bar\theta_S} - \left( \Sigma_{\bar\phi_S\bar\theta_S} \right)^2} \, . \end{equation} The uncertainty in comoving volume can be related to the angular uncertainty by Eq.~(28) of Ref.~\cite{Hogg:1999ad}, \begin{equation} \Delta V_{\mathrm{C}}=\frac{D_{L}^{2}}{(1+z)^{2}} \Delta \Omega \Delta D_C \, , \end{equation} where the comoving distance equals $D_C = D_L / \left( 1+z \right)$. Using a change of variables, the comoving volume uncertainty can be rewritten as \begin{equation} \label{eq:Vc} \Delta V_{\mathrm{C}}=\frac{D_L^2}{(1+z)^{3} + D_{L} H(z) \left( 1+z \right)} \Delta \Omega \Delta D_{L} \, , \end{equation} where $H(z) = H_0 E(z)$. Systematic errors beyond the detector sensitivity can degrade the accuracy of the luminosity distance. The first is gravitational lensing, which perturbs the measured luminosity distance. We use the fit from \cite{Hirata:2010ba}, \begin{equation} \frac{\left( \Delta D_L \right)_\text{lens}}{D_L}=0.066\left[\frac{1-(1+z)^{-0.25}}{0.25}\right]^{1.8} \, . \end{equation} Once a particular galaxy is identified, the peculiar velocity adds uncertainty to the inferred cosmological redshift. The measured redshift is the sum of the cosmological and Doppler redshifts. 
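The localization volume of Eq.~\eqref{eq:Vc} is straightforward to evaluate for a flat $\Lambda$CDM background. The short Python sketch below is our own illustration (function names and default parameter values are assumptions):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def E(z, Om=0.315):
    """Dimensionless Hubble rate E(z) for flat LambdaCDM."""
    return np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

def delta_Vc(z, D_L, dOmega, dD_L, H0=67.4, Om=0.315):
    """Comoving-volume uncertainty of Eq. (Vc), in Mpc^3.

    D_L, dD_L in Mpc; dOmega in steradians; H0 in km/s/Mpc.
    """
    Hz = (H0 / C_KMS) * E(z, Om)  # H(z) in 1/Mpc (c = 1 units)
    return D_L**2 / ((1.0 + z)**3 + D_L * Hz * (1.0 + z)) * dOmega * dD_L
```

Combined with the galaxy number density introduced below, this directly yields the expected number of galaxies inside the error volume of an event.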
We can express the peculiar velocity systematic error as \cite{Gordon:2007zw} \begin{equation} \frac{\left( \Delta D_L \right)_\text{pv}}{D_L} = \Big|1 - \frac{\left( 1+z \right)^2 }{D_L H(z)} \Big| \sigma_v \, , \end{equation} where we have assumed $\sigma_v = 200 \text{ km s}^{-1} / c$. The relative magnitude of this effect decreases rapidly with distance, since the cosmological redshift increases while the RMS peculiar velocity is approximately constant. Figure~\ref{fig:PE} gives the measurement accuracy for luminosity distance, angular resolution, and spatial localization. We consider a binary with $\mathcal{M}_c = 25 M_\odot$, $q = 1.05$, a trailing angle between the earth and TianGO of $t_a = 5^\circ$, and a $5$ year observation. The measurement accuracy depends strongly on the inclination $\iota$ of the binary and on the orientation of the detector network at merger. Therefore, we randomize over $(\bar\phi_S,\bar\theta_S, \bar\phi_L, \bar\theta_L)$ in the figure. The line represents the median measurement accuracy, while the shaded region contains $80 \%$ of possible systems. While we use a 5 year observing time for TianGO, its parameter estimation is not particularly sensitive to the observing time as long as it exceeds $\sim 1 \text{ week}$, since most of the SNR comes from frequencies above $0.1 \text{ Hz}$ (see Fig.~\ref{fig:detector-asd-waveform}). In the top panel of Fig.~\ref{fig:PE}, we show the fractional uncertainty in the luminosity distance $\Delta D_L / D_L$ versus redshift. One can see that the addition of TianGO does not significantly improve the ability to measure the luminosity distance compared with the HLI network. 
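The two systematic contributions above can be compared numerically. This Python fragment is a minimal sketch (function names and inputs are our own assumptions):

```python
import numpy as np

def lens_error(z):
    """Lensing contribution to Delta D_L / D_L (the fit quoted in the text)."""
    return 0.066 * ((1.0 - (1.0 + z)**-0.25) / 0.25)**1.8

def pv_error(z, D_L, Hz, sigma_v=200.0 / 299792.458):
    """Peculiar-velocity contribution to Delta D_L / D_L.

    D_L in Mpc, Hz = H(z) in 1/Mpc, sigma_v in units of c.
    """
    return abs(1.0 - (1.0 + z)**2 / (D_L * Hz)) * sigma_v
```

At low redshift the peculiar-velocity term scales roughly as $\sigma_v/z$ and dominates, while the lensing term grows slowly from zero, matching the qualitative behavior shown in Fig.~\ref{fig:PE}.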
Most of the SNR from the event comes from the ground network, so the addition of TianGO improves the luminosity distance measurement by a factor of only 1.5\footnote{Note that in a previous paper we found that adding TianGO substantially improved the HLI Voyager network's luminosity distance measurement (Fig.~3 and Fig.~13 of \cite{Kuns:2019upi}). That result was affected by an error in the space waveform code.}. We also plot the lensing and peculiar velocity systematic errors. We see that the systematic error due to peculiar velocity is only large enough to affect our measurement for very close events. Meanwhile, the effect of lensing is negligible and can be ignored in the following sections on cosmology. In the middle panel of Fig.~\ref{fig:PE}, we give the angular resolution $\Delta \Omega$ versus redshift. We see an improvement in angular resolution by a factor of 20 when TianGO is added to the HLI Voyager network. The long baseline between the earth and TianGO is responsible for this improved sky localization. Finally, let us describe the comoving volume localization in the bottom panel of Fig.~\ref{fig:PE}. We plot the comoving volume localization from Eq.~\eqref{eq:Vc}, and find that adding TianGO improves it by a factor of 30. We use a comoving galaxy density of $n_\text{gal} = 0.01 \text{ gal}/\text{Mpc}^3$ \cite{Chen:2016tys}. This corresponds to the number density of galaxies that are at least about 25\% as bright as the Milky Way; the majority of GW events are expected to come from galaxies at least this luminous \cite{Chen:2017rfc}. If $n_\text{gal} \Delta V_C <1$, we say the event is localized. Using this criterion, we find that HLI Voyager can localize events up to $z\sim 0.15$, while TianGO + HLI Voyager can localize them up to $z\sim 0.30$. \subsection{Event Rate} To infer cosmological parameters, we stack all dark siren events that the network can localize. Let us now estimate how many dark sirens can be localized. 
First, the merger rate density $\mathcal{R}(z)$ describes the number of mergers per comoving volume per year. We model it as a power law, \begin{equation} \mathcal{R}(z) = \mathcal{R}_0 \left( 1 + z \right)^{\kappa} \, , \end{equation} choosing $\kappa = 2.7$ so that it tracks the Madau-Dickinson star formation rate \cite{Madau:2014bja}. Since this is the source-frame merger rate density, an additional factor of $1/(1+z)$ is needed to convert time from the source frame to the detector frame. Therefore, we write the detector-frame merger rate of sources with $z<z_m$ as \begin{equation} \label{eq:merger-rate} R_\text{obs}(z_m) = \int_0^{z_m} \mathcal{R}(z') \frac{1}{1+z'} \frac{d V_c}{dz'} dz' \, , \end{equation} where \begin{equation} \frac{d V_c}{d z} = \frac{4 \pi }{H_0} \frac{D_C^2(z)}{E(z)} \, . \end{equation} We use the local BBH merger rate $\mathcal{R}_0 = 20 \text{ Gpc}^{-3} \text{yr}^{-1}$, which together with $\kappa = 2.7$ is consistent with GWTC-3 \cite{LIGOScientific:2021psn}. In Fig.~\ref{fig:localization}, we give the number of detections per year that can be fully localized by HLI Voyager with and without TianGO. We see that TianGO will nearly double the range at which a BBH can be localized to a single host. This corresponds to an order of magnitude increase in the localization rate. Furthermore, since the localizations occur at higher redshift, we can probe cosmological parameters beyond just the Hubble constant. \section{Cosmological Constraints}\label{sec:cosmo-constraints} Given a set of gravitational wave observations, we wish to infer the values of the cosmological parameters consistent with them. Others have studied how to measure the Hubble constant with dark sirens using statistical inference \cite{Chen:2017rfc, LIGOScientific:2019zcs, Yu:22}. Currently, statistical methods are used because the LVK's best-localized BBHs have a comoving volume resolution of $\Delta V_c \sim 10^5 \text{ Mpc}^3$ \cite{LIGOScientific:2021aug}, which contains thousands of galaxies. 
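Eq.~\eqref{eq:merger-rate} can be integrated numerically for the quoted rate parameters. The following Python sketch is our own illustration (a flat $\Lambda$CDM background with Planck-like parameters is assumed, and the trapezoidal integration settings are arbitrary):

```python
import numpy as np

C_KMS = 299792.458  # km/s

def R_obs(z_m, R0=20e-9, kappa=2.7, H0=67.4, Om=0.315, n=2000):
    """Detector-frame merger rate (per year) of sources with z < z_m.

    R0 is in Mpc^-3 yr^-1 (20 Gpc^-3 yr^-1 = 20e-9 Mpc^-3 yr^-1).
    """
    z = np.linspace(0.0, z_m, n + 1)
    dz = z[1] - z[0]
    E = np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))
    # cumulative comoving distance D_C(z) in Mpc (trapezoidal rule)
    dC = np.concatenate(([0.0], np.cumsum(dz * 0.5 * (1.0 / E[1:] + 1.0 / E[:-1]))))
    dC *= C_KMS / H0
    dVdz = 4.0 * np.pi * (C_KMS / H0) * dC**2 / E          # comoving volume element
    integrand = R0 * (1.0 + z)**kappa / (1.0 + z) * dVdz   # extra 1/(1+z) for time dilation
    return 0.5 * np.sum(integrand[1:] + integrand[:-1]) * dz
```

With these defaults, $R_\text{obs}(0.4)$ comes out at roughly five hundred mergers per year, consistent with the event counts used in the next section.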
Since our sources are well localized, we can directly measure the redshift of each dark siren event from the uniquely identified host galaxy. We demonstrate with a 2D mock simulation in App.~\ref{app:bayes} that the likelihood function reduces to this particularly simple form for well-localized sources. We stress that our localization condition $n_\text{gal} \Delta V_C <1$ is a conservative approach. It does not require a galaxy catalogue, since optical telescopes can measure the redshift of the host galaxy after the event. Furthermore, galaxy clustering can improve the cosmology constraints \cite{MacLeod:2007jd}. Additionally, more massive galaxies are statistically more likely to be the source of the GW, which would further improve the ability to localize a GW in the Bayesian approach. Under the localization assumption, a dark siren (BBH) behaves like a bright one (e.g., a BNS with an EM counterpart) for cosmology. Let us now describe how to compute confidence intervals on the cosmological parameters from a set of dark siren observations. For a set of cosmological parameters $\vect H = (H_0,\Omega_m, ... )$, we can compute their confidence intervals with the Fisher matrix \begin{equation}\label{eq:fisher-cosmo} \tilde \Gamma_{ij} = \sum_{\text{event } k} \frac{1}{\left(\Delta D_L(z_k)\right)^2} \frac{\partial D_L(z_k,\vect H)}{\partial H_i}\frac{\partial D_L(z_k,\vect H)}{\partial H_j} \, , \end{equation} where we use the tilde $\tilde \Gamma$ to distinguish it from the waveform parameter estimation matrix used in the last section. The error in a cosmological parameter is then \begin{equation} \Delta H_i = \sqrt{(\tilde \Gamma^{-1})_{ii}} \, . \end{equation} In the nearby universe, the Fisher matrix result for a single event reduces to $(\Delta H_0/H_0)^2 = (\Delta D_L/D_L)^2$. In Fig.~\ref{fig:main-plot}, we plot the two-sigma confidence intervals on the Hubble constant and matter density parameter using only uniquely localized BBH events. 
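The stacking in Eq.~\eqref{eq:fisher-cosmo} can be sketched with finite-difference derivatives of the luminosity distance. This Python fragment is our own minimal illustration (function names, step sizes, and the flat-$\Lambda$CDM fiducial values are assumptions):

```python
import numpy as np

C_KMS = 299792.458  # km/s

def lum_dist(z, H0, Om, n=400):
    """Luminosity distance in Mpc for flat LambdaCDM (trapezoidal rule)."""
    zz = np.linspace(0.0, z, n + 1)
    invE = 1.0 / np.sqrt(Om * (1.0 + zz)**3 + (1.0 - Om))
    dC = (C_KMS / H0) * 0.5 * np.sum(invE[1:] + invE[:-1]) * (zz[1] - zz[0])
    return (1.0 + z) * dC

def cosmo_fisher(events, H0=67.4, Om=0.315, eps=1e-4):
    """Stacked Fisher matrix on (H0, Om); events = [(z, Delta_D_L in Mpc), ...]."""
    F = np.zeros((2, 2))
    for z, dDL in events:
        # central finite differences for dD_L/dH0 and dD_L/dOm
        dH = (lum_dist(z, H0 * (1 + eps), Om) - lum_dist(z, H0 * (1 - eps), Om)) / (2 * eps * H0)
        dO = (lum_dist(z, H0, Om + eps) - lum_dist(z, H0, Om - eps)) / (2 * eps)
        g = np.array([dH, dO])
        F += np.outer(g, g) / dDL**2
    return F
```

The forecast uncertainties follow as $\Delta H_i = \sqrt{(\tilde\Gamma^{-1})_{ii}}$; a single low-redshift event constrains only a degenerate combination, while events spread in redshift break the $H_0$-$\Omega_m$ degeneracy.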
We use a five year observation period, and randomly pick $(\bar\phi_S,\bar\theta_S, \bar\phi_L, \bar\theta_L)$. We use $\mathcal{M}_c = 25 M_\odot$, $q = 1.05$, a trailing angle of $5^\circ$, and uniformly randomize the time until merger. The luminosity distances of the events are sampled according to Eq.~\eqref{eq:merger-rate}. This corresponds to 2515 events with $z<0.4$, of which 43 are localized by HLI Voyager alone and 476 by HLI Voyager + TianGO. Fig.~\ref{fig:main-plot} shows that the addition of TianGO substantially improves our ability to measure both the Hubble constant and the matter density parameter. Because a multiband measurement increases the distance out to which we can uniquely localize a host galaxy, we can measure the matter density parameter much more accurately. HLI Voyager measures $H_0$ to $1\%$ and $\Omega_m$ to $40\%$; adding TianGO improves these to $0.3\%$ and $8\%$, respectively, while Planck measured $H_0$ to $0.8\%$ and $\Omega_m$ to $2\%$. We also give the uncertainty ellipse for a possible 3G network consisting of 2 CE2's and 1 ET-D, and for B-DECIGO combined with the 3G network. We see an improvement in both the near-term and long-term networks from adding a decihertz detector, particularly in the matter density parameter, since its effect is most pronounced at larger redshifts. Using the covariance matrix of $(H_0, \Omega_m)$, we can see how well the expansion rate is measured as a function of redshift. In Fig.~\ref{fig:expansion-rate-plt}, we plot the expansion rate $H(z)/(1+z)$ versus redshift, shading the 68\% CL regions. We see that gravitational wave detectors measure the expansion rate well around $z\sim0.2$, because most of the localized events lie at this redshift. At large redshifts, the cosmic expansion rate uncertainty grows because the matter density parameter is more poorly measured. 
For reference, we also plot the constraints from GW170817 and Planck 2018. Finally, we estimate the constraints on the Hubble constant and matter density parameter for various 2G to 3G detector networks in Tab.~\ref{tab:network-configs}. Specifically, we compare the cosmological constraints from localized dark sirens over a 5 year observation period. For the 3G detectors, we consider Cosmic Explorer 2 (CE2) and Einstein Telescope D (ET-D). We see that even with 2 CE2's and ET-D, TianGO improves the measurement of the Hubble constant by a factor of 2, and of the matter density parameter by a factor of 3, owing to a sizable increase in the number of localized events. For the long-term multiband case, we use a network consisting of B-DECIGO, CE2, and ET-D. Because the orbit of B-DECIGO is still under discussion \cite{Kawamura:2018esd}, we place it in a $5^\circ$ trailing heliocentric orbit like TianGO's. Performing the same analysis as in Section~\ref{sec:cosmo-constraints}, we find that the addition of B-DECIGO can improve the cosmological measurement capabilities of the 3G detectors. 
\begin{table*}[] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|} \hline & $\Delta H_0 / H_0$ & $\Delta \Omega_m$ & Localizations / 5 yr & Notes \\ \hline \begin{tabular}[c]{@{}l@{}}3 V \\ (+ T)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$1 \times 10^{-2}$ \\ ($2 \times 10^{-3}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$1 \times 10^{-1}$\\ ($2 \times 10^{-2}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}43\\ (476)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Voyager at Hanford, Livingston, India\\ sites.\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}1 CE2 + 1 ET-D\\ (+ T)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$2 \times 10^{-3}$\\ ($6 \times 10^{-4}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$1 \times 10^{-2}$\\ ($3 \times 10^{-3}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}382\\ (1930)\end{tabular} & \begin{tabular}[c]{@{}l@{}}CE2 at Hanford, \\ ET-D at GEO-600 sites.\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}2 CE2 + 1 ET-D\\ (+ T)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$1 \times 10^{-3}$\\ ($5 \times 10^{-4}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$6 \times 10^{-3}$\\ ($2 \times 10^{-3}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}843\\ (2410)\end{tabular} & \begin{tabular}[c]{@{}l@{}}CE2 at Hanford, Livingston.\\ ET-D at GEO-600 sites.\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}2 CE2 + 2V\\ (+ T)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$1 \times 10^{-3}$\\ ($6 \times 10^{-4}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$9 \times 10^{-3}$\\ ($3 \times 10^{-3}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}556\\ (2211)\end{tabular} & \begin{tabular}[c]{@{}l@{}}CE2 at Virgo, India sites. 
Voyager at \\ Hanford, Livingston sites.\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}1 CE2 + 1 ET-D\\ (+ B-DECIGO)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$2 \times 10^{-3}$\\ ($5 \times 10^{-4}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$1 \times 10^{-2}$\\ ($2 \times 10^{-3}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}380\\ (4758)\end{tabular} & \begin{tabular}[c]{@{}l@{}}CE2 at Hanford, ET-D at GEO-600 sites.\\ B-DECIGO placed in 5$^\circ$ trailing heliocentric orbit.\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}2 CE2 + 1 ET-D\\ (+ B-DECIGO)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$1 \times 10^{-3}$\\ ($3 \times 10^{-4}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}$6 \times 10^{-3}$\\ ($1 \times 10^{-3}$)\end{tabular} & \begin{tabular}[c]{@{}l@{}}835\\ (5770)\end{tabular} & \begin{tabular}[c]{@{}l@{}}CE2 at Hanford, Livingston, ET-D at GEO-600 sites.\\ B-DECIGO placed in 5$^\circ$ trailing heliocentric orbit.\end{tabular} \\ \hline \end{tabular}% } \caption{Dark siren constraints on the Hubble constant and matter density parameter for various detector configurations. We use the same methodology for this table as in the rest of this paper: the Fisher matrix confidence intervals on the cosmological parameters use only dark sirens that are completely localized. } \label{tab:network-configs} \end{table*} \section{Conclusion}\label{sec:conclusion} In this paper, we studied how a space-based decihertz detector can enhance the sensitivity of a ground network for dark siren cosmological measurements. We made the case that these detectors will measure a significant number of `bright' dark siren BBHs -- GW events for which we can uniquely localize and identify the host galaxy. We then used a Fisher matrix formalism to place constraints on the cosmological parameters. We estimated how well the Hubble constant and matter density parameter could be measured by BBH dark sirens with a five year observation of TianGO plus three LIGO Voyagers. 
We find that the multiband detection of dark sirens improves the measurement of the Hubble constant by about a factor of 3, and the localized events at larger redshift allow the matter density parameter to be resolved in the multiband case. In the future, it would be interesting to extend our analysis to include dark sirens that are not uniquely identified but are still well localized. Since the fully-localized criterion discards events with only a small number of candidate host galaxies, cosmological information can still be extracted from these events. Moreover, there are other effects that can improve the sensitivity further, such as exploiting the clustering of galaxies to improve localization \cite{MacLeod:2007jd} and weighting the galaxies by luminosity \cite{Gray:2019ksv}. Measuring the cosmology with gravitational waves is easier when the host galaxy is uniquely identified. The statistical dark siren approach is degenerate with parameters such as the merger rate evolution with redshift and the BBH population model (as discussed in the GWTC-3 cosmology paper \cite{LIGOScientific:2021aug}). Simultaneously measuring the cosmology and these population parameters can be done by examining the distribution of BBH events \cite{You:21, Mukherjee:21, Yu:22}, but results in a less sensitive measurement of the cosmological parameters. Conversely, if these factors are not jointly measured, they can bias the measurement of the Hubble constant \cite{Trott:2021fnx, Yu:22}. Consequently, a multiband detection of dark sirens with uniquely identified hosts has the potential to isolate the measurement of cosmological parameters from these population parameters. \begin{acknowledgements} B.S. acknowledges support by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1745301. H.Y. acknowledges the support of the Sherman Fairchild Foundation. Y.C. and B.S. 
acknowledge support from the Brinson Foundation, the Simons Foundation (Award Number 568762), and NSF Grants PHY-2011961, PHY-2011968, and PHY-1836809. \end{acknowledgements} \appendix \onecolumngrid \section{Antenna Patterns of TianGO}\label{app:antenna} The standard formulas for the plus and cross antenna patterns of a detector are \begin{align} F_+ &= \left( \frac{1+\cos^2\theta_S}{2} \right) \cos 2\phi_S \cos 2\psi_S-\cos\theta_S \sin2\phi_S \sin2\psi_S \, ,\\ F_\times &= \left( \frac{1+\cos^2\theta_S}{2} \right) \cos 2\phi_S \sin 2\psi_S + \cos\theta_S \sin2\phi_S \cos2\psi_S \, , \end{align} where $(\phi_S,\theta_S)$ are in the detector's frame. We use the pycbc detector class to get the ground-based antenna patterns \cite{Biwer:2018osg}. The antenna patterns of a space-based detector are more complicated, however, because the detector's orientation changes with time. The antenna patterns therefore acquire a time dependence $F_{+,\times}(t)$, which we convert to a frequency dependence using the time-frequency relation of the inspiral. To find the detector beam pattern coefficients, let us first describe the geometry of the system. We have two coordinate systems: unbarred coordinates $(\uvect{x},\uvect{y},\uvect{z})$, which correspond to the individual detector, and barred coordinates $(\uvect{\bar{x}},\uvect{\bar{y}},\uvect{\bar{z}})$ in the ecliptic frame. 
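For concreteness, these standard beam patterns can be coded directly. The Python sketch below is our own illustration (it implements the textbook single-detector patterns, not the pycbc interface):

```python
import numpy as np

def antenna_patterns(phi_S, theta_S, psi_S):
    """Plus and cross beam patterns of a 90-degree interferometer.

    phi_S, theta_S, psi_S : detector-frame sky angles and polarization angle (rad)
    """
    a = 0.5 * (1.0 + np.cos(theta_S)**2)
    F_plus = (a * np.cos(2 * phi_S) * np.cos(2 * psi_S)
              - np.cos(theta_S) * np.sin(2 * phi_S) * np.sin(2 * psi_S))
    F_cross = (a * np.cos(2 * phi_S) * np.sin(2 * psi_S)
               + np.cos(theta_S) * np.sin(2 * phi_S) * np.cos(2 * psi_S))
    return F_plus, F_cross
```

Rotating the polarization angle by $\psi_S \to \psi_S + \pi/4$ swaps the roles of the two polarizations, and $F_+^2 + F_\times^2$ is independent of $\psi_S$ for an overhead source.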
The relationship between the orientation of the detector's frame and the ecliptic is \begin{align} \uvect{x}(t) &=-\frac{\sin 2 \bar{\phi}_{\mathrm{t}}}{4} \uvect{\bar{x}}+\frac{3+\cos 2 \bar{\phi}_{\mathrm{t}}}{4} \uvect{\bar{y}}+\frac{\sqrt{3}}{2} \sin \bar{\phi}_{\mathrm{t}} \uvect{\bar{z}} \, ,\nn\\ \uvect{ y}(t) &= \uvect z(t) \times \uvect x(t)\, , \nn\\ \uvect{z}(t)&=-\frac{\sqrt{3}}{2}\left(\cos \bar{\phi}_{\mathrm{t}} \uvect{\bar{x}}+\sin \bar{\phi}_{\mathrm{t}} \uvect{\bar{y}}\right)+\frac{1}{2} \uvect{\bar{z}} \,, \label{eq:detector-coords} \end{align} where the phase of TianGO in the ecliptic frame is \begin{equation}\label{eq:phit} \bar{\phi}_t(f) = \frac{2 \pi t(f)}{1 \text{ yr}} - t_a \, , \end{equation} and $t_a$ is the trailing angle, equal to $5^\circ$ for TianGO. The time as a function of frequency is \cite{Kuns:2019upi} \begin{equation} t(f) = t_c - 5 \left( 8 \pi f \right)^{-8/3} \mathcal{M}_z^{-5/3} \left[ 1 + \frac{4}{3} \left( \frac{743}{336} + \frac{11}{4}\frac{\mu}{M} \right) x - \frac{32 \pi}{5}x^{3/2} \right] \, , \end{equation} where $\mu$ is the reduced mass, $M$ is the total mass, and \begin{equation} x = \left( \pi M_z f \right)^{2/3} \, , \end{equation} with $M_z$ the detector-frame total mass. We can now write $(\phi_S(f),\theta_S(f),\psi_S(f))$ for the TianGO detector using Eq.~\eqref{eq:detector-coords}, \begin{align} \cos \theta_S(f) &= \frac{1}{2} \cos \bar\theta_S - \frac{\sqrt{3}}{2} \sin\bar\theta_S \cos\left( \bar\phi_t(f) - \bar\phi_S \right) \, , \\ \phi_S(f) &= \bar\phi_t(f) + \arctan \left[ \frac{\sqrt{3}\cos\bar\theta_S + \sin\bar\theta_S \cos \left( \bar\phi_t(f)-\bar\phi_S \right)}{2 \sin\bar\theta_S \sin\left( \bar\phi_t(f)-\bar\phi_S \right)} \right] \, . 
\end{align} The polarization angle $\psi_S$ of TianGO satisfies \begin{equation} \tan \psi_S(f) = \frac{\uvect L \cdot \uvect z - \left( \uvect L \cdot \uvect N \right)\left( \uvect z \cdot \uvect N \right)}{\uvect N \cdot \left( \uvect L \times \uvect z \right)} \end{equation} where \begin{align} \uvect N \cdot \uvect z &= \cos\theta_S(f) \, , \\ \uvect L \cdot \uvect z &= \frac{1}{2} \cos \bar\theta_L - \frac{\sqrt{3}}{2} \sin \bar\theta_L \cos \left[ \bar\phi_t(f) - \bar\phi_L \right] \, ,\\ \uvect L \cdot \uvect N &= \cos\bar\theta_L \cos\bar\theta_S + \sin\bar\theta_L \sin\bar\theta_S \cos \left( \bar\phi_L -\bar\phi_S \right) \, , \label{eq:LdN}\\ \uvect N \cdot \left( \uvect L \times \uvect z \right) &= \frac{1}{2} \sin \bar{\theta}_{L} \sin \bar{\theta}_{S} \sin \left(\bar{\phi}_{L}-\bar{\phi}_{S}\right) \nn \\ -\frac{\sqrt{3}}{2} &\cos \bar{\phi}_t(f)\left(\cos \bar{\theta}_{L} \sin \bar{\theta}_{S} \sin \bar{\phi}_{S}-\cos \bar{\theta}_{S} \sin \bar{\theta}_{L} \sin \bar{\phi}_{L}\right) \nn\\ -\frac{\sqrt{3}}{2} &\sin\bar{\phi}_t(f)\left(\cos \bar{\theta}_{S} \sin \bar{\theta}_{L} \cos \bar{\phi}_{L}-\cos \bar{\theta}_{L} \sin \bar{\theta}_{S} \cos \bar{\phi}_{S}\right)\, . \end{align} \section{Consistency of Statistical Method}\label{app:bayes} In the statistical method, we wish to break the $z - D_L$ degeneracy by combining a galaxy catalog with the gravitational wave observation. We use the method described in a variety of sources \cite{Chen:2017rfc,Gray:2019ksv}. 
If we wish to constrain the cosmological parameters $\vect H$ and have gravitational wave data $d_\text{GW}$, then by Bayes' theorem we have \begin{equation} p(\vect H | \dgw) \propto p(\vect H) p(\dgw | \vect H) \end{equation} where \begin{align}\label{eq:bayes_like} p(\dgw | \vect H) &= \frac{1}{\beta(\vect H)}\int p(\dgw, D_L, \phi_S,\theta_S, z| \vect H) d D_L d \phi_S d\theta_S dz \, , \\ &= \frac{1}{\beta(\vect H)} \int p(\dgw| D_L(z, \vect H), \phi_S, \theta_S) p_0(z,\phi_S,\theta_S) d \phi_S d\theta_S dz \, .\label{eq:bayes_like2} \end{align} The first term in the integral is approximated with a multivariate Gaussian distribution, \begin{equation} p(d_\text{GW}| D_L(z, \vect H), \phi_S, \theta_S) = N(D_L(z,\vect H) - \hat D_L,\sigma_{D_L}^2) N(\phi_S - \hat\phi_S,\sigma_{\phi_S}^2)N(\theta_S - \hat\theta_S,\sigma_{\theta_S}^2) \, , \end{equation} where $N(x - \mu,\sigma^2)$ is the probability density function of the normal distribution, $(\hat D_L, \hat \phi_S, \hat \theta_S)$ are the true event parameters, and $(\sigma_{D_L}, \sigma_{\phi_S}, \sigma_{\theta_S})$ are given by the Fisher matrix analysis in Eq. (10). The second term in the integral is the galaxy catalog, \begin{equation}\label{eq:catalogue-sum} p_0(z,\phi_S,\theta_S|\vect H) = \frac{1}{N_\text{gal}} \sum^{N_\text{gal}}_i N(z- z^i,\sigma_{z_i}^2)\delta(\phi_S- \phi_S^i)\delta(\theta_S - \theta_S^i) \, , \end{equation} where $\sigma_{z_i}$ is the redshift uncertainty due to the peculiar velocity. The variables $(z^i, \phi_S^i,\theta_S^i)$ are the mean redshift and angular location of the $i$th galaxy, while the variables without superscripts are the integration parameters. The angular uncertainty of each galaxy is negligible, so its distribution is replaced with a Dirac delta function $\delta(\phi_S- \phi_S^i)$, and similarly for $\theta_S$. 
Finally, the normalization $\beta(\vect H)$ is \begin{equation} \beta(\vect H) = \int_{d_{\rm GW}> d_{\rm GW}^{\rm th}} p(\dgw, D_L, \phi_S,\theta_S, z| \vect H) d D_L d \phi_S d\theta_S dz \,d \dgw \, , \end{equation} where \begin{equation} p(\dgw, D_L, \phi_S,\theta_S, z| \vect H) = p(\dgw| D_L(z, \vect H), \phi_S, \theta_S) p_0(z,\phi_S,\theta_S) \, , \end{equation} and where $d_{\rm GW}^{\rm th}$ is the detection threshold. Note that Eq.~\eqref{eq:bayes_like} reduces to the Fisher matrix confidence interval Eq.~\eqref{eq:fisher-cosmo} on $\vect H$ if only one galaxy has nonvanishing likelihood. This reduction can be derived by examining Eq.~\eqref{eq:bayes_like2} in the case that there is only one galaxy inside the localization volume, i.e., when all other galaxies in the sum in $p_0(z,\phi_S,\theta_S|\vect H)$ do not contribute to the integral in Eq.~\eqref{eq:bayes_like2}. Now, let us demonstrate the statistical method in 2D and examine its convergence as a function of the number of galaxies inside the localization region. We assume that $D_L = z/H_0$ and that the peculiar-velocity uncertainty is subdominant; that is, we treat the peculiar-velocity distribution as a very sharp Gaussian and absorb it into $\sigma_{D_L}$. Defining $h = H_0 / (H_0)_\text{true}$, the likelihood function is \begin{equation} \label{eq:single-plt} p(\dgw | h ) = \frac{1}{\beta(h)}\frac{1}{N_\text{gal}} \sum_{i} N(\hat D_L-D_L^i(h), \sigma_{D_L}^2) N(\hat \phi_S - \phi_S^i,\sigma_{\phi_S}^2)N(\hat \theta_S - \theta_S^i,\sigma_{\theta_S}^2) \, , \end{equation} where $D_L^i(h)=z^i/H_0=z^i/[h(H_0)_{\rm true}]$ and $\sigma_{z} = \sigma_{D_L} (H_0)_\text{true}$. In this 2D case, $\beta(h) \propto h^2$. If we need to stack events, we generalize Eq.~\eqref{eq:single-plt} to the product of the likelihood functions of the individual events\footnote{Technically, there is another factor $p(N | h)$ in front of the product which depends on the intrinsic astrophysical merger rate and the comoving volume surveyed. 
It is discussed after Eq.~(7) in Ref.~\cite{Gray:2019ksv}.}, \begin{equation} p(\left\{ \dgw \right\} | h ) = \prod_{\text{event } e}^N p( \left( \dgw \right)_e | h ) \, . \end{equation} If we assume a uniform prior on $h$, then $p(h | \left\{ \dgw \right\} ) \propto p(\left\{ \dgw \right\} | h )$. In Fig.~\ref{fig:LL-vs-ngal}, we plot the posterior on $h$ for 30 and 300 events, varying the angular resolution of the events for each curve, and we quote the median number of potential host galaxies per event. One can see that as the events become nearly perfectly localized $(n \rightarrow 0)$, the posterior on $h$ approaches the Fisher likelihood in Eq.~(\ref{eq:fisher-cosmo}). Because such an experiment is susceptible to systematics, we list the precise choices used to make the plot. Our distance resolution was $\Delta D_L / D_L = 0.15 z + 10^{-2}$, and our angular resolutions varied between $\Delta \phi_S = \frac{z}{1000} \deg$ and $\Delta \phi_S = 100 z \deg$. These scale linearly with redshift, following the SNR scaling of the parameter measurements, while the $10^{-2}$ floor is of the same order as the peculiar-velocity error (so that a few nearby events do not dominate). We uniformly placed $3\times 10^6$ galaxies throughout the disc in the $z\in[0,2)$ ``redshift window''. For each event, we randomly picked a host galaxy with $z\in [0,1)$. The particular redshift window can have a systematic effect on the statistical method \cite{Trott:2021fnx}, and we chose our galaxy disc to be much bigger than the redshift window to avoid artificial boundary effects. \bibliography{bibliography.bib}
Title: NuSTAR Observations of Intrinsically X-ray Weak Quasar Candidates: An Obscuration-Only Scenario
Abstract: We utilize recent NuSTAR observations (co-added depth $\approx55$-120 ks) of PG $1001+054$, PG $1254+047$, and PHL 1811 to constrain their hard X-ray ($\gtrsim5$ keV) weakness and spectral shapes, and thus to investigate the nature of their extreme X-ray weakness. These quasars showed very weak soft X-ray emission, and they were proposed to be intrinsically X-ray weak, with the X-ray coronae producing weak continuum emission relative to their optical/UV emission. However, the new observations suggest an alternative explanation. The NuSTAR 3-24 keV spectral shapes for PG $1001+054$ and PHL 1811 are likely flat (effective power-law photon indices $\Gamma_{\rm eff}=1.0^{+0.5}_{-0.6}$ and $\Gamma_{\rm eff}=1.4^{+0.8}_{-0.7}$, respectively), while the shape is nominal for PG $1254+047$ ($\Gamma_{\rm eff}=1.8\pm0.3$). PG $1001+054$ and PHL 1811 are significantly weak at hard X-ray energies (by factors of $\approx26$-74 at rest-frame 8 keV) compared to the expectations from their optical/UV emission, while PG $1254+047$ is only hard X-ray weak by a factor of $\approx3$. We suggest that X-ray obscuration is present in all three quasars. We propose that, as an alternative to the intrinsic X-ray weakness + X-ray obscuration scenario, the soft and hard X-ray weakness of these quasars can be uniformly explained under an obscuration-only scenario. This model provides adequate descriptions of the multi-epoch soft and hard X-ray data of these quasars, with variable column density and leaked fraction of the partial-covering absorber. We suggest that the absorber is the clumpy dust-free wind launched from the accretion disk. These quasars probably have super-Eddington accretion rates that drive powerful and high-density winds.
https://export.arxiv.org/pdf/2208.04961
\title{NuSTAR Observations of Intrinsically \hbox{X-ray} Weak Quasar Candidates: An Obscuration-Only Scenario} \author{Chaojun~Wang} \affiliation{School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China} \affiliation{Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China} \author{B.~Luo} \affiliation{School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China} \affiliation{Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China} \author{W.~N.~Brandt} \affiliation{Department of Astronomy \& Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802, USA} \affiliation{Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA} \affiliation{Department of Physics, 104 Davey Lab, The Pennsylvania State University, University Park, PA 16802, USA} \author{D.~M.~Alexander} \affiliation{Centre for Extragalactic Astronomy, Department of Physics, Durham University, Durham DH1 3LE, UK} \author{F.~E.~Bauer} \affiliation{Instituto de Astrof{\'{\i}}sica and Centro de Astroingenier{\'{\i}}a, Facultad de F{\'{i}}sica, Pontificia Universidad Cat{\'{o}}lica de Chile, Casilla 306, Santiago 22, Chile} \affiliation{Millennium Institute of Astrophysics, Nuncio Monse{\~{n}}or S{\'{o}}tero Sanz 100, Of 104, Providencia, Santiago, Chile} \affiliation{Space Science Institute, 4750 Walnut Street, Suite 205, Boulder, Colorado 80301, USA} \author{S.~C.~Gallagher} \affiliation{Department of Physics \& Astronomy and Institute for Earth and Space Exploration, The University of Western Ontario, London, ON, N6A 3K7, Canada} \author{Jian Huang} \affiliation{School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China} \affiliation{Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), 
Ministry of Education, Nanjing 210093, China} \author{Hezhen Liu} \affiliation{School of Astronomy and Space Science, Nanjing University, Nanjing, Jiangsu 210093, China} \affiliation{Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China} \author{D.~Stern} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, MS 169-224, Pasadena, CA 91109, USA} \keywords{accretion, accretion disks -- galaxies: active -- galaxies: nuclei -- quasars: absorption lines -- quasars: emission lines -- X-rays: galaxies} \section{INTRODUCTION} Active galactic nuclei (AGNs) generally produce luminous X-ray emission, which is believed to originate largely from the accretion-disk corona in the vicinity of the central supermassive black hole (SMBH) % via inverse-Compton scattering of the optical/UV seed photons from the accretion disk (e.g., \citealt{Turner2009,Done2010,Gilfanov2014,Fabian2017}). AGN X-ray continua typically have an intrinsic power-law shape ($N_E \propto E^{-\Gamma}$), and the mean value of the photon indices ($\Gamma$) for radio-quiet AGNs is around \hbox{1.9--2.0} with a scatter of $\approx 0.2$ \cite[e.g.,][]{Reeves1997,Just2007,Scott2011}. Observations of the X-ray and UV emission from large samples of radio-quiet AGNs have revealed that the \hbox{X-ray} flux is closely correlated with the optical/UV flux, indicating a strong physical connection between the accretion disk and the X-ray corona. 
The correlation is typically expressed as a negative relation between the X-ray-to-optical \hbox{power-law} slope parameter ($\alpha_{\rm OX}$)\footnote{$\alpha_{\rm OX}$ is defined as $\alpha_{\rm OX}$ = 0.3838log($f_{\rm 2keV}$/$f_{\rm 2500~{\textup{\AA}}}$) \citep{Tananbaum1979}, where $f_{\rm 2keV}$ and $f_{\rm 2500~{\textup{\AA}}}$ are the \hbox{rest-frame} 2~keV and 2500~\AA\ flux densities.} and the ${2500~{\textup{\AA}}}$ monochromatic luminosity ($L_{\rm 2500~{\textup{\AA}}}$), and it is highly significant across a broad population of AGNs, ranging from moderate-luminosity AGNs to the most luminous quasars (e.g., \citealt{Strateva2005,steffen2006,Just2007,Lusso2017,Liu2021}). The observed X-ray emission from AGNs may be modified by line-of-sight obscuration, resulting in lower observed \hbox{X-ray} fluxes than those expected from the $\alpha_{\rm OX}\textrm{--}L_{2500~\textup{\AA}}$ relation. A common approach to parameterize the amount of \xray\ weakness uses the $\Delta\alpha_{\rm OX}$ parameter, defined as the difference between the observed $\alpha_{\rm OX}$ value and the $\alpha_{\rm OX}$ value expected from the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation; $\Delta\alpha_{\rm OX}=-0.3838$ thus corresponds to a factor of X-ray weakness of 10 at rest-frame 2 keV. Type 2 AGNs are generally \xray\ obscured, likely due to the dusty ``torus'' \citep[e.g.,][]{Netzer2015,Hickox2018}. Type 1 AGNs may also have X-ray obscuration from largely dust-free gas.\footnote{Similar obscuration from dust-free gas might also be present in some of the type 2 AGNs, though usually not distinguishable from the torus obscuration.} For example, broad absorption line (BAL) quasars, which are characterized by blueshifted broad UV absorption lines (e.g., \iona{C}{iv} $\lambda1549$), generally show weak X-ray emission \cite[e.g.,][]{Gallagher2002,Gallagher2006,Fan2009,Gibson2009}. 
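For reference, the two definitions above translate directly into a short numerical helper. This is only a sketch: the function names are ours, and the flux densities are assumed to be in mutually consistent units.

```python
import math

def alpha_ox(f_2kev, f_2500):
    """X-ray-to-optical power-law slope alpha_OX (Tananbaum et al. 1979):
    0.3838 * log10(f_2keV / f_2500A), with both rest-frame flux densities
    in the same units."""
    return 0.3838 * math.log10(f_2kev / f_2500)

def weakness_factor(delta_alpha_ox):
    """Factor of X-ray weakness at rest-frame 2 keV implied by
    Delta-alpha_OX = alpha_OX(observed) - alpha_OX(expected);
    Delta-alpha_OX = -0.3838 corresponds to a factor of 10."""
    return 10.0 ** (-delta_alpha_ox / 0.3838)
```

For example, `weakness_factor(-0.58)` returns $\approx32$, consistent with the values quoted in Section~3 below.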
One frequently adopted physical model for BAL quasars is the \hbox{disk-wind} model, where the observed BALs originate from an outflowing equatorial wind launched from the accretion disk and radiatively driven by UV-line pressure \citep[e.g.,][]{Murray1995,Proga2000,Matthews2016}. This model usually invokes ``shielding'' gas between the wind and nucleus or a clumpy wind \citep[e.g.,][]{Baskin2014,Matthews2016,Giustini2019} to provide obscuration of the nuclear extreme UV (EUV) and \xray\ radiation which might otherwise overionize the wind and hamper radiative acceleration. BAL quasars are considered generally to have larger inclination angles than non-BAL quasars, with the line of sight to the UV continuum region of the accretion disk intersecting the wind, leading to the observed BALs. The line of sight to the X-ray emitting corona, though not necessarily the same as the UV line of sight, is likely also through the shielding gas or the clumpy wind, resulting in the often observed X-ray weakness (e.g., Figure~1 of \citealt{Luo2013}). Besides BAL quasars, a small fraction ($5.8\pm0.7\%$) of non-BAL type~1 quasars have been found to be X-ray weak, likely due to absorption \citep[e.g.,][]{Pu2020}. They may share a similar nature to the BAL quasars; they do not show any UV BALs probably due to geometric effects (e.g., small inclination angles) or a low velocity of the wind along the UV line of sight \citep[e.g.,][]{Giustini2019}. A small subset of AGNs has been proposed to be intrinsically \hbox{X-ray} weak, producing much less X-ray emission than expected from the \hbox{$\alpha_{\rm OX}$--$L_{\rm 2500{\textup{\AA}}}$} relation. These candidates are observed to be significantly X-ray weak with no clear evidence of \xray\ obscuration. X-ray weakness caused by X-ray obscuration is usually identified from an X-ray spectral shape that is flatter than the intrinsic $\Gamma\approx2$ power law, as soft \hbox{X-ray} photons are more heavily absorbed than hard X-ray photons. 
The effective power-law photon index ($\Gamma_{\rm eff}$) of an obscured spectrum should be smaller than 2; in case of heavy absorption, the 0.5--5 keV $\Gamma_{\rm eff}$ can even reach a negative value. However, due to the frequent appearance of partial-covering absorption in AGN X-ray spectra \citep[e.g.,][]{Immler2003,Ricci2017,Leighly2019} with a small fraction of the intrinsic coronal emission leaking through the absorber, a soft X-ray \hbox{($\lesssim5$~keV)} spectrum alone might be insufficient for identifying heavy ($N_{\rm H}\gtrsim5\times10^{23}$~cm$^{-2}$) or Compton-thick ($N_{\rm H}>1.5\times10^{24}$~cm$^{-2}$) \xray\ obscuration, as the X-ray emission could be extremely weak (e.g., $\Delta\alpha_{\rm OX}<-0.3838$ or X-ray weakness factor of $>10$) yet the spectral shape is nominal with $\Gamma_{\rm eff}\approx2$, dominated by the leaked component (e.g., see Section 4.2 below for illustration). On the other hand, the hard \xray\ \hbox{($\gtrsim5$~keV)} spectrum in this case should generally still be flat with $\Gamma_{\rm eff}\approx0\textrm{--}1$, as the Compton-reflection component, from either the absorber or other reflectors (e.g., disk or torus), is expected to dominate \citep[e.g.,][]{George1991,Comastri2011,Gandhi2014,Rovilos2014}. It is based on these arguments that a small number of quasars have been proposed to be intrinsically \xray\ weak; they are significantly X-ray weak yet with nominal ($\Gamma_{\rm eff}\approx2$) hard X-ray spectral shapes. The low-redshift ($z\lesssim1$) candidates include a few BAL quasars that have \nustar\ observations \citep{Luo2013,Luo2014}, and the high-redshift (\hbox{$z\approx1.5$--3}) candidates include a few Large Bright Quasar Survey (LBQS) BAL quasars with \chandra\ observations \citep{Liu2018} and a few luminous quasars with \xmm\ observations \citep{Nardini2019}. 
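The partial-covering argument above can be illustrated with a deliberately crude numerical model (our construction, not any fit from this paper): a $\Gamma=2$ power law, a fraction $f_{\rm leak}$ of which escapes unabsorbed while the rest passes through a column $N_{\rm H}$, with photoelectric absorption approximated by an $E^{-3}$ cross-section and with Compton scattering and reflection ignored. Real analyses would instead use XSPEC partial-covering models such as \texttt{zpcfabs}.

```python
import numpy as np

def partial_covering(E_keV, gamma=2.0, NH=5e24, f_leak=0.02):
    """Schematic partially covered power law N_E.

    A fraction f_leak of the intrinsic continuum leaks through
    unabsorbed; the rest is attenuated by a column NH (cm^-2) with a
    rough E^-3 photoelectric cross-section (normalized so that
    tau = 1000 at 1 keV for NH = 5e24 cm^-2).  Compton scattering and
    reflection are ignored, so this is illustrative only."""
    E = np.asarray(E_keV, dtype=float)
    sigma = 2e-22 * E ** -3.0          # cm^2, crude normalization
    intrinsic = E ** -gamma
    return intrinsic * (f_leak + (1.0 - f_leak) * np.exp(-NH * sigma))

def gamma_eff(E1, E2, **kwargs):
    """Effective photon index between two energies, from the flux ratio."""
    r = partial_covering(E2, **kwargs) / partial_covering(E1, **kwargs)
    return -np.log(r) / np.log(E2 / E1)
```

With these default values the 1--5~keV slope comes out nominal ($\Gamma_{\rm eff}\approx2.0$, set entirely by the leaked component) while the 8--24~keV slope is flat ($\Gamma_{\rm eff}\approx0.4$), qualitatively reproducing the behavior described above for a heavily obscured, partially covered source.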
Given the significant X-ray weakness, the \xray\ spectra of these candidates have very limited photon statistics, and thus the spectral shapes are poorly constrained and sometimes even rely upon stacking analysis to obtain information on their average spectral properties \citep[e.g.,][]{Luo2014}. One quasar that is considered a prototypical example of intrinsically X-ray weak quasars is PHL~1811, a non-BAL quasar at $z = 0.192$ \citep{Leighly2007}. It is \hbox{X-ray} weak by a factor of \hbox{$\approx40$--130} with a steep 0.3--5~keV \xmm\ spectrum ($\Gamma\approx2.3$). There are no published hard X-ray constraints, but its fast \hbox{X-ray} variability (flux varying by a factor of $\approx4$ in 12 days) argues against a heavily obscured spectrum dominated by distant reflection/scattering \citep{Leighly2007-2,Leighly2007}. PHL~1811 has distinctive UV emission-line properties, including a small \iona{C}{iv} rest-frame equivalent width (REW), a large \iona{C}{iv} blueshift, and strong UV \iona{Fe}{ii} and \iona{Fe}{iii} emission. \citet{Luo2015} conducted a systematic survey of the X-ray properties of PHL~1811 analogs, a sample of high-redshift quasars selected with UV emission-line properties similar to those of PHL~1811. This sample of 18 PHL~1811 analogs turned out to be almost exclusively (17/18; 94\%) X-ray weak, by an average X-ray weakness factor of 39. However, unlike PHL~1811 itself, many of these PHL~1811 analogs appear to be X-ray obscured, as the sample stacked effective photon index ($\Gamma_{\rm eff}=1.16_{-0.32}^{+0.37}$) indicates a flat spectral shape on average. \citet{Luo2015} speculated that Occam's razor would favor a uniform explanation for the \hbox{X-ray} weakness of all these objects and that perhaps hard X-ray data of PHL~1811 would reveal a highly obscured component. 
In this paper, we present improved hard X-ray constraints from deeper \nustar\ observations of two low-redshift intrinsically \hbox{X-ray} weak quasar candidates in \citet{Luo2014}, PG~$1001+054$ (hereafter PG~1001) and PG~$1254+047$ (hereafter PG~1254). We also provide for the first time hard \hbox{X-ray} constraints for PHL~1811 using archival \nustar\ and \xmm\ observations. Based on the results, we suggest that X-ray obscuration is present in all three quasars, and we propose that intrinsic X-ray weakness is not required to explain the X-ray weakness of these objects. The \nustar, \chandra, and \xmm\ data of these quasars can be uniformly explained with an obscuration scenario where the partial-covering absorber has variable column density and leaked fraction (partial-covering fraction). The paper is organized as follows. We describe the soft and hard \hbox{X-ray} observations and data analyses in Section~2. We present \xray\ and multiwavelength properties of the three quasars in Section~3. In Section~4, we propose that these quasars are likely X-ray obscured and describe how an obscuration scenario may explain their soft and hard \hbox{X-ray} weakness, without invoking intrinsic \hbox{X-ray} weakness. The absorber is likely the clumpy dust-free wind launched from the accretion disk. We summarize in Section~5. In the Appendix, we present new hard \xray\ constraints from a recent \xmm\ observation of LBQS~$1442-0011$, a high-redshift intrinsically \hbox{X-ray} weak quasar candidate in \citet{Liu2018}; the results are consistent with our proposed obscuration scenario. Throughout this paper, we use a cosmology with $H_0=67.4$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\rm M}=0.315$, and $\Omega_{\Lambda}=0.685$ \citep{Planck2020}. Uncertainties are quoted at a 1$\sigma$ confidence level, and limits are at a $90\%$ confidence level. 
The energy ranges used in our photometric and spectroscopic analyses are 0.3--8~keV for \chandra\ observations, 0.3--10~keV for \xmm\ observations, and 3--24~keV for \nustar\ observations, unless otherwise specified. Due to the X-ray weakness of the three quasars, all X-ray spectra were grouped with at least one count per bin, and the W statistic\footnote{\url{https://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/manual/XSappendixStatistics.html} \label{wstat}} of XSPEC (v12.10.1; \citealt{Arnaud1996}) was used in spectral fitting. Galactic absorption \citep{HI4PI2016} was included in the \xray\ spectral modeling. \section{Object Properties, X-ray Observations, and X-ray Data Analyses} \begin{deluxetable*}{lcccccccrrr} \tablecaption{Basic Object Properties and List of Observations} \tablehead{ \colhead{Object } & \colhead{$z$} & \colhead{$m_{B}$} & \colhead{$\log M_{\rm BH}$} & \colhead{$L/L_{\rm Edd}$} & \colhead{$\log L_{\rm 2500~{\textup{\AA}}}$} & \colhead{$N_{\rm H, Gal}$}& \colhead{Observatory} & \colhead{Observation} & \colhead{Obs. 
Date} & \colhead{Exp} \\ \colhead{Name} & \colhead{} & \colhead{} & \colhead{($M_{\Sun}$)} & \colhead{} & \colhead{(erg s$^{-1}$ Hz$^{-1}$)} & \colhead{($10^{20}$~cm$^{-2}$)} & \colhead{} & \colhead{ID} & \colhead{} & \colhead{(ks)}\\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} & \colhead{(11)} } \startdata PG~1001+054 &0.161 &16.1 &7.74 &0.5 & 29.9& 2.38&\xmm &0150610101 &2003 May 4 &8.6 \\ & & & & & & &\chandra &11852 & 2010 Jan 11 &4.9 \\ & & & & & & &\nustar &60001122002 & 2013 Jun 28 &19.6 \\ & & & & & & &\nustar &60501014001& 2020 May 23 &101.1\\ PG~1254+047 &1.026 &15.8 &9.68 &0.3 &31.5&2.03 &\chandra &832 &2000 May 29 &36.0 \\ & & & & & & &\nustar &60001123002 & 2013 Jun 27 &29.4 \\ & & & & & & &\nustar &60401013002 & 2019 Jun 8 &88.0 \\ PHL~1811 &0.192 &13.9 &8.25 &1.6 &30.9&4.22 &\chandra &2975 & 2001 Dec 5 &9.3 \\ & & & & & & &\chandra & 2985 & 2001 Dec 17 &9.8 \\ & & & & & & &\xmm &0204310101 & 2004 Nov 1 &23.5 \\ & & & & & & &\xmm &0761910201 & 2015 Nov 29 &39.9 \\ & & & & & & &\nustar &60101004002 & 2015 Nov 28 &54.7 \enddata \tablecomments{ Cols. (1) and (2): object name and redshift. Col. (3): $B$--band magnitude. Cols. (4) and (5): single-epoch virial BH mass and Eddington ratio from \citealt{shen2011} (for PG~1001 and PG~1254) or \citealt{Leighly2007} (for PHL~1811). Col. (6): 2500~{\textup{\AA}} monochromatic luminosity from \citealt{shen2011} (for PG~1001 and PG~1254) or \citealt{Leighly2007} (for PHL~1811). Col. (7): Galactic neutral hydrogen column density \citep{HI4PI2016}. Cols. (8) and (9): observatory and observation ID. Col. (10): observation start date. Col. (11): cleaned exposure time; for \nustar\ observations, it is the average value of the FPMA and FPMB exposure times. } \end{deluxetable*} \subsection{Basic Object Properties} The basic properties of the three quasars are listed in Table~1. 
PG~1001 is at $z = 0.161$ with a $B$-band magnitude of 16.1. Its H$\beta$-based single-epoch virial SMBH mass ($M_{\rm BH}$) is $\approx5.5\times10^{7} M_{\Sun}$, and the estimated Eddington ratio is $\approx0.5$ \citep{shen2011}. The full width at half maximum (FWHM) of its H$\beta$ emission line is 1740~km~s$^{-1}$ and thus it was classified as a narrow-line type 1 quasar (NLQ1; \citealt{Wills2000}), which refers to type 1 quasars with narrow (FWHM$<2000$~km~s$^{-1}$) H$\beta$ emission lines (e.g., Footnote 1 of \citealt{Wills2000}). It also shows weaker [\iona{O}{iii}]~$\lambda5007$ emission compared to typical quasars (REW smaller by a factor of $\approx2$; \citealt{Vandenberk2001,shen2011}). PG~1254 is a luminous quasar at $z = 1.026$ with a $B$-band magnitude of 15.8. The \iona{Mg}{ii}-based single-epoch virial SMBH mass is $\approx4.8\times10^{9} M_{\Sun}$, and the estimated Eddington ratio is $\approx0.3$ \citep{shen2011}. There are no H$\beta$ or [\iona{O}{iii}] measurements available in the literature. PHL~1811, at $z = 0.192$, is very bright with a $B$-band magnitude of 13.9. Its H$\beta$-based single-epoch virial SMBH mass is $\approx1.8\times10^{8} M_{\Sun}$, and its estimated Eddington ratio is $\approx1.6$ \citep{Leighly2007}. We caution that the virial SMBH mass and the estimated Eddington ratio are subject to large uncertainties, especially in the super-Eddington regime, as the virialization assumption may no longer be valid when the broad emission-line region (BELR) is likely exposed to large and anisotropic radiation pressure \citep[e.g.,][]{Marconi2008,Marconi2009, Netzer2010}. PHL~1811 was classified as a NLQ1 with no apparent [\iona{O}{iii}]~$\lambda5007$ emission \citep{Leighly2007-2}. \subsection{\nustar\ Observations and Data Analysis} PG~1001 and PG~1254 were observed by \nustar\ in 2013 with cleaned exposure times of 19.6~ks and 29.4~ks, respectively. 
They were not detected in the 8--24 keV band individually, but they were considered good candidates for intrinsically X-ray weak quasars from X-ray stacking analysis \citep{Luo2014}. We obtained much deeper \nustar\ observations of PG~1254 in 2019 and PG~1001 in 2020 with exposure times of $\approx 100$~ks {(PI: W.~N.~Brandt)}, with the aim of detecting them individually in the 8--24 keV band and providing improved spectral-shape constraints. PHL~1811 has an archival \nustar\ observation in 2015 with a cleaned exposure time of 54.7~ks, simultaneous to a 39.9~ks \xmm\ observation (PI: K.~Leighly). The details of the \nustar\ observations are listed in Table~1. For \nustar\ data reduction, we used HEASoft (v6.29) and the \nustar\ Data Analysis Software (NuSTARDAS; v2.1.1) with \nustar\ CALDB 20210728. We used the {\sc nupipeline} script to generate calibrated clean event files from the unfiltered event files of the two focal plane module detectors (FPMA and FPMB). For the 2019 \nustar\ observation of PG~1254, background event rates were slightly elevated around the South Atlantic Anomaly (SAA), and we followed the recommendations from the \nustar\ instrument team and applied an additional SAA filter ({\sc saamode} = {\sc optimized}, {\sc tentable} = {\sc yes}, and {\sc saacalc} = 1) during the {\sc nupipeline} processing. We created \nustar\ images using the Chandra Interactive Analysis of Observation (CIAO; v4.13)\footnote{http://cxc.harvard.edu/ciao/.} tool {\sc dmcopy} in three energy bands: \hbox{3--24~keV} (full band), 3--8~keV (soft band), and 8--24~keV (hard band). For each \nustar\ observation, we co-added the FPMA and FPMB images to improve sensitivity, which helps in detecting and characterizing faint \nustar\ sources \citep[e.g.,][]{Lansbury2017}. The co-added images were created by combining the FPMA and FPMB images in each of the three bands using the HEAsoft tool {\sc ximage}. These images were then used for source detection and aperture photometry. 
For each co-added image, we searched for \hbox{X-ray} sources using the CIAO tool {\sc wavdetect} \citep{Freeman2002} with a false-positive probability threshold of $10^{-5}$ and wavelet scales of 2, 2.83, 4, 5.66, 8, 11.31, and 16 pixels \citep[e.g.,][]{Luo2014}; the \nustar\ pixel size is $2.46\arcsec$. In the 2013 observations, PG~1001 and PG~1254 were detected in the soft and full bands but not in the hard band. They were detected in all three bands in the latest deeper observations. PHL~1811 was also detected in all three bands. In the following analysis, we adopted the full-band {\sc wavdetect} positions as the X-ray positions. The offsets between the X-ray and optical positions range from $3.4\arcsec$ to $9.4\arcsec$, which are typical for faint \nustar\ sources \citep[e.g.,][]{Lansbury2017}. We extracted source and background spectra using the {\sc nupipeline} script. We used $35\arcsec$-radius circular source regions centered on the X-ray positions, and annular background regions centered on the X-ray positions with inner radii of $120\arcsec$ and outer radii of $180\arcsec$. We verified that there are no sources in the background regions. For each observation, we merged the FPMA and FPMB source spectra, background spectra, and response files using the HEASoft tool {\sc addspec}.
We created 0.5--8~keV images from the cleaned event files using {\sc dmcopy}. We searched for \hbox{X-ray} sources in the \hbox{X-ray} images using {\sc wavdetect} with a false-positive probability threshold of $10^{-6}$ and wavelet scales of 1, 1.414, 2, 2.828, 4, 5.656, and 8 pixels. The three quasars were all detected within $0.22\arcsec$--$0.42\arcsec$ of the optical positions. For each observation, the source spectrum was extracted using the {\sc specextract} tool from a circular region centered on the \hbox{X-ray} position with a radius of $2\arcsec$. The background spectrum was extracted from an annular region centered on the \hbox{X-ray} position with an inner radius of 6\arcsec\ and an outer radius of 10\arcsec; we verified that the background regions do not contain any X-ray sources. For the \xmm\ observations, we used only the data from the pn camera.\footnote{We have checked the MOS data, which have lower photon statistics, especially in the 5--10~keV band that is of interest to this study; combining the pn spectra with the low signal-to-noise ratio MOS spectra might introduce additional systematic uncertainties.} We used the Science Analysis System (SAS; v1.2) to process the data, following the standard procedure described in the SAS Data Analysis Threads.\footnote{http://www.cosmos.esa.int/web/xmm-newton/sas-threads.} We used the {\sc epproc} tool to produce calibrated event files. A threshold of 0.4~cts~s$^{-1}$ was adopted to filter background flares. We created good-time-interval files using the {\sc tabgtigen} script, and we generated cleaned event files using the {\sc evselect} tool. For each observation, we used the {\sc evselect} tool to extract source and background spectra, with a $30\arcsec$-radius circular source region centered on the optical position of the quasar and an $80\arcsec$-radius circular, source-free background region on the same CCD chip as the source region.
For the 2015 \xmm\ observation of PHL~1811, we also used the Optical Monitor (OM) photometric data for constructing its SED.\footnote{The 2004 OM observation of PHL~1811 used only one filter and its photometric measurement does not suggest any variability compared to the 2015 OM results.} We generated nine exposures for the five filters (UVW2, UVM2, UVW1, U, B) using the {\sc omichain} script, and the photometric measurements of every exposure were recorded in the SWSRLI files. We extracted the magnitude measurements from these files and adopted the mean magnitude for each filter. The coincidence loss corrections (due to the high count rates) were applied during the pipeline processing. \section{\hbox{X-ray} and Multiwavelength Properties} \subsection{Soft \hbox{X-ray} Weakness and Obscuration Signatures from Archival X-ray Observations} \subsubsection{PG~1001} PG~1001 was observed by \xmm\ and \chandra\ in 2003 and 2010 with exposure times of 8.6~ks and 4.9~ks, respectively, and the results were presented in \citet{Schartel2005} and \citet{Saez2012}. As described in Section 2.3, we reduced these data and extracted the corresponding \xray\ spectra. We performed simple power-law spectral fitting of the 2003 \xmm\ spectrum in the 1--10~keV band and the 2010 \chandra\ spectrum in the \hbox{1--8~keV} band; a lower energy bound of 1~keV was adopted here because there is apparent soft X-ray excess emission in the 0.3--1~keV \xmm\ spectrum that is probably related to ionized absorption \citep{Schartel2005}. The resulting power-law photon indices are $0.8\pm0.2$ and $1.3^{+1.1}_{-1.0}$, indicative of \xray\ obscuration. From the best-fit results, we derived two $\alpha_{\rm OX}$ values for PG~1001, and they are shown in the $\alpha_{\rm OX}$ versus $L_{\rm 2500~{\textup{\AA}}}$ plane in Figure~1; the $L_{\rm 2500~{\textup{\AA}}}$ measurement is from \citet{shen2011}. Besides the soft X-ray weakness, there is also X-ray flux variability between these two observations. 
We then computed the $\Delta\alpha_{\rm OX}$ parameters, which are $-0.58\pm0.15$ for the \xmm\ observation and $-0.75\pm0.16$ for the \chandra\ observation, corresponding to X-ray weakness factors of $32^{+47}_{-19}$ and $88^{+144}_{-55}$ at rest-frame 2~keV, respectively. The $\Delta\alpha_{\rm OX}$ uncertainties were dominated by the $\alpha_{\rm OX}$ rms scatter ($\approx0.15$; Table~5 of \citealt{steffen2006}) of the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation. We note that there does not appear to be any strong long-term UV/optical variability of the three quasars in our study (maximum flux variability amplitudes $\approx16\%$--80\%; see Section~3.3 below), and thus the large X-ray weakness factors derived using non-simultaneous \xray\ and UV/optical data are not significantly affected by any UV/optical variability. \subsubsection{PG~1254} PG~1254 has a 36.0~ks \chandra\ observation in 2000 that was studied in \citet{Sabra2001}. We performed simple power-law spectral fitting of this \chandra\ spectrum, and the resulting photon index is $0.6\pm0.3$, indicative of X-ray obscuration. Its location in the $\alpha_{\rm OX}$ versus $L_{\rm 2500~{\textup{\AA}}}$ plane is shown in Figure~1; the $L_{\rm 2500~{\textup{\AA}}}$ measurement is from \citet{shen2011}. The corresponding $\Delta\alpha_{\rm OX}$ value is $-0.74\pm0.16$, indicating an X-ray weakness factor of $87^{+139}_{-53}$ at \hbox{rest-frame} 2~keV. \subsubsection{PHL~1811} PHL 1811 has two \chandra\ observations and two \xmm\ observations, as listed in Table~1; the first three were presented in \citet{Leighly2007}. From simple power-law spectral fitting of the two \chandra\ and two \xmm\ spectra, we obtained steep spectral shapes with photon indices in the range of $\approx$2.0--2.6, consistent with those in \citet{Leighly2007}. 
The two $\alpha_{\rm OX}$ values from the 2001 December 5 and December 17 \chandra\ observations are shown in Figure 1, using the $L_{\rm 2500~{\textup{\AA}}}$ value adopted from \citet{Leighly2007}. The $\Delta\alpha_{\rm OX}$ values are $-0.78\pm0.15$ for the December 5 observation and $-0.58\pm0.15$ for the December 17 observation, corresponding to X-ray weakness factors of $108^{+160}_{-64}$ and $33^{+48}_{-20}$ at rest-frame 2~keV, respectively. We do not show the $\alpha_{\rm OX}$ values from the two \xmm\ observations in Figure~1, as they are very close to the 2001 December 5 \chandra\ value (e.g., $\Delta\alpha_{\rm OX}=-0.81$ for the 2015 \xmm\ observation). In summary, previous soft X-ray observations of the three quasars have revealed significant soft X-ray weakness, with the rest-frame 2~keV fluxes $\approx32$--108 times weaker compared to the expectations from their optical/UV emission. The soft X-ray spectra of PG~1001 and PG~1254 have flat spectral shapes, indicative of X-ray obscuration, while the soft X-ray spectra of PHL~1811 do not show evidence for X-ray obscuration. \subsection{Hard \hbox{X-ray} Weakness and Spectral Shape Constraints from \nustar} For each \nustar\ observation, we performed aperture photometry using the co-added FPMA $+$ FPMB images in the three bands (soft, hard, and full). In each image, we extracted source counts ($S$) and background counts ($B$) from the same source and background regions used in the spectral extraction in Section 2.2. The encircled-energy fraction of the source region is $63.9\%$ according to the \nustar\ point-spread function. We determined the source significance by calculating the binomial no-source probability ($P_{\rm B}$; e.g., \citealt{Luo2013}), which is defined as \begin{equation} P_{\rm B}(X\ge S)=\sum_{X=S}^{N}\frac{N!}{X!(N-X)!}p^X(1-p)^{N-X}~. 
\end{equation} In this expression, $N=S+B$ and $p = 1/(1+BACKSCAL)$, where $BACKSCAL$ is the ratio between the exposure-time weighted areas of the background and source regions. A smaller $P_{\rm B}$ value indicates a more significant signal. We considered a source detected in a given band if the measured $P_{\rm B}$ value is smaller than 0.01 (corresponding to a $>2.6\sigma$ significance level). With this criterion, PG~1001 and PG~1254 were not detected in the hard band in their 2013 observations, and the three quasars were detected in all the other images; these results are consistent with the {\sc wavdetect} results in Section 2.2. In the hard band, the $P_{\rm B}$ values for PG 1001 and PG 1254 in their latest observations are $5.4\times10^{-7}$ ($5.0\sigma$) and $3.8\times10^{-6}$ ($4.6\sigma$), respectively, and $P_{\rm B}$ is $1.1\times10^{-3}$ ($3.3\sigma$) for PHL~1811; these values indicate significant detections in the hard band. The \nustar\ hard-band images of the three quasars are displayed in Figure~2. For the detected sources, we computed their \hbox{aperture-corrected} net counts $(S-B/BACKSCAL)/0.639$. The associated errors were derived from the $1\sigma$ Poisson errors of the extracted source and background counts \citep{Gehrels1986}. Compared to PG~1001 and PG~1254, PHL~1811 has larger relative count errors in the hard band, consistent with its larger hard-band $P_{\rm B}$ value (lower detection significance). For undetected sources, we calculated $90\%$ confidence-level upper limits on the source counts following the Bayesian approach of \citet{Kraft1991}. The net counts and upper limits in the three bands are listed in Table~2. For each quasar in each observation, we derived an effective power-law photon index ($\Gamma_{\rm eff}$) from the band ratio, which is the ratio between the hard-band (8--24~keV) and soft-band (3--8~keV) counts, based on the following procedure.
(1) For a given set of $\Gamma$ values, we produced a set of mock power-law spectra using the XSPEC {\sc fakeit} routine and the spectral response files. (2) For each mock spectrum, we computed the corresponding band ratio. (3) We interpolated the $\Gamma$ versus band ratio set to derive the $\Gamma_{\rm eff}$ value from the measured band ratio. The $\Gamma_{\rm eff}$ values are listed in Table~2. The $1\sigma$ errors on $\Gamma_{\rm eff}$ were propagated from the errors of the band ratios derived using {\sc behr} \citep{Park2006}. If the quasar was not detected in the hard band, we computed the lower limit on $\Gamma_{\rm eff}$ from the upper limit on the band ratio calculated using {\sc behr}. In this case, $\Gamma_{\rm eff} = 2.0$ was adopted in the following calculations of fluxes, flux densities, and luminosities; we consider this more appropriate than using the lower limit value, and adopting a value different from $2.0$ would not affect the results significantly. To compute fluxes, we obtained conversion factors from count rates to fluxes in the three bands using the mock spectrum with a photon index of $\Gamma_{\rm eff}$. Flux errors were propagated from the count errors, and flux upper limits were derived from the upper limits on the net counts. The luminosity in the rest-frame 2--10~keV band ($L_{\rm X}$) in each observation was derived from the full-band flux adopting a power-law spectrum with a photon index of $\Gamma_{\rm eff}$. The X-ray fluxes and luminosities are listed in Table~2. For PG~1001 and PG~1254 that have two \nustar\ observations, the photometric properties from the two observations are overall consistent within the errors, except for the $\Gamma_{\rm eff}$ constraints of PG~1001 which suggest possible spectral-shape evolution. We stacked their photometric measurements to derive average properties; the results are also listed in Table~2. 
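The photometry steps above (the binomial no-source probability of Equation 1, the aperture-corrected net counts, and the band-ratio interpolation for $\Gamma_{\rm eff}$) can be sketched as follows; the $\Gamma$-versus-band-ratio grid here is a hypothetical stand-in for the values produced with {\sc fakeit} and the real response files:

```python
import math

def p_b(S, B, backscal):
    """Binomial no-source probability P_B(X >= S) for S source-region counts,
    B background-region counts, and area ratio BACKSCAL (Eq. 1)."""
    N = S + B
    p = 1.0 / (1.0 + backscal)
    return sum(math.comb(N, X) * p**X * (1.0 - p)**(N - X)
               for X in range(S, N + 1))

def net_counts(S, B, backscal, eef=0.639):
    """Aperture-corrected net counts, (S - B/BACKSCAL)/0.639."""
    return (S - B / backscal) / eef

def gamma_eff(band_ratio, grid):
    """Linearly interpolate Gamma_eff from a (band ratio, Gamma) grid sorted
    by increasing band ratio; flatter (harder) spectra give larger ratios."""
    for (r0, g0), (r1, g1) in zip(grid, grid[1:]):
        if r0 <= band_ratio <= r1:
            return g0 + (band_ratio - r0) / (r1 - r0) * (g1 - g0)
    raise ValueError("band ratio outside tabulated grid")

# Toy example with equal source and background region areas (BACKSCAL = 1):
print(p_b(15, 2, 1.0))         # ~1.2e-3, below the 0.01 threshold
print(net_counts(15, 2, 1.0))  # ~20.3 aperture-corrected counts
# Hypothetical grid; real values come from fakeit + response files:
print(gamma_eff(0.35, [(0.2, 2.5), (0.5, 1.8), (1.0, 1.0)]))  # ~2.15
```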
We calculated the factor of \hbox{X-ray} weakness at rest-frame 8~keV ($f_{\rm w}$), which is defined as the ratio between the expected and observed 8~keV flux density ($f_{\rm w}=f_{\rm \nu,~expected}/f_{\rm \nu,~observed}$). The observed 8~keV flux density was computed from the full-band flux for a power-law spectrum with a photon index of $\Gamma_{\rm eff}$, and the expected 8~keV flux density was calculated from the \citet{steffen2006} $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation adopting a $\Gamma=2$ power-law spectrum. The $f_{\rm w}$ uncertainties were propagated from the $\alpha_{\rm OX}$ rms scatter of the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation. We caution that $f_{\rm w}$ differs from the factor of X-ray weakness quantified by the $\Delta\alpha_{\rm OX}$ parameter (Section 1), which is for rest-frame 2~keV. Compared to the previous hard-band non-detections, the deeper \nustar\ observations of PG~1001 and PG~1254 improve the source-detection significance and provide better constraints on the hard X-ray weakness factors and hard \xray\ spectral shapes. The \nustar\ observation of PHL~1811 provides the first hard X-ray constraints for this extreme quasar. PG~1001 and PHL~1811 are significantly hard X-ray weak ($f_{\rm w}\approx26$--74), while PG~1254 is \xray\ weak by a factor of only $\approx2.7$ in the hard X-rays despite its significant soft X-ray weakness (Figure~1). Although the 2013 observation of PG~1001 suggests a potentially typical hard X-ray spectral shape ($\Gamma_{\rm eff}>1.5$), the 2020 observation and the $2013+2020$ stacked data suggest flat spectral shapes ($\Gamma_{\rm eff}\approx0.4$--1.0). Both observations of PG~1254 suggest nominal spectral shapes with $\Gamma_{\rm eff}\approx1.8$.
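The extrapolation from the expected 2~keV flux density to rest-frame 8~keV is a one-line power-law scaling: for $\Gamma=2$, $f_\nu\propto\nu^{1-\Gamma}=\nu^{-1}$. A minimal sketch with illustrative numbers (not the measured flux densities):

```python
def extrapolate_fnu(f_nu_2kev, e_kev, gamma=2.0):
    """Extrapolate a power-law flux density, f_nu ~ E**(1 - gamma),
    from rest-frame 2 keV to energy e_kev (in keV)."""
    return f_nu_2kev * (e_kev / 2.0) ** (1.0 - gamma)

def f_w(f_nu_expected, f_nu_observed):
    """Hard X-ray weakness factor: expected over observed flux density."""
    return f_nu_expected / f_nu_observed

# For Gamma = 2, f_nu falls as 1/E, so the expected 8 keV flux density
# is 1/4 of the 2 keV value implied by the alpha_OX relation:
print(extrapolate_fnu(1.0, 8.0))   # 0.25
print(f_w(0.25, 0.01))             # expected/observed ratio
```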
The PHL~1811 observation surprisingly reveals that its hard X-ray photon index ($\Gamma_{\rm eff}=1.4_{-0.7}^{+0.8}$), though loosely constrained, appears marginally smaller than its soft X-ray (0.3--5 keV) photon index ($2.3\pm0.1$) from the 2004 \xmm\ observation \citep{Leighly2007}. \begin{deluxetable*}{lcccccccccc} \tablecaption{\nustar\ Photometric Properties} \tablehead{ \colhead{Object Name } & \colhead{Obs. Year} & \multicolumn{3}{c}{Net Source Counts$^{a}$} & \colhead{$\Gamma_{\rm eff}$$^{b}$} & \multicolumn{3}{c}{Flux ($10^{-14}$~\flux)} & \colhead{$\log L_{\rm X}$ (erg~s$^{-1}$)} & \colhead{$f_{\rm w}$$^{c}$} \\ \cline{3-5} \cline{7-9} \colhead{} & \colhead{} & \colhead{3--24} & \colhead{3--8} & \colhead{8--24} & \colhead{} & \colhead{3--24} & \colhead{3--8} & \colhead{8--24} & \colhead{2--10} & \colhead{} \\ \colhead{} & \colhead{} & \colhead{keV} & \colhead{keV} & \colhead{keV} & \colhead{} & \colhead{keV} & \colhead{keV} & \colhead{keV} & \colhead{keV} & \colhead{} } \startdata PG~1001 &2013 &$ 52^{+16}_{-15}$&$40^{+13}_{-11}$&$ <26$&$>1.5~(2.0)$&$ 7.8^{+2.4}_{-2.2}$&$ 4.4^{+1.2}_{-1.4}$&$ <5.9$& $42.7$& $16^{+24}_{-10}$\\ PG~1001 &2020 &$ 172^{+35}_{-34}$&$57^{+24}_{-23}$&$ 113^{+27}_{-25}$&$ 0.4^{+0.6}_{-0.9}$&$ 7.8^{+1.6}_{-1.5}$&$ 1.1^{+0.5}_{-0.4}$&$6.6^{+1.6}_{-1.5}$& $42.0$& $34^{+50}_{-20}$\\ PG~1001 &2013+2020&$ 223^{+38}_{-37}$&$98^{+26}_{-25}$&$125^{+29}_{-27}$&$ 1.0^{+0.5}_{-0.6}$&$7.2\pm1.2$&$1.7^{+0.5}_{-0.4}$&$5.5^{+1.3}_{-1.2}$&$42.2$&$26^{+38}_{-15}$\\ \\ PG~1254 &2013 &$ 63^{+21}_{-19}$&$35^{+15}_{-14}$&$ <50$&$>0.4~(2.0)$&$ 6.2^{+2.0}_{-1.9}$&$ 2.5^{+1.1}_{-1.0}$&$ <7.3$& $44.5$& $3.4^{+5.0}_{-2.0}$\\ PG~1254 &2019 &$ 275^{+38}_{-36}$&$171^{+28}_{-26}$&$ 104^{+27}_{-25}$&$ 1.8^{+0.5}_{-0.4}$&$ 10.0^{+1.4}_{-1.3}$&$ 4.3\pm0.7$&$ 5.7^{+1.5}_{-1.4}$& $44.5$& $2.5^{+3.6}_{-1.5}$\\ PG~1254 &2013+2019&$ 338^{+43}_{-41}$&$ 206^{+31}_{-30}$&$ 132^{+30}_{-29}$&$ 1.8\pm0.3$&$ 9.1^{+1.2}_{-1.1}$&$ 3.8\pm0.6$&$ 5.3\pm1.2$&
$44.5$&$2.7^{+3.9}_{-1.6}$\\ \\ PHL~1811 & 2015&$ 113^{+28}_{-26}$&$ 59^{+20}_{-18}$&$ 55^{+21}_{-19}$&$1.4^{+0.8}_{-0.7}$&$ 7.0^{+1.7}_{-1.6}$&$ 2.2^{+0.8}_{-0.7}$&$4.9^{+1.8}_{-1.7}$& $42.6$ & $74^{+109}_{-44}$ \enddata \tablenotetext{a}{{The errors were derived from the 1$\sigma$ errors of the extracted source and background counts \citep{Gehrels1986}. For undetected sources, we calculated $90\%$ confidence-level upper limits on the source counts following the Bayesian approach of \citet{Kraft1991}.}} \tablenotetext{b}{Effective power-law photon index. If the source is not detected in the 8--24~keV band, a lower limit value is provided, but $\Gamma_{\rm eff} = 2.0$ (as shown in parentheses) was adopted in calculating the fluxes, flux densities, and luminosity.} \tablenotetext{c}{Factor of X-ray weakness at rest-frame 8 keV, derived by comparing the observed 8 keV flux density to that expected from the \citet{steffen2006} $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation assuming a $\Gamma = 2$ power-law spectrum. The uncertainty is dominated by the scatter of the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation.} \end{deluxetable*} \subsection{Spectral Energy Distributions and Optical/Infrared Variability} We constructed infrared (IR) to \hbox{X-ray} SEDs for the three quasars. We collected IR--UV photometric data from the {Wide-field Infrared Survey Explorer} ({WISE}; \citealt{Wright2010}), { Near-Earth Object WISE} ({NEOWISE}; {\citealt{Mainzer2014}}), Two Micron All Sky Survey (2MASS; \citealt{Skrutskie2006}), Sloan Digital Sky Survey (SDSS; \citealt{York2000}), and {Galaxy Evolution Explorer} ({GALEX}; \citealt{Martin2005}) catalogs. For the two PG quasars, we also included their SED data from \citet{Neugebauer1987}. We added Spitzer photometric measurements for PG 1001 \citep{Veilleux2009}. For PHL~1811, we included the 2001 HST STIS UV spectrum and the 2015 \xmm\ OM measurements. 
The optical and UV data have been corrected for Galactic extinction following the de-reddening approach in \citet{Cardelli1989} and \citet{O'Donnell1994}. The SEDs are shown in Figure~3. For comparison, we also plotted in each panel the mean SED of high-luminosity radio-quiet quasars in \citet{Krawczyk2013} normalized to the 2500~\AA\ luminosity. The X-ray component of the mean quasar SED is a $\Gamma=2$ power-law continuum that follows the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation. The \hbox{IR-to-UV} SEDs of the three quasars are broadly consistent with those of typical quasars; {the slight deviations in the IR for PG~1001 and PHL~1811 are within the scatters ($\approx0.2$--0.25 dex) of the mean quasar SED at these frequencies. } We added soft and hard X-ray measurements to the SEDs. We used the 2~keV luminosities from the power-law spectral fitting of the \chandra\ or \xmm\ spectra (Section~3.1). From the \nustar\ full-band fluxes (Section~3.2; stacked results were used for PG~1001 and PG~1254), we derived 8~keV and 15~keV luminosities adopting power-law spectra with the measured $\Gamma_{\rm eff}$ values. Compared to the typical quasar SED which follows the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation, the soft and hard X-ray weakness of the three quasars is evident. For PHL~1811, we also show the soft X-ray spectral slopes from the two \xmm\ observations and the hard \xray\ spectral slope constrained from the \nustar\ observation. The spectral slopes differ beyond the 1$\sigma$ level, suggesting that X-ray obscuration might also be present in PHL~1811. {The SEDs of the three quasars also show clearly that they deviate from the observed $L_{\rm X}$--$L_{\rm MIR}$ relations for typical quasars \citep[e.g.,][]{Lutz2004,Mateos2015,Stern2015,Chen2017,Martocchia2017}, with $L_{\rm MIR}$ being the mid-IR luminosity measured at rest-frame 6~$\mu$m. 
The offsets from the relations are approximately quantified by the $f_{\rm w}$ values (Table 2), and PHL~1811 is $\approx70$ times X-ray underluminous compared to its mid-IR luminosity.} Integrating the IR-to-X-ray SEDs, we estimate the bolometric luminosities to be $3.4\times10^{45}$~erg~s$^{-1}$ for PG~1001, $2.1\times10^{47}$~erg~s$^{-1}$ for PG~1254, and $4.2\times10^{46}$~erg~s$^{-1}$ for PHL~1811. These values are consistent with those provided in \citet{shen2011} and \citet{Leighly2007}. The \hbox{IR-to-UV} SED data are not simultaneous and may be affected by variability. Mild variability is apparent in the SED of PG~1001, where the more recent optical and near-IR measurements are $\approx2$--60\% lower than the \citet{Neugebauer1987} data. To investigate the optical variability of these three quasars, we further examined their long-term optical light curves constructed using the public catalogs of the Zwicky Transient Facility Data Release 9 (ZTF DR9; \citealt{Bellm2019}) and the Catalina Real-Time Transient Survey (CRTS; \citealt{Drake2009}). The maximum flux variability amplitudes in the ZTF $g$ band range from 13\% to 40\%, and in the CRTS $V$ band they range from 16\% to 80\%; PG~1001 varied the most among the three quasars. Given its stronger optical variability, we also checked the IR light curve of PG~1001 from the NEOWISE catalog. Between 2014 May and 2020 November, its maximum flux variability amplitude in the W1 (W2) band is 18\% (13\%). The mild optical/IR variability observed in these quasars suggests that the overall accretion power did not change significantly (e.g., by factors of $>2$) over the years. Compared to the soft and hard X-ray weakness factors ($\Delta\alpha_{\rm OX}$ and $f_{\rm w}$ in Sections 3.1 and 3.2), the UV/optical variability factors are much smaller, indicating that the X-ray weakness factors assessed using non-simultaneous UV/optical data are not heavily biased.
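The bolometric-luminosity estimates above come from integrating $L_\nu$ over the IR-to-X-ray SED. A minimal trapezoidal sketch over a hypothetical $(\nu, L_\nu)$ grid (the grid and values are illustrative, not the measured SEDs):

```python
def integrate_sed(nu, l_nu):
    """Bolometric luminosity L = integral of L_nu d(nu), trapezoidal rule.
    nu: frequencies in Hz (increasing); l_nu: erg s^-1 Hz^-1."""
    return sum(0.5 * (l_nu[i] + l_nu[i + 1]) * (nu[i + 1] - nu[i])
               for i in range(len(nu) - 1))

# Hypothetical flat SED (constant L_nu) over one decade in frequency;
# the exact integral is L_nu * (nu_max - nu_min) = 9e45 erg/s.
nu = [1.0e14 * 10.0 ** (i / 10.0) for i in range(11)]
l_nu = [1.0e31] * len(nu)
print(integrate_sed(nu, l_nu))  # ~9e45
```

In practice the integration would be carried out over the tabulated photometric points (interpolated in log space); the trapezoidal rule is exact here only because the toy $L_\nu$ is constant.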
\section{Discussion} The three quasars in this study were considered among the best candidates for intrinsically X-ray weak quasars, with coronae that produce weaker X-ray emission than expected from the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation. Intrinsic X-ray weakness and X-ray obscuration are not mutually exclusive. In fact, at least for PG~1001 and PG~1254, the presence of X-ray obscuration is clearly indicated by their flat spectral shapes in the soft \hbox{X-rays} (Section 3.1). Therefore, it was proposed in \citet{Luo2014} that the \chandra, \xmm, and \nustar\ data of PG~1001 and PG~1254 could be explained by intrinsically weak X-ray continua modified by Compton-thin absorption. In the subsections below, we first suggest that X-ray obscuration is also present in PHL~1811 given the hard X-ray constraints. We then propose that, as an alternative to the intrinsic X-ray weakness $+$ X-ray obscuration scenario, the soft and hard X-ray weakness of these quasars can be uniformly explained under an X-ray obscuration-only scenario, without invoking the extra mechanism of intrinsic X-ray weakness. The X-ray absorber in this case might be a clumpy accretion-disk wind that provides variable partial-covering absorption to the nuclear X-ray emission. \subsection{Presence of X-ray Obscuration in PHL 1811} PHL 1811 was considered an intrinsically X-ray weak quasar without any absorption based mainly on the following properties: (1) significantly weak emission with a steep spectral shape ($\Gamma=2.3\pm0.1$) in the 2004 \xmm\ \hbox{0.3--5}~keV spectrum, and (2) flux variability by a factor of $\approx4$ in 12 days \citep{Leighly2007}. Hard X-ray ($>5$ keV) data were not available previously. The 2004 \xmm\ spectrum of PHL 1811 is dominated by background above 5~keV, with a $P_{\rm B}$ value (see Section 3.2) of only 0.1 in the 5--10 keV band. 
In the deeper 2015 \xmm\ observation, PHL 1811 was significantly detected in the 5--10~keV band, with a $P_{\rm B}$ value of $1.0\times10^{-4}$ ($3.9\sigma$), allowing investigation of its hard X-ray properties. We first derived its photometric properties following the same procedure described in Section~3.2, though applied to the 0.3--10~keV band instead of the 3--24~keV band as for the \nustar\ data. The net source counts (without aperture correction) in the \hbox{0.3--2~keV} and 2--10 keV bands are $872_{-34}^{+35}$ and $174^{+21}_{-20}$, respectively, and the resulting 0.3--10 keV effective photon index ($\Gamma_{\rm eff,0.3-10}$) is $2.0\pm0.1$ from the band ratio of the above two bands and the spectral response files. If we instead adopt the 0.3--5~keV and 5--10~keV bands, the derived counts are $1000^{+39}_{-38}$ and $47^{+14}_{-13}$, and the $\Gamma_{\rm eff,0.3-10}$ value becomes $1.8^{+0.1}_{-0.2}$. The slightly different $\Gamma_{\rm eff,0.3-10}$ values suggest that the spectral shape deviates from a simple power law. We then fit the 2015 \xmm\ 0.3--10 keV spectrum with a simple power-law model using XSPEC, under the assumption that PHL~1811 is an unabsorbed, intrinsically \xray\ weak source. The spectrum was grouped with at least one count per bin, and the W statistic (Footnote \ref{wstat}) was used. The resulting photon index is $\Gamma=2.57\pm0.09$; if we limit the energy range to 0.3--5~keV, the resulting $\Gamma$ ($2.63\pm0.09$) is consistent within the errors. The data and the best-fit model are shown in Figure~4a, and there are significant residuals above $\approx3$~keV. It is possible that a Compton-reflection component from a distant reprocessor (e.g., the torus) contributes to the hard X-ray excess emission.
We thus added an XSPEC \texttt{pexrav} component to the model with its photon index and normalization tied to those of the power-law component; the only free parameter is the reflection factor (reflection fraction) and the other parameters are fixed at their default values. The best-fit results are shown in Figure~4b. The hard X-ray excess can be explained by the Compton-reflection component, but an unrealistically large reflection factor ($\approx23$) is required; the reflection factors for AGN samples are typically $\lesssim2$ \citep[e.g.,][]{deRosa2008,Ricci2011,Ricci2017,Panagiotou2019}. We fixed the power-law photon index to 2.57 in the above test; when it was allowed to vary, a larger $\Gamma$ ($\approx3.0$) and an even larger reflection factor ($\approx81$) were derived. Therefore, the 2015 \xmm\ spectrum of PHL~1811 cannot be well described by a simple unabsorbed power-law continuum plus a reasonable amount of Compton reflection. {In Figure 4a, the hard X-ray residuals peak around observed-frame 6~keV (rest-frame $\approx 7$~keV), and we thus also considered a model where the hard X-ray excess has some contribution from a broad Fe K$\alpha$ line produced via relativistic disk reflection \citep[e.g.,][]{Ross2005,Fabian2013}. We fit the 2015 \xmm\ 0.3--10 keV spectrum with the XSPEC \texttt{relxill} model \citep[][]{Dauser2014,Garcia2014}. The free parameters are $\Gamma$, SMBH spin, ionization parameter, inclination angle, and reflection factor (reflection fraction), and the other parameters were fixed at their default values. This model describes the spectrum well, with residuals similar to those in Figure 4b, a best-fit $\Gamma$ value of $2.02\pm0.06$, and a best-fit reflection factor of $10^{+65}_{-1}$. The $\Gamma$ value is smaller than 2.57 because the soft X-rays are now dominated by ionized-disk reflection instead of an intrinsic power-law continuum.
The reflection factor is still large, but unlike the Figure~4b modeling with a distant reflector, relativistic reflection from the inner accretion disk could produce an extremely large reflection factor if much of the coronal emission cannot reach the observer due to light-bending effects near the SMBH \citep[e.g.,][]{Dauser2014}. Nevertheless, a reflection-dominated spectrum with strong light-bending effects still conflicts with the scenario of intrinsic X-ray weakness, where the observed spectrum should be dominated by the intrinsic power-law continuum from a weak corona.} The 3--24 keV spectral shape constrained from the 2015 simultaneous \nustar\ observation, $\Gamma_{\rm eff}=1.4^{+0.8}_{-0.7}$ (Section~3.2), provides additional support that the spectral shape likely deviates from a steep unabsorbed power law. We thus applied the above \texttt{zpow} $+$ \texttt{pexrav} model to jointly fit the 2015 \xmm\ $+$ \nustar\ spectra and investigate whether the spectra can be explained by an unabsorbed, intrinsically weak power-law continuum plus typical Compton reflection. The results are shown in Figure~5a, and they are consistent with those for the \xmm\ spectrum alone (Figure~4b), requiring an unrealistically large reflection factor. We then tested replacing the \texttt{pexrav} component with the self-consistent Compton-reflection model \texttt{borus02} \citep{Balokovi2018} with the photon index and normalization tied to those of the power-law component. The complete XSPEC model is \texttt{phabs} $*$ (\texttt{zpow} $+$ \texttt{atable\{borus02.fits\}}), and the free parameters are the power-law normalization, photon index ($\Gamma$), and column density ($N_{\rm H}$) of the reprocessed component.\footnote{ The other parameters of the \texttt{borus02} model are fixed at the default values including a high-energy cutoff of 300~keV, an inclination angle of 60\degr, a torus covering factor of 0.5 (corresponding to a half-opening angle of 60\degr), and an iron relative abundance of 1.
\label{footnotebo}} The best-fit results are displayed in Figure~5b. There are still significant fitting residuals above $\approx5$~keV, demonstrating again that a typical level of Compton reflection cannot account for the excess emission in the hard X-rays. The 2015 \xmm\ and \nustar\ observations of PHL~1811 thus reveal that, in addition to a steep power-law ($\Gamma=2.63\pm0.09$) continuum in the 0.3--5~keV band, there is significant hard X-ray ($\gtrsim5$~keV) excess emission. The excess emission can be modeled with Compton reflection of a soft X-ray continuum that is much stronger (by a factor of $\gtrsim20$) than the observed one. We note that, from the best-fit \texttt{zpow} $+$ \texttt{pexrav} results in Figure~5a, PHL~1811 was intrinsically X-ray weak by a factor of $\approx140$ compared to the \citet{steffen2006} $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation. If we instead assume that PHL~1811 was intrinsically X-ray normal (i.e., raising the power-law normalization by a factor of 140) and add a scaling factor to the power-law component \texttt{c}\,*\,\texttt{zpow} $+$ \texttt{pexrav}, the spectra can be equally well described with $\texttt{c}=0.7\%$ ($1/140$) and a reasonable reflection factor of 0.17 ($24/140$). A physical interpretation of the above model is that PHL~1811 was affected by Compton-thick absorption and the observed \xmm\ and \nustar\ spectra were dominated by a fraction of the intrinsic continuum scattered by a large-scale highly ionized ``mirror'' ($f_{\rm scatter}$; typically within a few percent; e.g., \citealt{Turner1997}; \citealt{Cappi2006}; \citealt{Ueda2007}; \citealt{Winter2009}; \citealt{Yamada2020}; \citealt{Gupta2021}) in the soft X-rays and a reprocessed component from the absorber in the hard \hbox{X-rays}. The steep soft continuum could also represent a fraction ($f_{\rm leak}$) of the intrinsic continuum leaking through a clumpy absorber. 
Therefore, instead of employing intrinsic X-ray weakness plus an unrealistically large reflection factor, we could interpret the 2015 \xmm\ and \nustar\ data of PHL~1811 with Compton-thick obscuration. In addition, the second property mentioned above, the short-term soft X-ray variability between the two \chandra\ observations in 2001 (e.g., see Figure~1), was used to argue against a large-scale scattered component ($f_{\rm scatter}$ above) dominating the \hbox{0.3--5 keV} X-ray spectrum. However, from recent investigations of extreme X-ray weakness and extreme X-ray variability among super-Eddington accreting AGNs (e.g., SDSS J$075101.42+291419.1$ and samples from \citealt{Liu2019,Liu2021}; SDSS J$081456.10+532533.5$ from Huang~J. et al. in prep.), it appears plausible that such weak, steep, and variable soft \xray\ emission might originate from variable fractions ($f_{\rm leak}$) of leaked intrinsic continuum through a large solid-angle, high column-density, clumpy (partial-covering) absorber. Thus fast variability and steep spectral shapes in the soft X-rays do not necessarily rule out X-ray obscuration. In summary, unlike the model considered previously where PHL~1811 lacks absorption, hard \hbox{X-ray} data suggest that \xray\ obscuration may well be present. \subsection{An Obscuration-Only Scenario Without Intrinsic X-ray Weakness} The 2013 \nustar\ observations of PG~1001 and PG~1254, with no hard-band detections or $\Gamma_{\rm eff}$ measurements, were not sufficiently constraining to establish that intrinsic X-ray weakness must be present, which motivated the present study with deeper \nustar\ observations. The deeper \nustar\ observations now provide hard-band detections and $\Gamma_{\rm eff}$ measurements (albeit with large uncertainties). 
The PG~1001 spectral shape appeared flatter in the 2020 observation ($\Gamma_{\rm eff}=0.4^{+0.6}_{-0.9}$ compared to $\Gamma_{\rm eff}>1.5$), suggesting the presence of absorption and probably spectral-shape evolution between these two epochs. The stacked $\Gamma_{\rm eff}$ value ($1.0^{+0.5}_{-0.6}$) from the two \nustar\ observations is still small compared to the typical value of $\approx2$ for an unabsorbed spectrum. The PG~1254 spectral slope appears typical for unobscured quasars ($\Gamma_{\rm eff}=1.8^{+0.5}_{-0.4}$ in the 2019 observation), but it is also clear that this quasar is X-ray weak by only a factor of a few in the hard X-rays (see Figure 3). Considering its significant soft X-ray weakness and the flat spectral shape in the soft X-rays (Section 3.1), the nominal hard X-ray spectral shape in the \nustar\ data might be explained by Compton-thin absorption. Since PG 1254 is at $z=1.026$, we are likely observing the penetrating hard X-rays with \nustar\ through an absorber with a large but Compton-thin $N_{\rm H}$ value; this would explain the strong X-ray weakness at 2~keV and much reduced \xray\ weakness at higher energies. As discussed in Section 4.1 above, hard X-ray data suggest that \hbox{X-ray} obscuration is also present in PHL~1811. Motivated by these \nustar\ results, we argue that intrinsic X-ray weakness is probably not required to explain the extreme X-ray weakness of these quasars. We explore the possibility of uniformly interpreting the \nustar, \chandra, and \xmm\ spectra with an obscuration-only scenario where these quasars are intrinsically \hbox{X-ray} normal (following the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation). We investigate below, via XSPEC spectral fitting, whether the multi-epoch soft and hard X-ray spectra can be described by nominal-strength X-ray emission modified by our adopted obscuration model with the absorber parameters (the column density and partial-covering fraction) allowed to vary.
Since the X-ray data do not have sufficient statistics for complex spectral fitting, we had to simplify the model and fix many of the model parameters. Moreover, we actually cannot rule out the scenario of obscuration $+$ intrinsic X-ray weakness, which has an extra degree of freedom (i.e., normalization of the \xray\ continuum) compared to the obscuration-only scenario. Our focus here is to investigate if we can explain the observed X-ray emission without involving intrinsic X-ray weakness from, e.g., an anomalous corona. Although the \texttt{pexrav} model appears able to describe the PHL~1811 spectra well (Section 4.1 and Figure~5a), it does not provide constraints on the absorption column densities. We thus employed the self-consistent \texttt{borus02} model to describe the reprocessed component from the absorber. The XSPEC spectral model is \begin{eqnarray*} \texttt{phabs}\,*\,\left(\texttt{zphabs}\,*\,\texttt{cabs}\,*\,\texttt{c}_1\,*\,\texttt{zpow}\right.\\ +\,\left.\,\texttt{c}_2\,*\,\texttt{zpow}+\,\texttt{atable\{borus02.fits\}}\,\right). \end{eqnarray*} In this model, \texttt{phabs} accounts for the Galactic absorption, and \texttt{zpow} is the intrinsic power-law continuum that is X-ray normal with respect to the \citet{steffen2006} $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation. A large fraction (\texttt{c}$_1$) of the intrinsic continuum is modified by heavy neutral absorption (\texttt{zphabs}) and Compton scattering (\texttt{cabs}). The absorber is clumpy, allowing a fraction ($f_{\rm leak}=$\texttt{c}$_2=1-$\texttt{c}$_1$) of the intrinsic continuum to leak through. There is probably also a large-scale scattered component ($f_{\rm scatter}$). Since the leaked component usually dominates, we treated them together and do not separate $f_{\rm scatter}$ from $f_{\rm leak}$ in the following study. The reprocessed component from the absorber is modeled with \texttt{borus02} with the normalization and photon index tied to those of \texttt{zpow}. 
We fixed the inclination angle to 60\degr\ and allowed the absorber covering factor to vary. The other \texttt{borus02} parameters were fixed at the default values (Footnote \ref{footnotebo}). We also tied the absorption column densities ($N_{\rm H}$) in the \texttt{zphabs}, \texttt{cabs}, and \texttt{borus02} components. The emergent spectrum is thus a combination of the transmitted (absorbed) component, leaked component, and reprocessed component. A schematic illustration of the setup is shown in Figure~6a, and an example of the X-ray spectral components for a given set of the absorber parameters is displayed in Figure~6b. {There are a couple of caveats regarding the above model. First, the absorber is probably partially ionized instead of being neutral. Ionized absorption produces distinctive spectral features below $\approx1$~keV, but the continuum shapes above $\approx1$~keV are similar to those from neutral absorption unless the ionization level is high (e.g., Figure~1 of \citealt{Netzer1993}). For the three quasars studied here, PG~1254 has few photons below 1~keV, PHL~1811 shows no absorption signatures below $\approx5$~keV, and PG~1001 has clear soft X-ray excess emission in the 0.3--1~keV band which has been interpreted in terms of ionized absorption \citep{Schartel2005}. Their $>1$~keV spectra do not have sufficient photon statistics to distinguish ionized absorption from neutral absorption or to reliably constrain ionization parameters. Therefore, in the above model, we adopted neutral absorption for simplicity, and we did not use the $<1$~keV \xmm\ or \chandra\ data for PG~1001. The soft excess of PG~1001 in the obscuration scenario will be discussed in Section~4.2.1 below. Second, since there is no optical/UV extinction, the absorber should not be the torus described in the \texttt{borus02} model with a toroidal geometry. Instead, it is likely a small-scale clumpy dust-free wind launched from the accretion disk (see Section~4.3 below).
Therefore, the \texttt{borus02} model does not provide an accurate description of the reprocessed emission (both the continuum and the Fe K$\alpha$ line) from the absorber (e.g., wind). However, since our purpose here is not to recover precise absorber parameters but to simply investigate if the obscuration-only scenario is a valid alternative to the scenario of intrinsic X-ray weakness $+$ obscuration, and the current simplified model appears able to explain reasonably well the multi-epoch \hbox{X-ray} spectra of the three quasars (as discussed below), we defer detailed modeling to future studies which likely will require much better spectral quality.} We applied the above model to explain the multi-epoch \hbox{X-ray} spectra of the three quasars. We jointly fit the \nustar, \chandra, and \xmm\ spectra for each of the three quasars. The free parameters are $\Gamma$, $f_{\rm leak}$ (\texttt{c}$_2$), \nh, and the absorber covering factor ($\cos\theta_{\mbox{\scriptsize oa}}$ in Figure~6a); $f_{\rm leak}$ and $N_{\rm H}$ were allowed to vary between the observations while the other two parameters were tied. In addition, for the latest \nustar\ observation of PG~1001 and the two \nustar\ observations of PG~1254, we tied the $f_{\rm leak}$ parameter to that of the non-simultaneous \chandra\ observation, as the \nustar\ spectra are not sensitive to this parameter (high-energy spectra do not have a significant leaked component). These quasars are considered to be intrinsically \hbox{X-ray} normal, with the intrinsic $f_{\rm 2keV}$ values fixed at those expected from the \citet{steffen2006} $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation. The $f_{\rm 2keV}$ and $\Gamma$ values define the intrinsic continua (normalizations of \texttt{zpow}). 
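To make the normalization step concrete, the expected intrinsic 2~keV level implied by the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation can be sketched as follows. This is a minimal Python illustration, not part of the fitting pipeline; the coefficients are the published best-fit values of \citet{steffen2006}, and the example luminosity is illustrative only.

```python
# alpha_OX -- L_2500A relation of Steffen et al. (2006):
#   alpha_OX = -0.137 * log10(L_2500) + 2.638
# with alpha_OX = 0.3838 * log10(L_2keV / L_2500), where
# 0.3838 = 1 / log10(nu_2keV / nu_2500A).

def expected_alpha_ox(log_l2500):
    """Expected alpha_OX for a monochromatic 2500 A luminosity (erg/s/Hz)."""
    return -0.137 * log_l2500 + 2.638

def expected_log_l2kev(log_l2500):
    """Expected monochromatic 2 keV luminosity for an X-ray normal quasar."""
    return log_l2500 + expected_alpha_ox(log_l2500) / 0.3838

def weakness_factor(log_l2500, alpha_ox_measured):
    """Factor f_w by which the observed 2 keV emission falls below expectation."""
    d_alpha = expected_alpha_ox(log_l2500) - alpha_ox_measured
    return 10 ** (d_alpha / 0.3838)

# Illustrative luminosity only (not a measurement of any of the three quasars):
log_l2500 = 31.0
print(expected_alpha_ox(log_l2500))      # about -1.61
print(weakness_factor(log_l2500, -2.0))  # about 10
```

In this convention a quasar measured at $\alpha_{\rm OX}=-2.0$ when $-1.61$ is expected is X-ray weak by a factor of $\approx10$ at 2~keV.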
For each object, the \texttt{zpow} normalization was first fixed at the value derived with $\Gamma=2$, and then an iterative procedure was performed to adjust the \texttt{zpow} normalization according to the best-fit $\Gamma$ value. A few iterations were needed until the intrinsic continuum converged. The best-fit parameters are listed in Table~3 and the best-fit models are displayed in Figure~7. We also list in Table~3 the main components (transmitted, reprocessed, and/or leaked) that dominate the emergent spectrum of each observation. The $\Gamma$ values for PG~1001 and PHL~1811 pegged at 2.6, the upper bound allowed by the \texttt{borus02} model. Several $N_{\rm H}$ values do not have upper errors as the emergent spectrum is dominated by the leaked component which is not sensitive to $N_{\rm H}$; a few of these pegged at $\log N_{\rm H}/{\rm cm}^{-2}=25.5$, the upper bound allowed by the \texttt{borus02} model. The fitting results are acceptable overall considering the fitting statistics ($W/{\rm dof}$; Figure~7) and residuals, indicating that the obscuration scenario is able to explain the multi-epoch X-ray data without involving intrinsic X-ray weakness. From the best-fit models, we computed the $f_{\rm w}$ values for the \nustar\ spectra, and these are also listed in Table~3. These hard X-ray weakness factors are comparable to those listed in Table~2, and any discrepancy is mainly due to the different models used (a simple power-law model is assumed in deriving the photometric results in Table 2). \begin{deluxetable*}{lcccccccc} \tablecaption{Best-fit Parameters for the Multi-epoch X-ray Spectral Fitting} \tablehead{ \colhead{Object Name} & \colhead{Observatory}& \colhead{Obs. 
Date}& \colhead{$\cos\theta_{\mbox{\scriptsize oa}}$} & \multicolumn{3}{c}{Partial-Covering Absorption} & \colhead{$f_{\rm w}^{a}$}& \colhead{Main~Com.$^{b}$}\\ \cline{5-7} \colhead{}& \colhead{}& \colhead{}& \colhead{}& \colhead{$f_{\rm leak}$}& \colhead{$\Gamma$}& \colhead{$\log N_{\rm H}$ (cm$^{-2})$}& \colhead{}& \colhead{} } \startdata PG~1001&\xmm\ &2003 May 4&$0.54^{+0.06}_{-0.07}$&$1.9^{+0.4}_{-0.3}\times10^{-2}$& $2.6_{-0.02}$$^d$&$23.5\pm0.1$ & --& leak,~tra\\ PG~1001&\chandra\ &2010 Jan 11&--&$7.5^{+4.6}_{-3.5}\times10^{-3}$& --&$23.7^{+1.3}_{-0.2}$& --& leak,~tra\\ PG~1001&\nustar\ &2013 Jun 28&--&$0.10\pm0.05$& --&$24.3^{+0.9}_{-0.2}$& 16.7& leak,~rep\\ PG~1001&\nustar\ &2020 May 23&--&$7.5\times10^{-3}$ (tied)& --&$24.2\pm0.1$&52.7& tra,~rep\\ \\ PG~1254&\chandra\ &2000 May 29 &$0.38^{+0.06}_{-0.19}$&$7.4^{+2.7}_{-2.1}\times10^{-3}$& $2.12^{+0.03}_{-0.09}$&$25.5_{-1.0}$& --& leak,~rep \\ PG~1254&\nustar\ &2013 Jun 8&--&-- (tied)& --&$24.0\pm0.2$& 8.7& tra,~rep\\ PG~1254&\nustar\ &2019 Jun 8&--&-- (tied)& --&$23.6^{+0.2}_{-0.1}$& 3.2& tra \\ \\ PHL~1811&\chandra\ &2001 Dec 5&$0.58^{+0.01}_{-0.02}$&$6.0^{+0.8}_{-0.7}\times10^{-3}$& $2.6_{-0.05}$&$24.1^{+0.5}_{-0.2}$& --& leak,~rep\\ PHL~1811&\chandra\ &2001 Dec 17&--&$(2.8\pm0.2)\times10^{-2}$& --&$24.9_{-0.5}$& --& leak\\ PHL~1811&\xmm\ &2004 Nov 1&--&$(6.7\pm0.4)\times10^{-3}$& --&$25.5_{-0.5}$& --& leak\\ PHL~1811&\xmm\ &2015 Nov 29&--&$(7.4\pm0.3)\times10^{-3}$& --&$24.6_{-0.2}$& --& leak,~rep\\ PHL~1811&\nustar\ &2015 Nov 28&--&$(1.1\pm0.8)\times10^{-2}$& --&$24.8\pm0.2$& 98.3& leak,~rep\\ PHL~1811&\xmm\ + \nustar$^{c}$ & 2015 Nov&--& $(7.4\pm0.3)\times10^{-3}$& --& $24.8^{+0.4}_{-0.2}$&--& leak,~rep \enddata \label{tbl-obs} \tablenotetext{a}{The factor of X-ray weakness at rest-frame 8 keV derived from the best-fit model, for comparison with the results in Table 2.} \tablenotetext{b}{The ``Main~Com.'' column lists the dominant component/components in the emergent spectrum: ``leak'' represents the 
leaked/scattered component, ``tra'' represents the transmitted (absorbed) component through the absorber (dashed line in Figure 6), and ``rep'' represents the reprocessed component from the absorber (dotted curves in Figure~6).} \tablenotetext{c}{In this case, the fitting parameters for the \xmm\ and \nustar\ observations were tied.} \tablenotetext{d}{A value without an upper error is pegged at the upper bound allowed by the \texttt{borus02} model.} \end{deluxetable*} Given the limited X-ray data quality and the simplifications/assumptions made during the modeling process, we do not consider these spectral fitting results to be fully accurate descriptions of the absorber properties. Nevertheless, they provide important clues for explaining the unusual X-ray properties under our proposed obscuration-only scenario. The best-fit results are consistent with our qualitative expectation above. The multi-epoch spectra of PG~1001 are explained by heavy or even Compton-thick obscuration. A strong leaked component is required to explain its 2013 \nustar\ spectrum as the spectral shape is likely steeper than an absorbed power law ($\Gamma_{\rm eff}>1.5$), while the 2020 \nustar\ spectrum is affected by typical Compton-thick obscuration. The \chandra\ spectrum of PG~1254 requires very Compton-thick obscuration due to the significant weakness of the spectrum of this $z\approx1$ quasar. The \nustar\ observations of PG~1254 are explained by heavy but Compton-thin obscuration. PHL~1811 was almost always affected by Compton-thick obscuration, and the emergent spectra are largely dominated by the leaked component (though with small $f_{\rm leak}$ values). The reprocessed component does not contribute much as the absorber appears to have a large covering factor ($\cos\theta_{\mbox{\scriptsize oa}}\approx0.6$) which blocks direct reflected radiation from the opposite side of the absorber (e.g., see the top dotted curve in Figure 6). 
Therefore, PHL~1811 can reach a very large hard X-ray weakness factor ($f_{\rm w}$) in the \nustar\ observation. For the 2015 simultaneous \xmm\ and \nustar\ observations of PHL~1811, the derived parameters differ slightly. We also tried to tie all the parameters in these two observations, and the results are listed in the last row of Table~3. We do not consider the small discrepancy a serious issue as there may be cross-calibration uncertainties between the \xmm\ and \nustar\ data \citep[e.g.,][]{Madsen2021}. \subsubsection{Soft X-ray Excess of PG~1001 in the Obscuration-Only Scenario} {Soft X-ray excess emission (typically below $\approx1$~keV) is observed in a large fraction of type 1 AGNs, the origin of which is still under debate and may be attributed to ionized absorption \citep[e.g.,][]{George1998,Gierlinski2004}, ionized disk reflection \citep[e.g.,][]{Ross2005,Crummy2006}, or Comptonization in a warm corona \citep[e.g.,][]{Done2012}. Of the three quasars in this study, only PG~1001 shows a clear soft-excess component in both its \xmm\ and \chandra\ spectra. We thus did not consider the soft excess in the above modeling, excluding the $<1$~keV data for PG~1001. We explore here whether the soft X-ray excess emission of PG~1001 can be explained in the obscuration-only scenario. We focus on the \xmm\ spectrum in the following discussion, as the \chandra\ spectrum has only 19 counts in the 0.3--8 keV band. Nevertheless, we verified that the \chandra\ spectrum yields consistent results. The soft-excess emission of PG~1001 is at a comparable flux level to the $>1$~keV power-law component that is significantly weak compared to the expectation from the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation (Section 3.1.1). Therefore, the soft excess is also significantly weak compared to typical levels. In the obscuration-only scenario, PG~1001 has a nominal-strength hot corona, and likely also a nominal-strength warm corona. 
A natural interpretation would then be that the soft-excess emission is also filtered by the absorber if it is from the warm corona. We thus fitted the 0.3--10 keV \xmm\ spectrum with the same model described in Section 4.2 above plus an additional component to describe the soft excess from the warm corona. We tested simple power-law (\texttt{zpow}), multi-blackbody disk (\texttt{diskbb}), and Comptonization (\texttt{compTT}) models for this soft-excess component, and the three choices were all able to describe the spectrum well with comparable statistics. In Figure~8, we show the best-fit results with the power-law model. The soft excess has a large photon index ($\approx5.0$) and a small normalization ($\approx0.7\%$ of the normalization for the intrinsic $>1$~keV power-law continuum), and the other free parameters ($f_{\rm leak}$ and $N_{\rm H}$) are consistent with those in Table 3 (first row) within the errors. One interpretation is thus that the observed soft excess is the leaked portion of the warm corona emission through the same dust-free absorber, and the leaked fraction is similar to or even the same as that for the main component. The soft X-ray excess emission of PG~1001 has also been suggested to be due to ionized absorption \citep{Schartel2005}. We verified that the 0.3--10 keV \xmm\ spectrum can be acceptably fitted with a simple partial-covering ionized absorption model (\texttt{zxipcf*zpow}), fixing $\Gamma=2.6$ and the power-law normalization at the X-ray nominal value from the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation. The resulting ionization parameter is $\xi\approx91$~erg~cm~s$^{-1}$ with $N_{\rm H}\approx4.7\times10^{23}$~cm$^{-2}$ and a covering fraction of $\approx99.1\%$. Replacing the neutral absorption (\texttt{zphabs}) in the Section 4.2 model with \texttt{zxipcf} yields consistent results, as the reprocessed component (\texttt{borus02}) is not important in the \xmm\ spectrum. 
The soft excess can thus also be explained with ionized absorption, which is possible considering that the absorber (e.g., disk wind) is probably partially ionized. Overall, we consider that our proposed obscuration-only scenario can plausibly explain the soft X-ray excess emission of PG~1001. } \subsection{Clumpy Accretion-Disk Wind as the Absorber and Implications} Under our proposed obscuration scenario above, PG~1001 and PG~1254 are BAL quasars with intrinsically normal \xray\ emission. Thus, they are probably similar to the typical BAL quasars that generally show X-ray obscuration. Besides the few low-redshift BAL quasars with \nustar\ observations \citep{Luo2013,Luo2014,Teng2014}, a few high-redshift BAL quasars have been suggested to be intrinsically X-ray weak \citep{Liu2018}. From the systematic \chandra\ survey of the 29 high-ionization LBQS BAL quasars at $z\approx2$, two {intrinsically \xray\ weak} candidates were identified based on their nominal spectral shapes ($\Gamma_{\rm eff}\approx2$) and significant hard X-ray weakness factors ($f_{\rm w}\approx12$--15); at these redshifts, \chandra\ observations were able to provide rest-frame hard \xray\ constraints over a similar band to the \nustar\ observations of low-redshift objects. We recently obtained a long \xmm\ observation for one of the candidates, LBQS $1442-0011$. The observation was heavily affected by background flares and the cleaned exposure time is only 40\% of the total. The results are summarized in the Appendix. The \xmm\ observation suggests that the spectral shape became flatter ($\Gamma_{\rm eff}\approx1$) and the factor of hard X-ray weakness ($f_{\rm w}$) also dropped to $\approx4$. We thus consider that the \xray\ weakness of this quasar can also be described by our proposed scenario of variable obscuration, without invoking intrinsic \xray\ weakness. 
It is natural to consider that the \xray\ absorbers in PG~1001, PG~1254, and LBQS~$1442-0011$ are similar to those in typical BAL quasars; i.e., the shielding gas or clumpy accretion-disk wind. They might have an extreme version of the absorber in terms of its high column density and/or large covering factor. An obscuration explanation for the X-ray weakness of PHL~1811 would connect it with the PHL~1811 analogs studied in \citet{Luo2015}. PHL~1811 and its analogs belong to a broader category of quasars, weak emission-line quasars (WLQs), a small population of type~1 quasars that show unusually weak UV emission lines (e.g., \iona{C}{iv}). Systematic X-ray surveys of WLQ samples have revealed that a large fraction ($\approx30\%$--50\%) of them are X-ray weak \citep[e.g.,][]{Luo2015,Ni2018,Ni2022,Pu2020}. These WLQs have typically been selected to lie at $z\gtrsim1.5$, and thus \chandra\ or \xmm\ observations provide rest-frame hard X-ray constraints. The individual and stacked effective power-law photon indices for the X-ray weak WLQs are in general flat ($\Gamma_{\rm eff}\approx1.2$), suggesting an obscuration scenario. The absorber is proposed to be the geometrically thick inner accretion disk and/or its associated disk wind, which also shields the nuclear EUV/X-ray ionizing radiation from reaching the BELR, causing the weak emission lines. Thick inner accretion disks are expected in WLQs as they are considered to have high or even super-Eddington accretion rates that result in thick disks \citep[e.g.,][]{Abramowicz1988,Mineshige2000,Wang2003,Ohsuga2011,Jiang2014}. Powerful accretion-disk winds launched via radiation pressure are expected in such systems \citep[e.g.,][]{Takeuchi2014,Jiang2019,Giustini2019}. PHL~1811 likely has a super-Eddington accretion rate, and the large covering-factor absorber is probably the strong wind associated with a thick accretion disk. 
Therefore, the three quasars in this study likely share a similar nature, with partial-covering absorption from clumpy dust-free winds (e.g., Figure 6). X-ray absorption from clumpy winds/outflows has been observed in typical type~1 AGNs \citep[e.g.,][]{Kaastra2014,Mehdipour2017,Mehdipour2021,Dehghanian2019,Laha2021}, although the absorption strength is often not comparable to the extreme X-ray weakness found in our three quasars. The wind strength and density likely have an Eddington ratio dependence, as heavy or even Compton-thick absorption has been observed in super-Eddington accreting AGNs \citep[e.g.,][]{Longinotti2019,Liu2021}. The three quasars in this study probably have super-Eddington accretion rates that drive powerful and high-density clumpy winds. They have large estimated Eddington ratios as listed in Table~1. We note that these Eddington ratios do not accurately represent the accretion power in the super-Eddington regime as a large fraction of the power may be advected into the SMBH or converted into mechanical energy of the wind \citep[e.g.,][]{Jiang2019}; also see Section~2.1 for further discussion. The large intrinsic X-ray photon indices derived from spectral fitting ($\Gamma\approx2.1$--2.6; Table~3) also suggest super-Eddington accretion rates \citep[e.g.,][]{Shemmer2008,Huang2020}. The NLQ1 classification and the weak [\iona{O}{iii}] emission of PG~1001 and PHL~1811 (Section 2.1) provide additional support for super-Eddington accretion in these two quasars \citep[e.g.,][]{Boroson1992,Sulentic2000,Shen2014}. It is somewhat odd that PG~1001 shows a strong \iona{C}{iv} emission line in its UV spectrum \citep{Brandt2000}, as the strong and high-density wind and/or the thick inner accretion disk should be able to shield the BELR from the nuclear ionization, resulting in a WLQ like PHL~1811. It also appears unusual that PHL~1811 does not show any significant UV absorption lines (i.e., BALs). 
Perhaps the dynamical nature of the wind (e.g., variable $N_{\rm H}$ and covering factor) causes the apparent discrepancy, and multi-epoch UV spectra might be able to shed some light. For example, a $z\approx2$ WLQ has recently been found to undergo BAL transformation \citep{Yi2022}. Geometric effects might also play a role, as the line of sight to the X-ray corona, line of sight to the accretion-disk UV continuum region, and the direction from the nucleus to the BELR are different from each other, and thus the emergent X-ray and UV spectra depend on the physical configuration of the clumpy wind \citep[e.g.,][]{Giustini2019}. Although our proposed obscuration scenario was based on the new sensitive \nustar\ observations of these three quasars, the general connection with obscuration from the disk wind suggests that this scenario may be applicable to the other intrinsically \xray\ weak quasar candidates (e.g., those in \citealt{Nardini2019} and \citealt{Laurenti2021}). Obscuration from the clumpy disk wind would predict X-ray variability from varying obscuration. For the three quasars in this study, PG~1001 and PHL~1811 showed clear soft X-ray variability (e.g., Figure~1); the \hbox{12-day} variability timescale of PHL~1811 does not provide any strong constraints on the wind velocity as a wind clump only needs to move a fraction of the corona size. The \nustar\ observations of PG~1001 suggest some hard \xray\ variability, at least in the spectral shape (Table~2). The PG~1254 \nustar\ observations do not provide sufficient photon statistics to identify hard \xray\ variability. In addition, LBQS~$1442-0011$ likely has hard X-ray variability (see the Appendix). The WLQs have limited multi-epoch observations, and a few of them have been found to vary strongly between X-ray normal and \xray\ weak states \citep{Miniutti2012,Ni2020}. 
A small fraction of super-Eddington accreting AGNs have also been found to vary between X-ray normal and X-ray weak states \citep[e.g.,][]{Liu2019,Boller2021,Liu2021}, and a few also show steep spectra in the low state. {Another characteristic property of such \hbox{X-ray} variability is that there is no contemporaneous optical/UV continuum or emission-line variability, which argues against changes of accretion rates and supports the obscuration scenario. This property also makes these AGNs distinct from the unusual population of ``changing-look'' AGNs (e.g., 1ES~$1927+654$; \citealt{Trakhtenbrot2019,Ricci2021}) that also show extreme X-ray variability but are generally attributed to changes of accretion rates or tidal-disruption events.} Multi-epoch X-ray observations of the intrinsically \xray\ weak quasar candidates might be able to reveal X-ray variability and help clarify their nature. \section{Summary and Future Work} In this paper, we used \nustar\ observations of PG~1001, PG~1254, and PHL~1811 to constrain their hard \xray\ (\hbox{$\gtrsim5$~keV}) weakness and spectral shapes, and thus to investigate the nature of their extreme X-ray weakness. These quasars show very weak soft X-ray emission (Figure 1), and they were previously proposed to be intrinsically X-ray weak, with the X-ray coronae producing weak continuum emission relative to their optical/UV emission (deviating below the $\alpha_{\rm OX}$--$L_{\rm 2500~{\textup{\AA}}}$ relation). The multi-epoch soft and hard X-ray observations are summarized in Table 1. \nustar\ aperture photometry was presented in Section 3.2, and the results are summarized in Table 2 and Figure 3. The \nustar\ spectral shapes for PG~1001 and PHL~1811 appear flat ($\Gamma_{\rm eff}=1.0^{+0.5}_{-0.6}$ and $\Gamma_{\rm eff}=1.4^{+0.8}_{-0.7}$, respectively), while the shape is nominal for PG~1254 ($\Gamma_{\rm eff}=1.8\pm0.3$). 
PG~1001 and PHL~1811 are significantly hard X-ray weak compared to the expectations from their optical/UV emission ($f_{\rm w}$ at 8~keV $\approx26$--74), while PG~1254 is only X-ray weak by a factor of $\approx3$. The PHL~1811 hard X-ray photon index appears smaller than its soft X-ray (0.3--5 keV) photon index ($2.3\pm0.1$). Spectral modeling suggests that its 2015 \xmm\ and \nustar\ spectra cannot be described by an intrinsically weak continuum plus a reasonable amount of Compton reflection (Section 4.1 and Figures 4 \& 5). In light of the new \nustar\ results, a variable X-ray absorber can account for all the observations of these X-ray weak quasars. We propose that, as an alternative to the intrinsic X-ray weakness $+$ X-ray obscuration scenario, the soft and hard X-ray weakness of these quasars can be uniformly explained under an X-ray obscuration-only scenario, without invoking the extra mechanism of intrinsic X-ray weakness (Section 4.2). In this scenario, the weak emergent spectrum is a combination of the transmitted component modified by absorption, the leaked component through a clumpy absorber (including a distant scattered component), and the reprocessed component reflected/scattered from the absorber (Figure~6). This partial-covering absorption scenario provides adequate explanations of the multi-epoch X-ray data of these quasars, and the X-ray variability is mainly induced by the varying column density and leaked fraction (partial-covering fraction) of the absorber (Table 3 and Figure 7). We propose that the absorber is a clumpy dust-free wind launched from the accretion disk (Section 4.3). These quasars probably have super-Eddington accretion rates which result in geometrically thick inner accretion disks and powerful winds with high column densities and large covering factors. 
Although we cannot rigorously prove that intrinsic \xray\ weakness is not present in these systems, the connections of these quasars to other X-ray weak quasars including WLQs and super-Eddington accreting quasars point to a universal wind obscuration scenario for the weak X-ray emission found in type 1 quasars, or even type 1 AGNs in general. Multi-epoch X-ray observations of the intrinsically X-ray weak quasar candidates will further help clarify their nature. Besides variability investigations, deeper \nustar\ observations of PHL~1811 could provide further evidence of heavy X-ray obscuration. Also, higher signal-to-noise ratio and higher spectral resolution observations with future generation X-ray observatories (e.g., Athena; \citealt{Nandra2013}) could reveal spectral features (e.g., the Fe lines in the reprocessed component) that help discriminate between different scenarios. ~\\ C.W. and B.L. acknowledge financial support from the National Natural Science Foundation of China grant 11991053, China Manned Space Project grants NO. CMS-CSST-2021-A05 and NO. CMS-CSST-2021-A06. W.N.B. acknowledges support from the V.M. Willaman Endowment, NASA grants 80NSSC20K0029 and 80NSSC22K0071, and Penn State ACIS Instrument Team Contract SV4-74018 (issued by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060). F.E.B. acknowledges support from ANID-Chile BASAL AFB-170002 and FB210003, FONDECYT Regular 1200495 and 1190818, and Millennium Science Initiative Program – ICN12\_009. S.C.G. thanks the Natural Science and Engineering Research Council of Canada. We have made use of data from the NuSTAR mission, a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory and funded by the National Aeronautics and Space Administration. We thank the NuSTAR Operations, Software and Calibration teams for support with the execution and analysis of these observations. 
This research has made use of the NuSTAR Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA). The Chandra ACIS Team Guaranteed Time Observations (GTO) utilized were selected by the ACIS Instrument Principal Investigator, Gordon P. Garmire, currently of the Huntingdon Institute for X-ray Astronomy, LLC, which is under contract to the Smithsonian Astrophysical Observatory via Contract SV2-82024. \appendix \section{XMM-Newton Observation of LBQS~$1442-0011$} LBQS~$1442-0011$ is a BAL quasar at $z = 2.226$ with a $B$--band magnitude of 18.2 \citep{Gallagher2006}. Its H$\beta$-based single-epoch virial SMBH mass is $\approx8\times10^{9} M_{\Sun}$, and its estimated Eddington ratio is $\approx0.17$ \citep{Yuan2003}. Its previous \chandra\ observations have a co-added depth of 15.9~ks. Through systematic analyses of the \chandra\ observations of the \citet{Gallagher2006} LBQS BAL quasar sample, \citet{Liu2018} identified LBQS~$1442-0011$ as one of the two good candidates for intrinsically X-ray weak quasars based on its significant hard X-ray weakness (by a factor of $f_{\rm w}=12^{+12}_{-8}$) and its nominal hard X-ray spectral shape ($\Gamma_{\rm eff} = 1.9^{+0.9}_{-0.8}$). The other candidate is LBQS~$1203+1530$. Due to the large uncertainties of the $f_{\rm w}$ and $\Gamma_{\rm eff}$ values from the \chandra\ results, we proposed for deeper \xmm\ observations of the two candidates, aiming to improve the parameter constraints. The targets were accepted at priority C, and LBQS~1442--0011 was observed on 2021 February 6 with a nominal exposure time of 87~ks. Unfortunately, the observation was affected significantly by background flares, and the cleaned exposure time is only 34~ks. Thus the sensitivity of the new \xmm\ observation is only comparable to the previous co-added \chandra\ exposure. We processed the \xmm\ data following the procedure described in Section~2.3. 
We chose a smaller source extraction region with a radius of $25\arcsec$ in order to increase the signal-to-noise ratio of this faint source. We also limited the upper-energy bound to 8~keV. The resulting net source counts are $48^{+18}_{-17}$ in the 0.3--2~keV band (rest-frame 1.0--6.5 keV) and $33^{+15}_{-14}$ in the 2--8~keV band (rest-frame 6.5--26 keV). The $\Gamma_{\rm eff,0.3-8}$ value inferred from the band ratio is $1.0^{+0.7}_{-0.6}$. We also fit the spectrum with a power-law model modified by Galactic absorption (\texttt{phabs}\,*\,\texttt{zpow}), and the best-fit $\Gamma$ value ($0.9\pm0.4$) is consistent with the photometric result. The derived factor of hard X-ray weakness is $f_{\rm w}=4\pm2$. Compared to the previous \chandra\ constraints, the \xmm\ results suggest that the hard X-ray spectrum became flatter, and the observed hard X-ray emission became brighter ($f_{\rm w}$ dropped). Therefore, we suggest that the X-ray weakness of LBQS~$1442-0011$ is also caused by variable partial-covering absorption, similar to the three quasars studied here (see Section~4.3 for discussion).
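The band-ratio estimate above can be illustrated with a toy calculation that ignores the instrument response entirely (a strong simplification; the values quoted in the text are of course derived by folding a power law through the detector response). For a photon power law $N(E)\propto E^{-\Gamma}$, one solves for the $\Gamma$ whose 2--8~keV to 0.3--2~keV integrated photon-flux ratio matches the observed count ratio:

```python
from math import log

def band_integral(gamma, lo, hi):
    """Integral of E**-gamma over [lo, hi] keV (arbitrary normalization)."""
    if abs(gamma - 1.0) < 1e-9:
        return log(hi / lo)
    return (hi ** (1.0 - gamma) - lo ** (1.0 - gamma)) / (1.0 - gamma)

def hard_to_soft_ratio(gamma):
    """Hard (2-8 keV) to soft (0.3-2 keV) photon-flux ratio of a power law."""
    return band_integral(gamma, 2.0, 8.0) / band_integral(gamma, 0.3, 2.0)

def solve_gamma(observed_ratio, lo=0.0, hi=3.0, tol=1e-6):
    """Bisection; hard_to_soft_ratio decreases monotonically with gamma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hard_to_soft_ratio(mid) > observed_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Net counts from the text: 33 (2-8 keV) vs 48 (0.3-2 keV).
gamma_toy = solve_gamma(33.0 / 48.0)
```

This response-free sketch returns a photon index near unity, consistent with the quoted $\Gamma_{\rm eff,0.3-8}=1.0^{+0.7}_{-0.6}$, though the agreement should not be over-interpreted given the neglected effective-area weighting.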
Title: A Semi-blind PCA-based Foreground Subtraction Method for 21 cm Intensity Mapping
Abstract: The Principal Component Analysis (PCA) method and the Singular Value Decomposition (SVD) method are widely used for foreground subtraction in 21 cm intensity mapping experiments. We show their equivalence, and point out that the condition for completely clean separation of foregrounds and cosmic 21 cm signal using the PCA/SVD is unrealistic. We propose a PCA-based foreground subtraction method, dubbed "Singular Vector Projection (SVP)" method, which exploits a priori information of the left and/or right singular vectors of the foregrounds. We demonstrate with simulation tests that this new, semi-blind method can reduce the error of the recovered 21 cm signal by orders of magnitude, even if only the left and/or right singular vectors in the largest few modes are exploited. The SVP estimators provide a new, effective approach for 21 cm observations to remove foregrounds and uncover the physics in the cosmic 21 cm signal.
https://export.arxiv.org/pdf/2208.14675
\title{A Semi-blind PCA-based Foreground Subtraction Method for 21~cm Intensity Mapping} \correspondingauthor{Shifan Zuo, Xuelei Chen, Yi Mao} \email{sfzuo@tsinghua.edu.cn (SZ), xuelei@cosmology.bao.ac.cn (XC), ymao@tsinghua.edu.cn (YM)} \author[0000-0003-3858-6361]{Shifan Zuo} \affiliation{Department of Astronomy, Tsinghua University, Beijing 100084, China} \affiliation{Key Laboratory of Computational Astrophysics, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China} \author[0000-0001-6475-8863]{Xuelei Chen} \affiliation{Key Laboratory of Computational Astrophysics, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China} \affiliation{School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China} \affiliation{Department of Physics, College of Sciences, Northeastern University, Shenyang 110819, China} \affiliation{Center of High Energy Physics, Peking University, Beijing 100871, China} \author[0000-0002-1301-3893]{Yi Mao} \affiliation{Department of Astronomy, Tsinghua University, Beijing 100084, China} \keywords{Astronomical methods (1043); Cosmology (343); H I line emission (690); Principal component analysis (1944)} \section{Introduction}\label{S:intro} The 21~cm radiation due to the hyperfine transition of atomic hydrogen \citep{Furlanetto2006,Morales2010,Pritchard2012} has emerged as a promising cosmological probe that can be used to reconstruct the history from cosmic dawn to the epoch of reionization, as well as the large-scale structure of the Universe after reionization. 
A number of ongoing radio experiments aim to carry out 21~cm intensity mapping, including the Precision Array for Probing the Epoch of Reionization (PAPER; \citealp{Parsons2010}), the Giant Meterwave Radio Telescope (GMRT; \citealt{Paciga2013,2017A&A...598A..78I}), the Murchison Widefield Array (MWA; \citealt{Bowman2013}), the LOw Frequency Array (LOFAR; \citealt{Patil2017,Gehlot2019}), the Green Bank Telescope (GBT; \citealt{Chang2010,Masui2013,Switzer2013}), the Parkes Radio Telescope \citep{Anderson2018}, the Owens Valley Long Wavelength Array (OVRO-LWA; \citealt{Eastwood2018,Eastwood2019}), the MeerKAT telescope \citep{2021MNRAS.505.3698W}, the Canadian Hydrogen Intensity Mapping Experiment (CHIME; \citealt{Newburgh2014,2022arXiv220201242C}), the Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX; \citealt{2022JATIS...8a1019C}), the Tianlai experiment \citep{Chen2012,Xu2015,Li:2020ast,Wu:2020jwm}, and the BAO from Integrated Neutral Gas Observations (BINGO; \citealt{Battye2012}). Next-generation experiments with higher sensitivities are also under construction, including the Hydrogen Epoch of Reionization Array (HERA; \citealt{DeBoer2017}), and the Square Kilometre Array (SKA; \citealt{Koopmans2015,Maartens2015}). Detection of the cosmic 21~cm signal is technically very challenging, even for next-generation interferometric arrays with large receiving area, because the astrophysical foregrounds from Galactic and extragalactic sources are four to five orders of magnitude larger than the cosmic 21~cm signal. Advanced techniques for foreground subtraction or avoidance have been developed (see, e.g. the review of \citealt{2020PASP..132f2001L}). However, residual foregrounds remain a major source of systematics for 21~cm intensity mapping. 
These methods include low-order polynomial fitting \citep{Wang2006,Jelic2008,Liu2009a,Liu2009b}, the Principal Component Analysis (PCA) method \citep{Masui2013,Alonso2015,Sazy2015}, the Singular Value Decomposition (SVD) method \citep{Switzer2013,Switzer2015}, the robust PCA method \citep{Zuo2019}, the Independent Component Analysis (ICA) method \citep{Chapman2012,Wolz2014,Alonso2015}, the Generalized Morphological Component Analysis (GMCA) method \citep{Chapman2013}, and the Gaussian Process Regression (GPR) method \citep{Mertens2018}. In particular, the PCA method has been shown to work well in simulation tests \citep{Alonso2015,Sazy2015} and has been applied extensively to the analysis of observational data \citep{Masui2013,2022arXiv220601579C}. It has been demonstrated with mock data from simulations that the residual foreground can be effectively suppressed to a level comparable to, or even below, that of the cosmic 21~cm signal after removing four to five PCA modes (see, e.g. \citealt{Alonso2015,Zuo2019}). These demonstrations typically assume only that the foreground spectrum is smooth in the frequency direction. Furthermore, satisfactory performance can still be obtained by removing six to seven PCA modes when complex instrumental effects, such as a frequency-dependent beam, $1/f$ noise and other imperfections, are taken into account \citep{Alonso2015,Sazy2015}. \citet{Masui2013} measured the 21~cm brightness fluctuation power spectrum at $z\sim 0.8$ by cross-correlating the 21~cm signal observed with the GBT with the large-scale structure traced by optically detected galaxies in the WiggleZ Dark Energy Survey. Nevertheless, 20 PCA modes had to be subtracted from the GBT observations in order to suppress the residual foreground to an acceptably low level \citep{Masui2013}.
Moreover, in a recent, similar detection that cross-correlated the 21~cm signal observed with MeerKAT with galaxies from the WiggleZ data, 30 PCA modes were removed \citep{2022arXiv220601579C}, owing to the modulation of instrumental responses by calibration imperfections, residual Radio Frequency Interference (RFI) and other systematics. Removing such a large number of PCA modes results in severe signal loss and foreground mixing. In fact, even after the removal of 30 PCA modes, the cleaned MeerKAT data have not reached the estimated level of the 21~cm auto-power spectrum \citep{2022arXiv220601579C}, which implies that residual RFI, foregrounds, and other systematics remain in the 21~cm data. To avoid these additive biases, \citet{Masui2013,2022arXiv220601579C} detected the 21~cm brightness fluctuations through the cross-correlation of the 21~cm data with galaxy surveys. This difficulty calls for improvements of the PCA method, so that the foregrounds can be removed cleanly enough to detect the cosmic 21~cm signal through its auto-power spectrum. The standard PCA method is a {\it blind} method, in the sense that no prior information about the signal or foregrounds is assumed beyond the generic assumption that the foreground spectra are smooth in frequency. Some {\it semi-blind} methods have been proposed that make assumptions about the foregrounds and/or signal which are weaker than the explicit foreground ansatz of the polynomial fitting method, and these have been demonstrated to alleviate the problem of signal loss in foreground subtraction.
These methods include the Karhunen-Lo\`eve (KL) transform method \citep{Shaw2014,Shaw2015}, which employs the modeled foreground and signal covariance matrices to form a set of modes ordered by their signal-to-contaminant ratios, and the Generalized Needlet Internal Linear Combination (GNILC) method \citep{Olivari2016}, which uses the H~{\sc i} covariance matrix to separate the foregrounds from the signal in a wavelet (or needlet) space. While it is indeed easier to model the covariance matrices of the foregrounds and signal than the foregrounds or the signal {\it per se}, the overall magnitude of the covariance matrix may be highly biased, because the foregrounds are four to five orders of magnitude stronger than the cosmic 21~cm signal. If thermal noise is taken into account, modeling the covariance matrix becomes even more challenging because of the correlation between the noise and the signal. These issues impose practical difficulties in applying these methods to observational data. In this paper, instead, we propose a new semi-blind PCA-based method for foreground subtraction, dubbed {\it Singular Vector Projection} (SVP). The SVP method assumes that the covariance matrix of the foregrounds can be modeled {\it a priori}, either theoretically or from observational data; the left and/or right singular vectors of the foregrounds can then be obtained by eigen-decomposition of the foreground covariance matrix in frequency and/or pixel space. Note that the overall magnitude of the covariance matrix affects its eigenvalues, but not its eigenvectors, i.e.\ the left and right singular vectors. If we can design the SVP estimators in such a manner that they contain the singular vectors, but not the eigenvalues, as will be shown below, then the estimators are independent of the overall magnitude of the covariance matrix. In other words, the estimators are unaffected even if the overall magnitude of the covariance matrix is biased.
We will also demonstrate the effectiveness of SVP in reducing the signal loss and foreground mixing in foreground subtraction. The rest of this paper is organized as follows. In \S\ref{S:mo}, we briefly review the PCA and SVD methods for foreground subtraction, show their equivalence, and derive the ideal conditions for a completely clean separation of the foregrounds and the cosmic 21~cm signal (i.e.\ without any signal loss or foreground mixing). We introduce our SVP method for foreground subtraction in \S\ref{S:sigvec}, and demonstrate its effectiveness with simulation tests in \S\ref{S:ex}. We make concluding remarks in \S\ref{S:con}. We leave some technical details (the mathematical proof of the inequalities for signal losses) to Appendix~\ref{S:csl}. \section{The PCA and SVD methods} \label{S:mo} In this section, we briefly review the standard PCA and SVD methods for foreground subtraction. We will show their equivalence, and prove the ideal, yet unrealistic, conditions for a completely clean separation of the foregrounds and the cosmic 21~cm signal. \subsection{Problem Setup} The 21~cm observational data is presented as a 3D image cube with two angular directions and one frequency direction. The two angular directions can be combined into a single pixel index of dimension $p$, where $p$ is the number of image pixels, and therefore the dataset is represented in the form of a matrix $\mat{D} \in \mathbb{R}^{n \times p}$, where $n$ is the number of frequency bins. This general representation of the dataset is valid for the observation of a patch of the sky as well as the full sky. Without loss of generality, we assume $n \le p$ in the analysis below, i.e.\ the number of frequency bins is equal to or less than the number of pixels in the map. This is often the case, and the conclusions herein are also valid for the case of $n > p$.
The observational data is the sum of the foregrounds and of the cosmic 21~cm signal plus noise, \begin{equation} \label{eq:DFN} \mat{D} = \mat{F} + \mat{N}, \end{equation} where the total foregrounds are denoted as $\mat{F}$, and $\mat{N}$ includes both the cosmic 21~cm signal and the noise. \subsection{The PCA Method} \label{S:pca} We follow \citet{Alonso2015,Sazy2015,Masui2013} for the review of the PCA method for foreground subtraction. Consider the covariance matrix $\mat{R}\in \mathbb{R}^{n \times n}$ of the dataset in frequency space, $\mat{R} = \mat{D} \mat{D}^{\rm T} $, and perform the eigen-decomposition \begin{equation} \label{eq:RDD} \mat{R} = \mat{U} \mat{\Lambda} \mat{U}^{\rm T}\,. \end{equation} Here, $\mathcal{O}^{\rm T}$ denotes the transpose of a matrix $\mathcal{O}$. $\mat{\Lambda}$ is an $n \times n$ diagonal matrix whose diagonal elements are the eigenvalues $\left\{\lambda_{i}\right\}$ of the matrix $\mat{R}$. The matrix $\mat{U}$ is an $n \times n$ real orthogonal matrix whose $i^{\rm th}$ column is the eigenvector of $\mat{R}$ corresponding to the $i^{\rm th}$ eigenvalue $\lambda_{i}$. Each eigenvalue $\lambda_{i}$ gives the variance of the corresponding eigenmode, i.e.\ the contribution of its eigenvector to the total sky variance. Since the foregrounds overwhelmingly dominate the data, we can project out the dominant components by selecting the $m$ largest eigenvalues and their corresponding eigenvectors, so the foregrounds and the 21~cm signal can be estimated, respectively, by \begin{eqnarray} \mat{F}_{\rm PCA} &=& \mat{U} \mat{\Pi}_{n,m} \mat{U}^{\rm T} \mat{D}\,, \label{eq:Fp} \\ \mat{N}_{\rm PCA} &=& \mat{D} - \mat{F}_{\rm PCA} = \mat{U} (\mat{I}_{n} - \mat{\Pi}_{n,m}) \mat{U}^{\rm T} \mat{D}\,. 
\label{eq:Np} \end{eqnarray} Here, $\mat{I}_{n}$ is the $n \times n$ identity matrix, and $\mat{\Pi}_{n,m}$ is a projection matrix from dimension $n$ to $m$, i.e.\ an $n \times n$ diagonal matrix in which the $m$ diagonal elements corresponding to the selected eigenvalues are unity, and all other diagonal elements are zero. \subsection{The SVD Method} \label{S:svd} The dataset $\mat{D}$ can be decomposed with the SVD as \begin{equation} \label{eq:Dsvd} \mat{D} = \mat{U}' \mat{S} \mat{V}^{\rm T}\,. \end{equation} Here, $\mat{S} \in \mathbb{R}^{k \times k}$ is a diagonal matrix whose positive diagonal elements $\left\{s_{i}\right\}$ are the singular values, where the integer $k \le \text{min}(n, p)$ is the number of singular values of the dataset. The matrices $\mat{U}' \in \mathbb{R}^{n \times k}$ and $\mat{V} \in \mathbb{R}^{p \times k}$ contain the corresponding left and right singular vectors, respectively. These two singular-vector matrices satisfy the conditions $\mat{U}'^{\rm T} \mat{U}' = \mat{I}_{k}$ and $\mat{V}^{\rm T} \mat{V} = \mat{I}_{k}$, where $\mat{I}_{k}$ is the $k \times k$ identity matrix. However, in general, they do not necessarily satisfy $\mat{U}' \mat{U}'^{\rm T} = \mat{I}_{n}$ (unless $k = n$) or $\mat{V} \mat{V}^{\rm T} = \mat{I}_{p}$ (unless $k = p$). For this reason, matrices such as $\mat{U}'$ and $\mat{V}$ are called {\it partial} orthogonal matrices, because they contain some columns of an orthogonal matrix. Using the SVD, the foregrounds can be projected out by selecting the $m$ largest singular-value modes \citep{Switzer2013,Switzer2015}, similar to the PCA. The foregrounds and the 21~cm signal can be estimated, respectively, by \begin{eqnarray} \mat{F}_{\text{SVD}} &=& \mat{U}' \mat{\Pi}_{k,m} \mat{S} \mat{V}^{\rm T}\,, \label{eq:Fsvd} \\ \mat{N}_{\text{SVD}} &=& \mat{D} - \mat{F}_{\text{SVD}} = \mat{U}' (\mat{I}_{k} - \mat{\Pi}_{k,m}) \mat{S} \mat{V}^{\rm T}\,. 
\label{eq:Nsvd} \end{eqnarray} \subsection{The Equivalence of PCA and SVD} Note that the dimension $k$ of the matrix $\mat{S}$ is also the number of positive eigenvalues of the covariance matrix $\mat{R}$. The eigenvalues of $\mat{R}$ are non-negative, which means that there are $n-k$ zero eigenvalues of $\mat{R}$. Below, we assume that the $k$ positive eigenvalues of $\mat{R}$ (or equivalently the $k$ singular values in $\mat{S}$) are in descending order. Substituting Eq.~(\ref{eq:Dsvd}) into Eq.~(\ref{eq:RDD}), we get $\mat{U} \mat{\Lambda} \mat{U}^{\rm T} = \mat{U}' \mat{S}^{2} \mat{U}'^{\rm T}$. It is straightforward to prove that $\mat{U} \mat{\Lambda} \mat{U}^{\rm T} = \mat{U}_{k} \mat{\Lambda}_{k} \mat{U}^{\rm T}_{k}$, where $\mat{\Lambda}_{k}$ is the $k \times k$ subset of $\mat{\Lambda}$, i.e.\ the diagonal matrix whose diagonal elements are the $k$ positive eigenvalues of $\mat{R}$, and $\mat{U}_{k}$ is the $n \times k$ subset of $\mat{U}$, i.e.\ the matrix whose columns are the first $k$ columns of $\mat{U}$, corresponding to the $k$ positive eigenvalues. So we find $\mat{\Lambda}_{k} = \mat{S}^{2}$ and $\mat{U}_{k} = \pm \mat{U}'$. The undetermined sign is due to the fact that $\mat{U}' \mat{S} \mat{V}^{\rm T} = (-\mat{U}') \mat{S} (-\mat{V}^{\rm T})$. Ignoring this sign degeneracy, we have \begin{equation} \mat{U}' = \mat{U}_{k}\,. \label{eq:id0} \end{equation} It is straightforward to prove that \begin{eqnarray} \mat{U} \mat{\Pi}_{n,m} \mat{U}^{\rm T} &=& \mat{U}_{k} \mat{\Pi}_{k,m} \mat{U}^{\rm T}_{k}\,, \label{eq:id1}\\ \mat{D} = \mat{U} \mat{U}^{\rm T} \mat{D} &=& \mat{U}_{k} \mat{U}_{k}^{\rm T} \mat{D} \,.\label{eq:id2} \end{eqnarray} The proof uses the identity $\mat{U}_{k}^{\rm T} \mat{U}_{k} = \mat{I}_{k}$. Substituting Eq.~(\ref{eq:Dsvd}) into Eqs.~(\ref{eq:Fp}) and (\ref{eq:Np}), therefore, we find $ \mat{F}_{\text{PCA}} = \mat{F}_{\text{SVD}} $ and $ \mat{N}_{\text{PCA}}= \mat{N}_{\text{SVD}}$. 
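This equivalence is also easy to verify numerically. The following sketch (the toy dimensions, the rank-3 mock foreground, and the noise level are assumptions chosen purely for illustration) builds the PCA estimate from the eigen-decomposition of $\mat{R} = \mat{D}\mat{D}^{\rm T}$ and the SVD estimate from the truncated SVD of $\mat{D}$, and checks that they coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 8, 100, 3   # frequency bins, pixels, number of removed modes

# Toy data: a bright rank-3 "foreground" plus a weak random "signal"
D = rng.normal(size=(n, 3)) @ rng.normal(size=(3, p)) * 100 + rng.normal(size=(n, p))

# PCA route: eigen-decompose the frequency covariance R = D D^T
lam, U = np.linalg.eigh(D @ D.T)    # eigenvalues in ascending order
U = U[:, ::-1]                      # reorder eigenvectors to descending
Pi = np.zeros((n, n))
Pi[:m, :m] = np.eye(m)              # projector Pi_{n,m} onto the top m modes
N_pca = D - U @ Pi @ U.T @ D

# SVD route: subtract the m largest singular-value modes of D
Us, s, Vt = np.linalg.svd(D, full_matrices=False)
N_svd = D - Us[:, :m] * s[:m] @ Vt[:m]

# The two recovered "signals" agree to machine precision
assert np.allclose(N_pca, N_svd)
```

The agreement holds because both routes subtract the projection of $\mat{D}$ onto the same top-$m$ left-singular subspace.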
This shows that the foreground subtraction results of the PCA and SVD methods are equivalent. As such, the estimators in the PCA/SVD are \begin{eqnarray} \mat{F}_{\rm PCA/SVD} &=& \mat{U} \mat{\Pi} \mat{U}^{\rm T} \mat{D} = \mat{U} \mat{\Pi} \mat{S} \mat{V}^{\rm T}\,, \label{eq:Fpv} \\ \mat{N}_{\rm PCA/SVD} &=& \mat{U} (\mat{I} - \mat{\Pi}) \mat{U}^{\rm T} \mat{D} = \mat{U} (\mat{I} - \mat{\Pi}) \mat{S} \mat{V}^{\rm T}\,. \label{eq:Npv} \end{eqnarray} For simplicity, hereafter in this paper, we will drop the prime ($'$) in $\mat{U}'$ (because of Eq.~\ref{eq:id0}), and drop the subscripts in $\mat{I}$, $\mat{\Pi}$, and $\mat{U}$, because the dimensionalities of these matrices can be understood both in the context of PCA (where $\mat{U}$ is an $n \times n$ matrix) and in the context of SVD (where $\mat{U}$ is an $n \times k$ matrix) due to Eqs.~(\ref{eq:id1}) and (\ref{eq:id2}), as long as the interpretation of the dimensionalities is self-consistent. \subsection{Conditions for Completely Clean Separation} \label{S:cond} In this subsection, we attempt to answer the following question in the framework of the PCA/SVD method: under what conditions can we separate the foregrounds from the 21~cm signal so cleanly that there is no signal loss or foreground mixing? Consider the SVD of the foregrounds and the signal, respectively, \begin{eqnarray} \mat{F} & = & \mat{U}_{f} \mat{S}_{f} \mat{V}^{\rm T}_{f}\,,\\ \mat{N} &=& \mat{U}_{n} \mat{S}_{n} \mat{V}^{\rm T}_{n}\,. \end{eqnarray} The subscripts $f$ and $n$ denote the foregrounds and the signal (with noise), respectively. If the foregrounds and the signal are fully separated, then \begin{align} \mat{U}_{f} \mat{S}_{f} \mat{V}^{\rm T}_{f} &= \mat{U} \mat{\Pi} \mat{S} \mat{V}^{\rm T}, \notag \\ \mat{U}_{n} \mat{S}_{n} \mat{V}^{\rm T}_{n} &= \mat{U} (\mat{I} - \mat{\Pi}) \mat{S} \mat{V}^{\rm T}. 
\notag \end{align} Since the SVD is unique, the completely clean separation can be realized if and only if the singular values of $\mat{F}$ are the $m$ largest positive singular values of $\mat{D}$, and the columns of $\mat{U}_{f}$ ($\mat{V}_{f}$) are the corresponding singular vectors in $\mat{U}$ ($\mat{V}$), while the singular values of $\mat{N}$ are the remaining positive singular values of $\mat{D}$, and the columns of $\mat{U}_{n}$ ($\mat{V}_{n}$) are the corresponding singular vectors in $\mat{U}$ ($\mat{V}$). Thus, we find the conditions: (1) $\mat{U}_{f}^{\rm T} \mat{U}_{n} = \mat{0}$; (2) $\mat{V}_{f}^{\rm T} \mat{V}_{n} = \mat{0}$; (3) $\min{\mat{S}_{f}} > \max{\mat{S}_{n}}$. The first two conditions are equivalent to the orthogonality conditions \begin{align} \label{eq:FN} \mat{F}^{\rm T} \mat{N} &= \mat{N}^{\rm T} \mat{F} = \mat{0}, \notag \\ \mat{F} \mat{N}^{\rm T} &= \mat{N} \mat{F}^{\rm T} = \mat{0}. \end{align} The first condition means that there is no pixel-wise cross-correlation between the foregrounds and the signal, so the pixel covariance completely separates the contributions from the foregrounds and from the signal, i.e.\ $\mat{D}^{\rm T} \mat{D} = \mat{F}^{\rm T} \mat{F} + \mat{F}^{\rm T} \mat{N} + \mat{N}^{\rm T} \mat{F} + \mat{N}^{\rm T} \mat{N} = \mat{F}^{\rm T} \mat{F} + \mat{N}^{\rm T} \mat{N} $. Similarly, the second condition means that there is no frequency-wise cross-correlation between the foregrounds and the signal. Eq.~(\ref{eq:FN}) is a necessary condition for complete separation, but there is another implicit condition in practice --- the number $m$ of PCA/SVD modes of the foregrounds should be known {\it a priori}. If $m$ is known, the foregrounds can be reconstructed from the left and right singular vectors corresponding to the $m$ largest singular values of $\mat{D}$, and the signal is recovered from the remaining singular vectors. 
However, we note that these conditions for the complete separation of the foregrounds and the signal are ideal and not satisfied in most cases. As a result, signal loss and foreground mixing are unavoidable in practice. \subsection{Signal Loss, Foreground Mixing and Recovery Error} Substituting Eq.~(\ref{eq:DFN}) into Eq.~(\ref{eq:Npv}), the estimated signal with the PCA/SVD is \begin{align} \label{eq:Np2} \mat{N}_{\rm PCA/SVD} &= \mat{U} (\mat{I} - \mat{\Pi}) \mat{U}^{\rm T} \mat{F} + \mat{U} (\mat{I} - \mat{\Pi}) \mat{U}^{\rm T} \mat{N} \notag \\ &= \mat{U} (\mat{I} - \mat{\Pi}) \mat{U}^{\rm T} \mat{U}_{f} \mat{S}_{f} \mat{V}_{f}^{\rm T} + \mat{U} (\mat{I} - \mat{\Pi}) \mat{U}^{\rm T} \mat{U}_{n} \mat{S}_{n} \mat{V}_{n}^{\rm T}. \end{align} For the PCA/SVD, the {\it foreground mixing} is \begin{equation} \label{eq:Fmix} \mat{F}^{\text{mix}}_{\rm PCA/SVD} = \mat{U} (\mat{I} - \mat{\Pi}) \mat{U}^{\rm T} \mat{F}, \end{equation} and the {\it signal loss} is \begin{equation} \label{eq:Nloss} \mat{N}^{\text{loss}}_{\rm PCA/SVD} = \mat{N} - \mat{U} (\mat{I} - \mat{\Pi}) \mat{U}^{\rm T} \mat{N} = \mat{U} \mat{\Pi} \mat{U}^{\rm T} \mat{N}. \end{equation} Since $\mat{U}^{\rm T} \mat{U}_{f} \ne \mat{0}$ and $\mat{U}^{\rm T} \mat{U}_{n} \ne \mat{0}$ in general, $\mat{F}^{\text{mix}}_{\rm PCA/SVD}$ and $\mat{N}^{\text{loss}}_{\rm PCA/SVD}$ are non-zero, i.e.\ in the standard PCA/SVD method, there is both signal loss and foreground mixing if the conditions in \S\ref{S:cond} are not met. We define the {\it recovery error} as the difference between the true signal and the recovered signal, $\Delta \mat{N} = \mat{N} - \hat{\mat{N}}$. For the PCA/SVD, the recovery error is \begin{equation} \Delta \mat{N}_{\rm PCA/SVD} = \mat{N}^{\text{loss}}_{\rm PCA/SVD} - \mat{F}^{\text{mix}}_{\rm PCA/SVD}\,. 
\end{equation} \section{Singular Vector Projection} \label{S:sigvec} In what we call the ``Singular Vector Projection'' (SVP) method, we propose to exploit the information of the singular vectors of the foregrounds, $\mat{U}_f$ and/or $\mat{V}_f$, if they are known {\it a priori}, to improve the accuracy of foreground subtraction. We propose the new estimators in \S\ref{S:u} and \S\ref{S:rf}, provide a pedagogical example in \S\ref{S:se}, and discuss the feasibility of obtaining the {\it a priori} information of $\mat{U}_f$ and/or $\mat{V}_f$ in \S\ref{S:mv}. \subsection{The SVP Estimators}\label{S:u} We propose four estimators for the data $\mat{D}$, as follows. \begin{align} \mat{N}_{\text{L}} &= \mat{D} - \mat{U}_{f} \mat{U}_{f}^{\rm T} \mat{D} \,, \\ \mat{N}_{\text{R}} &= \mat{D} - \mat{D} \mat{V}_{f} \mat{V}_{f}^{\rm T} \,, \\ \mat{N}_{\text{B}} &= \mat{D} - \mat{U}_{f} \mat{U}_{f}^{\rm T} \mat{D} \mat{V}_{f} \mat{V}_{f}^{\rm T} \,,\\ \mat{N}_{\text{D}} &= \mat{D} - \mat{U}_{f} (\mat{U}_{f}^{\rm T} \mat{D} \mat{V}_{f})_{\text{diag}} \mat{V}_{f}^{\rm T} \,. \end{align} Here, $\mathcal{O}_{\text{diag}}$ denotes the diagonal matrix whose elements are the diagonal elements of the matrix $\mathcal{O}$. If only the left (right) singular vectors of the foregrounds, $\mat{U}_f$ ($\mat{V}_f$), are known {\it a priori}, then the estimator $\mat{N}_{\text{L}}$ ($\mat{N}_{\text{R}}$) can be applied. If both the left and right singular vectors of the foregrounds are known {\it a priori}, then the estimators $\mat{N}_{\text{B}}$ and $\mat{N}_{\text{D}}$ can be applied. The subscripts ``L'', ``R'', ``B'', and ``D'' stand for ``left'', ``right'', ``both'', and ``diagonal'', respectively. 
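The four estimators translate directly into a few lines of linear algebra. In the sketch below (the dimensions, the toy singular values, and the random orthonormal bases are assumptions for illustration only), each estimator is applied to a pure foreground $\mat{F} = \mat{U}_f \mat{S}_f \mat{V}_f^{\rm T}$ and returns zero to machine precision, i.e.\ the foreground is projected out exactly:

```python
import numpy as np

def svp_estimators(D, Uf, Vf):
    """The four SVP signal estimators for data D (n x p), given the left/right
    foreground singular vectors Uf (n x k) and Vf (p x k)."""
    N_L = D - Uf @ (Uf.T @ D)                  # left singular vectors only
    N_R = D - (D @ Vf) @ Vf.T                  # right singular vectors only
    N_B = D - Uf @ (Uf.T @ D @ Vf) @ Vf.T      # both
    core = np.diag(np.diag(Uf.T @ D @ Vf))     # keep only the diagonal
    N_D = D - Uf @ core @ Vf.T
    return N_L, N_R, N_B, N_D

# Toy foreground F = Uf Sf Vf^T with random orthonormal singular vectors
rng = np.random.default_rng(1)
n, p, k = 10, 200, 3
Uf, _ = np.linalg.qr(rng.normal(size=(n, k)))  # orthonormal columns
Vf, _ = np.linalg.qr(rng.normal(size=(p, k)))
F = Uf @ np.diag([100.0, 30.0, 10.0]) @ Vf.T

# Every estimator removes the pure foreground completely: no mixing
for est in svp_estimators(F, Uf, Vf):
    assert np.allclose(est, 0.0)
```

When the estimators are applied to $\mat{D} = \mat{F} + \mat{N}$ instead, the residual differs from $\mat{N}$ only by the signal-loss terms given below.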
It is straightforward to prove the following results, \begin{align} \mat{N}_{\text{L}} &= \mat{N} - \mat{U}_{f} \mat{U}_{f}^{\rm T} \mat{N}, \label{eq:SVP-L} \\ \mat{N}_{\text{R}} &= \mat{N} - \mat{N} \mat{V}_{f} \mat{V}_{f}^{\rm T}, \label{eq:SVP-R} \\ \mat{N}_{\text{B}} &= \mat{N} - \mat{U}_{f} \mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f} \mat{V}_{f}^{\rm T}, \label{eq:SVP-B} \\ \mat{N}_{\text{D}} &= \mat{N} - \mat{U}_{f} (\mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f})_{\text{diag}} \mat{V}_{f}^{\rm T}. \label{eq:SVP-D} \end{align} This shows that these estimators project out the foregrounds $\mat{F}$ completely, i.e.\ there is no foreground mixing. In fact, these are the only four estimators that meet this requirement. This is an advantage over the blind PCA/SVD method: given that the foregrounds are several orders of magnitude stronger than the signal, even a small residual foreground mixing can result in a large recovery error. The proof of Eqs.~(\ref{eq:SVP-L})--(\ref{eq:SVP-B}) uses the identity $\mat{U}_{f} \mat{U}_{f}^{\rm T} \mat{F} = \mat{F} \mat{V}_{f} \mat{V}_{f}^{\rm T} = \mat{F}$, which follows from $\mat{U}_f^{\rm T} \mat{U}_f = \mat{I}$ and $\mat{V}_f^{\rm T} \mat{V}_f = \mat{I}$. The proof of Eq.~(\ref{eq:SVP-D}) uses the identity $(\mat{U}_{f}^{\rm T} \mat{F} \mat{V}_{f})_{\text{diag}} = (\mat{S}_f)_{\text{diag}} = \mat{S}_f$, because $\mat{S}_f$ is diagonal. The recovery error and signal loss for these estimators are as follows, respectively. 
\begin{align} \Delta \mat{N}_{\text{L}} = \mat{N}_{\text{L}}^{\text{loss}} &= \mat{U}_{f} \mat{U}_{f}^{\rm T} \mat{N}\,, \notag \\ \Delta \mat{N}_{\text{R}} = \mat{N}_{\text{R}}^{\text{loss}} &= \mat{N} \mat{V}_{f} \mat{V}_{f}^{\rm T}\,, \notag \\ \Delta \mat{N}_{\text{B}} = \mat{N}_{\text{B}}^{\text{loss}} &= \mat{U}_{f} \mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f} \mat{V}_{f}^{\rm T}\,, \notag \\ \Delta \mat{N}_{\text{D}} = \mat{N}_{\text{D}}^{\text{loss}} &= \mat{U}_{f} (\mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f})_{\text{diag}} \mat{V}_{f}^{\rm T}\,. \label{eq:4loss} \end{align} If the condition for the left singular vectors, $\mat{U}_{f}^{\rm T} \mat{U}_{n} = \mat{0}$, is met, then $\Delta \mat{N}_{\text{L}} = \Delta \mat{N}_{\text{B}} = \Delta \mat{N}_{\text{D}} = \mat{0}$; if the condition for the right singular vectors, $\mat{V}_{n}^{\rm T} \mat{V}_{f} = \mat{0}$, is met, then $\Delta \mat{N}_{\text{R}} = \Delta \mat{N}_{\text{B}} = \Delta \mat{N}_{\text{D}} = \mat{0}$. Clearly, using the estimators $\mat{N}_{\text{B}}$ and $\mat{N}_{\text{D}}$, which exploit the information of both the left and right singular vectors, is more likely to yield a smaller recovery error than using either $\mat{N}_{\text{L}}$ or $\mat{N}_{\text{R}}$, which use only the left or right singular vectors. In fact, roughly speaking, the ``magnitudes'' of the recovery errors for these estimators satisfy the following relations: $\Delta \mat{N}_{\text{D}} \le \Delta \mat{N}_{\text{B}} \le \Delta \mat{N}_{\text{L}}$ and $\Delta \mat{N}_{\text{D}} \le \Delta \mat{N}_{\text{B}} \le \Delta \mat{N}_{\text{R}}$. We leave the precise definition of these ``magnitudes'' and the proof of these relations to Appendix~\ref{S:csl}. Note that these inequality relations are valid for the matrices as a whole, and do not necessarily hold for each individual frequency bin. 
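These orderings are easy to check numerically. In the sketch below we take the ``magnitude'' to be the Frobenius norm (an assumption made here for illustration; the precise definition is given in Appendix~\ref{S:csl}) and use a random matrix as a stand-in for the signal-plus-noise $\mat{N}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 10, 200, 3
Uf, _ = np.linalg.qr(rng.normal(size=(n, k)))  # toy foreground singular vectors
Vf, _ = np.linalg.qr(rng.normal(size=(p, k)))
N = rng.normal(size=(n, p))                    # stand-in for signal + noise

# Signal-loss terms (= recovery errors) of the four SVP estimators
dN_L = Uf @ (Uf.T @ N)
dN_R = (N @ Vf) @ Vf.T
dN_B = Uf @ (Uf.T @ N @ Vf) @ Vf.T
dN_D = Uf @ np.diag(np.diag(Uf.T @ N @ Vf)) @ Vf.T

fro = np.linalg.norm  # Frobenius norm by default for matrices
assert fro(dN_D) <= fro(dN_B) <= fro(dN_L)
assert fro(dN_D) <= fro(dN_B) <= fro(dN_R)
```

Intuitively, each extra projection (onto the span of $\mat{V}_f$, and then onto the diagonal of the core matrix) can only shrink the Frobenius norm of the retained signal.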
\subsection{SVP with Incomplete Singular Vectors} \label{S:rf} In \S\ref{S:u}, we implicitly assumed that the left and/or right singular vectors of the foregrounds, $\mat{U}_f \in \mathbb{R}^{n \times k}$ and $\mat{V}_f \in \mathbb{R}^{p \times k}$, can be well modeled or measured {\it a priori} for {\it all} $k$ modes. In practice, this might not be satisfied. Consider a relaxed condition in which the left and/or right singular vectors of the foregrounds can be modeled {\it a priori} for only a small number of modes, corresponding to the $l$ largest singular values, labeled as $\mat{U}_{f_{1}} \in \mathbb{R}^{n \times l}$ and $\mat{V}_{f_{1}} \in \mathbb{R}^{p \times l}$, while the other $k-l$ singular vectors of the foregrounds, labeled as $\mat{U}_{f_2} \in \mathbb{R}^{n \times (k-l)}$ and $\mat{V}_{f_2} \in\mathbb{R}^{p \times (k-l)}$, are not known. The total foregrounds can be written as the sum of two parts, $\mat{F} = \mat{F}_{1} + \mat{F}_{2}$, where \begin{align} \mat{F}_{1} &= \mat{U}_{f_1} \mat{S}_{f_1} \mat{V}^{\rm T}_{f_1}, \notag \\ \mat{F}_{2} &= \mat{U}_{f_2} \mat{S}_{f_2} \mat{V}^{\rm T}_{f_2}. \label{eq:F12} \end{align} Here, $\mat{S}_{f_1}\in\mathbb{R}^{l \times l}$ is the diagonal matrix of the $l$ largest singular values, and $\mat{S}_{f_2} \in\mathbb{R}^{(k-l) \times (k-l)}$ is the diagonal matrix of the remaining $k-l$ singular values. We assume that the former are significantly larger than the latter. In this case with incomplete information on the singular vectors of the foregrounds, the estimators for the data $\mat{D}$ are as follows. 
\begin{align} \mat{N}_{\text{L}} &= \mat{D} - \mat{U}_{f_1} \mat{U}_{f_1}^{\rm T} \mat{D} \,, \\ \mat{N}_{\text{R}} &= \mat{D} - \mat{D} \mat{V}_{f_1} \mat{V}_{f_1}^{\rm T} \,, \\ \mat{N}_{\text{B}} &= \mat{D} - \mat{U}_{f_1} \mat{U}_{f_1}^{\rm T} \mat{D} \mat{V}_{f_1} \mat{V}_{f_1}^{\rm T} \,,\\ \mat{N}_{\text{D}} &= \mat{D} - \mat{U}_{f_1} (\mat{U}_{f_1}^{\rm T} \mat{D} \mat{V}_{f_1})_{\text{diag}} \mat{V}_{f_1}^{\rm T} \,. \end{align} Similar to the results in \S\ref{S:u}, it is straightforward to prove that \begin{align} \mat{N}_{\text{L}} &= \mat{N} - \mat{U}_{f_1} \mat{U}_{f_1}^{\rm T} \mat{N} + \mat{F}_{2} \,, \label{eqn:L1} \\ \mat{N}_{\text{R}} &= \mat{N} - \mat{N} \mat{V}_{f_1} \mat{V}_{f_1}^{\rm T} + \mat{F}_{2} \,, \\ \mat{N}_{\text{B}} &= \mat{N} - \mat{U}_{f_1} \mat{U}_{f_1}^{\rm T} \mat{N} \mat{V}_{f_1} \mat{V}_{f_1}^{\rm T}+\mat{F}_{2} \,, \\ \mat{N}_{\text{D}} &= \mat{N} - \mat{U}_{f_1} (\mat{U}_{f_1}^{\rm T} \mat{N} \mat{V}_{f_1})_{\text{diag}} \mat{V}_{f_1}^{\rm T} + \mat{F}_{2} \,.\label{eqn:D1} \end{align} This shows that these estimators project out the major foreground component $\mat{F}_{1}$, but the unknown, minor foreground component $\mat{F}_{2}$ is left as the residual foreground mixing, $\mat{F}^{\text{mix}} = \mat{F}_{2}$. The proof of Eqs.~(\ref{eqn:L1})--(\ref{eqn:D1}) uses the orthogonality relations $\mat{U}^{\rm T}_{f_1} \mat{U}_{f_2} = \mat{0}$ and $\mat{V}^{\rm T}_{f_1} \mat{V}_{f_2} = \mat{0}$, and the identities $\mat{U}_{f_1} \mat{U}^{\rm T}_{f_1} \mat{F}_1 = \mat{F}_1 \mat{V}_{f_1} \mat{V}_{f_1}^{\rm T} = \mat{U}_{f_1} \mat{U}^{\rm T}_{f_1} \mat{F}_1 \mat{V}_{f_1} \mat{V}_{f_1}^{\rm T} = \mat{U}_{f_1} (\mat{U}^{\rm T}_{f_1} \mat{F}_1 \mat{V}_{f_1})_{\text{diag}} \mat{V}_{f_1}^{\rm T} = \mat{F}_{1}$. The recovery errors for these estimators are as follows. 
\begin{align} \Delta \mat{N}_{\text{L}} &= \mat{U}_{f_1} \mat{U}_{f_1}^{\rm T} \mat{N} - \mat{F}_{2}\,, \notag \\ \Delta \mat{N}_{\text{R}} &= \mat{N} \mat{V}_{f_1} \mat{V}_{f_1}^{\rm T} - \mat{F}_{2}\,, \notag \\ \Delta \mat{N}_{\text{B}} &= \mat{U}_{f_1} \mat{U}_{f_1}^{\rm T} \mat{N} \mat{V}_{f_1} \mat{V}_{f_1}^{\rm T}-\mat{F}_{2} \,, \notag \\ \Delta \mat{N}_{\text{D}} &= \mat{U}_{f_1} (\mat{U}_{f_1}^{\rm T} \mat{N} \mat{V}_{f_1})_{\text{diag}} \mat{V}_{f_1}^{\rm T} - \mat{F}_{2} \,. \label{eq:svpinc} \end{align} Finally, we note that there is a sign (and, more generally, normalization) freedom in the singular vectors. However, in our SVP estimators the singular vectors always appear in pairs, such as $\mat{U}_{f} \mat{U}_{f}^{\rm T}$ and $\mat{V}_{f} \mat{V}_{f}^{\rm T}$, so the estimators are independent of this freedom. Also, the estimators do not depend on the eigenvalues $\mat{\Lambda}_{f}$ of the covariance matrix, so they are not affected by the overall magnitude of the covariance matrix. In other words, even if the overall amplitude of the foreground covariance matrix inferred from observations is biased, the SVP estimators are unaffected. \subsection{Pedagogical Example} \label{S:se} In this subsection, we use a pedagogical example to illustrate the SVP. Consider a simple case where the image $\mat{D}$ has two frequency bins and two pixels. Suppose, for simplicity, that the foregrounds form a rank-one matrix, $\mat{F} = s_{f} \vec{u}_{f} \vec{v}_f^{\rm T}$, and the signal is a rank-two matrix, $\mat{N} = s_{n,1} \vec{u}_{n,1} \vec{v}_{n,1}^{\rm T} + s_{n,2} \vec{u}_{n,2} \vec{v}_{n,2}^{\rm T}$. Here $\vec{u}_{f}$ ($\vec{v}_{f}$) is the left (right) singular vector of $\mat{F}$; $\vec{u}_{n,1}$ and $\vec{u}_{n,2}$ ($\vec{v}_{n,1}$ and $\vec{v}_{n,2}$) are the left (right) singular vectors of $\mat{N}$, and are perpendicular to each other, i.e.\ $\vec{u}_{n,1} \perp \vec{u}_{n,2}$ and $\vec{v}_{n,1} \perp \vec{v}_{n,2}$. 
They are all unit vectors of size $2\times 1$. In this example, we set $s_{f} = 10.0$, $s_{n,1} = 1.0$ and $s_{n,2} = 0.8$, so that the magnitude of $\mat{F}$ is about ten times larger than that of $\mat{N}$. For such a simple case, vectors of size $2\times 1$ can be visualized in a 2D plane, so we plot $\mat{F}$ by the rescaled vectors $\sqrt{s_{f}} \vec{u}_{f}$ and $\sqrt{s_{f}} \vec{v}_{f}$, and plot $\mat{N}$ by the rescaled vectors $\sqrt{s_{n,1}} \vec{u}_{n,1}$, $\sqrt{s_{n,2}} \vec{u}_{n,2}$, $\sqrt{s_{n,1}} \vec{v}_{n,1}$, and $\sqrt{s_{n,2}} \vec{v}_{n,2}$ in Figure~\ref{fig:uv}. Similar rescaling will be applied when we refer to the ``magnitude'' of a vector in this subsection. In the standard SVD analysis, the image is decomposed as $\mat{D} = s_{d,1} \vec{u}_{d,1} \vec{v}_{d,1}^{\rm T} + s_{d,2}\vec{u}_{d,2} \vec{v}_{d,2}^{\rm T}$. Here, $s_{d,1} > s_{d,2}$, and $\vec{u}_{d,1}\perp \vec{u}_{d,2}$, $\vec{v}_{d,1} \perp \vec{v}_{d,2}$. In this decomposition, the first term $s_{d,1} \vec{u}_{d,1} \vec{v}_{d,1}^{\rm T}$ is identified as the foregrounds and thus removed, but Figure~\ref{fig:uv} shows that the direction of the left (right) singular vector $\vec{u}_{d,1}$ ($\vec{v}_{d,1}$) differs from that of the left (right) singular vector $\vec{u}_{f}$ ($\vec{v}_{f}$) of the true foregrounds $\mat{F}$. On the other hand, the second term $s_{d,2}\vec{u}_{d,2} \vec{v}_{d,2}^{\rm T}$ of the SVD decomposition is identified as the recovered signal $\mat{N}_{\rm PCA/SVD}$, but it is rank-one in this case, while the true signal $\mat{N}$ is rank-two. Also, Figure~\ref{fig:uv} shows that the magnitude of $\mat{N}_{\rm PCA/SVD}$ is much smaller than that of $\mat{N}$, which implies a significant signal loss in the SVD foreground removal. Now, assuming that we know $\vec{u}_{f}$ and $\vec{v}_{f}$, as discussed in \S\ref{S:u}, we can project out the foregrounds in four ways. 
The optimal estimators are $\mat{N}_{\rm B}$ and/or $\mat{N}_{\rm D}$, which exploit the information of both $\vec{u}_{f}$ and $\vec{v}_{f}$. For the pedagogical example considered herein, the two estimators give the same results, because $\vec{u}_{f}^{\rm T} \mat{D} \vec{v}_{f}$ is a $1 \times 1$ matrix, i.e.\ a number, so $\vec{u}_{f}^{\rm T} \mat{D} \vec{v}_{f} = (\vec{u}_{f}^{\rm T} \mat{D} \vec{v}_{f})_{\text{diag}}$. We can rewrite the estimated foregrounds as $\mat{F}_{\rm B/D} = s_{df} \vec{u}_{f} \vec{v}_f^{\rm T}$, where we define $s_{df} \equiv \vec{u}_f^{\rm T} \mat{D} \vec{v}_{f}$ for this case. The left (right) singular vector $\sqrt{s_{df}} \vec{u}_{f}$ ($\sqrt{s_{df}} \vec{v}_{f}$) of the estimated foregrounds $\mat{F}_{\rm B/D}$ has the same direction as the left (right) singular vector $\sqrt{s_{f}} \vec{u}_{f}$ ($\sqrt{s_{f}} \vec{v}_{f}$) of the true foregrounds $\mat{F}$. In principle, the magnitude of the former is slightly larger than that of the latter, because $s_{df} > s_{f}$, but in this simple example their magnitudes are very close, as shown in Figure~\ref{fig:uv}. This implies that the subtracted foregrounds contain the full foregrounds $\mat{F}$ plus some amount of signal. Hence, the recovered signal, $\mat{N}_{\rm B/D} = \mat{D} - \mat{F}_{\rm B/D}$, is free from any residual foreground contamination, but suffers some amount of signal loss. Also, $\mat{N}_{\rm B/D} = s_{s,1} \vec{u}_{s,1} \vec{v}_{s,1}^{\rm T} + s_{s,2} \vec{u}_{s,2} \vec{v}_{s,2}^{\rm T}$ is still a rank-two matrix. Figure~\ref{fig:uv} shows that the recovered signal $\mat{N}_{\rm B/D}$ is close to the true signal $\mat{N}$. This simple example clearly shows that using the information of the left and right singular vectors of the foregrounds can help improve the performance of foreground subtraction. 
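The pedagogical example can be reproduced in a few lines. The singular values below follow the text ($s_f = 10$, $s_{n,1} = 1$, $s_{n,2} = 0.8$), while the orientation angles of the unit vectors are arbitrary choices made here for illustration:

```python
import numpy as np

def unit(theta):
    """Unit 2-vector at angle theta in the 2D plane."""
    return np.array([np.cos(theta), np.sin(theta)])

# Rank-one foreground and rank-two signal, as in the text; the angles
# of the singular vectors are arbitrary illustrative choices.
uf, vf = unit(0.3), unit(1.1)
un1, vn1 = unit(0.9), unit(0.2)
un2, vn2 = unit(0.9 + np.pi / 2), unit(0.2 + np.pi / 2)  # orthogonal partners
F = 10.0 * np.outer(uf, vf)
N = 1.0 * np.outer(un1, vn1) + 0.8 * np.outer(un2, vn2)
D = F + N

# Standard SVD removal of the largest mode: a rank-one recovered signal
Us, s, Vt = np.linalg.svd(D)
N_svd = s[1] * np.outer(Us[:, 1], Vt[1])

# SVP with known uf, vf (N_B and N_D coincide for this single-mode case)
s_df = uf @ D @ vf
N_svp = D - s_df * np.outer(uf, vf)

# SVP removes the foreground exactly and stays rank-two; its recovery
# error is smaller than that of the blind SVD removal.
assert np.linalg.matrix_rank(N_svp) == 2
assert np.linalg.norm(N_svp - N) < np.linalg.norm(N_svd - N)
```

The second assertion is guaranteed here: any rank-one recovered signal must miss the true rank-two $\mat{N}$ by at least $s_{n,2} = 0.8$ in Frobenius norm, while the SVP error is only $|\vec{u}_f^{\rm T}\mat{N}\vec{v}_f|$.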
\subsection{Modeling the Singular Vectors} \label{S:mv} The left and right singular vectors of the foregrounds, $\mat{U}_{f}$ and $\mat{V}_f$, can be obtained from the eigen-decompositions of the frequency and pixel covariance matrices, respectively, i.e.\ $\mat{F} \mat{F}^{\rm T} = \mat{U}_{f} \mat{\Lambda}_{f} \mat{U}_{f}^{\rm T}$ and $\mat{F}^{\rm T} \mat{F} = \mat{V}_{f} \mat{\Lambda}_{f} \mat{V}_{f}^{\rm T}$, where $\mat{\Lambda}_{f} = \mat{S}_{f}^2$. Our assumption is that the covariance matrix of the foregrounds can be estimated {\it a priori} from modeling or observations. In practice, we may keep only the singular vectors with positive eigenvalues, and drop all modes with nearly zero eigenvalues. We follow \citet{Costa2008}, which is essentially a PCA method, to estimate the foregrounds. We summarize the approach below, and refer interested readers to \citet{Costa2008} for details. Typically, we have data sets from a number of available surveys that together cover the full sky, but whose overlapping sky coverage is only a patch with $n_{\text{pix}}$ pixels that have data at all $n_f$ frequencies. For the data subset of the overlapping sky map $\mat{y} \in \mathbb{R}^{n_f \times n_{\rm pix}}$, we begin by estimating its matrix of second moments of size $n_{f} \times n_{f}$, which is essentially the normalized covariance matrix $ \mat{C} \equiv \mat{y} \mat{y}^{\rm T}/n_{\text{pix}} $, and then estimating the correlation matrix $\mathcal{R}_{ij} \equiv C_{ij}/\sigma_{i} \sigma_{j}$, whose entries are the dimensionless correlation coefficients between all pairs of frequencies. Here $\sigma_{i} \equiv C_{ii}^{1/2}$ is the rms fluctuation at each frequency. 
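The covariance and correlation estimation described above is a few matrix operations. The sketch below applies it to a mock multi-frequency data set; the power-law frequency scaling and all numerical values are purely illustrative, not taken from any survey.

```python
import numpy as np

# Mock data y: n_f frequencies x n_pix overlapping pixels. A smooth power law
# in frequency times a spatial template, plus small fluctuations
# (illustrative numbers only, not real survey data).
rng = np.random.default_rng(1)
n_f, n_pix = 11, 1000
nu = np.linspace(700.0, 800.0, n_f)
template = rng.standard_normal(n_pix)
y = np.outer((nu / 750.0) ** -2.8, template) \
    + 0.01 * rng.standard_normal((n_f, n_pix))

C = y @ y.T / n_pix                 # second-moment (covariance) matrix, n_f x n_f
sigma = np.sqrt(np.diag(C))         # rms fluctuation at each frequency
R = C / np.outer(sigma, sigma)      # dimensionless correlation matrix

lam, P = np.linalg.eigh(R)          # R = P diag(lam) P^T
lam, P = lam[::-1], P[:, ::-1]      # sort modes by decreasing eigenvalue
print(lam[:3])                      # dominated by the first component
```

For such strongly correlated mock data the first eigenvalue carries nearly all of the trace, mirroring the steep eigenvalue spectrum of real foregrounds.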
We then perform a standard eigenvalue decomposition to diagonalize the matrix $\mat{\mathcal{R}}$ as $\mat{\mathcal{R}} = \mat{P} \mat{\Lambda} \mat{P}^{\rm T}$, where $\mat{P}$ is an orthogonal matrix whose columns are the eigenvectors (principal components) and $\mat{\Lambda}$ is a diagonal matrix containing the corresponding eigenvalues. We determine the first $n_{c}$ principal components that best approximate the data in the overlapping sky region with data at all $n_{f}$ frequencies, according to some accuracy criteria (see \citealt{Costa2008}). We then fit for the $n_c$ principal component maps (the matrix product of the $n_c$ principal components and the normalized maps, i.e.\ the input maps rescaled to have rms fluctuations of unity at each frequency) across the entire sky, pixel by pixel, using the normalized input maps that have data for that pixel. To predict sky maps at other frequencies, we can further fit the frequency dependence of both $\log{\sigma_{i}}$ and each of the best $n_{c}$ principal components with a cubic spline as a function of $\log{\nu}$, and use these fits together with the fitted $n_c$ principal component maps to reconstruct the sky maps at the required frequencies.
% Here we implicitly assume that these are smooth, slowly varying functions (as shown in Figure~\ref{fig:fguvec}).
With this machinery, we can construct a map of the foreground emission in the target frequency range with $n$ frequency bins in the target patches of sky with $p$ pixels, from which we can estimate the frequency (pixel) covariance matrix, and subsequently the left (right) singular vectors of the foregrounds by an eigenvalue decomposition of the covariance matrix. Note that this singular vector modeling process can be improved as more, higher-quality survey data at different frequencies become available. Advanced techniques presented in \citet{Zheng2017} may also be applied to better account for different survey data sets that have non-overlapping regions. 
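The spline step can be sketched directly with {\tt scipy}. The tabulated survey frequencies and the component values below are hypothetical placeholders (they are not the actual GSM input set); the point is only that a component that varies smoothly in $\log\nu$ is interpolated faithfully into the target band.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative survey frequencies in MHz (placeholders, not the GSM inputs)
# and a mock principal component that is smooth in log(nu).
nu_survey = np.array([10.0, 45.0, 408.0, 1420.0, 23000.0])
pc1 = 0.3 + 0.1 * np.log(nu_survey / 100.0)

# Fit the component as a cubic spline in log(nu), then evaluate it on the
# 256 frequency bins of the 700-800 MHz target band.
spline = CubicSpline(np.log(nu_survey), pc1)
nu_target = np.linspace(700.0, 800.0, 256)
pc1_target = spline(np.log(nu_target))
```

The same construction applies to $\log\sigma_i$; since the target band lies inside the span of the survey frequencies, this is interpolation rather than extrapolation.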
As an example, we apply this singular-vector modeling method to the Tianlai array \citep{Chen2012,Xu2015}, which operates in the frequency band between 700 and 800 MHz with 256 frequency bins. We use the survey data at the same 11 frequencies (taken from the {\tt PyGSM}\footnote{\url{https://github.com/telegraphic/PyGSM}} package) as used in \citet{Costa2008}. The three principal components with the largest eigenvalues are plotted in the left panel of Figure~\ref{fig:fguvec}. From this, we can predict the frequency dependence at frequencies between 700 and 800 MHz, where no survey data are available, by fitting each principal component with a cubic spline as a function of $\log{\nu}$. Figure~\ref{fig:fguvec} demonstrates that this fitting works well, because the frequency dependencies of these functions are indeed smooth and slowly varying in this band. From the sky map of predicted foreground emission in this frequency band, we estimate the frequency (pixel) covariance matrix and solve for the left (right) singular vectors of the foregrounds. As an illustration, we show the three left singular vectors of the foregrounds corresponding to the largest three eigenvalues in the right panel of Figure~\ref{fig:fguvec}. \section{Simulation Test} \label{S:ex} \subsection{Simulation Setup} \label{S:sim} In this section, we test the performance of SVP in terms of recovery errors with simulated data. We use the {\tt CORA}\footnote{\url{https://github.com/radiocosmology/cora}} \citep{Shaw2014,Shaw2015} package to generate the simulated dataset, which includes the H{~\sc i} 21~cm signal and mock foregrounds with two dominant components --- galactic synchrotron emission and extragalactic point sources. We assume the Planck 2013 cosmological model \citep{Ade2014}. 
For the H{~\sc i} emission, the 21~cm power spectrum is given by \begin{equation} \label{eq:PTb} P_{T_{b}}(\vec{k}; z, z') = \bar{T}_{b}(z) \bar{T}_{b}(z') (b + f \mu^{2})^{2} P_{m}(k; z, z'), \end{equation} where $b$ is the bias, $f$ is the growth rate, and $P_{m}\left(k ; z, z^{\prime}\right)=P(k) D_{+}(z) D_{+}\left(z^{\prime}\right)$ is the real-space matter power spectrum, with $D_{+}$ the growth factor normalized such that $D_{+}(0)=1$. The mean brightness temperature takes the form \citep{Chang2008} \begin{equation}\label{eq:Tbz} \bar{T}_{b}(z) = 0.3 \, \left( \frac{\Omega_{\text{H{~\sc i}}}}{ 10^{-3}} \right) \left( \frac{1 + z}{2.5} \right)^{1/2} \left[ \frac{\Omega_{\rm m} + (1 + z)^{-3} \Omega_{\Lambda}}{0.29} \right]^{-1/2}\,\text{mK}. \end{equation} We adopt typical values of the {\tt CORA} parameters: $\Omega_{\text{H{~\sc i}}} b = 6.2 \times 10^{-3}$ \citep{Switzer2013} and $b = 1$. The 21~cm angular power spectrum is given by \citep{Datta2007} $C_{l}(\Delta \nu) \propto \int k^{2} dk j_{l}(k \chi) j_{l}(k \chi') P_{T_{b}}(\vec{k}; z, z')$, where $\Delta \nu = \nu' - \nu$. Here, $\chi$ ($\chi'$) is the comoving distance to redshift $z$ ($z'$) that corresponds to the frequency $\nu$ ($\nu'$). In the flat-sky approximation, which is accurate at the per cent level, the 21~cm angular power spectrum is \citep{Datta2007,Shaw2014} \begin{equation} \label{eq:Clzz} C_{l}(z, z') = \frac{1}{\pi \chi \chi'} \int_{0}^{\infty} dk_{\parallel} \cos(k_{\parallel} \Delta \chi) P_{T_{b}}(\vec{k}; z, z'), \end{equation} where $\Delta \chi = \chi-\chi'$. In the integration, the wavevector is $\vec{k} = (k_{\parallel},k_\perp=l/\bar{\chi})$, where $\bar{\chi}=(\chi+\chi')/2$. To model the foregrounds, for simplicity, we only consider the two main sources at low frequencies --- galactic synchrotron radiation and extragalactic radio point sources --- and ignore other minor sources such as free-free emission and dust emission. 
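The mean brightness temperature of Eq.~(\ref{eq:Tbz}) is a one-line evaluation. The sketch below transcribes it directly; the default density parameters are illustrative values, not the exact cosmology adopted in the simulations.

```python
import numpy as np

# Direct transcription of the mean 21 cm brightness temperature formula above,
# in mK. The default Omega values are illustrative, not the adopted cosmology.
def mean_Tb_mK(z, omega_HI=1e-3, omega_m=0.29, omega_L=0.71):
    return (0.3 * (omega_HI / 1e-3) * np.sqrt((1.0 + z) / 2.5)
            / np.sqrt((omega_m + (1.0 + z) ** -3 * omega_L) / 0.29))

print(mean_Tb_mK(1.5))   # ~0.28 mK at z = 1.5
```

The sub-mK amplitude, compared with foregrounds of tens of K, is what makes the four-to-five orders of magnitude separation problem discussed throughout this paper.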
The angular power spectra of the foregrounds from these two main sources can be modeled in the form \begin{equation} \label{eq:clmdl} C_{l}(\nu, \nu') = A \left( \frac{l}{100} \right)^{-\alpha} \left( \frac{\nu \nu'}{\nu_{0}^{2}} \right)^{-\beta} e^{-\frac{1}{2\xi_{l}^{2}}\ln^{2}(\nu / \nu')}\,, \end{equation} where we choose the pivot frequency $\nu_0 = 130$ MHz \citep{Centos2005}. In principle, $\xi_{l}$ is a function of $l$; for simplicity, {\tt CORA} assumes that $\xi$ is independent of $l$. We use the recalibrated model parameters for the 700--800 MHz band of the H{~\sc i} intensity mapping experiment. The {\tt CORA} package also implements a polarized emission model, but for simplicity we only consider the total intensity model. We follow \citet{Shaw2014} for the values of the model parameters, as listed in Table~\ref{tab:clp}. To generate the galactic synchrotron emission, {\tt CORA} uses the processed 408 MHz Haslam map (with bright point sources and striping removed) \citep{Haslam1982,Remazeilles2015} as a template, and extrapolates it to other frequencies using a spectral index from the Global Sky Model (GSM; \citealt{Costa2008}), with a Gaussian random realization that is consistent with the angular power spectra of the foregrounds (Eq.~\ref{eq:clmdl}) and adds fluctuations in frequency and on small angular scales. The extragalactic point-source simulation comprises three components: a population of bright point sources ($S > 10$~Jy at 151~MHz), a synthetic population of dimmer sources down to 0.1~Jy at 151~MHz, and an unresolved background of dimmer sources ($S < 0.1$~Jy) modeled as a Gaussian random realization from Eq.~(\ref{eq:clmdl}) with the point source model parameters listed in \autoref{tab:clp}. 
\begin{table} \centering \caption{Parameter Values of the Foreground Model} \begin{tabular}{llllll} \hline\hline Component & Polarization & $A (\text{K}^{2})$ & $\alpha$ & $\beta$ & $\xi$ \\ \hline Galaxy & TT & $6.6 \times 10^{-3}$ & 2.80 & 2.8 & 4.0 \\ Point sources & TT & $3.55 \times 10^{-4}$ & 2.10 & 1.1 & 1.0 \\ \hline\hline \end{tabular} \label{tab:clp} \end{table} We generate all foreground components at each of 256 frequencies uniformly sampled between 700 and 800 MHz. For visualization, we show these simulated components only at the central frequency, 750 MHz, in Figure~\ref{fig:maps}. To include instrumental effects of a real observation, the generated sky maps are convolved with a symmetric, circular, frequency-dependent Gaussian beam with FWHM $= 1.22 \,\lambda / D$, where $\lambda$ is the observing wavelength and $D$ is the diameter of the telescope. We assume $D = 100$~m, which is about the dish size of the GBT, currently the largest fully-steerable telescope \citep{Chang2010,Masui2013,Switzer2013}, and also the optimal dish size for mid-redshift 21~cm intensity mapping experiments \citep{Chang2008,Seo2010,Ansari2012}. \subsection{Results} \label{S:esl} We apply the (blind) PCA/SVD method and the (semi-blind) SVP method to the simulated dataset, and test their performance using the $l_2$-norm, the power spectrum, and the Pearson correlation coefficient, as follows. For the PCA/SVD, the largest five PCA modes are removed, because these five modes dominate our dataset when instrumental effects are included. For the SVP, we assume that the left and/or right singular vectors ($\mat{U}_{f}$ and/or $\mat{V}_{f}$) for all modes, or at least the largest few, are known {\it a priori}. \subsubsection{$l_{2}$-norm} The $l_{2}$-norm of a $1\times p$ vector $\vec{x}$ is defined as $$ \| \vec{x} \| = \sqrt{\vec{x} \cdot \vec{x}^{T}} = \sqrt{\sum_{i=1}^{p} x_{i}^{2}}\,. 
$$ At a given frequency, the recovery error can be treated as a $1\times p$ vector, so we compute the $l_{2}$-norm of the recovery error as a function of frequency in Figures~\ref{fig:l2} and \ref{fig:l25}, as a statistical measure of the performance of the different estimators. A smaller $l_{2}$-norm means better recovery of the true signal. In Figure~\ref{fig:l2}, we assume that the left and/or right singular vectors ($\mat{U}_{f}$ and/or $\mat{V}_{f}$) for {\it all} modes are known {\it a priori}. We further consider the scenario of incomplete information on the singular vectors in Figure~\ref{fig:l25}, where we assume that only the largest five left and/or right singular vectors of the foregrounds ($\mat{U}_{f_1}$ and/or $\mat{V}_{f_1}$) are known {\it a priori}. Both Figures~\ref{fig:l2} and \ref{fig:l25} show that the $l_{2}$-norm of the recovery error for $\mat{N}_{\rm D}$ is the smallest ($\sim 10^{-4}$) at almost all frequencies; the $l_{2}$-norms of $\mat{N}_{\rm B}$ and $\mat{N}_{\rm R}$ are comparable (a few $\times\,10^{-4}$) but both larger than that of $\mat{N}_{\rm D}$; the $l_{2}$-norms of $\mat{N}_{\rm L}$ and of the PCA/SVD are the largest (a few $\times\,10^{-2}$). This demonstrates that the SVP estimators, except for $\mat{N}_{\text{L}}$\footnote{To understand the comparable results of the PCA/SVD and $\mat{N}_{\text{L}}$, we note that while the SVP method can reduce the foreground mixing (for incomplete information of singular vectors) or even eliminate it (for complete information of all modes), the signal loss in this process of foreground subtraction is not necessarily smaller than in the blind PCA/SVD. That is indeed the motivation of our tests with simulations.}, generally perform better than the PCA/SVD estimator. In particular, the $\mat{N}_{\rm D}$ estimator can reduce the $l_{2}$-norm of the recovery error by two orders of magnitude compared with the PCA/SVD method. 
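This comparison can be reproduced in miniature. The sketch below builds a random low-rank mock foreground whose all-mode singular vectors are assumed known, forms the four SVP estimators in matrix forms reconstructed to match the loss terms derived in Appendix~\ref{S:csl} (an illustration, not a verbatim transcription of the earlier definitions), and evaluates the per-frequency $l_2$-norm of the recovery error.

```python
import numpy as np

# Minimal sketch (not the CORA pipeline): a random rank-3 "foreground" much
# stronger than a random "signal", with all-mode singular vectors known.
rng = np.random.default_rng(2)
n_f, n_pix, r = 16, 40, 3
F = 100.0 * rng.standard_normal((n_f, r)) @ rng.standard_normal((r, n_pix))
N = rng.standard_normal((n_f, n_pix))          # mock 21 cm signal
D = F + N

U, s, Vt = np.linalg.svd(F, full_matrices=False)
Uf, Vf = U[:, :r], Vt[:r, :].T                 # all modes of the foregrounds

N_L = D - Uf @ (Uf.T @ D)                              # left projection only
N_R = D - (D @ Vf) @ Vf.T                              # right projection only
N_B = D - Uf @ (Uf.T @ D @ Vf) @ Vf.T                  # both sides
N_D = D - Uf @ np.diag(np.diag(Uf.T @ D @ Vf)) @ Vf.T  # diagonal variant

def l2_per_frequency(N_hat):
    # l2-norm of the recovery error, row by row (one value per frequency)
    return np.linalg.norm(N_hat - N, axis=1)

err = {k: np.linalg.norm(v - N) for k, v in
       dict(L=N_L, R=N_R, B=N_B, D=N_D).items()}
print(err)   # ordering: D <= B <= L and D <= B <= R
```

With all modes known, each estimator's foreground residual vanishes to machine precision, so the printed values are pure signal loss and obey the ordering proven in the appendix.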
Also, these results agree with the relations found in Appendix~\ref{S:csl}: $\|\Delta \mat{N}_{\text{D}}\| \le \|\Delta \mat{N}_{\text{B}} \|\le \|\Delta \mat{N}_{\text{L}}\|$ and $\|\Delta \mat{N}_{\text{D}}\| \le \|\Delta \mat{N}_{\text{B}}\| \le \|\Delta \mat{N}_{\text{R}}\|$. Roughly speaking, this implies that exploiting additional information in the singular vectors can improve the accuracy of signal recovery over the blind PCA/SVD method for foreground subtraction, and that exploiting the information in both left and right singular vectors is better than using only partial information (in either the left or the right singular vectors). We note that if only a small number of modes of the singular vectors, corresponding to the largest singular values, are exploited in SVP, there is residual foreground mixing (the $\mat{F}_2$ term in Eq.~\ref{eq:svpinc}) that contributes to the recovery error. However, comparing the results of Figures~\ref{fig:l2} and \ref{fig:l25}, we find that the $l_2$-norm of the recovery error for the SVP estimators with only the largest few modes of the singular vectors is at the same level as that with full information on the singular vectors. This is encouraging, because in real observations it is much more likely that only the largest few modes, rather than all modes, can be obtained. \subsubsection{Power Spectrum} In Figure~\ref{fig:pk}, we plot the 1D power spectrum along the line-of-sight (LoS) of the 21~cm signal, $P_{21}(k_{\parallel})$, for the input, true signal $\mat{N}$ as the benchmark, and for the recovered signal using the PCA/SVD and SVP estimators. For the SVP estimators, we only exploit the largest five left and/or right singular vectors of the foregrounds, i.e.\ $\mat{U}_{f_1}$ and/or $\mat{V}_{f_1}$. We find that these estimators can recover the input power spectrum well at small scales, $k_\parallel > 0.1\,h\,{\rm Mpc}^{-1}$. 
On the other hand, at large scales, while the PCA/SVD method and the $\mat{N}_{\rm L}$ estimator lose power significantly, the $\mat{N}_{\rm D}$, $\mat{N}_{\rm B}$ and $\mat{N}_{\rm R}$ estimators can recover the input power spectrum with high accuracy. Among these, the $\mat{N}_{\rm D}$ estimator performs the best, with absolute difference $< 0.001\,{\rm mK}^2\,h^{-1}\,{\rm Mpc}$ and relative error $< 0.01\%$. These results are consistent with the findings of the $l_2$-norm test. \subsubsection{Pearson Correlation Coefficient} The Pearson correlation coefficient $r$ is a statistical measure of the degree of linear correlation between two signals. For the input $1\times p$ vector $\vec{x}$ and the recovered vector $\hat{\vec{x}}$, it is defined as \begin{equation} r = \frac{\Delta\vec{x} \cdot \Delta\hat{\vec{x}}^{T}}{\sqrt{\Delta\vec{x} \cdot \Delta\vec{x}^{T}} \, \sqrt{\Delta\hat{\vec{x}} \cdot \Delta\hat{\vec{x}}^{T}} } = \frac{\sum\nolimits_{i}(x_{i} - \bar{x})(\hat{x}_{i} - \bar{\hat{x}})}{\sqrt{\sum\nolimits_{i}(x_{i} - \bar{x})^{2}} \, \sqrt{\sum\nolimits_{i}(\hat{x}_{i} - \bar{\hat{x}})^{2}}} \,, \end{equation} where $\Delta\vec{x} = \vec{x} - \bar{x}$ and $\Delta\hat{\vec{x}} = \hat{\vec{x}} - \bar{\hat{x}}$. Here, $\bar{x}$ is the mean of $\vec{x}$, and $\bar{\hat{x}}$ is the mean of $\hat{\vec{x}}$. The value of $r$ is close to unity if the two signals are highly correlated. We compute the Pearson correlation coefficient $r$ between the input, true 21~cm signal and the recovered signal for the different estimators. For visualization purposes, we plot $1-r$ in Figure~\ref{fig:r}, because the values of $r$ are all close to unity; how close $1-r$ is to zero indicates the degree of similarity between the recovered signal and the true signal. 
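The definition above is a direct transcription into code; the mock "recovered" vector here is a hypothetical near-perfect recovery used only to illustrate why $1-r$ is the more readable quantity.

```python
import numpy as np

# Pearson correlation coefficient for two 1 x p vectors, as defined above.
def pearson_r(x, xhat):
    dx, dxh = x - x.mean(), xhat - xhat.mean()
    return (dx @ dxh) / np.sqrt((dx @ dx) * (dxh @ dxh))

# Hypothetical near-perfect recovery: the true vector plus 5% noise.
rng = np.random.default_rng(3)
x = rng.standard_normal(500)
xhat = x + 0.05 * rng.standard_normal(500)
print(1.0 - pearson_r(x, xhat))   # small, close to zero
```

Because $r$ saturates near unity for all estimators, plotting $1-r$ on a logarithmic scale separates their performance by orders of magnitude.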
We find that the values of $1-r$ for the $\mat{N}_{\rm D}$, $\mat{N}_{\rm B}$ and $\mat{N}_{\rm R}$ estimators are comparable ($\sim 10^{-5} - 10^{-6}$) and significantly smaller than those of the PCA/SVD and the $\mat{N}_{\rm L}$ estimator ($\sim 10^{-1} - 10^{-2}$). Here we only exploit the largest five left and/or right singular vectors of the foregrounds for the SVP estimators. These results are consistent with those of the $l_2$-norm and power spectrum tests. \section{Conclusions} \label{S:con} Foreground subtraction is one of the key challenges for 21~cm observations, because the foregrounds are four to five orders of magnitude larger than the cosmic 21~cm signal. In this paper, we show that the PCA method and the SVD method, which are widely employed for foreground subtraction, are in principle equivalent. We also provide the conditions under which the PCA/SVD method can separate the foregrounds from the 21~cm signal completely, i.e.\ with zero residual foreground mixing and zero signal loss. Nevertheless, we point out that in general foreground mixing and signal loss are unavoidable for the PCA/SVD method, because those conditions are hardly satisfied in practice. In this paper, we propose a new class of {\it semi-blind} methods for foreground subtraction, based on the PCA/SVD method, called the {\it Singular Vector Projection} (SVP). The SVP method is semi-blind in the sense that it exploits {\it a priori} information on the left and/or right singular vectors of the foregrounds --- if only the left (right) singular vectors are known, then the estimator $\mat{N}_{\rm L}$ ($\mat{N}_{\rm R}$) can be employed; if both left and right singular vectors are known, then two estimators, $\mat{N}_{\rm B}$ and $\mat{N}_{\rm D}$, can be employed. The virtue of SVP is that, in principle, the residual foreground mixing is zero for all four SVP estimators, if the singular vectors of {\it all} modes are known. 
We generate mock maps of the 21~cm signal and the foregrounds from simulations, and use them to test the performance of the SVP estimators and the standard PCA/SVD estimator in terms of the $l_2$-norm of the recovery error, the LoS power spectrum of the 21~cm signal, and the Pearson correlation coefficient between the input, true signal and the recovered signal. We find that while the results of the $\mat{N}_{\rm L}$ estimator are comparable to those of the PCA/SVD method, the other SVP estimators ($\mat{N}_{\rm R}$, $\mat{N}_{\rm B}$ and $\mat{N}_{\rm D}$) can improve the accuracy of recovery by orders of magnitude. In particular, the $\mat{N}_{\rm D}$ estimator performs the best in general. We also consider the more realistic scenario of incomplete foreground information, in which only the largest few modes of the left and/or right singular vectors of the foregrounds are known {\it a priori}. In this case, there is residual foreground mixing due to the remaining small modes of the residual foregrounds. However, the accuracy of recovery with the SVP estimators (given the largest five modes of the foregrounds in our demonstration) is not degraded with respect to the case with all-mode singular vector information. This indicates that the SVP estimators reach a balance between signal loss and residual foreground mixing. Regarding the availability of {\it a priori} information, the left and right singular vectors can be solved for by eigen-decomposition of the frequency and pixel covariance matrices of the foregrounds, respectively. In particular, an accurate frequency covariance matrix of the foregrounds can be obtained at low cost from observations in other frequency bands. Our test results reflect the fact that the frequency spectrum information of the foregrounds alone is insufficient for 21~cm experiments to reach high precision; the spatial information of the foregrounds should be taken into account as well. 
We also note that the SVP estimators are independent of the overall magnitude of the covariance matrix, which can be highly biased. This may be an advantage over some other semi-blind methods that depend on the overall magnitude of the foreground covariance matrix, e.g.\ the Karhunen-Lo\`{e}ve (KL) transform method \citep{Shaw2014,Shaw2015}. When such {\it a priori} information on the foregrounds is available, our paper suggests that the SVP estimators can improve the recovery results significantly over the standard PCA/SVD method, which is blind to this prior information. In particular, the right singular vectors of the foregrounds can help improve the foreground subtraction more effectively than the left singular vectors. Furthermore, combining both left and right singular vectors with the $\mat{N}_{\rm D}$ estimator works best. These SVP estimators provide a new, effective approach for 21~cm observations to remove foregrounds and uncover the physics in the cosmic 21~cm signal. \section*{Acknowledgements} This work is supported by the National SKA Program of China (grant No.~2020SKA0110401), NSFC (grant Nos.~11821303 and 11633004), the National Key R\&D Program of China (grant No.~2018YFA0404502), the MOST inter-government cooperation program China-South Africa Cooperation Flagship project (grant No.~2018YFE0120800), the Chinese Academy of Sciences (CAS) Frontier Science Key Project (grant No.~QYZDJ-SSW-SLH017), and the CAS Strategic Priority Research Program (grant No.~XDA15020200). We thank Fengquan Wu and Yichao Li for useful discussions and help. We acknowledge the Tsinghua Astrophysics High-Performance Computing platform at Tsinghua University for providing computational and data storage resources that have contributed to the research results reported in this paper. 
\software{CORA \citep{Shaw2014}, PyGSM \citep{Costa2008,Zheng2017}, h5py \citep{Collette2021}, healpy \citep{Zonca2019}, Matplotlib \citep{Hunter2007}, NumPy \citep{Harris2020}, SciPy \citep{Virtanen2020}} \appendix \section{Inequalities for Signal Loss} \label{S:csl} In this section, we prove some inequalities for the signal loss of the SVP estimators. These inequalities compare the ``magnitude'' of the signal loss, which we quantify using the Frobenius norm \citep{Noble1977}. For an $m \times n$ matrix $\mat{A}$, the Frobenius norm $\| \mat{A} \|_{\rm F}$ is defined as \begin{equation} \label{eq:AF} \| \mat{A} \|_{\rm F} = \sqrt{\text{Tr}(\mat{A}^{\rm T} \mat{A})} = \sqrt{\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2}. \end{equation} We first prove three inequalities for the Frobenius norm. (1) \begin{equation} \label{eq:ABF} \| \mat{A} \mat{B} \|^2_{\rm F} \le \| \mat{A} \|^2_{\rm F} \, \| \mat{B} \|^2_{\rm F}. \end{equation} The proof is as follows. \begin{align} \| \mat{A} \mat{B} \|^2_{\rm F} &= \sum_{i=1}^m \sum_{j=1}^p \left|\sum_{k=1}^n a_{ik} b_{kj} \right|^2 \notag \\ &\le \sum_{i=1}^m \sum_{j=1}^p \left[ \left(\sum_{k=1}^n |a_{ik}|^2 \right) \left(\sum_{l=1}^n |b_{lj}|^2 \right) \right] \notag \\ &= \left(\sum_{i=1}^m \sum_{k=1}^n |a_{ik}|^2 \right) \left(\sum_{l=1}^n \sum_{j=1}^p |b_{lj}|^2 \right)\notag \\ &= \| \mat{A} \|^2_{\rm F} \, \| \mat{B} \|^2_{\rm F}. \label{eq:ABFp} \end{align} The second line in Eq.~(\ref{eq:ABFp}) uses the Cauchy-Schwarz inequality. (2) For a square matrix $\mat{S}$, it is straightforward to prove from the definition that \begin{equation} \| \mat{S}_{\text{diag}} \|_{\rm F} \le \| \mat{S} \|_{\rm F}\,. \end{equation} (3) For a partial orthogonal matrix $\mat{U}$, \begin{equation} \| \mat{A} \mat{U} \|_{\rm F} \le \| \mat{A} \|_{\rm F}\,. \end{equation} The proof is as follows. 
\begin{align} \| \mat{A} \mat{U} \|^2_{\rm F} &= \text{Tr}(\mat{U}^{\rm T} \mat{A}^{\rm T} \mat{A} \mat{U}) = \text{Tr}(\mat{A}^{\rm T} \mat{A} \mat{U} \mat{U}^{\rm T}) \notag \\ &= \sum_i \sum_j (\mat{A}^{\rm T} \mat{A})_{ij} (\mat{U} \mat{U}^{\rm T})_{ji} \notag \\ &\le \sum_i \sum_j (\mat{A}^{\rm T} \mat{A})_{ij} \delta_{ji} = \sum_i (\mat{A}^{\rm T} \mat{A})_{ii} \notag \\ &= \text{Tr}(\mat{A}^{\rm T} \mat{A}) = \| \mat{A} \|^2_{\rm F}\,. \label{eq:AUF} \end{align} Using these properties, we have \begin{align} \| \mat{N}_{\rm B}^{\text{loss}} \|^2_{\rm F} &= \| \mat{U}_{f} \mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f} \mat{V}_{f}^{\rm T} \|^2_{\rm F} \notag \\ &= \text{Tr}(\mat{V}_f \mat{V}^{\rm T}_f \mat{N}^{\rm T} \mat{U}_f \mat{U}^{\rm T}_{f} \mat{U}_{f} \mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f} \mat{V}_{f}^{\rm T}) \notag \\ &= \text{Tr}(\mat{V}^{\rm T}_f \mat{N}^{\rm T} \mat{U}_{f} \mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f}) \notag \\ &= \| \mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f} \|^2_{\rm F} \\ &\le \| \mat{U}_{f}^{\rm T} \mat{N} \|^2_{\rm F} = \| \mat{N}_{\rm L}^{\text{loss}} \|^2_{\rm F} \,. \end{align} Similarly, \begin{equation} \| \mat{N}_{\rm B}^{\text{loss}} \|^2_{\rm F} = \| \mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f} \|^2_{\rm F} \le \| \mat{N} \mat{V}_{f} \|^2_{\rm F} = \| \mat{N}^{\text{loss}}_{\rm R} \|^2_{\rm F}\,. 
\end{equation} Also, \begin{align} \| \mat{N}_{\rm D}^{\text{loss}} \|^2_{\rm F} &= \| \mat{U}_{f} (\mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f})_{\text{diag}} \mat{V}_{f}^{\rm T} \|^2_{\rm F} \notag\\ &= \text{Tr}\left(\mat{V}_f (\mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f})^{\rm T}_{\text{diag}} \mat{U}^{\rm T}_{f} \mat{U}_{f} (\mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f})_{\text{diag}} \mat{V}_{f}^{\rm T}\right) \notag\\ &= \text{Tr}((\mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f})^{\rm T}_{\text{diag}} (\mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f})_{\text{diag}} ) \notag\\ &= \| (\mat{U}_{f}^{T} \mat{N} \mat{V}_{f})_{\text{diag}} \|^2_{\rm F} \notag\\ &\le \| \mat{U}_{f}^{\rm T} \mat{N} \mat{V}_{f} \|^2_{\rm F} = \| \mat{N}^{\text{loss}}_{\rm B} \|^2_{\rm F}\,. \end{align} Altogether, the following inequalities are proven: \begin{align} \| \mat{N}_{\rm D}^{\text{loss}} \|_{\rm F} & \le \| \mat{N}^{\text{loss}}_{\rm B} \|_{\rm F} \le \| \mat{N}_{\rm L}^{\text{loss}} \|_{\rm F}, \label{eq:N431} \\ \| \mat{N}_{\rm D}^{\text{loss}} \|_{\rm F} & \le \| \mat{N}^{\text{loss}}_{\rm B} \|_{\rm F} \le \| \mat{N}^{\text{loss}}_{\rm R} \|_{\rm F}. \label{eq:N432} \end{align} Since there is no foreground mixing for the SVP estimators if all modes of the left and/or right singular vectors of foregrounds are known {\it a priori}, the inequalities for signal loss are equivalent to the inequalities for recovery error, $ \| \Delta \mat{N}_{\text{D}} \|_{\rm F} \le \| \Delta \mat{N}_{\text{B}} \|_{\rm F} \le \| \Delta \mat{N}_{\text{L}}\|_{\rm F} $ and $\| \Delta \mat{N}_{\text{D}} \|_{\rm F} \le \| \Delta \mat{N}_{\text{B}} \|_{\rm F} \le \| \Delta \mat{N}_{\text{R}}\|_{\rm F}$. \bibliographystyle{aasjournal} \bibliography{pca}