The model parameters are determined by minimizing the χ² given by the observed and predicted positions of the images and the galaxy. The best model leads to χ² = 0.65 and has the parameters given in Table 3. This value is reasonable for a model with one degree of freedom (r_c cannot be counted as a free parameter here, because without the restriction r_c ≥ 0 it would become negative in the fit). The positions of the images and the galaxy in Table 3 are parameters of the model and have to be compared with the observed positions in Table 2. As shown above, no elliptical potential without a shear, and no spherical potential with shear, can reproduce the observational data. For our pseudo-isothermal model, this results in large χ² values of 50.5 and 63.7, respectively. These values are much too high for the two degrees of freedom.

Discussion

Thanks to our new high-resolution imaging data, the QSO RX J0911.4+0551 is resolved into four images. In addition, deconvolution with the new MCS algorithm reveals the lensing galaxy, clearly confirming the lensed nature of this system. The image deconvolution provides precise photometry and astrometry for all the components of the system. Reddening in components A2 and A3 relative to A1 is observed in our U, V, and I frames, which were taken within three hours on the same night. The absence of reddening in component B and the difference in reddening between components A2 and A3 suggest extinction
by the deflecting galaxy. Note that although our near-IR data were obtained from 15 days to 6 weeks after the optical images, they appear to be consistent with the optical fluxes measured for the QSO images, i.e. the flux ratios increase continuously with wavelength, from U to K, indicating extinction by the lensing galaxy. We have discovered a good galaxy cluster candidate in the SW vicinity of RX J0911.4+0551 from our field photometry in the I, J, and K bands. Comparison of our color-magnitude diagram with that of a blank field (e.g., Moustakas et al. 1997) shows that the galaxies around RX J0911.4+0551 are redder than field galaxies at an equivalent apparent magnitude. In addition, the brightest galaxies in Fig. 3 lie on a red sequence at I − K ∼ 3.3, typical for the early-type members of a distant galaxy cluster. The two dashed lines indicate our ±0.4 color error bars at K ∼ 19 around I − K ∼ 3.3. Most of these galaxies are grouped in the region around a double elliptical at a distance of ∼38″ and a position angle of ∼204° relative to A1. This can also be seen in Fig. 2, which shows a group of red galaxies with similar colors centered on the double elliptical (in the center of the circle). Consequently, there is considerable evidence for at least one galaxy cluster in the field. The redshift of our best candidate cluster (the one circled in Fig. 2) can be estimated from the I and K
band photometry. We have compared the K-band magnitudes of the brightest cluster galaxies with the empirical K magnitude vs. redshift relation found by Aragón-Salamanca et al. (1998). We find that our cluster candidate, with a brightest K magnitude of ∼17.0, should have a redshift of z ∼ 0.7. A similar comparison has been done in the I band, without taking into account galaxy morphology. We compare the mean I magnitude of the cluster members with the ones found by Koo et al. (1996) for galaxies with known redshifts in the Hubble Deep Field and obtain a cluster redshift between 0.6 and 0.9. Finally, comparison of the I − K color of the galaxy sequence with data and models from Kodama et al. (1998) confirms the redshift estimate of 0.6-0.8. In order to calculate physical quantities from the model parameters found in Section 4, we assume a simple model for the cluster which may be responsible for the external shear. For an isothermal potential, the true shear and convergence are of the same order of magnitude. As the convergence is not explicitly included in the model, the deduced shear is a reduced shear, leading to an absolute convergence of κ = γ/(1 + γ) = 0.241. For a cluster redshift of z_d = 0.7 and cosmological parameters Ω = 1, λ = 0, this corresponds to a velocity dispersion of about 1100 km s⁻¹ if the cluster is positioned at an angular distance of 40″. See Gorenstein, Falco & Shapiro (1988)
for a discussion of the degeneracy preventing a direct determination of κ. From the direction of the shear φ (see Table 3) we can predict the position angle of the cluster as seen from the QSO to be 12° or 192°. The latter value agrees well with the position of our cluster candidate SW of the QSO images. Note also the good agreement between the position angle θ_G derived from the observed light distribution and the predicted position angle corresponding to our best-fitting model of the lensing potential. Interestingly, this is in good agreement with Keeton, Kochanek & Falco (1998), who find that projected mass distributions are generally aligned with the projected light distributions to within 10°. The color of the main lensing galaxy is very similar to that of the cluster members, suggesting that it might be a member of the cluster. Using the same model for the cluster as above, assuming the galaxy to be at the same redshift as the cluster, and neglecting its small ellipticity of ε < 0.05, the velocity dispersion of the lensing galaxy can be predicted from the calculated deflection angle α_0 to be of the order of 240 km s⁻¹. Since the galaxy profile is sharp towards the nucleus in K, we cannot rule out the possibility of a fifth central image of the source, as predicted for non-singular lens models. Near-IR spectroscopy is needed to determine the redshift of the lens and to show whether it is
blended or not with a fifth image of the (QSO) source. Some 10″ SW of the lens, we detect a small group of even redder objects. These red galaxies can be seen in Fig. 2 a few arcseconds to the left and to the right of the cross. They might be part of a second galaxy group at a higher redshift, with a position in better agreement with the X-ray position mentioned by B97. However, since the measured X-ray signal is near the detection limit, and the 1-σ positional uncertainty is at least 20″, the X-ray emission is compatible with both the QSO and these galaxy groups in the field. Furthermore, this second group, at z > 0.7, would most likely be too faint in the X-ray domain to be detected in the RASS. In fact, even our lower-redshift cluster candidate would need an X-ray luminosity of the order of L_0.1−2.4 keV ∼ 7 × 10⁴⁴ erg s⁻¹ (assuming a 6 keV thermal spectrum, H_0 = 50 km s⁻¹ Mpc⁻¹, q_0 = 0.5) in order to be detected at 0.02 cts s⁻¹ by ROSAT. This is very bright, but not unrealistic for high-redshift galaxy clusters (e.g., MS 1054-03; Donahue, Gioia, Luppino et al. 1997). RX J0911.4+0551 is a new quadruply imaged QSO with an unusual image configuration. The lens configuration is complex, composed of one main lensing galaxy plus external shear, possibly caused by a galaxy cluster at a redshift between 0.6 and 0.8
and another possible group at z > 0.7. Multi-object spectroscopy is needed in order to confirm our cluster candidate(s) and derive their redshift(s) and velocity dispersion(s). In addition, weak lensing analysis of background galaxies might prove useful to map the overall lensing potential involved in this complex system.

Note. - Results obtained from our non-photometric data. All measurements are given along with their 1-σ errors.

Koo, D.C., Vogt, N.P., Phillips, A.C., et al., 1996, ApJ, 469, 535
Magain, P., Courbin, F., Sohy, S., 1998, ApJ, 494, 452
Moustakas, L.A., Davis, M., Graham, J.R., et al., 1997, ApJ, 475, 445
Witt, H.J., Mao, S., 1997, MNRAS, 291, 211

This preprint was prepared with the AAS LaTeX macros v4.0.

Fig. 1. - In all three bands the object is clearly resolved into four QSO images, labeled A1, A2, A3, and B, plus the elongated lensing galaxy. The fields of the optical and near-IR data are respectively 7″ and 9″ on a side. North is to the top and East to the left in all frames.

Fig. 2. - Composite image of a 2′ field around RX J0911.4+0551. The frame has been obtained by combining our I, J and K-band data. North is up and East to the left. Note the group of red galaxies with similar colors, about 38″ SW of the quadruple lens (circle), and the group of even redder galaxies 10″ SW of the lens (cross).

Fig. 3. - Color-magnitude diagram of the 2.5′ field around RX J0911.4+0551. The lensing galaxy
and many of the cluster members form the overdensity around I − K ∼ 3.3. The lensing galaxy in RX J0911.4+0551 is marked by a star and the two blended galaxies in the center of the cluster candidate are plotted as triangles. Stars are not plotted in this diagram.

          A1              A2              A3              B               Galaxy
x (″)     0.000 ± 0.004   −0.259 ± 0.007  +0.013 ± 0.008  +2.935 ± 0.002  +0.709 ± 0.026
y (″)     0.000 ± 0.008   +0.402 ± 0.006  +0.946 ± 0.008  +0.785 ± 0.003  +0.507 ± 0.046

Note. - The astrometry is given relative to component A1, with x and y coordinates defined positive to the North and West of A1. All measurements are given along with their 1-σ errors. The 1-σ errors on the photometric zero points are 0.02 in all bands.
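The convergence and velocity-dispersion estimates quoted in the Discussion above can be reproduced in a few lines. A minimal sketch, assuming an input reduced shear of γ ≈ 0.318 (the fitted value belongs in Table 3, which is not reproduced here) and a singular-isothermal-sphere cluster; the distance ratio D_ds/D_s is a hypothetical input that depends on the adopted cosmology and source redshift:

```python
import math

def convergence_from_reduced_shear(gamma_reduced):
    """For an isothermal cluster (true shear equal to convergence),
    the observable reduced shear g = kappa / (1 - kappa) inverts to
    kappa = g / (1 + g), as used in the text."""
    return gamma_reduced / (1.0 + gamma_reduced)

def velocity_dispersion_kms(kappa, theta_arcsec, dds_over_ds, c_kms=299792.458):
    """Velocity dispersion (km/s) of a singular isothermal sphere whose
    convergence at angular radius theta equals kappa:
        kappa(theta) = theta_E / (2 theta),
        theta_E = 4 pi (sigma / c)^2 * D_ds / D_s.
    dds_over_ds is a hypothetical lensing-distance ratio, not a value
    taken from the paper."""
    theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)
    theta_E = 2.0 * kappa * theta_rad          # Einstein radius in radians
    return c_kms * math.sqrt(theta_E / (4.0 * math.pi * dds_over_ds))

kappa = convergence_from_reduced_shear(0.318)
print(round(kappa, 3))                         # prints 0.241
# An illustrative D_ds/D_s ~ 0.5 at theta = 40" gives sigma of order 1100 km/s:
print(round(velocity_dispersion_kms(kappa, 40.0, 0.52)))
```

With these assumed inputs the sketch recovers κ ≈ 0.241 and a cluster velocity dispersion of roughly 1100 km s⁻¹, consistent with the numbers quoted above.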
Higher-Harmonic Collective Modes in a Trapped Gas from Second-Order Hydrodynamics

Utilizing a second-order hydrodynamics formalism, we calculate the dispersion relations for the frequencies and damping rates of collective oscillations, as well as the spatial structure of these modes up to the decapole oscillation, in both two- and three-dimensional gas geometries. In addition to higher-order modes, the formalism also gives rise to purely damped "non-hydrodynamic" modes. We calculate the amplitude of the various modes for both symmetric and asymmetric trap quenches, finding excellent agreement with an exact quantum mechanical calculation. We find that higher-order hydrodynamic modes are more sensitive to the value of the shear viscosity, which may be of interest for the precision extraction of transport coefficients in Fermi gas systems.

I. INTRODUCTION

Strongly interacting quantum fluids (SIQFs) such as high-T_c superconductors [1], clean graphene [2], the quark-gluon plasma [3], and Fermi gases tuned to a Feshbach resonance [4] seem to lack a description in terms of quasi-particle degrees of freedom. This has fueled interest in developing new tools to understand the transport properties of these fluids, as well as in trying to determine those properties more precisely by experiment. As of yet, one of the cleanest experimental realizations of a SIQF is a Fermi gas tuned to a Feshbach resonance. Fermi gases offer unprecedented control over a multitude of properties such as interaction strength, system geometry, spin imbalance [5,6], and mass imbalance [7]. In the case of a spin- and mass-balanced gas, there have been a number of experiments aimed at the extraction of the shear viscosity [8][9][10][11] and, to
a lesser extent, bulk viscosity [12]. The Navier-Stokes equations provide a relatively straightforward model for the dependence of cloud expansion and collective oscillation phenomena on transport coefficients, making them a seemingly ideal candidate for the extraction of such coefficients. Yet in the low-density corona of trapped atom gases the local mean free path becomes large, and hence one cannot expect the Navier-Stokes equations to apply. This, as well as uncertainties arising e.g. from trap averaging, gives rise to a large systematic error in transport coefficients extracted from experimental data in this way. Hence a theory which can address both the hydrodynamic behavior of the high-density region and the low-density corona of the cloud is desirable. It has been shown that by including extra "non-hydrodynamic" degrees of freedom in a fluid dynamical description, termed anisotropic fluid dynamics, one can obtain a smooth crossover between Navier-Stokes dynamics in the high-density core of the gas cloud and kinetic theory in the low-density corona [13]. This theory has recently been used to determine the shear viscosity in the high-temperature regime with an error of five percent, by comparing experimental data for an expanding cloud to an anisotropic hydrodynamic description [14]. Similar precision determinations of transport properties at lower temperatures, e.g. close to the superfluid transition, are still outstanding. The present work is related to studies using anisotropic hydrodynamics in the sense that we will also employ a hydrodynamic description beyond Navier-Stokes ("second-order hydrodynamics") in order to study collective oscillations of harmonically trapped Fermi gases above
the superfluid transition (T > T_c). In the linear-response regime considered in this work, the second-order and anisotropic hydrodynamic equations of motion turn out to be identical. We will refer to our approach as second-order hydrodynamics to simplify the discussion, but the only difference from an anisotropic hydrodynamics framework is in name. Our work is closely related to Ref. [15], and in many aspects is complementary to the results therein. In this work the focus is on the effects arising from a non-vanishing shear viscosity, and we limit our consideration to an ideal equation of state, whereas in Ref. [15] collective modes for polytropic equations of state and zero shear viscosity were studied. The outline of the paper is as follows: We begin by describing our theoretical framework of second-order hydrodynamics in Sec. II. We then proceed to calculate the frequencies, damping rates, and spatial structures of the collective modes of harmonically trapped gases in both two and three dimensions in Secs. III and IV. We calculate mode excitation amplitudes for experimentally relevant conditions in Sec. V and offer our conclusions in Sec. VI. Detailed results on the spatial mode structure and mode amplitude calculations can be found in three appendices.

II. SECOND-ORDER HYDRODYNAMICS

The Navier-Stokes equations are conservation equations for mass, momentum, and energy. To close the system of equations, constitutive relations between the viscous stresses and the fluid variables need to be supplied. For Navier-Stokes, the viscous stress tensor π_ij is set to first-order gradients of the fluid dynamic variables: mass
density ρ, flow velocity u, and temperature T. While widely successful in many fluid dynamics applications, such a first-order gradient expansion suffers from certain problems, in particular in systems where the fluid speed approaches the speed of light [16]. Thus, more recently a second-order hydrodynamic framework has been developed which - true to its name - includes second-order gradients in the expansion of the stress tensor, with appropriate new second-order transport coefficients. Unlike the similar framework of the Burnett equations, second-order hydrodynamics in addition contains a resummation procedure which ensures that it is a consistent, causal, and instability-free generalization of Navier-Stokes (cf. the reviews in Refs. [4,17]). For the case of a unitary Fermi gas, scale invariance seems to be a good symmetry, and the resulting non-relativistic form of the second-order hydrodynamic equations has been derived in Ref. [18].

II.1. Basic Equations

In the following we consider what is perhaps the simplest possible second-order hydrodynamics formalism describing a fluid in d spatial dimensions with a trapping force F. Namely, we utilize a relaxation equation for the stress tensor. In this case our fluid equations are given by the conservation laws coupled to a relaxation equation for the stress tensor, where ε is the energy density, η is the shear viscosity, and τ_π is the relaxation time for the stress tensor. In the above equations, ε and σ_ij are specified in terms of the fluid velocity, mass density, and pressure P. Note that Eq. (5) corresponds to the equation of state for a scale-invariant system. It is easy to show that the familiar Navier-Stokes equations are recovered upon taking
the limit τ_π → 0 in Eq. (4).

II.2. Assumptions

For simplicity, we have assumed the bulk viscosity and heat conductivity coefficients to vanish. The assumption of vanishing bulk viscosity is consistent with measurements in two dimensions [19,20]. Furthermore, calculations of bulk viscosity in d = 3 imply that the value of the bulk viscosity near unitarity in the high-temperature limit should be small [21]. Since we will consider a Fermi gas in the normal phase, i.e. above the superfluid transition temperature T_c, taking the bulk viscosity to vanish should be a good approximation in the case d = 3 as well. The assumption of vanishing thermal conductivity is justified because it is already a second-order gradient effect, as discussed in Ref. [22]. Hence we assume the gas to be isothermal, but it is straightforward to see how the procedure below can be extended to the non-isothermal case. As a consequence, the temperature is a function of time only and not of the spatial coordinates. In order to obtain analytically tractable results, we additionally make the approximation that the gas may be described with an ideal equation of state, P = nT, where n is the number density of particles (we let ℏ = k_B = 1 throughout). The effects of a realistic non-ideal equation of state on collective mode behavior in a viscous fluid typically require numerical treatments such as those presented in Ref. [23]. Moreover, we assume η/P to be constant. While this assumption is not expected to hold in the low-density corona, it will
allow analytic access to the spatial structure, frequencies, and damping rates of collective modes using a second-order hydrodynamics framework. More accurate numerical studies including temperature and density effects on the shear viscosity are left for future work. Finally, in order to access the collective mode behavior of the gas, we will assume small perturbations around a time-independent equilibrium state characterized by ρ_0(x), u_0 = 0, and T_0, which are solutions to Eqs. (1)-(7). Thus, we set ρ = ρ_0(1 + δρ), u = δu, and T = T_0 + δT, with δρ, δu, δT assumed to be small. Working in the frequency domain we have δρ(t, x) = e^{−iωt} δρ(x), with similar expressions holding for δu and δT. To simplify notation, from now on perturbations such as δρ denote quantities where the time dependence has been factored out, unless otherwise stated.

II.4. Configuration Space Expansion

For a harmonic trapping potential with trapping frequency ω_⊥, the solution for the equilibrium density configuration is a Gaussian profile. In the following, we will be using dimensionless units such that all distances are measured in units of (T_0/(mω_⊥²))^{1/2}, times are measured in units of ω_⊥⁻¹, temperatures in units of T_0, and densities in units of a reference density. In these units the equilibrium solution is given by ρ_0(x) = A_0 e^{−x²/2}, where A_0 is a dimensionless positive number setting the number of particles (cf. the discussion in App. C). In
the absence of a trapping potential, it is usually convenient to perform a spatial Fourier transform of Eqs. (8)-(11) in order to obtain the collective modes of the system. However, here we are interested in a harmonic trapping potential (linear trapping force), which breaks translation symmetry. Thus, it is more convenient to use a different expansion basis for the perturbations. Here we choose to expand the perturbations in tensor Hermite polynomials, though any complete basis of linearly independent polynomials would do. The Nth-order tensor Hermite polynomials in d spatial dimensions are given by the Rodrigues formula [24], H^(N)_{i_1...i_N}(x) = (−1)^N e^{x²/2} ∂_{i_1} ... ∂_{i_N} e^{−x²/2}, where i_k ∈ {1, 2, ..., d} for k = {1, 2, ..., N}. The tensor Hermite polynomials are orthogonal with respect to a Gaussian weight, which makes them particularly useful for the case of a harmonic trapping potential. In particular, they satisfy an orthonormality condition with respect to the weight (2π)^{−d/2} e^{−x²/2}. Assuming translational invariance along the z-axis in d = 3 spatial dimensions, the expansion in both d = 2 and d = 3 will involve only the tensor Hermite polynomials for d = 2. Recalling the assumption that the gas is isothermal, the polynomial expansion of the perturbations δρ, δu, and δT is then given by Eqs. (14), where the sum over m_j is understood to run over all combinations of indices unique up to permutations. For example, if M = 2 the second sum runs over m = {(1, 1), (1, 2), (2, 2)}, while (2, 1) is excluded. The reason for this restriction is that H^(M)_m(x) is fully
symmetric in its indices, as can be seen from Eq. (12). One should also note that in Eqs. (14), b^(M)_m is used as shorthand for the polynomial coefficients of all components of δu, and for a given M and m it is a column vector with d components. Let us now discuss the details of accessing the collective modes whose spatial structure is associated with polynomials of low degree ("low-lying modes"). Substituting Eqs. (14), truncated at polynomial order N, into the linearized second-order hydrodynamics equations and taking projections onto the different tensor Hermite polynomials of order K ≤ N, we obtain a matrix equation for the polynomial coefficients in Eqs. (14). The (complex) collective mode frequencies ω are then obtained from requiring a non-trivial null-space of this matrix, and subsequently the spatial structures are obtained from the corresponding null-vectors.

III. COLLECTIVE MODE SOLUTIONS IN d = 2

Results for the density and velocity of low-lying collective modes in d = 2 are shown in Fig. 1. In particular, we find a breathing (monopole) mode which corresponds to a cylindrically symmetric oscillatory change in cloud volume, a sloshing (or dipole) mode where the center of mass of the cloud oscillates about the trap center, a quadrupole mode which is elliptical in shape, and higher-order modes corresponding to higher-order geometric shapes. Note that the spatial structure of these collective modes is similar to those reported in Ref. [15]. More detailed information about the d = 2 collective modes can be found in App. A.

FIG. 1. Time snapshots of density profiles and corresponding momentum density (ρu) for the oscillatory modes in d = 2. Note that the center of the monopole mode is at a lower density than the centers of the other modes, since it is volume-changing and has a larger radius than the equilibrium configuration. The damping rate of higher-order modes is more sensitive to η/P, as discussed in the text. Also note that non-hydrodynamic modes share the same spatial structure as their hydrodynamic counterparts.

The collective mode frequencies ω and damping rates Γ are given as the real and imaginary parts of roots of polynomials, which generally do not admit simple closed-form expressions. Hence, in Tab. I we choose to report expressions for the complex frequencies and spatial mode structure from second-order hydrodynamics for the low-lying modes in the hydrodynamic limit η/P ≪ 1 and τ_π ≪ 1 (assuming that τ_π and η/P are of the same order of magnitude), in which case simple analytic expressions can be obtained. In addition to the modes shown in Fig. 1, there are three modes in Tab. I which have zero complex frequency. The first corresponds to a change in total particle number, the second corresponds to a change in temperature and width of the cloud, and the third "zero mode" is simply a rotation of the fluid about the central axis. While they are required for the mode amplitude analysis (see Sec. V), the role of the first two of these zero-frequency modes is relatively uninteresting. Hence, we relegate
detailed discussion of these modes to App. C. The rows of Tab. I starting with the number mode and ending with the decapole mode are all hydrodynamic modes. We note that at order O(η/P) the results for these modes match those from an analysis of the mode frequencies of the Navier-Stokes equations at the same order. However, for values of η/P where corrections to the hydrodynamic limit become significant, the frequencies found from the Navier-Stokes equations and from second-order hydrodynamics disagree. In Fig. 2 we show the full dependence of the hydrodynamic mode frequencies and damping rates on η/P (assuming τ_π = η/P based on kinetic theory [22,25,26]). Note that the result of second-order hydrodynamics for the quadrupole mode exactly matches the result from kinetic theory when setting τ_π = τ_R = η/P [23,27].

TABLE I. Frequencies and damping rates in d = 2 from linearized second-order hydrodynamics assuming η/P, τ_π ≪ 1. The hydrodynamic mode damping rates depend on η/P times a prefactor which increases with mode order. Note that for d = 2 there is no non-hydrodynamic sloshing or breathing mode.

Furthermore, the results shown in Tab. I demonstrate that the hydrodynamic mode damping rates depend on η/P times a prefactor which increases with mode order. This is completely analogous to what has been observed in experiments on relativistic ion collisions, where simultaneous measurements of multiple modes have been used to obtain strong constraints on the value of η/s, cf. Ref. [28]. While higher-order modes have not yet been studied in
experiment, it is conceivable that measuring their damping rates could lead to a similarly strong experimental constraint on the shear viscosity in the unitary Fermi gas. We are not aware of this approach having been suggested elsewhere in the literature. When aiming to use higher-order modes to analyze shear viscosity in Fermi gases, we recall that the present analysis is based on a linear-response treatment. Quantitative analysis of higher-order flows will, however, require the inclusion of nonlinear effects, especially for flows beyond hexapolar order, due to mode mixing. For this reason, we suggest the hexapolar mode as a prime candidate among the higher-order modes for extracting the shear viscosity. Finally, Tab. I also indicates the presence of non-hydrodynamic modes (i.e. modes not present in a Navier-Stokes description). The physics of non-hydrodynamic modes is largely unexplored (cf. Refs. [29,30] for a brief discussion of the topic in the context of cold quantum gases). The results shown in Tab. I imply that several such non-hydrodynamic modes exist, all of which are purely damped in second-order hydrodynamics. The non-hydrodynamic mode damping rates are sensitive to τ_π and η/P. Thus the value of τ_π could be extracted experimentally by measuring any of the non-hydrodynamic mode damping rates in combination with a hydrodynamic mode damping rate, the latter being required to determine η/P. In Fig. 3, the non-hydrodynamic damping rates are shown as a function of η/P when setting τ_π = η/P.

FIG. 3. Two-dimensional non-hydrodynamic collective mode damping rates Γ as a function of η/P (using
τ_π = η/P). Subscripts denote the mode name (quadrupole "Q"; hexapole "H"; octupole "O"; decapole "D"). The dotted line is Γ_nh = 1/τ_π (cf. Ref. [29]). Note that in d = 2 there is no non-hydrodynamic sloshing or breathing mode.

IV. COLLECTIVE MODE SOLUTIONS IN d = 3

In the case of a three-dimensional gas in a harmonic trap with trapping frequencies ω_z ≪ ω_x = ω_y, the resulting gas cloud takes on an elongated, cigar-shaped geometry. For ω_z = 0, the configuration space expansion Eqs. (14) can be applied, because there is no dependence on the coordinate z if we assume a translationally invariant system along the z-axis. In this case, the collective mode structures in d = 3 are qualitatively similar to those obtained in the two-dimensional case, cf. the discussion in Refs. [31,32]. We report results for the low-lying modes in the limit η/P, τ_π ≪ 1 in Tab. II, whereas the full dependence of the frequencies and damping rates on η/P is shown in Figs. 4 and 5 for the case τ_π = η/P. The only qualitative difference with respect to the d = 2 case is that the breathing mode in d = 3 has a different frequency and a non-zero damping rate, and there is now a non-hydrodynamic breathing mode. See App. B for more details about the spatial structure of the d = 3 collective modes. It should be pointed out that, while second-order hydrodynamics predicts purely damped non-hydrodynamic modes for both d = 2, 3, more
general (string-theory-based) calculations suggest that there should be a non-vanishing frequency component in the case of d = 3 [30]. It would be interesting to measure non-hydrodynamic mode frequencies and damping rates in order to describe transport beyond Navier-Stokes on a quantitative level.

V. MODE AMPLITUDE CALCULATIONS

In this section, experimentally relevant scenarios to excite the collective modes of the previous sections are discussed, and the corresponding mode amplitudes are calculated. For simplicity, we assume τ_π = η/P in the following. In particular, we focus on studying the excitation of the non-hydrodynamic quadrupole (in d = 2, 3) and non-hydrodynamic breathing (in d = 3) modes, leaving a study of higher-order modes for future work. For simplicity, only simple trap quenches (rapid changes in trap configuration) are considered. We will assume the gas cloud to start in an equilibrium configuration of a (possibly biaxial, i.e. ω_x,init ≠ ω_y,init) harmonic trap. At some initial time, a rapid quench will bring the trap configuration into a final harmonic form, which is assumed to be isotropic in the x-y plane with trapping frequency ω_x,final = ω_y,final = 1 in our units. In the case of the Navier-Stokes equations, initial conditions are fully specified through the initial density ρ_init, velocity u_init, and temperature T_init, or appropriate time derivatives of these quantities. However, second-order hydrodynamics treats the stress tensor π_ij as a hydrodynamic variable, so in addition an initial condition π_ij,init or
its time derivative needs to be specified. For equilibrium initial conditions of a general biaxial harmonic trap with trapping force given by F = −γ_x x − γ_y y, we have a Gaussian equilibrium profile whose widths are set by parameters σ_i, where T_init also needs to be specified. Initial equilibrium implies the condition γ_i = ω²_i,init = σ_i/T_init, so that the cloud width is fully specified once γ_i for i = x, y and T_init are fixed. In addition, equilibrium of the initial trap allows us to take π_ij,init = 0. The mode amplitudes can then be obtained by projecting the initial conditions onto the collective modes found in the preceding sections (see App. C for details of the calculation).

TABLE II. Frequencies and damping rates in d = 3 from linearized second-order hydrodynamics assuming η/P, τ_π ≪ 1. The hydrodynamic mode damping rates depend on η/P times a prefactor which increases with mode order. Note that there is no non-hydrodynamic sloshing mode, but, unlike for d = 2, there is a non-hydrodynamic breathing mode for d = 3.

Isotropic Trap Quench in d = 2

We first consider the case of an isotropic trap quench γ_x = γ_y ≡ γ in d = 2 and assume A_i/A_0 = 1 and T_init = 1 for simplicity. Although this case does not exhibit non-hydrodynamic or higher-order collective mode excitation, it does allow us to make a direct comparison to results from the literature for the breathing mode excitation
amplitude. This type of initial condition corresponds to a rotationally symmetric trap quench with no initial fluid angular momentum. Symmetry then implies that only the number, temperature, and breathing modes can be excited (cf. Tab. I), and the initial amplitudes for these modes are readily calculated.

FIG. 5. Three-dimensional non-hydrodynamic collective mode damping rates Γ as a function of η/P (using τ_π = η/P). Subscripts denote the mode name (monopole "B"; quadrupole "Q"; hexapole "H"; octupole "O"; decapole "D"). The dotted line is Γ_nh = 1/τ_π (cf. Ref. [29]). Note that in d = 3 there is no non-hydrodynamic sloshing mode, but there is a non-hydrodynamic breathing mode.

Fig. 6 displays the (dimensionless) breathing mode amplitude as a function of the quench strength γ. (Note that the amplitude of the temperature mode is identical to the breathing mode amplitude in this case.) The number mode is not excited, since the number of atoms taken in the initial condition matches the number of atoms we assumed in our final trap equilibrium (A_i/A_0 = 1). The amplitude of the breathing mode for the isotropic trap quench is compared to the results from an exact quantum mechanical scaling solution by Moroz [33] in Fig. 6. As can be seen from this figure, there is exact agreement between the two calculations for all quench strengths γ. Note that the amplitudes in this case are independent of η/P, since for d = 2 the breathing mode does not couple to the shear stress tensor π_ij.

FIG. 6. Left: Breathing mode amplitude for the isotropic trap quench, compared with the exact scaling solution of Ref. [33]. Right: Absolute value
621
56386759
0
16
of the (dimensionless) breathing ("B"), hydrodynamic ("Q h ") and non-hydrodynamic ("Q nh ") quadrupole mode amplitudes as a function the quench strength parameter γy for an anisotropic trap quench in d=2. Results shown are for η P = 0.5. Note that the temperature mode amplitude (not shown) matches the breathing mode amplitude for both the isotropic and anisotropic trap quench in d=2. Anisotropic Trap Quench in d = 2 We perform a similar analysis to that above, considering the case A i /A 0 = 1, and T init = 1, but now taking γ x γ y = 1, which corresponds to an anisotropic trap quench. The mode amplitudes in this case depend on the value of η/P . In this case, in particular the temperature, breathing and quadrupole modes are excited. Fig. 6 shows the absolute value of the mode amplitudes for the hydrodynamic breathing and quadrupole modes, as well as the non-hydrodynamic quadrupole mode as a function of the quench strength γ y . Not surprisingly, Fig. 6 shows that the anisotropic trap quench gives rise to a considerably larger quadrupole mode amplitude (both hydrodynamic and non-hydrodynamic) than the amplitude of the breathing mode. For a potential experimental observation of the non-hydrodynamic quadrupole mode, it is interesting to consider the relative amplitude of this non-hydrodynamic mode to the (readily observable) hydrodynamic quadrupole mode. The (absolute) amplitude ratio calculated using the above anisotropic trap quench initial condition is plotted in Fig. 7 as a function of η/P . One finds that the non-hydrodynamic
mode amplitude ratio is monotonically increasing as a function of η/P. This is plausible, given that for small viscosities one expects the hydrodynamic mode to dominate, whereas one expects the non-hydrodynamic mode to dominate in the ballistic η/P → ∞ limit. The present calculation is compared to mode amplitude ratios extracted from experimental data [20] in Ref. [29]. To compare non-hydrodynamic damping rate data and theory, we follow the procedure used in Ref. [29] by employing the approximate kinetic theory relation with K ≈ 0.12 in order to relate the experimentally determined k_F a to η/P (see the discussion in Refs. [23,29] and references therein for more details on this relation). Using this procedure, one observes qualitative agreement of the amplitude ratios between calculation and experimental data in Fig. 7 (left panel). In addition, one can compare the non-hydrodynamic quadrupole mode damping rate, finding reasonable agreement (cf. right panel of Fig. 7).

[Fig. 7 caption (right panel): non-hydrodynamic quadrupole mode damping rate Γ. Experimental data are from the reanalysis done in Ref. [29].]

There are several possible reasons why quantitative agreement between second-order hydrodynamics and experiment in Fig. 7 should not be expected. For instance, the present theory calculations neglect the presence of a pseudogap phase and pairing correlations (see e.g. Refs. [27,34-37] on this topic). Furthermore, it is likely that the quantitative disagreement in Fig. 7 is at least in part due to the assumptions discussed in Sec. II, such as small perturbations, constant η/P, and an ideal equation of state. In particular, for the strongly interacting quasi-two-dimensional
Fermi gas near the pseudogap temperature T*, significant modifications of the equation of state have been predicted and observed, cf. Ref. [34]. Studies aiming at quantitative agreement will most likely have to rely on full numerical solutions, such as those discussed in Refs. [14,23], which we leave for future work. In addition, our framework only admits a single non-hydrodynamic mode for each of the collective modes. While this may be appropriate in the kinetic theory regime, other approaches such as that of Ref. [30] indicate that our model may be too simple to capture quantitative features of early-time dynamics. Finally, we note that the data shown in Fig. 7 were extracted from experiments that were not designed for the purpose of studying early-time dynamics. This may contribute to the large uncertainty of the existing data, and may possibly introduce significant systematic error.

Isotropic Trap Quench in d = 3

For the case of an isotropic trap quench (A_i/A_0 = 1, T_init = 1) in d = 3, results for the (hydrodynamic and non-hydrodynamic) breathing mode and temperature mode amplitudes are shown in Fig. 8. Unlike the case of d = 2, the three-dimensional geometry is capable of supporting a non-hydrodynamic breathing mode; furthermore, we find the temperature mode amplitude to differ from the (hydrodynamic) breathing mode amplitude. The right panel of Fig. 8 shows the ratio of non-hydrodynamic to hydrodynamic breathing mode amplitudes, which reaches up to about 20% for large enough η/P. It is interesting to
note that the apparent saturation of this ratio at about 20% is consistent with the amplitude ratio from Ref. [29], extracted from experimental data in Ref. [31]. (Note that in the experiment of Ref. [31], the gas was released from a symmetric trap, allowed to expand for a short period, and then recaptured in a symmetric trapping potential, which is a different protocol than the trap quench considered here. For this reason we do not attempt a direct comparison to the mode amplitudes of Ref. [29] in this case.)

[Fig. 8 caption (left panel, beginning truncated): ..., non-hydrodynamic breathing ("B_nh") and temperature ("T") modes as a function of the quench strength parameter γ for an isotropic trap quench in d = 3. Results shown are for η/P = 0.5. Right: ratio of the non-hydrodynamic to hydrodynamic breathing mode amplitudes as a function of η/P for an isotropic trap quench in d = 3 (result independent of γ). The maximal ratio of about 20% is roughly consistent with the results of Ref. [29], and suggests the possibility of experimental observation.]

VI. CONCLUSION

We have utilized a second-order hydrodynamics framework in order to gain analytic insight into the collective response of Fermi gases in both two-dimensional ("pancake") and three-dimensional ("cigar") trap geometries. Our results exhibit a number of interesting features which may reasonably be expected to qualitatively describe experimental results. In some cases we even expect quantitative agreement, as was the case for a Fermi gas undergoing an isotropic trap quench, where we found our results to match those obtained from an exact quantum mechanical scaling solution
[33]. For instance, our analysis demonstrates that the damping rate of the volume-conserving higher-harmonic modes is proportional to the shear viscosity times the harmonic mode number (i.e. the mode winding number w, see App. A). Similar features have been predicted and experimentally observed in the context of relativistic heavy-ion collisions, cf. Refs. [28,38]. While higher-order mode excitations likely result in a weaker signal-to-noise ratio, our work suggests a potential experimental avenue towards a precision extraction of shear viscosity in cold Fermi gases, by analogy with the relativistic heavy-ion results. Our study also discusses in detail the presence, damping rates, and expected mode amplitudes of non-hydrodynamic modes in trapped Fermi gases. Recent studies of these non-hydrodynamic modes suggest they could provide information about the presence of quasi-particles in strongly coupled Fermi gases, as well as exhibit deep connections between atomic physics and string theory [29,30]. The present results for the non-hydrodynamic mode amplitudes suggest that non-hydrodynamic modes should be accessible to state-of-the-art cold Fermi gas experiments. There are a number of ways in which the results presented here can be generalized and improved. For instance, we plan to study collective modes for uniform density and temperature equilibrium configurations ("Fermi gas in a box"), as well as anisotropic trap geometries, in the future. Furthermore, the availability of fully numerical second-order hydrodynamic algorithms [13,23] will allow relaxing the present assumptions of constant η/P, an ideal equation of state, and small perturbations around equilibrium, in order to aim for a fully quantitative exploration of the collective modes of
trapped Fermi gases.

ACKNOWLEDGMENTS

This work was supported in part by the Department of Energy, DOE award No. DE-SC0008132. Publication of this article was funded by the University of Colorado Boulder Libraries Open Access Fund. PR would like to thank John Thomas for fruitful discussions, as well as the organizers of the workshop on "Non-Equilibrium Physics and Holography" at Oxford University in July 2016 for providing a stimulating environment for many interesting discussions on this topic. WL would like to thank Jasmine Brewer and Roman Chapurin for useful comments regarding the clarity of the manuscript.

Appendix A: Spatial Mode Structure for d = 2

In this appendix we collect the spatial mode structure for the low-lying modes in d = 2 (see Tab. III). It is interesting to note that, of the modes found, only the temperature mode and the breathing mode are associated with a non-zero value of δT. This is because they are the only two modes which change the volume of the cloud, and hence can lead to heating and cooling of the gas. Also note that each irrotational volume-conserving mode (here: dipole, quadrupole, hexapole, octupole, and decapole modes) has two independent realizations related by an appropriate coordinate transformation. For example, the quadrupole mode and the tilted quadrupole mode are related by a rotation of 45°. In general, we can assign to a mode the winding number w of the associated velocity field u on a circle centered on the origin. This quantity merely counts the number of full rotations made when
following a vector around the prescribed circle. For example, w_Dip = 0, w_Quad = 1, w_Hex = 2, ..., so that the angle of coordinate rotation needed to obtain the second independent realization of a given mode is conveniently given by

Δφ = π/(2(w + 1)).    (A1)

Of course, any rotation through an angle in the range Δθ ∈ (0, π/(w + 1)) will produce an equally valid independent mode, but the angle in Eq. (A1) provides a uniform approach to finding an independent mode from one already found. This provides a connection, for example, to the approach of Ref. [29], where polynomials inequivalent under rotation up to the quadrupole mode were considered in the second-order hydrodynamics framework used here.

Appendix B: Spatial Mode Structure for d = 3

For a trap that is anisotropic along one axis and isotropic along the other two in d = 3, the breathing mode now couples to shear stresses. Hence there is now an associated non-hydrodynamic breathing mode, as well as a difference in the corresponding temperature perturbation associated with the volume change of the cloud. Since there is no associated velocity field for the time-independent temperature zero mode, this mode is associated with a vanishing stress tensor, and hence its mode structure is the same as in the d = 2 case. All other modes also exhibit the same spatial and frequency structure. Note that the tilted modes, denoted Tilted- or T- for short in the table, can be found by an appropriate rotation of coordinates. It should be noted that, as can be seen from Tabs. III and IV, not all of
the individual perturbations are orthogonal (the full mode structures are, however, independent). For example, while δρ_Temp = δρ_B, it is not possible to construct the full mode structure δ_Temp = {δρ_Temp, δu_Temp, δT_Temp} as a linear combination of the full mode structures of the other modes. We note that, as a result, care should be taken when obtaining the system of equations for the amplitudes given by evaluating Eqs. (C1)-(C4) not to miss contributions from any of the important modes. We also point out that, in our general procedure for finding the mode structure, the mode frequencies come in pairs, one with positive and the other with negative real part. However, by allowing for a complex amplitude and taking the real part in Eqs. (C1)-(C4), we need only consider modes with positive real part of the frequency.

Let us consider a generic isotropic trap quench in d = 2 for multiple initial conditions, in order to demonstrate the role of the temperature and number modes. In this case, the amplitudes take on a fairly simple form. Note that the above expressions contain both phase and magnitude information, as they may be negative. We will plot phase and magnitude separately below. Additionally, we see from Eqs. (C5)-(C8) several features which should be expected. To explore this, we will break up our analysis into several cases.

Case 1: A_i/A_0 = 1, T_init = 1

This is the case discussed in the main text (see Sec. V).

Case 2: A_i/A_0 ≠ 1, T_init = 1

For this case we see from Eqs. (C5)-(C8) that the ratio A_i/A_0 gives rise to a non-zero amplitude of the number mode, but leaves the location of the zero of the other two modes at γ = 1. This should be expected, since it merely means that at γ = 1 we have more (A_i/A_0 > 1) or fewer (A_i/A_0 < 1) atoms in the trap than was assumed in the equilibrium we expanded about. This should make the role of the number mode clearer, and is demonstrated for the case A_i/A_0 > 1 and T_init = 1 in Fig. 9. In particular, the number mode is only important if, for some reason, we choose to expand our dynamics about an equilibrium with a different number of particles than given by our initial conditions.

Case 3: A_i/A_0 = 1, T_init ≠ 1

For this case we see from Eqs. (C5)-(C8) that the number mode is not excited, while the value of T_init alters the location of the zero for the temperature and breathing modes. This should be expected, since at γ = 1 the breathing mode is excited through the temperature difference. Fig. 10 shows the case where T_init < 1, demonstrating that the phase of the breathing mode vanishes at γ = 1 while the magnitude is non-zero. A positive amplitude at γ = 1 is expected, since the
temperature is below its equilibrium value for the given cloud radius, so the cloud will reduce its size to try to reach equilibrium.
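The winding-number bookkeeping of App. A lends itself to a quick numerical check. The sketch below is not part of the original analysis: the polynomial velocity fields are illustrative irrotational fields u = ∇φ (with φ a harmonic polynomial) chosen to carry the quoted winding numbers, not necessarily the exact mode profiles of Tab. III. It computes |w| on the unit circle and the rotation angle Δφ = π/(2(w + 1)) of Eq. (A1):

```python
import math

def winding_number(u, samples=720):
    """Signed number of full rotations of the vector field u(x, y)
    as the unit circle is traversed once counter-clockwise."""
    total = 0.0
    prev = None
    for k in range(samples + 1):
        t = 2.0 * math.pi * k / samples
        ux, uy = u(math.cos(t), math.sin(t))
        ang = math.atan2(uy, ux)
        if prev is not None:
            d = ang - prev
            # unwrap jumps across the atan2 branch cut at +-pi
            if d > math.pi:
                d -= 2.0 * math.pi
            elif d < -math.pi:
                d += 2.0 * math.pi
            total += d
        prev = ang
    return round(total / (2.0 * math.pi))

# Illustrative irrotational fields u = grad(phi), phi harmonic:
dipole     = lambda x, y: (1.0, 0.0)                              # phi = x
quadrupole = lambda x, y: (2.0 * x, -2.0 * y)                     # phi = x^2 - y^2
hexapole   = lambda x, y: (3.0 * (x * x - y * y), -6.0 * x * y)   # phi = x^3 - 3*x*y^2

for name, u in [("dipole", dipole), ("quadrupole", quadrupole), ("hexapole", hexapole)]:
    w = abs(winding_number(u))
    dphi = math.pi / (2.0 * (w + 1))  # Eq. (A1)
    print(name, w, math.degrees(dphi))
```

For the quadrupole field this reproduces the 45° rotation relating the quadrupole and tilted quadrupole modes, and for the dipole the 90° rotation relating the two sloshing directions.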
NMR analyses on N-hydroxymethylated nucleobases – implications for formaldehyde toxicity and nucleic acid demethylases

NMR studies reveal that formaldehyde, a toxic pollutant and metabolite, reacts with nucleotides to form N-hydroxymethylated adducts of varying stabilities.

NMR Experiments

NMR experiments were carried out using either a Bruker AVII 500 spectrometer equipped with a TXI probe, a Bruker AVIII 600 spectrometer equipped with a Prodigy N2 broadband cryoprobe, or a Bruker AVIII 700 spectrometer equipped with an inverse TCI 1H/13C/15N cryoprobe. All spectrometers were operated using TOPSPIN 3 software. 1H chemical shifts are reported in ppm relative to the solvent resonance (δH 4.7 ppm), while signal intensities were calibrated relative to 3-(trimethylsilyl)-2,2,3,3-tetradeuteropropionic acid (TSP, δH 0 ppm), which was added to each sample. The deuterium signal of D2O was used as an internal lock signal. Samples containing nucleotides and HCHO were prepared by mixing stocks of nucleotides in D2O (pD adjusted) with HCHO in D2O. TSP was added before transferring to either 3 mm or 5 mm NMR tubes for analysis by 1H NMR. The total elapsed time between mixing and data acquisition was 5-7 minutes. Concentrations of products were quantified relative to the concentration of added TSP. The dilution experiments of Figure S4 (2-25-fold dilutions) were carried out by pre-incubating stock solutions of nucleotides (10 mM) and HCHO (8 equivalents) for one week, before splitting the stock solutions and diluting with the appropriate quantities of D2O. EXSY analyses were carried out using a 1-dimensional gradient-selected NOESY pulse sequence employing selective refocusing with a Gaussian
pulse. [3][4][5] Experiments were run accumulating 16 transients with mixing times (τm) of 10-1200 ms, with the 1H-resonance of hydrated HCHO (δH 4.89 ppm at 37 °C) being selectively irradiated. Adduct formation rates were calculated as follows: the normalised intensities of the EXSY correlations (i.e. the intensities of the 1H-resonances corresponding to the adduct N-hydroxymethyl protons, normalised to the intensity of the irradiated HCHO resonance) were plotted as a function of τm, and the initial intensity build-up rates were determined (see Figure S11). Assuming a bimolecular mechanism for adduct formation, the build-up rates represent k1[nucleotideeq], where k1 is the rate constant for adduct formation and [nucleotideeq] is the concentration of unreacted nucleotide at equilibrium (see below). The initial adduct formation rates for 3hmUMP and 3hmTMP were calculated by dividing their accumulation rates by [

FTO catalysis was monitored using both 1H NMR and the gradient-selected 1-dimensional heteronuclear single quantum correlation (1H-13C-HSQC) method. Samples were prepared containing FTO (prepared as reported,1 with a final concentration of 20 µM), the nucleoside selectively 13C-labelled at the methyl group (400 µM), 2-oxoglutarate (2OG, 5 mM), sodium ascorbate (1 mM), and ferrous iron (20 µM) in ammonium formate buffer in D2O at pH* 7.5. The samples were then transferred to 3 mm MATCH NMR tubes (Hilgenberg) and monitored by NMR. 1H analyses employed NOESY water presaturation, while the 1-dimensional heteronuclear single quantum correlation (1D-1H-13C-HSQC) method was derived from the standard 2D 1H-13C-HSQC pulse sequence to remove both
the variable t1 period and 13C decoupling during data acquisition. The 1/2JCH delays were optimised for 145 Hz.

MS-Based FTO Activity Assay

A reaction mixture containing RNA oligonucleotide (AUUGUGG-m6A-CUGCAGC, 1 µM), 2OG (10 µM), ascorbate (100 µM), ferrous iron (10 µM) and FTO (100 nM) in 50 mM Tris buffer in H2O at pH 7.5 was prepared in a 2 mL 96-well plate (Greiner), and reaction progression was monitored by MS using an Agilent RapidFire RF360 high-throughput system paired with an Agilent quadrupole time-of-flight (Q-TOF) mass spectrometer. Aliquots from the mixture were periodically subjected to MS analysis (at 1-minute intervals over the first 14 minutes after mixing, then at 23 minutes and 33 minutes after mixing). The sample was passed through an Agilent C8 RapidFire cartridge, which isolated the oligonucleotides; the oligonucleotides were then eluted from the cartridge with 600 mM octylammonium acetate (OAA) (20%) and acetonitrile (80%), and injected into the spectrometer. The RapidFire RF360 system was operated using RapidFire RF360 integrated software, and the mass spectrometer was operated using Agilent MassHunter Workstation Data Acquisition software. Signal intensities were quantified as the total ion count and analysed using RapidFire Integrator software (Agilent). Detailed RapidFire-MS procedures will be published elsewhere.

General Methods

All chemicals, including dried solvents, were from Sigma-Aldrich and used without further purification. Solvents used for work-up and chromatography were from Aldrich at HPLC grade. Silica gel 60 F254 analytical thin layer chromatography (TLC) plates were from Merck. Prepacked SNAP columns were used for chromatography on a
Biotage SP1 purification system. Proton and carbon NMR spectra were acquired using a Bruker AVIIIHD 500, AVIIIHD 400, or AVIIIHD 600 (with N2 cryoprobe) spectrometer. Shifts are reported in δ ppm. The abbreviations s, d, t, q, and m denote singlet, doublet, triplet, quartet, and multiplet, respectively, in 1H NMR assignments. Coupling constants, J, are reported in Hz to a resolution of 0.5 Hz. High-resolution (HR) mass spectrometry data (m/z) were obtained on a Bruker MicroTOF instrument using an ESI source and a time-of-flight (TOF) analyzer. Values are reported as the ratio of mass to charge in Daltons. Melting points were obtained using a Leica VMTG heated-stage microscope or a Stuart SMP-40 automatic melting point apparatus. All reactions were carried out in oven-dried round bottom flasks. A magnetic stirrer was used to ensure homogeneous mixing during the reaction. Synthesis of oligonucleotides was carried out as reported.6

3',5'-O-Bis(t-butyldimethylsilyl)thymidine

This compound was synthesised using a reported procedure.7 To a stirred solution of 2'-deoxythymidine (2 g, 8.3 mmol) and imidazole (2.25 g, 33.026 mmol) in dry DMF in a 50 mL round bottom flask was added TBDMSCl (2.74 g, 18.16 mmol) portionwise. The reaction was stirred for four hours. The resulting mixture was concentrated under reduced pressure, then diluted with EtOAc. The EtOAc layer was washed with water (3 x 10 mL) and finally with brine. The organic layer was dried over MgSO4 and concentrated in vacuo. The mixture was purified by column chromatography (19:1 to 4:1 cyclohexane/EtOAc), which gave an off-white solid.

(3-(13C)-Methyl)thymidine

This compound was prepared according to a modified version of a reported procedure.8

3',5'-O-Bis(t-butylsilyl)-2'-O-(t-butyldimethylsilyl)adenosine

This compound was prepared according to a modified version of a reported procedure.9 To a stirred suspension of adenosine (2.12 g, 8 mmol) in 40 mL anhydrous DMF at 0 °C, di-t-butylsilyl ditrifluoromethanesulfonate (3.87 mL, 8.8 mmol) was added dropwise under an N2 atmosphere. After consumption of the starting material (30 min, as assessed by TLC), the reaction was quenched immediately with imidazole (2.7 g, 40 mmol) at 0 °C. After 5 minutes, the reaction was warmed to room temperature. Then, t-butyldimethylsilyl chloride (1.45 g, 9.6 mmol) was added portionwise and the reaction was refluxed at 60 °C for 12 h. The suspension was cooled to room temperature, water was added, and the precipitate was collected by suction filtration. The filtrate was discarded, and the white precipitate was washed with cold MeOH. The MeOH layer was evaporated under reduced pressure and the product was crystallised from CH2Cl2 to give a white solid.

(6-(13C)-Methyl)adenosine

This compound was prepared according to a modified version of a reported procedure.8
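As a quick arithmetic check of the procedures above, the reported millimole quantities can be recomputed from the weighed masses. This is a sketch, not part of the original supporting information; the molar masses (g/mol) are standard textbook values, not taken from this document:

```python
# Standard molar masses in g/mol (textbook values, assumed here):
MW = {
    "2'-deoxythymidine": 242.23,
    "imidazole": 68.08,
    "TBDMSCl": 150.72,   # t-butyldimethylsilyl chloride
    "adenosine": 267.24,
}

def mmol(mass_g, name):
    """Convert a weighed mass in grams to millimoles via the molar mass."""
    return 1000.0 * mass_g / MW[name]

# Quantities quoted in the procedures above:
print(round(mmol(2.00, "2'-deoxythymidine"), 2))  # reported as 8.3 mmol
print(round(mmol(2.25, "imidazole"), 2))          # reported as 33.026 mmol
print(round(mmol(2.74, "TBDMSCl"), 2))            # reported as 18.16 mmol
print(round(mmol(2.12, "adenosine"), 2))          # reported as 8 mmol
```

Each recomputed value agrees with the corresponding figure quoted in the procedures to within rounding of the weighed mass.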
Core elements towards circularity: evidence from the European countries

In this paper, the authors identified key elements important for circularity. (1) Background: The primary goal of circularity is to eliminate waste and to promote the continual use of resources. In the paper, we classify studies according to circular approaches. The authors identified the main elements and classified them into categories important for circularity, starting with managing and reducing waste and the recovery of resources, and ending with the circularity of material and general circularity-related topics, and presented scientific works dedicated to each of the above-mentioned categories. The authors analyzed several core elements from the first category, aiming to investigate and connect different waste streams, and provided a regression model. (2) Methods: The authors used a dynamic regression model to identify relationships among variables and selected those that have an impact on the increase of biowaste. The research was carried out for the 27 European Union countries over the period between 2010 and 2019. (3) Conclusions: The authors indicated that the recycling rate of wasted electrical equipment in the previous year has an impact on the increase in biowaste recycling in the following year. This is explained by the fact that non-metallic spare parts of electronic equipment are used as biowaste for fuel production, and the separation process for the composites of electric equipment takes some time; on average, the effect is evident after a one-year period.

Introduction

Excessive use of natural resources, which are essential for economic growth and development, has harmed the environment while making these resources rarer and
more expensive [1,2]. Therefore, it is not difficult to see why the idea of circularity, which offers new ways to create a more sustainable model of economic growth, is taking hold around the world. An early approach towards practical sustainability was envisioned and demonstrated as the saving of resources, the prevention of waste, and the extension of product shelf life [3][4][5][6]. The recycling agenda requires industry to restructure its processes towards sustainability. Case examples prove the interlink between recycling and sustainability [7][8][9][10]. These examples summarize and demonstrate successful implementation by companies from the production industry, such as Hewlett Packard and the Low Carbon Industrial Manufacturing Park (LOCI-MAP), as well as studies which investigate the effects on environmental and socio-economic conditions in the context of sustainable development [11][12][13]. A perfect circularity with 100% efficient material use cannot exist, due to physical and practical limitations in material recycling [14][15][16][17]:

• Material turnover requires energy and impacts the environment. These effects can sometimes outweigh the effects of primary production. In any case, the impact of energy use should not go beyond environmental protection.

• Materials cannot be recalled for recycling or reuse at will in the long term. For example, steel in buildings cannot be reused for many years. Therefore, material demand cannot be met.

• Demand for most products grows as the economy grows. Even perfect material turnover is not enough to meet the growing demand.

• Material turnover involves inherent processing losses. Materials can also lose quality or become contaminated. Even with stable demand,
additional raw materials are needed.

• The supply of recycled raw materials is not in line with demand. Due to technological changes, or the lack thereof, substances that can only be obtained by primary extraction may be preferred.

These constraints mean that recycling and recycling efficiency alone are not enough to achieve sustainable yields. The more efficient use of materials can also have repercussions, as cheaper raw materials could lead to lower prices and stimulate higher consumption [13,18]. Where higher consumption requires energy, production has an impact on the environment [19,20]. The efficient use of resources could help to protect the environment but may hamper economic growth, or vice versa; modelling will help to define these boundaries [21]. Compared to the linear scheme, circularity is a more complex system [22][23][24]. Complexity arises from material stocks, return flows and their management in different countries, as well as resource planning, logistics and recycling management, the coordination of multi-level network activities, and the physical and information flows between network partners [25][26][27][28]. The study consists of three parts. The review of scientific studies and the identification of the main elements important for circularity are presented at the beginning of the study. Further on, the authors present the methodology. Following the methodology, the authors revise the core elements relevant to circularity by applying regression analysis. Seeking practical insight, the authors examine the data of European countries. Based on the analysis, the authors construct a set of regression equations to describe circularity. Finally, the discussion and conclusion sections are provided.

Literature review on circularity

The changing economic environment
(the program of the European Commission's Communication "Creating a circular economy. A Europe without waste") and changes in business organization processes themselves lead to changes in the fields of materials extraction, production, marketing, and recycling [29,30]. Keeping this in mind, many companies are modernizing and reorganizing traditional supply chains, moving from "linear" to "circular", thus reducing the consumption of material resources and energy, as well as the amount of waste generated [31][32][33]. The emergence of circularity is a natural evolutionary process, but new challenges are now being encountered, such as the seamless (non-fragmented) arrangement and aggregation of relevant functional activities over time to ensure a continuous and closed cycle of "raw material-product-waste" movements [34]. In the long run, circularity reduces the world's dependence on natural resources and delivers benefits to society by absorbing emissions and waste through increased material circulation while staying within the limits of the natural environment [35][36][37]. The cycles of extraction, production and consumption of products are shortened, which results in a faster cycle of material flows, including the collection, sorting, and recycling of used products [38,39]. The research theme responds to the European Union (EU) research priorities in Horizon 2020, which emphasize the need to increase the lifespan of goods, the re-use of materials, the recovery of resources using less harmful technologies, and the need to address the over-exploitation of resources [40]. The private sector and industry, which have a public commitment to ensure that recycled materials account for a certain proportion of products placed on the market, will play
a key role in shaping demand. The private sector will have to implement the solutions that are most appropriate given the extended life cycle of the product [41][42][43]. The circular scheme links traditional linear processes with product return processes involving product recovery, product recycling, dismantling, and the reuse of recycled products [44]. The statement "recyclable" means that the materials can be reused. However, this reuse can be applied to just about anything, from converting used plastics into the new containers we see on the roadside, to burning. Thus, "recycling" is a very broad term covering less desirable forms of re-use, such as downcycling to substandard products and even incineration [4,45,46]. The components and materials are used for enhancing maintenance, reuse, remanufacture and recycling. Manufacturers talking about container design say something more specific: they say that new containers can be made from used container materials, or that the material is suitable for similar purposes [47]. Incineration and low-quality adaptation are excluded. These manufacturers are paving the way to circularity. According to the authors [48][49][50][51], the optimised use of resources is reached through the circularity of products. Therefore, increasing recycling rates is vital to reaching circularity. The European Commission is setting new targets on waste management: to increase the share of municipal waste for recycling to 70% and to prepare packaging waste for recycling up to 80%. In this context, the European Commission (EC) has set a 50% recycling target for all plastic packaging waste collected by 2025, and 55% by 2030 [5,52]. The waste management hierarchy indicates the order
of priority for waste reduction and management. The goal of the waste management hierarchy is to maximize the practical benefits of products and to generate as little waste as possible. This delivers several benefits: it can help prevent greenhouse gas emissions, reduce pollution, save energy, conserve resources, create jobs, and encourage the development of green technologies. When products are recycled, repaired, or reused, employment is generated, and when waste from one process is used as an input into others, efficiency and productivity gains are achieved [53]. Savings of resources depend on how quickly products are collected from consumers, reach recycling sites, and are processed. A shorter duration of these processes also leads to a lower need for material (and natural) resources [19,21,54]. Products and services are rethought in the implementation of circular solutions based on durability, recyclability, reusability, repair, replacement, renewal, upgrading, and reduced use of materials [6,31]. To avoid waste, increase resource productivity, and decouple growth from natural resource consumption, companies need to apply these principles [37,42]. Circularity contributes by increasing resource efficiency and reducing environmental impact [55][56][57]. This can be achieved by applying or enabling one or more of the main circular approaches, which were first mentioned by the European Commission in 2006 and consisted of reuse (R3), recycle (R8) and recover (R9). Later on, following considerable criticism, more circular approaches were presented [54]. The authors reviewed the literature, classified the studies according to these approaches (R0-R9), and listed them in Table 1. The authors indicated 22
different combinations of circular approaches across 65 studies. Most authors focus on the 8th combination, which includes 7 Rs: reduce (R2), reuse (R3), repair (R4), refurbish (R5), remanufacture (R6), recycle (R8), and recover (R9). All circular approaches appear together only under the 2nd combination, covered by seven studies. According to the study, some circular approaches receive more attention than others: most authors focus on recycle (R8), reduce (R2), and recover (R9), while rethink (R1), re-purpose (R7), and refuse (R0) receive the least attention. To achieve circularity, greater focus is required on R7, which belongs to the end-of-life extension approaches.

Core elements important for circularity

To develop a literature review of circularity, we review its dominant elements, of which the academic literature provides a comprehensive overview [58][59][60]. The authors identified the key elements mentioned in the literature and grouped them into four main categories: managing and reducing waste, resource recovery, material circularity, and general circular. These categories were selected as representing a high level of circularity. General circular covers best practices for delivering value along environmental and social dimensions and new business models supporting circularity. The other three categories (managing and reducing waste, resource recovery, material circularity) represent circularity through efficient material use, which over a longer period could help achieve a faster circulation cycle of materials from consumption back to production. The first category, waste management, is examined by different authors according to the type of policy package. It is
particularly important that waste policy packages are aligned with strategic economic and industrial policy, as the industrial waste sector is one of those that dictate the change from linearity to circularity [61,62]. Forty-eight authors examine basic waste management, which applies typical policy measures such as the basic provision of public waste management services through landfilling or burning. Fourteen authors describe the waste hierarchy and the typical policy instruments that implement it, establishing a close link between waste management and resource use. In many areas, progress has been noted mainly in the recycling of industrial and organic waste [63,64]. The authors emphasise that companies wanting to reduce municipal waste must focus on consumers and forms of consumption [65,66]. To reuse, reduce, and recycle effectively, we must promote changes in consumer behaviour [19] and achieve substantial waste reduction through prevention, reduction, and recycling [27,34,54,67]. The second category, resource recovery, is promoted by the United Nations and the Organisation for Economic Co-operation and Development [71]. Alhawari et al. performed a literature review of a double-loop regeneration system that focuses on the efficient and effective use of ecosystem resources, benefiting the optimisation of environmental and economic activities. The third category, material circularity, considers the recycled content of a product together with its waste (linear flow) and usefulness (expressed over its lifetime) [24,68,69]. Garcés-Ayerbe et al. and Moraga et al. [23,35] emphasise replacing the end-of-life concept with a life cycle of
restoration and closed-loop products that aims to eliminate waste, maintain the value of products and materials, promote the use of renewable energy, and remove toxic chemicals. In current production and consumption practice, "end of life" is being replaced by reducing, reusing, and recycling processes in the production, distribution, and consumption of products and materials. The fourth category, general circular, covers circular business models and value creation. A circular business model revises the logic of how a company generates value for its customers while reducing environmental impact. It differs from a linear model by designing out waste and pollution, keeping used products and materials in circulation, and restoring natural systems.

Materials and Methods

Circularity is a complex and still developing process [72], a complexity that is also evident in Table 2. The study aims to identify the priorities and managerial aspects important for circularity and to review real situations, helping to identify the effective actions important for decision-making. Decisions are made by considering several factors, such as:
• the infrastructure important for circularity;
• the revision of management priorities;
• the analysis of current situations and the correction of actions.
The authors divided the review of circularity options into two layers, covering theoretical and practical aspects, as presented in Table 3.

Table 3. The two-layer methodology supporting the review of circularity options. Methodological background and output: the background of circularity [73,74]; the cycle covers stages from (1) the production of products; the review of the recycling aspect in waste
management [75][76][77][78][79][80]. The basic waste hierarchy prioritises the most effective solutions for waste management; many identified alternatives are combined with recycling, such as reuse, re-production, and repair. The revision of recycling trends, the evaluation of variables affecting the recycling rate, and the construction of a dynamic regression equation follow from the premise that increasing recycling, which guarantees the recovery of resources, is the main way to reduce waste levels. Below, the authors provide practical research on real situations covering two elements from the first category (municipal waste and waste recycling) and the waste streams, which can be separated into:
• municipal waste;
• mineral waste;
• packaging waste;
• biowaste;
• electronic waste (e-waste);
• construction waste.
The streams of mineral and construction waste were not examined in the empirical research because too few data sets are available for EU countries.

The revision of variables

The authors seek to identify the main factors for constructing a new regression model. Data were selected from the Eurostat database for the period 2000-2019 for the 27 EU countries. To identify the main relationships, the authors examined 13 variables and assessed the significance of the detected correlations. Variables that were not statistically significant were then removed, and only the significant ones were used in the construction of the regression equation. The authors applied the dynamic regression model first introduced by Petris et al. [81]. The first step of the model construction procedure was the transformation of
the time series, helping to identify the dependent variable and its links with the regressors. The constructed model meets the requirements of a simple regression model but captures dynamic interlinks. The analysis reveals trends (Fig. 1) showing that over the 19-year period the recycling rate of biowaste increased from 10 per cent to 17 per cent relative to waste generation figures. In the further analysis, the authors examined the links between data pairs and their strength. The recycling of biowaste was taken as the dependent variable, and the influence of changes in the other variables on it was evaluated. The regressors used for the regression model are specified below, where rec_biow is the logarithmic dependent variable for the recycling of biowaste (i.e., the amount of composted municipal waste, expressed in kg per capita) in year t. The results of the analysis are presented in the next sub-section.

Results of the analysis

The data for the dependent and independent variables were normalised by applying a logarithmic transformation. The constructed regression model is represented graphically in Figure 2. The authors performed tests of statistical validity: the t and chi-square probabilities show no significant heteroskedasticity or autocorrelation, with detailed results provided in Appendix A. The authors found that the recycling rate of electronic waste affects the recycling of biowaste with a one-year lag, which they attribute to the non-metallic components of electronic equipment that have recently been used for fuel production after being separated from other
spare parts.

Discussion

The improvement of circularity matters at several levels: society, industry, and government. Implementing high-level circularity immediately is too optimistic, as recycling currently takes only one-fifth of generated waste. First and foremost, a focus on waste prevention could help eliminate waste at the source, saving natural resources, energy, and delivery costs. Waste reduction activities, such as the application of smarter production, are appropriate across various circular approaches and waste streams. Beyond that, recycling and composting options could be considered for managing unavoidable waste and could increase material circularity. The results of the study are important for policymakers, as increased circularity leads to environmental sustainability. Waste recycling saves energy and improves circularity, as it supports the supply of raw materials for producing new products. When waste cannot be avoided, recycling is the next best option: it prolongs the life of landfills and makes the best use of natural resources. Biowaste composting is the recycling of organic matter; organic matter, such as packaging and non-metal waste, is transformed into a valuable soil substitute, guaranteeing the removal of organic waste from landfills. The research identified theoretical and practical gaps whose minimisation could increase circularity and speed up the transformation. The research has some limitations: it does not cover the construction and demolition waste and mineral waste streams, or the avoidance of surplus. High-level circularity can only be reached in the long term, as the current recycling of waste
is below 20 per cent and needs to be accelerated.

Conclusions

In recent years, many studies have focused on circularity. However, attention to the full set of circular approaches lags behind: only seven of the 65 reviewed studies cover all circular approaches. Attention to the circular approaches related to smarter production is still limited, as is attention to the single re-purpose approach within the end-of-life extension approaches. By contrast, material-oriented approaches are heavily researched and appear in studies as early as 2012. The authors identified the key elements for reaching high-level circularity during the review of studies and classified them into four categories: managing and reducing waste, resource recovery, material circularity, and general circular. Elements such as managing and reducing waste, the resource-centred dimension, the end-of-life concept, the recycling rate of final products, and value chains are the topics studied most. For the review of circularity as a complex process, the authors proposed a two-layer methodology covering theoretical and practical investigations. The practical investigation covered 19 years across the EU-27 countries. The authors examined trends, identified the waste streams affecting the amount of biowaste available for recycling, and found that the electronic waste, packaging waste, and municipal waste streams play an important role. Most waste streams have an impact within the same period; electronic waste, however, needs some time for decomposition and preparation. The authors constructed a dynamic regression equation and demonstrated its statistical validity. The future
work could extend the study in several directions:
• the revision of circularity measures and their interlinks to reveal gaps;
• the inclusion of other waste streams in the empirical research;
• the review of waste management aspects by sector;
• the forecasting of recycling rates.
The research extends the knowledge needed for reaching higher circularity.

Author Contributions: Literature review and data analysis, Olga Lingaitienė; conceptualisation and methodology, Aurelija Burinskienė.

Conflicts of Interest: The authors declare no conflict of interest.
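As a closing illustration of the regression step described in the Results of the analysis section, the sketch below fits a log-log regression of biowaste recycling on a one-year-lagged e-waste series with closed-form OLS. All numbers are invented for illustration (they are not Eurostat data), there is a single regressor rather than the several used in the paper, and plain OLS stands in for the dynamic specification of Petris et al.

```python
import math

# Invented illustrative series (NOT Eurostat data): biowaste recycled
# (kg per capita) and e-waste recycled one year earlier.
ewaste_lag1 = [4.0, 5.5, 6.1, 7.8, 9.0, 10.5, 12.0]
rec_biow = [30.0, 36.0, 39.0, 46.0, 50.0, 55.0, 60.0]

# Log-transform both series, mirroring the paper's normalisation step.
x = [math.log(v) for v in ewaste_lag1]
y = [math.log(v) for v in rec_biow]

# Closed-form OLS for a single regressor: slope = cov(x, y) / var(x).
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx

# In a log-log model the slope is an elasticity: a 1% rise in lagged
# e-waste recycling goes with roughly a `slope` per cent rise in
# biowaste recycling for this toy series.
print(round(slope, 3), round(intercept, 3))
```

For these made-up values the estimated elasticity is positive and well below one, i.e. biowaste recycling grows with, but more slowly than, lagged e-waste recycling.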
Determining the Influence of KemTRACE Cr and/or Micro-Aid on Growth Performance and Carcass Composition of Pigs Housed in a Commercial Environment

A study was conducted to determine the interactive effects of chromium propionate (KemTRACE Cr; Kemin Industries Inc., Des Moines, IA) and Micro-Aid (a Yucca schidigera-based product; Distributors Processing Inc., Porterville, CA) on growth performance and carcass composition of finishing pigs housed in a commercial environment. There were a total of 1,188 pigs (PIC 337 × 1050; initial BW = 60.3 lb) with 27 pigs/pen and 11 pens/treatment. Pigs were split by gender upon arrival at the facility, with 5 blocks of each gender and a final mixed-gender block. Gender blocks were randomly allotted to groups of 4 pen locations within the barn. Diets were corn-soybean meal-dried distillers grains with solubles-based and were fed in 5 phases. All nutrients were formulated to meet or exceed NRC (2012) requirement estimates. Treatments were arranged as a 2 × 2 factorial with main effects of Cr (0 vs. 200 ppb) or Micro-Aid (0 vs. 62.5 ppm). There were no Cr × Micro-Aid interactions observed for growth or carcass measurements. Overall, ADG and F/G were not influenced by treatment. Adding Cr alone increased (P = 0.048) ADFI, and inclusion of Micro-Aid resulted in a marginally significant increase (P = 0.076) in ADFI. For carcass characteristics, HCW, loin depth, and percentage carcass yield were not influenced by treatment. Backfat depth tended to increase (P = 0.055) and lean percentage was decreased (P = 0.014) when Cr was
added to diets. In summary, no synergistic effects were observed from feeding Cr and Micro-Aid in diets fed to finishing pigs housed in a commercial environment. Only marginal differences were observed from adding Cr or Micro-Aid, with increased ADFI observed from feeding either. Finally, diets containing added Cr tended to be associated with carcasses having more backfat and less lean, suggesting the increased ADFI was not utilized for increased muscle deposition.

and S.S. Dritz

Introduction

Chromium plays a role in carbohydrate, lipid, protein, and nucleic acid metabolism. 5,6 In addition, Cr is associated with insulin sensitivity as a cofactor for glucose tolerance factor. 7 Corn-soybean meal-based diets contain a significant amount of Cr, ranging from 1,000 to 3,000 ppb, but much of it is thought to be unavailable to the animal. 8 Recently, a meta-analysis including 31 different research studies evaluated Cr supplementation in finishing pig diets and observed that improvements in growth performance (ADG and F/G) and carcass composition (reduced backfat and increased percentage lean) can be expected with Cr supplementation. 9 Additionally, Yucca schidigera is believed to have a positive impact on gastrointestinal microflora through its saponin characteristics, thereby reducing gaseous emissions and potentially improving growth performance; however, limited research exists. 10 Research evaluating the effects of Yucca schidigera supplementation in poultry is available and would suggest
an improvement in F/G. 11 Additionally, research in mice with artificially induced diabetes mellitus indicates a potential reduction of circulating glucose levels when supplemented with Yucca schidigera extract through a potential insulin-releasing mechanism from pancreatic β-cells. 12 Research related to the impact of Yucca schidigera on blood metabolites in swine is currently very limited, and there are no data available to determine the interactive effects of Cr and Yucca schidigera when fed to finishing pigs. Therefore, the objective of this experiment was to determine the effects of Cr supplementation with and without Micro-Aid on growth performance and carcass composition of pigs housed in a commercial environment.

Procedures

The Kansas State University Institutional Animal Care and Use Committee approved the protocol used in this experiment. The study was conducted at a commercial research-finishing site in southwest Minnesota. The barn was naturally ventilated and double-curtain-sided. Each pen (18 × 10 feet) was equipped with a 4-hole stainless steel feeder and cup waterer for ad libitum access to feed and water and allowed approximately 6.5 ft²/pig. Hourly ambient barn temperatures were recorded throughout the experiment (EasyLog Data Loggers; Lascar Electronics, Erie, PA). Feed additions to each individual pen were made and recorded by a robotic feeding system (FeedPro; Feedlogic Corp., Wilmar, MN). A total of 1,188 pigs (PIC 337 × 1050; initial BW = 60.3 lb) with 27 pigs/pen and 11 pens/treatment were used in a 117-d study. Pens were blocked by BW and were randomly assigned to diets with 27 pigs per pen and
7 pens per treatment. Pigs were split by gender upon arrival at the facility, with 5 blocks of each gender and a final mixed-gender block. Gender blocks were randomly allotted to groups of 4 pen locations within the barn. Diets were corn-soybean meal-dried distillers grains with solubles-based and were fed in 5 phases. All nutrients were formulated to meet or exceed NRC (2012) requirement estimates (Table 1). Data were analyzed as a randomized complete block design using the GLIMMIX procedure of SAS (SAS Institute, Inc., Cary, NC) with pen as the experimental unit. Block was included in the model as a random effect and accounted for gender, location within barn, and initial BW at the time of allotment. Backfat, loin depth, and percentage lean were adjusted to a common carcass weight for analysis using HCW as a covariate. Results were considered significant at P ≤ 0.05 and marginally significant between P > 0.05 and P ≤ 0.10.

Results and Discussion

As expected, chemical analysis of complete diets revealed no notable differences among treatments (Tables 2 to 4). The only discrepancy was in Phase 3, when both Cr and Micro-Aid were included and the dietary level of Cr was lower than expected. There were no Cr × Micro-Aid interactions observed for the entire study (Table 5). For the grower period, added Cr increased (P < 0.028) ADG and ADFI. Added Micro-Aid in the grower period tended to worsen (P = 0.051) F/G. During the finishing period, added Cr tended to increase (P = 0.080) ADFI
but worsen F/G. Added Micro-Aid in the finishing period tended to increase (P = 0.088) ADFI. Overall, ADG and F/G were not influenced by treatment. Adding Micro-Aid tended to increase (P = 0.076) and adding Cr increased (P = 0.048) ADFI. For carcass characteristics, HCW, loin depth, and carcass yield were not influenced by treatment. Backfat depth tended to increase (P = 0.055) and lean percentage decreased (P = 0.014) when Cr was added to the diets. Previously reported benefits (2015) do not appear to be transferable to nursery or finishing pig production, with little research currently available. In summary, no synergistic effects were observed from feeding Cr and Micro-Aid in diets fed to finishing pigs housed in a commercial environment. Only marginal differences were observed from adding either Cr or Micro-Aid, with increased ADFI observed from feeding either. Finally, diets containing added Cr tended to be associated with carcasses having more backfat and less lean. This suggests the increased ADFI observed from pigs fed added Cr was not utilized to support lean deposition, but rather was converted to backfat.
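The carcass-trait adjustment described in the statistical analysis (backfat, loin depth, and percentage lean adjusted to a common carcass weight using HCW as a covariate) can be illustrated with a small, self-contained sketch. The pen values below are invented, and this two-step OLS adjustment is a simplified stand-in for fitting HCW as a covariate inside the GLIMMIX model.

```python
# Invented pen-level data: hot carcass weight (lb) and backfat depth (in).
hcw = [210.0, 225.0, 240.0, 255.0, 270.0]
backfat = [0.62, 0.66, 0.70, 0.74, 0.78]

n = len(hcw)
mean_hcw = sum(hcw) / n
mean_bf = sum(backfat) / n

# Covariate slope of backfat on HCW via simple OLS.
b = (sum((h - mean_hcw) * (f - mean_bf) for h, f in zip(hcw, backfat))
     / sum((h - mean_hcw) ** 2 for h in hcw))

# Adjust each pen's backfat to the common (mean) carcass weight, so
# treatment comparisons are not confounded by carcass weight differences.
adjusted = [f - b * (h - mean_hcw) for f, h in zip(backfat, hcw)]
print([round(a, 3) for a in adjusted])
```

Because the toy data are exactly linear in HCW, every pen ends up with the same adjusted backfat; in real data the adjusted values would differ only by the variation not explained by carcass weight.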
Undersmoothing Causal Estimators With Generative Trees

Average causal effects are averages of (heterogeneous) individual treatment effects (ITEs) taken over the entire target population. The estimation of average causal effects has been studied in depth, but averages are insufficient for more individualised decision-making, where ITEs are more appropriate. However, estimating ITEs for every population member is challenging, particularly when estimation must be based on observational data rather than data from randomised experiments. One potential problem with observational data arises when there are large differences between the sample distributions of the input features of the treated and control units. This problem is known as covariate shift. It can lead to model misspecification, the harmful effects of which can be severe for ITE estimation because point estimation is highly sensitive to regions of the common support of the input space in which the number of treated or control units is very small. Moreover, common solutions are often based on reweighing schemes involving propensity scores, which were originally designed for average effects and not ITEs. In this paper, we propose Debiasing Generative Trees, a novel data augmentation method based on generative trees that debiases and undersmooths causal estimators trained on augmented data. It encourages higher modelling complexity that reduces misspecification and improves estimation of ITEs. We show empirically that our proposed approach yields models of higher complexity and more accurate predictions of ITEs, and is competitive with traditional methods for estimating average treatment effects. Our results confirm that reweighing methods can struggle with ITE estimation and that the choice of
model class can significantly impact prediction performance.

I. INTRODUCTION

In the absence of data from randomised experiments, analysts must use observational data to make inferences about the causal effects of interventions or treatments, that is, what would happen if they intervened to change the treatment status of individual units in a population. The estimation of average causal effects - the average effect of the treatment aggregated across every unit in a population - has been studied in considerable depth. However, there is now growing interest in estimating heterogeneous treatment effects for individuals characterized by a possibly large number of input variables or covariates. If there is substantial heterogeneity across units, such systems can unlock the analysis of targeted interventions, for instance, in the form of personalised healthcare based on covariates that describe patients' symptoms and health histories. The use of observational data creates challenges for the estimation of heterogeneous causal effects. First, the analyst must make assumptions, for example, that treatment selection is strongly ignorable given the available covariates. We take ignorability to hold throughout, and focus on the second problem, namely, that nonrandom treatment selection can lead to observed data in which the distributions of covariates among the treated and untreated units are very different. In practice, this can make it difficult for conventional learners to learn the true relationship between the treatment effect and covariates across the entire support of the covariates, and so result in poor performance when tested on other datasets. More generally, this issue is known as 'covariate shift', which in this setting means the learning target P (Y
|X) remains unchanged, while the marginal distributions of the covariate inputs P (X) for treated and untreated can be very different. Most existing methods attempt to transform the observational distribution by sample reweighing schemes, usually based on propensity scores [4], [8], [19], [28], [29] (but not exclusively; see, e.g., domain adaptation methods). However, reweighting seeks to standardise the observed support of X for the treated and untreated groups, and so generally performs well for estimating treatment effects averaged across the common support of X, but less so for estimating conditional average treatment effects at points outside the observed support; in other words, as pointed out by [32], reweighting does not address the problem of model misspecification, which can be detrimental when it comes to estimating individualised treatment effects [33]. A promising alternative to these classical approaches is undersmoothing, where the model is allowed to fit the data very closely to capture P (X) in the two groups, and in doing so potentially produce more accurate individualised predictions. Encouraged by suggestions elsewhere - [9, footnote 3] and [23] - in this paper, we develop a novel approach to causal effect estimation that improves accuracy by undersmoothing the observed data. Specifically, we propose to undersmooth using fast and straightforward generative trees [10] to augment the existing data, and in doing so facilitate more robust learning of downstream estimators of key causal parameters. The trees are used to 'discretise' the input space into subpopulations of similar units (subclassification); the distributions of these groups are then modelled separately via mixtures of Gaussians, from which we sample equally
to reduce data imbalances. Data augmentation has proven effective in multiple scenarios, for instance, image transformations in computer vision [26], or oversampling minority classes in imbalanced classification problems [7], [15] (arXiv:2203.08570v1 [cs.LG] 16 Mar 2022). In our case, the method we propose could be seen as oversampling underrepresented data regions instead of just classes. Generative models have also been investigated in the causal inference literature [3], [22], though mostly for benchmarking purposes, where new synthetic data sets are created that closely resemble real data but with access to true effects. This work, on the other hand, goes beyond data modelling and focuses on targeted data augmentation instead. Arguably the closest work to ours that combines data augmentation and generative models within the causal inference setting is [5]. Despite a similar high-level approach, that is, training downstream causal estimators on augmented data, we believe our frameworks differ substantially upon further examination. More precisely, [5] incorporates neural-network-based generative models to specifically generate counterfactuals and focuses on conditions where the treatment is continuous. In this work, our proposed method: a) is based on simple and widely-used decision trees, b) does not specifically generate counterfactuals, but oversamples heterogeneous data regions (more general), and c) works with classic discrete treatments. In terms of this paper's contributions, we show empirically that the choice of model class can have a substantial effect on an estimator's final performance, and that standard reweighing methods can struggle with individual treatment effect estimation. Given our experiments, we also provide evidence that our proposed method increases data complexity, leading to statistically
significant improvements in individual treatment effect estimation, while keeping the average effect predictions competitive. Our experimental setup incorporates a wide breadth of non-neural standard causal inference methods and data sets. We specifically focus on non-neural solutions as they are more commonly used by practitioners. The code accompanying this paper is available online at https://github.com/misoc-mml/undersmoothing-data-augmentation. The rest of the document is structured as follows. First, we revisit fundamental concepts that should aid understanding of the technical part of the paper. Next, we formally discuss the problem of model misspecification, followed by a thorough description of our proposed method. We then present our experimental setup and the obtained results. The next section provides further discussion of the results, their implications, and the considered limitations of the method. The final section concludes the paper.

II. PRELIMINARIES

This section gives a brief overview of the essential background deemed relevant to this work. For a more extensive review, we refer the reader to classic works on causal analysis [25], [27], and recent surveys on causal inference [14], [34]. Given two random variables T and Y, investigating the effects of interventions can be described as measuring how the outcome Y differs across different inputs T. Real-world systems usually contain other background covariates, denoted as X, which have to be accounted for in the analysis as well. To formally approach the task, we take Rubin's Potential Outcomes [30] perspective, which is particularly convenient for outcome estimation without knowing the full causal graph. We start by defining the potential outcomes Y_i(t), that is, the observed outcome when individual i receives treatment t = 0, 1. Given
this, the Individual Treatment Effect (ITE) can be written as:

ITE_i = Y_i(1) - Y_i(0).

Thus, to compute such a value for individual i, we need access to both potential outcomes, Y_i(1) and Y_i(0), but only one, called the factual, is observed: the other potential outcome, called the counterfactual, cannot be observed. The fact that we only observe factuals but also need the counterfactuals to properly compute causal effects is known as the fundamental problem of causal inference: ITEs are not identified by the observed data. However, parameters such as the Average Treatment Effect (ATE) and Conditional Average Treatment Effect (CATE) are identified, where

ATE = E[Y(1) - Y(0)],   CATE(x) = E[Y(1) - Y(0) | X = x],

and E[.] denotes mathematical expectation. The ATE is essentially the average ITE for the entire population; the CATE is the average ITE for everyone in the subpopulation characterised by X = x. The ATE is not meaningful if there is substantial heterogeneity of the ITEs between subpopulations. In such circumstances, the CATE is more informative about ITEs as it allows the effect to be conditioned on the subpopulation of interest. The ITE can be thought of as a special case of the CATE where individual i is the only member of the subpopulation. While ITE_i cannot be identified, the CATE for the subpopulation X = x which includes individual i will be a better estimate of it than the ATE (under the reasonable assumption that between-subpopulation variation in ITEs is greater than that within subpopulations). Although the aforementioned treatment effects usually cannot be calculated directly, successful methods have been developed that attempt to approximate these quantities. Perhaps the simplest and most naive
approach is regression adjustment, where a regressor, or one per treatment value, is used to estimate potential outcomes. More advanced methods often incorporate propensity scores, where the estimator takes into account the probability of treatment assignment for each individual. For instance, Inverse Propensity Weighting [29] adjusts sample importances, and is further extended by the more efficient and stable Doubly Robust method [12], [28]. Double Machine Learning [8], on the other hand, improves existing statistical estimators using base learners. Furthermore, the recent surge in machine learning has also delivered powerful procedures, often pushing state-of-the-art results [17], [21], [31], [35]. In the realm of ensembles, there is the Causal Forest [4], which specifically targets CATE estimation. Another interesting perspective on the problem is given by metalearners [19], [24], where out-of-the-box estimators are used in various combinations and strategies to collectively approximate causal effects.

These are the most common methods that employ the usual assumptions, that is, SUTVA and strong ignorability, though there are many procedures that attempt to relax some of these assumptions as well. Here, we limit our discussion to this standard set of assumptions as it is relevant to this work. For a broader overview of available causal inference methods, as well as formal definitions of the assumptions, consult recent reviews on the topic [14], [34].

III. MODEL MISSPECIFICATION

The choice of model class occurs at some point in any learning task. Such a decision is made based on the available data, usually the training part of it, while the environment of the actual application can be different, a scenario often mimicked via a separate test set. The
occurring discrepancies between those two data sets are known as the covariate shift problem. Within causal inference, this manifests as differences between observational and interventional distributions, ultimately making effect estimation extremely difficult. More formally, given input covariates x, treatment t, and outcome y, the conditional distribution P(y|x, t) remains unchanged across the entire data set, whereas the marginal distributions P(x, t) differ between observational and interventional data. This is where model misspecification occurs, as the model class is selected based on the available observations only, which do not generalise well to the interventions predicted later.

Let us consider the simple example presented in Figure 1. It consists of a single input feature x, an output variable y (both continuous), and a binary treatment t. For convenience, let us denote this data set as D. Note that the effect is clearly heterogeneous, as it differs between D(x < 0.5) and D(x > 0.5). Furthermore, the two data regions closer to the top of the figure, that is, D(x < 0.5, t = 1) and D(x > 0.5, t = 0), are in the minority with respect to the rest of the data. Many learners will likely treat these scarce data points as outliers, resulting in lower variance than needed to provide accurate estimates. Thus, naively fitting the data will lead to biased estimates, an example of which is depicted in the figure as Biased T and Biased C. However, what we aim for is an unbiased estimator that captures the data closely while still generalising well, a scenario showcased by Unbiased T and Unbiased C in the figure. For
ITE estimation, fitting the data closely is especially important. Although in the case of average effect estimation the difference between biased and unbiased estimators can be negligible, the individualised case usually exacerbates the issue. For instance, in the presented example, the difference in ATE error is 0.44, but it grows to 0.77 in ITE error.

In this work, instead of altering sample importances, as many existing methods do, we aim to augment the provided data so that underrepresented data regions are no longer dominated by the rest of the samples, leading to estimators that no longer treat those data points as outliers and fit them more closely, ultimately resulting in less biased solutions and more accurate ITE estimates. The following section describes our proposed method in detail.

IV. DEBIASING GENERATIVE TREES

As described in the previous section, model misspecification can be caused by underrepresented or missing data regions. Reweighing partially addresses this problem, but struggles with ITE estimation, not to mention that propensity score approximators are subject to misspecification too. To avoid these pitfalls, we tackle misspecification through undersmoothness by augmenting the original data with new data points that carry useful information and help the final estimators achieve better ITE predictions. As the injected samples are expected to be informative to the learners, the overall data complexity increases as a consequence. Moreover, because this is a data augmentation procedure, it is estimator agnostic, that is, it can be used with any existing estimation method. It is also worth pointing out that simply modelling and oversampling the entire joint distribution would not work, as the
learnt joint would include any existing data imbalances. In other words, underrepresented data regions would remain in the minority, not addressing the problem at hand. This observation led us to the conclusion that there is a need to identify smaller data regions, or clusters, and model their distributions in separation instead, giving us control over which areas to sample from and with what ratios. To achieve this, we incorporate the recently proposed Generative Trees [10], which retain all the benefits of standard decision trees, such as simplicity, speed and transparency. They can also be easily extended to ensembles of trees, often improving the performance significantly. In practice, a standard decision tree regressor is used to learn the data. Once the tree is constructed, the samples can be assigned to tree leaves according to the learnt decision paths, forming the distinct subpopulations that we are after. The distributions of these clusters are then separately modelled through Gaussian Mixture Models (GMMs). Similarly to decision trees, we again prioritise simplicity and ease of use here, which is certainly the case with GMMs. The next step is to sample equally from the modelled distributions, that is, to draw the same number of new samples per GMM. In this way, we
reduce data imbalances. A merge of the new and original data is then provided to a downstream estimator, resulting in a less biased final estimator. Through experimentation, we find that splitting the original data at the beginning of the process into treated and control units and learning a separate tree for each group helps achieve a better overall effect. A step-by-step description of the proposed procedure is presented in Algorithm 1.

As ensembles of trees almost always improve over single trees, we incorporate Extremely Randomised Trees for an additional performance gain. The procedure remains the same on a high level, differing only in randomly selecting inner trees at the time of sampling. Overall, we call this approach Debiasing Generative Trees (DeGeTs) as a general framework, with DeGe Decision Trees (DeGeDTs) and DeGe Forests (DeGeFs) as realisations with Decision Trees and Extremely Randomised Trees respectively.

There are a few important parameters to consider when using the method. Firstly, the depth of the trees controls the granularity of the identified subpopulations. Smaller clusters may translate to less accurately modelled distributions, whereas too shallow trees will bring the modelling closer to the entire joint, which may result in not solving the problem of interest at all. The other tunable knob is the number of new data samples to generate, where more data usually equates to a stronger effect, but also higher noise levels, which must be controlled to avoid destroying meaningful information in the original data. Finally, the number of components in the GMMs is worth considering, as more complex distributions may require higher numbers of components. All of the parameters can
be found through cross-validation by using a downstream estimator's performance as a feedback signal as to which parameters work best, which can also be tailored to a specific estimator of choice. The number of GMM components can alternatively be optimised through the Bayesian Information Criterion (BIC) score. In order to make this method as general and easy to use as possible, we instead provide a set of reasonable defaults that we find work well across different data sets and settings. Default parameters: max_depth = log_2(N_f) − 1, where N_f denotes the number of input features; n_samples = 0.5 × size(training data); n_components ∈ [1, 5], picking the one with the lowest BIC score.

In addition, we note that the DeGeTs framework goes beyond the applied Generative Trees and GMMs. This is because the data splitting part can, in fact, be performed by other methods, such as clustering. Consequently, the GMMs can be substituted by any other generative models.

V. EXPERIMENTS

We follow recent literature (e.g. [17], [31], [35]) in terms of the incorporated data sets and evaluation metrics. We start by defining the latter, as different data sets use different sets of metrics. The source code that allows for a full replication of the presented experiments is available online² and is based on the CATE benchmark³.

There are a few aspects we aim to investigate. Firstly, how the established reweighing methods perform in individual treatment effect estimation. Secondly, how the choice of model class impacts estimation accuracy (misspecification). Thirdly, how our proposed method affects the performance of the base
learners, and how it compares to other methods. Finally, we also study how our method influences the number of rules in pruned decision trees as an indirect measure of data complexity.

Although we do perform a hyperparameter search to some extent in order to get reasonable results, it is not our goal to achieve the best results possible; hence the parameters used here are likely not optimal and can be improved upon with a more extensive search. The main reason is that the setups presented as part of this work are intended to be as general as possible. This is why in our analysis we specifically focus on the relative difference in performance between settings rather than comparing them to absolute state-of-the-art results.

A. Evaluation Metrics

The metrics used here focus on quantifying the errors made by the provided predictions. Thus, the metrics are denoted as ε_X, which translates to the amount of error made with respect to prediction type X (lower is better). In terms of treatment outcomes, Y_t^{(i)} and ŷ_t^{(i)} denote the true and predicted outcomes respectively for treatment t and individual i. Thus, following the definition of ITE (Eq. (1)), the difference Y_1^{(i)} − Y_0^{(i)} gives a true effect, whereas ŷ_1^{(i)} − ŷ_0^{(i)} gives a predicted one. Following this, we can define the Precision in Estimation of Heterogeneous Effect (PEHE), which is the root mean squared error between predicted and true effects:

ε_PEHE = sqrt( (1/N) Σ_{i=1}^{N} ( (ŷ_1^{(i)} − ŷ_0^{(i)}) − (Y_1^{(i)} − Y_0^{(i)}) )² )

Following the definition of ATE (Eq. (2)), we measure the error on ATE as the absolute difference between the predicted and true average effects, formally written
as:

ε_ATE = | (1/N) Σ_{i=1}^{N} (ŷ_1^{(i)} − ŷ_0^{(i)}) − (1/N) Σ_{i=1}^{N} (Y_1^{(i)} − Y_0^{(i)}) |

Given a set of treated subjects T that are part of a sample E coming from an experimental study, and a set of control group subjects C, the true Average Treatment effect on the Treated (ATT) is defined as:

ATT = (1/|T|) Σ_{i∈T} Y^{(i)} − (1/|C ∩ E|) Σ_{i∈C∩E} Y^{(i)}

The error on ATT is then defined as the absolute difference between the true and predicted ATT:

ε_ATT = | ATT − (1/|T|) Σ_{i∈T} (ŷ_1^{(i)} − ŷ_0^{(i)}) |

The policy risk is defined as:

R_pol = 1 − ( E[Y_1 | π(x) = 1] · p(π = 1) + E[Y_0 | π(x) = 0] · p(π = 0) )

where E[.] denotes mathematical expectation and the policy π becomes π(x) = 1 if ŷ_1 − ŷ_0 > 0, and π(x) = 0 otherwise.

B. Data

We incorporate a set of well-established causal inference benchmark data sets that are briefly described in the following paragraphs and summarised in Table I.

IHDP. Introduced by [16], based on the Infant Health Development Program (IHDP) clinical trial [6]. The experiment measured various aspects of premature infants and their mothers, and how receiving specialised childcare affected the cognitive test scores of the infants later on. We use a semi-synthetic version of this data set, where the outcomes are simulated through the NPCI package⁴ (setting 'A') based on real pre-treatment covariates. Moreover, the treatment groups are made imbalanced by removing a subset of the treated individuals. We report errors on estimated PEHE and ATE averaged over 1,000 realisations and split the data with 90/10 training/test ratios.

JOBS. This data set, proposed by [1], is a combination of the experiment done by [20] as part of the National Supported Work Program (NSWP) and observational data from the Panel Study of Income Dynamics (PSID) [11]. Overall, the data capture people's basic characteristics, whether they received job training from the NSWP (treatment), and their employment
NEWS. Introduced by [17], this data set consists of news articles in the form of word counts with respect to a predefined vocabulary. The treatment is represented by the device type (mobile or desktop) used to view the article, whereas the simulated outcome is defined as the user's experience. Similarly to IHDP, we report PEHE and ATE errors for this data set, averaging over 50 realisations with 90/10 training/test ratio splits.

TWINS. This data set comes from official records of twin births in the US in the years 1989-1991 [2]. The data are preprocessed to include only twins of the same sex where each of them weighs less than 2,000 grams. The treatment is represented by whether the individual is the heavier of the twins, whereas the outcome is mortality within the first year of life. As both factual and counterfactual outcomes are known from the official records, that is, the mortality of both twins, one of the twins is intentionally hidden to simulate an observational setting. Here, we incorporate the approach taken by [21], where new binary features are created and flipped at random (0.33 probability) in order to hide confounding information. We report ε_ATE and ε_PEHE for this data set, averaged over 10 iterations with 80/20 training/test ratio splits.

Debiasing Generative Trees. Our proposed method. We include the stronger performing DeGeF variation.

A general approach throughout all conducted experiments was to train a method on the training set and evaluate it against the appropriate metrics on the test set. Five base learners were trained and evaluated in that way: l1, l2, Simple Trees, Boosted Trees and
Kernel Ridge. DML and the Meta-Learners were combined with different base learners, as they need them to solve intermediate regression and classification tasks internally. This resulted in 3 × 5 = 15 combinations of distinct estimators. Similarly, DeGeF was combined with the same 5 base learners to investigate how they react to our data augmentation method. Causal Forest and the dummy regressor were treated as standalone methods. Overall, we obtained 27 distinct estimators per data set. In terms of Simple and Boosted Trees, we defaulted to ETs and CatBoost respectively. For NEWS, due to its high dimensionality, we switched to the computationally less expensive Decision Trees and LightGBM instead.

As our DeGeF method is a data augmentation approach, it affects only the training set that is later used by the base learners. It does not change the test set in any way, as the test portion is used specifically for evaluation purposes to test how methods generalise to unseen data examples. More specifically, DeGeF injects new data samples into the existing training set, and that augmented training set is then provided to the base learners.

Most of our experimental runs were performed on a Linux-based machine with 12 CPUs and 60 GB of RAM. More demanding settings, such as NEWS combined with tree-based methods, were delegated to one with 96 CPUs and 500 GB of RAM, though such a powerful machine is not required to complete those runs.

Tables II-V present the main results, where we specifically focus on: a) the metrics relevant to a given data set, and b) changes in performance relative to a particular base learner. The latter is calculated
as ((r_a − r_b)/r_b) × 100%, where r_a and r_b denote the results of the advanced methods and the base learners respectively. The reason for analysing these relative changes rather than absolute values is that in this study we are specifically interested in how more complex approaches (including ours) affect the performance of the base learners, even if they do not reach state-of-the-art results. For example, if a relative change for xl-et reads '−20', it means this estimator decreased the error by 20% when compared to the plain et learner for that particular metric. Changes greater than zero denote an increase in errors (lower is better). Furthermore, Table VI shows the number of rules obtained from a pruned Decision Tree when trained on the original data and on data augmented by degef. All presented numbers (excluding relative percentages) denote means and 95% confidence intervals.

VI. DISCUSSION

In terms of the IHDP data set (Table II), the classic methods (dml, tl, and xl) strongly improve in ATE, but can also be unstable, as is the case with dml, specifically dml-cb and dml-lgbm. Against PEHE, the situation is much worse, as those methods significantly decrease in performance when compared to the base learners, not to mention catastrophic setbacks in the worst cases (deltas above 200%). Note that not a single traditional method improves in PEHE (all deltas positive). Our degef, on the other hand, often improves in both ATE and PEHE (see the negative deltas). Even in the worst cases with l1 and l2, degef is still very stable and does not destroy the predictions as happened with
the other approaches. Thus, our method clearly offers the best improvements in PEHE and competitive predictions in ATE while providing a good amount of stability.

In the JOBS data set (Table III), the classic methods again achieve strong improvements in average effect estimation (ATT) in the best cases, though they can be substantially worse as well (e.g. dml-et). In policy predictions, an equivalent of ITE, traditional techniques are even less likely to provide improvements, except for the X-Learner. degef can also worsen the quality of predictions in ATT, as shown with degef-l1, though it does not get as bad as dml-et. However, even in that worst example, policy predictions are not destroyed. The best cases of degef, on the other hand, achieve strong improvements in policy. Similarly to IHDP, degef here provided solid improvements in ITE predictions (policy) while staying on par with traditional methods in ATT, obtaining reasonable improvements and keeping its worst cases still better than the worst ones of the other methods, proving its stability again.

The TWINS data set (Table IV) proved to be very difficult for all considered methods when it comes to PEHE, though they did not worsen the predictions either. Some good improvements in ATE can be observed, but also noticeable decreases in performance in the worst cases (combinations with dt). Our method behaves similarly to the classic ones, offering occasional gains and keeping the decreases within reasonable bounds. The stability of degef is especially noticeable in PEHE, as the worst decrease (degef-dt) is still better than in the other methods.

The last data set, NEWS (Table V), showed the
traditional approaches can provide some improvements in PEHE as well, at least in their best efforts, though performance decreases are also noticeable in the worst ones. They also offer quite stable improvements in ATE, except for the extremely poor dml-dt. The X-Learner performs particularly well across both metrics (most deltas negative). Our proposed method offers reasonable gains in ATE as well, while keeping performance decreases at bay even in the worst efforts. Even though degef provides little improvement in PEHE, it does not destroy individualised predictions either. Overall, this data set showcases the superior stability properties of degef particularly well, making it a preferable choice if small but safe performance gains are desirable over potentially higher but riskier improvements.

In general terms, the results show that performance can vary substantially depending on the model class, even within the same advanced method (dml, xl, degef). For instance, DML proved to work particularly well with L1 and L2 as base learners, whereas the X-Learner often outperforms the T-Learner, adding more stability to the results as well. Our proposed technique usually offers significant improvements in ITE predictions in the best cases, often better than traditional methods, while keeping the predictions stable even in the worst examples. Classic methods are clearly strong in ATE estimates, but can struggle in individualised predictions. Overall, these methods (dml, xl) proved to be less stable than ours, with the worst cases performing quite poorly, especially dml. This makes degef a safer choice on average when considering various estimators, even more so when achieving the best possible performance is not considered a priority.

We also investigate the number of rules
in pruned Decision Trees as a proxy for data complexity. As presented in Table VI, degef significantly increases the number of rules across all data sets, translating to an increase in data complexity. This shows that the undersmoothing effect we aim for has been achieved. In addition, we observe that the modest data complexity increases in IHDP and JOBS correlate with strong degef gains in ITE estimation in those two data sets, whereas a much bigger difference in TWINS (from 9.6 to 59.1) correlated with considerably lower prediction performance gains (Table IV).

After combining all the results, we can observe that degef: a) improves effect predictions (Tables II-V), and b) increases data complexity (Table VI). We thus conclude that more accurate effect prediction (as per a)) is a sign of better model generalisation. Consequently, we equate better generalisation with reduced model misspecification. Furthermore, we can observe the undersmoothing effect as per b). This is our indirect evidence that our method addresses misspecification via undersmoothing, showing that downstream estimators improve effect predictions when trained on the augmented data. In terms of theoretical guarantees, we rely on [33], which provides a thorough formal analysis of the problem of undersmoothing.

In terms of possible limitations of our method, we assume the data sets we work with have relatively low noise levels. This is because in noisy environments the inner GMMs would likely pick up a lot of noise, and thus sampling from them would result in even noisier data samples. The result would be the opposite of what we aim for, that is, to increase data complexity
and bring in new informative samples, not to introduce bias in the form of noise. Thus, our method would likely worsen base learner performance in such environments. Furthermore, we expect that extremely high-dimensional data sets may cause computational issues due to the increasing depth of the inner trees. This is partly why setting a reasonable depth limit is important.

VII. CONCLUSIONS

Treatment effect estimation tasks are often subject to the covariate shift problem, which is exhibited by discrepancies between observational and interventional distributions. This leads to model misspecification, which we tackle directly in this work by introducing a novel data augmentation method based on generative trees that provides an undersmoothing effect and helps downstream estimators achieve better robustness, ultimately leading to less biased estimators. Through our experiments, we show that the choice of model class matters, and that traditional methods can struggle in individualised effect estimation. Our proposed approach presented competitive results with existing reweighing procedures on average effect tasks while offering significantly better performance improvements on individual effect problems. The method also exhibits better stability in terms of the provided gains than other approaches, rendering it a safer option overall.

In terms of possible future directions, it might be interesting to investigate the feasibility of replacing generative trees with neural networks to handle extremely high-dimensional problems. Another direction would be to instantiate the DeGeTs framework with alternative methods, such as standard clustering and generative neural networks. Lastly, extending our approach to noisy data sets would likely increase its potential applicability to real-world problems.

Fig. 1. An example highlighting the model misspecification issue. T and C denote Treated
and Control respectively. The difference in ITE error is almost twice that in ATE.

Algorithm 1 Debiasing Generative Trees
Input: X - data set, E - estimator
Parameter: N - number of generated samples
Output: E_D - debiased estimator
1: Let X_G = ∅.
2: Split X into treated and control units (X_T and X_C).
3: Train a Decision Tree regressor on X_T.
4: Map X_T to tree leaves. Obtain subpopulations S.
5: Let N_G = N/(2 × len(S)).
6: for S_i in S do
7:   Model S_i with Gaussian Mixture Models. Obtain G_i.
8:   Draw N_G samples from G_i. Store them in X_G.
9: end for
10: Repeat steps 3-9 for X_C.
11: Merge X and X_G into a single data set X_M.
12: Train estimator E on X_M. Get debiased estimator E_D.
13: return debiased estimator E_D

TABLE I. A SUMMARY OF THE INCORPORATED DATA SETS. T/C DENOTE THE NUMBER OF TREATED AND CONTROL SAMPLES RESPECTIVELY.

TABLE V. NEWS RESULTS. ESTIMATORS MARKED WITH 'X' - NO RESULTS DUE TO UNREASONABLY EXCESSIVE TRAINING TIME.

TABLE VI. NUMBER OF RULES IN A PRUNED DECISION TREE WITH AND WITHOUT degef AUGMENTATION.
Antidepressant-like activity, active components and related mechanism of Hemerocallis citrina Baroni extracts

Hemerocallis citrina Baroni [Asphodelaceae], a traditional herbal medicine, has been widely used for treating depressive disorders in East Asian countries. However, its active compounds and the corresponding mechanism of anti-depression have not yet been completely clarified. In this study, the anti-depressive activities of six H. citrina extracts were first evaluated. The results showed that the water extract of H. citrina flowers (HCW) displays significant anti-depressive activity. A total of 32 metabolites were identified from HCW by high-performance liquid chromatography/quadrupole time-of-flight mass spectrometry (HPLC-Q-TOF-MS) and nuclear magnetic resonance (NMR). Then, the anti-depressive activity of the most abundant compound in HCW (rutin) was also estimated. The results indicated that rutin displayed significant anti-depressive activity and was one of the main active ingredients. Finally, the anti-depressive mechanisms of HCW and rutin were investigated based on the intestinal microorganisms. The results showed that HCW and rutin increase the diversity and richness of the intestinal flora and regulate specific intestinal microorganisms, such as the Bacteroides and Desulfovibrio genera, in depressed mice. This work marks the first comprehensive study of the active components, anti-depressive activities and corresponding mechanisms of different H. citrina extracts, which provides a potential basis for developing new antidepressants.

Introduction

Depression, a mental disease with high morbidity and mortality, has become a severe public health problem in the 21st century. According to a prediction by the World Health Organization, depression will become the disease with the heaviest economic burden in the world by 2030 (World Health Organization, 2017; Miller and
Campo, 2021). At present, synthesized drugs are the most commonly used and effective treatment in clinical practice, but they have disadvantages such as a low response rate, serious side effects, and high price (Krishnan and Nestler, 2008; Carhart-Harris et al., 2021). Traditional herbal medicines, however, have unique advantages in preventing and treating depression as alternative and complementary therapies. Therefore, the development of new antidepressants from traditional herbal medicines has become a research hotspot.

H. citrina Baroni (also called "Huang Hua Cai" in Chinese) is widely grown in China, Japan, and Korea, and its flower buds are one of the most commonly consumed vegetables in Asia (Ma et al., 2018; Liu et al., 2020). The flower buds of H. citrina were recorded to relieve depression in the medicinal book "Compendium of Materia Medica", a famous Chinese encyclopedia of medicine written in the Ming dynasty (Li et al., 2017; Xu et al., 2020; Qing et al., 2021a). Modern pharmacology has also proved that extracts of H. citrina flower buds have significant antidepressant activity, with polyphenols and flavonoids regarded as the main active components (Lin et al., 2013; Xu et al., 2016). However, the specific active ingredients in the extract of H. citrina that may display prominent antidepressant-like activity have rarely been identified and need further investigation.

The active constituents undergo successive changes during plant growth, and metabolites vary between fresh and dry flower buds of H. citrina (Qing et al., 2017a; Yu et al., 2020; Qing et al., 2021b). In some previous studies (Du et al., 2014; Xu et
al., 2016), a chronic unpredictable mild stress (CUMS) model was used to evaluate the anti-depressive effect of H. citrina using only one type of flower bud (mainly the dried flower buds of H. citrina), which may have caused a series of active constituents to be neglected and left unestimated. In the present study, we comprehensively evaluate the antidepressant-like activities of extracts produced from dried flower buds, fresh flower buds, and flowers of H. citrina (Supplementary Figure S1).

The antidepressant activity of H. citrina extracts and the related mechanisms have been investigated in previous studies. H. citrina extracts could increase the levels of monoamine neurotransmitters, such as 5-hydroxytryptamine, dopamine (DA) and norepinephrine (NE), in the brains of depressed mice (monoamine hypothesis) (Gu et al., 2012; Lin et al., 2013; Xu et al., 2016; Qing et al., 2021a). In addition, the extracts of H. citrina were able to increase the content of BDNF (neurotrophic hypothesis) and reduce IL-1β, IL-6, TNF-α and malondialdehyde (MDA) levels (stress hypothesis) in the brains of depressed mice. With the growing research on human health and gut microbes, depression has been found to be inextricably linked with changes in intestinal microorganisms. However, the relationships between the antidepressant-like activity of H. citrina extracts and intestinal microorganism variations have rarely been studied and need further investigation.

In this study, the antidepressant-like activities of 6 different extracts from H. citrina were first assessed using a CUMS model. Then, the main chemical constituents of the active extract were identified by HPLC-Q-TOF-MS and NMR technology. Finally, the mechanisms of antidepressant-like activity were investigated
based on the intestinal flora.

To compare the extracts of fresh flower buds and flowers (Supplementary Figure S1), the water and 80% ethanol extracts of both parts (a total of 4 different extracts at low and high doses, namely WHCWL, WHCWH, HCWL, HCWH, WHCEL, WHCEH, HCEL, and HCEH) were employed. The results showed that the sucrose preference test (SPT) index of the model control group was significantly lower (p < 0.05) than that of the normal control group, which indicated that the CUMS mouse model was successfully established (Figure 1A). The SPT index of the HCW groups at both doses (HCWL and HCWH), the WHCW low-dose group (WHCWL) and the fluoxetine hydrochloride (FH) group was significantly increased compared to that of the model group (p < 0.01 or p < 0.05). According to the SPT index, HCW displayed stronger anti-depressive activity than the positive control at the corresponding dose. Compared with the normal control group, the ingestion latency test (ILT) of mice in the model control group was significantly prolonged (p < 0.01); however, the ILT index in the FH group was significantly shorter than that in the model group (p < 0.01) (Figure 1B). The ILT of the depressed mice in the low and high-dose HCW groups (HCWL and HCWH) was significantly decreased (p < 0.05) compared to the model control group, and the high-dose HCW group had a shorter ILT index than the low-dose group. In addition, the ILT of the depressed mice in the high-dose WHCW (WHCWH), WHCE (WHCEH), and HCE (HCEH) groups was significantly decreased (p < 0.05 or p < 0.01) compared to the model
control group (Figure 1B). Compared with the normal control group, the activity time of the model control group was significantly decreased (p < 0.05) and the resting time was significantly prolonged (p < 0.01), which indicated that the CUMS mouse model was successfully established (Figures 1C,D). Compared with the model group, the activity time was significantly prolonged (p < 0.05) and the resting time was significantly decreased (p < 0.05) in the low and high-dose HCW groups. The low-dose WHCW (WHCWL), high-dose WHCW (WHCWH), low and high-dose WHCE (WHCEL and WHCEH), and FH groups displayed anti-depressive activities similar to those of the HCW groups (Figures 1C,D).
FIGURE 1 The effects of different dose extracts of H. citrina flowers and fresh flower buds on the behaviors of CUMS mice. (A) Sucrose preference test, (B) ingestion latency test, (C) tail suspension activity test, and (D) tail suspension still time test. Data are reported as mean ± SD. For statistical significance, #p < 0.05, ##p < 0.01 compared with the normal control group; *p < 0.05, **p < 0.01 compared with the model control group. NC, normal group; MC, model group; FH, fluoxetine hydrochloride group; WHCWL and WHCWH, low and high-dose water extracts of fresh flower buds; HCWL and HCWH, low and high-dose water extracts of flowers; WHCEL and WHCEH, low and high-dose 80% ethanol extracts of fresh flower buds; HCEL and HCEH, low and high-dose 80% ethanol extracts of flowers; low-dose, 200 mg/kg; high-dose, 500 mg/kg.
Frontiers in Pharmacology frontiersin.org
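Group comparisons like those above (per the Statistical method section: one-way ANOVA followed by an LSD post-hoc test) can be sketched in pure Python. The function name and the toy measurements below are illustrative assumptions, not the study's data or software.

```python
def one_way_anova_F(*groups):
    """Compute the one-way ANOVA F statistic for k groups (pure-Python sketch)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: weighted squared deviation of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group's own mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)  # df_between = k - 1
    ms_within = ss_within / (n - k)    # df_within  = n - k
    return ms_between / ms_within

# Illustrative groups (NOT the study's measurements): normal, model, treated
F = one_way_anova_F([1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [10.0, 11.0, 12.0])
print(round(F, 2))  # 73.0
```

A large F relative to the F(k-1, n-k) distribution corresponds to a small p-value; in practice a library routine such as SciPy's `f_oneway` would be used rather than a hand-rolled function.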
The results of the SPT, ILT, and tail suspension test (TST) experiments indicated that the low and high-dose HCW groups displayed significant anti-depressive activity, and the water extracts of the flowers (HCWL and HCWH) showed better activity than the ethanol extracts (HCEL and HCEH). In addition, the extracts of the flowers (HCWL, HCWH, HCEL, and HCEH) displayed more significant anti-depressive activities than those of the fresh flower buds (WHCWL, WHCWH, WHCEL, and WHCEH). In the second anti-depressive experiment (Supplementary Figure S1), the water and 80% ethanol extracts of both samples (a total of 3 different extracts, each at a low and a high dose, namely HCWL, HCWH, DHCWL, DHCWH, DHCEL, and DHCEH) were employed. The SPT index of the model control group was significantly lower (p < 0.05) than that of the normal control group, which indicated that the CUMS mouse model was successfully established (Figure 2A). Compared to the model group, the SPT index of the low and high-dose HCW and DHCW groups (HCWL, HCWH, DHCWL, and DHCWH) and the FH group was significantly increased (p < 0.05). As shown in Figure 2B, the ILT of mice in the model control group was significantly prolonged (p < 0.05) compared to the normal control group, which also indicated that the CUMS mouse model was successfully established. The ILT index of mice in the low and high-dose HCW groups (HCWL and HCWH) and the positive control group (FH) was significantly decreased (p < 0.05) compared with the model control group. However, the ILT of mice in the other four groups (DHCWL, DHCWH, DHCEL, and DHCEH) was not significantly decreased.
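The SPT index used in these comparisons is defined later, in the Sucrose preference test section, as sucrose consumption divided by total fluid consumption. A minimal sketch (the function name and consumption values are illustrative assumptions):

```python
def sucrose_preference_index(sucrose: float, water: float) -> float:
    """SPT index (%) = sucrose consumption / (sucrose + pure water consumption) * 100."""
    total = sucrose + water
    if total <= 0:
        raise ValueError("no fluid consumed")
    return sucrose / total * 100.0

# Illustrative consumptions, not measured values:
print(sucrose_preference_index(8.0, 2.0))  # 80.0 -- clear sucrose preference
print(sucrose_preference_index(5.0, 5.0))  # 50.0 -- no preference, anhedonia-like
```

A lower index in the model group than in the normal group is the anhedonia readout that validates the CUMS model.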
Compared with the normal control group, the activity time of the model control group was significantly decreased (p < 0.05). The results of the SPT, ILT and TST in the second anti-depressive experiment indicated that the low and high-dose HCW groups (HCWL and HCWH) also displayed significant anti-depressive activity. However, the water and 80% ethanol extracts of the dried flower buds did not display comparable activity. In summary, HCW displayed significant anti-depressive activity in both anti-depressive experiments, and the extracts of flowers showed more potent anti-depressive activity than those of fresh and dried flower buds.
Analysis and isolation of primary metabolites from H. citrina flowers
The HCW and HCE of H. citrina were preliminarily analyzed by HPLC-Q-TOF-MS to find the main ingredients contributing to the antidepressant activity. A total of 32 high-level metabolites, including 18 flavonoids, 7 chlorogenic acid-type compounds, 3 polyphenols, 2 acetamide alkaloids and 2 diterpenoid saponins, were screened and tentatively identified by their tandem mass spectrometry (MS/MS) data (Table 1 and Figure 3). Then, the high-content ingredient (compound 15) was further isolated by an MS-guided isolation method (Qing et al., 2014; Qing et al., 2016; Qing et al., 2017b), and its structure was unambiguously identified by nuclear magnetic resonance (NMR) data. The structural identification of the primary metabolites by HPLC-Q-TOF-MS and NMR is as follows:
FIGURE 4 The MS/MS spectra of standard 7 (A) and the metabolite 6 (B) in ESI− mode, and corresponding fragmentation behaviors.
2.4 Identification of the metabolites by high-performance liquid chromatography/quadrupole time-of-flight mass spectrometry
HPLC-Q-TOF-MS is a fast and sensitive tool widely used
for comprehensively screening and identifying plant metabolites. In this study, each high-level compound that appeared as a prominent peak in the total ion chromatogram (TIC) or the ultraviolet chromatogram was screened. The MS/MS data of the metabolites were obtained by targeted-MS/MS methods, and their structures were determined from their characteristic fragmentation behaviors. Take metabolites 6 and 7 as examples. Compound 7 was unambiguously identified as chlorogenic acid by comparing its retention time, MS and MS/MS data with the standard. The fragmentation pathways of compound 7 were investigated in detail (Figure 4A), and compound 6 showed fragmentation behaviors similar to the standard (Figure 4B). The m/z difference between compounds 6 and 7 was 15.9942 Da, which indicated that one of the hydroxyl groups in compound 7 was replaced by a hydrogen atom to form the structure of 6. In the MS/MS spectrum of compound 6 (Figure 4B), high-abundance ions at m/z 163.0379 and 119.0463 were formed, which demonstrated that one of the hydroxyl groups connected to the benzene ring was replaced; therefore, compound 6 was tentatively identified as 5-O-p-coumaroylquinic acid. In addition, metabolites 9, 21, and 30 were unambiguously identified as caffeic acid, rutin, and quercetin, respectively, by comparing their retention times, MS and MS/MS data with the standards. The remaining primary metabolites in the HCW were also identified by their MS/MS spectra (Supplementary Figure S2; Table 1).
Isolation and identification of compound 15
In order to further determine the structure of the primary metabolites, the MS-guided
isolation method, which was developed in our laboratory (Qing et al., 2016; Qing et al., 2017b), was employed. Compound 15 was obtained from the HCW, and its structure was unambiguously identified by NMR data. Compound 15 was obtained as a yellow amorphous powder. (Siewek et al., 1984). Compound 15 was reported for the first time from H. citrina. In summary, flavonoids (14-32) and chlorogenic acid-type compounds (2-4, 6-8, and 13) were the primary metabolites of the HCW and play a crucial role in its anti-depressive activity. Rutin (21) was the highest-content metabolite of the HCW, and compounds 15 and 23 were also high-level metabolites (Figure 3B), which may have potential anti-depressive activity and need further investigation.
The anti-depressive activity of rutin (compound 21)
In order to find the anti-depressive active components of the HCW, the anti-depressive activity of the main metabolite (21) was evaluated. The results showed that the SPT index of the model control group was significantly lower (p < 0.05) than that of the normal control group, which indicated that the CUMS mouse model was successfully established (Figure 5A). Compared to the model group, the SPT index of the different doses of rutin (RTL, 0.7 mg/kg; RTM, 1.8 mg/kg; RTH, 6.3 mg/kg; and RTE, 10.0 mg/kg; see Supplementary Table S1) and the FH group was significantly increased (p < 0.05). As shown in Figure 5B, the ILT of mice in the model control group was significantly prolonged (p < 0.05) compared to the normal control group, which also indicated that the CUMS mouse model was successfully established. The
ILT of mice in the RTH and RTE groups and the FH group was decreased significantly (p < 0.05) compared to the model control group. Compared with the normal control group, the resting time of the model control group was significantly prolonged (p < 0.05), which showed that the CUMS mouse model was successfully established. The resting time of mice in the FH, RTM, RTH, and RTE groups was significantly decreased (p < 0.05) compared with the model group. However, the tail suspension activity experiment failed (Figures 5C,D). The results of the SPT, ILT, and TST experiments indicated that RTH and RTE have significant anti-depressive activity, which demonstrates that rutin is one of the main anti-depressive active compounds of the HCW. The anti-depressive activities of other high-level compounds, such as metabolites 15 and 23, need further evaluation.
Effect of H. citrina flowers on the intestinal flora
Depression is a complicated and comprehensive mood disorder, and its pathogenesis is still not completely clear. More and more studies have shown that the intestinal flora affects not only gastrointestinal physiology but also the function and behavior of the central nervous system through the microbiota-intestinal-brain axis (Collins et al., 2012; Rogers et al., 2016). However, the relationship between the antidepressant-like activity of H. citrina extracts and variations in intestinal microorganisms has rarely been studied. Therefore, we performed 16S rRNA gene sequencing to determine the effects of two H. citrina extracts (low-dose HCW and HCE) on the gut microbiota of the depressed mice.
H. citrina flowers increase the diversity and
richness of the intestinal flora
As the depth of sequencing increased, the rarefaction curves of all the samples approached a saturation plateau, indicating that the sequencing data covered the species in the samples (Figure 6A). The Venn diagram compares OTU differences between groups. A total of 1044 OTUs were obtained from the five groups' intestinal flora sequencing data. As shown in Figure 6B, the five groups shared 430 OTUs. Unique OTUs were observed in the normal control (22), CUMS model control (48), fluoxetine (11), HCE (93) and HCW (181) groups. α diversity included the ACE, Chao1, Shannon and Simpson indices, which represent the community's richness and diversity (Liang et al., 2018; Perxachs et al., 2022). As shown in Figure 6C, the HCE and HCW groups exhibited an increase in the ACE index compared with the model group (p < 0.05), and the HCE group showed an increase in the Chao1 index compared with the model group (p < 0.05). The results showed that HCW and HCE treatment improved the diversity and richness of the mouse gut microbiota, which had been decreased by depression. Principal coordinates analysis (PCoA) presented the gut microbiota communities of the five groups of mice in different quadrants (Simpson et al., 2021). The model group was separated from the normal group in PCoA space, and the normal group and HCW group exhibited a certain tendency to cluster together (Supplementary Figure S4). The results showed that stress stimuli decrease the
enrichment and diversity of the gut microbiota, and that HCW can reverse this phenomenon.
H. citrina flowers regulate the abundance of specific intestinal flora
The effects of CUMS, HCW and HCE on the composition and function of the intestinal flora were analyzed via 16S rRNA sequencing. The community composition was analyzed to obtain the abundance and diversity of each species (Pu et al., 2022). At the phylum level, Firmicutes, Bacteroidetes, and Proteobacteria were predominant in all samples but varied in their abundances (Figure 7A). The abundance of Firmicutes in the CUMS model group (MC) was significantly higher than that in the normal group (NC) and the HCW group (p < 0.01), which indicated that HCW could decrease the abundance of Firmicutes in the CUMS model group. The abundance of Bacteroidetes in the CUMS model group was significantly lower than that in the normal group and the HCW group (p < 0.01), which demonstrated that HCW could increase the abundance of Bacteroidetes in the CUMS model group (Figure 7B). In previous studies, the diversity and abundance of the intestinal flora in depressed patients were decreased (Naseribafrouei et al., 2014; Zheng et al., 2016; Lin et al., 2017). Compared with healthy individuals, at the phylum level, the abundance of Bacteroidetes was decreased and the level of Firmicutes was increased in the intestinal flora of depressed patients. In this study, HCW could increase the abundance of Bacteroidetes and decrease the level of Firmicutes in the intestinal flora of depressed mice at the phylum level, which indicated that HCW has the potential
to be developed as an antidepressant by regulating the abundance of Firmicutes and Bacteroidetes at the phylum level. At the genus level, Bacteroides and Desulfovibrio were the predominant genera in all samples but varied in their abundances (Figure 8A). The abundance of Bacteroides was significantly decreased when the normal mice became depressed, while its abundance was significantly increased after administration of FH and HCW. The abundance of Desulfovibrio was significantly increased when the normal mice became depressed, while its abundance was significantly decreased after administration of HCW (Figure 8B). The above results indicated that HCW regulates the abundance of Bacteroides and Desulfovibrio at the genus level to achieve its anti-depressive activity via the microbiota-intestinal-brain axis. Gamma-aminobutyric acid (GABA) is an important neurotransmitter used to transmit signals in the synapses of the nervous system. In previous studies (Ironside et al., 2021; Prévot and Sibille, 2021), the level of GABA was significantly decreased in depressive patients, including major depressive disorder (MDD) patients. Previous studies also indicated that intestinal flora belonging to the genus Bacteroides can produce GABA in the gut, which acts on synapses in the brain through the microbiota-intestinal-brain axis (Strandwitz et al., 2019; Izuno et al., 2021; Otaru et al., 2021). In this study, HCW could significantly increase the abundance of Bacteroides in the gut of depressed mice, which indicated that HCW exerts its anti-depressive activity by increasing the abundance of Bacteroides and thereby improving the level of GABA. Desulfovibrio
can produce lipopolysaccharide, which disrupts the intestinal barrier and leads to the production of inflammatory factors. In previous studies (Haroon and Miller, 2017; Zhu et al., 2019), the abundance of Desulfovibrio was significantly increased in the intestinal flora of depressed mice, which is in accordance with the results of this study. Inflammatory factors such as IL-1β, IL-6 and TNF-α can damage the epithelial cells of the gut and affect the frontal cortex and hippocampus of the brain via the blood circulation, which can lead to depression (Chudzik et al., 2021; Morais et al., 2021). In this study, HCW could significantly decrease the level of Desulfovibrio in depressed mice. The potential mechanism involves decreasing the level of lipopolysaccharide produced by Desulfovibrio and thereby reducing the content of inflammatory factors in the gut, blood, and brain of depressed mice.
Effect of rutin on the intestinal flora
Rutin (21) is a polyphenolic compound that has been proven to have antidepressant activity. However, the relationship between the antidepressant-like activity of rutin and variations in intestinal microorganisms has rarely been reported. Therefore, we performed 16S rRNA gene sequencing to investigate the effects of rutin (high-dose RT, namely RTE) on the gut microbiota of depressed mice.
Rutin increases the diversity and richness of the intestinal flora
According to the sample numbers and species OTUs, the rarefaction curves of all the samples had reached a plateau, which indicated that the sequencing data covered the species in the samples (Supplementary Figure S5A). A
total of 1205 OTUs were obtained from the four groups' intestinal flora sequencing data. As shown in Supplementary Figure S5B, the four groups shared 792 OTUs. Unique OTUs were observed in the normal control (35), CUMS model control (13), FH (21), and RT (33) groups. As shown in Supplementary Figure S5C, the model group was separated from the normal group in PCoA space, and the normal group and RT group exhibited a certain tendency to cluster together. The FH and RT groups exhibited increases in the ACE index, Chao1 index, and Shannon index compared with the model group (p < 0.05), and the model group showed decreases in the ACE index and Chao1 index compared with the normal group (p < 0.05) (Supplementary Figures S5A-D). These results showed that FH and RT could improve the diversity and richness of the gut microbiota of depressed mice.
Rutin regulates the abundance of specific intestinal flora
At the phylum level, Firmicutes, Bacteroidetes, and Proteobacteria were predominant in all samples but varied in their abundances (Supplementary Figure S7A). Although the trend in the abundance of these microbiota was consistent with that of the HCW group, the RT group did not display a significant effect in regulating Firmicutes and Bacteroidetes at the phylum level in the depressed mice (p > 0.05). At the genus level, the RT group could increase the abundance of Bacteroides and decrease the level of Desulfovibrio (Supplementary Figure S7B), which is in accordance with the HCW group. However, the effect of the RT group on
the levels of Bacteroides and Desulfovibrio in depressed mice was weaker than that of the HCW group. These results show that rutin is one of the main anti-depressive ingredients of the HCW.
Preparations of H. citrina extracts
Dried flower buds, fresh flower buds and flowers of H. citrina (Mengzihua, 100 kg each, Supplementary Figure S1) were collected from Qidong County, Hunan Province of China, and were unambiguously identified by Dr. Zhixing Qing (Hunan Agricultural University). The extraction experiments used water and 80% ethanol as the extraction solvents. The ratio of material to liquid was 6:1 and the extraction time was 24 h at room temperature. The extraction solvents were concentrated under reduced pressure and vacuum-dried. Finally, 6 different extracts, namely the water extract of flowers (HCW), the water extract of fresh flower buds (WHCW), the 80% ethanol extract of flowers (HCE), the 80% ethanol extract of fresh flower buds (WHCE), the water extract of dried flower buds (DHCW), and the 80% ethanol extract of dried flower buds (DHCE), were obtained for the anti-depressive experiments.
Chemicals
Acetonitrile and formic acid (HPLC-grade) were purchased from Merck (Darmstadt, Germany) and ROE (Newark, New Castle, United States), respectively. Deionized water was purified using a Milli-Q system (MA, United States). All of them were used for HPLC-Q-TOF-MS analysis. Three standards, including chlorogenic acid (7)
High-performance liquid chromatography/quadrupole time-of-flight mass spectrometry conditions
Chromatography was performed using an Agilent 1290 HPLC system (Agilent Technologies, United States) consisting of an auto-sampler, a thermostatted column compartment, and a tunable UV detector. Separation was carried out on an XAqua C18 column (150
mm × 2.1 mm, 2.8 µm; Accrom Technologies Co. Ltd., China). The elution system was 0.1% formic acid in water (A) and 0.1% formic acid in acetonitrile (B). The linear gradient elution program was as follows: 0-30 min, 5%-45% B; 30-40 min, 45%-90% B. The sample injection volume was 5 μl. The flow rate was set at 0.3 ml/min, and the column temperature was maintained at 30°C. Mass spectrometric experiments were performed using a 6530 Q-TOF/MS accurate mass spectrometer (Agilent Technologies, United States) in negative ionization mode, and TOF data were acquired between m/z 100 and 1000 in centroid mode. The conditions of the Q-TOF-MS were optimized as follows: sheath gas temperature, 350°C; sheath gas flow, 12 L/min; gas temperature, 300°C; drying gas, 10 L/min; fragmentor voltage, 150 V; skimmer voltage, 65 V; capillary voltage, 4000 V. The TOF mass spectrometer was continuously calibrated using a reference solution (masses at m/z 112.9855 and 966.0007) to obtain high-accuracy mass measurements. The targeted MS/MS experiments were performed using variable collision energies (10-50 eV), optimized for each metabolite.
Experimental animals
Male ICR mice (weight range 18.0-22.0 g) were purchased from Hunan Slake Jing-da Experimental Animals Co., Ltd. (certificate number 43004700048590). The experimental animal production license number is SCXK (Xiang) 2011-0003, and the use license number is SYXK 2015-06. Animals were housed under a standard 12:12 h light/dark schedule with the light on at 8:00 a.m. and given free access to tap water and food pellets. The ambient temperature was controlled at 22°C ± 2°C
and the animals were given standard chow and water ad libitum for the duration of the study. All experiments and procedures were carried out according to the Regulations of Experimental Animal Administration issued by the State Committee of Science and Technology of China.
Establishment of the depression model by chronic unpredictable mild stress
Apart from the 10 mice in the normal control group, the mice were subjected to chronic unpredictable mild stress, including fasting (12 h), water deprivation (12 h), forced swimming (10 min), strobe (12 h), noise (30 min), restraint (12 h) (placed in a 50 ml centrifuge tube with a diameter of 3.0 cm, a length of about 10 cm, and 6 to 7 vents with a diameter of 0.5 mm), cage tilting (12 h), wet cage (12 h), reversal of day and night, etc. (for the specific schedule, see Supplementary Table S2). Animals were deprived of food and water during restraint (Lu et al., 2019).
Drug administration
After 28 days of continuous modeling, the animals were randomly divided into several groups (Supplementary Tables S1, S3, S4; three anti-depressive trials were performed in this study) according to the results of the sucrose preference test and body weight. The normal and model control groups were given distilled water by intragastric administration, and the other groups were given the corresponding extract solutions at a volume of 20 ml/kg for 35 consecutive days. After the last administration, the mice were tested for SPT, ILT, and TST. All antidepressant experimental procedures are shown in Supplementary Figure S9.
Sucrose preference test
The sucrose preference test was divided into a training period and a test period. In the two days before the test (the training period), two bottles of 1% sucrose solution were given to the animals for the first 24 h, and then one bottle of 1% sucrose solution was replaced with a bottle of pure water for the next 24 h. The animals did not abstain from food and water for 8 h before the test. During the test period (15 h), the mice were given a bottle of 1% sucrose solution and a bottle of pure water, and the positions of the two bottles were swapped to avoid the influence of position preference. At the end of the test, the sucrose preference index was calculated as sucrose preference index (%) = sucrose consumption/(sucrose consumption + pure water consumption) × 100%.
Ingestion latency test
The ingestion latency test was carried out after the SPT. The experiment lasted 2 days: the first day was the adaptation period, during which the animals were put into a square open box to adapt for 10 min. After fasting for 24 h, the ingestion latency test was carried out. A food pellet was placed in the center of the open box, and the mice were put back to access the food pellet (mice were placed in the same position and direction each time). The time from when the animal was put into the box until its first ingestion of food was recorded.
Tail suspension test
After the ILT, the mice were subjected to the tail suspension test.
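In the 4-min recording used for this test, activity time and resting (immobility) time partition the observation window, so either measure can be derived from the other. A small illustrative sketch; the helper name, the partition assumption, and the sample value are assumptions, not the study's scoring software:

```python
TST_WINDOW_S = 240.0  # 4-min tail suspension recording window, in seconds

def resting_time(activity_s: float) -> float:
    """Resting (immobility) time as the remainder of the observation window,
    assuming activity and rest together cover the whole recording."""
    if not 0.0 <= activity_s <= TST_WINDOW_S:
        raise ValueError("activity time outside the recording window")
    return TST_WINDOW_S - activity_s

print(resting_time(90.0))  # 150.0 seconds of immobility
```

Longer resting time is the despair-like readout: the model group rests more than the normal group, and effective treatments shorten it.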
The mice were fixed on the tail suspension device, with a baffle to block the line of sight and avoid interference from other mice. The head was about 5 cm from the table, so the mice had no place to climb onto or grasp. The activity and resting times of the mice were recorded over the next 4 min.
Collection of intestinal solutes
After the behavioral tests, all mice were anesthetized in a container filled with ether gas and quickly decapitated to minimize pain. The intestines of the experimental mice were excised under aseptic conditions, and the intestinal solutes were collected with a sterile knife. The intestinal solutes were collected in sterilized centrifuge tubes on an ice bag and stored at −80°C for 16S rRNA analysis.
DNA extraction and PCR amplification
The DNA of the mouse intestinal samples was extracted according to the instructions of the E.Z.N.A.® Soil DNA Kit. Each DNA sample was diluted to 1 ng/µl with sterile water, and its purity and concentration were measured via agarose gel electrophoresis. PCR was performed using TransGen AP221-02 with barcoded specific primers targeting the 16S V4 region (515F and 806R) and the related enzymes. 1% agarose gel electrophoresis was used to detect the PCR amplification products, and the target bands were recovered by excision. The qualified PCR amplification products were sent to Shanghai Meiji Biomedical Technology Co., Ltd. for sequencing.
Library construction and sequencing
The library was constructed using the NEXTFLEX Rapid DNA-Seq Kit. The
V4 region of the 16S rRNA gene was analyzed by high-throughput sequencing on the Illumina platform (NovaSeq PE250). After sequencing was completed, the data were subjected to low-quality read removal, splicing, filtering, and chimera removal to obtain valid data. Sequences were analyzed using the Quantitative Insights into Microbial Ecology software and the UPARSE pipeline.
3.8.7 Statistical method
SPSS 16.0 was used for the statistical analysis of the experimental data, and the level of statistical significance was set at p ≤ 0.05. The data were expressed as mean ± standard deviation. Levene's test was used to assess homogeneity of variance. Multiple samples were compared through one-way ANOVA, and the LSD test was used for post-hoc analysis. Both statistical differences and biological significance were considered in the evaluation.
Conclusion
In this study, the anti-depressive activities of six extracts of H. citrina (HCW, HCE, WHCW, WHCE, DHCW, and DHCE) were evaluated in depressed mice induced by the CUMS model. The results showed that the extracts of H. citrina flowers (HCW and HCE) displayed significant anti-depressive activities, and HCW had the strongest activity among the extracts. A total of 32 compounds, mainly flavonoids and chlorogenic acid-type compounds, were identified by HPLC-Q-TOF-MS/MS and NMR. Among them, the content of rutin (compound 21) was the highest. The anti-depressive activity of rutin was then also evaluated; the results showed that this compound displayed significant anti-depressive activity and is one of the main active compounds of the HCW. Finally, the 16S (V3+V4) region amplicons of the mouse intestinal flora were sequenced to